path: string (length 7–265)
concatenated_notebook: string (length 46–17M)
assignment3/data/Elasticsearch-AssignmentTemplate.ipynb
###Markdown Load the MovieLens Dataset ###Code df_dbpedia = pd.read_csv(os.path.join(data_dir, "dbpedia.csv")) df_dbpedia["dbpedia_content"] = df_dbpedia["dbpedia_content"].apply(json.loads) # Parse string to JSON df_movie = pd.read_csv(os.path.join(data_dir, "movies.csv")) df_movie["genres"] = df_movie["genres"].apply(lambda x: x.replace("|",",")) df_rating = pd.read_csv(os.path.join(data_dir, "ratings.csv")) df_user = pd.read_csv(os.path.join(data_dir, "users.csv")) ###Output _____no_output_____ ###Markdown Elasticsearch provides a RESTful API, which is language-independent.- For instance, you can do `curl -XGET http://localhost:9200/` to get information about the node- To create an index, you can use: - curl -XPUT http://localhost:9200/movies/ -H "Content-Type: application/json" -d '{ "mappings": { "movie": { "properties": { "title": { "type": "text", "analyzer": "whitespace", "term_vector": "yes" } } } } }'- It is easier to use the Python `elasticsearch` library ###Code import elasticsearch # Imports the library es = elasticsearch.Elasticsearch() # Defines how to connect to an Elasticsearch node df_dbpedia_merged = df_dbpedia[["movie_id","dbpedia_content"]].merge(df_movie, on="movie_id") ###Output _____no_output_____
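###Markdown A minimal sketch of how the merged dataframe above could be indexed with the Python `elasticsearch` client. It assumes an elasticsearch-py 7.x-style API with typeless mappings, a node at localhost:9200 (as in the curl example), and that `movies.csv` provides `title` and `genres` columns. ###Code
import elasticsearch

es = elasticsearch.Elasticsearch()  # assumes a node running at localhost:9200

# Create the index with a simple mapping (typeless syntax used by Elasticsearch 7+).
es.indices.create(
    index="movies",
    body={
        "mappings": {
            "properties": {
                "title": {"type": "text", "analyzer": "whitespace", "term_vector": "yes"},
                "genres": {"type": "text"},
            }
        }
    },
    ignore=400,  # ignore the error raised when the index already exists
)

# Index each merged row as one document, keyed by its MovieLens movie_id.
for _, row in df_dbpedia_merged.iterrows():
    es.index(
        index="movies",
        id=int(row["movie_id"]),
        body={"title": row["title"], "genres": row["genres"]},
    )
###Output _____no_output_____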
4-Classification/3-Classifiers-2/solution/notebook.ipynb
###Markdown Build More Classification Models ###Code import pandas as pd cuisines_df = pd.read_csv("../../data/cleaned_cuisines.csv") cuisines_df.head() cuisines_label_df = cuisines_df['cuisine'] cuisines_label_df.head() cuisines_feature_df = cuisines_df.drop(['Unnamed: 0', 'cuisine'], axis=1) cuisines_feature_df.head() ###Output _____no_output_____ ###Markdown Try different classifiers ###Code from sklearn.neighbors import KNeighborsClassifier from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier from sklearn.model_selection import train_test_split, cross_val_score from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve import numpy as np X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3) C = 10 # Create different classifiers. classifiers = { 'Linear SVC': SVC(kernel='linear', C=C, probability=True,random_state=0), 'KNN classifier': KNeighborsClassifier(C), 'SVC': SVC(), 'RFST': RandomForestClassifier(n_estimators=100), 'ADA': AdaBoostClassifier(n_estimators=100) } n_classifiers = len(classifiers) for index, (name, classifier) in enumerate(classifiers.items()): classifier.fit(X_train, np.ravel(y_train)) y_pred = classifier.predict(X_test) accuracy = accuracy_score(y_test, y_pred) print("Accuracy (train) for %s: %0.1f%% " % (name, accuracy * 100)) print(classification_report(y_test,y_pred)) ###Output Accuracy (train) for Linear SVC: 76.4% precision recall f1-score support chinese 0.64 0.66 0.65 242 indian 0.91 0.86 0.89 236 japanese 0.72 0.73 0.73 245 korean 0.83 0.75 0.79 234 thai 0.75 0.82 0.78 242 accuracy 0.76 1199 macro avg 0.77 0.76 0.77 1199 weighted avg 0.77 0.76 0.77 1199 Accuracy (train) for KNN classifier: 70.7% precision recall f1-score support chinese 0.65 0.63 0.64 242 indian 0.84 0.81 0.82 236 japanese 0.60 0.81 0.69 245 korean 0.89 0.53 0.67 234 thai 0.69 0.75 0.72 242 accuracy 0.71 1199 macro avg 0.73 0.71 0.71 1199 weighted avg 0.73 0.71 0.71 1199 Accuracy (train) for SVC: 80.1% precision recall f1-score support chinese 0.71 0.69 0.70 242 indian 0.92 0.92 0.92 236 japanese 0.77 0.78 0.77 245 korean 0.87 0.77 0.82 234 thai 0.75 0.86 0.80 242 accuracy 0.80 1199 macro avg 0.80 0.80 0.80 1199 weighted avg 0.80 0.80 0.80 1199 Accuracy (train) for RFST: 82.8% precision recall f1-score support chinese 0.80 0.75 0.77 242 indian 0.90 0.91 0.90 236 japanese 0.82 0.78 0.80 245 korean 0.85 0.82 0.83 234 thai 0.78 0.89 0.83 242 accuracy 0.83 1199 macro avg 0.83 0.83 0.83 1199 weighted avg 0.83 0.83 0.83 1199 Accuracy (train) for ADA: 71.1% precision recall f1-score support chinese 0.60 0.57 0.58 242 indian 0.87 0.84 0.86 236 japanese 0.71 0.60 0.65 245 korean 0.68 0.78 0.72 234 thai 0.70 0.78 0.74 242 accuracy 0.71 1199 macro avg 0.71 0.71 0.71 1199 weighted avg 0.71 0.71 0.71 1199 ###Markdown Build More Classification Models ###Code import pandas as pd cuisines_df = pd.read_csv("../../data/cleaned_cuisine.csv") cuisines_df.head() cuisines_label_df = cuisines_df['cuisine'] cuisines_label_df.head() cuisines_feature_df = cuisines_df.drop(['Unnamed: 0', 'cuisine'], axis=1) cuisines_feature_df.head() ###Output _____no_output_____ ###Markdown Try different classifiers ###Code from sklearn.neighbors import KNeighborsClassifier from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier, 
AdaBoostClassifier from sklearn.model_selection import train_test_split, cross_val_score from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve import numpy as np X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3) C = 10 # Create different classifiers. classifiers = { 'Linear SVC': SVC(kernel='linear', C=C, probability=True,random_state=0), 'KNN classifier': KNeighborsClassifier(C), 'SVC': SVC(), 'RFST': RandomForestClassifier(n_estimators=100), 'ADA': AdaBoostClassifier(n_estimators=100) } n_classifiers = len(classifiers) for index, (name, classifier) in enumerate(classifiers.items()): classifier.fit(X_train, np.ravel(y_train)) y_pred = classifier.predict(X_test) accuracy = accuracy_score(y_test, y_pred) print("Accuracy (train) for %s: %0.1f%% " % (name, accuracy * 100)) print(classification_report(y_test,y_pred)) ###Output Accuracy (train) for Linear SVC: 76.4% precision recall f1-score support chinese 0.64 0.66 0.65 242 indian 0.91 0.86 0.89 236 japanese 0.72 0.73 0.73 245 korean 0.83 0.75 0.79 234 thai 0.75 0.82 0.78 242 accuracy 0.76 1199 macro avg 0.77 0.76 0.77 1199 weighted avg 0.77 0.76 0.77 1199 Accuracy (train) for KNN classifier: 70.7% precision recall f1-score support chinese 0.65 0.63 0.64 242 indian 0.84 0.81 0.82 236 japanese 0.60 0.81 0.69 245 korean 0.89 0.53 0.67 234 thai 0.69 0.75 0.72 242 accuracy 0.71 1199 macro avg 0.73 0.71 0.71 1199 weighted avg 0.73 0.71 0.71 1199 Accuracy (train) for SVC: 80.1% precision recall f1-score support chinese 0.71 0.69 0.70 242 indian 0.92 0.92 0.92 236 japanese 0.77 0.78 0.77 245 korean 0.87 0.77 0.82 234 thai 0.75 0.86 0.80 242 accuracy 0.80 1199 macro avg 0.80 0.80 0.80 1199 weighted avg 0.80 0.80 0.80 1199 Accuracy (train) for RFST: 82.8% precision recall f1-score support chinese 0.80 0.75 0.77 242 indian 0.90 0.91 0.90 236 japanese 0.82 0.78 0.80 245 korean 0.85 0.82 0.83 234 thai 0.78 0.89 0.83 242 accuracy 0.83 1199 macro avg 0.83 0.83 0.83 1199 weighted avg 0.83 0.83 0.83 1199 Accuracy (train) for ADA: 71.1% precision recall f1-score support chinese 0.60 0.57 0.58 242 indian 0.87 0.84 0.86 236 japanese 0.71 0.60 0.65 245 korean 0.68 0.78 0.72 234 thai 0.70 0.78 0.74 242 accuracy 0.71 1199 macro avg 0.71 0.71 0.71 1199 weighted avg 0.71 0.71 0.71 1199 ###Markdown Build More Classification Models ###Code import pandas as pd cuisines_df = pd.read_csv("../../data/cleaned_cuisine.csv") cuisines_df.head() cuisines_label_df = cuisines_df['cuisine'] cuisines_label_df.head() cuisines_feature_df = cuisines_df.drop(['Unnamed: 0', 'cuisine'], axis=1) cuisines_feature_df.head() ###Output _____no_output_____ ###Markdown Try different classifiers ###Code from sklearn.neighbors import KNeighborsClassifier from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier from sklearn.model_selection import train_test_split, cross_val_score from sklearn.metrics import accuracy_score,precision_score,confusion_matrix,classification_report, precision_recall_curve import numpy as np X_train, X_test, y_train, y_test = train_test_split(cuisines_feature_df, cuisines_label_df, test_size=0.3) C = 10 # Create different classifiers. 
classifiers = { 'Linear SVC': SVC(kernel='linear', C=C, probability=True,random_state=0), 'KNN classifier': KNeighborsClassifier(C), 'SVC': SVC(), 'RFST': RandomForestClassifier(n_estimators=100), 'ADA': AdaBoostClassifier(n_estimators=100) } n_classifiers = len(classifiers) for index, (name, classifier) in enumerate(classifiers.items()): classifier.fit(X_train, np.ravel(y_train)) y_pred = classifier.predict(X_test) accuracy = accuracy_score(y_test, y_pred) print("Accuracy (train) for %s: %0.1f%% " % (name, accuracy * 100)) print(classification_report(y_test,y_pred)) ###Output Accuracy (train) for Linear SVC: 79.4% precision recall f1-score support chinese 0.74 0.71 0.73 250 indian 0.88 0.93 0.90 245 japanese 0.74 0.78 0.76 237 korean 0.85 0.73 0.79 252 thai 0.76 0.83 0.79 215 accuracy 0.79 1199 macro avg 0.79 0.80 0.79 1199 weighted avg 0.80 0.79 0.79 1199 Accuracy (train) for KNN classifier: 72.6% precision recall f1-score support chinese 0.61 0.69 0.65 250 indian 0.85 0.85 0.85 245 japanese 0.65 0.84 0.73 237 korean 0.95 0.50 0.66 252 thai 0.70 0.76 0.73 215 accuracy 0.73 1199 macro avg 0.75 0.73 0.72 1199 weighted avg 0.76 0.73 0.72 1199 Accuracy (train) for SVC: 81.5% precision recall f1-score support chinese 0.73 0.71 0.72 250 indian 0.91 0.90 0.91 245 japanese 0.78 0.81 0.80 237 korean 0.88 0.78 0.83 252 thai 0.78 0.88 0.83 215 accuracy 0.81 1199 macro avg 0.82 0.82 0.82 1199 weighted avg 0.82 0.81 0.81 1199 Accuracy (train) for RFST: 83.1% precision recall f1-score support chinese 0.78 0.75 0.77 250 indian 0.89 0.91 0.90 245 japanese 0.83 0.83 0.83 237 korean 0.85 0.80 0.83 252 thai 0.79 0.87 0.83 215 accuracy 0.83 1199 macro avg 0.83 0.83 0.83 1199 weighted avg 0.83 0.83 0.83 1199 Accuracy (train) for ADA: 70.5% precision recall f1-score support chinese 0.61 0.47 0.53 250 indian 0.85 0.86 0.85 245 japanese 0.62 0.72 0.67 237 korean 0.71 0.81 0.75 252 thai 0.73 0.67 0.70 215 accuracy 0.70 1199 macro avg 0.70 0.70 0.70 1199 weighted avg 0.70 0.70 0.70 1199
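###Markdown The cells above import `cross_val_score` but never call it. A small sketch, assuming the same `classifiers`, `cuisines_feature_df` and `cuisines_label_df` objects are still in scope, of how k-fold cross-validation could give a comparison that depends less on a single random train/test split: ###Code
from sklearn.model_selection import cross_val_score
import numpy as np

# 5-fold cross-validated accuracy for each classifier defined above.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, cuisines_feature_df, np.ravel(cuisines_label_df),
                             cv=5, scoring="accuracy")
    print("%s: mean accuracy %.1f%% (+/- %.1f%%)" % (name, scores.mean() * 100, scores.std() * 100))
###Output _____no_output_____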
kinase-bioactivities-in-chembl/kinase-bioactivities-in-chembl.ipynb
###Markdown Query ChEMBL for bioactivities involving protein kinasesChEMBL stores a good amount of bioactivities for protein-ligand complexes in the field of kinases. Before running this notebook, we have identified the set of human protein kinases we want to target (`/human-kinases`) and under what identifiers these proteins are stored in ChEMBL (`/kinases-in-chembl`).Now we can query ChEMBL for all bioactivities involving these targets but first we need to make sure we don't run into some common pitfalls (see section "Curate the dataset"). ###Code import sqlite3 as sql import csv import os from pathlib import Path from collections import defaultdict import pandas as pd from tqdm import tnrange import numpy as np HERE = Path(_dh[-1]) REPO = (HERE / "..").resolve() DATA = REPO / "data" OUT = HERE / "_out" OUT.mkdir(parents=True, exist_ok=True) ###Output _____no_output_____ ###Markdown Get the [local export](https://chembl.gitbook.io/chembl-interface-documentation/downloads) for the ChEMBL version you want to query. You need the one named `chembl__sqlite.tar.gz`. Extract the `*.db` file and point to its location using `CHEMBL_SQLITE_PATH` below: ###Code CHEMBL_VERSION = 30 CHEMBL_SQLITE_PATH = f"../../_chembl_fetcher/chembl_{CHEMBL_VERSION}/chembl_{CHEMBL_VERSION}_sqlite/chembl_{CHEMBL_VERSION}.db" ###Output _____no_output_____ ###Markdown Map `chembl_targets` to `UniprotID`.This file is generated with the `kinase-in-chembl` notebooks. Update it if you think there might be more ChEMBL targets. ###Code kinases = pd.read_csv(DATA / f"human_kinases_and_chembl_targets.chembl_{CHEMBL_VERSION}.csv") kinases ###Output _____no_output_____ ###Markdown We are only interested in `SINGLE PROTEIN` targets for now. ###Code kinases_sp = kinases[kinases.type == "SINGLE PROTEIN"].drop("type", axis=1) kinases_sp ###Output _____no_output_____ ###Markdown We will need this dataframe to map between chembl target and uniprot later, when we write the query results to disk. Query local ChEMBL DB for speed ###Code conn = sql.connect(CHEMBL_SQLITE_PATH, isolation_level=None) ###Output _____no_output_____ ###Markdown Types of assaysCheck which kind of assays can be found on human kinases. ###Code CHEMBL_TARGETS = set(kinases_sp.chembl_targets.tolist()) q = f""" SELECT standard_type, COUNT(standard_type) FROM activities LEFT JOIN assays ON assays.assay_id=activities.assay_id LEFT JOIN target_dictionary ON target_dictionary.tid=assays.tid WHERE target_dictionary.chembl_id IN ({', '.join([f'"{x}"' for x in CHEMBL_TARGETS])}) GROUP BY standard_type ORDER BY 2 DESC """ assay_types = pd.read_sql(q, conn) assay_types.columns = ["Value", "Count"] assay_types.head(10) ###Output _____no_output_____ ###Markdown There's a lot of information we are not using! `Inhibition` is as populated as `IC50`, but we don't know what kind of information this category contains. Query bioactivities Get all entries in the SQL db that:- Correspond to IC50, Ki, Kd measurements. 
Check `activities.standard_type` fields.- assay_type = B (Binding)- Relation is `=`- Target is part of the human kinome (as provided by `DATA / human_kinases_and_chembl_targets.chembl_{CHEMBL_VERSION}.csv`, see `kinases` cell)- Confidence score is greather than zero (in practice, only 43 entries have score=0; the rest are either 8 or 9) ###Code CHEMBL_TARGETS = set(kinases_sp.chembl_targets.tolist()) select_these = [ "activities.activity_id", "assays.chembl_id", "target_dictionary.chembl_id", "molecule_dictionary.chembl_id", "molecule_dictionary.max_phase", "activities.standard_type", "activities.standard_value", "activities.standard_units", "compound_structures.canonical_smiles", "compound_structures.standard_inchi", "component_sequences.sequence", "assays.confidence_score", "docs.chembl_id", "docs.year", "docs.authors", ] q = f""" SELECT {', '.join(select_these)} FROM activities LEFT JOIN assays ON assays.assay_id=activities.assay_id LEFT JOIN target_dictionary ON target_dictionary.tid=assays.tid LEFT JOIN compound_structures ON activities.molregno=compound_structures.molregno LEFT JOIN molecule_dictionary ON activities.molregno=molecule_dictionary.molregno LEFT JOIN target_components ON target_dictionary.tid=target_components.tid LEFT JOIN component_sequences ON target_components.component_id=component_sequences.component_id LEFT JOIN docs ON docs.doc_id=activities.doc_id WHERE target_dictionary.chembl_id IN ({', '.join([f'"{x}"' for x in CHEMBL_TARGETS])}) AND activities.standard_relation="=" AND assays.assay_type="B" AND activities.standard_type in ("IC50", "Ki", "Kd") AND assays.confidence_score > 0 """ activities_sql = pd.read_sql_query(q, conn) activities_sql.columns = select_these ###Output _____no_output_____ ###Markdown We need to add the UniprotID column from `kinases_sp`: ###Code activities = pd.merge(activities_sql, kinases_sp[["chembl_targets", "UniprotID"]], left_on="target_dictionary.chembl_id", right_on="chembl_targets", how="left").drop(columns=["chembl_targets"]) activities ###Output _____no_output_____ ###Markdown Although units have been standardized, not all of them are $nM$. ###Code activities["activities.standard_units"].unique() ###Output _____no_output_____ ###Markdown Let's keep only those that are $nM$. ###Code nm_activities = activities.query("`activities.standard_units` == 'nM'") nm_activities ###Output _____no_output_____ ###Markdown Before we continue, we want all the activities in logarithmic format (`pMeasurement`). Now that all the values are $nM$, we can do:```pythonpMeasurement = 9 - (log(measurement) / log(10))``` ###Code with pd.option_context("chained_assignment", None): nm_activities.loc[:, "activities.standard_value"] = nm_activities["activities.standard_value"].apply(lambda x: 9 - (np.log(x) / np.log(10))) nm_activities.loc[:, "activities.standard_type"] = nm_activities["activities.standard_type"].apply("p{}".format) nm_activities ###Output _____no_output_____ ###Markdown Let's save the dataset as is, with no curation, now. ###Code nm_activities.to_csv(OUT / f"activities-chembl{CHEMBL_VERSION}-not-curated.csv") ###Output _____no_output_____ ###Markdown Curate the dataset The following list is compiled from lessons learned in Kramer's _J. Med. Chem._ 2012, 55, 5165-5173. [10.1021/jm300131x](https://dx.doi.org/10.1021/jm300131x).Kramer et al propose the following pipeline to make sure the data queried from ChEMBL is high quality:1. **Remove the dummy target CHEMBL612545**. 
Maybe we don't have this because we are coming from UniProt IDs, but this is a dummy identifier for unchecked targets!2. **Group by protein and ligand, and remove singletons**. Systems that were measured only once are not taken into account. We might leave these ones.3. **Remove unclear units or values**. Only measurements with reported units. Values lower than $1fM$, higher than $10mM$ must be removed too.4. **Keep the highest pKi for those systems with several measurements in the _same_ publication**. This handles unclear stereoisomer annotations and/or experimental optimization.5. **Remove measurements that come from manuscripts citing the original reporting publication**. Probably the most important part here. Identical values for the same system in different publications were removed, as well as those within 0.02 pKi units (rounding error), or exactly 3 or 6 pKi units (transcription errors).6. **Remove measurements for the same system from different publications if they share one or more authors**. This helps identify truly independent measurements.We will try to implement this in the following sections. Each step will be checkpointed in the `curated` list. ###Code curated = [] print("Initial number of bioactivities:", nm_activities.shape[0]) ###Output Initial number of bioactivities: 252191 ###Markdown Remove the dummy target CHEMBL612545 ###Code no_dummy = nm_activities.query("'CHEMBL612545' not in `target_dictionary.chembl_id`") no_dummy.shape[0] curated.append(no_dummy) ###Output _____no_output_____ ###Markdown Group by protein and ligand, and remove singletons.We are _not_ removing the singletons because we can actually use them (Kramer et al were studing the distribution of activity values, we are doing predictions). This is here so we get an idea on how many "single measurements" the dataset contains. ###Code grouped_by_system = no_dummy.groupby(['target_dictionary.chembl_id', 'molecule_dictionary.chembl_id']) grouped_counts = grouped_by_system.size() singletons = grouped_counts[grouped_counts == 1].index print("Single measurements ratio:", singletons.shape[0], "out of", activities.shape[0], "->", 100 * singletons.shape[0] / activities.shape[0], "%") ###Output Single measurements ratio: 165840 out of 252727 -> 65.62021469807341 % ###Markdown Clean extreme values ###Code no_extreme = no_dummy.query("1 <= `activities.standard_value` <= 15") no_extreme.shape[0] curated.append(no_extreme) ###Output _____no_output_____ ###Markdown Keep the highest value for those systems with several measurements in the same publicationWe sort by activity value (largest first), and then remove the duplicate keys for target+ligand+document, thus removing those values in the same publication that are not the maximum, because we keep the first occurrence. ###Code max_activity_same_publication = no_extreme.sort_values("activities.standard_value", ascending=False).drop_duplicates(["target_dictionary.chembl_id", "molecule_dictionary.chembl_id", "docs.chembl_id"]) max_activity_same_publication.shape[0] curated.append(max_activity_same_publication) ###Output _____no_output_____ ###Markdown Remove measurements that come from manuscripts citing the original reporting publication. Identify systems that have the exact same number. 
###Code no_exact_duplicates = max_activity_same_publication.drop_duplicates(["target_dictionary.chembl_id", "molecule_dictionary.chembl_id", "activities.standard_value"]) no_exact_duplicates.shape[0] curated.append(no_exact_duplicates) ###Output _____no_output_____ ###Markdown What about those within a certain rounding error? We do that by removing duplicates after rounding with two decimal points. As a result our threshold is smaller (0.01 vs Kramer's 0.02). ###Code no_rounded_duplicates = ( no_exact_duplicates .assign(activities_standard_value_rounded=lambda x: x["activities.standard_value"].round(2)) .drop_duplicates(["target_dictionary.chembl_id", "molecule_dictionary.chembl_id", "activities_standard_value_rounded"]) .drop(columns=["activities_standard_value_rounded"]) ) no_rounded_duplicates.shape[0] ###Output _____no_output_____ ###Markdown We don't deal with unit transcription errors because in that case we are trusting ChEMBL's standardized units. ###Code curated.append(no_rounded_duplicates) ###Output _____no_output_____ ###Markdown Remove measurements for the same system from different publications if they share one or more authors ###Code def shared_authors(group): "Return True if authors are not shared and we should keep this group" if group.shape[0] == 1: return [True] authors_per_entry = [(set() if entry is None else set(entry.split(", "))) for entry in group.values] return [any(a.intersection(b) for b in authors_per_entry if a != b) for a in authors_per_entry] no_shared_authors_mask = no_rounded_duplicates.groupby(["target_dictionary.chembl_id", "molecule_dictionary.chembl_id"])["docs.authors"].transform(shared_authors) no_shared_authors_mask no_shared_authors = no_rounded_duplicates[no_shared_authors_mask] no_shared_authors.shape[0] curated.append(no_shared_authors) final = curated[-1] ###Output _____no_output_____ ###Markdown Save to CSV ###Code final.to_csv(OUT / f"activities-chembl{CHEMBL_VERSION}.csv") ###Output _____no_output_____ ###Markdown Analyze cleaned data ###Code from matplotlib import pyplot as plt fig, ax = plt.subplots() ax.plot(range(1, len(curated) + 1), [df.shape[0] for df in curated]) ax.set_xlabel("Curation steps") ax.set_ylabel("# data points") ax.set_ylim(0, 300000); final["activities.standard_value"].plot.hist(title="Distribution of p-activity values", xlabel="pMeasurement"); display(final["assays.confidence_score"].value_counts()) final["assays.confidence_score"].plot.hist(title="Distribution of confidence scores"); ###Output _____no_output_____ ###Markdown Distribution of document ids: ###Code doc_counts = final["docs.chembl_id"].value_counts() display(doc_counts[:10]) doc_counts[:30].plot.bar(); ###Output _____no_output_____ ###Markdown Distribution of clinical phases: ###Code phase_counts = final["molecule_dictionary.max_phase"].value_counts() display(phase_counts[:10]) phase_counts[:30].plot.bar(); ###Output _____no_output_____ ###Markdown Distribution of measurements per kinase: ###Code counts_per_target = final.groupby("target_dictionary.chembl_id").size().sort_values(ascending=False) from IPython.display import Markdown md = ["| Target | Count |", "|--------|-------|"] for k, v in counts_per_target.head(20).iteritems(): md.append(f"| [{k}](https://www.ebi.ac.uk/chembl/target_report_card/{k}/) | {v} |") display(Markdown("\n".join(md))) counts_per_target.plot.bar() counts_per_target_and_measurement = pd.DataFrame(final.groupby(["target_dictionary.chembl_id", "activities.standard_type"]).size(), columns=["Count"]) 
counts_per_target_and_measurement counts_per_target_and_measurement.sort_values(by="Count", ascending=False).plot.bar() ###Output _____no_output_____ ###Markdown Query ChEMBL for bioactivities involving protein kinasesChEMBL stores a good amount of bioactivities for protein-ligand complexes in the field of kinases. Before running this notebook, we have identified the set of human protein kinases we want to target (`/human-kinases`) and under what identifiers these proteins are stored in ChEMBL (`/kinases-in-chembl`).Now we can query ChEMBL for all bioactivities involving these targets but first we need to make sure we don't run into some common pitfalls (see section "Curate the dataset"). ###Code import sqlite3 as sql import csv import os from pathlib import Path from collections import defaultdict import pandas as pd from tqdm import tnrange import numpy as np HERE = Path(_dh[-1]) REPO = (HERE / "..").resolve() DATA = REPO / "data" OUT = HERE / "_out" OUT.mkdir(parents=True, exist_ok=True) ###Output _____no_output_____ ###Markdown Get the [local export](https://chembl.gitbook.io/chembl-interface-documentation/downloads) for the ChEMBL version you want to query. You need the one named `chembl__sqlite.tar.gz`. Extract the `*.db` file and point to its location using `CHEMBL_SQLITE_PATH` below: ###Code CHEMBL_VERSION = 28 CHEMBL_SQLITE_PATH = f"../../_chembl_fetcher/chembl_{CHEMBL_VERSION}/chembl_{CHEMBL_VERSION}_sqlite/chembl_{CHEMBL_VERSION}.db" ###Output _____no_output_____ ###Markdown Map `chembl_targets` to `UniprotID`.This file is generated with the `kinase-in-chembl` notebooks. Update it if you think there might be more ChEMBL targets. ###Code kinases = pd.read_csv(DATA / f"human_kinases_and_chembl_targets.chembl_{CHEMBL_VERSION}.csv") kinases ###Output _____no_output_____ ###Markdown We are only interested in `SINGLE PROTEIN` targets for now. ###Code kinases_sp = kinases[kinases.type == "SINGLE PROTEIN"].drop("type", axis=1) kinases_sp ###Output _____no_output_____ ###Markdown We will need this dataframe to map between chembl target and uniprot later, when we write the query results to disk. Query local ChEMBL DB for speed ###Code conn = sql.connect(CHEMBL_SQLITE_PATH, isolation_level=None) ###Output _____no_output_____ ###Markdown Types of assaysCheck which kind of assays can be found on human kinases. ###Code CHEMBL_TARGETS = set(kinases_sp.chembl_targets.tolist()) q = f""" SELECT standard_type, COUNT(standard_type) FROM activities LEFT JOIN assays ON assays.assay_id=activities.assay_id LEFT JOIN target_dictionary ON target_dictionary.tid=assays.tid WHERE target_dictionary.chembl_id IN ({', '.join([f'"{x}"' for x in CHEMBL_TARGETS])}) GROUP BY standard_type ORDER BY 2 DESC """ assay_types = pd.read_sql(q, conn) assay_types.columns = ["Value", "Count"] assay_types.head(10) ###Output _____no_output_____ ###Markdown There's a lot of information we are not using! `Inhibition` is as populated as `IC50`, but we don't know what kind of information this category contains. Query bioactivities Get all entries in the SQL db that:- Correspond to IC50, Ki, Kd measurements. 
Check `activities.standard_type` fields.- assay_type = B (Binding)- Relation is `=`- Target is part of the human kinome (as provided by `DATA / human_kinases_and_chembl_targets.chembl_{CHEMBL_VERSION}.csv`, see `kinases` cell)- Confidence score is greather than zero (in practice, only 43 entries have score=0; the rest are either 8 or 9) ###Code CHEMBL_TARGETS = set(kinases_sp.chembl_targets.tolist()) select_these = [ "activities.activity_id", "assays.chembl_id", "target_dictionary.chembl_id", "molecule_dictionary.chembl_id", "molecule_dictionary.max_phase", "activities.standard_type", "activities.standard_value", "activities.standard_units", "compound_structures.canonical_smiles", "compound_structures.standard_inchi", "component_sequences.sequence", "assays.confidence_score", "docs.chembl_id", "docs.year", "docs.authors", ] q = f""" SELECT {', '.join(select_these)} FROM activities LEFT JOIN assays ON assays.assay_id=activities.assay_id LEFT JOIN target_dictionary ON target_dictionary.tid=assays.tid LEFT JOIN compound_structures ON activities.molregno=compound_structures.molregno LEFT JOIN molecule_dictionary ON activities.molregno=molecule_dictionary.molregno LEFT JOIN target_components ON target_dictionary.tid=target_components.tid LEFT JOIN component_sequences ON target_components.component_id=component_sequences.component_id LEFT JOIN docs ON docs.doc_id=activities.doc_id WHERE target_dictionary.chembl_id IN ({', '.join([f'"{x}"' for x in CHEMBL_TARGETS])}) AND activities.standard_relation="=" AND assays.assay_type="B" AND activities.standard_type in ("IC50", "Ki", "Kd") AND assays.confidence_score > 0 """ activities_sql = pd.read_sql_query(q, conn) activities_sql.columns = select_these ###Output _____no_output_____ ###Markdown We need to add the UniprotID column from `kinases_sp`: ###Code activities = pd.merge(activities_sql, kinases_sp[["chembl_targets", "UniprotID"]], left_on="target_dictionary.chembl_id", right_on="chembl_targets", how="left").drop(columns=["chembl_targets"]) activities ###Output _____no_output_____ ###Markdown Although units have been standardized, not all of them are $nM$. ###Code activities["activities.standard_units"].unique() ###Output _____no_output_____ ###Markdown Let's keep only those that are $nM$. ###Code nm_activities = activities.query("`activities.standard_units` == 'nM'") nm_activities ###Output _____no_output_____ ###Markdown Before we continue, we want all the activities in logarithmic format (`pMeasurement`). Now that all the values are $nM$, we can do:```pythonpMeasurement = 9 - (log(measurement) / log(10))``` ###Code with pd.option_context("chained_assignment", None): nm_activities.loc[:, "activities.standard_value"] = nm_activities["activities.standard_value"].apply(lambda x: 9 - (np.log(x) / np.log(10))) nm_activities.loc[:, "activities.standard_type"] = nm_activities["activities.standard_type"].apply("p{}".format) nm_activities ###Output _____no_output_____ ###Markdown Let's save the dataset as is, with no curation, now. ###Code nm_activities.to_csv(OUT / f"activities-chembl{CHEMBL_VERSION}-not-curated.csv") ###Output _____no_output_____ ###Markdown Curate the dataset The following list is compiled from lessons learned in Kramer's _J. Med. Chem._ 2012, 55, 5165-5173. [10.1021/jm300131x](https://dx.doi.org/10.1021/jm300131x).Kramer et al propose the following pipeline to make sure the data queried from ChEMBL is high quality:1. **Remove the dummy target CHEMBL612545**. 
Maybe we don't have this because we are coming from UniProt IDs, but this is a dummy identifier for unchecked targets!2. **Group by protein and ligand, and remove singletons**. Systems that were measured only once are not taken into account. We might leave these ones.3. **Remove unclear units or values**. Only measurements with reported units. Values lower than $1fM$, higher than $10mM$ must be removed too.4. **Keep the highest pKi for those systems with several measurements in the _same_ publication**. This handles unclear stereoisomer annotations and/or experimental optimization.5. **Remove measurements that come from manuscripts citing the original reporting publication**. Probably the most important part here. Identical values for the same system in different publications were removed, as well as those within 0.02 pKi units (rounding error), or exactly 3 or 6 pKi units (transcription errors).6. **Remove measurements for the same system from different publications if they share one or more authors**. This helps identify truly independent measurements.We will try to implement this in the following sections. Each step will be checkpointed in the `curated` list. ###Code curated = [] print("Initial number of bioactivities:", nm_activities.shape[0]) ###Output Initial number of bioactivities: 237336 ###Markdown Remove the dummy target CHEMBL612545 ###Code no_dummy = nm_activities.query("'CHEMBL612545' not in `target_dictionary.chembl_id`") no_dummy.shape[0] curated.append(no_dummy) ###Output _____no_output_____ ###Markdown Group by protein and ligand, and remove singletons.We are _not_ removing the singletons because we can actually use them (Kramer et al were studing the distribution of activity values, we are doing predictions). This is here so we get an idea on how many "single measurements" the dataset contains. ###Code grouped_by_system = no_dummy.groupby(['target_dictionary.chembl_id', 'molecule_dictionary.chembl_id']) grouped_counts = grouped_by_system.size() singletons = grouped_counts[grouped_counts == 1].index print("Single measurements ratio:", singletons.shape[0], "out of", activities.shape[0], "->", 100 * singletons.shape[0] / activities.shape[0], "%") ###Output Single measurements ratio: 156241 out of 237830 -> 65.69440356557205 % ###Markdown Clean extreme values ###Code no_extreme = no_dummy.query("1 <= `activities.standard_value` <= 15") no_extreme.shape[0] curated.append(no_extreme) ###Output _____no_output_____ ###Markdown Keep the highest value for those systems with several measurements in the same publicationWe sort by activity value (largest first), and then remove the duplicate keys for target+ligand+document, thus removing those values in the same publication that are not the maximum, because we keep the first occurrence. ###Code max_activity_same_publication = no_extreme.sort_values("activities.standard_value", ascending=False).drop_duplicates(["target_dictionary.chembl_id", "molecule_dictionary.chembl_id", "docs.chembl_id"]) max_activity_same_publication.shape[0] curated.append(max_activity_same_publication) ###Output _____no_output_____ ###Markdown Remove measurements that come from manuscripts citing the original reporting publication. Identify systems that have the exact same number. 
###Code no_exact_duplicates = max_activity_same_publication.drop_duplicates(["target_dictionary.chembl_id", "molecule_dictionary.chembl_id", "activities.standard_value"]) no_exact_duplicates.shape[0] curated.append(no_exact_duplicates) ###Output _____no_output_____ ###Markdown What about those within a certain rounding error? We do that by removing duplicates after rounding with two decimal points. As a result our threshold is smaller (0.01 vs Kramer's 0.02). ###Code no_rounded_duplicates = ( no_exact_duplicates .assign(activities_standard_value_rounded=lambda x: x["activities.standard_value"].round(2)) .drop_duplicates(["target_dictionary.chembl_id", "molecule_dictionary.chembl_id", "activities_standard_value_rounded"]) .drop(columns=["activities_standard_value_rounded"]) ) no_rounded_duplicates.shape[0] ###Output _____no_output_____ ###Markdown We don't deal with unit transcription errors because in that case we are trusting ChEMBL's standardized units. ###Code curated.append(no_rounded_duplicates) ###Output _____no_output_____ ###Markdown Remove measurements for the same system from different publications if they share one or more authors ###Code def shared_authors(group): "Return True if authors are not shared and we should keep this group" if group.shape[0] == 1: return [True] authors_per_entry = [(set() if entry is None else set(entry.split(", "))) for entry in group.values] return [any(a.intersection(b) for b in authors_per_entry if a != b) for a in authors_per_entry] no_shared_authors_mask = no_rounded_duplicates.groupby(["target_dictionary.chembl_id", "molecule_dictionary.chembl_id"])["docs.authors"].transform(shared_authors) no_shared_authors_mask no_shared_authors = no_rounded_duplicates[no_shared_authors_mask] no_shared_authors.shape[0] curated.append(no_shared_authors) final = curated[-1] ###Output _____no_output_____ ###Markdown Save to CSV ###Code final.to_csv(OUT / f"activities-chembl{CHEMBL_VERSION}.csv") ###Output _____no_output_____ ###Markdown Analyze cleaned data ###Code from matplotlib import pyplot as plt fig, ax = plt.subplots() ax.plot(range(1, len(curated) + 1), [df.shape[0] for df in curated]) ax.set_xlabel("Curation steps") ax.set_ylabel("# data points") ax.set_ylim(0, 250000); final["activities.standard_value"].plot.hist(title="Distribution of p-activity values", xlabel="pMeasurement"); display(final["assays.confidence_score"].value_counts()) final["assays.confidence_score"].plot.hist(title="Distribution of confidence scores"); ###Output _____no_output_____ ###Markdown Distribution of document ids: ###Code doc_counts = final["docs.chembl_id"].value_counts() display(doc_counts[:10]) doc_counts[:30].plot.bar(); ###Output _____no_output_____ ###Markdown Distribution of clinical phases: ###Code phase_counts = final["molecule_dictionary.max_phase"].value_counts() display(phase_counts[:10]) phase_counts[:30].plot.bar(); ###Output _____no_output_____ ###Markdown Distribution of measurements per kinase: ###Code counts_per_target = final.groupby("target_dictionary.chembl_id").size().sort_values(ascending=False) from IPython.display import Markdown md = ["| Target | Count |", "|--------|-------|"] for k, v in counts_per_target.head(20).iteritems(): md.append(f"| [{k}](https://www.ebi.ac.uk/chembl/target_report_card/{k}/) | {v} |") display(Markdown("\n".join(md))) counts_per_target.plot.bar() counts_per_target_and_measurement = pd.DataFrame(final.groupby(["target_dictionary.chembl_id", "activities.standard_type"]).size(), columns=["Count"]) 
counts_per_target_and_measurement counts_per_target_and_measurement.sort_values(by="Count", ascending=False).plot.bar() ###Output _____no_output_____
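###Markdown A quick worked check of the nM-to-logarithmic conversion used above (a sketch, not part of the original notebook flow): with values in nM, `9 - log(x)/log(10)` is the same as `9 - log10(x)`, mapping 1 nM to 9.0, 1 µM to 6.0 and 1 mM to 3.0. ###Code
import numpy as np

def to_p_measurement(value_nm):
    """Convert an activity value in nM to its logarithmic form (e.g. IC50 -> pIC50)."""
    return 9 - np.log10(value_nm)

assert np.isclose(to_p_measurement(1.0), 9.0)   # 1 nM -> 9.0
assert np.isclose(to_p_measurement(1e3), 6.0)   # 1 uM -> 6.0
assert np.isclose(to_p_measurement(1e6), 3.0)   # 1 mM -> 3.0
###Output _____no_output_____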
BostonHousingPrice_Keras.ipynb
###Markdown **The Boston Housing Price dataset** With this dataset, our goal is to predict the median price of homes in a Boston suburb in the mid-1970s, based on features such as the crime rate, taxes, nitric oxide concentration, and so on.**A prediction problem consists of obtaining values on a continuous scale.** To understand this concept, imagine we are each asked to give our own prediction of the current temperature. One classmate might say 25°C, someone else might estimate 19.5°C, another might guess 22°C, and so on. The point is that the predictions can take any value; we are not restricted to a particular range or value, and it may be an integer or a value with decimals. This behaviour is an example of predicting values on a continuous scale. In the Boston Housing Price problem, only one value is predicted per house: its price. In machine learning, the task of predicting values on a continuous scale is known as **regression**.The Boston Housing Price dataset contains only 506 instances, of which 404 belong to the training set and 102 to the test set. Another peculiarity of this dataset is that the input attributes are defined on different scales. For example, some features take values between 0 and 1, others between 1 and 12, others between 0 and 100, and so on.Keras ships with this dataset preloaded: ###Code from keras.datasets import boston_housing (train_data, train_targets), (test_data, test_targets) = boston_housing.load_data() train_data.shape test_data.shape ###Output _____no_output_____
###Markdown The 13 features or attributes of the input data are listed below: 1. Per capita crime rate.2. Proportion of residential land zoned for lots over 25,000 square feet.3. Proportion of non-retail business acres per town.4. Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).5. Nitric oxides concentration (parts per 10 million).6. Average number of rooms per dwelling.7. Proportion of owner-occupied units built prior to 1940.8. Weighted distances to five Boston employment centres.9. Index of accessibility to radial highways.10. Full-value property-tax rate per $10,000.11. Pupil-teacher ratio by town.12. 1000 * (Bk - 0.63) ** 2 where Bk is the proportion of Black people by town.13. % lower status of the population. ###Code train_data[9] # Attributes of the house at index 9 of the training set len(train_data[9]) # Number of attributes of the house at index 9 of the training set ###Output _____no_output_____
###Markdown The labels of the dataset are median house prices, expressed in thousands of dollars. ###Code train_targets[9] # Price (in thousands of dollars) of the house at index 9 of the training set ###Output _____no_output_____
###Markdown **Preprocessing the data** Because the data attributes are expressed on different scales, it is convenient to standardize these values to make it easier for the neural network to adjust its weights. To do so, the standardization is carried out per attribute: from each column of the input matrix we subtract its mean and then divide by its standard deviation, producing attributes centred at zero with a standard deviation of 1. ###Code # sklearn, formally known as Scikit-learn, is a Python library focused on machine learning. # The StandardScaler class makes the attributes centred at zero with a standard deviation of 1. from sklearn.preprocessing import StandardScaler # Instantiate a StandardScaler object. stdsc = StandardScaler() # Compute the standardization parameters from the training set and then standardize that set with the # generated parameters. train_data_std = stdsc.fit_transform(train_data) # The fit_transform method is applied only to the training set # Standardize the test set with the parameters obtained from the training set. test_data_std = stdsc.transform(test_data) train_data_std test_data_std ###Output _____no_output_____
###Markdown **Building the neural network** ###Code from keras import models from keras import layers # For this problem we define the model inside a function because, as we will see later, # we will be using the same model several times. def build_model(): model = models.Sequential() model.add(layers.Dense(64, activation='relu', input_shape=(train_data.shape[1],))) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(1)) model.compile(optimizer='rmsprop', loss='mse', metrics=['mae']) #MSE = Mean Square Error / MAE = Mean Absolute Error return model ###Output _____no_output_____
###Markdown **Applying cross-validation** Unlike the way we implemented the validation phase for the IMDB dataset, this time, because the Boston Housing Price dataset has so few instances, we will use the cross-validation approach. ###Code import numpy as np k = 4 # In total, the training set will be split into 4 batches or folds, so 4 models will be generated. num_val_samples = len(train_data_std) // k # Number of instances that make up each fold num_epochs = 100 # Each model will be trained for 100 epochs all_scores = [] # The all_scores list will store the mae value of each of the four models for i in range(k): print('Procesamiento de fold #', i) #i = 0 # For each fold: # Build its validation set from the original training set val_data = train_data_std[i * num_val_samples: (i + 1) * num_val_samples] # val_data = train_data_std[0 * 101: (0 + 1) * 101] # val_data = train_data_std[0 : 1 *101] # val_data = train_data_std[0: 101] val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples] # val_targets = train_targets[0: 101] # Build its training set from the original training set partial_train_data = np.concatenate( [train_data_std[:i * num_val_samples], # = [train_data_std[:0 * 101] = [train_data_std[:0] train_data_std[(i + 1) * num_val_samples:]], # = train_data_std[(0 + 1) * 101:] = train_data_std[1 * 101:] = train_data_std[101:] axis=0) #np.concatenate([train_data_std[:0], train_data_std[101:]) #train_data_std[303:] partial_train_targets = np.concatenate( [train_targets[:i * num_val_samples], train_targets[(i + 1) * num_val_samples:]], #np.concatenate([train_data_std[:0], train_data_std[101:]) axis=0) #----------------------------------------------------------------------------------- # Call the model we compiled earlier model = build_model() # Train the model (the argument verbose=0 means training runs silently) model.fit(partial_train_data, partial_train_targets, epochs=num_epochs, batch_size=1, verbose=0) # Evaluate the model's performance on the validation set val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0) # Because the evaluation is carried out over k folds, we use the all_scores list to # store the evaluation result of each fold. all_scores.append(val_mae) all_scores # Display the mae value of each fold ###Output _____no_output_____
###Markdown Note that there is some discrepancy among the validation results above; to deal with this, the most appropriate thing is to average these results. ###Code np.mean(all_scores) # Compute the average mae, which is the result we care about ###Output _____no_output_____
###Markdown Now we will train the models again, this time keeping track of each model's validation performance at every epoch (`num_epochs` is set to 100 in the code below).
###Code num_epochs = 100 all_mae_histories = [] for i in range(k): print('Procesamiento de fold #', i) # For each fold: # Build its validation set from the original training set val_data = train_data_std[i * num_val_samples: (i + 1) * num_val_samples] val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples] # Build its training set from the original training set partial_train_data = np.concatenate( [train_data_std[:i * num_val_samples], train_data_std[(i + 1) * num_val_samples:]], axis=0) partial_train_targets = np.concatenate( [train_targets[:i * num_val_samples], train_targets[(i + 1) * num_val_samples:]], axis=0) #----------------------------------------------------------------------------------- # Call the model we compiled earlier model = build_model() # Train the model and record its performance (mae) on the validation set at every epoch. history = model.fit(partial_train_data, partial_train_targets, validation_data=(val_data, val_targets), epochs=num_epochs, batch_size=1, verbose=0) mae_history = history.history['val_mae'] # Because the evaluation is carried out over k folds, we use the all_mae_histories list to # store each fold's per-epoch evaluation results. all_mae_histories.append(mae_history) all_mae_histories[0] len(all_mae_histories[3]) ###Output _____no_output_____
###Markdown For each epoch, we average the mae across the folds: ###Code average_mae_history = [ np.mean([x[i] for x in all_mae_histories]) for i in range(num_epochs)] # i = 0 # Take the performance of each of the four models at the first epoch and average these four values # to obtain the average performance of the four models at the first epoch. # i = 1 # Take the performance of each of the four models at the second epoch and average these four values # to obtain the average performance of the four models at the second epoch. # i = 2 # Take the performance of each of the four models at the third epoch and average these four values # to obtain the average performance of the four models at the third epoch. #... ###Output _____no_output_____
###Markdown Let's plot the result: ###Code import matplotlib.pyplot as plt plt.plot(range(1, len(average_mae_history) + 1), average_mae_history, 'g') plt.xlabel('Epochs') plt.ylabel('Validation MAE') plt.show() ###Output _____no_output_____
###Markdown Based on the previous plots, we can see that from roughly epoch 50 onwards the model stops improving and its performance starts to degrade. **Retraining and evaluating the final model** Once the model has been selected on the basis of its cross-validation performance, a recommended practice is to retrain the architecture with the same parameters that produced the selected model, this time on the original training set. ###Code # Instantiate the model again model = build_model() # Train the model on the original training set model.fit(train_data_std, train_targets, epochs=45, batch_size=1, verbose=0) # Evaluate the final model test_mse_score, test_mae_score = model.evaluate(test_data_std, test_targets) test_mae_score test_data_std.shape model.predict(test_data_std) ###Output _____no_output_____
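###Markdown A small check, assuming `train_data`, `test_data`, `train_data_std` and `test_data_std` from the cells above are still in scope, that `StandardScaler` performs exactly the per-column standardization described earlier (subtract each column's training mean, divide by its training standard deviation): ###Code
import numpy as np

# Manual per-column standardization with the training-set statistics.
mean = train_data.mean(axis=0)
std = train_data.std(axis=0)
manual_train_std = (train_data - mean) / std
manual_test_std = (test_data - mean) / std  # the test set reuses the *training* statistics

# Should match what StandardScaler produced, up to floating-point error.
print(np.allclose(manual_train_std, train_data_std))
print(np.allclose(manual_test_std, test_data_std))
###Output _____no_output_____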
split_data.ipynb
###Markdown Split Train & Val set such that label distributions match ###Code train_labels = pd.DataFrame() val_labels = pd.DataFrame() train_percent = 0.9 for col in labels.columns: curr_class = labels.loc[labels[col]==1] N_curr = len(curr_class) np.random.seed(123) perm = np.random.permutation(N_curr) curr_class = curr_class.iloc[perm] curr_train = curr_class.iloc[:int(train_percent*N_curr)] curr_val = curr_class.iloc[int(train_percent*N_curr):] train_labels = train_labels.append(curr_train) val_labels = val_labels.append(curr_val) train_class_dist = train_labels.sum(axis = 0, skipna = True) val_class_dist = val_labels.sum(axis=0,skipna=True) print("Train class distribution\n",train_class_dist/sum(train_class_dist)) print("\nVal class distribution\n",val_class_dist/sum(val_class_dist)) ###Output Train class distribution MEL 0.178512 NV 0.508336 BCC 0.131175 AK 0.034220 BKL 0.103580 DF 0.009432 VASC 0.009959 SCC 0.024787 UNK 0.000000 dtype: float64 Val class distribution MEL 0.178557 NV 0.507686 BCC 0.131257 AK 0.034292 BKL 0.103666 DF 0.009460 VASC 0.010248 SCC 0.024832 UNK 0.000000 dtype: float64 ###Markdown Success! Split train and val data into directories ###Code train_names = train_labels.index val_names = val_labels.index train_labels = train_labels.sort_values("image") val_labels = val_labels.sort_values("image") for curr_class in labels.columns: path = os.path.join("data","val",curr_class) if not os.path.exists(path): os.mkdir(path) for curr_class in labels.columns: path = os.path.join("data","train",curr_class) if not os.path.exists(path): os.mkdir(path) src = "ISIC_2019_Training_Input" for i in range(len(train_labels)): label = train_labels.iloc[i] curr_class = label[label==1].index[0] dest = os.path.join("data","train",curr_class) name = label.name + ".jpg" full_file_name = os.path.join(src,name) if (os.path.isfile(full_file_name)): shutil.copy(full_file_name, dest) src = "ISIC_2019_Training_Input" for i in range(len(val_labels)): label = val_labels.iloc[i] curr_class = label[label==1].index[0] dest = os.path.join("data","val",curr_class) name = label.name + ".jpg" full_file_name = os.path.join(src,name) if (os.path.isfile(full_file_name)): shutil.copy(full_file_name, dest) ###Output _____no_output_____ ###Markdown Notebook to split the text file to have the good processing format ###Code import re with open("train.txt") as f: text = " ".join(f.read().split("\n")) text[0:300] punctuation_elements = ['O', ',COMMA', '.PERIOD', '?QUESTIONMARK', ':COLON', '!EXCLAMATIONMARK', ';SEMICOLON'] new_text = [] starter = 0 ender = 0 for element in text.split(): ender += 1 if element in punctuation_elements: new_text.append(" ".join(text.split()[starter:ender-1]) + "\t" + text.split()[ender-1]) starter = ender with open("train_clean.txt", "w") as f: f.write("\n".join(new_text)) ###Output _____no_output_____ ###Markdown Split DatasetThis notebook shows how to split dataset into train, validation and test sub set. Read dataUse numpy and pandas to read data file list. ###Code import numpy as np import pandas as pd np.random.seed(1) ###Output _____no_output_____ ###Markdown Tell pandas where the csv file is: ###Code csv_file_url = "data/data.csv" ###Output _____no_output_____ ###Markdown Check the data. ###Code full_data = pd.read_csv(csv_file_url) total_file_number = len(full_data) print("There are total {} examples in this dataset.".format(total_file_number)) full_data.head() ###Output There are total 646 examples in this dataset. 
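###Markdown Referring back to the label-distribution-matched split at the top of this file: a shorter alternative sketch (an assumption, not the original author's code) that obtains an equivalent stratified 90/10 split with scikit-learn, assuming `labels` is the same one-hot label dataframe used there: ###Code
from sklearn.model_selection import train_test_split

# Collapse the one-hot columns into a single class name per image.
class_per_image = labels.idxmax(axis=1)

train_labels_alt, val_labels_alt = train_test_split(
    labels,
    test_size=0.1,              # matches train_percent = 0.9
    stratify=class_per_image,   # keep class proportions identical in both splits
    random_state=123,
)
print(train_labels_alt.shape, val_labels_alt.shape)
###Output _____no_output_____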
###Markdown Split filesThere will be three groups: train, validation and test.Tell notebook number of samples for each group in the following cell. ###Code num_train = 400 num_validation = 146 num_test = 100 ###Output _____no_output_____ ###Markdown Make sure there are enough example for your choice. ###Code assert num_train + num_validation + num_test <= total_file_number, "Not enough examples for your choice." print("Looks good! {} for train, {} for validation and {} for test.".format(num_train, num_validation, num_test)) ###Output Looks good! 400 for train, 146 for validation and 100 for test. ###Markdown Random spliting files. ###Code index_train = np.random.choice(total_file_number, size=num_train, replace=False) index_validation_test = np.setdiff1d(list(range(total_file_number)), index_train) index_validation = np.random.choice(index_validation_test, size=num_validation, replace=False) index_test = np.setdiff1d(index_validation_test, index_validation) ###Output _____no_output_____ ###Markdown Merge them into sub datasets. ###Code train = full_data.iloc[index_train] validation = full_data.iloc[index_validation] test = full_data.iloc[index_test] ###Output _____no_output_____ ###Markdown Write to files ###Code train.to_csv('data/data_train.csv', index=None) validation.to_csv("data/data_validation.csv", index=None) test.to_csv('data/data_test.csv', index=None) print("All done!") ###Output All done! ###Markdown Split DatasetThis notebook shows how to split dataset into train, validation and test sub set. Read dataUse numpy and pandas to read data file list. ###Code import numpy as np import pandas as pd np.random.seed(1) ###Output _____no_output_____ ###Markdown Tell pandas where the csv file is: ###Code csv_file_url = "data/data.csv" ###Output _____no_output_____ ###Markdown Check the data. ###Code full_data = pd.read_csv(csv_file_url) total_file_number = len(full_data) print("There are total {} examples in this dataset.".format(total_file_number)) full_data.head() ###Output There are total 221565 examples in this dataset. ###Markdown Split filesThere will be three groups: train, validation and test.Tell notebook number of samples for each group in the following cell. ###Code num_train = 200000 num_validation = 11565 num_test = 10000 ###Output _____no_output_____ ###Markdown Make sure there are enough example for your choice. ###Code assert num_train + num_validation + num_test <= total_file_number, "Not enough examples for your choice." print("Looks good! {} for train, {} for validation and {} for test.".format(num_train, num_validation, num_test)) ###Output Looks good! 200000 for train, 11565 for validation and 10000 for test. ###Markdown Random spliting files. ###Code index_train = np.random.choice(total_file_number, size=num_train, replace=False) index_validation_test = np.setdiff1d(list(range(total_file_number)), index_train) index_validation = np.random.choice(index_validation_test, size=num_validation, replace=False) index_test = np.setdiff1d(index_validation_test, index_validation) ###Output _____no_output_____ ###Markdown Merge them into sub datasets. ###Code train = full_data.iloc[index_train] validation = full_data.iloc[index_validation] test = full_data.iloc[index_test] ###Output _____no_output_____ ###Markdown Write to files ###Code train.to_csv('data/data_train.csv', index=None) validation.to_csv("data/data_validation.csv", index=None) test.to_csv('data/data_test.csv', index=None) print("All done!") ###Output All done! 
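###Markdown A small sanity check (a sketch, assuming the variables from the cells above are still in scope) that the index sets produced by `np.random.choice` and `np.setdiff1d` are disjoint and add up to the requested split sizes: ###Code
train_set = set(index_train)
val_set = set(index_validation)
test_set = set(index_test)

# No example ends up in more than one split...
assert not (train_set & val_set)
assert not (train_set & test_set)
assert not (val_set & test_set)

# ...and the sizes add up as requested.
assert len(train_set) == num_train
assert len(val_set) == num_validation
assert len(test_set) == total_file_number - num_train - num_validation
print("Splits are disjoint and complete.")
###Output _____no_output_____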
###Markdown Split Data ###Code data='img' og_path = root + '/' + data train_path = og_path + '_split/train' test_path = og_path + '_split/test' classes = os.listdir(og_path) for c in tqdm(classes): objs = sorted(os.listdir(og_path +'/' + c)) for i,obj in enumerate(objs): if obj.split('.')[1] != 'png': os.remove(og_path+'/'+c+'/'+obj) for c in tqdm(classes): objs = sorted(os.listdir(og_path +'/' + c)) n = len(objs) if not os.path.exists(train_path+'/'+c): os.makedirs(train_path+'/'+c) if not os.path.exists(test_path+'/'+c): os.makedirs(test_path+'/'+c) for i,obj in enumerate(objs): if obj.split('.')[1] == 'png': if i<0.8*n: img_src = og_path+'/'+c+'/'+obj img_dst = train_path+'/'+c+'/'+obj os.symlink(img_src, img_dst) else: img_src = og_path+'/'+c+'/'+obj img_dst = test_path+'/'+c+'/'+obj os.symlink(img_src, img_dst) ###Output 100%|██████████| 9/9 [00:00<00:00, 63.18it/s] ###Markdown Test Size ###Code render_path = root + '/render_split/train' foreground_path = root + '/foreground_split/train' background_path = root + '/background_split/train' img_path = root + '/img_split/train' for c in os.listdir(render_path): l1 = len(os.listdir(render_path + '/' + c)) l2 = len(os.listdir(foreground_path + '/' + c)) l3 = len(os.listdir(img_path + '/' + c)) l4 = len(os.listdir(background_path + '/' + c)) print(c,l1,l2,l3,l4,l1==l2==l3==l4) render_path = root + '/render_split/test' foreground_path = root + '/foreground_split/test' background_path = root + '/background_split/test' img_path = root + '/img_split/test' for c in os.listdir(render_path): l1 = len(os.listdir(render_path + '/' + c)) l2 = len(os.listdir(foreground_path + '/' + c)) l3 = len(os.listdir(img_path + '/' + c)) l4 = len(os.listdir(background_path + '/' + c)) print(c,l1,l2,l3,l4,l1==l2==l3==l4) import json annotations = json.load(open('pix3d.json')) import torchvision root = '/data5/drone_machinelearning/amir/pix3d' og_path = root + '/render' train_path = og_path + '_split/train' train_target_path = root +'/foreground_split/train' test_path = og_path + '_split/test' test_target_path = root +'/foreground_split/test' train_dataset, test_dataset = {}, {} train_dataset['input'], train_dataset['target'], test_dataset['input'], test_dataset['target'] = [], [], [], [] train_dataset['input'] = torchvision.datasets.ImageFolder(train_path) train_dataset['target'] = torchvision.datasets.ImageFolder(train_target_path) import PIL img = PIL.Image.open(og_path+'/'+classes[0]+'/'+os.listdir(og_path+'/'+classes[0])[0]) import torch import torch.nn.functional as F from Arch import * NLayerDiscriminator(3) PixelDiscriminator(input_nc=3) temp = Variable(torch.Tensor(2,1,256,256).fill_(0.1).cuda(),requires_grad=False) t1 = N_D(temp) t2 = P_D(temp) patch = t1.data.shape A=torch.rand(*patch) A=Variable(A) from torchvision import datasets, models, transforms input_transform = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224,scale=(0.6,1.0)), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'test': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) } target_transform = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224,scale=(0.6,1.0)), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'test': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), #transforms.Normalize([0.485, 0.456, 0.406], 
[0.229, 0.224, 0.225]) ]) } from DRLoader import DRLoader from torch.utils.data import Dataset, DataLoader from torch.autograd import Variable temp = DRLoader(train_path, train_target_path, in_transform=input_transform['train'] \ ,target_transform=target_transform['train']) temp2 = DRLoader(test_path, test_target_path, in_transform=input_transform['test'] \ ,target_transform=target_transform['test']) data= DataLoader(temp,32, shuffle=True) temp.ID.shape from tqdm import tqdm for i,(x,y,z) in enumerate(tqdm(data)): if int(x.shape[1])!=4: print(z) print('x',x) break if y.shape[1]!=3: print(z) print('y',y) break z print(x.shape[1])==4 import numpy as np img = np.transpose(img[0].numpy(),[1,2,0]) import matplotlib.pyplot as plt plt.imshow(img[:,:,3]) plt.show() img = img[:,0:3] classes = sorted(os.listdir(input_dir)) im_path, target_path, label = [], [], [] for index,c in enumerate(tqdm(sorted(os.listdir(input_dir)))): for obj in sorted(os.listdir(input_dir+'/'+c)): ann, ext = os.path.splitext(obj)[0], os.path.splitext(obj)[1] if ext not in ['.jpeg','.png']: continue im_path.append(os.path.join(input_dir,c,obj)) target_path.append(os.path.join(target_dir,c,obj)) label.append(int(ann)) images, targets, labels = np.array(im_path), np.array(target_path), np.array(label) len(images) class UNet(nn.Module): def __init__(self, input_nc, ngf=64, output_nc=3): super(UNet, self).__init__() self.conv1 = nn.Conv2d(3, ngf, 3, padding=1) self.conv2 = nn.Conv2d(ngf, ngf, 3, padding=1) self.batchnorm1 = nn.BatchNorm2d(ngf) self.pool1 = nn.MaxPool2d(2,2) self.conv3 = nn.Conv2d(ngf, ngf*2, 3, padding=1) self.conv4 = nn.Conv2d(ngf*2, ngf*2, 3, padding=1) self.batchnorm2 = nn.BatchNorm2d(ngf*2) self.pool2 = nn.MaxPool2d(2,2) self.conv5 = nn.Conv2d(ngf*2, ngf*4, 3, padding=1) self.conv6 = nn.Conv2d(ngf*4, ngf*4, 3, padding=1) self.batchnorm3 = nn.BatchNorm2d(ngf*4) self.pool3 = nn.MaxPool2d(2,2) self.conv7 = nn.Conv2d(ngf*4, ngf*8, 3, padding=1) self.conv8 = nn.Conv2d(ngf*8, ngf*8, 3, padding=1) self.batchnorm4 = nn.BatchNorm2d(ngf*8) self.pool4 = nn.MaxPool2d(2,2) self.conv9 = nn.Conv2d(ngf*8, ngf*8, 3, padding=1) self.conv10 = nn.Conv2d(ngf*8, ngf*8, 3, padding=1) self.batchnorm5 = nn.BatchNorm2d(ngf*8) self.convtran1 = nn.ConvTranspose2d(ngf*8,ngf*8,2,stride=2) self.conv11 = nn.Conv2d(ngf*16,ngf*4,3, padding=1) self.conv12 = nn.Conv2d(ngf*4, ngf*4,3, padding=1) self.batchnorm6 = nn.BatchNorm2d(ngf*4) self.convtran2 = nn.ConvTranspose2d(ngf*4,ngf*4,2,stride=2) self.conv13 = nn.Conv2d(ngf*8,ngf*2,3, padding=1) self.conv14 = nn.Conv2d(ngf*2,ngf*2,3, padding=1) self.batchnorm7 = nn.BatchNorm2d(ngf*2) self.convtran3 = nn.ConvTranspose2d(ngf*2,ngf*2,2,stride=2) self.conv15 = nn.Conv2d(ngf*4,ngf,3, padding=1) self.conv16 = nn.Conv2d(ngf,ngf,3, padding=1) self.batchnorm8 = nn.BatchNorm2d(ngf) self.convtran4 = nn.ConvTranspose2d(ngf,ngf,2,stride=2) self.conv17 = nn.Conv2d(ngf*2,ngf,3, padding=1) self.conv18 = nn.Conv2d(ngf, ngf,3, padding=1) self.batchnorm9 = nn.BatchNorm2d(ngf) self.conv19 = nn.Conv2d(ngf,output_nc,1) def forward(self, x): c1 = torch.nn.functional.relu(self.batchnorm1(self.conv1(x))) c1 = torch.nn.functional.relu(self.batchnorm1(self.conv2(c1))) p1 = self.pool1(c1) print('c1',c1.shape) c2 = torch.nn.functional.relu(self.batchnorm2(self.conv3(p1))) c2 = torch.nn.functional.relu(self.batchnorm2(self.conv4(c2))) p2 = self.pool2(c2) print('c2',c2.shape) c3 = torch.nn.functional.relu(self.batchnorm3(self.conv5(p2))) c3 = torch.nn.functional.relu(self.batchnorm3(self.conv6(c3))) p3 = self.pool3(c3) 
print('c3',c3.shape) c4 = torch.nn.functional.relu(self.batchnorm4(self.conv7(p3))) c4 = torch.nn.functional.relu(self.batchnorm4(self.conv8(c4))) print('c4',c4.shape) p4 = self.pool4(c4) c5 = torch.nn.functional.relu(self.batchnorm5(self.conv9(p4))) c5 = torch.nn.functional.relu(self.batchnorm5(self.conv10(c5))) print('c5',c5.shape) u6 = self.convtran1(c5) u6 = torch.cat((u6,c4),dim=1) print('u6',u6.shape) c6 = torch.nn.functional.relu(self.batchnorm6(self.conv11(u6))) c6 = torch.nn.functional.relu(self.batchnorm6(self.conv12(c6))) print('c6',c6.shape) u7 = self.convtran2(c6) u7 = torch.cat((u7,c3),dim=1) print('u7',u7.shape) c7 = torch.nn.functional.relu(self.batchnorm7(self.conv13(u7))) c7 = torch.nn.functional.relu(self.batchnorm7(self.conv14(c7))) print('c7',c7.shape) u8 = self.convtran3(c7) u8 = torch.cat((u8,c2),dim=1) print('u8',u8.shape) c8 = torch.nn.functional.relu(self.batchnorm8(self.conv15(u8))) c8 = torch.nn.functional.relu(self.batchnorm8(self.conv16(c8))) print('c8',c8.shape) u9 = self.convtran4(c8) u9 = torch.cat((u9,c1),dim=1) print('u9',u9.shape) c9 = torch.nn.functional.relu(self.batchnorm9(self.conv17(u9))) c9 = torch.nn.functional.relu(self.batchnorm9(self.conv18(c9))) print('c9',c9.shape) out = torch.nn.functional.sigmoid(self.conv19(c9)) return out unet = UNet(3) unet.cuda() out = unet(Variable(torch.Tensor(img).cuda())) 512*1.5 ###Output _____no_output_____
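###Markdown A small sketch of how the 80/20 symlink split could be verified, complementing the "Test Size" checks above: for each class folder, the train and test sets should be disjoint and together reproduce the original list of PNGs. The helper takes explicit paths, so it can be pointed at any of the `_split` pairs created above (the commented call uses the `img` folders and assumes `root` as defined earlier). ###Code
import os

def check_split_dirs(orig_dir, train_dir, test_dir):
    for c in sorted(os.listdir(orig_dir)):
        # Compare the original PNGs of each class with the symlinked train/test copies.
        orig = {f for f in os.listdir(os.path.join(orig_dir, c)) if f.endswith('.png')}
        train = set(os.listdir(os.path.join(train_dir, c)))
        test = set(os.listdir(os.path.join(test_dir, c)))
        assert not (train & test), 'train/test overlap in class ' + c
        assert (train | test) == orig, 'files missing from the split of class ' + c
        print(c, len(orig), len(train), len(test))

# check_split_dirs(root + '/img', root + '/img_split/train', root + '/img_split/test')
###Output _____no_output_____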
temp/prepper_shipshoal.ipynb
###Markdown Pre-process input data for coastal variable extractionAuthor: Emily Sturdivant; [email protected]***Pre-process files to be used in extractor.ipynb (Extract barrier island metrics along transects). See the project [README](https://github.com/esturdivant-usgs/BI-geomorph-extraction/blob/master/README.md) and the Methods Report (Zeigler et al., in review). Pre-processing steps1. Pre-created geomorphic features: dunes, shoreline points, armoring.2. Inlets3. Shoreline4. Transects - extend and sort5. Transects - tidy Notes:This process requires some manipulation of the spatial layers by the user. When applicable, instructions are described in this file.*** Import modules ###Code import os import sys import pandas as pd import numpy as np import arcpy import matplotlib.pyplot as plt import matplotlib matplotlib.style.use('ggplot') import core.functions_warcpy as fwa import core.functions as fun from core.setvars import * ###Output site (options: Monomoy, RhodeIsland, Wreck, ParkerRiver, Parramore, Smith, CoastGuard, Assawoman, Myrtle, Cobb, Metompkin, ShipShoal, Rockaway, FireIsland, Cedar, Assateague, Fisherman, CapeLookout, CapeHatteras, Forsythe): Wreck year (options: 2010, 2012, 2014): 2014 Path to project directory (e.g. \\Mac olume\dir\FireIsland2014): ·········································· ###Markdown Initialize variablesChange the filename variables to match your local files. They should be in an Esri file geodatabase named site+year.gdb in your project directory, which will be the value of the variable `home`. Input the site, year, and project directory path. `setvars.py` retrieves the pre-determined values for that site in that year from `configmap.py`. The project directory will be used to set up your workspace. It's hidden for security – sorry! I recommend that you type the path somewhere and paste it in. ###Code # Inputs - vector orig_trans = os.path.join(home, 'trans_orig') # Extended transects: NASC transects extended and sorted, ready to be the base geometry for processing extendedTrans = os.path.join(home, 'extTrans') # Tidied transects: Extended transects without overlapping transects extTrans_tidy = os.path.join(home, 'tidyTrans') # Geomorphology points: positions of indicated geomorphic features ShorelinePts = os.path.join(home, 'Wreck2014_SLpts') # shoreline dlPts = os.path.join(home, 'Wreck2014_DLpts') # dune toe dhPts = os.path.join(home, 'Wreck2014_DHpts') # dune crest # Inlet lines: polyline feature classes delimiting inlet position. Must intersect the full island shoreline inletLines = os.path.join(home, 'Wreck2014_inletLines') # Full island shoreline: polygon that outlines the island shoreline, MHW on oceanside and MTL on bayside barrierBoundary = os.path.join(home, 'Wreck2014_bndpoly_2sl') # Elevation grid: DEM of island elevation at either 5 m or 1 m resolution elevGrid = os.path.join(home, 'Wreck2014_dem') # --- # OPTIONAL - comment out each one that is not available # --- # Study area boundary; manually digitize if the barrier island study area does not end at an inlet. # SA_bounds = os.path.join(home, 'SA_bounds') # Armoring lines: digitize lines of shorefront armoring to be used if dune toe points are not available. 
# armorLines = os.path.join(home, 'armorLines') ###Output _____no_output_____ ###Markdown Prepare input layers ###Code datapre = '14CNT01' csvpath = os.path.join(proj_dir, 'Input_Data', '{}_morphology'.format(datapre), '{}_morphology.csv'.format(datapre)) # state = sitevals['state'] dt_fc, dc_fc, sl_fc = fwa.MorphologyCSV_to_FCsByFeature(csvpath, state, proj_code, csv_fill = 999, fc_fill = -99999, csv_epsg=4326) ###Output _____no_output_____ ###Markdown DunesDisplay the points and the DEM in a GIS to check for irregularities. For example, if shoreline points representing a distance less than X m are visually offset from the general shoreline, they should likely be removed. Another red flag is when the positions of dlows and dhighs in relation to the shore are illogical, i.e. dune crests are seaward of dune toes. Address these irregularities by manually deleting points. Delete conservatively. ArmoringIf the dlows do not capture the entire top-of-beach due to atypical formations caused by anthropogenic modification, you may need to digitize the beachfront armoring. The next code block will generate an empty feature class. Refer to the DEM and orthoimagery. If there is no armoring in the study area, continue. If there is armoring, use the Editing toolbar to add lines to the feature class that trace instances of armoring. Common manifestations of what we call armoring are sandfencing and sandbagging and concrete seawalls. If there is no armoring file in the project geodatabase, the extractor script will notify you that it is proceeding without armoring.*__Requires manipulation in GIS__* ###Code arcpy.CreateFeatureclass_management(home, os.path.basename(armorLines), 'POLYLINE', spatial_reference=utmSR) print("{} created. Now manually digitize the shorefront armoring.".format(armorLines)) ###Output _____no_output_____ ###Markdown InletsWe also need to manually digitize inlets if an inlet delineation does not already exist. To do, the code below will produce the feature class. After which, use the Editing toolbar to create a line where the oceanside shore meets a tidal inlet. If the study area includes both sides of an inlet, that inlet will be represented by two lines. The inlet lines are used to define the bounds of the oceanside shore, which is also considered the point where the oceanside shore meets the bayside. Inlet lines must intersect the MHW contour. What do we do when the study area and not an inlet is the end?*__Requires manipulation in GIS__* ###Code # manually create lines that correspond to end of land and cross the MHW line (use bndpoly/DEM) arcpy.CreateFeatureclass_management(home, os.path.basename(inletLines), 'POLYLINE', spatial_reference=utmSR) print("{} created. Now we'll stop for you to manually create lines at each inlet.".format(inletLines)) ###Output _____no_output_____ ###Markdown ShorelineThe shoreline is produced through a combination of the DEM and the shoreline points. The first step converts the DEM to both MTL and MHW contour polygons. Those polygons are combined to produce the full shoreline, which is considered to fall at MHW on the oceanside and MTL on the bayside (to include partially submerged wetland).If the study area does not end cleanly at an inlet, create a separate polyline feature class (default name is 'SA_bounds') and add lines that bisect the shoreline; they should look and function like inlet lines. Specify this in the arguments for DEMtoFullShorelinePoly() and CreateShoreBetweenInlets().At some small inlets, channel depth may be above MTL. 
In this case, the script left to its own devices will leave the MTL contour between the two inlet lines. This can be rectified after processing by deleting the mid-inlet features from the temp file 'shore_2split.' ###Code # manually create lines that correspond to end of land and cross the MHW line (use bndpoly/DEM) if len(SA_bounds) and not arcpy.Exists(SA_bounds): arcpy.CreateFeatureclass_management(home, 'SA_bounds', 'POLYLINE', spatial_reference=utmSR) print("{} created. \nNow we'll stop for you to manually create lines at each inlet.".format(SA_bounds)) bndpoly = fwa.DEMtoFullShorelinePoly(elevGrid_5m, sitevals['MTL'], sitevals['MHW'], inletLines, ShorelinePts) print('Select features from {} that should not be included in the final shoreline polygon. '.format(bndpoly)) ###Output Creating the MTL contour polgon from the DEM... Creating the MHW contour polgon from the DEM... Combining the two polygons... Isolating the above-MTL portion of the polygon to the bayside... User input required! Select extra features in bndpoly for deletion. Recommended technique: select the polygon/s to keep and then Switch Selection. Select features from bndpoly that should not be included in the final shoreline polygon. ###Markdown *__Requires display in GIS__*User input is required to identify only the areas within the study area and eliminate isolated landmasses that are not. Once the features to delete are selected, either delete in the GIS or run the code below. Make sure the bndpoly variable matches the layer name in the GIS.__Do not...__ select the features in ArcGIS and then run DeleteFeatures in this Notebook Python kernel. That will delete the entire feature class. ```arcpy.DeleteFeatures_management(bndpoly)```The next step snaps the boundary polygon to the shoreline points anywhere they don't already match and as long as as they are within 25 m of each other. ###Code barrierBoundary = fwa.NewBNDpoly(bndpoly, ShorelinePts, barrierBoundary, '25 METERS', '50 METERS') ###Output Created: \\Mac\stor\Projects\TransectExtraction\Rockaway2014\Rockaway2014.gdb\bndpoly_2sl ###Markdown ShoreBetweenInlets ###Code # This step could be moved out of pre-processing and into the extractor because it doesn't require user input. shoreline = fwa.CreateShoreBetweenInlets(barrierBoundary, inletLines, shoreline, ShorelinePts, proj_code, SA_bounds) ###Output The projection of \\Mac\stor\Projects\TransectExtraction\Rockaway2014\Rockaway2014.gdb\bndpoly_2sl was changed. The new file is \\Mac\stor\Projects\TransectExtraction\Rockaway2014\Rockaway2014.gdb\bndpoly_2sl_utm. Splitting \\Mac\stor\Projects\TransectExtraction\Rockaway2014\Rockaway2014.gdb\bndpoly_2sl_utm at inlets... Preserving only those line segments that intersect shoreline points... Dissolving the line to create \\Mac\stor\Projects\TransectExtraction\Rockaway2014\Rockaway2014.gdb\ShoreBetweenInlets... ###Markdown Transects - extend, sort, and tidyCreate extendedTrans, which are NASC transects for the study area extended to cover the island, with gaps filled, and sorted in the field sort_ID. 1. Extend the transects and use a copy of the lines to fill alongshore gaps ###Code # Initialize temp file names trans_extended = os.path.join(arcpy.env.scratchGDB, 'trans_extended') trans_presort = trans_extended+'_presort' # Delete transects over 200 m outside of the study area. if input("Need to remove extra transects? 'y' if barrierBoundary should be used to select. 
") == 'y': fwa.RemoveTransectsOutsideBounds(orig_trans, barrierBoundary) # Extend transects and create blank duplicate to use to fill gaps fwa.ExtendLine(orig_trans, trans_extended, extendlength, proj_code) fwa.CopyAndWipeFC(trans_extended, trans_presort, ['sort_ID']) print("MANUALLY: use groups of existing transects in new FC '{}' to fill gaps.".format(trans_presort)) ###Output Need to remove extra transects? 'y' if barrierBoundary should be used to select. n ###Markdown *__Requires manipulation in GIS__*1. Edit the trans_presort_temp feature class. __Move and rotate__ groups of transects to fill in gaps that are greater than 50 m alongshore. There is no need to preserve the original transects, but avoid overlapping the transects with each other and with the originals. Do not move any transects slightly. If they are moved, they will not be deleted in the next stage. If you slightly move any, you can either undo or delete that line entirely. ###Code fwa.RemoveDuplicates(trans_presort, trans_extended, barrierBoundary) trans_presort2 = trans_presort+'_extended' fwa.ExtendLine(trans_presort, trans_presort2, 200, proj_code) ###Output trans_extended_presort is already projected in UTM. Transects extended. ###Markdown 2. Sort the transects along the shoreUsually if the shoreline curves, we need to identify different groups of transects for sorting. This is because the GIS will not correctly establish the alongshore order by simple ordering from the identified sort_corner. If this is the case, answer __yes__ to the next prompt. ###Code sort_lines = fwa.SortTransectPrep(spatialref=utmSR) print('sort_lines: "{}"'.format(sort_lines)) ###Output sort_lines: "LL" ###Markdown *__Requires manipulation in GIS__*The last step generated an empty sort lines feature class if you indicated that transects need to be sorted in batches to preserve the order. Now, the user creates lines that will be used to spatially sort transects in groups. For each group of transects:1. __Create a new line__ in 'sort_lines' that intersects all transects in the group. The transects intersected by the line will be sorted independently before being appended to the preceding groups. (*__add example figure__*)2. __Assign values__ for the fields 'sort,' 'sort_corner,' and 'reverse.' 'sort' indicates the order in which the line should be used and 'sort_corn' indicates the corner from which to perform the spatial sort ('LL', 'UL', etc.). 'reverse' indicates whether the order should be reversed (roughly equivalent to 'DESCENDING').3. Run the following code to create a new sorted transect file. ###Code fwa.SortTransectsFromSortLines(trans_presort, extendedTrans, sort_lines, tID_fld) ###Output Added sort_ID field to trans_extended_presort sort_lines not specified, so we are sorting the transects in one group from the LL corner. Copying the generated OID values to the transect ID field (sort_ID)... ###Markdown 3. Tidy the extended (and sorted) transects to remove overlap*__Requires manipulation in GIS__*Overlapping transects cause problems during conversion to 5-m points and to rasters. We create a separate feature class with the 'tidied' transects, in which the lines don't overlap. This is largely a manually process with the following steps:1. Select transects to be used to split other transects. Prioritize transects that a) were originally from NASC, b) have dune points within 25 m, and c) are oriented perpendicular to shore. (add example figure)2. 
Use the Copy Features geoprocessing tool to copy only the selected transects into a new feature class. If desired, here is the code that could be used to copy the selected features and clear the selection:```pythonarcpy.CopyFeatures_management(extendedTrans, overlapTrans_lines)arcpy.SelectLayerByAttribute_management(extendedTrans, "CLEAR_SELECTION")```3. Run the code below to split the transects at the selected lines of overlap. ###Code overlapTrans_lines = os.path.join(arcpy.env.scratchGDB, 'overlapTrans_lines') if not arcpy.Exists(overlapTrans_lines): overlapTrans_lines = input("Filename of the feature class of only 'boundary' transects: ") arcpy.Intersect_analysis([extendedTrans, overlapTrans_lines], trans_x, 'ALL', output_type="POINT") arcpy.SplitLineAtPoint_management(extendedTrans, trans_x, extTrans_tidy) ###Output _____no_output_____ ###Markdown Delete the extraneous segments manually. Recommended:1. Using Select with Line draw a line to the appropriate side of the boundary transects. This will select the line segments that need to be deleted.1. Delete the selected lines.1. Remove any remaining overlaps entirely by hand. Use the Split Line tool in the Editing toolbar to split lines to be shortened at the points of overlap. Then delete the remnant sections. Join anthro data to transects1. Convert xls spreadsheet to points 2. Select the first points along each transects and create new FC3. Spatial Join the new FC to the updated transects - one to one - keep all target features - keep only the ID fields and the three anthro fields (and the transect fields [LRR, etc.]?) - intersect4. Join the transect values to the pts based on sort_ID ###Code # Input shapefiles shlpts_shp = os.path.join(proj_dir, 'rock14_shlpts.shp') dlpts_shp = os.path.join(proj_dir, 'rock14_dlowpts.shp') dhpts_shp = os.path.join(proj_dir, 'rock14_dhighpts.shp') trans_shp = os.path.join(proj_dir, 'rock_trans.shp') shoreline_shp = os.path.join(proj_dir, 'rock14_shoreline.shp') ###Output _____no_output_____
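###Markdown A short sketch (assuming the `home` workspace and the optional layer names listed in the inputs cell above) of how the optional inputs could be checked up front with `arcpy.Exists`, so a missing armoring or study-area-boundary layer is flagged before processing rather than midway through. ###Code
optional_layers = {
    'armoring lines': os.path.join(home, 'armorLines'),
    'study area bounds': os.path.join(home, 'SA_bounds'),
}
for name, fc in optional_layers.items():
    # arcpy.Exists() returns True only if the feature class is present in the geodatabase.
    if arcpy.Exists(fc):
        print('{} found: {}'.format(name, fc))
    else:
        print('{} not found; extraction will proceed without it.'.format(name))
###Output _____no_output_____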
beginner_source/blitz/02 Autograd Tutorial.ipynb
###Markdown Autograd: Automatic DifferentiationCentral to all neural networks in PyTorch is the ``autograd`` package.Let’s first briefly visit this, and we will then go to training ourfirst neural network.The ``autograd`` package provides automatic differentiation for all operationson Tensors. It is a define-by-run framework, which means that your backprop isdefined by how your code is run, and that every single iteration can bedifferent.Let us see this in more simple terms with some examples. Tensor``torch.Tensor`` is the central class of the package. If you set its attribute``.requires_grad`` as ``True``, it starts to track all operations on it. Whenyou finish your computation you can call ``.backward()`` and have all thegradients computed automatically. The gradient for this tensor will beaccumulated into ``.grad`` attribute.To stop a tensor from tracking history, you can call ``.detach()`` to detachit from the computation history, and to prevent future computation from beingtracked.To prevent tracking history (and using memory), you can also wrap the code blockin ``with torch.no_grad():``. This can be particularly helpful when evaluating amodel because the model may have trainable parameters with``requires_grad=True``, but for which we don't need the gradients.There’s one more class which is very important for autogradimplementation - a ``Function``.``Tensor`` and ``Function`` are interconnected and build up an acyclicgraph, that encodes a complete history of computation. Each tensor hasa ``.grad_fn`` attribute that references a ``Function`` that has createdthe ``Tensor`` (except for Tensors created by the user - their``grad_fn is None``).If you want to compute the derivatives, you can call ``.backward()`` ona ``Tensor``. If ``Tensor`` is a scalar (i.e. it holds a one elementdata), you don’t need to specify any arguments to ``backward()``,however if it has more elements, you need to specify a ``gradient``argument that is a tensor of matching shape. ###Code import torch # Create a tensor and set ``requires_grad=True`` to track computation with it x = torch.ones(2, 2, requires_grad=True) print(x) # Do a tensor operation: y = x + 2 print(y) # ``y`` was created as a result of an operation, so it has a ``grad_fn``. print(y.grad_fn) # Do more operations on ``y`` z = y * y * 3 out = z.mean() print(z, out) # ``.requires_grad_( ... )`` changes an existing Tensor's ``requires_grad`` # flag in-place. The input flag defaults to ``False`` if not given. a = torch.randn(2, 2) a = ((a * 3) / (a - 1)) print(a.requires_grad) a.requires_grad_(True) print(a.requires_grad) b = (a * a).sum() print(b.grad_fn) ###Output False True <SumBackward0 object at 0x7facc3802850> ###Markdown GradientsLet's backprop now.Because ``out`` contains a single scalar, ``out.backward()`` isequivalent to ``out.backward(torch.tensor(1.))``. ###Code out.backward() # Print gradients d(out)/dx # print(x.grad) ###Output tensor([[4.5000, 4.5000], [4.5000, 4.5000]]) ###Markdown You should have got a matrix of ``4.5``. Let’s call the ``out``Tensor “$o$”.We have that $o = \frac{1}{4}\sum_i z_i$,$z_i = 3(x_i+2)^2$ and $z_i\bigr\rvert_{x_i=1} = 27$.Therefore,$\frac{\partial o}{\partial x_i} = \frac{3}{2}(x_i+2)$, hence$\frac{\partial o}{\partial x_i}\bigr\rvert_{x_i=1} = \frac{9}{2} = 4.5$. 
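###Markdown As a quick check of the derivation above, the analytic gradient $\frac{3}{2}(x_i+2)$ can be compared with what autograd stored in ``x.grad`` (a sketch that reuses the ``x`` defined earlier, after ``out.backward()`` has been called). ###Code
with torch.no_grad():
    analytic = 1.5 * (x + 2)                 # d(out)/dx_i = 3/2 * (x_i + 2); at x_i = 1 this is 4.5
    print(torch.allclose(x.grad, analytic))  # expected: True
###Output _____no_output_____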
Mathematically, if you have a vector valued function :math:`\vec{y}=f(\vec{x})`,then the gradient of $\vec{y}$ with respect to $\vec{x}$is a Jacobian matrix:\begin{equation} J=\left(\begin{array}{ccc} \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}}\\ \vdots & \ddots & \vdots\\ \frac{\partial y_{m}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}} \end{array}\right)\end{equation}Generally speaking, ``torch.autograd`` is an engine for computingvector-Jacobian product. That is, given any vector$v=\left(\begin{array}{cccc} v_{1} & v_{2} & \cdots & v_{m}\end{array}\right)^{T}$,compute the product $v^{T}\cdot J$. If $v$ happens to bethe gradient of a scalar function $l=g\left(\vec{y}\right)$,that is,$v=\left(\begin{array}{ccc}\frac{\partial l}{\partial y_{1}} & \cdots & \frac{\partial l}{\partial y_{m}}\end{array}\right)^{T}$,then by the chain rule, the vector-Jacobian product would be thegradient of $l$ with respect to $\vec{x}$:\begin{equation} J^{T}\cdot v=\left(\begin{array}{ccc} \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{1}}\\ \vdots & \ddots & \vdots\\ \frac{\partial y_{1}}{\partial x_{n}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}} \end{array}\right)\left(\begin{array}{c} \frac{\partial l}{\partial y_{1}}\\ \vdots\\ \frac{\partial l}{\partial y_{m}} \end{array}\right)=\left(\begin{array}{c} \frac{\partial l}{\partial x_{1}}\\ \vdots\\ \frac{\partial l}{\partial x_{n}} \end{array}\right)\end{equation}(Note that $v^{T}\cdot J$ gives a row vector which can betreated as a column vector by taking $J^{T}\cdot v$.)This characteristic of vector-Jacobian product makes it veryconvenient to feed external gradients into a model that hasnon-scalar output. ###Code # Now let's take a look at an example of vector-Jacobian product: x = torch.randn(3, requires_grad=True) y = x * 2 while y.data.norm() < 1000: y = y * 2 print(y) ###Output tensor([-1011.4211, 724.2524, 307.6283], grad_fn=<MulBackward0>) ###Markdown Now in this case ``y`` is no longer a scalar. ``torch.autograd``could not compute the full Jacobian directly, but if we justwant the vector-Jacobian product, simply pass the vector to``backward`` as argument: ###Code v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float) y.backward(v) print(x.grad) ###Output tensor([1.0240e+02, 1.0240e+03, 1.0240e-01]) ###Markdown You can also stop autograd from tracking history on Tensorswith ``.requires_grad=True`` either by wrapping the code block in``with torch.no_grad():`` ###Code print(x.requires_grad) print((x ** 2).requires_grad) with torch.no_grad(): print((x ** 2).requires_grad) # Or by using ``.detach()`` to get a new Tensor with the same # content but that does not require gradients: print(x.requires_grad) y = x.detach() print(y.requires_grad) print(x.eq(y).all()) ###Output True False tensor(True)
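###Markdown Returning to the vector-Jacobian example above: because ``y`` is just ``x`` doubled ``k`` times, the Jacobian is diagonal with entries $2^k$, so $J^{T}\cdot v$ should equal $2^k v$. A small sketch that re-creates that cell with a counter to confirm it. ###Code
x = torch.randn(3, requires_grad=True)
y = x * 2
k = 1
while y.data.norm() < 1000:
    y = y * 2
    k += 1                                   # y = x * 2**k after the loop
v = torch.tensor([0.1, 1.0, 0.0001])
y.backward(v)
print(torch.allclose(x.grad, (2 ** k) * v))  # expected: True
###Output _____no_output_____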
Flood_Freq/HW1_StudentVersion_FloodFreqAnalyses.ipynb
###Markdown GE 254 - Intro to Geomorph HW 1 - Using Python to calculate Flood Frequency Analysis Due: Thursday Sept. 24th by 11 amFor this assignment, we will be taking the work we did in excel on flood frequency and instead creating a python script to do the same thing. You will be given a partially done code. You need to finish the code. Answer any questions shown as comments in the code. And most importantly, comment the code extensively. The end result should be a useful python script that takes any USGS flow data and calculates predictive flood frequency analysis. This exercise will provide some experience with methods used for predicting flood frequency and magnitude. We will be using the US Geological Survey (USGS) website to retrieve historical stream gauge data of the sort used to predict the likelihood of flood events of particular magnitudes during a given time interval. Such predictions are the basis for numerous engineering, restoration and development projects in and around rivers. Objectives: 1. Practice using Python Pandas DataFrames to import, manipulate, and visualize data 2. Learn some powerful tools for subsetting your dataframes 3. Practice visualizing data * Dr A. C. Ortiz, Sept. 2020* Homework assignment adapted from T. Perron Lab on flood frequency* Used in a 200 level undergraduate geomorphology course, after doing the same work in lab using excel (or google sheets) to calculate flood frequency and rating curves. * This homework assumes a basic introduction to python, pandas, and dataframes. ###Code #import libraries #let's use pandas dataframes again import numpy as np import pandas as pd import matplotlib.pyplot as plt #so I've imported 3 libraries using common #abbreviations to reference these libraries in my code. This is personal choice. #If I just wrote "import numpy", it would still work but when I call a numpy command #I would have to write numpy.matrix() vs. np.matrix() - it's all about laziness #and minimizing typing. ###Output _____no_output_____ ###Markdown First make sure you've downloaded the correct data. I want you to download peak flow and daily flow measurements for 1 of these four sites (you will be assigned in class): * 02086849 * 02081000* 02081747 * 02083000 I also want you to download daily data for your site, on the waterdata.usgs.gov/nwis/sw site click on daily data instead of peak-flow data. Remember to download all available data for discharge and gage height (if available). Remember to select tab-separated file then download that file as a .txt, by right clicking on the page and selecting "save as". Make sure you upload these files into JupyterHub. Let's start by reading in the peak-flow measurements - this is similar to what we did in Lab 2 ###Code #ok let's break down this command - can you tell me what header does? #what is the delimiter mean? # why did I write skiprows? #what is usecols do? #for help with this use google - or go to the help menu above and select pandas #don't forget to add your file name to ADD_FILE_NAME_HERE.txt part of the '' in the line below peak_all = pd.read_csv('ADD_FILE_NAME_HERE.txt',header=72,delimiter="\t",\ skiprows=[73],usecols=range(0,8),parse_dates=True) peak_all.head() #if for some reason when peak_all is displayed below and you see weird values #in the first couple of rows, you need to adjust the values given to header #and skiprows. 
#now let's rename these columns to something a bit more useful new_column_names = ['Agency', 'SiteNo', 'Date', 'Time', \ 'Discharge (cfs)', 'Discharge_quality','Gage_ht (ft)',\ 'Gage_quality'] #ok now what does the "\" do in the above line of code? peak_all.columns = new_column_names peak_all.head() #now for some data cleanup peak_all['Discharge (m3/sec)'] = peak_all['Discharge (cfs)'] * 0.028316847 peak_all['Gage_ht (m)'] = peak_all['Gage_ht (ft)'] * 3.28084 new_station_name = "0" + str(peak_all['SiteNo'].unique()[0]) peak_all['SiteNo'] = new_station_name #ok what happens in this cell? peak_all.head() #the fun date-time work peak_all['Date'] = pd.to_datetime(peak_all.Date) peak_all['Year'] = peak_all['Date'].dt.year peak_all.head() #explain what the above lines of code do please #ok now that we've done some data management, let's pull out only the data we need peak = peak_all[['Year','Discharge (m3/sec)','Gage_ht (m)']] #now let's remove NaN measurements peak = peak.dropna() print(peak.head()) peak = peak.sort_values('Discharge (m3/sec)',ascending=False) #what does this do? peak.head() peak['Rank'] = range(1,peak.shape[0]+1) #what is the range command? n = peak.shape[0] #what value(s) does shape give us? why do I index at first position (0)? print('The total number of observations:', n) print(peak.head()) ###Output _____no_output_____ ###Markdown helpful description of indexing dataframes in python & pandas*https://www.shanelynn.ie/select-pandas-dataframe-rows-and-columns-using-iloc-loc-and-ix/* ###Code #ok now create a new column in peak dataframe that is the Recurrence Interval (RI = (1+n)/rank) ###Output _____no_output_____ ###Markdown The usual procedure is to assume that the frequency distribution of floods in our record conforms to a known distribution, and plot the observed floods in such a way that they will fall along a straight line if the data conform to the assumed distribution. Distributions in common use include the Gumbel, Weibull, and Pearson Type III distributions, all of which are designed to characterize extreme value phenomena. To keep our analysis simple, we will assume that flood frequency is lognormally distributed. This implies that there should be a semilogarithmic relationship between flood magnitude and RI. ###Code #ok now to plot data peak.plot(x='Recurrence Interval (years)',y='Discharge (m3/sec)',\ title='Flood Frequency of Station ' + peak_all['SiteNo'][0], \ kind='scatter',logx=True) #calculate the trendline to our floods x = peak['Recurrence Interval (years)'] y = peak['Discharge (m3/sec)'] f = np.polyfit(np.log10(x),y,1,w=np.sqrt(y)) print(f) xf = [min(x),max(x)] yf = f[0]*np.log10(xf) + f[1] print(xf,yf) #now plot the data and the trendline #hint to add the trendline - use plt.plot(x,y) where you need to figure out what x and y should be! #What is the 100 year flood discharge? dis100 = print('the 100 year discharge is:', dis100, 'm3/s') #now lets look at the rating curve - aka stage vs. discharge #can you fit a line to this plot? For predicting discharge for a given stage? #Hint use polyfit again #now plot the linear trendline on the rating curve #what is the predicted stage for the 100 year flood? #stg100 = print('the 100 year flood stage is:', stg100, 'm') ###Output _____no_output_____ ###Markdown 1. Ok now that we've recreated flood recurrence interval analysis, what would you have to change to run this code on a different USGS station? 2. What parameters might be different? What would you expect to be the same? 3. 
How easy would it be to change these things? visualizing and analyzing daily flow measurements. ###Code #alright nice job - now let's look at daily measurements. Make sure to fill in the correct file name in the '' #make sure you've uploaded the .txt file to jupyter hub daily_all = pd.read_csv('',header=32,delimiter="\t",\ skiprows=[33],usecols=[0,1,2,3,9],parse_dates=True) daily_all.head() #again - note the header and skip-row number, these might need to change #for your file... new_column_names = ['Agency', 'SiteNo', 'OldDateTime', \ 'Discharge (cfs)', 'Gage_ht (ft)'] daily_all.columns = new_column_names daily_all.head() #now for some data cleanup #first change all units for gage height & discharge to mks #then add in the corrected station number ###Output _____no_output_____ ###Markdown Questions about the above code:1. Why do I keep printing out/displaying daily_all or peak or peak_all? 2. What is the use or reason for this in the code? 3. What are other methods you can use to check your code validity? ###Code #now calculate the datetime objects #now add in separate columns for day, month, and year - year is shown below, do the other two daily_all['Year'] = daily_all['DateTime'].dt.year print(daily_all.head()) #ok how can you now find the average flow for YOUR birthday? change the code as needed avgdis_bday = daily_all['Discharge (m3/sec)'][((daily_all.Month==7)&(daily_all.Day==7))].mean() print('The average discharge for 7/7 is: ', avgdis_bday, 'm3/sec') #go ahead and redo this to find the minimum, maximum, and mean flow for YOUR birthday #now go ahead and plot the daily average flow (aka average per day) #this is how we use the very useful function group by avg_daily = daily_all.groupby(['Month','Day'],as_index=False).mean() #ok how does the above line of code work? Look up groupby and explain what is done. avg_daily.Year = 2000 #random year chosen - must be a leap year for datetime to work! avg_daily['Date'] = pd.to_datetime(avg_daily[['Year','Month','Day']]) #now go ahead and plot your avg daily value avg_daily.plot(x='Date',y='Discharge (m3/sec)',\ title='Averaged Daily Flow of Station ' + peak_all['SiteNo'][0], \ kind='scatter') #now can you plot a monthly mean? Let's use the groupby again avg_m = daily_all.groupby(['Month'],as_index=False).mean() avg_m['Day'] = 15 #add in a fake day avg_m['Year'] = 2000 #add in a fake year that fits the data - make sure it matches above avg_m['Date'] = pd.to_datetime(avg_m[['Year','Month','Day']]) print(avg_m) plt.plot(avg_m.Date,avg_m['Discharge (m3/sec)'],'--m',linewidth=4) #does the monthly average match the daily averages values - are the trends holding? print(daily_all.head()) ###Output _____no_output_____ ###Markdown Resampling DataNow let's calculate the per month average flow over our timeseries (aka resample our data)look at this or some helpful information *https://sergilehkyi.com/tips-on-working-with-datetime-index-in-pandas/* ###Code daily_all2 = daily_all.set_index('DateTime') print(daily_all2.head()) ma = daily_all2.resample('M').mean() #look up what this command does - this can be very powerful print(ma.head()) print(daily_all2.head()) #explain the two new dataframes created (ma and daily_all2). What information is in these? #why did I create these? daily_all.plot(x='DateTime', y='Discharge (m3/sec)',kind='scatter') plt.plot(ma.index,ma['Discharge (m3/sec)'],"--k",linewidth=3) ###Output _____no_output_____
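###Markdown One possible way (a sketch, not the only valid answer) to fill in the ``dis100`` cell above: with the semilog fit ``f = np.polyfit(np.log10(RI), Q, 1)``, the predicted discharge for any recurrence interval is ``f[0]*log10(RI) + f[1]``; ``stg100`` would follow the same pattern using the rating-curve fit. ###Code
def discharge_for_RI(fit, ri_years):
    # Evaluate the semilog trendline: discharge = slope * log10(RI) + intercept.
    return fit[0] * np.log10(ri_years) + fit[1]

dis100 = discharge_for_RI(f, 100)
print('the 100 year discharge is:', dis100, 'm3/s')
###Output _____no_output_____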
Main-Copy1.ipynb
###Markdown Facial Recognition Applied To SecurityBala, Charu, PrabhatFirst part project report - linkThis code is available at [https://github.com/baladutt/face]. How To Run?Main.ipynb is the jupyter notebook to run the project. Before running that,* LFW dataset needs to be present in data directory. For example: data/lfw/Aaron_Eckhart/* Download http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz into models/cnn Referenced Links* https://research.fb.com/publications/deepface-closing-the-gap-to-human-level-performance-in-face-verification/ * https://pypi.org/project/deepface/* https://medium.com/@williamkoehrsen/facial-recognition-using-googles-convolutional-neural-network-5aa752b4240e * https://towardsdatascience.com/capsule-networks-the-new-deep-learning-network-bd917e6818e8 Read Image data into memory ###Code import time import getpass import warnings warnings.filterwarnings('ignore') username = getpass.getuser() start_time = time.time() #Where is the data? if username == "bdutt": baseDir="/Users/bdutt/Documents/workspace-whitespace/face/face/data/lfw" elif username =="cgarg": baseDir="/Users/cgarg/Documents/charu/intuit/code/face/data/lfw" flow = "main" debug = False %run "Load Data.ipynb" filenames = [] labels = [] def collectFileNamesAndLabels(filename): filenames.append(filename) result = filename.split("/") labels.append(result[len(result)-2]) #Get the label name from filename doForEachFile(collectFileNamesAndLabels,baseDir, 100) print("Total File name collection time = %s seconds" % (time.time() - start_time)) print ("Files collected : ",len(filenames)) start_time = time.time() imagesList = [] doForEachFileNames(readImage, filenames, imagesList) print ("Images data collected : ",len(imagesList)) print("Total data collection time = %s seconds" % (time.time() - start_time)) #%run "TransferLearningWithCNN.ipynb" ###Output _____no_output_____ ###Markdown Generic Multi threading codeUsed later ###Code import threading import sys import traceback import time class myThread (threading.Thread): def __init__(self, threadName, index, worker): threading.Thread.__init__(self) self.threadName = threadName self.index = index self.worker = worker def run(self): print('^',self.threadName, ', ', end='') try: self.worker(self.threadName, self.index) except: e = sys.exc_info()[0] print("Exception in thread: ",self.threadName,", ", e) traceback.print_exc() print('V',self.threadName, end='') def runThreads(nThreads, worker): threads = [] for i in range(nThreads): try: t = myThread("Thread-"+str(i), i , worker) threads.append(t) t.start() except: e = sys.exc_info()[0] print("Error: unable to start thread: ", e) traceback.print_exc() time.sleep(1) for t in threads: t.join() start_time = time.time() import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg def rgb2gray(rgb): return np.dot(rgb[...,:3], [0.2989, 0.5870, 0.1140]) Y = pd.DataFrame(labels, columns = ['name']) sizeOfYLabel = len(Y) oneDImageList = [] showOneImage = False imageListLock = threading.Lock() def flattenImage(image, imageList): # Convert RGB to single number global showOneImage if showOneImage: plt.imshow(image, cmap="Greys") plt.show() #gray = rgb2gray(image) gray = image if showOneImage: plt.imshow(gray, cmap="Greys") plt.show() showOneImage = False imageList.append(gray.flatten()) nThreads = 10 splitImagesList = np.array_split(imagesList, nThreads) splitLabelsList = np.array_split(labels, nThreads) labels = [] def imageFlatteningWorker(threadName, index): global 
splitImagesList global imageListLock imageList = [] for image in splitImagesList[index]: flattenImage(image, imageList) imageListLock.acquire() global oneDImageList oneDImageList.extend(imageList) global labels labels.extend(splitLabelsList[index]) imageListLock.release() runThreads(nThreads, imageFlatteningWorker) del imagesList X = pd.DataFrame(oneDImageList) del oneDImageList print("X shape", X.shape) print("label count", len(labels)) print("Total dataframe creation time = %s seconds" % (time.time() - start_time)) #Y['name'].value_counts() start_time = time.time() maxFrequency = 21 # We upsample/downsample to have equal quantity of all classes #X_sample = pd.DataFrame(columns = X.columns) #Y_sample = pd.DataFrame(columns = Y.columns) X_sampleList = [] Y_sampleList = [] XY_sampleLock = threading.Lock() def equalSampleLabel(label): global X_sampleList global Y_sampleList global XY_sampleLock dfa = X[Y['name']== label] dfasample = dfa.sample(n=maxFrequency,replace=True) dfb = Y[Y['name']== label] dfbsample = dfb.sample(n=maxFrequency,replace=True) print('.', end='') XY_sampleLock.acquire() print('-', end='') X_sampleList.append(dfasample) Y_sampleList.append(dfbsample) XY_sampleLock.release() print('+', end='') labels = Y['name'].unique() nThreads = 10 labelsArray = np.array_split(labels, nThreads) def worker(threadName, index): for label in labelsArray[index]: #print(threadName, label) equalSampleLabel(label) runThreads(nThreads, worker) X_sample = pd.concat(X_sampleList) Y_sample = pd.concat(Y_sampleList) print(X_sample.shape) print(Y_sample.shape) X = X_sample Y = Y_sample # print(X_sample) print("Total equal sampling time = %s seconds" % (time.time() - start_time)) start_time = time.time() sizeOfX = X_sample.shape[1] #%run "DNN.ipynb" print("Total DNN time = %s seconds" % (time.time() - start_time)) print(X_sample.shape) # print(Y) # enc.get_feature_names(['name']) # print(enc.get_feature_names(['name']).shape) # import numpy as np # import tensorflow as tf # Ypred = pd.DataFrame( columns = ['sampleId', "classAsIndex", "classAsString", "actualClass"]) # for i in range(300): # prediction = model.predict(X_sample.iloc[i:i+1,:],verbose=0) # class_labels = np.argmax(prediction[0], axis=0) # Ypred.at[i, 'sampleId'] = i # Ypred.at[i, 'classAsIndex'] = class_labels # Ypred.at[i, 'classAsString'] = enc.get_feature_names(['name'])[class_labels] # Ypred.at[i, 'actualClass'] = Y_sample.iloc[i].values # #print(class_labels, enc.get_feature_names(['name'])[class_labels]," Names of class :: ", Y_sample.iloc[i].values) # from IPython.display import display, HTML # display(HTML(Ypred.to_html())) class_mapping = { i : labels[i] for i in range(0, len(labels) ) } print(class_mapping) inv_class_mapping = {v: k for k, v in class_mapping.items()} print(inv_class_mapping) # print(type(inv_class_mapping)) # print(inv_class_mapping['0']) ###Output {0: 'German_Khan', 1: 'Stefano_Gabbana', 2: 'Dragan_Covic', 3: 'Jeff_Hornacek', 4: 'Sureyya_Ayhan', 5: 'Deb_Santos', 6: 'Bob_Newhart', 7: 'Wang_Hailan', 8: 'Paul_McNulty', 9: 'Jimmy_Iovine', 10: 'Claudia_Pechstein', 11: 'Ranil_Wickremasinghe', 12: 'Ben_Chandler', 13: 'Mark_Komara', 14: 'Rand_Beers', 15: 'Joanne_Woodward', 16: 'John_Bond', 17: 'Reginald_Hudlin', 18: 'Lee_Baca', 19: 'Mary-Kate_Olsen', 20: 'Emily_Stevens', 21: 'Xiang_Huaicheng', 22: 'Phil_Mickelson', 23: 'Gerry_Kelly', 24: 'Salma_Hayek', 25: 'Jim_Edmonds', 26: 'Martina_McBride', 27: 'Anthony_Pico', 28: 'Jose_Theodore', 29: 'Heidi_Fleiss', 30: 'Mark_Richt', 31: 'Mike_Smith', 32: 'Paul_ONeill'} 
{'German_Khan': 0, 'Stefano_Gabbana': 1, 'Dragan_Covic': 2, 'Jeff_Hornacek': 3, 'Sureyya_Ayhan': 4, 'Deb_Santos': 5, 'Bob_Newhart': 6, 'Wang_Hailan': 7, 'Paul_McNulty': 8, 'Jimmy_Iovine': 9, 'Claudia_Pechstein': 10, 'Ranil_Wickremasinghe': 11, 'Ben_Chandler': 12, 'Mark_Komara': 13, 'Rand_Beers': 14, 'Joanne_Woodward': 15, 'John_Bond': 16, 'Reginald_Hudlin': 17, 'Lee_Baca': 18, 'Mary-Kate_Olsen': 19, 'Emily_Stevens': 20, 'Xiang_Huaicheng': 21, 'Phil_Mickelson': 22, 'Gerry_Kelly': 23, 'Salma_Hayek': 24, 'Jim_Edmonds': 25, 'Martina_McBride': 26, 'Anthony_Pico': 27, 'Jose_Theodore': 28, 'Heidi_Fleiss': 29, 'Mark_Richt': 30, 'Mike_Smith': 31, 'Paul_ONeill': 32} ###Markdown Invoke Inception Neural Network with Transfer Learning ###Code from PIL import Image import pandas as pd import tensorflow as tf print(X_sample.shape) sample_dict = {} X_transfer_learning = pd.DataFrame() for ind in X_sample.index: X_sample_array = X_sample.iloc[ind].as_matrix() arr = X_sample_array.reshape(250,250,3) img = tf.keras.preprocessing.image.array_to_img(arr) img.resize(size=(299,299), resample=Image.BICUBIC) array = tf.keras.preprocessing.image.img_to_array(img) sample_dict[0] = array X_transfer_learning = X_transfer_learning.append(sample_dict,ignore_index=True) print(X_transfer_learning.shape) print(X_transfer_learning[0][0].shape) print(type(X_transfer_learning)) # Swap rows and columns of dataframe # X_transfer_learning = X_transfer_learning.transpose() # print(X_transfer_learning.shape) label_y_sample = Y_sample.copy() label_y_sample['name'] = label_y_sample['name'].map(inv_class_mapping) class_images = label_y_sample['name'].value_counts() #print(Y_sample.shape) #print(Y_sample) warnings.filterwarnings('ignore') start_time = time.time() sizeOfX = X_sample.shape[1] print("Shape of Input to Transfer learning: X: ", X_transfer_learning.shape, ", Y: ", Y_sample.shape) %run "TransferLearning-New.ipynb" print("Total transfer learning with CNN time = %s seconds" % (time.time() - start_time)) import matplotlib.pyplot as plt %matplotlib inline # Function to plot an array of RGB values def plot_color_image(image): plt.figure(figsize=(4,4)) print(image.shape) plt.imshow(image.astype(np.uint8), interpolation='nearest') plt.axis('off') plt.show() ex_index =0 plot_color_image(X_batch[ex_index]) plt.title('Original Image of {}'.format(class_mapping[y_batch[ex_index]])) # from scipy.misc import imresize from PIL import Image # Function takes in an image array and returns the resized and normalized array def prepare_image(image, target_height=299, target_width=299): if(type(image) == np.ndarray): plot_color_image(image[0]) image = image[0].reshape(250,250,3) plot_color_image(image) else: image = image.iloc[0].reshape(250,250,3) image = image.astype(np.uint8) image = np.array(Image.fromarray(obj=image, mode='RGB').resize(size=(target_height,target_width), resample=Image.BICUBIC)) plot_color_image(image) return image.astype(np.float32) / 255 # Function takes in an array of images and labels and processes the images to create # a batch of a given size def create_batch(X, y, start_index=0, batch_size=4): # print("create batch : start_index:: ", start_index, "batch size:: ", batch_size) stop_index = start_index + batch_size prepared_images = [] labels = [] for index in range(start_index, stop_index): if(type(X) == np.ndarray): preparedImage = prepare_image(X[index]).reshape(299,299, 3) else: preparedImage = prepare_image(X.iloc[index]).reshape(299,299, 3) dim = np.zeros((299,299)) #preparedImage = 
np.stack((preparedImage,preparedImage, preparedImage), axis=-1) prepared_images.append(preparedImage) if(type(y) == np.ndarray): labels.append(int(y[index])) else: labels.append(inv_class_mapping[y.iloc[index][0]]) # Combine the images into a single array by joining along the 0th axis X_batch = np.stack(prepared_images) y_batch = np.array(labels, dtype=np.int32) return X_batch, y_batch X_batch, y_batch = create_batch(X_train, y_train, 0, 1) ###Output (250, 250, 3)
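###Markdown The hand-rolled ``myThread``/``runThreads`` pattern above could also be expressed with the standard library's thread pool; a sketch with the same ``worker(threadName, index)`` contract is shown here (the ``run_workers`` name is just illustrative). One difference: exceptions raised inside a worker are re-raised by ``fut.result()`` instead of only being printed. ###Code
from concurrent.futures import ThreadPoolExecutor

def run_workers(n_threads, worker):
    # Submit one task per index and wait for all of them, like runThreads(nThreads, worker).
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        futures = [pool.submit(worker, "Thread-" + str(i), i) for i in range(n_threads)]
        for fut in futures:
            fut.result()  # blocks until done; re-raises any exception from the worker

# run_workers(10, imageFlatteningWorker)  # equivalent to runThreads(nThreads, imageFlatteningWorker)
###Output _____no_output_____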
Mundo01/Desafio024.ipynb
###Markdown **Desafio 024 (Challenge 024)****Python 3 - World 01 (1º Mundo)**Description: Write a program that reads the name of a city and says whether or not it starts with the name "SANTO".Link: https://www.youtube.com/watch?v=QroT8cZMRnc&t=16s ###Code cidade = str(input('Qual o nome da cidade: ')).strip() formatando = cidade.capitalize() split = formatando.split() print('A cidade começa com Santo? ', 'Santo' in split[0]) ###Output _____no_output_____
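###Markdown An alternative sketch that checks the prefix directly with ``str.startswith`` after lower-casing, so input typed in upper case, lower case, or with extra spaces is handled the same way. ###Code
cidade = str(input('Qual o nome da cidade: ')).strip()
print('A cidade começa com Santo?', cidade.lower().startswith('santo'))
###Output _____no_output_____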
Algorithms/Tree/Binary Tree.ipynb
###Markdown Binary Tree Node Class ###Code class Node(object): def __init__(self, value=None): self.value = value self.left = None self.right = None def get_value(self): return self.value def set_value(self, value): self.value = value def set_left_child(self, node): self.left = node def get_left_child(self): return self.left def set_right_child(self, node): self.right = node def get_right_child(self): return self.right def has_left_child(self): return self.left is not None def has_right_child(self): return self.right is not None node0 = Node('root') node1 = Node('left') node2 = Node('right') # node0.set_left_child(node1) node0.set_right_child(node2) ###Output _____no_output_____ ###Markdown Inspect node0: only the right child was attached above, so has_left_child() should be False and has_right_child() True. ###Code print('Node 0: ',node0.value) # print('Node 0 Left Child: ',node0.left.value) print('Node 0 Right Child: ',node0.right.value) print('------------------------') print('Node 0 has Left Child: ',node0.has_left_child()) print('Node 0 has Right Child: ',node0.has_right_child()) print('------------------------') ###Output Node 0: root Node 0 Right Child: right ------------------------ Node 0 has Left Child: False Node 0 has Right Child: True ------------------------ ###Markdown Tree Class ###Code class Tree(object): def __init__(self, value): self.root = Node(value) def get_root(self): return self.root tree0 = Tree('ROOT') print('Tree Root: ',tree0.root.value) ###Output Tree Root: ROOT
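###Markdown Using the ``Node``/``Tree`` API defined above, a simple recursive pre-order traversal can be sketched as follows (with ``node0`` it prints ``root`` and then ``right``, since no left child was attached). ###Code
def preorder(node, visit=print):
    # Visit the node itself, then its left subtree, then its right subtree.
    if node is None:
        return
    visit(node.get_value())
    preorder(node.get_left_child(), visit)
    preorder(node.get_right_child(), visit)

preorder(node0)
preorder(tree0.get_root())  # the tree's root has no children, so this prints only 'ROOT'
###Output _____no_output_____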
notebooks/classifier/Conv_EfficientNetB0_predictions_3_layers.ipynb
###Markdown ###Code from google.colab import drive import tensorflow as tf drive.mount("/content/drive") !unzip -q /content/drive/My\ Drive/PDR/Data/original.zip -d /content !rm -r sample_data ###Output Mounted at /content/drive ###Markdown Constants ###Code img_shape = (224, 224, 3) e_net_out_shape = (7, 7, 1280) nr_of_imgs = 49940 nr_of_val_imgs = 3862 batch_size = 64 nr_of_classes = 39 train_path = './original/train' val_path = './original/val' model_path = '/content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5' monitor = 'val_acc' train_imgs_filename, train_labels_filename = 'imgs.npy', 'labels.npy' val_imgs_filename, val_labels_filename = 'val_imgs.npy', 'val_labels.npy' preds_dtype = 'float16' ###Output _____no_output_____ ###Markdown Get predictions from Convolutional part to fast train classifier ###Code from tensorflow.keras.applications import EfficientNetB0 from tensorflow.keras.preprocessing.image import ImageDataGenerator import numpy as np e_net = EfficientNetB0(include_top=False, weights="imagenet", input_shape=img_shape) def make_conv_predictions(): gen = ImageDataGenerator(rotation_range=45, horizontal_flip=True, vertical_flip=True, rescale=1/255) datagen = gen.flow_from_directory(train_path, target_size=img_shape[:2], batch_size=batch_size, class_mode='categorical') val_datagen = gen.flow_from_directory(val_path, target_size=img_shape[:2], batch_size=batch_size, class_mode='categorical') imgs = np.lib.format.open_memmap(train_imgs_filename, dtype=preds_dtype, mode='w+', shape=((nr_of_imgs,) + e_net_out_shape)) labels = np.lib.format.open_memmap(train_labels_filename, dtype='uint8', mode='w+', shape=(nr_of_imgs, nr_of_classes)) val_imgs = np.lib.format.open_memmap(val_imgs_filename, dtype=preds_dtype, mode='w+', shape=((nr_of_val_imgs,) + e_net_out_shape)) val_labels = np.lib.format.open_memmap(val_labels_filename, dtype='uint8', mode='w+', shape=(nr_of_val_imgs, nr_of_classes)) for i, (imgs_batch, labels_batch) in enumerate(datagen): count = i * batch_size line = ' ' if not i % 20 and i != 0: line = '\n' print(f'%5d{line}' %(count), end='') if count > nr_of_imgs: break predictions = e_net.predict(imgs_batch) imgs[count : count + batch_size] = predictions labels[count : count + batch_size] = labels_batch print() for i, (imgs_batch, labels_batch) in enumerate(val_datagen): count = i * batch_size line = ' ' if not i % 20 and i != 0: line = '\n' print(f'%5d{line}' %(count), end='') if count > nr_of_val_imgs: break predictions = e_net.predict(imgs_batch) val_imgs[count : count + batch_size] = predictions val_labels[count : count + batch_size] = labels_batch print() make_conv_predictions() from tensorflow.keras.models import Sequential, load_model from tensorflow.keras.layers import Dense, Flatten, InputLayer, BatchNormalization, Dropout import tensorflow.keras.callbacks as clb from tensorflow.keras.optimizers import Adam from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.constraints import max_norm import numpy as np import random norm = max_norm(4) def build_model(): model = Sequential() model.add(InputLayer(input_shape=e_net_out_shape)) model.add(Flatten()) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(867, kernel_constraint=norm, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(453, kernel_constraint=norm, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(39, kernel_constraint=norm, activation='softmax')) 
model.compile(optimizer=Adam(learning_rate=0.01), loss='categorical_crossentropy', metrics=['acc']) return model def np_array_memmap_gen(feat_path, label_path, batch_size=128, shuffle_array=True): while 1: x = np.load(feat_path, mmap_mode='r') y = np.load(label_path, mmap_mode='r') lst = [i for i in range(x.shape[0])] if shuffle_array: random.shuffle(lst) iters = len(lst) // batch_size + 1 for i in range(iters): start = i * batch_size end = (i + 1) * batch_size yield (x[lst[start : end]], y[lst[start : end]]) callbacks = [ clb.ReduceLROnPlateau(monitor=monitor, factor=0.1, min_lr=1e-7, patience=3, verbose=1), clb.EarlyStopping(monitor=monitor, patience=7, verbose=1), clb.ModelCheckpoint(monitor=monitor, filepath=model_path, save_best_only=True, verbose=1) ] train_gen = np_array_memmap_gen(train_imgs_filename, train_labels_filename, batch_size=batch_size) val_gen = np_array_memmap_gen(val_imgs_filename, val_labels_filename, batch_size=batch_size) train_steps = nr_of_imgs // batch_size + 1 val_steps = nr_of_val_imgs // batch_size + 1 # model = build_model() model = load_model(model_path) model.compile(optimizer=Adam(learning_rate=0.001), loss='categorical_crossentropy', metrics=['acc']) history = model.fit(train_gen, epochs=100, validation_data=val_gen, callbacks=callbacks, verbose=1, steps_per_epoch=train_steps, validation_steps=val_steps) ###Output Epoch 1/100 781/781 [==============================] - 195s 246ms/step - loss: 1.0401 - acc: 0.6784 - val_loss: 1.1338 - val_acc: 0.6458 Epoch 00001: val_acc improved from -inf to 0.64578, saving model to /content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5 Epoch 2/100 781/781 [==============================] - 148s 189ms/step - loss: 1.0979 - acc: 0.6613 - val_loss: 1.1733 - val_acc: 0.6339 Epoch 00002: val_acc did not improve from 0.64578 Epoch 3/100 781/781 [==============================] - 29s 37ms/step - loss: 1.1182 - acc: 0.6552 - val_loss: 1.3354 - val_acc: 0.5896 Epoch 00003: val_acc did not improve from 0.64578 Epoch 4/100 781/781 [==============================] - 24s 31ms/step - loss: 1.1216 - acc: 0.6501 - val_loss: 1.5216 - val_acc: 0.5518 Epoch 00004: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513. 
Epoch 00004: val_acc did not improve from 0.64578 Epoch 5/100 781/781 [==============================] - 24s 31ms/step - loss: 0.9452 - acc: 0.7062 - val_loss: 0.6617 - val_acc: 0.8076 Epoch 00005: val_acc improved from 0.64578 to 0.80761, saving model to /content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5 Epoch 6/100 781/781 [==============================] - 32s 40ms/step - loss: 0.8567 - acc: 0.7341 - val_loss: 0.6338 - val_acc: 0.8131 Epoch 00006: val_acc improved from 0.80761 to 0.81305, saving model to /content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5 Epoch 7/100 781/781 [==============================] - 27s 35ms/step - loss: 0.8154 - acc: 0.7466 - val_loss: 0.6098 - val_acc: 0.8159 Epoch 00007: val_acc improved from 0.81305 to 0.81590, saving model to /content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5 Epoch 8/100 781/781 [==============================] - 27s 35ms/step - loss: 0.7939 - acc: 0.7512 - val_loss: 0.5930 - val_acc: 0.8206 Epoch 00008: val_acc improved from 0.81590 to 0.82056, saving model to /content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5 Epoch 9/100 781/781 [==============================] - 28s 35ms/step - loss: 0.7927 - acc: 0.7537 - val_loss: 0.5820 - val_acc: 0.8213 Epoch 00009: val_acc improved from 0.82056 to 0.82134, saving model to /content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5 Epoch 10/100 781/781 [==============================] - 29s 37ms/step - loss: 0.7566 - acc: 0.7618 - val_loss: 0.5852 - val_acc: 0.8231 Epoch 00010: val_acc improved from 0.82134 to 0.82315, saving model to /content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5 Epoch 11/100 781/781 [==============================] - 29s 37ms/step - loss: 0.7474 - acc: 0.7647 - val_loss: 0.5594 - val_acc: 0.8255 Epoch 00011: val_acc improved from 0.82315 to 0.82548, saving model to /content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5 Epoch 12/100 781/781 [==============================] - 28s 36ms/step - loss: 0.7439 - acc: 0.7689 - val_loss: 0.5616 - val_acc: 0.8286 Epoch 00012: val_acc improved from 0.82548 to 0.82859, saving model to /content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5 Epoch 13/100 781/781 [==============================] - 27s 34ms/step - loss: 0.7329 - acc: 0.7696 - val_loss: 0.5291 - val_acc: 0.8382 Epoch 00013: val_acc improved from 0.82859 to 0.83817, saving model to /content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5 Epoch 14/100 781/781 [==============================] - 30s 39ms/step - loss: 0.7182 - acc: 0.7751 - val_loss: 0.5582 - val_acc: 0.8299 Epoch 00014: val_acc did not improve from 0.83817 Epoch 15/100 781/781 [==============================] - 24s 31ms/step - loss: 0.7088 - acc: 0.7747 - val_loss: 0.5385 - val_acc: 0.8371 Epoch 00015: val_acc did not improve from 0.83817 Epoch 16/100 781/781 [==============================] - 24s 31ms/step - loss: 0.7118 - acc: 0.7743 - val_loss: 0.5312 - val_acc: 0.8356 Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05. 
Epoch 00016: val_acc did not improve from 0.83817 Epoch 17/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6836 - acc: 0.7847 - val_loss: 0.5117 - val_acc: 0.8379 Epoch 00017: val_acc did not improve from 0.83817 Epoch 18/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6752 - acc: 0.7879 - val_loss: 0.5109 - val_acc: 0.8426 Epoch 00018: val_acc improved from 0.83817 to 0.84257, saving model to /content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5 Epoch 19/100 781/781 [==============================] - 25s 32ms/step - loss: 0.6781 - acc: 0.7884 - val_loss: 0.4909 - val_acc: 0.8493 Epoch 00019: val_acc improved from 0.84257 to 0.84930, saving model to /content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5 Epoch 20/100 781/781 [==============================] - 25s 32ms/step - loss: 0.6624 - acc: 0.7905 - val_loss: 0.5194 - val_acc: 0.8384 Epoch 00020: val_acc did not improve from 0.84930 Epoch 21/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6645 - acc: 0.7916 - val_loss: 0.4780 - val_acc: 0.8537 Epoch 00021: val_acc improved from 0.84930 to 0.85370, saving model to /content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5 Epoch 22/100 781/781 [==============================] - 25s 32ms/step - loss: 0.6647 - acc: 0.7887 - val_loss: 0.5039 - val_acc: 0.8426 Epoch 00022: val_acc did not improve from 0.85370 Epoch 23/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6636 - acc: 0.7893 - val_loss: 0.4919 - val_acc: 0.8462 Epoch 00023: val_acc did not improve from 0.85370 Epoch 24/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6625 - acc: 0.7931 - val_loss: 0.4885 - val_acc: 0.8483 Epoch 00024: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06. Epoch 00024: val_acc did not improve from 0.85370 Epoch 25/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6708 - acc: 0.7873 - val_loss: 0.4887 - val_acc: 0.8493 Epoch 00025: val_acc did not improve from 0.85370 Epoch 26/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6529 - acc: 0.7949 - val_loss: 0.5092 - val_acc: 0.8400 Epoch 00026: val_acc did not improve from 0.85370 Epoch 27/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6583 - acc: 0.7926 - val_loss: 0.4930 - val_acc: 0.8490 Epoch 00027: ReduceLROnPlateau reducing learning rate to 1.0000001111620805e-07. Epoch 00027: val_acc did not improve from 0.85370 Epoch 28/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6535 - acc: 0.7904 - val_loss: 0.4763 - val_acc: 0.8547 Epoch 00028: val_acc improved from 0.85370 to 0.85474, saving model to /content/drive/My Drive/PDR/Results/models/ClassifierB0_3Layers.h5 Epoch 29/100 781/781 [==============================] - 25s 32ms/step - loss: 0.6529 - acc: 0.7915 - val_loss: 0.4916 - val_acc: 0.8426 Epoch 00029: val_acc did not improve from 0.85474 Epoch 30/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6567 - acc: 0.7942 - val_loss: 0.4967 - val_acc: 0.8475 Epoch 00030: val_acc did not improve from 0.85474 Epoch 31/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6605 - acc: 0.7919 - val_loss: 0.4862 - val_acc: 0.8493 Epoch 00031: ReduceLROnPlateau reducing learning rate to 1e-07. 
Epoch 00031: val_acc did not improve from 0.85474 Epoch 32/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6603 - acc: 0.7906 - val_loss: 0.4949 - val_acc: 0.8462 Epoch 00032: val_acc did not improve from 0.85474 Epoch 33/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6608 - acc: 0.7898 - val_loss: 0.4923 - val_acc: 0.8475 Epoch 00033: val_acc did not improve from 0.85474 Epoch 34/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6599 - acc: 0.7904 - val_loss: 0.4848 - val_acc: 0.8511 Epoch 00034: ReduceLROnPlateau reducing learning rate to 1e-07. Epoch 00034: val_acc did not improve from 0.85474 Epoch 35/100 781/781 [==============================] - 24s 31ms/step - loss: 0.6563 - acc: 0.7939 - val_loss: 0.4953 - val_acc: 0.8462 Epoch 00035: val_acc did not improve from 0.85474 Epoch 00035: early stopping
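###Markdown The `history` object returned by `model.fit` above is never used afterwards; as an optional follow-up, here is a minimal sketch for plotting the logged accuracy curves (an editorial addition — it assumes `matplotlib` is available in the environment and uses the `acc`/`val_acc` metric names seen in the log above). ###Code import matplotlib.pyplot as plt

# history.history is a dict keyed by the metric names logged during training
acc = history.history['acc']
val_acc = history.history['val_acc']
epochs_ran = range(1, len(acc) + 1)

plt.plot(epochs_ran, acc, label='train acc')
plt.plot(epochs_ran, val_acc, label='val acc')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()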
Lectures notebooks/(Lectures notebooks) netology Feature engineering/6. Work with dimension/Practice_6_NMF.ipynb
###Markdown Matrix decomposition NMF ###Code import numpy as np import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 plt.rcParams['figure.figsize'] = (10, 5) M = np.random.rand(10, 20) plt.imshow(M, cmap=plt.cm.binary) from sklearn.decomposition import NMF nmf = NMF(5, alpha=1).fit(M) W = nmf.transform(M) H = nmf.components_ plt.subplot(121) plt.title("W") plt.imshow(W, cmap=plt.cm.binary) plt.subplot(122) plt.title("H") plt.imshow(H, cmap=plt.cm.binary) plt.figure(figsize=(12, 8)) plt.subplot(211) plt.title("Original") plt.imshow(M, cmap=plt.cm.binary) plt.subplot(212) plt.title("Approximation") plt.imshow(np.dot(W, H), cmap=plt.cm.binary) ###Output _____no_output_____
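###Markdown To complement the visual comparison above with a number, the factorization quality can also be checked via the Frobenius reconstruction error — a short sketch (an editorial addition, reusing the `M`, `W`, `H` and `nmf` objects defined above); the two values should agree up to solver tolerance. ###Code # Error reported by scikit-learn after fitting (Frobenius norm of the residual)
print("sklearn reconstruction_err_:", nmf.reconstruction_err_)

# The same quantity computed by hand from the factors
print("||M - WH||_F:", np.linalg.norm(M - np.dot(W, H)))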
HumanPoseEstimation_project2.ipynb
###Markdown Loading data Tutorial and hints about the dataset: https://www.tensorflow.org/datasets/catalog/svhn_cropped ###Code import tensorflow_datasets as tfds dataset, info = tfds.load( name = "svhn_cropped", with_info= True, batch_size = 32, as_supervised = True) # Preparing data print(f"Training size: {info.splits['train'].num_examples}") print(f"Test size: {info.splits['test'].num_examples}") svhn_train = dataset['train'].prefetch(1) svhn_test = dataset['test'].prefetch(1) t1 = dataset['train'] ###Output _____no_output_____ ###Markdown Building a model Let’s now implement the architecture with 3 convolutional layers. Check the dimensions of the feature maps after each layer.**Notice** how the dimensions change when you change padding or stride. The configuration below should give you an accuracy of about 88% on the SVHN dataset. Note: Make sure you use the “2d”-versions of the units.``` 1. convolution, kernel_size=5, channels=6, stride=1, padding=2 2. batch-normalization 3. ReLU 4. Max-pool, kernel_size=2, stride=2 5. convolution, kernel_size=3, channels=12, stride=1, padding=1 6. batch-normalization 7. ReLU 8. Max-pool, kernel_size=2, stride=2 9. convolution, kernel_size=3, channels=24, stride=1, padding=1 10. batch-normalization 11. ReLU 12. Max-pool, kernel_size=2, stride=2 13. fully connected layer, output_size=10``` (Note that the code below deviates from this reference configuration: it uses wider layers, "valid" padding and drops the first max-pool.) ###Code import tensorflow as tf from tensorflow.keras.layers import Conv2D, BatchNormalization, MaxPool2D, ReLU, Dense, Flatten from tensorflow.keras.layers.experimental.preprocessing import Rescaling model = tf.keras.models.Sequential([ Rescaling(1.0/255, input_shape = [32, 32, 3]), Conv2D(16, kernel_size = 5, strides = (1,1), padding = "valid"), BatchNormalization(), ReLU(), # 2nd layer Conv2D(32, kernel_size = 3, strides = (1, 1), padding = "valid"), BatchNormalization(), ReLU(), MaxPool2D(pool_size = (2, 2), strides = (2, 2)), # 3rd layer Conv2D(64, kernel_size = 3, strides = (1, 1), padding = "valid"), BatchNormalization(), ReLU(), MaxPool2D(pool_size = (2, 2), strides = (2, 2)), # flatten the feature maps before the classifier Flatten(), # final layer: 10 logits, one per digit class Dense(10) ]) model.build() print(model.summary()) model.compile( # labels from as_supervised=True are integer class ids, so use the sparse loss on raw logits loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer = tf.keras.optimizers.Adam(learning_rate = 0.001), metrics = ["accuracy"] ) # batch_size is omitted here because the tf.data pipeline is already batched by tfds.load model.fit(svhn_train, epochs = 5) ###Output _____no_output_____
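###Markdown The markdown above quotes a target accuracy of roughly 88%, but the notebook never evaluates on the held-out split; a short sketch for checking it after training (an editorial addition, reusing the `svhn_test` pipeline defined earlier): ###Code test_loss, test_acc = model.evaluate(svhn_test)
print(f"Test accuracy: {test_acc:.3f}")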
exercises/Customization/Condensed.ipynb
###Markdown Configuring IPython Finding configuration options IPython has many configurable attributes. These can be viewed using the `-h` flag to the command line applications: ###Code !ipython -h ###Output ========= IPython ========= Tools for Interactive Computing in Python ========================================= A Python shell with automatic history (input and output), dynamic object introspection, easier configuration, command completion, access to the system shell and more. IPython can also be embedded in running programs. Usage ipython [subcommand] [options] [-c cmd | -m mod | file] [--] [arg] ... If invoked with no options, it executes the file and exits, passing the remaining arguments to the script, just as if you had specified the same command with python. You may need to specify `--` before args to be passed to the script, to prevent IPython from attempting to parse them. If you specify the option `-i` before the filename, it will enter an interactive IPython session after running the script, rather than exiting. Files ending in .py will be treated as normal Python, but files ending in .ipy can contain special IPython syntax (magic commands, shell expansions, etc.). Almost all configuration in IPython is available via the command-line. Do `ipython --help-all` to see all available options. For persistent configuration, look into your `ipython_config.py` configuration file for details. This file is typically installed in the `IPYTHONDIR` directory, and there is a separate configuration directory for each profile. The default profile directory will be located in $IPYTHONDIR/profile_default. IPYTHONDIR defaults to to `$HOME/.ipython`. For Windows users, $HOME resolves to C:\Documents and Settings\YourUserName in most instances. To initialize a profile with the default configuration file, do:: $> ipython profile create and start editing `IPYTHONDIR/profile_default/ipython_config.py` In IPython's documentation, we will refer to this directory as `IPYTHONDIR`, you can change its default location by creating an environment variable with this name and setting it to the desired path. For more information, see the manual available in HTML and PDF in your installation, or online at http://ipython.org/documentation.html. Subcommands ----------- Subcommands are launched as `ipython cmd [args]`. For information on using subcommand 'cmd', do: `ipython cmd -h`. locate print the path to the IPython dir trust Sign notebooks to trust their potentially unsafe contents at load. install-nbextension Install IPython notebook extension files kernel Start a kernel without an attached frontend. kernelspec Manage IPython kernel specifications. console Launch the IPython terminal-based Console. nbconvert Convert notebooks to/from other formats. profile Create and manage IPython profiles. notebook Launch the IPython HTML Notebook Server. history Manage the IPython history database. qtconsole Launch the IPython Qt Console. Options ------- Arguments that take values are actually convenience aliases to full Configurables, whose aliases are listed on the help line. For more information on full configurables, see '--help-all'. --nosep Eliminate all spacing between prompts. --quiet set log level to logging.CRITICAL (minimize logging output) --term-title Enable auto setting the terminal title. --pylab Pre-load matplotlib and numpy for interactive use with the default matplotlib backend. --init Initialize profile with default config files. 
This is equivalent to running `ipython profile create <profile>` prior to startup. --pydb Use the third party 'pydb' package as debugger, instead of pdb. Requires that pydb is installed. --no-autoedit-syntax Turn off auto editing of files with syntax errors. --classic Gives IPython a similar feel to the classic Python prompt. --no-term-title Disable auto setting the terminal title. --no-banner Don't display a banner upon starting IPython. --no-automagic Turn off the auto calling of magic commands. --autoindent Turn on autoindenting. --no-deep-reload Disable deep (recursive) reloading by default. --matplotlib Configure matplotlib for interactive use with the default matplotlib backend. --debug set log level to logging.DEBUG (maximize logging output) --autoedit-syntax Turn on auto editing of files with syntax errors. --no-color-info Disable using colors for info related things. --no-pprint Disable auto pretty printing of results. --banner Display a banner upon starting IPython. --no-confirm-exit Don't prompt the user when exiting. --pdb Enable auto calling the pdb debugger after every exception. --color-info IPython can display information about objects via a set of functions, and optionally can use colors for this, syntax highlighting source code and various other elements. This is on by default, but can cause problems with some pagers. If you see such problems, you can disable the colours. --no-pdb Disable auto calling the pdb debugger after every exception. --quick Enable quick startup with no config files. --deep-reload Enable deep (recursive) reloading by default. IPython can use the deep_reload module which reloads changes in modules recursively (it replaces the reload() function, so you don't need to change anything to use it). deep_reload() forces a full reload of modules whose code may have changed, which the default reload() function does not. When deep_reload is off, IPython will use the normal reload(), but deep_reload will still be available as dreload(). This feature is off by default [which means that you have both normal reload() and dreload()]. --no-autoindent Turn off autoindenting. --automagic Turn on the auto calling of magic commands. Type %%magic at the IPython prompt for more information. -i If running code from the command line, become interactive afterwards. --pprint Enable auto pretty printing of results. --confirm-exit Set to confirm when you try to exit IPython with an EOF (Control-D in Unix, Control-Z/Enter in Windows). By typing 'exit' or 'quit', you can force a direct exit without any confirmation. --config=<Unicode> (BaseIPythonApplication.extra_config_file) Default: '' Path to an extra config file to load. If specified, load this config file in addition to any other IPython config. --pylab=<CaselessStrEnum> (InteractiveShellApp.pylab) Default: None Choices: ['auto', 'gtk', 'gtk3', 'inline', 'nbagg', 'notebook', 'osx', 'qt', 'qt4', 'qt5', 'tk', 'wx'] Pre-load matplotlib and numpy for interactive use, selecting a particular matplotlib backend and loop integration. -c <Unicode> (InteractiveShellApp.code_to_run) Default: '' Execute the given command string. --log-level=<Enum> (Application.log_level) Default: 30 Choices: (0, 10, 20, 30, 40, 50, 'DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL') Set the log level by value or name. --ipython-dir=<Unicode> (BaseIPythonApplication.ipython_dir) Default: '' The name of the IPython directory. This directory is used for logging configuration (through profiles), history storage, etc. The default is usually $HOME/.ipython. 
This option can also be specified through the environment variable IPYTHONDIR. --colors=<CaselessStrEnum> (InteractiveShell.colors) Default: 'Linux' Choices: ('NoColor', 'LightBG', 'Linux') Set the color scheme (NoColor, Linux, or LightBG). --matplotlib=<CaselessStrEnum> (InteractiveShellApp.matplotlib) Default: None Choices: ['auto', 'gtk', 'gtk3', 'inline', 'nbagg', 'notebook', 'osx', 'qt', 'qt4', 'qt5', 'tk', 'wx'] Configure matplotlib for interactive use with the default matplotlib backend. -m <Unicode> (InteractiveShellApp.module_to_run) Default: '' Run the module as a script. --logfile=<Unicode> (InteractiveShell.logfile) Default: '' The name of the logfile to use. --ext=<Unicode> (InteractiveShellApp.extra_extension) Default: '' dotted module name of an IPython extension to load. --profile=<Unicode> (BaseIPythonApplication.profile) Default: 'default' The IPython profile to use. --logappend=<Unicode> (InteractiveShell.logappend) Default: '' Start logging to the given file in append mode. --cache-size=<Int> (InteractiveShell.cache_size) Default: 1000 Set the size of the output cache. The default is 1000, you can change it permanently in your config file. Setting it to 0 completely disables the caching system, and the minimum value accepted is 20 (if you provide a value less than 20, it is reset to 0 and a warning is issued). This limit is defined because otherwise you'll spend more time re-flushing a too small cache than working --autocall=<Enum> (InteractiveShell.autocall) Default: 0 Choices: (0, 1, 2) Make IPython automatically call any callable object even if you didn't type explicit parentheses. For example, 'str 43' becomes 'str(43)' automatically. The value can be '0' to disable the feature, '1' for 'smart' autocall, where it is not applied if there are no more arguments on the line, and '2' for 'full' autocall, where all callable objects are automatically called (even if no arguments are present). --gui=<CaselessStrEnum> (InteractiveShellApp.gui) Default: None Choices: ('glut', 'gtk', 'gtk3', 'osx', 'pyglet', 'qt', 'qt5', 'tk', 'wx') Enable GUI event loop integration with any of ('glut', 'gtk', 'gtk3', 'osx', 'pyglet', 'qt', 'qt5', 'tk', 'wx'). --profile-dir=<Unicode> (ProfileDir.location) Default: '' Set the profile location directly. This overrides the logic used by the `profile` option. 
To see all available configurables, use `--help-all` Examples -------- ipython --matplotlib # enable matplotlib integration ipython --matplotlib=qt # enable matplotlib integration with qt4 backend ipython --log-level=DEBUG # set logging to DEBUG ipython --profile=foo # start with profile foo ipython qtconsole # start the qtconsole GUI application ipython help qtconsole # show the help for the qtconsole subcmd ipython console # start the terminal-based console application ipython help console # show the help for the console subcmd ipython notebook # start the IPython notebook ipython help notebook # show the help for the notebook subcmd ipython profile create foo # create profile foo w/ default config files ipython help profile # show the help for the profile subcmd ipython locate # print the path to the IPython directory ipython locate profile foo # print the path to the directory for profile `foo` ipython nbconvert # convert notebooks to/from other formats ###Markdown This is an important trick for finding out configuration info: $> ipython [subcommand] --help-all | grep [-C context] PATTERN`--help-all` exposes everything configurable in IPython,there is a good chance you will find what you are looking for. A common configuration question is:> how do I disable the "Do you really want to exit" message when quitting with `Ctrl-d`?Well, logically this has to do with `exit`, so let's look for it: ###Code !ipython --help-all | GREP_COLOR='1;31;46' grep --color exit ###Output If invoked with no options, it executes the file and exits, passing the IPython session after running the script, rather than exiting. Files ending --confirm-exit Set to confirm when you try to exit IPython with an EOF (Control-D in Unix, Control-Z/Enter in Windows). By typing 'exit' or 'quit', you can force a direct exit without any confirmation. --no-confirm-exit Don't prompt the user when exiting. --TerminalInteractiveShell.confirm_exit=<CBool> Set to confirm when you try to exit IPython with an EOF (Control-D in Unix, Control-Z/Enter in Windows). By typing 'exit' or 'quit', you can force a direct exit without any confirmation. ###Markdown Which shows me that I can disable the confirmation for a single IPython session with $> ipython --no-confirm-exit or I can set the `TerminalInteractiveShell.confirm_exit=False` in a config file,to have it be the default behavior. Configuration principles Here are the design principles of the IPython configuration system: * Configuration is always done using class attributes* Classes that have configurable attributes are subclasses of `Configurable`* Attributes that are configurable are typed traitlets objects (`Bool`, `Unicode`, etc.) 
that have `config=True`* In config files, configurable attributes can be set using the format `Class.attr_name=the_value`* At the command line, configurable attributes can be set using the syntax `--Class.attr_name=the_value`* At the command line, some attributes have shorthands of the form `--attr-name=value`* Values set at the command line have higher priority than those set in config files The IPython Profile IPython has a notion of 'profiles' - these are directories that live in your IPYTHONDIR,which contain configuration and runtime information.Let's create the default profile ###Code !ipython profile create newprofile ###Output [ProfileCreate] Generating default config file: '/home/takluyver/.ipython/profile_newprofile/ipython_config.py' [ProfileCreate] Generating default config file: '/home/takluyver/.ipython/profile_newprofile/ipython_kernel_config.py' [ProfileCreate] Generating default config file: '/home/takluyver/.ipython/profile_newprofile/ipython_console_config.py' [ProfileCreate] Generating default config file: '/home/takluyver/.ipython/profile_newprofile/ipython_qtconsole_config.py' [ProfileCreate] Generating default config file: '/home/takluyver/.ipython/profile_newprofile/ipython_notebook_config.py' [ProfileCreate] Generating default config file: '/home/takluyver/.ipython/profile_newprofile/ipython_nbconvert_config.py' ###Markdown This creates a profile in your IPYTHONDIR (`ipython locate` is a quick way to see where your IPYTHONDIR is),and populates it with automatically generated default config files. ###Code !ipython locate profile default !ipython locate profile newprofile ###Output /home/takluyver/.ipython/profile_default /home/takluyver/.ipython/profile_newprofile ###Markdown You can skim ###Code profile = get_ipython().profile_dir.location profile ls $profile ###Output db/ ipython_nbconvert_config.py nbconfig/ startup/ history.sqlite ipython_notebook_config.py notebook.json static/ history.sqlite-journal ipython_qtconsole_config.py pid/ ipython_config.py log/ security/ ###Markdown Let's peek at our config file ###Code pycat $profile/ipython_config.py ###Output _____no_output_____ ###Markdown Startup Files Startup files are simple Python or IPython scriptsthat are run whenever you start IPython.These are a useful way to do super common imports,or for building database connections to load on startup of a non-default profile.We can use a startup file to ensure that our `%tic/toc` magics are always defined,every time we start IPython. ###Code !ls $profile/startup !cat $profile/startup/README ###Output This is the IPython startup directory .py and .ipy files in this directory will be run *prior* to any code or files specified via the exec_lines or exec_files configurables whenever you load this profile. Files will be run in lexicographical order, so you can control the execution order of files with a prefix, e.g.:: 00-first.py 50-middle.py 99-last.ipy ###Markdown Adding common imports, so we never have to forget them again ###Code %%writefile $profile/startup/simpleimports.py import sys, os, time, re ###Output Writing /home/takluyver/.ipython/profile_default/startup/simpleimports.py ###Markdown **Restart the kernel** and then run the following cells immediately to verify these scripts have been executed: ###Code sys ###Output _____no_output_____ ###Markdown Defining your own magic As we have seen already, IPython has cell and line magics. 
You can define your own magics using any Python function and the `register_magic_function` method: ###Code from IPython.core.magic import (register_line_magic, register_cell_magic, register_line_cell_magic) @register_line_magic def sleep(line): """A simple function for sleeping""" import time t = float(line) time.sleep(t) %sleep 2 %sleep? ###Output _____no_output_____ ###Markdown Cell Magic **Cell magics** take two args:1. the **line** on the same line of the magic 2. the **cell** the multiline body of the cell after the first line ###Code @register_cell_magic def dummy(line, cell): """dummy cell magic for displaying the line and cell it is passed""" print("line: %r" % line) print("cell: %r" % cell) %%dummy this is the line this is the cell ###Output line: 'this is the line' cell: 'this\nis the\ncell'
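###Markdown The import at the top of this section also brings in `register_line_cell_magic`, which is never demonstrated; here is a small sketch of a magic that works both as a line and as a cell magic (an editorial addition — `shout` is a made-up name, and `(line, cell=None)` is the standard signature pattern for combined magics). ###Code from IPython.core.magic import register_line_cell_magic

@register_line_cell_magic
def shout(line, cell=None):
    """Upper-case the line, or the cell body when used as a cell magic."""
    text = cell if cell is not None else line
    print(text.upper())

%shout hello from a line magic ###Code %%shout
this works
as a cell magic too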
docs/03_Defining_A_Lesion.ipynb
###Markdown Defining a Lesion Conducting a lesion analysis in ConWhAt is extremely simple. All that is needed is a binary `.nii` format lesion mask, with ones indicating lesioned tissue, and zeros elsewhere. >(Note: we use terms like 'lesion' and 'damage' throughout most of this documentation, as that is the most natural primary context for ConWhAt analyses. Remember however that all we are doing at the end of the day is performing a set of look-up operations between a list of standard space coordinates on the one hand (as defined by non-zero values in a `.nii` image), and the spatial locations of each 'connectome edge' - i.e. each entry in our anatomical connectivity matrix. One can envisage many alternative interpretations/applications of this procedure; for example to map the connectivity effects of magnetic field or current distributions from noninvasive brain stimulation). Still, for concreteness and simplicity, we stick with 'lesion', 'damage', etc. for the most part.) A common way to obtain a lesion map is from a patient's T1-weighted MR image. Although this can be done manually, it is strongly recommended to use an automated lesion segmentation tool, followed by manual editing. An alternative way is simply to define a lesion location using standard space coordinates, and build a 'lesion' mask *de-novo*. This is what we do in the following example. On the next page we do a ConWhAt connectome-based decomposition analysis on this 'synthetic' lesion mask.--- ###Code # ConWhAt stuff from conwhat import VolConnAtlas,StreamConnAtlas,VolTractAtlas,StreamTractAtlas from conwhat.viz.volume import plot_vol_and_rois_nilearn # Neuroimaging stuff import nibabel as nib from nilearn.plotting import plot_roi from nipy.labs.spatial_models.mroi import subdomain_from_balls from nipy.labs.spatial_models.discrete_domain import grid_domain_from_image # Viz stuff %matplotlib inline from matplotlib import pyplot as plt # Generic stuff import numpy as np ###Output _____no_output_____ ###Markdown Define some variables ###Code # Locate the standard space template image fsl_dir = '/global/software/fsl/5.0.10' t1_mni_file = fsl_dir + '/data/standard/MNI152_T1_1mm_brain.nii.gz' t1_mni_img = nib.load(t1_mni_file) # This is the output we will save to file and use in the next example lesion_file = 'synthetic_lesion_20mm_sphere_-46_-60_6.nii.gz' ###Output _____no_output_____ ###Markdown Define the 'synthetic lesion' location and size using standard (MNI) space coordinates ###Code com = [-46,-60,6] # com = centre of mass rad = 20 # radius ###Output _____no_output_____ ###Markdown Create the ROI ###Code domain = grid_domain_from_image(t1_mni_img) lesion_img = subdomain_from_balls(domain,np.array([com]), np.array([rad])).to_image() ###Output _____no_output_____ ###Markdown Plot on brain slices ###Code plot_roi(lesion_img,bg_img=t1_mni_img,black_bg=False); ###Output _____no_output_____ ###Markdown Save to file ###Code lesion_img.to_filename(lesion_file) ###Output _____no_output_____
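###Markdown As a quick sanity check on the file we just saved (an editorial addition, reusing `lesion_file` and the imports above; on older nibabel versions `get_data()` may be needed instead of `get_fdata()`), the mask can be reloaded to confirm it is binary and to estimate the lesion volume: ###Code check_img = nib.load(lesion_file)
check_data = check_img.get_fdata()

print('Unique values in mask:', np.unique(check_data))
n_vox = int(np.count_nonzero(check_data))
vox_vol_mm3 = float(np.prod(check_img.header.get_zooms()[:3]))
print('%i nonzero voxels, approx. %.1f cm^3' % (n_vox, n_vox * vox_vol_mm3 / 1000.0))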
chapter01/chapter01.ipynb
###Markdown ###Code import cv2 import dlib from imutils import face_utils, resize import numpy as np orange_img = cv2.imread('orange.jpg') orange_img = cv2.resize(orange_img, dsize=(512, 512)) detector = dlib.get_frontal_face_detector() predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat') # Passing 0 uses the webcam instead of a video file cap = cv2.VideoCapture('0.mp4') # cap = cv2.VideoCapture(0) cap.isOpened() while cap.isOpened(): ret, img = cap.read() # If no frame is returned, break out of the loop if not ret: break faces = detector(img) result = orange_img.copy() try: if len(faces) > 0: face = faces[0] x1, y1, x2, y2 = face.left(), face.top(), face.right(), face.bottom() face_img = img[y1:y2, x1:x2].copy() shape = predictor(img, face) shape = face_utils.shape_to_np(shape) # eyes le_x1 = shape[36, 0] le_y1 = shape[37, 1] le_x2 = shape[39, 0] le_y2 = shape[41, 1] le_margin = int((le_x2 - le_x1) * 0.18) re_x1 = shape[42, 0] re_y1 = shape[43, 1] re_x2 = shape[45, 0] re_y2 = shape[47, 1] re_margin = int((re_x2 - re_x1) * 0.18) left_eye_img = img[le_y1-le_margin:le_y2+le_margin, le_x1-le_margin:le_x2+le_margin].copy() right_eye_img = img[re_y1-re_margin:re_y2+re_margin, re_x1-re_margin:re_x2+re_margin].copy() left_eye_img = resize(left_eye_img, width=100) right_eye_img = resize(right_eye_img, width=100) result = cv2.seamlessClone( left_eye_img, result, np.full(left_eye_img.shape[:2], 255, left_eye_img.dtype), (100, 200), cv2.MIXED_CLONE ) result = cv2.seamlessClone( right_eye_img, result, np.full(right_eye_img.shape[:2], 255, right_eye_img.dtype), (100, 200), cv2.MIXED_CLONE ) # mouth mouth_x1 = shape[48, 0] mouth_y1 = shape[50, 1] mouth_x2 = shape[54, 0] mouth_y2 = shape[57, 1] mouth_margin = int((mouth_x2 - mouth_x1) * 0.1) mouth_img = img[mouth_y1-mouth_margin:mouth_y2+mouth_margin, mouth_x1-mouth_margin:mouth_x2+mouth_margin].copy() mouth_img = resize(mouth_img, width=250) result = cv2.seamlessClone( mouth_img, result, np.full(mouth_img.shape[:2], 255, mouth_img.dtype), (180, 320), cv2.MIXED_CLONE ) cv2.imshow('left', left_eye_img) cv2.imshow('right', right_eye_img) cv2.imshow('mouth', mouth_img) cv2.imshow('face', face_img) cv2.imshow('result', result) except: continue ###Output _____no_output_____ ###Markdown Introducing Clojure Language Basics Basic syntax ###Code (str "Hello, " "World!") (str "Hello from " "Clojure with " "lots of " "arguments") ###Output _____no_output_____ ###Markdown Basic arithmetic ###Code (+ 1 2) (+ 1 2 3) ###Output _____no_output_____ ###Markdown Clojure makes all precedence rules explicit ###Code (+ 3 (* 4 2)) ###Output _____no_output_____ ###Markdown Host interoperation: A JVM crash course Using the dot operator to call Java static methods ###Code (. Math PI) (. Math abs -3) (Math/abs -3) ###Output _____no_output_____ ###Markdown Using the dot operator to call Java instance methods ###Code (.toUpperCase "foo") (. "foo" toUpperCase) ###Output _____no_output_____ ###Markdown Creating instances of Java classes ###Code (new Integer "42") (Integer. "42") ###Output _____no_output_____ ###Markdown CHAPTER01 Numbers 1.1 Integers * Integers are immutable objects. * For immutable objects there is no difference between the object and a variable reference to it. * Python integers take at least 32 bits (4 bytes) of storage. ###Code # Method for checking how many bits an integer requires in Python (999).bit_length() # When constructing an int object, a string and a base (decimal by default; e.g. base 2 or base 3) can be passed as arguments, and the corresponding value is returned s = '11' int(s), int(s, 2), int(s,3) ###Output _____no_output_____ ###Markdown 1.2 Floating-point numbers * Floating-point numbers (float) are immutable objects. * They follow the IEEE 754 standard; in single precision a value is represented with 32 bits * 1 bit for the sign, 8 bits for the exponent and 23 bits for the significand (mantissa). * In double precision a value is represented with 64 bits: 1 sign bit, 52 significand bits and 11 exponent bits. 
Representing a decimal number in single precision: 1. Convert it to binary. 2. Normalize it by shifting the binary point to the left until only one digit remains in front of it. 3. Pad the digits to the right of the binary point with zeros to form the significand, and use the number of positions shifted during normalization as the exponent. 4. When the exponent is stored, a bias is added to it. We will look at the bias in more detail later. ###Code 0.2*3==0.6, 1.2-0.2==1.0, 1.2-0.1==1.1, 0.1*0.1==0.01 ###Output _____no_output_____ ###Markdown Because decimals are stored in this binary floating-point form, values that are logically equal can have different representations, so naive equality tests should be avoided. Usually an approximate comparison is used instead. ###Code def a(x, y, places=7): return round(abs(x-y), places) == 0 a(1.2-0.1, 1.1) ###Output _____no_output_____ ###Markdown * In Python, division always returns a float; divmod can be used to get the quotient and remainder as integers. * round() rounds to a given position, from decimal places down to integer places (by passing a negative argument). * as_integer_ratio() gives the fractional representation of a float. ###Code divmod(45,7), round(113.866, -2), round(113.866, 0), round(113.866, 2), 8.75.as_integer_ratio() ###Output _____no_output_____ ###Markdown 1.3 Complex numbers In Python a complex number is represented as a pair of floats, as in z = a + bj. (Why j rather than i?!) real and imag are attributes and conjugate() is a method; to work further with complex numbers, the cmath module must be imported. ###Code z = 3 + 4j z.real, z.imag, z.conjugate() ###Output _____no_output_____ ###Markdown 1.4 Fractions Fractions are represented with the Fraction class from the fractions module. Let's get a feel for it with the following functions. ###Code from fractions import Fraction def rounding_floats(n, places=7): return round(n, places) def float_to_fractions(n): return Fraction(*n.as_integer_ratio()) def get_denominator(n1, n2): f = Fraction(n1, n2) return f.denominator def get_numerator(n1, n2): f = Fraction(n1, n2) return f.numerator def test_testing_float(): assert(rounding_floats(1.25, 1) == 1.2) assert(rounding_floats(12.5, -1) == 10) assert(float_to_fractions(1.25) == 5/4) assert(get_denominator(5, 7) == 7) assert(get_numerator(5, 7) == 5) print("Test passed!") test_testing_float() Fraction(4,5), 4/5, Fraction(*(4/5).as_integer_ratio()) == 4/5, Fraction(4, 5) == 4/5 ###Output _____no_output_____ ###Markdown 1.5 The decimal module When exact decimal floating-point numbers are needed, the decimal.Decimal object can be used. It accepts integers and strings as arguments, and a float can be converted with decimal.Decimal.from_float(). With this module, floating-point problems such as the equality comparisons above are easy to resolve. ###Code e1 = sum([0.1 for i in range(10)]) == 1.0 from decimal import Decimal e2 = sum([Decimal('0.1') for i in range(10)]) == Decimal('1.0') e1, e2 ###Output _____no_output_____ ###Markdown 1.6 Binary, octal and hexadecimal ###Code bin(999), oct(999), hex(999) ###Output _____no_output_____ ###Markdown Hello, Clojure Hello World ###Code (println "Hello, world!") ; Say hi ;; Double semicolons are used if the comment is all alone on its own line (println "Hello, world!") ; A single semicolon is used at the end of a line with some code ###Output Hello, world! ###Markdown Basic string manipulation ###Code ;; Concat strings (str "Clo" "jure") ;; Concat strings and numbers (str 3 " " 2 " " 1 " Blast off!") ;; Count the number of characters of a string (count "Hello, world") ###Output _____no_output_____ ###Markdown Booleans ###Code (println true) ; Prints true... (println false) ; ...and prints false. ###Output false ###Markdown Nil ###Code (println "Nobody's home:" nil) ; Prints Nobody's home: nil (println "We can print many things:" true false nil) ###Output We can print many things: true false nil ###Markdown Basic Arithmetic operations ###Code ;; A simple sum example (+ 1900 84) ;; A simple product example (* 16 124) ;; A simple subtraction example (- 2000 16) ; 1984 again. ;; A simple division example (/ 25792 13) ;; A simple average example (/ (+ 1984 2010) 2) ;; EVERYTHING in clojure is evaluated as follows """ (verb argument argument argument...) """ ;; The math operators take an arbitrary number of args (+ 1000 500 500 1) ; Evaluates to 2001. 
;; The average of 2 numbers using floating-point numbers (/ (+ 1984.0 2010.0) 2.0) ;; Adding an integer to a float returns a float (+ 1984 2010.0) ###Output _____no_output_____ ###Markdown Not Variable Assignment, but Close ###Code ;; Binding a symbol (first-name) to a value ("Russ") (def first-name "Russ") ;; 'def' can accept any expression (def the-average (/ (+ 20 40.0) 2.0)) ###Output _____no_output_____ ###Markdown Basic function definitions ###Code ;; A simple function without args (defn hello-world [] (println "Hello, world!")) (hello-world) ;; A function with 1 arg (defn say-welcome [what] (println "Welcome to" what)) (say-welcome "Clojure") ;; A simple average function (defn average [a b] ; No commas between args (/ (+ a b) 2.0)) (average 5.0 10.0) ;; A more verbose average function (defn chatty-average [a b] (println "chatty-average function called") (println "** first argument:" a) (println "** second argument:" b) (/ (+ a b) 2.0)) (chatty-average 10 20) ###Output chatty-average function called ** first argument: 10 ** second argument: 20 ###Markdown Introduction to Leiningen ###Code ;; Execute the following command to start a new Clojure project skeleton """ !lein new app blottsbooks """ ;; Add the following code to core.clj, located at ./blottsbooks/src/blottsbooks/core.clj (ns blottsbooks.core ; :gen-class instructs that the namespace should be compiled (:gen-class)) (defn say-welcome [what] (println "Welcome to" what "!")) (defn -main [] ; The main function (say-welcome "Blotts Books")) ;; Execute the following command to execute the last snippet """ !cd ./blottsbooks !lein run """ (ns user) ###Output _____no_output_____ ###Markdown Common Clojure errors ###Code ;; Division by zero (/ 100 0) ;; Typo when calling a function (catty-average) ;; Too many parentheses (+ (* 2 2) 10)) ;; Too few parentheses (+ (* 2 2) 10 ###Output Syntax error reading source at (REPL:3:1). EOF while reading, starting at line 3
notebooks/Section2_1-MCMC.ipynb
###Markdown MCMC BasicsUncertainty may play an important role in business decisions. At the end of the day, our goal is to evaluate some *expectation* in the presence of uncertainty. Inverse CDF samplingGiven a probability density function, $p(x)$, the cumulative density function is given by $$\operatorname{cdf}(x) = \int_0^x p(t)~dt$$Note that the value $\operatorname{cdf}(x)$ is "the probability that a value is less than $x$", and is between 0 and 1. ###Code rv = st.norm(0, 1) t = np.linspace(-4, 4, 300) fig, axes = plt.subplots(ncols=2, figsize=(15, 5)) axes[0].plot(t, rv.pdf(t)) axes[0].set_title('Normal probability density function') axes[1].plot(t, rv.cdf(t)) axes[1].set_title('Normal cumulative density function') ###Output _____no_output_____ ###Markdown If we can *invert* the cumulative density function, we have a function $\operatorname{cdf}^{-1}(t)$, where $0 \leq t \leq 1$. We can use this function to draw random values:1. Draw $u \sim U(0, 1)$2. Use $y = \operatorname{cdf}^{-1}(u)$ as your sample ###Code np.random.seed(0) rv = st.norm(0, 1) t = np.linspace(-4, 4, 300) u = np.random.rand() fig, ax = plt.subplots(figsize=(10, 7)) ax.plot(t, rv.cdf(t), color='C0') ax.text(t.min() + 0.1, u + 0.02, '$u$', fontdict={"fontsize": 24}) ax.hlines(u, t.min(), rv.ppf(u), linestyles='dashed', color='C0') ax.vlines(rv.ppf(u), u, 0, linestyles='dashed', color='C0') bg_color = ax.get_facecolor() ax.plot(rv.ppf(u), u, 'o', mfc=bg_color, ms=15) ax.text(rv.ppf(u) + 0.1, 0.02, 'y', fontdict={"fontsize": 24}) ax.set_xlim(t.min(), t.max()) ax.set_ylim(0, 1) ax.set_title('Inverse CDF sampling'); ###Output _____no_output_____ ###Markdown Inverse CDF exercise: Fill out the following function that implements inverse CDF sampling. There is a cell below to visually check your implementation. ###Code def sample(draws, inv_cdf): """Draw samples using the inverse CDF of a distribution. Parameters ---------- draws : int Number of draws to return inv_cdf : function Gives the percentile of the distribution the argument falls in. This is vectorized, like in `scipy.stats.norm.ppf`""" # output should be an array of size (draws,), distributed according to inv_cdf #################### return np.random.rand(draws) # This is wrong, but it runs! #################### fig, axes = plt.subplots(ncols=2, figsize=(15, 5), sharex=True, sharey=True) draws = 10_000 # Two histograms should look the same axes[0].hist(st.norm().rvs(draws), bins='auto', density=True) axes[1].hist(sample(draws, st.norm().ppf), bins='auto', density=True); ###Output _____no_output_____ ###Markdown Inverse CDF exercise (calculus required)The probability density function of the exponential distribution is $$p(x | \lambda) = \lambda e^{-\lambda x}$$Calculate the cumulative density function, invert it, and use the `sample` function above to sample from the exponential function.Again, there is a plot below to check your implementation. 
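###Markdown For readers who want to check their work, here is one possible reference sketch for the two inverse-CDF exercises (an editorial addition — skip this cell if you want to solve them yourself; the intended exercise skeleton for the exponential follows below). It uses the closed form $\operatorname{cdf}^{-1}(u) = -\ln(1-u)/\lambda$ for the exponential distribution. ###Code def sample_solution(draws, inv_cdf):
    # Draw uniforms on (0, 1), then push them through the inverse CDF
    u = np.random.rand(draws)
    return inv_cdf(u)


def inv_cdf_exponential_solution(u, lam=1):
    # Invert u = 1 - exp(-lam * x)  =>  x = -log(1 - u) / lam
    return -np.log(1 - u) / lam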
###Code def inv_cdf_exponential(u, lam=1): # Should return an array of shape `u.shape` #################### return u # wrong but compiles #################### fig, axes = plt.subplots(ncols=2, figsize=(15, 5), sharex=True, sharey=True) draws = 10_000 # Two histograms should look the same axes[0].hist(st.expon(scale=1.).rvs(draws), bins='auto', density=True) axes[1].hist(sample(draws, inv_cdf_exponential), bins='auto', density=True); ###Output _____no_output_____ ###Markdown Hints for previous exerciseThe cumulative density function is$$\operatorname{cdf}(x) = 1-e^{-\lambda x}.$$Invert the cumulative density function by solving $$y = 1-e^{-\lambda x}$$ for $x$ in terms of $y$. Rejection SamplingMost integrals are hard or impossible to do. Also, if we are iterating on a statistical model, we may want a method that works without requiring rederiving a formula for generating samples. Further, in Bayesian data analysis, we may not know a *normalizing constant*: we may only know $$\tilde{p}(x) = \frac{1}{Z_p}p(x),$$for some constant $Z_p$ ("constant" here is with respect to $x$). In order to sample, first we1. Choose a proposal distribution $q$ that you know how to sample from2. Choose a number $k$, so that $kq(x) \geq \tilde{p}(x)$ for all $x$Then, we repeatedly 1. Draw a $z$ from $q$2. Draw a $u$ from $\operatorname{Uniform}(0, kq(z))$3. If $\tilde{p} > u$, accept the draw, otherwise, reject.Importantly, every "rejection" is wasted computation! We will explore methods for having less wasted computation later. ###Code def mixture_of_gaussians(): rvs = (st.norm(-3, 1), st.norm(0, 1), st.norm(3, 1)) probs = (0.5, 0.2, 0.3) def pdf(x): return sum(p * rv.pdf(x) for p, rv in zip(probs, rvs)) return pdf np.random.seed(6) pdf = mixture_of_gaussians() k = 3 q = st.norm(0, 3) z = q.rvs() u = np.random.rand() * k * q.pdf(z) fig, ax = plt.subplots(figsize=(10, 5), constrained_layout=True) t = np.linspace(-10, 10, 500) ax.plot(t, pdf(t), '-', label='$q(x)$') ax.fill_between(t, 0, pdf(t), alpha=0.2) ax.plot(t, k * q.pdf(t), '-', label='$3 \cdot \mathcal{N}(z | 0, 3)$') ax.fill_between(t, pdf(t), 3 * q.pdf(t), alpha=0.2) bg_color = ax.get_facecolor() ax.vlines(z, 0, pdf(z), linestyles='dashed', color='green') ax.vlines(z, pdf(z), k * q.pdf(z), linestyles='dashed', color='red') ax.plot(z, 0, 'o', label='$z \sim \mathcal{N}(0, 3)$', ms=15, mfc=bg_color) ax.plot(z, pdf(z), 'o', color='C0', ms=15, mfc=bg_color) ax.plot(z, u, 'rx', label='$u \sim U(0, 3\cdot\mathcal{N}(z | 0, 3))$', ms=15, mfc=bg_color) ax.plot(z, k * q.pdf(z), 'o', color='C1', ms=15, mfc=bg_color) ax.set_ylim(bottom=0) ax.set_xlim(t.min(), t.max()) ax.legend(); ###Output _____no_output_____ ###Markdown Rejection Sampling ExerciseSample from the pdf returned by `mixture_of_gaussians` using rejection sampling. We will implement this as a Python generator, and yield the proposed draw, `z`, as well as whether it was accepted. You should assume `proposal_dist` comes from `scipy.stats`, so it has a `.rvs()` method that samples, and a `.pdf` method that evaluates the probability density function at a point.If $kq(x)$ is not larger than $\tilde{p}(x)$, throw an exception!The cell below has a plot to check your implementation. 
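###Markdown Similarly, one possible reference implementation of the accept/reject test described above (an editorial addition; the exercise skeleton to fill in yourself follows in the next cell): ###Code def rejection_sampler_solution(pdf, proposal_dist, k):
    """Yields proposals, and whether each proposal should be accepted."""
    while True:
        z = proposal_dist.rvs()
        if k * proposal_dist.pdf(z) < pdf(z):
            raise ValueError("k * q(z) must dominate the target pdf everywhere")
        # u ~ Uniform(0, k * q(z)); accept when the target density lies above u
        u = np.random.uniform(0, k * proposal_dist.pdf(z))
        yield z, pdf(z) > u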
###Code def rejection_sampler(pdf, proposal_dist, k): """ Yields proposals, and whether that proposal should be accepted or rejected """ while True: z = proposal_dist.rvs() # Enter your code below #################### accept = True yield z, accept #################### def gen_samples(draws, sampler): """An example of how to use the rejection sampler above.""" samples = [] for n_draws, (z, accept) in enumerate(sampler, 1): if accept: samples.append(z) if len(samples) == draws: return np.array(samples), n_draws %%time pdf = mixture_of_gaussians() proposal_dist = st.norm(0, 3) k = 3 samples, draws = gen_samples(10_000, rejection_sampler(pdf, proposal_dist, k)) fig, ax = plt.subplots(figsize=(10, 7)) t = np.linspace(-10, 10, 500) # This histogram should look very similar to the pdf that is plotted ax.hist(samples, bins='auto', density=True) ax.plot(t, pdf(t)) ax.set_title(f'{samples.size:,d} draws from the pdf with {100 * samples.size / draws:.2f}% efficiency'); ###Output CPU times: user 534 ms, sys: 34.7 ms, total: 569 ms Wall time: 678 ms ###Markdown Exercise: How does a rejection sampler scale with dimension?Use as your "unknown distribution" a multivariate Gaussian with identity covariance matrix, and use as your proposal distribution a multivariate Gaussian with covariance matrix `1.1 * I`. - Around what percent of samples are accepted with dimension 1? - 10 dimensions? - 100 dimensions? - What happens if you try to use 1,000 dimensions? ###Code #################### def run_experiment(dims, trials=1_000): pdf = st.multivariate_normal(mean=np.zeros(dims), cov=np.eye(dims)).pdf prop = st.multivariate_normal(mean=np.zeros(dims), cov=1.1 * np.eye(dims)) k = pdf(0) / prop.pdf(0) samples = prop.rvs(trials) sample_pdfs = prop.pdf(samples) u = np.random.uniform(low=0, high=k * sample_pdfs) accept = pdf(samples) > u return accept.mean() #################### ###Output _____no_output_____ ###Markdown Importance sampling is useful but we won't cover it!It produces _weighted_ samples, so that the output is samples and weights. See 11.1.4 in Bishop's "Pattern Recognition and Machine Learning". Introduction to MCMCOne way to intuitively waste less computation is to use knowledge from your current sample to inform your next proposal: this is called a *Markov chain*. Let $t$ be the index of our current sample, $x_t$ be our current sample, and $\operatorname{pdf}(x_t)$ be our probability density function evaluated at the current sample. We will define a *transition probability* that is conditioned on our current position: $T(x_{t + 1} | x_t)$. It turns out that a Markov chain will sample from $\operatorname{pdf}$ if:- $T$ is ergodic (sort of techinical -- roughly $T$ is aperiodic and can explore the whole space)- The chain satisfies *detailed balance*, which means $\operatorname{pdf}(x_t)T(x_{t+1} | x_t) = \operatorname{pdf}(x_{t + 1})T(x_{t} | x_{t + 1})$.This second criteria inspires the *Metropolis acceptance criteria*: If we use any proposal with density function $\operatorname{prop}$, we use this criterion to "correct" the transition probability to satisfy detailed balance:$$A(x_{t + 1} | x_t) = \min\left\{1, \frac{\operatorname{pdf}(x_{t + 1})}{\operatorname{pdf}(x_{t})}\frac{\operatorname{prop}(x_{t} | x_{t + 1})}{\operatorname{prop}(x_{t + 1} | x_t)} \right\}$$Now the *Metropolis-Hastings Algorithm* isInitialize at some point $x_0$. For each iteration:1. Draw $\tilde{x}_{t + 1} \sim \operatorname{prop}(x_t)$2. Draw $u \sim \operatorname{Uniform}(0, 1)$3. 
If $u < A(\tilde{x}_{t + 1} | x_t)$, then $x_{t + 1} = \tilde{x}_{t + 1}$. Otherwise, $x_{t + 1} = x_t$.This is "tested" in the following cell. ###Code def metropolis_hastings(pdf, proposal, init=0): """Yields a sample, and whether it was accepted. Notice that, unlike the rejection sampler, even when the second argument is `False`, we use the sample! """ current = init while True: prop_dist = proposal(current) prop = prop_dist.rvs() p_accept = min(1, pdf(prop) / pdf(current) \ * proposal(prop).pdf(current) / prop_dist.pdf(prop)) accept = np.random.rand() < p_accept if accept: current = prop yield current, accept def gen_samples(draws, sampler): """An example of using the metropolis_hastings API.""" samples = np.empty(draws) accepts = 0 for idx, (z, accept) in takewhile(lambda j: j[0] < draws, enumerate(sampler)): accepts += int(accept) samples[idx] = z return samples, accepts %%time pdf = mixture_of_gaussians() proposal_dist = st.norm(0, 3) k = 3 samples, accepts = gen_samples(10_000, metropolis_hastings(pdf, lambda x: st.norm(x, 1))) fig, ax = plt.subplots(figsize=(10, 7)) t = np.linspace(samples.min(), samples.max(), 500) ax.hist(samples, bins='auto', density=True) ax.plot(t, pdf(t)) ax.set_title(f'{samples.size:,d} draws from the pdf with {100 * accepts / samples.size :.2f}% accept rate'); ###Output CPU times: user 32.3 s, sys: 790 ms, total: 33.1 s Wall time: 49 s ###Markdown MCMC ExerciseThis implementation is wildly inefficient! We will speed it up by fixing the proposal distribution as a Gaussian centered at the previous point (this is fairly standard). Specifically,$$x_{t+1} \sim \mathcal{N}( x_t, \sigma),$$so$$\operatorname{prop}(x_{t+1} | x_{t}) = \mathcal{N}(x_{t + 1} | x_t, \sigma)$$We call $\sigma$ the *step size*.1. The Metropolis-Hastings acceptance criteria simplifies quite a bit - work out what $A(x_{t + 1} | x_t)$ is now.2. scipy.stats is doing a lot of work: `st.norm().rvs()` is ~1000x slower than `np.random.randn()`. Rewrite `metropolis_hastings` with the acceptance criteria, and without using `st.norm().rvs()` to provide proposals. ###Code def metropolis_hastings(pdf, step_size, init=0): current = init while True: #################### accept = True #################### yield current, accept %%time pdf = mixture_of_gaussians() proposal_dist = st.norm(0, 3) k = 3 samples, accepts = gen_samples(10_000, metropolis_hastings(pdf, 1)) fig, ax = plt.subplots(figsize=(10, 7)) t = np.linspace(samples.min(), samples.max(), 500) ax.hist(samples, bins='auto', density=True) ax.plot(t, pdf(t)) ax.set_title(f'{samples.size:,d} draws from the pdf with {100 * accepts / samples.size :.2f}% accept rate') ###Output CPU times: user 67.4 ms, sys: 3.05 ms, total: 70.5 ms Wall time: 35.4 ms ###Markdown MCMC Exercises 21. Find a step size so that the acceptance rate is ~25%2. Find a step size so that the acceptance rate is ~95%3. What is the general relationship between step size and acceptance rate? Bonus exerciseWrite a routine for finding a step size that gives a specific acceptance rate for Metropolis-Hastings. It may be helpful to return the acceptance probability instead of (or in addition to) the `accept` boolean. Literature suggests the overly specific 23.4% acceptance rate as a good target. PyMC3 aims for anything between 10% and 90%. Gibbs SamplingIf you can sample from all the marginal distributions, you can implement a sampler pretty efficiently just using those.The general idea is to:1. Initialize $\theta^0 = (\theta_1^0, \theta_2^0, \ldots, \theta_n^0)$, and $j = 0$2. 
For each $k = 1, 2, \ldots, n$: - Set $\theta_k^j \sim \pi(\theta_k^j | \theta_1^j, \theta_2^j, \ldots, \theta_n^j)$3. Increment $j$, and repeat as long as desiredThis is pretty tricky to automate, since you need to know all of these conditional distributions! That said, this is often seen in science when a sampler is hand-built to do inference with a specific model. In that case, each conditional distribution might be computed by hand. Coal mining exampleWe have a time series of recorded coal mining disasters in the UK from 1851 to 1961.Occurrences of disasters in the time series is thought to be derived from a Poisson process with a large rate parameter in the early part of the time series, and from one with a smaller rate in the later part. We are interested in locating the change point in the series, which perhaps is related to changes in mining safety regulations. ###Code disasters_array = np.array( [4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6, 3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5, 2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0, 1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2, 3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1]) years = np.arange(1851, 1962, dtype=int) fig, ax = plt.subplots() ax.vlines(years, 0, disasters_array, lw=6) ax.set_xlim(years.min() - 1, years.max() + 1) ax.set_ylim(bottom=0); ###Output _____no_output_____ ###Markdown Writing down the model and computing conditional distributionsIt is perhaps easiest to write the model as a PyMC3 model. In notation, we might write$$y_t \sim \operatorname{Poisson}(\lambda_t), t=1851, \ldots, 1962 \\\lambda_t = \left\{ \begin{array}{}\lambda_1 \text{ for } t \leq \tau \\ \lambda_2 \text{ for } t > \tau \end{array}\right. \\\lambda_j \sim \operatorname{Gamma}(1, 10) \\\tau \sim \operatorname{DiscreteUniform}(1851, 1962)$$ ###Code import pymc3 as pm def coal_disaster_model(): with pm.Model() as model: early_lambda = pm.Gamma('early_lambda', 1, 10) late_lambda = pm.Gamma('late_lambda', 1, 10) change_point = pm.DiscreteUniform('change_point', 1851, 1962) lam = pm.Deterministic('lam', pm.math.where(years > change_point, late_lambda, early_lambda)) pm.Poisson('rate', lam, observed=disasters_array) return model pm.model_to_graphviz(coal_disaster_model()) ###Output _____no_output_____ ###Markdown Now we need to go out and compute the conditional distributions:$$p(\tau | \lambda_1, \lambda_2, y_t) \\ p(\lambda_1 | \tau, \lambda_2, y_t) \\ p(\lambda_2 | \tau, \lambda_1, y_t)$$In this case, we can do some arithmetic, look up these distributions, and compute$$p(\tau | \lambda_1, \lambda_2, y_t) = \operatorname{Categorical}\left( \frac{\lambda_1^{\sum_{t=1851}^{\tau} y_t +\alpha-1} e^{-(\beta+\tau)\lambda_1} \lambda_2^{\sum_{t=\tau+1}^{1962} y_i + \alpha-1} e^{-\beta\lambda_2}}{\sum_{k=1851}^{1962} \lambda_1^{\sum_{t=1851}^{\tau} y_t +\alpha-1} e^{-(\beta+\tau)\lambda_1} \lambda_2^{\sum_{t=\tau+1}^{1962} y_i + \alpha-1} e^{-\beta\lambda_2}} \right) \\ p(\lambda_1 | \tau, \lambda_2, y_t) = \operatorname{Gamma}\left(\sum_{t=1851}^{\tau} y_t + \alpha, \tau + \beta\right)\\ p(\lambda_2 | \tau, \lambda_1, y_t) = \operatorname{Gamma}\left(\sum_{t=\tau + 1}^{1962} y_t + \alpha, 1962 - \tau + \beta\right)$$So far so good! Now here's an implementation! 
###Code def gibbs_sample_disaster(samples, tau=1900, early_lambda=6, late_lambda=2): """Can supply different initial conditions!""" draws = np.empty((3, samples)) gamma_pdf = lambda lam, a, b: lam**(a-1) * np.exp(-b*lam) n_years = disasters_array.shape[0] years = np.arange(1851, 1962, dtype=int) draws = [] while len(draws) < samples: # update early_lambda early_lambda = np.random.gamma(disasters_array[:tau - 1851].sum() + 1, 1 / (tau - 1851 + 10)) draws.append([early_lambda, late_lambda, tau]) # update late_lambda late_lambda = np.random.gamma(disasters_array[tau - 1851 + 1:].sum() + 1, 1 / (1962 - tau + 10)) draws.append([early_lambda, late_lambda, tau]) # update tau tau_probs = np.empty(n_years) for t in range(n_years): tau_probs[t] = (gamma_pdf(early_lambda, disasters_array[:t].sum() + 1, t + 10) * gamma_pdf(late_lambda, disasters_array[t:].sum() + 1, n_years - t + 10)) tau = np.random.choice(years, p=tau_probs / tau_probs.sum()) draws.append([early_lambda, late_lambda, tau]) return np.array(draws)[:samples] ###Output _____no_output_____ ###Markdown Checking our workWe compare the Gibbs sampler to the PyMC3 model -- this one goes a bit faster, but maybe it took me longer to write! ###Code %%time draws = gibbs_sample_disaster(1000) draws.mean(axis=0) # early_lambda, late_lambda, change_point %%time with coal_disaster_model(): trace = pm.sample() pm.summary(trace, varnames=['early_lambda', 'late_lambda', 'change_point', ]) ###Output /Users/twiecki/miniconda3/envs/bayes_course/lib/python3.7/site-packages/pymc3/stats/__init__.py:35: UserWarning: Keyword argument `varnames` renamed to `var_names`, and will be removed in pymc3 3.9 "pymc3 3.9".format(old=old, new=new) /Users/twiecki/miniconda3/envs/bayes_course/lib/python3.7/site-packages/arviz/data/io_pymc3.py:89: FutureWarning: Using `from_pymc3` without the model will be deprecated in a future release. Not using the model will return less accurate and less useful results. Make sure you use the model argument or call from_pymc3 within a model context. FutureWarning, ###Markdown More contrived exampleThis example shows how you might use some knowledge of conjugate distributions to start to automate a Gibbs sampler.Suppose we have a generative model:$$w_1 \sim \mathcal{N}(0, 1) \\w_2 \sim \mathcal{N}(0, 1) \\x \sim \mathcal{N}(w_1 + w_2, 1)$$Then we observe $x$, and wish to compute $p(w_1, w_2 | x)$.We will do this by inializing at some point $(w_1^0, w_2^0)$, then 1. drawing $w_1^1 \sim p(w_1 | w_2^0, x)$, 2. drawing $w_2^1 \sim p(w_2 | w_1^1, x)$We now have samples $\{ (w_1^0, w_2^0),(w_1^1, w_2^0),(w_1^1, w_2^1) \}$, and we go back and sample $w_1^2$.We are going to use the following fact:If $x \sim \mathcal{N}(\mu, \sigma)$ and $y \sim \mathcal{N}(x, s)$, then $$x | y \sim \mathcal{N}\left(\frac{1}{\sigma + s} (\sigma y + s \mu), \frac{1}{\sigma + s}\right),$$which collapses to $$x | y \sim \mathcal{N}\left(\frac{y}{2}, \frac{1}{2}\right),$$when $\sigma = s = 1$ and $\mu = 0$. We can use this to make our update rule below. ###Code def gibbs_sample(draws, init, observed): current = init.copy() samples = np.empty((draws, 2)) for idx in range(draws): residual = observed - current[(idx + 1) % 2] current[idx % 2] = 0.5 * (np.random.randn() + residual) samples[idx] = current.copy() return samples %time samples = gibbs_sample(2_000, np.zeros(2), 1) # fast! 
###Output CPU times: user 7.35 ms, sys: 2.19 ms, total: 9.54 ms Wall time: 9.11 ms ###Markdown Demonstrating that the Gibbs sampler works, and maybe an easier way to do itWe can just implement the same model with PyMC3. It does not always compare so favorably, but this is pretty nice. ###Code %%time with pm.Model(): w_1 = pm.Normal('w_1') w_2 = pm.Normal('w_2') x = pm.Normal('x', w_1 + w_2, 1, observed=1) trace = pm.sample() fig, axes = plt.subplots(ncols=2, figsize=(10, 4), sharex=True, sharey=True) axes[0].plot(*samples.T, '.', alpha=0.2) axes[1].plot(trace['w_1'], trace['w_2'], '.', alpha=0.2); print(samples.mean(axis=0), [trace['w_1'].mean(), trace['w_2'].mean()]) ###Output [0.29751545 0.37267921] [0.34074741103727174, 0.3224728425827269] ###Markdown MCMC BasicsUncertainty may play an important role in business decisions. At the end of the day, our goal is to evaluate some *expectation* in the presence of uncertainty. Inverse CDF samplingGiven a probability density function, $p(x)$, the cumulative density function is given by $$\operatorname{cdf}(x) = \int_0^x p(t)~dt$$Note that the value $\operatorname{cdf}(x)$ is "the probability that a value is less than $x$", and is between 0 and 1. ###Code rv = st.norm(0, 1) t = np.linspace(-4, 4, 300) fig, axes = plt.subplots(ncols=2, figsize=(15, 5)) axes[0].plot(t, rv.pdf(t)) axes[0].set_title('Normal probability density function') axes[1].plot(t, rv.cdf(t)) axes[1].set_title('Normal cumulative density function') ###Output _____no_output_____ ###Markdown If we can *invert* the cumulative density function, we have a function $\operatorname{cdf}^{-1}(t)$, where $0 \leq t \leq 1$. We can use this function to draw random values:1. Draw $u \sim U(0, 1)$2. Use $y = \operatorname{cdf}^{-1}(u)$ as your sample ###Code np.random.seed(0) rv = st.norm(0, 1) t = np.linspace(-4, 4, 300) u = np.random.rand() fig, ax = plt.subplots(figsize=(10, 7)) ax.plot(t, rv.cdf(t), color='C0') ax.text(t.min() + 0.1, u + 0.02, '$u$', fontdict={"fontsize": 24}) ax.hlines(u, t.min(), rv.ppf(u), linestyles='dashed', color='C0') ax.vlines(rv.ppf(u), u, 0, linestyles='dashed', color='C0') bg_color = ax.get_facecolor() ax.plot(rv.ppf(u), u, 'o', mfc=bg_color, ms=15) ax.text(rv.ppf(u) + 0.1, 0.02, 'y', fontdict={"fontsize": 24}) ax.set_xlim(t.min(), t.max()) ax.set_ylim(0, 1) ax.set_title('Inverse CDF sampling'); ###Output _____no_output_____ ###Markdown Inverse CDF exercise: Fill out the following function that implements inverse CDF sampling. There is a cell below to visually check your implementation. ###Code def sample(draws, inv_cdf): """Draw samples using the inverse CDF of a distribution. Parameters ---------- draws : int Number of draws to return inv_cdf : function Gives the percentile of the distribution the argument falls in. This is vectorized, like in `scipy.stats.norm.ppf`""" # output should be an array of size (draws,), distributed according to inv_cdf #################### return np.random.rand(draws) # This is wrong, but it runs! 
#################### fig, axes = plt.subplots(ncols=2, figsize=(15, 5), sharex=True, sharey=True) draws = 10_000 # Two histograms should look the same axes[0].hist(st.norm().rvs(draws), bins='auto', density=True) axes[1].hist(sample(draws, st.norm().ppf), bins='auto', density=True); ###Output _____no_output_____ ###Markdown Inverse CDF exercise (calculus required)The probability density function of the exponential distribution is $$p(x | \lambda) = \lambda e^{-\lambda x}$$Calculate the cumulative density function, invert it, and use the `sample` function above to sample from the exponential function.Again, there is a plot below to check your implementation. ###Code def inv_cdf_exponential(u, lam=1): # Should return an array of shape `u.shape` #################### return u # wrong but compiles #################### fig, axes = plt.subplots(ncols=2, figsize=(15, 5), sharex=True, sharey=True) draws = 10_000 # Two histograms should look the same axes[0].hist(st.expon(scale=1.).rvs(draws), bins='auto', density=True) axes[1].hist(sample(draws, inv_cdf_exponential), bins='auto', density=True); ###Output _____no_output_____ ###Markdown Hints for previous exerciseThe cumulative density function is$$\operatorname{cdf}(x) = 1-e^{-\lambda x}.$$Invert the cumulative density function by solving $$y = 1-e^{-\lambda x}$$ for $x$ in terms of $y$. Rejection SamplingMost integrals are hard or impossible to do. Also, if we are iterating on a statistical model, we may want a method that works without requiring rederiving a formula for generating samples. Further, in Bayesian data analysis, we may not know a *normalizing constant*: we may only know $$\tilde{p}(x) = \frac{1}{Z_p}p(x),$$for some constant $Z_p$ ("constant" here is with respect to $x$). In order to sample, first we1. Choose a proposal distribution $q$ that you know how to sample from2. Choose a number $k$, so that $kq(x) \geq \tilde{p}(x)$ for all $x$Then, we repeatedly 1. Draw a $z$ from $q$2. Draw a $u$ from $\operatorname{Uniform}(0, kq(z))$3. If $\tilde{p} > u$, accept the draw, otherwise, reject.Importantly, every "rejection" is wasted computation! We will explore methods for having less wasted computation later. ###Code def mixture_of_gaussians(): rvs = (st.norm(-3, 1), st.norm(0, 1), st.norm(3, 1)) probs = (0.5, 0.2, 0.3) def pdf(x): return sum(p * rv.pdf(x) for p, rv in zip(probs, rvs)) return pdf np.random.seed(6) pdf = mixture_of_gaussians() q = st.norm(0, 3) z = q.rvs() u = np.random.rand() * q.pdf(z) fig, ax = plt.subplots(figsize=(10, 5), constrained_layout=True) t = np.linspace(-10, 10, 500) ax.plot(t, pdf(t), '-', label='$q(x)$') ax.fill_between(t, 0, pdf(t), alpha=0.2) ax.plot(t, 3 * q.pdf(t), '-', label='$3 \cdot \mathcal{N}(z | 0, 3)$') ax.fill_between(t, pdf(t), 3 * q.pdf(t), alpha=0.2) bg_color = ax.get_facecolor() ax.vlines(z, 0, pdf(z), linestyles='dashed', color='green') ax.vlines(z, pdf(z), 3 * q.pdf(z), linestyles='dashed', color='red') # ax.plot(z, 0, 'o', label='z', ms=15, mfc=bg_color) ax.plot(z, pdf(z), 'o', color='C0', ms=15, mfc=bg_color) ax.plot(z, u, 'rx', label='$u \sim U(0, 3\cdot\mathcal{N}(z | 0, 3))$', ms=15, mfc=bg_color) ax.plot(z, 3 * q.pdf(z), 'o', color='C1', ms=15, mfc=bg_color) # ax.plot(z * np.ones(4), np.array([0, pdf(z), u, 3 * q.pdf(z)]), 'ko', ms=15, mfc=bg_color) ax.set_ylim(bottom=0) ax.set_xlim(t.min(), t.max()) ax.legend(); ###Output _____no_output_____ ###Markdown Rejection Sampling ExerciseSample from the pdf returned by `mixture_of_gaussians` using rejection sampling. 
We will implement this as a Python generator, and yield the proposed draw, `z`, as well as whether it was accepted. You should assume `proposal_dist` comes from `scipy.stats`, so it has a `.rvs()` method that samples, and a `.pdf` method that evaluates the probability density function at a point.If $kq(x)$ is not larger than $\tilde{p}(x)$, throw an exception!The cell below has a plot to check your implementation. ###Code def rejection_sampler(pdf, proposal_dist, k): """ Yields proposals, and whether that proposal should be accepted or rejected """ while True: z = proposal_dist.rvs() #################### accept = True yield z, accept #################### def gen_samples(draws, sampler): """An example of how to use the rejection sampler above.""" samples = [] for n_draws, (z, accept) in enumerate(sampler, 1): if accept: samples.append(z) if len(samples) == draws: return np.array(samples), n_draws %%time pdf = mixture_of_gaussians() proposal_dist = st.norm(0, 3) k = 3 samples, draws = gen_samples(10_000, rejection_sampler(pdf, proposal_dist, k)) fig, ax = plt.subplots(figsize=(10, 7)) t = np.linspace(samples.min(), samples.max(), 500) # This histogram should look very similar to the pdf that is plotted ax.hist(samples, bins='auto', density=True) ax.plot(t, pdf(t)) ax.set_title(f'{samples.size:,d} draws from the pdf with {100 * samples.size / draws:.2f}% efficiency'); ###Output CPU times: user 812 ms, sys: 28 ms, total: 840 ms Wall time: 810 ms ###Markdown Exercise: How does a rejection sampler scale with dimension?Use as your "unknown distribution" a multivariate Gaussian with identity covariance matrix, and use as your proposal distribution a multivariate Gaussian with covariance matrix `1.1 * I`. - Around what percent of samples are accepted with dimension 1? - 10 dimensions? - 100 dimensions? - What happens if you try to use 1,000 dimensions? ###Code #################### def run_experiment(dims, trials=1_000_000): pdf = st.multivariate_normal(mean=np.zeros(dims), cov=np.eye(dims)).pdf prop = st.multivariate_normal(mean=np.zeros(dims), cov=1.1 * np.eye(dims)) k = pdf(0) / prop.pdf(0) samples = prop.rvs(trials) sample_pdfs = prop.pdf(samples) u = np.random.uniform(low=0, high=k * sample_pdfs) accept = pdf(samples) > u return accept.mean() #################### ###Output _____no_output_____ ###Markdown Importance sampling is useful but we won't cover it!It produces _weighted_ samples, so that the output is samples and weights. See 11.1.4 in Bishop's "Pattern Recognition and Machine Learning". Introduction to MCMCOne way to intuitively waste less computation is to use knowledge from your current sample to inform your next proposal: this is called a *Markov chain*. Let $t$ be the index of our current sample, $x_t$ be our current sample, and $\operatorname{pdf}(x_t)$ be our probability density function evaluated at the current sample. We will define a *transition probability* that is conditioned on our current position: $T(x_{t + 1} | x_t)$. 
It turns out that a Markov chain will sample from $\operatorname{pdf}$ if:- $T$ is ergodic (sort of techinical -- roughly $T$ is aperiodic and can explore the whole space)- The chain satisfies *detailed balance*, which means $\operatorname{pdf}(x_t)T(x_{t+1} | x_t) = \operatorname{pdf}(x_{t + 1})T(x_{t} | x_{t + 1})$.This second criteria inspires the *Metropolis acceptance criteria*: If we use any proposal with density function $\operatorname{prop}$, we use this criterion to "correct" the transition probability to satisfy detailed balance:$$A(x_{t + 1} | x_t) = \min\left\{1, \frac{\operatorname{pdf}(x_{t + 1})}{\operatorname{pdf}(x_{t})}\frac{\operatorname{prop}(x_{t} | x_{t + 1})}{\operatorname{prop}(x_{t + 1} | x_t)} \right\}$$Now the *Metropolis-Hastings Algorithm* isInitialize at some point $x_0$. For each iteration:1. Draw $\tilde{x}_{t + 1} \sim \operatorname{prop}(x_t)$2. Draw $u \sim \operatorname{Uniform}(0, 1)$3. If $u < A(\tilde{x}_{t + 1} | x_t)$, then $x_{t + 1} = \tilde{x}_{t + 1}$. Otherwise, $x_{t + 1} = x_t$.This is "tested" in the following cell. ###Code def metropolis_hastings(pdf, proposal, init=0): """Yields a sample, and whether it was accepted. Notice that, unlike the rejection sampler, even when the second argument is `False`, we use the sample! """ current = init while True: prop_dist = proposal(current) prop = prop_dist.rvs() p_accept = min(1, pdf(prop) / pdf(current) * proposal(prop).pdf(current) / prop_dist.pdf(prop)) accept = np.random.rand() < p_accept if accept: current = prop yield current, accept def gen_samples(draws, sampler): """An example of using the metropolis_hastings API.""" samples = np.empty(draws) accepts = 0 for idx, (z, accept) in takewhile(lambda j: j[0] < draws, enumerate(sampler)): accepts += int(accept) samples[idx] = z return samples, accepts %%time pdf = mixture_of_gaussians() proposal_dist = st.norm(0, 3) k = 3 samples, accepts = gen_samples(10_000, metropolis_hastings(pdf, lambda x: st.norm(x, 1))) fig, ax = plt.subplots(figsize=(10, 7)) t = np.linspace(samples.min(), samples.max(), 500) ax.hist(samples, bins='auto', density=True) ax.plot(t, pdf(t)) ax.set_title(f'{samples.size:,d} draws from the pdf with {100 * accepts / samples.size :.2f}% accept rate'); ###Output CPU times: user 43.8 s, sys: 168 ms, total: 44 s Wall time: 44 s ###Markdown MCMC ExerciseThis implementation is wildly inefficient! We will speed it up by fixing the proposal distribution as a Gaussian centered at the previous point (this is fairly standard). Specifically,$$x_{t+1} \sim \mathcal{N}( x_t, \sigma),$$so$$\operatorname{prop}(x_{t+1} | x_{t}) = \mathcal{N}(x_{t + 1} | x_t, \sigma)$$We call $\sigma$ the *step size*.1. The Metropolis-Hastings acceptance criteria simplifies quite a bit - work out what $A(x_{t + 1} | x_t)$ is now.2. scipy.stats is doing a lot of work: `st.norm().rvs()` is ~1000x slower than `np.random.randn()`. Rewrite `metropolis_hastings` with the acceptance criteria, and without using `st.norm().rvs()` to provide proposals. 
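The speed claim in item 2 is easy to check directly. A minimal timing cell (added here as an aside, not part of the original exercise): the exact ratio depends on the machine, but much of the gap comes from the bookkeeping `scipy.stats` does around every call. ###Code
# Rough timing comparison between scipy.stats draws and raw numpy draws.
# Exact numbers vary by machine; the point is the order of magnitude.
import numpy as np
import scipy.stats as st

%timeit st.norm().rvs()    # builds and validates a frozen distribution on each call
%timeit np.random.randn()  # draws directly from numpy's global generator
###Output _____no_output_____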
###Code def metropolis_hastings(pdf, step_size, init=0): current = init while True: #################### accept = True #################### yield current, accept %%time pdf = mixture_of_gaussians() proposal_dist = st.norm(0, 3) k = 3 samples, accepts = gen_samples(10_000, metropolis_hastings(pdf, 1)) fig, ax = plt.subplots(figsize=(10, 7)) t = np.linspace(samples.min(), samples.max(), 500) ax.hist(samples, bins='auto', density=True) ax.plot(t, pdf(t)) ax.set_title(f'{samples.size:,d} draws from the pdf with {100 * accepts / samples.size :.2f}% accept rate') ###Output CPU times: user 48 ms, sys: 0 ns, total: 48 ms Wall time: 41.8 ms ###Markdown MCMC Exercises 21. Find a step size so that the acceptance rate is ~25%2. Find a step size so that the acceptance rate is ~95%3. What is the general relationship between step size and acceptance rate? Bonus exerciseWrite a routine for finding a step size that gives a specific acceptance rate for Metropolis-Hastings. It may be helpful to return the acceptance probability instead of (or in addition to) the `accept` boolean. Literature suggests the overly specific 23.4% acceptance rate as a good target. PyMC3 aims for anything between 10% and 90%. Gibbs SamplingIf you can sample from all the marginal distributions, you can implement a sampler pretty efficiently just using those.The general idea is to:1. Initialize $\theta^0 = (\theta_1^0, \theta_2^0, \ldots, \theta_n^0)$, and $j = 0$2. For each $k = 1, 2, \ldots, n$: - Set $\theta_k^j \sim \pi(\theta_k^j | \theta_1^j, \theta_2^j, \ldots, \theta_n^j)$3. Increment $j$, and repeat as long as desiredThis is pretty tricky to automate, since you need to know all of these conditional distributions! That said, this is often seen in science when a sampler is hand-built to do inference with a specific model. In that case, each conditional distribution might be computed by hand. Coal mining exampleWe have a time series of recorded coal mining disasters in the UK from 1851 to 1961.Occurrences of disasters in the time series is thought to be derived from a Poisson process with a large rate parameter in the early part of the time series, and from one with a smaller rate in the later part. We are interested in locating the change point in the series, which perhaps is related to changes in mining safety regulations. ###Code disasters_array = np.array( [4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6, 3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5, 2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0, 1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2, 3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1]) years = np.arange(1851, 1962, dtype=int) fig, ax = plt.subplots() ax.vlines(years, 0, disasters_array, lw=6) ax.set_xlim(years.min() - 1, years.max() + 1) ax.set_ylim(bottom=0); ###Output _____no_output_____ ###Markdown Writing down the model and computing conditional distributionsIt is perhaps easiest to write the model as a PyMC3 model. In notation, we might write$$y_t \sim \operatorname{Poisson}(\lambda_t), t=1851, \ldots, 1962 \\\lambda_t = \left\{ \begin{array}{}\lambda_1 \text{ for } t \leq \tau \\ \lambda_2 \text{ for } t > \tau \end{array}\right. 
\\\lambda_j \sim \operatorname{Gamma}(1, 10) \\\tau \sim \operatorname{DiscreteUniform}(1851, 1962)$$ ###Code import pymc3 as pm def coal_disaster_model(): with pm.Model() as model: early_lambda = pm.Gamma('early_lambda', 1, 10) late_lambda = pm.Gamma('late_lambda', 1, 10) change_point = pm.DiscreteUniform('change_point', 1851, 1962) lam = pm.Deterministic('lam', pm.math.where(years > change_point, late_lambda, early_lambda)) pm.Poisson('rate', lam, observed=disasters_array) return model pm.model_to_graphviz(coal_disaster_model()) ###Output _____no_output_____ ###Markdown Now we need to go out and compute the conditional distributions:$$p(\tau | \lambda_1, \lambda_2, y_t) \\ p(\lambda_1 | \tau, \lambda_2, y_t) \\ p(\lambda_2 | \tau, \lambda_1, y_t)$$In this case, we can do some arithmetic, look up these distributions, and compute$$p(\tau | \lambda_1, \lambda_2, y_t) = \operatorname{Categorical}\left( \frac{\lambda_1^{\sum_{t=1851}^{\tau} y_t +\alpha-1} e^{-(\beta+\tau)\lambda_1} \lambda_2^{\sum_{t=\tau+1}^{1962} y_i + \alpha-1} e^{-\beta\lambda_2}}{\sum_{k=1851}^{1962} \lambda_1^{\sum_{t=1851}^{\tau} y_t +\alpha-1} e^{-(\beta+\tau)\lambda_1} \lambda_2^{\sum_{t=\tau+1}^{1962} y_i + \alpha-1} e^{-\beta\lambda_2}} \right) \\ p(\lambda_1 | \tau, \lambda_2, y_t) = \operatorname{Gamma}\left(\sum_{t=1851}^{\tau} y_t + \alpha, \tau + \beta\right)\\ p(\lambda_2 | \tau, \lambda_1, y_t) = \operatorname{Gamma}\left(\sum_{t=\tau + 1}^{1962} y_t + \alpha, 1962 - \tau + \beta\right)$$So far so good! Now here's an implementation! ###Code def gibbs_sample_disaster(samples, tau=1900, early_lambda=6, late_lambda=2): """Can supply different initial conditions!""" draws = np.empty((3, samples)) gamma_pdf = lambda lam, a, b: lam**(a-1) * np.exp(-b*lam) n_years = disasters_array.shape[0] years = np.arange(1851, 1962, dtype=int) draws = [] while len(draws) < samples: # update early_lambda early_lambda = np.random.gamma(disasters_array[:tau - 1851].sum() + 1, 1 / (tau - 1851 + 10)) draws.append([early_lambda, late_lambda, tau]) # update late_lambda late_lambda = np.random.gamma(disasters_array[tau - 1851 + 1:].sum() + 1, 1 / (1962 - tau + 10)) draws.append([early_lambda, late_lambda, tau]) # update tau tau_probs = np.empty(n_years) for t in range(n_years): tau_probs[t] = (gamma_pdf(early_lambda, disasters_array[:t].sum() + 1, t + 10) * gamma_pdf(late_lambda, disasters_array[t:].sum() + 1, n_years - t + 10)) tau = np.random.choice(years, p=tau_probs / tau_probs.sum()) draws.append([early_lambda, late_lambda, tau]) return np.array(draws)[:samples] ###Output _____no_output_____ ###Markdown Checking our workWe compare the Gibbs sampler to the PyMC3 model -- this one goes a bit faster, but maybe it took me longer to write! 
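Before comparing summary numbers, it helps to look at the draws themselves. A minimal sketch, assuming `gibbs_sample_disaster` and the earlier imports are available; the burn-in length is an arbitrary choice made here, not something specified in the notebook: ###Code
import matplotlib.pyplot as plt

# `gibbs_sample_disaster` returns an array of shape (samples, 3);
# the columns are early_lambda, late_lambda, and the change point tau.
burn_in = 200  # arbitrary; inspect the traces to choose this properly
gibbs_draws = gibbs_sample_disaster(2_000)[burn_in:]

fig, axes = plt.subplots(ncols=3, figsize=(15, 4))
for ax, column, name in zip(axes, gibbs_draws.T,
                            ['early_lambda', 'late_lambda', 'change_point']):
    ax.hist(column, bins='auto', density=True)
    ax.set_title(name)
###Output _____no_output_____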
###Code %%time draws = gibbs_sample_disaster(1000) draws.mean(axis=0) # early_lambda, late_lambda, change_point %%time with coal_disaster_model(): trace = pm.sample() pm.summary(trace, varnames=['early_lambda', 'late_lambda', 'change_point', ]) ###Output /home/colin/miniconda3/envs/bayes_course/lib/python3.7/site-packages/pymc3/stats/__init__.py:21: UserWarning: Keyword argument `varnames` renamed to `var_names`, and will be removed in pymc3 3.9 "pymc3 3.9".format(old=old, new=new) ###Markdown More contrived exampleThis example shows how you might use some knowledge of conjugate distributions to start to automate a Gibbs sampler.Suppose we have a generative model:$$w_1 \sim \mathcal{N}(0, 1) \\w_2 \sim \mathcal{N}(0, 1) \\x \sim \mathcal{N}(w_1 + w_2, 1)$$Then we observe $x$, and wish to compute $p(w_1, w_2 | x)$.We will do this by inializing at some point $(w_1^0, w_2^0)$, then 1. drawing $w_1^1 \sim p(w_1 | w_2^0, x)$, 2. drawing $w_2^1 \sim p(w_2 | w_1^1, x)$We now have samples $\{ (w_1^0, w_2^0),(w_1^1, w_2^0),(w_1^1, w_2^1) \}$, and we go back and sample $w_1^2$.We are going to use the following fact:If $x \sim \mathcal{N}(\mu, \sigma)$ and $y \sim \mathcal{N}(x, s)$, then $$x | y \sim \mathcal{N}\left(\frac{1}{\sigma + s} (\sigma y + s \mu), \frac{1}{\sigma + s}\right),$$which collapses to $$x | y \sim \mathcal{N}\left(\frac{y}{2}, \frac{1}{2}\right),$$when $\sigma = s = 1$ and $\mu = 0$. We can use this to make our update rule below. ###Code def gibbs_sample(draws, init, observed): current = init.copy() samples = np.empty((draws, 2)) for idx in range(draws): residual = observed - current[(idx + 1) % 2] current[idx % 2] = 0.5 * (np.random.randn() + residual) samples[idx] = current.copy() return samples %time samples = gibbs_sample(2_000, np.zeros(2), 1) # fast! ###Output CPU times: user 16 ms, sys: 0 ns, total: 16 ms Wall time: 10.4 ms ###Markdown Demonstrating that the Gibbs sampler works, and maybe an easier way to do itWe can just implement the same model with PyMC3. It does not always compare so favorably, but this is pretty nice. ###Code %%time with pm.Model(): w_1 = pm.Normal('w_1') w_2 = pm.Normal('w_2') x = pm.Normal('x', w_1 + w_2, 1, observed=1) trace = pm.sample() fig, axes = plt.subplots(ncols=2, figsize=(10, 4), sharex=True, sharey=True) axes[0].plot(*samples.T, '.', alpha=0.2) axes[1].plot(trace['w_1'], trace['w_2'], '.', alpha=0.2); print(samples.mean(axis=0), [trace['w_1'].mean(), trace['w_2'].mean()]) ###Output [0.3451967 0.31183137] [0.3454329361682147, 0.3341319264840685] ###Markdown MCMC BasicsUncertainty may play an important role in business decisions. At the end of the day, our goal is to evaluate some *expectation* in the presence of uncertainty. Inverse CDF samplingGiven a probability density function, $p(x)$, the cumulative density function is given by $$\operatorname{cdf}(x) = \int_0^x p(t)~dt$$Note that the value $\operatorname{cdf}(x)$ is "the probability that a value is less than $x$", and is between 0 and 1. ###Code rv = st.norm(0, 1) t = np.linspace(-4, 4, 300) fig, axes = plt.subplots(ncols=2, figsize=(15, 5)) axes[0].plot(t, rv.pdf(t)) axes[0].set_title('Normal probability density function') axes[1].plot(t, rv.cdf(t)) axes[1].set_title('Normal cumulative density function') ###Output _____no_output_____ ###Markdown If we can *invert* the cumulative density function, we have a function $\operatorname{cdf}^{-1}(t)$, where $0 \leq t \leq 1$. We can use this function to draw random values:1. Draw $u \sim U(0, 1)$2. 
Use $y = \operatorname{cdf}^{-1}(u)$ as your sample ###Code np.random.seed(0) rv = st.norm(0, 1) t = np.linspace(-4, 4, 300) u = np.random.rand() fig, ax = plt.subplots(figsize=(10, 7)) ax.plot(t, rv.cdf(t), color='C0') ax.text(t.min() + 0.1, u + 0.02, '$u$', fontdict={"fontsize": 24}) ax.hlines(u, t.min(), rv.ppf(u), linestyles='dashed', color='C0') ax.vlines(rv.ppf(u), u, 0, linestyles='dashed', color='C0') bg_color = ax.get_facecolor() ax.plot(rv.ppf(u), u, 'o', mfc=bg_color, ms=15) ax.text(rv.ppf(u) + 0.1, 0.02, 'y', fontdict={"fontsize": 24}) ax.set_xlim(t.min(), t.max()) ax.set_ylim(0, 1) ax.set_title('Inverse CDF sampling'); ###Output _____no_output_____ ###Markdown Inverse CDF exercise: Fill out the following function that implements inverse CDF sampling. There is a cell below to visually check your implementation. ###Code def sample(draws, inv_cdf): """Draw samples using the inverse CDF of a distribution. Parameters ---------- draws : int Number of draws to return inv_cdf : function Gives the percentile of the distribution the argument falls in. This is vectorized, like in `scipy.stats.norm.ppf`""" # output should be an array of size (draws,), distributed according to inv_cdf u = np.random.rand(draws) return inv_cdf(u) fig, ax = plt.subplots(figsize=(10, 7)) # Should look normally distributed! ax.hist(sample(1_000, st.norm().ppf), bins='auto', density=True); ###Output _____no_output_____ ###Markdown Inverse CDF exercise (calculus required)The probability density function of the exponential distribution is $$p(x | \lambda) = \lambda e^{-\lambda x}$$Calculate the cumulative density function, invert it, and use the `sample` function above to sample from the exponential function.Again, there is a plot below to check your implementation. ###Code def inv_cdf_exponential(u, lam=1): # Should return an array of shape `u.shape` return np.zeros(*u.shape) fig, axes = plt.subplots(ncols=2, figsize=(15, 5), sharex=True, sharey=True) draws = 10_000 # Two histograms should look the same axes[0].hist(st.expon(scale=1.).rvs(draws), bins='auto', density=True) axes[1].hist(sample(draws, inv_cdf_exponential), bins='auto', density=True); ###Output _____no_output_____ ###Markdown Hints for previous exerciseThe cumulative density function is$$\operatorname{cdf}(x) = 1-e^{-\lambda x}.$$Invert the cumulative density function by solving $$y = 1-e^{-\lambda x}$$ for $x$ in terms of $y$. Rejection SamplingMost integrals are hard or impossible to do. Also, if we are iterating on a statistical model, we may want a method that works without requiring rederiving a formula for generating samples. Further, in Bayesian data analysis, we may not know a *normalizing constant*: we may only know $$\tilde{p}(x) = \frac{1}{Z_p}p(x),$$for some constant $Z_p$ ("constant" here is with respect to $x$). In order to sample, first we1. Choose a proposal distribution $q$ that you know how to sample from2. Choose a number $k$, so that $kq(x) \geq \tilde{p}(x)$ for all $x$Then, we repeatedly 1. Draw a $z$ from $q$2. Draw a $u$ from $\operatorname{Uniform}(0, kq(z))$3. If $\tilde{p} > u$, accept the draw, otherwise, reject.Importantly, every "rejection" is wasted computation! We will explore methods for having less wasted computation later. 
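As a reference point, here is a direct transcription of the three steps above into code. This is only a sketch, not the implementation used later in the notebook: it assumes `p_tilde` is the (possibly unnormalized) density as a plain function and `q` is a frozen `scipy.stats` distribution. ###Code
import numpy as np
import scipy.stats as st

def rejection_sample(p_tilde, q, k, draws):
    """Literal transcription of the three steps listed above."""
    samples = []
    while len(samples) < draws:
        z = q.rvs()                             # 1. draw a proposal z from q
        u = np.random.uniform(0, k * q.pdf(z))  # 2. uniform height under k*q(z)
        if p_tilde(z) > u:                      # 3. accept if it falls under p_tilde
            samples.append(z)
    return np.array(samples)

# Example usage: 2.5 * N(x | 0, scale=2) >= N(x | 0, scale=1) everywhere, so k=2.5 is valid.
# rejection_sample(st.norm(0, 1).pdf, st.norm(0, 2), k=2.5, draws=1_000)
###Output _____no_output_____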
###Code def mixture_of_gaussians(): rvs = (st.norm(-3, 1), st.norm(0, 1), st.norm(3, 1)) probs = (0.5, 0.2, 0.3) def pdf(x): return sum(p * rv.pdf(x) for p, rv in zip(probs, rvs)) return pdf np.random.seed(6) pdf = mixture_of_gaussians() q = st.norm(0, 3) z = q.rvs() u = np.random.rand() * q.pdf(z) fig, ax = plt.subplots(figsize=(10, 5), constrained_layout=True) t = np.linspace(-10, 10, 500) ax.plot(t, pdf(t), '-', label='$q(x)$') ax.fill_between(t, 0, pdf(t), alpha=0.2) ax.plot(t, 3 * q.pdf(t), '-', label='$3 \cdot \mathcal{N}(z | 0, 3)$') ax.fill_between(t, pdf(t), 3 * q.pdf(t), alpha=0.2) bg_color = ax.get_facecolor() ax.vlines(z, 0, pdf(z), linestyles='dashed', color='green') ax.vlines(z, pdf(z), 3 * q.pdf(z), linestyles='dashed', color='red') # ax.plot(z, 0, 'o', label='z', ms=15, mfc=bg_color) ax.plot(z, pdf(z), 'o', color='C0', ms=15, mfc=bg_color) ax.plot(z, u, 'rx', label='$u \sim U(0, 3\cdot\mathcal{N}(z | 0, 3))$', ms=15, mfc=bg_color) ax.plot(z, 3 * q.pdf(z), 'o', color='C1', ms=15, mfc=bg_color) # ax.plot(z * np.ones(4), np.array([0, pdf(z), u, 3 * q.pdf(z)]), 'ko', ms=15, mfc=bg_color) ax.set_ylim(bottom=0) ax.set_xlim(t.min(), t.max()) ax.legend(); ###Output _____no_output_____ ###Markdown Rejection Sampling ExerciseSample from the pdf returned by `mixture_of_gaussians` using rejection sampling. We will implement this as a Python generator, and yield the proposed draw, `z`, as well as whether it was accepted. You should assume `proposal_dist` comes from `scipy.stats`, so it has a `.rvs()` method that samples, and a `.pdf` method that evaluates the probability density function at a point.If $kq(x)$ is not larger than $\tilde{p}(x)$, throw an exception!The cell below has a plot to check your implementation. ###Code def rejection_sampler(pdf, proposal_dist, k): """ Yields proposals, and whether that proposal should be accepted or rejected """ while True: z = proposal_dist.rvs() u = np.random.uniform(0, k * proposal_dist.pdf(z)) accept = u < pdf(z) yield z, accept def gen_samples(draws, sampler): """An example of how to use the rejection sampler above.""" samples = [] for n_draws, (z, accept) in enumerate(sampler, 1): if accept: samples.append(z) if len(samples) == draws: return np.array(samples), n_draws %%time pdf = mixture_of_gaussians() proposal_dist = st.norm(0, 3) k = 3 samples, draws = gen_samples(10_000, rejection_sampler(pdf, proposal_dist, k)) fig, ax = plt.subplots(figsize=(10, 7)) t = np.linspace(samples.min(), samples.max(), 500) # This histogram should look very similar to the pdf that is plotted ax.hist(samples, bins='auto', density=True) ax.plot(t, pdf(t)) ax.set_title(f'{samples.size:,d} draws from the pdf with {100 * samples.size / draws:.2f}% efficiency'); ###Output CPU times: user 10.6 s, sys: 178 ms, total: 10.8 s Wall time: 10.6 s ###Markdown Exercise: How does a rejection sampler scale with dimension?Use as your "unknown distribution" a multivariate Gaussian with identity covariance matrix, and use as your proposal distribution a multivariate Gaussian with covariance matrix `1.1 * I`. - Around what percent of samples are accepted with dimension 1? - 10 dimensions? - 100 dimensions? - What happens if you try to use 1,000 dimensions? 
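Before running the experiment, it is worth noting what to expect; this calculation is an aside added here, not part of the original exercise. When the target is normalized, the overall acceptance rate of a rejection sampler is $1/k$, and the smallest valid constant for these two Gaussians is $k = 1.1^{d/2}$ (the density ratio peaks at the origin), so the acceptance rate should fall off exponentially with dimension: ###Code
# Expected acceptance rate 1/k, with k = 1.1**(d/2) the smallest constant for
# which k * N(x | 0, 1.1 I) >= N(x | 0, I) holds everywhere.
for d in [1, 10, 100, 1000]:
    print(f"d = {d:4d}: expected acceptance rate ~ {1.1 ** (-d / 2):.3g}")
###Output _____no_output_____ ###Markdown At 1,000 dimensions the Gaussian densities used here also underflow to zero in double precision, so the ratio `pdf(0) / proposal_dist.pdf(0)` stops being well defined; this is one reason practical samplers work with log-densities.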
###Code def finite_sampler(attempts, sampler): samples = [] for n_draws, (z, accept) in takewhile(lambda j: j[0] < attempts, enumerate(sampler)): if accept: samples.append(z) return np.array(samples) dim = 1 pdf = st.multivariate_normal(np.zeros(dim), np.eye(dim)).pdf proposal_dist = st.multivariate_normal(np.zeros(dim), 1.1 * np.eye(dim)) k = pdf(0) / proposal_dist.pdf(0) sampler = rejection_sampler(pdf, proposal_dist, k) samples = finite_sampler(1_000, sampler) len(samples) ###Output _____no_output_____ ###Markdown Importance sampling is useful but we won't cover it!It produces _weighted_ samples, so that the output is samples and weights. See 11.1.4 in Bishop's "Pattern Recognition and Machine Learning". Introduction to MCMCOne way to intuitively waste less computation is to use knowledge from your current sample to inform your next proposal: this is called a *Markov chain*. Let $t$ be the index of our current sample, $x_t$ be our current sample, and $\operatorname{pdf}(x_t)$ be our probability density function evaluated at the current sample. We will define a *transition probability* that is conditioned on our current position: $T(x_{t + 1} | x_t)$. It turns out that a Markov chain will sample from $\operatorname{pdf}$ if:- $T$ is ergodic (sort of techinical -- roughly $T$ is aperiodic and can explore the whole space)- The chain satisfies *detailed balance*, which means $\operatorname{pdf}(x_t)T(x_{t+1} | x_t) = \operatorname{pdf}(x_{t + 1})T(x_{t} | x_{t + 1})$.This second criteria inspires the *Metropolis acceptance criteria*: If we use any proposal with density function $\operatorname{prop}$, we use this criterion to "correct" the transition probability to satisfy detailed balance:$$A(x_{t + 1} | x_t) = \min\left\{1, \frac{\operatorname{pdf}(x_{t + 1})}{\operatorname{pdf}(x_{t})}\frac{\operatorname{prop}(x_{t} | x_{t + 1})}{\operatorname{prop}(x_{t + 1} | x_t)} \right\}$$Now the *Metropolis-Hastings Algorithm* isInitialize at some point $x_0$. For each iteration:1. Draw $\tilde{x}_{t + 1} \sim \operatorname{prop}(x_t)$2. Draw $u \sim \operatorname{Uniform}(0, 1)$3. If $u < A(\tilde{x}_{t + 1} | x_t)$, then $x_{t + 1} = \tilde{x}_{t + 1}$. Otherwise, $x_{t + 1} = x_t$.This is "tested" in the following cell. ###Code def metropolis_hastings(pdf, proposal, init=0): """Yields a sample, and whether it was accepted. Notice that, unlike the rejection sampler, even when the second argument is `False`, we use the sample! 
""" current = init while True: prop_dist = proposal(current) prop = prop_dist.rvs() p_accept = min(1, pdf(prop) / pdf(current) * proposal(prop).pdf(current) / prop_dist.pdf(prop)) accept = np.random.rand() < p_accept if accept: current = prop yield current, accept def gen_samples(draws, sampler): """An example of using the metropolis_hastings API.""" samples = np.empty(draws) accepts = 0 for idx, (z, accept) in takewhile(lambda j: j[0] < draws, enumerate(sampler)): accepts += int(accept) samples[idx] = z return samples, accepts %%time pdf = mixture_of_gaussians() proposal_dist = st.norm(0, 3) k = 3 samples, accepts = gen_samples(10_000, metropolis_hastings(pdf, lambda x: st.norm(x, 1))) fig, ax = plt.subplots(figsize=(10, 7)) t = np.linspace(samples.min(), samples.max(), 500) ax.hist(samples, bins='auto', density=True) ax.plot(t, pdf(t)) ax.set_title(f'{samples.size:,d} draws from the pdf with {100 * accepts / samples.size :.2f}% accept rate'); ###Output CPU times: user 16.4 s, sys: 91.9 ms, total: 16.5 s Wall time: 16.5 s ###Markdown MCMC ExerciseThis implementation is wildly inefficient! We will speed it up by fixing the proposal distribution as a Gaussian centered at the previous point (this is fairly standard). Specifically,$$x_{t+1} \sim \mathcal{N}( x_t, \sigma),$$so$$\operatorname{prop}(x_{t+1} | x_{t}) = \mathcal{N}(x_{t + 1} | x_t, \sigma)$$We call $\sigma$ the *step size*.1. The Metropolis-Hastings acceptance criteria simplifies quite a bit - work out what $A(x_{t + 1} | x_t)$ is now.2. scipy.stats is doing a lot of work: `st.norm().rvs()` is ~1000x slower than `np.random.randn()`. Rewrite `metropolis_hastings` with the acceptance criteria, and without using `st.norm().rvs()` to provide proposals. ###Code def metropolis_hastings(pdf, step_size, init=0): current = init while True: prop = np.random.randn() * step_size + current p_accept = min(1, pdf(prop) / pdf(current)) accept = np.random.rand() < p_accept if accept: current = prop yield current, accept del k %%time pdf = mixture_of_gaussians() samples, accepts = gen_samples(10_000, metropolis_hastings(pdf, 1)) fig, ax = plt.subplots(figsize=(10, 7)) t = np.linspace(samples.min(), samples.max(), 500) ax.hist(samples, bins='auto', density=True) ax.plot(t, pdf(t)) ax.set_title(f'{samples.size:,d} draws from the pdf with {100 * accepts / samples.size :.2f}% accept rate') ###Output CPU times: user 4.66 s, sys: 64.4 ms, total: 4.72 s Wall time: 4.67 s ###Markdown MCMC Exercises 21. Find a step size so that the acceptance rate is ~25%2. Find a step size so that the acceptance rate is ~95%3. What is the general relationship between step size and acceptance rate? 
###Code %%time pdf = mixture_of_gaussians() samples, accepts = gen_samples(10_000, metropolis_hastings(pdf, 11.7)) fig, ax = plt.subplots(figsize=(10, 7)) t = np.linspace(samples.min(), samples.max(), 500) ax.hist(samples, bins='auto', density=True) ax.plot(t, pdf(t)) ax.set_title(f'{samples.size:,d} draws from the pdf with {100 * accepts / samples.size :.2f}% accept rate') %%time pdf = mixture_of_gaussians() samples, accepts = gen_samples(10_000, metropolis_hastings(pdf, 0.22)) fig, ax = plt.subplots(figsize=(10, 7)) t = np.linspace(samples.min(), samples.max(), 500) ax.hist(samples, bins='auto', density=True) ax.plot(t, pdf(t)) ax.set_title(f'{samples.size:,d} draws from the pdf with {100 * accepts / samples.size :.2f}% accept rate') ###Output CPU times: user 4.75 s, sys: 91.1 ms, total: 4.84 s Wall time: 4.77 s ###Markdown Bonus exerciseWrite a routine for finding a step size that gives a specific acceptance rate for Metropolis-Hastings. It may be helpful to return the acceptance probability instead of (or in addition to) the `accept` boolean. Literature suggests the overly specific 23.4% acceptance rate as a good target. PyMC3 aims for anything between 10% and 90%. Gibbs SamplingIf you can sample from all the marginal distributions, you can implement a sampler pretty efficiently just using those.The general idea is to:1. Initialize $\theta^0 = (\theta_1^0, \theta_2^0, \ldots, \theta_n^0)$, and $j = 0$2. For each $k = 1, 2, \ldots, n$: - Set $\theta_k^j \sim \pi(\theta_k^j | \theta_1^j, \theta_2^j, \ldots, \theta_n^j)$3. Increment $j$, and repeat as long as desiredThis is pretty tricky to automate, since you need to know all of these conditional distributions! That said, this is often seen in science when a sampler is hand-built to do inference with a specific model. In that case, each conditional distribution might be computed by hand. Coal mining exampleWe have a time series of recorded coal mining disasters in the UK from 1851 to 1961.Occurrences of disasters in the time series is thought to be derived from a Poisson process with a large rate parameter in the early part of the time series, and from one with a smaller rate in the later part. We are interested in locating the change point in the series, which perhaps is related to changes in mining safety regulations. ###Code disasters_array = np.array( [4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6, 3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5, 2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0, 1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2, 3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1]) years = np.arange(1851, 1962, dtype=int) fig, ax = plt.subplots() ax.vlines(years, 0, disasters_array, lw=6) ax.set_xlim(years.min() - 1, years.max() + 1) ax.set_ylim(bottom=0); ###Output _____no_output_____ ###Markdown Writing down the model and computing conditional distributionsIt is perhaps easiest to write the model as a PyMC3 model. In notation, we might write$$y_t \sim \operatorname{Poisson}(\lambda_t), t=1851, \ldots, 1962 \\\lambda_t = \left\{ \begin{array}{}\lambda_1 \text{ for } t \leq \tau \\ \lambda_2 \text{ for } t > \tau \end{array}\right. 
\\\lambda_j \sim \operatorname{Gamma}(1, 10) \\\tau \sim \operatorname{DiscreteUniform}(1851, 1962)$$ ###Code import pymc3 as pm def coal_disaster_model(): with pm.Model() as model: early_lambda = pm.Gamma('early_lambda', 1, 10) late_lambda = pm.Gamma('late_lambda', 1, 10) change_point = pm.DiscreteUniform('change_point', 1851, 1962) lam = pm.Deterministic('lam', pm.math.where(years > change_point, late_lambda, early_lambda)) pm.Poisson('rate', lam, observed=disasters_array) return model pm.model_to_graphviz(coal_disaster_model()) ###Output _____no_output_____ ###Markdown Now we need to go out and compute the conditional distributions:$$p(\tau | \lambda_1, \lambda_2, y_t) \\ p(\lambda_1 | \tau, \lambda_2, y_t) \\ p(\lambda_2 | \tau, \lambda_1, y_t)$$In this case, we can do some arithmetic, look up these distributions, and compute$$p(\tau | \lambda_1, \lambda_2, y_t) = \operatorname{Categorical}\left( \frac{\lambda_1^{\sum_{t=1851}^{\tau} y_t +\alpha-1} e^{-(\beta+\tau)\lambda_1} \lambda_2^{\sum_{t=\tau+1}^{1962} y_i + \alpha-1} e^{-\beta\lambda_2}}{\sum_{k=1851}^{1962} \lambda_1^{\sum_{t=1851}^{\tau} y_t +\alpha-1} e^{-(\beta+\tau)\lambda_1} \lambda_2^{\sum_{t=\tau+1}^{1962} y_i + \alpha-1} e^{-\beta\lambda_2}} \right) \\ p(\lambda_1 | \tau, \lambda_2, y_t) = \operatorname{Gamma}\left(\sum_{t=1851}^{\tau} y_t + \alpha, \tau + \beta\right)\\ p(\lambda_2 | \tau, \lambda_1, y_t) = \operatorname{Gamma}\left(\sum_{t=\tau + 1}^{1962} y_t + \alpha, 1962 - \tau + \beta\right)$$So far so good! Now here's an implementation! ###Code def gibbs_sample_disaster(samples, tau=1900, early_lambda=6, late_lambda=2): """Can supply different initial conditions!""" draws = np.empty((3, samples)) gamma_pdf = lambda lam, a, b: lam**(a-1) * np.exp(-b*lam) n_years = disasters_array.shape[0] years = np.arange(1851, 1962, dtype=int) draws = [] while len(draws) < samples: # update early_lambda early_lambda = np.random.gamma(disasters_array[:tau - 1851].sum() + 1, 1 / (tau - 1851 + 10)) draws.append([early_lambda, late_lambda, tau]) # update late_lambda late_lambda = np.random.gamma(disasters_array[tau - 1851 + 1:].sum() + 1, 1 / (1962 - tau + 10)) draws.append([early_lambda, late_lambda, tau]) # update tau tau_probs = np.empty(n_years) for t in range(n_years): tau_probs[t] = (gamma_pdf(early_lambda, disasters_array[:t].sum() + 1, t + 10) * gamma_pdf(late_lambda, disasters_array[t:].sum() + 1, n_years - t + 10)) tau = np.random.choice(years, p=tau_probs / tau_probs.sum()) draws.append([early_lambda, late_lambda, tau]) return np.array(draws)[:samples] ###Output _____no_output_____ ###Markdown Checking our workWe compare the Gibbs sampler to the PyMC3 model -- this one goes a bit faster, but maybe it took me longer to write! ###Code %%time draws = gibbs_sample_disaster(1000) draws.mean(axis=0) # early_lambda, late_lambda, change_point %%time with coal_disaster_model(): trace = pm.sample() pm.summary(trace, varnames=['early_lambda', 'late_lambda', 'change_point', ]) ###Output /home/colin/miniconda3/envs/bayes_course/lib/python3.6/site-packages/pymc3/stats.py:991: FutureWarning: The join_axes-keyword is deprecated. Use .reindex or .reindex_like on the result to achieve the same functionality. 
axis=1, join_axes=[dforg.index]) ###Markdown More contrived exampleThis example shows how you might use some knowledge of conjugate distributions to start to automate a Gibbs sampler.Suppose we have a generative model:$$w_1 \sim \mathcal{N}(0, 1) \\w_2 \sim \mathcal{N}(0, 1) \\x \sim \mathcal{N}(w_1 + w_2, 1)$$Then we observe $x$, and wish to compute $p(w_1, w_2 | x)$.We will do this by inializing at some point $(w_1^0, w_2^0)$, then 1. drawing $w_1^1 \sim p(w_1 | w_2^0, x)$, 2. drawing $w_2^1 \sim p(w_2 | w_1^1, x)$We now have samples $\{ (w_1^0, w_2^0),(w_1^1, w_2^0),(w_1^1, w_2^1) \}$, and we go back and sample $w_1^2$.We are going to use the following fact:If $x \sim \mathcal{N}(\mu, \sigma)$ and $y \sim \mathcal{N}(x, s)$, then $$x | y \sim \mathcal{N}\left(\frac{1}{\sigma + s} (\sigma y + s \mu), \frac{1}{\sigma + s}\right),$$which collapses to $$x | y \sim \mathcal{N}\left(\frac{y}{2}, \frac{1}{2}\right),$$when $\sigma = s = 1$ and $\mu = 0$. We can use this to make our update rule below. ###Code def gibbs_sample(draws, init, observed): current = init.copy() samples = np.empty((draws, 2)) for idx in range(draws): residual = observed - current[(idx + 1) % 2] current[idx % 2] = 0.5 * (np.random.randn() + residual) samples[idx] = current.copy() return samples %time samples = gibbs_sample(2_000, np.zeros(2), 1) # fast! ###Output CPU times: user 4 ms, sys: 0 ns, total: 4 ms Wall time: 4.46 ms ###Markdown Demonstrating that the Gibbs sampler works, and maybe an easier way to do itWe can just implement the same model with PyMC3. It does not always compare so favorably, but this is pretty nice. ###Code %%time with pm.Model(): w_1 = pm.Normal('w_1') w_2 = pm.Normal('w_2') x = pm.Normal('x', w_1 + w_2, 1, observed=1) trace = pm.sample() fig, axes = plt.subplots(ncols=2, figsize=(10, 4), sharex=True, sharey=True) axes[0].plot(*samples.T, '.', alpha=0.2) axes[1].plot(trace['w_1'], trace['w_2'], '.', alpha=0.2); print(samples.mean(axis=0), [trace['w_1'].mean(), trace['w_2'].mean()]) ###Output [0.33119586 0.31902971] [0.2979419992804236, 0.3411182250375135]
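###Markdown For this linear-Gaussian model the posterior is also available in closed form, which gives a third check on the two samplers above (this derivation is an aside, not part of the original notebook). With prior $w \sim \mathcal{N}(0, I)$ and likelihood $x \sim \mathcal{N}(w_1 + w_2, 1)$, the posterior precision is $I + \mathbf{1}\mathbf{1}^T$, so the posterior mean is $(x/3, x/3)$; with $x = 1$ both sample means above should be close to $1/3$, up to Monte Carlo error. ###Code
import numpy as np

# Closed-form posterior check: precision = I + a a^T with a = (1, 1), observed x = 1.
a = np.ones(2)
x_obs = 1.0
post_cov = np.linalg.inv(np.eye(2) + np.outer(a, a))
post_mean = (post_cov @ a) * x_obs
print(post_mean)  # [1/3, 1/3]
print(post_cov)   # marginal variance 2/3, correlation -1/2
###Output _____no_output_____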
notebooks/semisupervised/cifar10/euclidean/augmented-nothresh-Y/cifar10-aug-64ex-euc-nothresh-Y.ipynb
###Markdown Choose GPU ###Code %env CUDA_DEVICE_ORDER=PCI_BUS_ID %env CUDA_VISIBLE_DEVICES=0 import tensorflow as tf gpu_devices = tf.config.experimental.list_physical_devices('GPU') if len(gpu_devices)>0: tf.config.experimental.set_memory_growth(gpu_devices[0], True) print(gpu_devices) tf.keras.backend.clear_session() ###Output [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] ###Markdown Load packages ###Code import tensorflow as tf import numpy as np import matplotlib.pyplot as plt from tqdm.autonotebook import tqdm from IPython import display import pandas as pd import umap import copy import os, tempfile import tensorflow_addons as tfa import pickle ###Output /mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/tqdm/autonotebook/__init__.py:14: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console) " (e.g. in jupyter console)", TqdmExperimentalWarning) ###Markdown parameters ###Code dataset = "cifar10" labels_per_class = 64 # 'full' n_latent_dims = 1024 confidence_threshold = 0.0 # minimum confidence to include in UMAP graph for learned metric learned_metric = False # whether to use a learned metric, or Euclidean distance between datapoints augmented = True # min_dist= 0.001 # min_dist parameter for UMAP negative_sample_rate = 5 # how many negative samples per positive sample batch_size = 128 # batch size optimizer = tf.keras.optimizers.Adam(1e-3) # the optimizer to train optimizer = tfa.optimizers.MovingAverage(optimizer) label_smoothing = 0.2 # how much label smoothing to apply to categorical crossentropy max_umap_iterations = 500 # how many times, maximum, to recompute UMAP max_epochs_per_graph = 10 # how many epochs maximum each graph trains for (without early stopping) graph_patience = 10 # how many times without improvement to train a new graph min_graph_delta = 0.0025 # minimum improvement on validation acc to consider an improvement for training from datetime import datetime datestring = datetime.now().strftime("%Y_%m_%d_%H_%M_%S_%f") datestring = ( str(dataset) + "_" + str(confidence_threshold) + "_" + str(labels_per_class) + "____" + datestring + '_umap_augmented' ) print(datestring) ###Output cifar10_0.0_64____2020_08_20_10_52_40_783860_umap_augmented ###Markdown Load dataset ###Code from tfumap.semisupervised_keras import load_dataset ( X_train, X_test, X_labeled, Y_labeled, Y_masked, X_valid, Y_train, Y_test, Y_valid, Y_valid_one_hot, Y_labeled_one_hot, num_classes, dims ) = load_dataset(dataset, labels_per_class) ###Output _____no_output_____ ###Markdown load architecture ###Code from tfumap.semisupervised_keras import load_architecture encoder, classifier, embedder = load_architecture(dataset, n_latent_dims) ###Output _____no_output_____ ###Markdown load pretrained weights ###Code from tfumap.semisupervised_keras import load_pretrained_weights encoder, classifier = load_pretrained_weights(dataset, augmented, labels_per_class, encoder, classifier) ###Output WARNING: Logging before flag parsing goes to stderr. W0820 10:52:43.390450 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. 
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f0fc80cecf8> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f0fc80ceef0>). W0820 10:52:43.392166 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f0fc80eb2e8> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f0fc808e3c8>). W0820 10:52:43.414128 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f0fce8a4e48> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f0fce4c91d0>). W0820 10:52:43.417735 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f0fce4c91d0> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f0fcecbb3c8>). W0820 10:52:43.425808 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f0fce6ecb70> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f0fce6ecd68>). W0820 10:52:43.428814 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f0fce6ecd68> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f0fce70b710>). W0820 10:52:43.432683 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f0fce70f978> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f0fce74a198>). W0820 10:52:43.435500 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. 
Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f0fce74a198> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f0fce74a320>). W0820 10:52:43.441506 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f0fce871588> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f0fce871c18>). W0820 10:52:43.444298 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f0fce871c18> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f0fce871e48>). W0820 10:52:43.448152 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f0fce775630> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f0fce775c50>). W0820 10:52:43.450964 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f0fce775c50> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f0fce775eb8>). W0820 10:52:43.454869 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f0fce7bc978> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f0fce7bc9e8>). W0820 10:52:43.457687 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. 
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f0fce7bc9e8> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f0fce7bce80>). W0820 10:52:43.463754 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7f0fc81c80b8> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f0fc81c8710>). W0820 10:52:43.466569 139710613530432 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7f0fc81c8710> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7f0fc81c8908>). ###Markdown compute pretrained accuracy ###Code # test current acc pretrained_predictions = classifier.predict(encoder.predict(X_test, verbose=True), verbose=True) pretrained_predictions = np.argmax(pretrained_predictions, axis=1) pretrained_acc = np.mean(pretrained_predictions == Y_test) print('pretrained acc: {}'.format(pretrained_acc)) ###Output 313/313 [==============================] - 2s 6ms/step 313/313 [==============================] - 0s 1ms/step pretrained acc: 0.5993 ###Markdown get a, b parameters for embeddings ###Code from tfumap.semisupervised_keras import find_a_b a_param, b_param = find_a_b(min_dist=min_dist) ###Output _____no_output_____ ###Markdown build network ###Code from tfumap.semisupervised_keras import build_model model = build_model( batch_size=batch_size, a_param=a_param, b_param=b_param, dims=dims, encoder=encoder, classifier=classifier, negative_sample_rate=negative_sample_rate, optimizer=optimizer, label_smoothing=label_smoothing, embedder = embedder, ) ###Output _____no_output_____ ###Markdown build labeled iterator ###Code from tfumap.semisupervised_keras import build_labeled_iterator labeled_dataset = build_labeled_iterator(X_labeled, Y_labeled_one_hot, augmented, dims) ###Output _____no_output_____ ###Markdown training ###Code from livelossplot import PlotLossesKerasTF from tfumap.semisupervised_keras import get_edge_dataset from tfumap.semisupervised_keras import zip_datasets ###Output _____no_output_____ ###Markdown callbacks ###Code # plot losses callback groups = {'acccuracy': ['classifier_accuracy', 'val_classifier_accuracy'], 'loss': ['classifier_loss', 'val_classifier_loss']} plotlosses = PlotLossesKerasTF(groups=groups) history_list = [] current_validation_acc = 0 batches_per_epoch = np.floor(len(X_train)/batch_size).astype(int) epochs_since_last_improvement = 0 current_umap_iterations = 0 current_epoch = 0 # make dataset edge_dataset = get_edge_dataset( model, augmented, classifier, encoder, X_train, Y_masked, batch_size, confidence_threshold, labeled_dataset, dims, learned_metric = learned_metric ) # zip dataset zipped_ds = zip_datasets(labeled_dataset, edge_dataset, batch_size) from tfumap.paths import MODEL_DIR, ensure_dir save_folder = MODEL_DIR / 
'semisupervised-keras' / dataset / str(labels_per_class) / datestring ensure_dir(save_folder / 'test_loss.npy') for cui in tqdm(np.arange(current_epoch, max_umap_iterations)): if len(history_list) > graph_patience+1: previous_history = [np.mean(i.history['val_classifier_accuracy']) for i in history_list] best_of_patience = np.max(previous_history[-graph_patience:]) best_of_previous = np.max(previous_history[:-graph_patience]) if (best_of_previous + min_graph_delta) > best_of_patience: print('Early stopping') break # train dataset history = model.fit( zipped_ds, epochs= current_epoch + max_epochs_per_graph, initial_epoch = current_epoch, validation_data=( (X_valid, tf.zeros_like(X_valid), tf.zeros_like(X_valid)), {"classifier": Y_valid_one_hot}, ), callbacks = [plotlosses], max_queue_size = 100, steps_per_epoch = batches_per_epoch, #verbose=0 ) current_epoch+=len(history.history['loss']) history_list.append(history) # save score class_pred = classifier.predict(encoder.predict(X_test)) class_acc = np.mean(np.argmax(class_pred, axis=1) == Y_test) np.save(save_folder / 'test_loss.npy', (np.nan, class_acc)) # save weights encoder.save_weights((save_folder / "encoder").as_posix()) classifier.save_weights((save_folder / "classifier").as_posix()) # save history with open(save_folder / 'history.pickle', 'wb') as file_pi: pickle.dump([i.history for i in history_list], file_pi) current_umap_iterations += 1 if len(history_list) > graph_patience+1: previous_history = [np.mean(i.history['val_classifier_accuracy']) for i in history_list] best_of_patience = np.max(previous_history[-graph_patience:]) best_of_previous = np.max(previous_history[:-graph_patience]) if (best_of_previous + min_graph_delta) > best_of_patience: print('Early stopping') #break plt.plot(previous_history) ###Output _____no_output_____ ###Markdown save embedding ###Code z = encoder.predict(X_train) reducer = umap.UMAP(verbose=True) embedding = reducer.fit_transform(z.reshape(len(z), np.product(np.shape(z)[1:]))) plt.scatter(embedding[:, 0], embedding[:, 1], c=Y_train.flatten(), s= 1, alpha = 0.1, cmap = plt.cm.tab10) np.save(save_folder / 'train_embedding.npy', embedding) ###Output _____no_output_____
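###Markdown If these results need to be revisited in a fresh session, the artifacts written out above can be reloaded; a minimal sketch, assuming the same `save_folder` path and that `encoder` and `classifier` have been rebuilt with the same architecture as at the top of this notebook: ###Code
import numpy as np

# Reload what the training loop and the embedding cell saved above.
embedding = np.load(save_folder / 'train_embedding.npy')   # (n_train, 2) UMAP coordinates
_, test_acc = np.load(save_folder / 'test_loss.npy')       # saved above as (nan, accuracy)
encoder.load_weights((save_folder / 'encoder').as_posix())
classifier.load_weights((save_folder / 'classifier').as_posix())
print(f"saved test accuracy: {test_acc:.4f}")
###Output _____no_output_____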
hikyuu/examples/notebook/002-HowToGetStock.ipynb
###Markdown 1 全局获取股票对象==========1.1 获取股票对象-----------------通过全局管理对象 sm,或使用函数 get_stock。股票标识格式“市场标识+股票代码”,市场标识:沪市sh,深市sz。 ###Code #s = getStock('sh000001') s = sm['sh000001'] print(s) ###Output Stock(SH, 000001, 上证指数, 指数, 1, 1990-12-19 00:00:00, +infinity) ###Markdown 1.2 遍历所有股票----------------- ###Code i = 0 #遍历所有股票 for s in sm: i += 1 #print(s) print("全部数量:", i) len(sm) ###Output 全部数量: 5848 ###Markdown 2 通过板块(Block)遍历股票对象================2.1 通过 sm.get_stock("板块分类", "板块名称") 获取相应板块------------------------------------------------------------ ###Code blk = sm.get_block("指数板块", "上证380") for s in blk: if not s.valid: print(s) ###Output Stock(SH, 600270, 外运发展, A股, 0, 2000-12-28 00:00:00, +infinity) Stock(SH, 603959, 百利科技, A股, 0, 2016-05-17 00:00:00, +infinity) Stock(SH, 603899, 晨光文具, A股, 0, 2015-01-27 00:00:00, +infinity) Stock(SH, 603898, 好莱客, A股, 0, 2015-02-17 00:00:00, +infinity) Stock(SH, 603895, 天永智能, A股, 0, 2018-01-22 00:00:00, +infinity) Stock(SH, 600614, *ST鹏起, A股, 0, 1992-08-28 00:00:00, +infinity) Stock(SH, 603888, 新华网, A股, 0, 2016-10-28 00:00:00, +infinity) Stock(SH, 603989, 艾华集团, A股, 0, 2015-05-15 00:00:00, +infinity) Stock(SH, 600240, *ST华业, A股, 0, 2000-06-28 00:00:00, +infinity) Stock(SH, 603929, 亚翔集成, A股, 0, 2016-12-30 00:00:00, +infinity) Stock(SH, 603939, 益丰药房, A股, 0, 2015-02-17 00:00:00, +infinity) Stock(SH, 603997, 继峰股份, A股, 0, 2015-03-02 00:00:00, +infinity) ###Markdown 2.1 获取自定义板块------------------自定义板块的板块分类固定为 “self” ###Code blk = sm.get_block("self", "1") for s in blk: print(s) ###Output Stock(SH, 601018, 宁波港, A股, 1, 2010-09-28 00:00:00, +infinity) Stock(SH, 600601, 方正科技, A股, 1, 1990-12-19 00:00:00, +infinity) Stock(SH, 601098, 中南传媒, A股, 1, 2010-10-28 00:00:00, +infinity) Stock(SH, 600050, 中国联通, A股, 1, 2002-10-09 00:00:00, +infinity) Stock(SZ, 000001, 平安银行, A股, 1, 1991-01-02 00:00:00, +infinity) Stock(SZ, 000958, 东方能源, A股, 1, 1999-12-23 00:00:00, +infinity) Stock(SZ, 002339, 积成电子, A股, 1, 2010-01-22 00:00:00, +infinity) Stock(SZ, 002685, 华东重机, A股, 1, 2012-06-12 00:00:00, +infinity) Stock(SZ, 000728, 国元证券, A股, 1, 1997-05-22 00:00:00, +infinity) ###Markdown 2.2 板块信息的配置-------------------板块信息在数据存放路径中 “block” 子目录下,目前采用的是钱龙的格式,你也可从钱龙相应的目录下拷贝最新的板块配置信息。![板块配置](images/002_01_block_config.png) 3 查看权息信息======= ###Code ws = sm['sz000001'].get_weight() for w in ws: print(w) ###Output Weight(1991-04-03 00:00:00, 0, 0, 0, 0, 0, 150, 68) Weight(1993-05-24 00:00:00, 3.5, 1, 16, 3, 5, 26941, 17912) Weight(1994-07-11 00:00:00, 3, 1, 5, 5, 2, 43106, 28659) Weight(1994-09-02 00:00:00, 0, 0, 0, 0, 0, 43106, 29707) Weight(1995-09-25 00:00:00, 2, 0, 0, 3, 0, 51728, 35721) Weight(1996-05-27 00:00:00, 5, 0, 0, 0, 5, 103456, 71393) Weight(1997-08-25 00:00:00, 5, 0, 0, 2, 0, 155184, 107163) Weight(1999-10-18 00:00:00, 0, 0, 0, 6, 0, 155184, 107163) Weight(2000-11-06 00:00:00, 0, 3, 8, 0, 0, 194582, 139312) Weight(2002-07-23 00:00:00, 0, 0, 0, 1.5, 0, 194582, 140936) Weight(2003-09-29 00:00:00, 0, 0, 0, 1.5, 0, 194582, 140936) Weight(2007-06-20 00:00:00, 1, 0, 0, 0, 0, 208676, 155019) Weight(2008-01-21 00:00:00, 0, 0, 0, 0, 0, 229341, 175682) Weight(2008-06-26 00:00:00, 0, 0, 0, 0, 0, 229341, 204652) Weight(2008-06-27 00:00:00, 0, 0, 0, 0, 0, 238880, 214200) Weight(2008-10-31 00:00:00, 3, 0, 0, 0.335, 0, 310543, 278461) Weight(2009-06-22 00:00:00, 0, 0, 0, 0, 0, 310543, 292367) Weight(2009-10-15 00:00:00, 0, 0, 0, 0, 0, 310543, 292411) Weight(2010-06-28 00:00:00, 0, 0, 0, 0, 0, 310543, 310537) Weight(2010-09-17 00:00:00, 0, 0, 0, 0, 0, 348501, 310537) Weight(2007-12-31 00:00:00, 0, 0, 0, 0, 0, 229341, 
175682) Weight(2009-06-30 00:00:00, 0, 0, 0, 0, 0, 310543, 292376) Weight(2011-08-05 00:00:00, 0, 0, 0, 0, 0, 512335, 310536) Weight(2011-12-31 00:00:00, 0, 0, 0, 0, 0, 512335, 310536) Weight(2012-10-19 00:00:00, 0, 0, 0, 1, 0, 512335, 310536) Weight(2012-12-31 00:00:00, 0, 0, 0, 0, 0, 512335, 310536) Weight(2013-06-20 00:00:00, 6, 0, 0, 1.7, 0, 819736, 496857) Weight(2013-11-12 00:00:00, 0, 0, 0, 0, 0, 819736, 557590) Weight(2014-01-09 00:00:00, 0, 0, 0, 0, 0, 952075, 557590) Weight(2014-06-12 00:00:00, 0, 0, 0, 1.6, 2, 1.14249e+06, 669106) Weight(2014-09-01 00:00:00, 0, 0, 0, 0, 0, 1.14249e+06, 983671) Weight(2015-04-13 00:00:00, 0, 0, 0, 1.74, 2, 1.37099e+06, 1.1804e+06) Weight(2015-05-21 00:00:00, 0, 0, 0, 0, 0, 1.43087e+06, 1.1804e+06) Weight(2016-05-23 00:00:00, 0, 0, 0, 0, 0, 1.43087e+06, 1.21926e+06) Weight(2016-06-16 00:00:00, 0, 0, 0, 1.53, 2, 1.71704e+06, 1.46312e+06) Weight(2017-01-09 00:00:00, 0, 0, 0, 0, 0, 1.71704e+06, 1.6918e+06) Weight(2017-07-21 00:00:00, 0, 0, 0, 1.58, 0, 1.71704e+06, 1.6918e+06) Weight(2017-12-31 00:00:00, 0, 0, 0, 0, 0, 1.71704e+06, 1.6918e+06) Weight(2018-05-21 00:00:00, 0, 0, 0, 0, 0, 1.71704e+06, 1.71702e+06) Weight(2018-07-12 00:00:00, 0, 0, 0, 1.36, 0, 1.71704e+06, 1.71702e+06) ###Markdown 1 全局获取股票对象==========1.1 获取股票对象-----------------通过全局管理对象 sm,或使用函数 getStock。股票标识格式“市场标识+股票代码”,市场标识:沪市sh,深市sz。 ###Code #s = getStock('sh000001') s = sm['sh000001'] print(s) ###Output Stock(SH, 000001, 上证指数, 指数, 1, 1990-12-19 0:0:0, +infinity) ###Markdown 1.2 遍历所有股票----------------- ###Code i = 0 #遍历所有股票 for s in sm: i += 1 #print(s) print("全部数量:", i) len(sm) ###Output 全部数量: 5517 ###Markdown 2 通过板块(Block)遍历股票对象================2.1 通过 sm.getStock("板块分类", "板块名称") 获取相应板块------------------------------------------------------------ ###Code blk = sm.getBlock("指数板块", "上证380") for s in blk: if not s.valid: print(s) ###Output Stock(SH, 600270, 外运发展, A股, 0, 2000-12-28 0:0:0, +infinity) ###Markdown 2.1 获取自定义板块------------------自定义板块的板块分类固定为 “self” ###Code blk = sm.getBlock("self", "1") for s in blk: print(s) ###Output Stock(SZ, 002685, 华东重机, A股, 1, 2012-6-12 0:0:0, +infinity) Stock(SZ, 002339, 积成电子, A股, 1, 2010-1-22 0:0:0, +infinity) Stock(SZ, 000728, 国元证券, A股, 1, 1997-5-22 0:0:0, +infinity) Stock(SZ, 000958, 东方能源, A股, 1, 1999-12-23 0:0:0, +infinity) Stock(SZ, 000001, 平安银行, A股, 1, 1991-1-2 0:0:0, +infinity) Stock(SH, 600601, 方正科技, A股, 1, 1990-12-19 0:0:0, +infinity) Stock(SH, 600050, 中国联通, A股, 1, 2002-10-9 0:0:0, +infinity) Stock(SH, 601018, 宁波港, A股, 1, 2010-9-28 0:0:0, +infinity) Stock(SH, 601098, 中南传媒, A股, 1, 2010-10-28 0:0:0, +infinity) ###Markdown 2.2 板块信息的配置-------------------板块信息在数据存放路径中 “block” 子目录下,目前采用的是钱龙的格式,你也可从钱龙相应的目录下拷贝最新的板块配置信息。![板块配置](images/002_01_block_config.png) 3 查看权息信息======= ###Code ws = sm['sz000001'].getWeight() for w in ws: print(w) ###Output Weight(1991-4-3 0:0:0, 0, 0, 0, 0, 0, 150, 68) Weight(1993-5-24 0:0:0, 3.5, 1, 16, 3, 5, 26941, 17912) Weight(1994-7-11 0:0:0, 3, 1, 5, 5, 2, 43106, 28659) Weight(1994-9-2 0:0:0, 0, 0, 0, 0, 0, 43106, 29707) Weight(1995-9-25 0:0:0, 2, 0, 0, 3, 0, 51728, 35721) Weight(1996-5-27 0:0:0, 5, 0, 0, 0, 5, 103456, 71393) Weight(1997-8-25 0:0:0, 5, 0, 0, 2, 0, 155184, 107163) Weight(1999-10-18 0:0:0, 0, 0, 0, 6, 0, 155184, 107163) Weight(2000-11-6 0:0:0, 0, 3, 8, 0, 0, 194582, 139312) Weight(2002-7-23 0:0:0, 0, 0, 0, 1.5, 0, 194582, 140936) Weight(2003-9-29 0:0:0, 0, 0, 0, 1.5, 0, 194582, 140936) Weight(2007-6-20 0:0:0, 1, 0, 0, 0, 0, 208676, 155019) Weight(2007-12-31 0:0:0, 0, 0, 0, 0, 0, 229341, 175682) Weight(2008-1-21 
0:0:0, 0, 0, 0, 0, 0, 229341, 175682) Weight(2008-6-26 0:0:0, 0, 0, 0, 0, 0, 229341, 204652) Weight(2008-6-27 0:0:0, 0, 0, 0, 0, 0, 238880, 214200) Weight(2008-10-31 0:0:0, 3, 0, 0, 0.335, 0, 310543, 278461) Weight(2009-6-22 0:0:0, 0, 0, 0, 0, 0, 310543, 292367) Weight(2009-6-30 0:0:0, 0, 0, 0, 0, 0, 310543, 292376) Weight(2009-10-15 0:0:0, 0, 0, 0, 0, 0, 310543, 292411) Weight(2010-6-28 0:0:0, 0, 0, 0, 0, 0, 310543, 310537) Weight(2010-9-17 0:0:0, 0, 0, 0, 0, 0, 348501, 310537) Weight(2011-8-5 0:0:0, 0, 0, 0, 0, 0, 512335, 310536) Weight(2011-12-31 0:0:0, 0, 0, 0, 0, 0, 512335, 310536) Weight(2012-10-19 0:0:0, 0, 0, 0, 1, 0, 512335, 310536) Weight(2012-12-31 0:0:0, 0, 0, 0, 0, 0, 512335, 310536) Weight(2013-6-20 0:0:0, 6, 0, 0, 1.7, 0, 819736, 496857) Weight(2013-11-12 0:0:0, 0, 0, 0, 0, 0, 819736, 557590) Weight(2014-1-9 0:0:0, 0, 0, 0, 0, 0, 952075, 557590) Weight(2014-6-12 0:0:0, 0, 0, 0, 1.6, 2, 1.14249e+06, 669106) Weight(2014-9-1 0:0:0, 0, 0, 0, 0, 0, 1.14249e+06, 983671) Weight(2015-4-13 0:0:0, 0, 0, 0, 1.74, 2, 1.37099e+06, 1.18041e+06) Weight(2015-5-21 0:0:0, 0, 0, 0, 0, 0, 1.43087e+06, 1.18041e+06) Weight(2016-5-23 0:0:0, 0, 0, 0, 0, 0, 1.43087e+06, 1.21927e+06) Weight(2016-6-16 0:0:0, 0, 0, 0, 1.53, 2, 1.71704e+06, 1.46312e+06) Weight(2017-1-9 0:0:0, 0, 0, 0, 0, 0, 1.71704e+06, 1.6918e+06) Weight(2017-7-21 0:0:0, 0, 0, 0, 1.58, 0, 1.71704e+06, 1.6918e+06) Weight(2017-12-31 0:0:0, 0, 0, 0, 0, 0, 1.71704e+06, 1.6918e+06) Weight(2018-5-21 0:0:0, 0, 0, 0, 0, 0, 1.71704e+06, 1.71703e+06) Weight(2018-7-12 0:0:0, 0, 0, 0, 1.36, 0, 1.71704e+06, 1.71703e+06) ###Markdown 1 全局获取股票对象==========1.1 获取股票对象-----------------通过全局管理对象 sm,或使用函数 get_stock。股票标识格式“市场标识+股票代码”,市场标识:沪市sh,深市sz。 ###Code #s = getStock('sh000001') s = sm['sh000001'] print(s) ###Output Stock(SH, 000001, 上证指数, 指数, 1, 1990-12-19 00:00:00, +infinity) ###Markdown 1.2 遍历所有股票----------------- ###Code i = 0 #遍历所有股票 for s in sm: i += 1 #print(s) print("全部数量:", i) len(sm) ###Output 全部数量: 6067 ###Markdown 2 通过板块(Block)遍历股票对象================2.1 通过 sm.get_stock("板块分类", "板块名称") 获取相应板块------------------------------------------------------------ ###Code blk = sm.get_block("指数板块", "上证380") for s in blk: if not s.valid: print(s) ###Output Stock(SH, 600175, 美都能源, A股, 0, 1999-04-08 00:00:00, +infinity) Stock(SH, 600240, *ST华业, A股, 0, 2000-06-28 00:00:00, +infinity) Stock(SH, 600270, 外运发展, A股, 0, 2000-12-28 00:00:00, +infinity) ###Markdown 2.1 获取自定义板块------------------自定义板块的板块分类固定为 “self” ###Code blk = sm.get_block("self", "1") for s in blk: print(s) ###Output Stock(SZ, 002685, 华东重机, A股, 1, 2012-06-12 00:00:00, +infinity) Stock(SZ, 002339, 积成电子, A股, 1, 2010-01-22 00:00:00, +infinity) Stock(SZ, 000728, 国元证券, A股, 1, 1997-05-22 00:00:00, +infinity) Stock(SZ, 000958, 东方能源, A股, 1, 1999-12-23 00:00:00, +infinity) Stock(SZ, 000001, 平安银行, A股, 1, 1991-01-02 00:00:00, +infinity) Stock(SH, 600601, 方正科技, A股, 1, 1990-12-19 00:00:00, +infinity) Stock(SH, 600050, 中国联通, A股, 1, 2002-10-09 00:00:00, +infinity) Stock(SH, 601098, 中南传媒, A股, 1, 2010-10-28 00:00:00, +infinity) Stock(SH, 601018, 宁波港, A股, 1, 2010-09-28 00:00:00, +infinity) ###Markdown 2.2 板块信息的配置-------------------板块信息在数据存放路径中 “block” 子目录下,目前采用的是钱龙的格式,你也可从钱龙相应的目录下拷贝最新的板块配置信息。![板块配置](images/002_01_block_config.png) 3 查看权息信息======= ###Code ws = sm['sz000001'].get_weight() for w in ws: print(w) ###Output Weight(1991-04-03 00:00:00, 0, 0, 0, 0, 0, 150, 68) Weight(1993-05-24 00:00:00, 3.5, 1, 16, 3, 5, 26941, 17912) Weight(1994-07-11 00:00:00, 3, 1, 5, 5, 2, 43106, 28659) Weight(1994-09-02 00:00:00, 0, 0, 
0, 0, 0, 43106, 29707) Weight(1995-09-25 00:00:00, 2, 0, 0, 3, 0, 51728, 35721) Weight(1996-05-27 00:00:00, 5, 0, 0, 0, 5, 103456, 71393) Weight(1997-08-25 00:00:00, 5, 0, 0, 2, 0, 155184, 107163) Weight(1999-10-18 00:00:00, 0, 0, 0, 6, 0, 155184, 107163) Weight(2000-11-06 00:00:00, 0, 3, 8, 0, 0, 194582, 139312) Weight(2002-07-23 00:00:00, 0, 0, 0, 1.5, 0, 194582, 140936) Weight(2003-09-29 00:00:00, 0, 0, 0, 1.5, 0, 194582, 140936) Weight(2007-06-20 00:00:00, 1, 0, 0, 0, 0, 208676, 155019) Weight(2007-12-31 00:00:00, 0, 0, 0, 0, 0, 229341, 175682) Weight(2008-01-21 00:00:00, 0, 0, 0, 0, 0, 229341, 175682) Weight(2008-06-26 00:00:00, 0, 0, 0, 0, 0, 229341, 204652) Weight(2008-06-27 00:00:00, 0, 0, 0, 0, 0, 238880, 214200) Weight(2008-10-31 00:00:00, 3, 0, 0, 0.335, 0, 310543, 278461) Weight(2009-06-22 00:00:00, 0, 0, 0, 0, 0, 310543, 292367) Weight(2009-06-30 00:00:00, 0, 0, 0, 0, 0, 310543, 292376) Weight(2009-10-15 00:00:00, 0, 0, 0, 0, 0, 310543, 292411) Weight(2010-06-28 00:00:00, 0, 0, 0, 0, 0, 310543, 310537) Weight(2010-09-17 00:00:00, 0, 0, 0, 0, 0, 348501, 310537) Weight(2011-08-05 00:00:00, 0, 0, 0, 0, 0, 512335, 310536) Weight(2011-12-31 00:00:00, 0, 0, 0, 0, 0, 512335, 310536) Weight(2012-10-19 00:00:00, 0, 0, 0, 1, 0, 512335, 310536) Weight(2012-12-31 00:00:00, 0, 0, 0, 0, 0, 512335, 310536) Weight(2013-06-20 00:00:00, 6, 0, 0, 1.7, 0, 819736, 496857) Weight(2013-11-12 00:00:00, 0, 0, 0, 0, 0, 819736, 557590) Weight(2014-01-09 00:00:00, 0, 0, 0, 0, 0, 952075, 557590) Weight(2014-06-12 00:00:00, 0, 0, 0, 1.6, 2, 1.14249e+06, 669106) Weight(2014-09-01 00:00:00, 0, 0, 0, 0, 0, 1.14249e+06, 983671) Weight(2015-04-13 00:00:00, 0, 0, 0, 1.74, 2, 1.37099e+06, 1.1804e+06) Weight(2015-05-21 00:00:00, 0, 0, 0, 0, 0, 1.43087e+06, 1.1804e+06) Weight(2016-05-23 00:00:00, 0, 0, 0, 0, 0, 1.43087e+06, 1.21926e+06) Weight(2016-06-16 00:00:00, 0, 0, 0, 1.53, 2, 1.71704e+06, 1.46312e+06) Weight(2017-01-09 00:00:00, 0, 0, 0, 0, 0, 1.71704e+06, 1.6918e+06) Weight(2017-07-21 00:00:00, 0, 0, 0, 1.58, 0, 1.71704e+06, 1.6918e+06) Weight(2017-12-31 00:00:00, 0, 0, 0, 0, 0, 1.71704e+06, 1.6918e+06) Weight(2018-05-21 00:00:00, 0, 0, 0, 0, 0, 1.71704e+06, 1.71702e+06) Weight(2018-07-12 00:00:00, 0, 0, 0, 1.36, 0, 1.71704e+06, 1.71702e+06) Weight(2019-06-26 00:00:00, 0, 0, 0, 1.45, 0, 1.71704e+06, 1.71702e+06) Weight(2019-06-30 00:00:00, 0, 0, 0, 0, 0, 1.71704e+06, 1.71702e+06) Weight(2019-09-18 00:00:00, 0, 0, 0, 0, 0, 1.94059e+06, 1.94058e+06) Weight(2020-05-28 00:00:00, 0, 0, 0, 2.18, 0, 1.94059e+06, 1.94058e+06) Weight(2020-12-31 00:00:00, 0, 0, 0, 0, 0, 1.94059e+06, 1.94058e+06)
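###Markdown A quick illustrative sketch that reuses only the calls demonstrated above (`sm.get_block` and the `valid` attribute) to tally how many members of the 上证380 block are still marked valid versus not. ###Code
# Count valid vs. invalid members of the block by plain iteration, as shown earlier
blk = sm.get_block("指数板块", "上证380")
valid_count, invalid_count = 0, 0
for s in blk:
    if s.valid:
        valid_count += 1
    else:
        invalid_count += 1
print("valid:", valid_count, "invalid:", invalid_count)
###Output _____no_output_____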
IoT_benign_attack_traces/Exploration-day-27.ipynb
###Markdown Next: check the packet that raised a warning when running tshark (1479336)mac name resolution Packets that raise a warning when running tshark ###Code df.iloc[1479335] df.iloc[1479335]['info'] ###Output _____no_output_____ ###Markdown Seems like the info column is too long (>500) Now we try to split the dataset based on IP addresses of the IoT devices Top 30 source private IP addresses ###Code ip_sources[ip_sources.index.str.startswith('192.168.')][:30].plot(kind='bar') ###Output _____no_output_____ ###Markdown Top 30 source public IP addresses ###Code ip_sources[~ ip_sources.index.str.startswith('192.168.')][:30].plot(kind='bar') ###Output _____no_output_____ ###Markdown 192.168.1.118 is a random IoT device, try to understand with who it communicates in the LAN ###Code df[df['ip.src'] == '192.168.1.118']['eth.src_resolved'].value_counts() ###Output _____no_output_____ ###Markdown 192.168.1.118 is probably a LIFX smart lamp (its MAC address is from Lifilabs). ###Code df[df['ip.dst'] == '192.168.1.118']['eth.src_resolved'].value_counts() ###Output _____no_output_____ ###Markdown It looks like the device with MAC 51:33:ea is the router, of the brand TP-Link ###Code destinations = df[df['ip.src'] == '192.168.1.118']['ip.dst'] destinations destinations[destinations.str.startswith('192.168.')].value_counts() ###Output _____no_output_____ ###Markdown There are only 2 LAN IP destinations for 192.168.1.118: 192.168.1.1 and 192.168.1.255 (I think these are the router private IP address and the LAN broadcast IP address) ###Code df[(df['ip.src'] == '192.168.1.118') & (df['ip.dst'] == '192.168.1.1')]['protocol'].value_counts() ###Output _____no_output_____ ###Markdown As we can see this communication was only used for DNS and DHCP ###Code df[(df['ip.src'] == '192.168.1.118') & (df['ip.dst'] == '192.168.1.255')]['protocol'].value_counts() ###Output _____no_output_____ ###Markdown ??? ###Code destinations[~ destinations.str.startswith('192.168.')].value_counts() ###Output _____no_output_____ ###Markdown As we can see, 192.168.1.118 mainly communicates with 104.198.46.246 According to a reverse IP lookup, the latter is owned by Google. Now try to extract the traffic related to 192.168.1.118 ###Code df[(df['ip.src'] == '192.168.1.118') | (df['ip.dst'] == '192.168.1.118') | (df['ip.dst'] == '255.255.255.255')] # Note that we should also include multicasts that comprise 192.168.1.118, and broadcast ###Output _____no_output_____ ###Markdown LAN packets ###Code df[df['ip.src'].str.startswith('192.168.') & df['ip.dst'].str.startswith('192.168.')] # Note that we should also include other private prefixes than 192.168. ip_destinations[ip_destinations.index.str.startswith('192.168.')] # Note the bug with some address fields that contain 2 addresses ip_sources[ip_sources.index.str.startswith('192.168.')] ###Output _____no_output_____
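###Markdown The two notes above point out that other private prefixes besides 192.168. (and the occasional malformed address field) should also be handled. A hedged sketch of that generalisation, assuming the same `df` with string `ip.src`/`ip.dst` columns: the standard-library `ipaddress` module classifies each address, which also covers 10.0.0.0/8 and 172.16.0.0/12 (note that `is_private` is broader than RFC 1918 alone, as it also flags loopback, link-local and other special-use ranges). ###Code
import ipaddress

def is_private(ip):
    # True for private/special-use addresses; malformed fields (e.g. the rows
    # noted above that hold two addresses, or NaN) simply return False.
    try:
        return ipaddress.ip_address(ip).is_private
    except ValueError:
        return False

lan_mask = df['ip.src'].apply(is_private) & df['ip.dst'].apply(is_private)
lan_packets = df[lan_mask]
lan_packets['protocol'].value_counts()
###Output _____no_output_____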
interactive_cca.ipynb
###Markdown Visualizing the effect of regularisation on CCA using IPython Widgets!Learnt how to use widgets in IPython and thought it would be nice to demonstrate the effect of l2 regularisation Install cca-zoo and import packages ###Code !pip install cca-zoo import ipywidgets as widgets import seaborn as sns from cca_zoo.models import rCCA from cca_zoo.data import generate_covariance_data import numpy as np import pandas as pd from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt sns.set(font_scale=1) ###Output _____no_output_____ ###Markdown Plotting Helpers ###Code # Plotting Helpers def plot_latent_train_test(train_scores, test_scores, title=None): train_data = pd.DataFrame( {'phase': np.asarray(['train'] * train_scores[0].shape[0]).astype(str)}) x_vars=[f'X dimension {f}' for f in range(1,train_scores[0].shape[1]+1)] y_vars=[f'Y dimension {f}' for f in range(1,train_scores[1].shape[1]+1)] train_data[x_vars] = train_scores[0] train_data[y_vars] = train_scores[1] test_data = pd.DataFrame( {'phase': np.asarray(['test'] * test_scores[0].shape[0]).astype(str)}) test_data[x_vars] = test_scores[0] test_data[y_vars] = test_scores[1] data = pd.concat([train_data, test_data], axis=0) cca_pp = sns.pairplot(data, hue='phase',x_vars=x_vars,y_vars=y_vars, corner=True) cca_pp.fig.set_size_inches(10,5) if title: cca_pp.fig.suptitle(title) latent_dims=len(x_vars) train_corrs=np.diag(np.corrcoef(train_scores[0],train_scores[1],rowvar=False)[:latent_dims,latent_dims:]) test_corrs=np.diag(np.corrcoef(test_scores[0],test_scores[1],rowvar=False)[:latent_dims,latent_dims:]) train_corr_data=pd.DataFrame({'correlation':train_corrs,'dimension':np.arange(latent_dims)+1,'phase': np.asarray(['train'] * latent_dims).astype(str)}) test_corr_data=pd.DataFrame({'correlation':test_corrs,'dimension':np.arange(latent_dims)+1,'phase': np.asarray(['test'] * latent_dims).astype(str)}) corr_data = pd.concat([train_corr_data, test_corr_data], axis=0) # setting the dimensions of the plot fig2, ax = plt.subplots(figsize=(cca_pp.fig.get_size_inches()[0],cca_pp.fig.get_size_inches()[1])) cca_bp=sns.barplot(x="dimension", y="correlation", hue="phase", data=corr_data,ax=ax) ###Output _____no_output_____ ###Markdown Make Data Choose the parameters of the data ###Code # @markdown Execute this cell to choose parameters! 
style = {'description_width': 'initial'} N_train= widgets.IntSlider(value=100,min=20,max=500,description='Train Samples',style=style,continuous_update=False) N_test= widgets.IntSlider(value=100,min=20,max=500,description='Test Samples',style=style,continuous_update=False) X_features=widgets.IntSlider(value=100,min=20,max=500,description='X_features',style=style,continuous_update=False) Y_features=widgets.IntSlider(value=100,min=20,max=500,description='Y_features',style=style,continuous_update=False) latent_dims=widgets.IntSlider(value=1,min=1,max=5,description='Latent Dimensions',style=style,continuous_update=False) def generate_data(N_train,N_test,X_features,Y_features,latent_dims): (X,Y),_=generate_covariance_data(N_train+N_test,view_features=[X_features,Y_features],latent_dims=latent_dims,correlation=1,decay=0.9) X_tr,X_te,Y_tr,Y_te=train_test_split(X,Y,train_size=N_train) return (X_tr,X_te,Y_tr,Y_te) out=widgets.interactive(generate_data, N_train=N_train,N_test=N_test,X_features=X_features,Y_features=Y_features,latent_dims=latent_dims) display(out) ###Output _____no_output_____ ###Markdown Change the ammount of regularisation from 0 (CCA) to 1 (PLS)In order to have more sensitivity closer to 1 we subtract the widget value from 1! The title in the figure gives the ammount of regularisation used by the modelThe model and plot will update when the mouse is released. There's a bit of a lag as the model needs to fit in the background! ###Code # @markdown Execute this cell to change model regularisation X_tr,X_te,Y_tr,Y_te=out.result[0],out.result[1],out.result[2],out.result[3] style = {'description_width': 'initial'} c=widgets.FloatLogSlider(value=1-1e-3,base=10, min=-5, max=0,description='1 minus c',readout=True,readout_format='.5f',style=style,continuous_update=False) def interactive_cca(c): rcca=rCCA(latent_dims=latent_dims.value,c=1-c).fit(X_tr,Y_tr) test_scores=rcca.transform(X_te,Y_te) plot_latent_train_test(rcca.scores,test_scores,f'Pair plot of latent dimensions for train and test data c={1-c:.5f}') plot_widget=widgets.interactive(interactive_cca, c=c) display(plot_widget) ###Output _____no_output_____
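###Markdown The same comparison can also be run without widgets. A minimal sketch, assuming the data split produced above (`X_tr`, `X_te`, `Y_tr`, `Y_te`) and a single latent dimension: fit `rCCA` over a few regularisation values and report the first-dimension test correlation, which illustrates how the amount of regularisation affects generalisation. ###Code
import numpy as np
from cca_zoo.models import rCCA

for c in (0.5, 0.9, 0.99, 0.999):
    model = rCCA(latent_dims=1, c=c).fit(X_tr, Y_tr)
    test_scores = model.transform(X_te, Y_te)
    corr = np.corrcoef(test_scores[0][:, 0], test_scores[1][:, 0])[0, 1]
    print(f"c = {c}: first-dimension test correlation = {corr:.3f}")
###Output _____no_output_____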
Exercise_ Analytic Functions.ipynb
###Markdown **[Advanced SQL Home Page](https://www.kaggle.com/learn/advanced-sql)**--- IntroductionHere, you'll use window functions to answer questions about the [Chicago Taxi Trips](https://www.kaggle.com/chicago/chicago-taxi-trips-bq) dataset.Before you get started, run the code cell below to set everything up. ###Code # Set up feedback system from learntools.core import binder binder.bind(globals()) from learntools.sql_advanced.ex2 import * print("Setup Complete") ###Output Using Kaggle's public dataset BigQuery integration. Setup Complete ###Markdown The following code cell fetches the `taxi_trips` table from the `chicago_taxi_trips` dataset. We also preview the first five rows of the table. You'll use the table to answer the questions below. ###Code from google.cloud import bigquery # Create a "Client" object client = bigquery.Client() # Construct a reference to the "chicago_taxi_trips" dataset dataset_ref = client.dataset("chicago_taxi_trips", project="bigquery-public-data") # API request - fetch the dataset dataset = client.get_dataset(dataset_ref) # Construct a reference to the "taxi_trips" table table_ref = dataset_ref.table("taxi_trips") # API request - fetch the table table = client.get_table(table_ref) # Preview the first five lines of the table client.list_rows(table, max_results=5).to_dataframe() ###Output Using Kaggle's public dataset BigQuery integration. ###Markdown Exercises 1) How can you predict the demand for taxis?Say you work for a taxi company, and you're interested in predicting the demand for taxis. Towards this goal, you'd like to create a plot that shows a rolling average of the daily number of taxi trips. Amend the (partial) query below to return a DataFrame with two columns:- `trip_date` - contains one entry for each date from January 1, 2016, to December 31, 2017.- `avg_num_trips` - shows the average number of daily trips, calculated over a window including the value for the current date, along with the values for the preceding 15 days and the following 15 days, as long as the days fit within the two-year time frame. For instance, when calculating the value in this column for January 5, 2016, the window will include the number of trips for the preceding 4 days, the current date, and the following 15 days.This query is partially completed for you, and you need only write the part that calculates the `avg_num_trips` column. Note that this query uses a common table expression (CTE); if you need to review how to use CTEs, you're encouraged to check out [this tutorial](https://www.kaggle.com/dansbecker/as-with) in the [Intro to SQL](https://www.kaggle.com/learn/intro-to-sql) micro-course. ###Code # Fill in the blank below avg_num_trips_query = """ WITH trips_by_day AS ( SELECT DATE(trip_start_timestamp) AS trip_date, COUNT(*) as num_trips FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips` WHERE trip_start_timestamp >= '2016-01-01' AND trip_start_timestamp < '2018-01-01' GROUP BY trip_date ORDER BY trip_date ) SELECT trip_date, AVG(num_trips) OVER ( ORDER BY trip_date ROWS BETWEEN 15 PRECEDING AND 15 FOLLOWING ) AS avg_num_trips FROM trips_by_day """ # Check your answer q_1.check() ###Output _____no_output_____ ###Markdown 2) Can you separate and order trips by community area?The query below returns a DataFrame with three columns from the table: `pickup_community_area`, `trip_start_timestamp`, and `trip_end_timestamp`. 
Amend the query to return an additional column called `trip_number` which shows the order in which the trips were taken from their respective community areas. So, the first trip of the day originating from community area 1 should receive a value of 1; the second trip of the day from the same area should receive a value of 2. Likewise, the first trip of the day from community area 2 should receive a value of 1, and so on.Note that there are many numbering functions that can be used to solve this problem (depending on how you want to deal with trips that started at the same time from the same community area); to answer this question, please use the **RANK()** function. ###Code # Amend the query below trip_number_query = """ SELECT pickup_community_area, trip_start_timestamp, trip_end_timestamp, RANK() OVER ( PARTITION BY pickup_community_area ORDER BY trip_start_timestamp ) AS trip_number FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips` WHERE DATE(trip_start_timestamp) = '2017-05-01' """ trip_number_result = client.query(trip_number_query).result().to_dataframe() # Check your answer q_2.check() ###Output _____no_output_____ ###Markdown 3) How much time elapses between trips?The (partial) query in the code cell below shows, for each trip in the selected time frame, the corresponding `taxi_id`, `trip_start_timestamp`, and `trip_end_timestamp`. Your task in this exercise is to edit the query to include an additional `prev_break` column that shows the length of the break (in minutes) that the driver had before each trip started (this corresponds to the time between `trip_start_timestamp` of the current trip and `trip_end_timestamp` of the previous trip). Partition the calculation by `taxi_id`, and order the results within each partition by `trip_start_timestamp`.Some sample results are shown below, where all rows correspond to the same driver (or `taxi_id`). Take the time now to make sure that the values in the `prev_break` column make sense to you!![first_commands](https://i.imgur.com/qjvQzg8.png)Note that the first trip of the day for each driver should have a value of **NaN** (not a number) in the `prev_break` column. ###Code # Fill in the blanks below break_time_query = """ SELECT taxi_id, trip_start_timestamp, trip_end_timestamp, TIMESTAMP_DIFF( trip_start_timestamp, LAG(trip_end_timestamp, 1) OVER ( PARTITION BY taxi_id ORDER BY trip_start_timestamp), MINUTE) as prev_break FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips` WHERE DATE(trip_start_timestamp) = '2017-05-01' """ break_time_result = client.query(break_time_query).result().to_dataframe() # Check your answer q_3.check() ###Output _____no_output_____
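###Markdown As a side note (not one of the graded questions), the same windowing pattern works in the forward direction: **LEAD()** fetches values from following rows, so the break *after* each trip can be computed analogously to `prev_break`. A hedged sketch reusing the same client and table: ###Code
# Illustrative variation on the query above: time until the driver's next trip
next_break_query = """
                   SELECT taxi_id,
                          trip_start_timestamp,
                          trip_end_timestamp,
                          TIMESTAMP_DIFF(
                              LEAD(trip_start_timestamp, 1)
                                  OVER (PARTITION BY taxi_id ORDER BY trip_start_timestamp),
                              trip_end_timestamp,
                              MINUTE) as next_break
                   FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
                   WHERE DATE(trip_start_timestamp) = '2017-05-01'
                   """

next_break_result = client.query(next_break_query).result().to_dataframe()
next_break_result.head()
###Output _____no_output_____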
Basics of Julia programming language/1-JuliaBasics.ipynb
###Markdown 1. Basic JuliaIn the first section we will overview basic Julia syntax, data structures, etc. If you are feel quite advanced in programming already feel free to wonder around until we get to the second section, which is about writing performant code. Syntax and unicode1. Assingment in Julia is done with the `=` sign. - You can assign *anything* to a variable binding.- The variable names can include pretty much any Unicode character.- Strings are created between double quotes `"` while the single quotes `'` are used for characters only.- Most Julia editors offer "LaTeX Completion". Pressing e.g. `\delta` and then TAB will create the corresponding Unicode character using the LaTeX syntax.- **All Julia operations return a value**. This includes assingment.Here are some examples: ###Code x = 3 y = x ^ 2.5 # powers with ^ δ = "αbasf" # type \delta and then press TAB 😺, 😀, 😞 = 1, 0, -1 # do e.g. \:smile: <TAB> 😺 + 😞 == 😀 ###Output _____no_output_____ ###Markdown Since assignement returns the value, by default this value is printed. This is **AMAZING**, but you can also silence printing by adding `;` to the end of the expression: ###Code x = 3; ###Output _____no_output_____ ###Markdown Most julia operators have their `=` version, which updates something with its own value ###Code x += 3 # x = x + 3 x -= 3 x *=2 x /= 2 ###Output _____no_output_____ ###Markdown Literal numbers can multiply anything without having to put `*` inbetween, as long as the number is on the left side: ###Code 5x - 12.54y * 1.2e-5x ###Output _____no_output_____ ###Markdown ---Everything that exists in Julia has a certain **Type**. (e.g. numbers can be integers, floats, rationals). This is extremely important, and a core reason of the language's performance (more on Types on the second session).To find the type of a thing in Julia you simply use `typeof(thing)`: ###Code typeof(😺) typeof(1.5) typeof("asdf") ###Output _____no_output_____ ###Markdown Lastly, you can interpolate any expression into a string using `$(expression)` ###Code "the value of the cat face (😺) is $(😺)" "I am doing math inside a string: $(y^2 - x)" ###Output _____no_output_____ ###Markdown Basic collectionsIndexing a collection (like an array or a dictionary) in Julia is done with brackets: `collection[index]`. The `index` is typically an integer, although some structures can have arbitrary indices (like a dictionary), and you can define any indexing type for any collection you want (bit advanced tho').**Julia indexing starts from 1** TuplesTuples are ordered immutable collections of elements. They are mostly used when the elements are not of the same type with each other and are intended for small collections.Syntax:```julia(item1, item2, ...)``` ###Code myfavoritethings = ("purple", "cats", π) myfavoritethings[1] ###Output _____no_output_____ ###Markdown NamedTuplesThese are exactly like tuples but also assign a name to each variable they contain. They rest between the `Tuple` and `Dict` type in their use.Their syntax is:```julia(key1 = val1, key2 = val2, ...)```For example: ###Code nt = (x = 5, y = "str", z = 5/3) ###Output _____no_output_____ ###Markdown These objects can be accessed with `[1]` like normal tuples, but also with the syntax `.key`: ###Code nt[1], nt[2] nt.x , nt[:z], nt.y ###Output _____no_output_____ ###Markdown DictionariesDictionaries are unordered mutable collections of pairs key-value. They are intended for sets of relational data, and typically you want the data to be of the same type. 
Dictionaries have a specific type for keys and values.Syntax:```juliaDict(key1 => value1, key2 => value2, ...)```A good example of a dictionary is a contacts list, where we associate names with phone numbers.remember that all the values of respective keys must be of same type ###Code myphonebook = Dict("Jenny" => "867-5309", "Ghostbusters" => "555-2368") myphonebook["Jenny"] ###Output _____no_output_____ ###Markdown New entries can be added to the above dictionary, because it is mutable *(I will talk in more detail about mutability in a moment, but for now mutable means that "you can change its values")*. The key of the entry must be of type `String` and the value of the entry must be of type `String`, because these are the types in the original dictionary. ###Code myphonebook["BuzzLightyear"] = "∞ and beyond" myphonebook # this displays the phonebook ###Output _____no_output_____ ###Markdown ArraysThe standard Julia `Array` is a mutable and ordered collection of items of the same type.The dimensionality of the Julia array is important. A `Matrix` is an array of dimension 2. A `Vector` is an array of dimension 1. The *element type* of what an array contains is irrelevant to its dimension!**i.e. a Vector of Vectors of Numbers and a Matrix of Numbers are two totally different things!**Syntax: ```julia[item1, item2, ...]```For example: ###Code myfriends = ["Ted", "Robyn", "Barney", "Lily", "Marshall"] fibonacci = [1, 1, 2, 3, 5, 8, 13] mixture = [1, 1, 2, 3, "Ted", "Robyn"] ###Output _____no_output_____ ###Markdown As mentioned, the type of the elements of an array must be the same. Yet above we mix numbers with strings! I wasn't lying though; the above array is an **unoptimized** version that can hold **any** thing. You can see this in the "type" of the array, whicn is to the left of its dimenmsionality: `Array{TypeOfTheThings, Dimensionality}`. For `mixture` it is `Any`. Arrays of other data structures, e.g. vectors or dictionaries, or anything, as well as multi-dimensional arrays are possible: ###Code vec_vec_num = [[1, 2, 3], [4, 5], [6, 7, 8, 9]] ###Output _____no_output_____ ###Markdown If you want to make a matrix, two ways are the most common: (1) specify each entry one by one ###Code matrix = [1 2 3; # elements in same row separated by space 4 5 6; # semicolon means "go to next row" 7 8 9] ###Output _____no_output_____ ###Markdown (2) you use a function that initializes a matrix. E.g. `rand(n, m)` will create an `n×m` matrix with uniformly random numbers ###Code R = rand(4, 3) R[1,2] # two dimensional indexing ###Output _____no_output_____ ###Markdown Since arrays are mutable we can change their entries, or even add new ones: ###Code fibonacci = [1, 1, 2, 3, 5, 8, 13] fibonacci[1] = 15 fibonacci push!(fibonacci, 16) fibonacci ###Output _____no_output_____ ###Markdown The functions `push!` and `pop!` are useful with arrays, as they allow you to "add one element at the end" of the array or "remove the last element" of the array. Lastly, for multidimension arrays, the `:` symbol is useful, which means to "select all elements in this dimension". ###Code x = rand(3,3) x[:, 1] # it means to select the first column ###Output _____no_output_____ ###Markdown RangesRanges are useful shorthand notations that define a "vector" (one dimensional array). 
This is done with the following syntax:```start:step:endrange(start, end; length = ...)range(start, end; step = ...)``` ###Code r = 0:0.01:5 ###Output _____no_output_____ ###Markdown Ranges are not unique to numeric data, and can be used with anything that extends their interface, e.g. ###Code letterrange = 'a':'z' letterrange[2], letterrange[20] ###Output _____no_output_____ ###Markdown As ranges are printed in this short form, to see all their elements you can use `collect`, to transform the range into a `Vector`. ###Code collect(letterrange) ###Output _____no_output_____ ###Markdown It is important to understand that ranges **do not store all elements in memory** like `Vector`s. Instead they produce the elements on the fly when necessary, and therefore are in general preferred over `Vector`s if the data is equi-spaced. Lastly, ranges are typically used to index into arrays. One can type `A[1:3]` to get the first 3 elements of `A`, or `A[end-2:end]` to get the last three elements of `A`. If `A` is multidimensional, the same type of indexing can be done for any dimension: ###Code A = rand(4, 4) A[1:3, 1] A[1:3, 1:3] ###Output _____no_output_____ ###Markdown List comprehensionThe list comprenhension syntax `[expression(a) for a in collection if condition(a)]` is available to make a vector. The `if` part is optional. ###Code [a^2 for a in 1:10 if isodd(a)] ###Output _____no_output_____ ###Markdown IterationIteration in Julia is high-level. This means that not only it has an intuitive and simple syntax, but also iteration works with anything that can be iterated. Iteration can also be extended (more on that later). `for` loopsA `for` loop iterates over a container and executes a piece of code, until the iteration has gone through all the elements of the container. The syntax for a `for` loop is```juliafor *var* in *loop iterable* *loop body*end```*you will notice that all Julia code-blocks end with `end`* ###Code for n ∈ 1:5 println(n) end ###Output 1 2 3 4 5 ###Markdown The nature of `var` depends on what the iterating container has. For example, when iterating over a dictionary one iterates over pairs of key-value. ###Code # for (key, val) in myphonebook # pair in myphonebook # println("The number of $key is $val") # end for pair in myphonebook println("The number of $(pair[1]) is $(pair[2])") ##pretty cool end ###Output The number of Jenny is 867-5309 The number of BuzzLightyear is ∞ and beyond The number of Ghostbusters is 555-2368 ###Markdown Notice that `for pair in myphonebook` is also valid syntax. Then `pair[1]` would be the `key` and `pair[2]` would be the `val`. But Julia allows you to easily extract them on the header of the `for` loop, which also makes to code clearer. `while` loopsA `while` loop executes a code block until a boolean condition check (that happens at the start of the block) becomes `false`. Then the loop terminates (without executing the block again). The syntax for a standard `while` loop is```juliawhile *condition* *loop body*end``` ###Code n = 0 while n < 5 n += 1 println(n) end ###Output 1 2 3 4 5 ###Markdown ConditionalsConditionals execute a specific code block depending on what is the outcome of a given boolean check. Notice that as with other languages `&, |` are the boolean and and or operators. with `if`In Julia, the syntax```juliaif *condition 1* *option 1*elseif *condition 2* *option 2*else *option 3*end```evaluates the conditions sequentially and executes the code-block of the first true condition. 
###Code x, y = 5, 6 if x > y x else # all Julia code blocks by default # return the last executed expression y end ###Output _____no_output_____ ###Markdown with ternary operatorsFor this last block, we could instead use the ternary operator with the syntax```juliaa ? b : c```which equates to ```juliaif a belse cend```So the a here is the condition which is to be checked ###Code x > y ? x : y ###Output _____no_output_____ ###Markdown Notice that both the boolean conditions, as well as the conditional code blocks, can be nested and chained to arbitrary degree. E.g. this is normal syntax:```juliay > w ? 13 : z < x/2 ? 14 : 15```which is made more readable with using `()`:```juliay > w ? 13 : (z < x/2 ? 14 : 15)``` `break` and `continue`The keywords `continue` and `break` are often used with conditionals to skip an iteration or completely stop the iteration code block. ###Code N = 1:100 for n in N isodd(n) && continue println(n) n > 10 && break end ###Output 2 4 6 8 10 12 ###Markdown FunctionsFunctions are the bread and butter of Julia, which heavily supports functional programming.Functions are objects by themselves and can be used as any other Julia value.Functions are declared with two ways: ###Code function f(x) x^2 # all Julia code blocks by default # return the last executed expression end f(x) = x^2 # equivalent with above ###Output _____no_output_____ ###Markdown And called using their name and parenthesis `()` enclosing the calling arguments: ###Code f(5) ###Output _____no_output_____ ###Markdown Functions in Julia support optional positional arguments, as well as keyword arguments. The **positional** arguments are **always given by their order**, while **keyword** arguments are **always given by their keyword**. Keyword arguments are all the arguments defined in a function after the symbol `;`. Example: ###Code function g(x, y = 5; z = 2) x*z*y end g(5) # give x. default y, z g(5, 3) # give x, y. default z g(5; z = 3) # give x, z. default y g(2, 4; z = 1.5) # give everything g(2, 4, 2) # keyword arguments can't be specified by position ###Output _____no_output_____ ###Markdown Duck-typingJulia supports the "duck typing" approach. Simply put, functions work on whatever input makes sense given their operations. This can be restricted if need be. In our example with `g`, anything that supports the function `*` will work. ###Code A = rand(3, 3) g(A) g(A, A; z = A) g("string", "string"; z = "string") # * is string concatenation g("string", 5) ###Output _____no_output_____ ###Markdown Now we got an error because the operation `String*Number` is not supported by default in Julia. Passing by reference: mutating vs. non-mutating functionsMutable entities in Julia are passed by reference if possible. What does this mean? Well, you can divide Julia variables into two categories: **mutable** and **immutable**. Mutable means that the values of your data can be changed in-place, i.e. literally in the place in memory the variable is stored in the computer. Immutable data cannot be changed after creation, and thus the only way to change part of immutable data is to actually make a brand new immutable object from scratch. Use `isimmutable(v)` to check if value `v` is immutable or not.For example, `Vector`s are mutable in Julia: ###Code x = [5, 5, 5] x[1] = 6 # change first entry of x x ###Output _____no_output_____ ###Markdown But e.g. `Tuple`s are immutable: ###Code x = (5, 5, 5) x[1] = 6 x = (6, 5, 5) ###Output _____no_output_____ ###Markdown Julia **passes values by reference**. 
This means that if a mutable object is given to a function, and this object is mutated inside the function, the final result is kept at the passed object. E.g.: ###Code function add3!(x) x[1] += 3 return x end x = [5, 5, 5] add3!(x) x ###Output _____no_output_____ ###Markdown **By convention**, functions with name ending in `!` alter their (mutable) arguments and functions lacking `!` do not. Typically the first argument of a function that ends in `!` is mutated.For example, let's look at the difference between `sort` and `sort!`. ###Code v = [3, 5, 2] sort(v) v ###Output _____no_output_____ ###Markdown `sort(v)` returns a sorted array that contains the same elements as `v`, but `v` is left unchanged. On the other hand, when we run `sort!(v)`, the contents of v are sorted within the array `v`. ###Code sort!(v) v ###Output _____no_output_____ ###Markdown The help systemTyping `?` followed by a function (or type) name will display its documentation string ###Code ?typeof ###Output search: typeof typejoin TypeError ###Markdown BroadcastingJulia compiles efficient machine code that is typically fast. This means that you don't have to vectorize your code; standard `for` loops are fast (sometimes *faster* than vectorized code).However the need often arises to apply some operations across multiple arguments element-wise. This is called **broadcasting**.In Julia broadcasting is possible for any function, basic or user-defined. It is done via the simple syntax of adding a dot `.` before the parenthesis in the function call: `g.(x)`. Let's recall the previously defined `g`, now without keywords (broadcasting does not work on keywords) ###Code function h(x, y = 5, z = 2) x*y*z end ###Output _____no_output_____ ###Markdown Let's now apply it to a vector `x` ###Code x = [1, 2, 3] h.(x) ###Output _____no_output_____ ###Markdown Let's now use vectors for both `y` and `z`: ###Code y, z = rand(3), rand(3) h.(x, y, z) ###Output _____no_output_____ ###Markdown Same works even with matrices, or any other iterable. Julia will try to automatically deduce what is broadcastable and what should be the size of the output. E.g.: ###Code x = fill(3, (3,3)) y, z = 1:3, 0:2 h.(x, y, z) ###Output _____no_output_____ ###Markdown If one wants to make an exponential range, they just use broadcasting, e.g. ###Code e = 10.0 .^ (-3:3) ###Output _____no_output_____ ###Markdown *(notice that for infix operators (like `+, -`) the `.` is put before the operator)* 2. PerformanceJulia matches the performance of C/Fortran not because of better hardware or better compilers than Python has, but because of design. Julia is interactive like Python, but it is not an interpreted language, it is a **compiled** one. This means that **every function call in Julia first gets compiled, based on the exact input types**. Then it is executed.*(this compilation only happens once for each unique combination of input types)*Earlier we mentioned that everything in Julia has a `Type` associated with it. When the compiler compiles a function, these types of every variable can be tracked throughout the function and all datastructures are mapped uniquely and deterministically all the way from input to output. This allows the compiler to make all the optimizations that e.g. the compilation of a C language code would do. And this (in a nutshell) is what results in the same performance as C/Fortran. Type stabilityThis tracking of types mentioned above only works if **the type of every variable remains the same type throughout the function's operations**. 
Notice the distinction: the _type_ (i.e. all floating point variables remain floats) is constant, but of course the _value_ could change (i.e. going from `515134.515` to `123415.242` is fine).What if this doesn't happen? Then we have the case of **Type instability**, which is what makes beginner's code slow in 99% of the cases. Let's look at the following illustrative scenario: ###Code function unstable() x = 1 for i = 1:10 x /= rand() end return x end function stable() x = 1.0 for i = 1:10 x /= rand() end return x end ###Output _____no_output_____ ###Markdown Before we run the code we can put things into context, by asking ###Code (typeof(1.0), typeof(1), typeof(1/rand())) using BenchmarkTools # Julia package for advanced (and more accurate) benchmarking @btime unstable(); @btime stable(); ###Output 132.244 ns (0 allocations: 0 bytes) 131.001 ns (0 allocations: 0 bytes) ###Markdown The reason the `stable` version is faster than the `unstable` is because the type of `x` throughtout the function is not constant. It goes from `Int` at its definition `x = 1` to a `Float64` in the operation `x = x / rand()`, since by definition in Julia `Int / Float64` gives `Float64`.*Quick notice: this type instability problem becomes much, much worse if the instability happens in more than one variable, if the instability happens with more than 2 types, or if the types involved in the instability are much more complicated.* ScopesIn general Julia has two scopes: global scope (the one we use here, in this notebook) and local scope. Local scope is introduced by most code blocks, e.g. functions, `for` or `while` loops but *not* from conditional code blocks (`if`). The details of the scopes are mostly relevant for package development and can be found in the [Julia manual](https://docs.julialang.org/en/latest/manual/variables-and-scoping/). What is important for us is that by definition, **everything in global scope is type-unstable** and thus not performant. This happens because Julia is not a statically typed language, but a dynamically typed one. Therefore one can do ###Code x = 5 x = "string" ###Output _____no_output_____ ###Markdown which is not possible in e.g. C.The performance that global scope has in code is truly massive: ###Code x, y = rand(1000), rand(1000) a = 0.0 @btime for i in 1:length(x) global a += x[i]^2 + y[i]^2 end function localf(x, y) a = zero(eltype(x)) for i in 1:length(x) a += x[i]^2 + y[i]^2 end return a end @btime localf(x, y); ###Output 348.900 μs (7980 allocations: 140.33 KiB) 2.389 μs (1 allocation: 16 bytes) ###Markdown Conclusions so far1. **Put all performance critical parts of your code inside a function** to avoid global scope2. **Ensure that your functions are type-stable** AllocationAnother thing that is important for performance is allocation. What must be understood is that when one writes ###Code x = rand(2, 2) ###Output _____no_output_____ ###Markdown this *allocates* some part of your memory to store this **mutable** container that `x` represents. Creating mutable things always allocates memory. In general when you are creating something mutable you always pay two costs:1. the cost to actually calculate the numbers that go into this thing (here e.g. the cost of calling `rand()`)2. 
the cost to allocate some memory to store 1000 numbers of type `Float64`.In general you should try to avoid allocations, by more clever design of your algorithms and pre-allocating as much as possible, as is instructed by this section of [Julia's performance tips](https://docs.julialang.org/en/latest/manual/performance-tips/Pre-allocating-outputs-1). Example: using `mul!` for matrix multiplicationIf `A, B` are two square matrices (of same size), then `A*B` will make a *new* matrix. However, the function `mul!(C, A, B)` will not make a new one and instead write the result in-place in `C`. Here is an example that demonstrates how important avoiding allocations really is for performance: ###Code using LinearAlgebra function randmul(n) A = fill(rand(), n, n); B = fill(rand(), n, n) return sum(A*B) end function randmul!(C, A, B) fill!(A, rand()); fill!(B, rand()) mul!(C, A, B) return sum(C) end n = 100; A = fill(rand(), n, n) B = copy(A); C = copy(A); (randmul(n), randmul!(C, A, B)) using BenchmarkTools @btime randmul($n); @btime randmul!($C, $A, $B); ###Output 111.600 μs (6 allocations: 234.61 KiB) 107.000 μs (0 allocations: 0 bytes)
006-Data-Augmentation-Fashion.ipynb
###Markdown Data Augementation - Fashion MNIST ###Code import numpy as np import keras import tensorflow as tf import matplotlib.pyplot as plt % matplotlib inline import vis import pandas as pd ###Output Using TensorFlow backend. ###Markdown Get Data ###Code from keras.datasets import fashion_mnist (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data() labels = vis.fashion_mnist_label() ###Output _____no_output_____ ###Markdown **Step 1: Prepare the images and labels** ###Code # Reshape data for convlution network x_train_conv = x_train.reshape(x_train.shape[0], 28, 28, 1) x_test_conv = x_test.reshape(x_test.shape[0], 28, 28, 1) input_shape = (28, 28, 1) # Convert from 'uint8' to 'float32' and normalise the data to (0,1) x_train_conv = x_train_conv.astype("float32") / 255 x_test_conv = x_test_conv.astype("float32") / 255 # convert class vectors to binary class matrices y_train_class = keras.utils.to_categorical(y_train, 10) y_test_class = keras.utils.to_categorical(y_test, 10) ###Output _____no_output_____ ###Markdown Data Augmentation ###Code from keras.preprocessing.image import ImageDataGenerator # this will do preprocessing and realtime data augmentation datagen = ImageDataGenerator( featurewise_center=False, # set input mean to 0 over the dataset samplewise_center=False, # set each sample mean to 0 featurewise_std_normalization=False, # divide inputs by std of the dataset samplewise_std_normalization=False, # divide each input by its std rotation_range=25, # randomly rotate images in the range (degrees, 0 to 180) width_shift_range=0.1, # randomly shift images horizontally (fraction of total width) height_shift_range=0.1, # randomly shift images vertically (fraction of total height) horizontal_flip=False, # randomly flip images vertical_flip=False) # randomly flip images # Set up the generator datagen.fit(x_train_conv) ###Output _____no_output_____ ###Markdown Let us see the image augmentation ###Code x_train_conv[:1].shape datagen.fit(x_train_conv[:1]) samples = datagen.flow(x_train_conv[:1]) vis.imshow(x_train_conv[:1].squeeze()) range(3) image image = [] for i in range(3): img = samples.next() img = img.squeeze() image.append(vis.imshow(img)) image[0] | image[1] | image[2] ###Output _____no_output_____ ###Markdown **Step 2: Build a CNN Model** ###Code from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense cnn = Sequential() cnn.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28,28,1))) cnn.add(MaxPooling2D(pool_size=(2, 2))) cnn.add(Conv2D(64, kernel_size=(3, 3), activation='relu')) cnn.add(MaxPooling2D(pool_size=(2, 2))) cnn.add(Dropout(0.25)) cnn.add(Conv2D(128, kernel_size=(3, 3), activation='relu')) cnn.add(Dropout(0.25)) cnn.add(Flatten()) cnn.add(Dense(128, activation='relu')) cnn.add(Dropout(0.25)) cnn.add(Dense(10, activation='softmax')) cnn.summary() ###Output _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_4 (Conv2D) (None, 26, 26, 32) 320 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 13, 13, 32) 0 _________________________________________________________________ conv2d_5 (Conv2D) (None, 11, 11, 64) 18496 _________________________________________________________________ max_pooling2d_4 (MaxPooling2 (None, 5, 5, 64) 0 _________________________________________________________________ dropout_4 (Dropout) (None, 5, 5, 
64) 0 _________________________________________________________________ conv2d_6 (Conv2D) (None, 3, 3, 128) 73856 _________________________________________________________________ dropout_5 (Dropout) (None, 3, 3, 128) 0 _________________________________________________________________ flatten_2 (Flatten) (None, 1152) 0 _________________________________________________________________ dense_3 (Dense) (None, 128) 147584 _________________________________________________________________ dropout_6 (Dropout) (None, 128) 0 _________________________________________________________________ dense_4 (Dense) (None, 10) 1290 ================================================================= Total params: 241,546 Trainable params: 241,546 Non-trainable params: 0 _________________________________________________________________ ###Markdown **Step 3: Compile the Model and Fit** ###Code cnn.compile(loss='categorical_crossentropy', optimizer="adam", metrics=['accuracy']) # fits the model on batches with real-time data augmentation: history = cnn.fit_generator(datagen.flow(x_train_conv, y_train_class, batch_size=32), validation_data=(x_test_conv, y_test_class), use_multiprocessing=True, steps_per_epoch=len(x_train_conv) / 32, epochs=5) ###Output Epoch 1/5 1875/1875 [==============================] - 106s 57ms/step - loss: 0.7674 - acc: 0.7090 - val_loss: 0.5192 - val_acc: 0.8016 Epoch 2/5 1875/1875 [==============================] - 106s 56ms/step - loss: 0.5600 - acc: 0.7865 - val_loss: 0.4407 - val_acc: 0.8306 Epoch 3/5 1875/1875 [==============================] - 144s 77ms/step - loss: 0.5035 - acc: 0.8083 - val_loss: 0.3897 - val_acc: 0.8584 Epoch 4/5 1875/1875 [==============================] - 172s 92ms/step - loss: 0.4646 - acc: 0.8260 - val_loss: 0.3812 - val_acc: 0.8600 Epoch 5/5 1875/1875 [==============================] - 193s 103ms/step - loss: 0.4420 - acc: 0.8354 - val_loss: 0.3550 - val_acc: 0.8705 ###Markdown **Step 4: Check the performance of the model** ###Code vis.metrics(history.history) score = cnn.evaluate(x_test_conv, y_test_class) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ###Output 10000/10000 [==============================] - 9s 931us/step Test loss: 0.35502173528671266 Test accuracy: 0.8705 ###Markdown **Step 5: Make & Visualise the Prediction** ###Code predict_classes_cnn = cnn.predict_classes(x_test_conv) pd.crosstab(y_test, predict_classes_cnn) proba_cnn = cnn.predict_proba(x_test_conv) i = 5 vis.imshow(x_test[i], labels[y_test[i]]) | vis.predict(proba_cnn[i], y_test[i], labels) ###Output _____no_output_____
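###Markdown Beyond the overall accuracy and the crosstab, a per-class breakdown shows which garment classes the augmented model still struggles with. A small illustrative sketch, reusing `y_test`, `predict_classes_cnn` and the `labels` mapping from above: ###Code
import numpy as np

# Per-class accuracy from the class predictions computed in Step 5
for cls in range(10):
    mask = (y_test == cls)
    acc = np.mean(predict_classes_cnn[mask] == cls)
    print(f"{labels[cls]}: {acc:.3f}")
###Output _____no_output_____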
notebooks/Modeling_for_BW4.ipynb
###Markdown ALBERT IMPLEMENTATION ###Code %%capture !pip install bert-for-tf2 !pip install sentencepiece from sklearn.model_selection import train_test_split import pandas as pd import tensorflow as tf import tensorflow_hub as hub from tensorflow.keras.models import Model import bert from bert.tokenization.bert_tokenization import FullTokenizer ###Output _____no_output_____ ###Markdown Data Loading `train.tsv` and `dev.tsv` are the training and validation sets with 4 columns of data each. - `guid`: a generic identifier for each observation derived from the URL the reddit post was retrieved from - `label`: an encoded label of each subreddit, from 0 to 1012. Label encoding is mapped in a separate file. - `text_b`: an optional field that is currently populated with "a" for all observations. Only used for sequential text classification/prediction. - `text_a`: The untokenized text that is to be used for multi-class classification training. `test.tsv` is a set of data formatted similarly to the `train.tsv` and `dev.tsv` set; however, it only contains `guid` and `text_a` columns. `encoding_maps` is a csv that contains the corresponding target labels for observations in the training datasets. Generated using sklearn LabelEncoder. Has two columns, `label` and `subreddit`. ###Code # Import data from .tsv files train = pd.read_csv('train.tsv', sep='\t', names=['guid','label','text_b','text_a']) dev = pd.read_csv('dev.tsv', sep='\t', names=['guid','label','text_b','text_a']) test = pd.read_csv('test.tsv', sep='\t', names=['guid', 'text_a']) # Load encoding maps encoding_maps = pd.read_csv('encoding_maps.csv', engine='python') train.head() ###Output _____no_output_____ ###Markdown Tokenization ###Code import sentencepiece as spm spm_model = os.path.join(model_dir, "assets", "30k-clean.model") sp = spm.SentencePieceProcessor() sp.load(spm_model) do_lower_case = True processed_text = bert.albert_tokenization.preprocess_text("Hello, World!", lower=do_lower_case) token_ids = bert.albert_tokenization.encode_ids(sp, processed_text) ###Output _____no_output_____ ###Markdown I think the code below is for a different implementation of BERT, not the one I'm currently using ###Code # GUID = 'guid' # DATA_COLUMN = 'text_a' # LABEL_COLUMN = 'label' # train_InputExamples = train.apply(lambda x: bert.run_classifier.InputExample(guid=GUID, # text_a = x[DATA_COLUMN], # text_b = None, # label = x[LABEL_COLUMN]), axis = 1) # val_InputExamples = val.apply(lambda x: bert.run_classifier.InputExample(guid=GUID, # text_a = x[DATA_COLUMN], # text_b = None, # label = x[LABEL_COLUMN]), axis = 1) ###Output _____no_output_____ ###Markdown ALBERT Modeling ###Code # Write a function to load ALBERT def load_bert(albert_name): """ Loads ALBERT pretrained model from TFHub Input: albert_name (str), name of ALBERT model to load Returns: albert_params (dict), loaded params to be used to build model """ model_name = albert_name # Fetch ALBERT from TFHub albert_dir = bert.fetch_tfhub_albert_model(model_name, ".models") # Load ALBERT params albert_params = bert.albert_params(model_name) # Print status of loaded model print("Model Name:", model_name) print("Model Directory", model_dir) return albert_params albert_params = load_bert('albert_base') # Write a function to build ALBERT model def build_bert(albert_params): """ Input: dictionary of BERT params """ bert_layer = bert.BertModelLayer.from_params(albert_params, name="albert") l_input_ids = keras.layers.Input(shape=(128,), dtype='int32', name="input_ids") l_token_type_ids = 
keras.layers.Input(shape=(128,), dtype='int32', name="token_type_ids") output = l_bert([l_input_ids, l_token_type_ids]) output = keras.layers.Lambda(lambda x: x[:, 0, :])(output) output = keras.layers.Dense(2)(output) model = keras.Model(inputs=[l_input_ids, l_token_type_ids], outputs=output) model.build(input_shape=(None, 128)) model.compile(optimizer=keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")]) for weight in l_bert.weights: print(weight.name) model.summary() return model, bert_layer #huggingface ###Output _____no_output_____ ###Markdown Classification using a Feed-Forward Neural NetworkAdapted from https://www.tensorflow.org/hub/tutorials/tf2_text_classification ###Code import tensorflow as tf model = "https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1" hub_layer = hub.KerasLayer(model, output_shape=[128], input_shape=[], dtype=tf.string, trainable=True) model = tf.keras.Sequential() model.add(hub_layer) # Increase nodes in hidden layer model.add(tf.keras.layers.Dense(256, activation='relu')) model.add(tf.keras.layers.Dense(256, activation='relu')) model.add(tf.keras.layers.Dense(1013, activation='softmax')) model.summary() top_k = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=5, name='sparse_top_k_categorical_accuracy', dtype=None) model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[tf.keras.metrics.SparseCategoricalAccuracy(name="acc"), top_k]) model.fit(train['text_a'].values, train['label'].values, batch_size=1024, epochs=1000) # adjust batch size #model.save ###Output _____no_output_____
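###Markdown The notebook stops at `model.fit`; a minimal sketch of the follow-up steps it implies (scoring the dev split, predicting the unlabeled test split, and the commented-out save) is given below, reusing the `dev` and `test` DataFrames loaded at the top. One caveat worth noting: the final `Dense` layer already applies a softmax, so `from_logits=True` in the loss is arguably inconsistent; either drop the activation or set `from_logits=False`. ###Code
# Score the held-out dev split (same column layout as train.tsv)
dev_loss, dev_acc, dev_top5 = model.evaluate(dev['text_a'].values, dev['label'].values, batch_size=1024)
print(f"dev accuracy: {dev_acc:.3f}, dev top-5 accuracy: {dev_top5:.3f}")

# Predict encoded subreddit labels for test.tsv; map back to names via encoding_maps
test_probs = model.predict(test['text_a'].values, batch_size=1024)
test['label'] = test_probs.argmax(axis=1)

# Persist the model (hypothetical path, TensorFlow SavedModel format)
model.save('subreddit_ffnn_model')
###Output _____no_output_____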
Datascience_With_Python/Machine Learning/Videos/Lasso Regression algorithm/Lasso Regression algorithm.ipynb
###Markdown Lasso Regression algorithm Lasso regression is a regularization set of rules which may be used to remove irrelevant noises and do characteristic choice and subsequently regularize a model. The word “LASSO” stands for Least Absolute Shrinkage and Selection Operator. It is a statistical formula for the regularisation of data models and feature selection.Lasso Regression uses L1 regularization technique (will be discussed later in this article). It is used when we have more number of features because it automatically performs feature selection. *Lasso regression penalizes less important features of your dataset and makes their respective coefficients zero, thereby eliminating them. Thus it provides you with the benefit of feature selection and simple model creation. So, if the dataset has high dimensionality and high correlation, lasso regression can be used.* Implementation of Lasso regression Import Required Libraries ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline plt.style.use('ggplot') import warnings; warnings.simplefilter('ignore') ###Output _____no_output_____ ###Markdown Load the Dataset ###Code df = pd.read_csv("C:/Users/S_The/Downloads/Auto.csv") df.head() df = df.iloc[0:200] df = df.drop(['name'], axis=1) df.info() df['origin'] = pd.Categorical(df['origin']) df['horsepower'] = pd.to_numeric(df['horsepower'], errors='coerce') print(df.isnull().sum()) df = df.dropna() ###Output _____no_output_____ ###Markdown Standardization **It is important to standardize the features by removing the mean and scaling to unit variance.If a feature has a variance that is orders of magnitude larger that others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected.** ###Code dfs = df.astype('int') dfs.info() dfs.columns from sklearn.preprocessing import StandardScaler scaler = StandardScaler() dfs[['cylinders', 'displacement', 'horsepower', 'weight', 'acceleration', 'year', 'origin']] = scaler.fit_transform(dfs[['cylinders', 'displacement', 'horsepower', 'weight', 'acceleration', 'year', 'origin']]) dfs.head(5) X = dfs.drop(['mpg'], axis=1) y = dfs['mpg'] ###Output _____no_output_____ ###Markdown Split data **Split the data set into train and test sets (use X_train, X_test, y_train, y_test), with the first 75% of the data for training and the remaining for testing.** ###Code from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=10) ###Output _____no_output_____ ###Markdown Lasso regression **Apply Lasso regression on the training set with the regularization parameter lambda = 0.5 and print the R^2-score for the training and test set. Comment on your findings.** ###Code from sklearn.linear_model import Lasso reg = Lasso(alpha=0.5) reg.fit(X_train, y_train) print('Lasso Regression: R^2 score on training set', reg.score(X_train, y_train)*100) print('Lasso Regression: R^2 score on test set', reg.score(X_test, y_test)*100) ###Output Lasso Regression: R^2 score on training set 82.49741060950073 Lasso Regression: R^2 score on test set 85.49734440925532 ###Markdown Lasso with different lambdas *Apply the Lasso regression on the training set with the following λ parameters: (0.001, 0.01, 0.1, 0.5, 1, 2, 10). 
Evaluate the R^2 score for all the models you obtain on both the train and test sets.* ###Code lambdas = (0.001, 0.01, 0.1, 0.5, 1, 2, 10) l_num = 7 pred_num = X.shape[1] # prepare data for enumerate coeff_a = np.zeros((l_num, pred_num)) train_r_squared = np.zeros(l_num) test_r_squared = np.zeros(l_num) for ind, i in enumerate(lambdas): reg = Lasso(alpha = i) reg.fit(X_train, y_train) coeff_a[ind,:] = reg.coef_ train_r_squared[ind] = reg.score(X_train, y_train) test_r_squared[ind] = reg.score(X_test, y_test) ###Output _____no_output_____ ###Markdown Plot values as a function of lambda ###Code plt.figure(figsize=(10, 8)) plt.plot(train_r_squared, 'bo-', label=r'$R^2$ Training set', color="darkblue", alpha=0.6, linewidth=3) plt.plot(test_r_squared, 'bo-', label=r'$R^2$ Test set', color="darkred", alpha=0.6, linewidth=3) plt.xlabel('Lamda index'); plt.ylabel(r'$R^2$') plt.xlim(0, 6) plt.title(r'Evaluate lasso regression with lamdas: 0 = 0.001, 1= 0.01, 2 = 0.1, 3 = 0.5, 4= 1, 5= 2, 6 = 10') plt.legend(loc='best') plt.grid() ###Output _____no_output_____ ###Markdown Identify best lambda and coefficients *Store your test data results in a DataFrame and indentify the lambda where the R^2 has it’s maximum value in the test data. Fit a Lasso model with this lambda parameter (use the training data) and obtain the corresponding regression coefficients. Furthermore, obtain the mean squared error for the test data of this model.* ###Code df_lam = pd.DataFrame(test_r_squared*100, columns=['R_squared']) df_lam['lambda'] = (lambdas) # returns the index of the row where column has maximum value. df_lam.loc[df_lam['R_squared'].idxmax()] # Coefficients of best model reg_best = Lasso(alpha = 0.1) reg_best.fit(X_train, y_train) reg_best.coef_ from sklearn.metrics import mean_squared_error mean_squared_error(y_test, reg_best.predict(X_test)) ###Output _____no_output_____ ###Markdown Cross Validation *Evaluate the performance of a Lasso regression for different regularization parameters λ using 5-fold cross validation on the training set (module: from sklearn.model_selection import cross_val_score) and plot the cross-validation (CV) R2scores of the training and test data as a function of λ.* ###Code l_min = 0.05 l_max = 0.2 l_num = 20 lambdas = np.linspace(l_min,l_max, l_num) train_r_squared = np.zeros(l_num) test_r_squared = np.zeros(l_num) pred_num = X.shape[1] coeff_a = np.zeros((l_num, pred_num)) from sklearn.model_selection import cross_val_score for ind, i in enumerate(lambdas): reg = Lasso(alpha = i) reg.fit(X_train, y_train) results = cross_val_score(reg, X, y, cv=5, scoring="r2") train_r_squared[ind] = reg.score(X_train, y_train) test_r_squared[ind] = reg.score(X_test, y_test) plt.figure(figsize=(10, 8)) plt.plot(train_r_squared, 'bo-', label=r'$R^2$ Training set', color="darkblue", alpha=0.6, linewidth=3) plt.plot(test_r_squared, 'bo-', label=r'$R^2$ Test set', color="darkred", alpha=0.6, linewidth=3) plt.xlabel('Lamda value'); plt.ylabel(r'$R^2$') plt.xlim(0, 19) plt.title(r'Evaluate 5-fold cv with different lamdas') plt.legend(loc='best') plt.grid() ###Output _____no_output_____ ###Markdown Best Model *Finally, store your test data results in a DataFrame and identify the lambda where the R^2 has it’s maximum value in the test data. Fit a Lasso model with this lambda parameter (use the training data) and obtain the corresponding regression coefficients. 
Furthermore, obtain the mean squared error for the test data of this model.* ###Code df_lam = pd.DataFrame(test_r_squared*100, columns=['R_squared']) df_lam['lambda'] = (lambdas) # returns the index of the row where column has maximum value. df_lam.loc[df_lam['R_squared'].idxmax()] reg_best = Lasso(alpha = 0.144737) reg_best.fit(X_train, y_train) Lasso(alpha=0.144737, copy_X=True, fit_intercept=True, max_iter=1000, normalize=False, positive=False, precompute=False, random_state=None, selection='cyclic', tol=0.0001, warm_start=False) from sklearn.metrics import mean_squared_error mean_squared_error(y_test, reg_best.predict(X_test)) reg_best.coef_ ###Output _____no_output_____
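###Markdown As a cross-check of the manual lambda search above, scikit-learn's `LassoCV` can pick the regularization strength with built-in 5-fold cross-validation. This is a minimal sketch that reuses the `X_train`/`y_train`/`X_test`/`y_test` split defined earlier in this notebook and the same alpha grid as the cross-validation section; the selected alpha may differ slightly because `LassoCV` chooses it from cross-validation error on the training data rather than from test-set R^2. ###Code
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import mean_squared_error

# Same grid as the 5-fold CV section above, searched internally by LassoCV.
lasso_cv = LassoCV(alphas=np.linspace(0.05, 0.2, 20), cv=5, random_state=10)
lasso_cv.fit(X_train, y_train)

print('alpha selected by cross-validation:', lasso_cv.alpha_)
print('R^2 on test set:', lasso_cv.score(X_test, y_test) * 100)
print('MSE on test set:', mean_squared_error(y_test, lasso_cv.predict(X_test)))
###Output _____no_output_____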
Base_Line.ipynb
###Markdown Base Line ###Code import numpy as np import pandas as pd import warnings warnings.filterwarnings('ignore') from scipy import interp from sklearn.externals import joblib from sklearn.model_selection import StratifiedKFold from sklearn.linear_model import LogisticRegression from sklearn.metrics import roc_curve, auc from IPython.display import display import matplotlib.pyplot as plt import seaborn as sns from utils import * pd.set_option('display.max_columns', 500) pd.set_option('display.max_rows', 500) %load_ext autoreload %autoreload 2 %matplotlib inline ###Output _____no_output_____ ###Markdown Load Data ###Code train = joblib.load('models/train.joblib') print(train.shape) test = joblib.load('models/test.joblib') print(test.shape) targets = train['TARGET'] train_ids = train['SK_ID_CURR'] train.drop(columns=['SK_ID_CURR', 'TARGET'], inplace=True) test_ids = test['SK_ID_CURR'] test = test.drop(columns=['SK_ID_CURR']) ###Output _____no_output_____ ###Markdown Drop redundand columns ###Code cols_drop = appartment_mode_cols + appartment_medi_cols train.drop(columns=cols_drop, inplace=True) test.drop(columns=cols_drop, inplace=True) print(train.shape) print(test.shape) ###Output (307511, 295) (48744, 295) ###Markdown Model ###Code features = np.array(train) test_features = np.array(test) k_fold = StratifiedKFold(n_splits=3, shuffle=True, random_state=42) i = 0 test_predictions = np.zeros(test_features.shape[0]) for train_indices, valid_indices in k_fold.split(features, targets): # Training data for the fold train_features, train_labels = features[train_indices], targets[train_indices] # Validation data for the fold valid_features, valid_labels = features[valid_indices], targets[valid_indices] model = LogisticRegression(C=0.001, class_weight='balanced', random_state=4242) probas_ = model.fit(train_features, train_labels).predict_proba(valid_features) test_predictions += model.predict_proba(test_features)[:, 1] / k_fold.n_splits fpr, tpr, thresholds = roc_curve(valid_labels, probas_[:, 1]) roc_auc = auc(fpr, tpr) plt.plot(fpr, tpr, lw=1, alpha=0.3, label='ROC fold %d (AUC = %0.2f)' % (i, roc_auc)) i += 1 plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r', label='Luck', alpha=.8) plt.xlim([-0.05, 1.05]) plt.ylim([-0.05, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic') plt.legend(loc="lower right") plt.figure(figsize=(7,5)) plt.show() submission = pd.DataFrame({'SK_ID_CURR': test_ids, 'TARGET': test_predictions}) save_prediction(submission, 'LogisticRegression') ###Output _____no_output_____
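###Markdown The baseline above plots one ROC curve per fold but never aggregates them, even though `interp` is imported for exactly that purpose. Below is a minimal sketch of the usual mean-ROC summary over the same stratified folds; it reuses `features`, `targets` and `k_fold` from this notebook (and uses `np.interp` in place of the deprecated `scipy.interp`). It is an illustration, not part of the original submission. ###Code
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc

mean_fpr = np.linspace(0, 1, 100)
tprs, aucs = [], []

for train_idx, valid_idx in k_fold.split(features, targets):
    clf = LogisticRegression(C=0.001, class_weight='balanced', random_state=4242)
    clf.fit(features[train_idx], targets[train_idx])
    probas = clf.predict_proba(features[valid_idx])[:, 1]
    fpr, tpr, _ = roc_curve(targets[valid_idx], probas)
    # Interpolate each fold's TPR onto a common FPR grid so the curves can be averaged.
    tprs.append(np.interp(mean_fpr, fpr, tpr))
    tprs[-1][0] = 0.0
    aucs.append(auc(fpr, tpr))

mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
print('mean AUC over folds: %.3f +/- %.3f' % (auc(mean_fpr, mean_tpr), np.std(aucs)))
###Output _____no_output_____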
vgg_segmentation_keras/fcn8s_tvg_for_rnncrf.ipynb
###Markdown Build model architecture Paper 1 : Conditional Random Fields as Recurrent Neural Networks Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su Dalong Du, Chang Huang, Philip H. S. Torrhttp://www.robots.ox.ac.uk/~szheng/papers/CRFasRNN.pdf Paper 2 : Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials Philipp Krahenbuhl, Vladlen Koltunhttps://arxiv.org/pdf/1210.5644.pdfThis paper specifies the CRF kernels and the Mean Field Approximation of the CRF energy function WARNING : In v1 of this script we will only implement the FCN-8s subcomponent of the CRF-RNN network Quotes from MatConvNet page (http://www.vlfeat.org/matconvnet/pretrained/semantic-segmentation) :*These networks are trained on the PASCAL VOC 2011 training and (in part) validation data, using Berekely's extended annotations, as well as Microsoft COCO.**While the CRF component is missing (it may come later to MatConvNet), this model still outperforms the FCN-8s network above, partially because it is trained with additional data from COCO.**The model was obtained by first fine-tuning the plain FCN-32s network (without the CRF-RNN part) on COCO data, then building built an FCN-8s network with the learnt weights, and finally training the CRF-RNN network end-to-end using VOC 2012 training data only. The model available here is the FCN-8s part of this network (without CRF-RNN, while trained with 10 iterations CRF-RNN).* ###Code image_size = 64*8 fcn32model = fcn32_blank(image_size) #print(dir(fcn32model.layers[-1])) print(fcn32model.layers[-1].output_shape) #fcn32model.summary() # visual inspection of model architecture # WARNING : check dim weights against .mat file to check deconvolution setting print fcn32model.layers[-2].get_weights()[0].shape fcn8model = fcn_32s_to_8s(fcn32model) # INFO : dummy image array to test the model passes imarr = np.ones((3,image_size,image_size)) imarr = np.expand_dims(imarr, axis=0) #testmdl = Model(fcn32model.input, fcn32model.layers[10].output) # works fine testmdl = fcn8model # works fine testmdl.predict(imarr).shape if (testmdl.predict(imarr).shape != (1,21,image_size,image_size)): print('WARNING: size mismatch will impact some test cases') fcn8model.summary() # visual inspection of model architecture ###Output ____________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ==================================================================================================== permute_input_3 (InputLayer) (None, 3, 512, 512) 0 ____________________________________________________________________________________________________ permute_3 (Permute) (None, 3, 512, 512) 0 permute_input_3[0][0] ____________________________________________________________________________________________________ conv1_1 (Convolution2D) (None, 64, 512, 512) 1792 permute_3[0][0] ____________________________________________________________________________________________________ conv1_2 (Convolution2D) (None, 64, 512, 512) 36928 conv1_1[0][0] ____________________________________________________________________________________________________ maxpooling2d_11 (MaxPooling2D) (None, 64, 256, 256) 0 conv1_2[0][0] ____________________________________________________________________________________________________ conv2_1 (Convolution2D) (None, 128, 256, 256) 73856 maxpooling2d_11[0][0] ____________________________________________________________________________________________________ 
conv2_2 (Convolution2D) (None, 128, 256, 256) 147584 conv2_1[0][0] ____________________________________________________________________________________________________ maxpooling2d_12 (MaxPooling2D) (None, 128, 128, 128) 0 conv2_2[0][0] ____________________________________________________________________________________________________ conv3_1 (Convolution2D) (None, 256, 128, 128) 295168 maxpooling2d_12[0][0] ____________________________________________________________________________________________________ conv3_2 (Convolution2D) (None, 256, 128, 128) 590080 conv3_1[0][0] ____________________________________________________________________________________________________ conv3_3 (Convolution2D) (None, 256, 128, 128) 590080 conv3_2[0][0] ____________________________________________________________________________________________________ maxpooling2d_13 (MaxPooling2D) (None, 256, 64, 64) 0 conv3_3[0][0] ____________________________________________________________________________________________________ conv4_1 (Convolution2D) (None, 512, 64, 64) 1180160 maxpooling2d_13[0][0] ____________________________________________________________________________________________________ conv4_2 (Convolution2D) (None, 512, 64, 64) 2359808 conv4_1[0][0] ____________________________________________________________________________________________________ conv4_3 (Convolution2D) (None, 512, 64, 64) 2359808 conv4_2[0][0] ____________________________________________________________________________________________________ maxpooling2d_14 (MaxPooling2D) (None, 512, 32, 32) 0 conv4_3[0][0] ____________________________________________________________________________________________________ conv5_1 (Convolution2D) (None, 512, 32, 32) 2359808 maxpooling2d_14[0][0] ____________________________________________________________________________________________________ conv5_2 (Convolution2D) (None, 512, 32, 32) 2359808 conv5_1[0][0] ____________________________________________________________________________________________________ conv5_3 (Convolution2D) (None, 512, 32, 32) 2359808 conv5_2[0][0] ____________________________________________________________________________________________________ maxpooling2d_15 (MaxPooling2D) (None, 512, 16, 16) 0 conv5_3[0][0] ____________________________________________________________________________________________________ fc6 (Convolution2D) (None, 4096, 16, 16) 102764544 maxpooling2d_15[0][0] ____________________________________________________________________________________________________ fc7 (Convolution2D) (None, 4096, 16, 16) 16781312 fc6[0][0] ____________________________________________________________________________________________________ score_fr (Convolution2D) (None, 21, 16, 16) 86037 fc7[0][0] ____________________________________________________________________________________________________ score2 (Deconvolution2D) (None, 21, 34, 34) 7077 score_fr[0][0] ____________________________________________________________________________________________________ score_pool4 (Convolution2D) (None, 21, 32, 32) 10773 maxpooling2d_14[0][0] ____________________________________________________________________________________________________ cropping2d_7 (Cropping2D) (None, 21, 32, 32) 0 score2[0][0] ____________________________________________________________________________________________________ merge_5 (Merge) (None, 21, 32, 32) 0 score_pool4[0][0] cropping2d_7[0][0] ____________________________________________________________________________________________________ score4 
(Deconvolution2D) (None, 21, 66, 66) 7077 merge_5[0][0] ____________________________________________________________________________________________________ score_pool3 (Convolution2D) (None, 21, 64, 64) 5397 maxpooling2d_13[0][0] ____________________________________________________________________________________________________ cropping2d_8 (Cropping2D) (None, 21, 64, 64) 0 score4[0][0] ____________________________________________________________________________________________________ merge_6 (Merge) (None, 21, 64, 64) 0 score_pool3[0][0] cropping2d_8[0][0] ____________________________________________________________________________________________________ upsample (Deconvolution2D) (None, 21, 520, 520) 112917 merge_6[0][0] ____________________________________________________________________________________________________ cropping2d_9 (Cropping2D) (None, 21, 512, 512) 0 upsample[0][0] ==================================================================================================== Total params: 134,489,822 Trainable params: 134,489,822 Non-trainable params: 0 ____________________________________________________________________________________________________ ###Markdown Load VGG weigths from .mat file https://www.vlfeat.org/matconvnet/pretrained/semantic-segmentation Download from console with :wget http://www.vlfeat.org/matconvnet/models/pascal-fcn8s-tvg-dag.mat ###Code from scipy.io import loadmat USETVG = True if USETVG: data = loadmat('pascal-fcn8s-tvg-dag.mat', matlab_compatible=False, struct_as_record=False) l = data['layers'] p = data['params'] description = data['meta'][0,0].classes[0,0].description else: data = loadmat('pascal-fcn8s-dag.mat', matlab_compatible=False, struct_as_record=False) l = data['layers'] p = data['params'] description = data['meta'][0,0].classes[0,0].description print(data.keys()) l.shape, p.shape, description.shape class2index = {} for i, clname in enumerate(description[0,:]): class2index[str(clname[0])] = i print(sorted(class2index.keys())) if False: # inspection of data structure print(dir(l[0,31].block[0,0])) print(dir(l[0,44].block[0,0])) if False: print l[0,36].block[0,0].upsample, l[0,36].block[0,0].size print l[0,40].block[0,0].upsample, l[0,40].block[0,0].size print l[0,44].block[0,0].upsample, l[0,44].block[0,0].size, l[0,44].block[0,0].crop for i in range(0, p.shape[1]-1-2*2, 2): # weights #36 to #37 are not all paired print(i, str(p[0,i].name[0]), p[0,i].value.shape, str(p[0,i+1].name[0]), p[0,i+1].value.shape) print '------------------------------------------------------' for i in range(p.shape[1]-1-2*2+1, p.shape[1]): # weights #36 to #37 are not all paired print(i, str(p[0,i].name[0]), p[0,i].value.shape) for i in range(l.shape[1]): print(i, str(l[0,i].name[0]), str(l[0,i].type[0]), [str(n[0]) for n in l[0,i].inputs[0,:]], [str(n[0]) for n in l[0,i].outputs[0,:]]) def copy_mat_to_keras(kmodel, verbose=True): kerasnames = [lr.name for lr in kmodel.layers] prmt = (3,2,0,1) # WARNING : important setting as 2 of the 4 axis have same size dimension for i in range(0, p.shape[1]): if USETVG: matname = p[0,i].name[0][0:-1] matname_type = p[0,i].name[0][-1] # "f" for filter weights or "b" for bias else: matname = p[0,i].name[0].replace('_filter','').replace('_bias','') matname_type = p[0,i].name[0].split('_')[-1] # "filter" or "bias" if matname in kerasnames: kindex = kerasnames.index(matname) if verbose: print 'found : ', (str(matname), str(matname_type), kindex) assert (len(kmodel.layers[kindex].get_weights()) == 2) if matname_type in 
['f','filter']: l_weights = p[0,i].value f_l_weights = l_weights.transpose(prmt) f_l_weights = np.flip(f_l_weights, 2) f_l_weights = np.flip(f_l_weights, 3) assert (f_l_weights.shape == kmodel.layers[kindex].get_weights()[0].shape) current_b = kmodel.layers[kindex].get_weights()[1] kmodel.layers[kindex].set_weights([f_l_weights, current_b]) elif matname_type in ['b','bias']: l_bias = p[0,i].value assert (l_bias.shape[1] == 1) assert (l_bias[:,0].shape == kmodel.layers[kindex].get_weights()[1].shape) current_f = kmodel.layers[kindex].get_weights()[0] kmodel.layers[kindex].set_weights([current_f, l_bias[:,0]]) else: print 'not found : ', str(matname) #copy_mat_to_keras(fcn32model) copy_mat_to_keras(fcn8model, False) ###Output _____no_output_____ ###Markdown Tests ###Code im = Image.open('rgb.jpg') # http://www.robots.ox.ac.uk/~szheng/crfasrnndemo/static/rgb.jpg im = im.crop((0,0,319,319)) # WARNING : manual square cropping im = im.resize((image_size,image_size)) plt.imshow(np.asarray(im)) # WARNING : we do not deal with cropping here, this image is already fit preds = prediction(fcn8model, im, transform=True) #imperson = preds[0,class2index['person'],:,:] imclass = np.argmax(preds, axis=1)[0,:,:] plt.figure(figsize = (15, 7)) plt.subplot(1,3,1) plt.imshow( np.asarray(im) ) plt.subplot(1,3,2) plt.imshow( imclass ) plt.subplot(1,3,3) plt.imshow( np.asarray(im) ) masked_imclass = np.ma.masked_where(imclass == 0, imclass) #plt.imshow( imclass, alpha=0.5 ) plt.imshow( masked_imclass, alpha=0.5 ) # List of dominant classes found in the image for c in np.unique(imclass): print c, str(description[0,c][0]) bspreds = bytescale(preds, low=0, high=255) plt.figure(figsize = (15, 7)) plt.subplot(2,3,1) plt.imshow(np.asarray(im)) plt.subplot(2,3,3+1) plt.imshow(bspreds[0,class2index['background'],:,:], cmap='seismic') plt.subplot(2,3,3+2) plt.imshow(bspreds[0,class2index['person'],:,:], cmap='seismic') plt.subplot(2,3,3+3) plt.imshow(bspreds[0,class2index['bicycle'],:,:], cmap='seismic') ###Output _____no_output_____ ###Markdown Build model architecture Paper 1 : Conditional Random Fields as Recurrent Neural Networks Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su Dalong Du, Chang Huang, Philip H. S. Torrhttp://www.robots.ox.ac.uk/~szheng/papers/CRFasRNN.pdf Paper 2 : Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials Philipp Krahenbuhl, Vladlen Koltunhttps://arxiv.org/pdf/1210.5644.pdfThis paper specifies the CRF kernels and the Mean Field Approximation of the CRF energy function WARNING : In v1 of this script we will only implement the FCN-8s subcomponent of the CRF-RNN network Quotes from MatConvNet page (http://www.vlfeat.org/matconvnet/pretrained/semantic-segmentation) :*These networks are trained on the PASCAL VOC 2011 training and (in part) validation data, using Berekely's extended annotations, as well as Microsoft COCO.**While the CRF component is missing (it may come later to MatConvNet), this model still outperforms the FCN-8s network above, partially because it is trained with additional data from COCO.**The model was obtained by first fine-tuning the plain FCN-32s network (without the CRF-RNN part) on COCO data, then building built an FCN-8s network with the learnt weights, and finally training the CRF-RNN network end-to-end using VOC 2012 training data only. 
The model available here is the FCN-8s part of this network (without CRF-RNN, while trained with 10 iterations CRF-RNN).* ###Code image_size = 64*8 fcn32model = fcn32_blank(image_size) #print(dir(fcn32model.layers[-1])) print(fcn32model.layers[-1].output_shape) fcn8model = fcn_32s_to_8s(fcn32model) fcn8model.summary() # visual inspection of model architecture ###Output ____________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ==================================================================================================== permute_1_input (InputLayer) (None, 512, 512, 3) 0 ____________________________________________________________________________________________________ permute_1 (Permute) (None, 512, 512, 3) 0 permute_1_input[0][0] ____________________________________________________________________________________________________ conv1_1 (Conv2D) (None, 512, 512, 64) 1792 permute_1[0][0] ____________________________________________________________________________________________________ conv1_2 (Conv2D) (None, 512, 512, 64) 36928 conv1_1[0][0] ____________________________________________________________________________________________________ max_pooling2d_1 (MaxPooling2D) (None, 256, 256, 64) 0 conv1_2[0][0] ____________________________________________________________________________________________________ conv2_1 (Conv2D) (None, 256, 256, 128) 73856 max_pooling2d_1[0][0] ____________________________________________________________________________________________________ conv2_2 (Conv2D) (None, 256, 256, 128) 147584 conv2_1[0][0] ____________________________________________________________________________________________________ max_pooling2d_2 (MaxPooling2D) (None, 128, 128, 128) 0 conv2_2[0][0] ____________________________________________________________________________________________________ conv3_1 (Conv2D) (None, 128, 128, 256) 295168 max_pooling2d_2[0][0] ____________________________________________________________________________________________________ conv3_2 (Conv2D) (None, 128, 128, 256) 590080 conv3_1[0][0] ____________________________________________________________________________________________________ conv3_3 (Conv2D) (None, 128, 128, 256) 590080 conv3_2[0][0] ____________________________________________________________________________________________________ max_pooling2d_3 (MaxPooling2D) (None, 64, 64, 256) 0 conv3_3[0][0] ____________________________________________________________________________________________________ conv4_1 (Conv2D) (None, 64, 64, 512) 1180160 max_pooling2d_3[0][0] ____________________________________________________________________________________________________ conv4_2 (Conv2D) (None, 64, 64, 512) 2359808 conv4_1[0][0] ____________________________________________________________________________________________________ conv4_3 (Conv2D) (None, 64, 64, 512) 2359808 conv4_2[0][0] ____________________________________________________________________________________________________ max_pooling2d_4 (MaxPooling2D) (None, 32, 32, 512) 0 conv4_3[0][0] ____________________________________________________________________________________________________ conv5_1 (Conv2D) (None, 32, 32, 512) 2359808 max_pooling2d_4[0][0] ____________________________________________________________________________________________________ conv5_2 (Conv2D) (None, 32, 32, 512) 2359808 conv5_1[0][0] 
____________________________________________________________________________________________________ conv5_3 (Conv2D) (None, 32, 32, 512) 2359808 conv5_2[0][0] ____________________________________________________________________________________________________ max_pooling2d_5 (MaxPooling2D) (None, 16, 16, 512) 0 conv5_3[0][0] ____________________________________________________________________________________________________ fc6 (Conv2D) (None, 16, 16, 4096) 102764544 max_pooling2d_5[0][0] ____________________________________________________________________________________________________ fc7 (Conv2D) (None, 16, 16, 4096) 16781312 fc6[0][0] ____________________________________________________________________________________________________ score_fr (Conv2D) (None, 16, 16, 21) 86037 fc7[0][0] ____________________________________________________________________________________________________ score2 (Conv2DTranspose) (None, 34, 34, 21) 7077 score_fr[0][0] ____________________________________________________________________________________________________ score_pool4 (Conv2D) (None, 32, 32, 21) 10773 max_pooling2d_4[0][0] ____________________________________________________________________________________________________ cropping2d_1 (Cropping2D) (None, 32, 32, 21) 0 score2[0][0] ____________________________________________________________________________________________________ add_1 (Add) (None, 32, 32, 21) 0 score_pool4[0][0] cropping2d_1[0][0] ____________________________________________________________________________________________________ score4 (Conv2DTranspose) (None, 66, 66, 21) 7077 add_1[0][0] ____________________________________________________________________________________________________ score_pool3 (Conv2D) (None, 64, 64, 21) 5397 max_pooling2d_3[0][0] ____________________________________________________________________________________________________ cropping2d_2 (Cropping2D) (None, 64, 64, 21) 0 score4[0][0] ____________________________________________________________________________________________________ add_2 (Add) (None, 64, 64, 21) 0 score_pool3[0][0] cropping2d_2[0][0] ____________________________________________________________________________________________________ upsample (Conv2DTranspose) (None, 520, 520, 21) 112917 add_2[0][0] ____________________________________________________________________________________________________ cropping2d_3 (Cropping2D) (None, 512, 512, 21) 0 upsample[0][0] ==================================================================================================== Total params: 134,489,822 Trainable params: 134,489,822 Non-trainable params: 0 ____________________________________________________________________________________________________ ###Markdown Load VGG weigths from .mat file https://www.vlfeat.org/matconvnet/pretrained/semantic-segmentation Download from console with :wget http://www.vlfeat.org/matconvnet/models/pascal-fcn8s-tvg-dag.mat ###Code from scipy.io import loadmat USETVG = True if USETVG: data = loadmat('pascal-fcn8s-tvg-dag.mat', matlab_compatible=False, struct_as_record=False) l = data['layers'] p = data['params'] description = data['meta'][0,0].classes[0,0].description else: data = loadmat('pascal-fcn8s-dag.mat', matlab_compatible=False, struct_as_record=False) l = data['layers'] p = data['params'] description = data['meta'][0,0].classes[0,0].description print(data.keys()) l.shape, p.shape, description.shape class2index = {} for i, clname in enumerate(description[0,:]): class2index[str(clname[0])] = i 
print(sorted(class2index.keys())) if False: # inspection of data structure print(dir(l[0,31].block[0,0])) print(dir(l[0,44].block[0,0])) if False: print (str(l[0,36].block[0,0].upsample) + " " + str(l[0,36].block[0,0].size)) print (str(l[0,40].block[0,0].upsample) + " " + str(l[0,40].block[0,0].size)) print (str(l[0,44].block[0,0].upsample) + " " + str(l[0,44].block[0,0].size) + " " + str(l[0,44].block[0,0].crop)) if False: for i in range(0, p.shape[1]-1-2*2, 2): # weights #36 to #37 are not all paired print('{0} {1} {2} {3} {4}'.format(i, p[0,i].name[0], p[0,i].value.shape, p[0,i+1].name[0], p[0,i+1].value.shape)) print ('------------------------------------------------------') for i in range(p.shape[1]-1-2*2+1, p.shape[1]): # weights #36 to #37 are not all paired print('{0} {1} {2}'.format(i, p[0,i].name[0], p[0,i].value.shape)) if False: for i in range(l.shape[1]): print('{0} {1} {2} {3} {4}'.format(i, l[0,i].name[0], l[0,i].type[0], [str(n[0]) for n in l[0,i].inputs[0,:]], [str(n[0]) for n in l[0,i].outputs[0,:]])) def copy_mat_to_keras(kmodel, verbose=True): kerasnames = [lr.name for lr in kmodel.layers] prmt = (1,0,2,3) # WARNING : important setting as 2 of the 4 axis have same size dimension for i in range(0, p.shape[1]): if USETVG: matname = p[0,i].name[0][0:-1] matname_type = p[0,i].name[0][-1] # "f" for filter weights or "b" for bias else: matname = p[0,i].name[0].replace('_filter','').replace('_bias','') matname_type = p[0,i].name[0].split('_')[-1] # "filter" or "bias" if matname in kerasnames: kindex = kerasnames.index(matname) if verbose: print ('found : {0} {1} {2}'.format(matname, matname_type, kindex)) assert (len(kmodel.layers[kindex].get_weights()) == 2) if matname_type in ['f','filter']: l_weights = p[0,i].value f_l_weights = l_weights.transpose(prmt) f_l_weights = np.flip(f_l_weights, 2) f_l_weights = np.flip(f_l_weights, 3) assert (f_l_weights.shape == kmodel.layers[kindex].get_weights()[0].shape) current_b = kmodel.layers[kindex].get_weights()[1] kmodel.layers[kindex].set_weights([f_l_weights, current_b]) elif matname_type in ['b','bias']: l_bias = p[0,i].value assert (l_bias.shape[1] == 1) assert (l_bias[:,0].shape == kmodel.layers[kindex].get_weights()[1].shape) current_f = kmodel.layers[kindex].get_weights()[0] kmodel.layers[kindex].set_weights([current_f, l_bias[:,0]]) else: print ('not found : ' + str(matname)) #copy_mat_to_keras(fcn32model) copy_mat_to_keras(fcn8model, True) ###Output found : conv1_1 f 2 found : conv1_1 b 2 found : conv1_2 f 3 found : conv1_2 b 3 found : conv2_1 f 5 found : conv2_1 b 5 found : conv2_2 f 6 found : conv2_2 b 6 found : conv3_1 f 8 found : conv3_1 b 8 found : conv3_2 f 9 found : conv3_2 b 9 found : conv3_3 f 10 found : conv3_3 b 10 found : conv4_1 f 12 found : conv4_1 b 12 found : conv4_2 f 13 found : conv4_2 b 13 found : conv4_3 f 14 found : conv4_3 b 14 found : conv5_1 f 16 found : conv5_1 b 16 found : conv5_2 f 17 found : conv5_2 b 17 found : conv5_3 f 18 found : conv5_3 b 18 found : fc6 f 20 found : fc6 b 20 found : fc7 f 21 found : fc7 b 21 found : score_fr f 22 found : score_fr b 22 found : score2 f 23 found : score2 b 23 found : score_pool4 f 24 found : score_pool4 b 24 found : score4 f 27 found : score_pool3 f 28 found : score_pool3 b 28 found : upsample f 31 ###Markdown Tests ###Code im = Image.open('rgb.jpg') # http://www.robots.ox.ac.uk/~szheng/crfasrnndemo/static/rgb.jpg im = im.crop((0,0,319,319)) # WARNING : manual square cropping im = im.resize((image_size,image_size)) plt.imshow(np.asarray(im)) # WARNING : we do not 
deal with cropping here, this image is already fit preds = prediction(fcn8model, im, transform=True) #imperson = preds[0,class2index['person'],:,:] imclass = np.argmax(preds, axis=3)[0,:,:] plt.figure(figsize = (15, 7)) plt.subplot(1,3,1) plt.imshow( np.asarray(im) ) plt.subplot(1,3,2) plt.imshow( imclass ) plt.subplot(1,3,3) plt.imshow( np.asarray(im) ) masked_imclass = np.ma.masked_where(imclass == 0, imclass) #plt.imshow( imclass, alpha=0.5 ) plt.imshow( masked_imclass, alpha=0.5 ) # List of dominant classes found in the image for c in np.unique(imclass): print (str(c) + " " + str(description[0,c][0])) bspreds = bytescale(preds, low=0, high=255) plt.figure(figsize = (15, 7)) plt.subplot(2,3,1) plt.imshow(np.asarray(im)) plt.subplot(2,3,3+1) plt.imshow(bspreds[0,:,:,class2index['background']], cmap='seismic') plt.subplot(2,3,3+2) plt.imshow(bspreds[0,:,:,class2index['person']], cmap='seismic') plt.subplot(2,3,3+3) plt.imshow(bspreds[0,:,:,class2index['bicycle']], cmap='seismic') ###Output _____no_output_____
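###Markdown A quick way to summarise the segmentation above is to report how much of the image each predicted class covers. The sketch below reuses `imclass` and `description` from this notebook; it is only a convenience summary, not part of the original FCN-8s evaluation. ###Code
import numpy as np

labels, counts = np.unique(imclass, return_counts=True)
total = imclass.size
for label, count in zip(labels, counts):
    # description maps PASCAL VOC class indices to class names.
    print('%-12s %6.2f %% of pixels' % (str(description[0, label][0]), 100.0 * count / total))
###Output _____no_output_____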
jupyter_notebooks/bigdata/python-nbk/pythonnbk/python_app_data_analysis.ipynb
###Markdown Reading Tabular Data into DataFramesPandas is a widely-used Python library for statistics, particularly on tabular data.First time use (Installing via cmd):`pip install pandas`Loading:`import pandas as pd` ###Code import pandas as pd data = pd.read_csv('data/gapminder_gdp_oceania.csv') print(data) ###Output _____no_output_____ ###Markdown + The columns in a dataframe are the observed variables, and the rows are the observations.+ Pandas uses backslash `\` to show wrapped lines when output is too wide to fit the screen. Using `index_col` to specify a column's name as row headings. ###Code data = pd.read_csv('data/gapminder_gdp_oceania.csv', index_col='country') print(data) ###Output _____no_output_____ ###Markdown `DataFrame.info` - Getting more information about the data ###Code data.info() ###Output _____no_output_____ ###Markdown + This is a DataFrame+ Two rows named 'Australia' and 'New Zealand'+ Twelve columns, each of which has two actual 64-bit floating point values.+ Uses 208 bytes of memory. `DataFrame.columns` - variable that stores information about the dataframe’s columns. ###Code print(data.columns) ###Output _____no_output_____ ###Markdown `DataFrame.T` - Is used to Transpose a dataframe+ Sometimes want to treat columns as rows and vice versa.+ Transpose (written `.T`) doesn’t copy the data, just changes the program’s view of it.+ Like `columns`, it is a member variable. ###Code print(data.T) ###Output _____no_output_____ ###Markdown `DataFrame.describe` - Is used to get the summary of the data. ###Code print(data.describe()) ###Output _____no_output_____ ###Markdown Pandas DataFrames/SeriesA DataFrame is a collection of Series; The DataFrame is the way Pandas represents a table, and Series is the data-structure Pandas use to represent a column.Pandas is built on top of the Numpy library, which in practice means that most of the methods defined for Numpy Arrays apply to Pandas Series/DataFrames.What makes Pandas so attractive is the powerful interface to access individual records of the table, proper handling of missing values, and relational-databases operations between DataFrames. Selecting valuesTo access a value at the position `[i,j]` of a DataFrame, we have two options, depending on what is the meaning of `i` in use. Remember that a DataFrame provides an index as a way to identify the rows of the table; a row, then, has a position inside the table as well as a label, which uniquely identifies its entry in the DataFrame. `DataFrame.iloc[..., ...]` - Is used to select the values by their (entry) position. ###Code import pandas as pd data = pd.read_csv('data/gapminder_gdp_europe.csv', index_col='country') print(data.iloc[0, 0]) ###Output _____no_output_____ ###Markdown `DataFrame.loc[..., ...]` - Is used to select the values by their (entry) label. ###Code data = pd.read_csv('data/gapminder_gdp_europe.csv', index_col='country') print(data.loc["Albania", "gdpPercap_1952"]) ###Output _____no_output_____ ###Markdown `:` - Use this on its own to mean all columns or all rows. ###Code print(data.loc["Albania", :]) print(data.loc[:, "gdpPercap_1952"]) ###Output _____no_output_____ ###Markdown Select multiple columns or rows using `DataFrame.loc` and a named slice. ###Code print(data.loc['Italy':'Poland', 'gdpPercap_1962':'gdpPercap_1972']) ###Output _____no_output_____ ###Markdown Result of slicing can be used in further operations. 
###Code print(data.loc['Italy':'Poland', 'gdpPercap_1962':'gdpPercap_1972'].max()) print(data.loc['Italy':'Poland', 'gdpPercap_1962':'gdpPercap_1972'].min()) ###Output _____no_output_____ ###Markdown Use comparisons to select data based on value. ###Code # Use a subset of data to keep output readable. subset = data.loc['Italy':'Poland', 'gdpPercap_1962':'gdpPercap_1972'] print('Subset of data:\n', subset) # Which values were greater than 10000 ? print('\nWhere are values large?\n', subset > 10000) ###Output _____no_output_____ ###Markdown Select values or NaN using a Boolean mask.A frame full of Booleans is sometimes called a mask because of how it can be used. ###Code mask = subset > 10000 print(subset[mask]) print(subset[subset > 10000].describe()) ###Output _____no_output_____ ###Markdown Group By: split-apply-combinePandas vectorizing methods and grouping operations are features that provide users much flexibility to analyse their data. ###Code mask_higher = data > data.mean() wealth_score = mask_higher.aggregate('sum', axis=1) / len(data.columns) wealth_score ###Output _____no_output_____ ###Markdown In the above code snippetWe wanted to have clearer view of on how the European countries split themselves according to their GDP.1. We may have a glance by splitting the countries in two groups during the years surveyed, those who presented a GDP higher than the European average and those with a lower GDP.2. We then estimate a wealthy score based on the historical (from 1962 to 2007) values, where we account how many times a country has participated in the groups of lower or higher GDP ###Code data.groupby(wealth_score).sum() ###Output _____no_output_____ ###Markdown In the above code snippet for each group in the wealth_score table, we sum their (financial) contribution across the years surveyed. Plotting `matplotlib` is the most widely used scientific plotting library in Python.+ Commonly use a sub-library called matplotlib.pyplot.+ The Jupyter Notebook will render plots inline if we ask it to using a “magic” command. ###Code %matplotlib inline import matplotlib.pyplot as plt time = [0, 1, 2, 3] position = [0, 100, 200, 300] plt.plot(time, position) plt.xlabel('Time (hr)') plt.ylabel('Position (km)') ###Output _____no_output_____ ###Markdown Plot data directly from a `Pandas dataframe`. ###Code import pandas as pd data = pd.read_csv('data/gapminder_gdp_oceania.csv', index_col='country') # Extract year from last 4 characters of each column name # The current column names are structured as 'gdpPercap_(year)', # so we want to keep the (year) part only for clarity when plotting GDP vs. 
years # To do this we use strip(), which removes from the string the characters stated in the argument # This method works on strings, so we call str before strip() years = data.columns.str.strip('gdpPercap_') # Convert year values to integers, saving results back to dataframe data.columns = years.astype(int) data.loc['Australia'].plot() ###Output _____no_output_____ ###Markdown Plot data directly from a `Pandas dataframe`.+ We can also plot Pandas dataframes.+ This implicitly uses `matplotlib.pyplot`.+ Before plotting, we convert the column headings from a string to integer data type, since they represent numerical values ###Code import pandas as pd data = pd.read_csv('data/gapminder_gdp_oceania.csv', index_col='country') # Extract year from last 4 characters of each column name # The current column names are structured as 'gdpPercap_(year)', # so we want to keep the (year) part only for clarity when plotting GDP vs. years # To do this we use strip(), which removes from the string the characters stated in the argument # This method works on strings, so we call str before strip() years = data.columns.str.strip('gdpPercap_') # Convert year values to integers, saving results back to dataframe data.columns = years.astype(int) data.loc['Australia'].plot() ###Output _____no_output_____ ###Markdown Select and transform data, then plot it.+ By default, DataFrame.plot plots with the rows as the X axis.+ We can transpose the data in order to plot multiple series. ###Code data.T.plot() plt.ylabel('GDP per capita') # Different styles of plots (Bar Graph) plt.style.use('ggplot') data.T.plot(kind='bar') plt.ylabel('GDP per capita') # We can plot multiple datasets together # Select two countries' worth of data. gdp_australia = data.loc['Australia'] gdp_nz = data.loc['New Zealand'] # Plot with differently-colored markers. plt.plot(years, gdp_australia, 'b-', label='Australia') # This is making labels plt.plot(years, gdp_nz, 'g-', label='New Zealand') # Create legend. plt.legend(loc='upper left') plt.xlabel('Year') plt.ylabel('GDP per capita ($)') plt.scatter(gdp_australia, gdp_nz) # This gives the correlation between the GDP's of the two countries Australia and New Zealand ###Output _____no_output_____ ###Markdown Saving your plot to a fileIf you are satisfied with the plot you see you may want to save it to a file, perhaps to include it in a publication. There is a function in the matplotlib.pyplot module that accomplishes this: savefig. Calling this function, e.g. with```pythonplt.savefig('my_figure.png')```will save the current figure to the file my_figure.png. The file format will automatically be deduced from the file name extension (other formats are pdf, ps, eps and svg). Analyzing Patient Data Loading data into Python Using Inflamation data for this + We can do that using a library called NumPy, which stands for Numerical Python.+ To tell Python that we’d like to start using NumPy, we need to import it: ###Code import numpy numpy.loadtxt(fname='data\inflammation-01.csv', delimiter=',') # `loadtxt` # Our call to numpy.loadtxt read our file but didn’t save the data in memory. #To do that, we need to assign the array to a variable. #In a similar manner to how we assign a single value to a variable, ## we can also assign an array of values to a variable using the same syntax. 
# Let’s re-run numpy.loadtxt and save the returned data: data = numpy.loadtxt(fname='data\inflammation-01.csv', delimiter=',') print(data) print(type(data)) print(data.dtype) print(data.shape) ###Output _____no_output_____ ###Markdown So the data has 60 rows and 40 columns ###Code print('first value in data:', data[0, 0]) print('first value in data:', data[0, 0]) print('middle value in data:', data[30, 20]) ###Output _____no_output_____ ###Markdown ![image1](data\image1.png "Data")We can visualize the data like this Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this: ###Code print(data[0:4, 0:10]) ###Output _____no_output_____ ###Markdown The slice `0:4` means, “Start at index 0 and go up to, but not including, index 4”. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice. ###Code print(data[5:10, 0:10]) # We dont need to start at '0' # We can ignore the upper/lower bounds python will take care of it small = data[:3, 36:] print('small is:') print(small) ###Output _____no_output_____ ###Markdown Analyzing datawe can ask NumPy to compute data’s mean value: ###Code print(numpy.mean(data)) # Multiple value assignment maxval, minval, stdval = numpy.max(data), numpy.min(data), numpy.std(data) print('maximum inflammation:', maxval) print('minimum inflammation:', minval) print('standard deviation:', stdval) ###Output _____no_output_____ ###Markdown When analyzing data, though, we often want to look at variations in statistical values, such as the maximum inflammation per patient or the average inflammation per day. One way to do this is to create a new temporary array of the data we want, then ask it to do the calculation: ###Code patient_0 = data[0, :] # 0 on the first axis (rows), everything on the second (columns) print('maximum inflammation for patient 0:', numpy.max(patient_0)) # We don’t actually need to store the row in a variable of its own. Instead, we can combine the selection and the function call: print('maximum inflammation for patient 2:', numpy.max(data[2, :])) ###Output _____no_output_____ ###Markdown Visualize like thisWhat if we need the maximum inflammation for each patient over all days (as in the next diagram on the left) or the average for each day (as in the diagram on the right)? As the diagram below shows, we want to perform the operation across an axis:![image2](data\image2.png "Data")To support this functionality, most array functions allow us to specify the axis we want to work on. If we ask for the average across axis 0 (rows in our 2D example), we get: ###Code print(numpy.mean(data, axis=0)) # As a quick check, we can ask this array what its shape is: print(numpy.mean(data, axis=0).shape) print(numpy.mean(data, axis=1)) ###Output _____no_output_____ ###Markdown Visualizing Tabular Data ###Code # we will import the pyplot module from matplotlib and use two of its functions to create and display a heat map of our data: import matplotlib.pyplot image = matplotlib.pyplot.imshow(data) matplotlib.pyplot.show() ###Output _____no_output_____ ###Markdown Blue pixels in this heat map represent low values, while yellow pixels represent high values. As we can see, inflammation rises and falls over a 40-day period. 
Let’s take a look at the average inflammation over time:```pythonave_inflammation = numpy.mean(data, axis=0)ave_plot = matplotlib.pyplot.plot(ave_inflammation)matplotlib.pyplot.show()``` Run the code snippets to see the result Here, we have put the average inflammation per day across all patients in the variable ave_inflammation, then asked matplotlib.pyplot to create and display a line graph of those values. The result is a roughly linear rise and fall, which is suspicious: we might instead expect a sharper rise and slower fall. Let’s have a look at two other statistics: These are used to get max and min plots ```pythonmax_plot = matplotlib.pyplot.plot(numpy.max(data, axis=0))matplotlib.pyplot.show()``````pythonmin_plot = matplotlib.pyplot.plot(numpy.min(data, axis=0))matplotlib.pyplot.show()``` Grouping plotsThe function `matplotlib.pyplot.figure()` creates a space into which we will place all of our plots. The parameter figsize tells Python how big to make this space. Each subplot is placed into the figure using its `add_subplot` method. The `add_subplot` method takes 3 parameters. The first denotes how many total rows of subplots there are, the second parameter refers to the total number of subplot columns, and the final parameter denotes which subplot your variable is referencing (left-to-right, top-to-bottom). Each subplot is stored in a different variable (`axes1`, `axes2`, `axes3`). Once a subplot is created, the axes can be titled using the `set_xlabel()` command (or `set_ylabel()`). Here are our three plots side by side: ###Code import numpy import matplotlib.pyplot data = numpy.loadtxt(fname='data\inflammation-01.csv', delimiter=',') fig = matplotlib.pyplot.figure(figsize=(10.0, 3.0)) axes1 = fig.add_subplot(1, 3, 1) axes2 = fig.add_subplot(1, 3, 2) axes3 = fig.add_subplot(1, 3, 3) axes1.set_ylabel('average') axes1.plot(numpy.mean(data, axis=0)) axes2.set_ylabel('max') axes2.plot(numpy.max(data, axis=0)) axes3.set_ylabel('min') axes3.plot(numpy.min(data, axis=0)) fig.tight_layout() matplotlib.pyplot.savefig('inflammation.png') matplotlib.pyplot.show() ###Output _____no_output_____ ###Markdown Analyzing Data from Multiple Files ###Code import glob ###Output _____no_output_____ ###Markdown The `glob` library contains a function, also called `glob`, that finds files and directories whose names match a pattern. We provide those patterns as strings: the character `*` matches zero or more characters, while `?` matches any one character. 
We can use this to get the names of all the CSV files in the current directory: ###Code print(glob.glob('data\inflammation*.csv')) ###Output _____no_output_____ ###Markdown + As these examples show, glob.glob’s result is a list of file and directory paths in arbitrary order.+ This means we can loop over it to do something with each filename in turn.+ In our case, the “something” we want to do is generate a set of plots for each file in our inflammation dataset.+ If we want to start by analyzing just the first three files in alphabetical order, we can use the sorted built-in function to generate a new sorted list from the glob.glob output: ###Code import glob import numpy import matplotlib.pyplot filenames = sorted(glob.glob('data\inflammation*.csv')) filenames = filenames[0:3] for filename in filenames: print(filename) data = numpy.loadtxt(fname=filename, delimiter=',') fig = matplotlib.pyplot.figure(figsize=(10.0, 3.0)) axes1 = fig.add_subplot(1, 3, 1) axes2 = fig.add_subplot(1, 3, 2) axes3 = fig.add_subplot(1, 3, 3) axes1.set_ylabel('average') axes1.plot(numpy.mean(data, axis=0)) axes2.set_ylabel('max') axes2.plot(numpy.max(data, axis=0)) axes3.set_ylabel('min') axes3.plot(numpy.min(data, axis=0)) fig.tight_layout() matplotlib.pyplot.show() ###Output _____no_output_____ ###Markdown Looping Over Data Sets Use a `for` loop to process files given a list of their names. ###Code import pandas as pd for filename in ['data/gapminder_gdp_africa.csv', 'data/gapminder_gdp_asia.csv']: data = pd.read_csv(filename, index_col='country') print(filename, data.min()) ###Output _____no_output_____ ###Markdown Use `glob.glob` to find sets of files whose names match a pattern.+ In Unix, the term “globbing” means “matching a set of files with a pattern”.+ The most common patterns are: + `*` meaning “match zero or more characters” + `?` meaning “match exactly one character”+ Python’s standard library contains the `glob` module to provide pattern matching functionality+ The `glob` module contains a function also called glob to match file patterns+ E.g., `glob.glob('*.txt')` matches all files in the current directory whose names end with .txt.+ Result is a (possibly empty) list of character strings. ###Code import glob print('all csv files in data directory:', glob.glob('data/*.csv')) print('all PDB files:', glob.glob('*.pdb')) ###Output _____no_output_____ ###Markdown Use `glob` and `for` to process batches of files. ###Code # Helps a lot if the files are named and stored systematically and consistently so that simple patterns will find the right data. for filename in glob.glob('data/gapminder_*.csv'): data = pd.read_csv(filename) print(filename, data['gdpPercap_1952'].min()) ###Output _____no_output_____ ###Markdown This includes all data, as well as per-region data.Use a more specific pattern in the exercises to exclude the whole data set.But note that the minimum of the entire data set is also the minimum of one of the data sets, which is a nice check on correctness. 
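###Markdown The note above says a more specific pattern can exclude the whole-data file. A minimal sketch, assuming the aggregate file does not start with `gapminder_gdp_` (as in the standard lesson data), so only the per-region files are matched: ###Code
import glob
import pandas as pd

# 'gapminder_gdp_*' skips the aggregate file while still matching every per-region file.
for filename in sorted(glob.glob('data/gapminder_gdp_*.csv')):
    data = pd.read_csv(filename, index_col='country')
    print(filename, data['gdpPercap_1952'].min())
###Output _____no_output_____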
Creating FunctionsLet’s start by defining a function `fahr_to_celsius` that converts temperatures from Fahrenheit to Celsius: ###Code def fahr_to_celsius(temp): return ((temp - 32) * (5/9)) ###Output _____no_output_____ ###Markdown ![image3](data\image3.png "Data")```pythonfahr_to_celsius(32)```This command should call our function, using “32” as the input and return the function value.In fact, calling our own function is no different from calling any other function: ###Code print('freezing point of water:', fahr_to_celsius(32), 'C') print('boiling point of water:', fahr_to_celsius(212), 'C') ###Output _____no_output_____ ###Markdown Composing FunctionsNow that we’ve seen how to turn Fahrenheit into Celsius, we can also write the function to turn Celsius into Kelvin: ###Code def celsius_to_kelvin(temp_c): return temp_c + 273.15 print('freezing point of water in Kelvin:', celsius_to_kelvin(0.)) ###Output _____no_output_____ ###Markdown Tidying upNow that we know how to wrap bits of code up in functions, we can make our inflammation analysis easier to read and easier to reuse. First, let’s make a visualize function that generates our plots:```pythondef visualize(filename): data = numpy.loadtxt(fname=filename, delimiter=',') fig = matplotlib.pyplot.figure(figsize=(10.0, 3.0)) axes1 = fig.add_subplot(1, 3, 1) axes2 = fig.add_subplot(1, 3, 2) axes3 = fig.add_subplot(1, 3, 3) axes1.set_ylabel('average') axes1.plot(numpy.mean(data, axis=0)) axes2.set_ylabel('max') axes2.plot(numpy.max(data, axis=0)) axes3.set_ylabel('min') axes3.plot(numpy.min(data, axis=0)) fig.tight_layout() matplotlib.pyplot.show()```and another function called `detect_problems` that checks for those systematics we noticed:```pythondef detect_problems(filename): data = numpy.loadtxt(fname=filename, delimiter=',') if numpy.max(data, axis=0)[0] == 0 and numpy.max(data, axis=0)[20] == 20: print('Suspicious looking maxima!') elif numpy.sum(numpy.min(data, axis=0)) == 0: print('Minima add up to zero!') else: print('Seems OK!')```We can reproduce the previous analysis with a much simpler for loop:```pythonfilenames = sorted(glob.glob('data/inflammation*.csv'))for filename in filenames[:3]: print(filename) visualize(filename) detect_problems(filename)```Here is the implemeantation..... ###Code def visualize(filename): data = numpy.loadtxt(fname=filename, delimiter=',') fig = matplotlib.pyplot.figure(figsize=(10.0, 3.0)) axes1 = fig.add_subplot(1, 3, 1) axes2 = fig.add_subplot(1, 3, 2) axes3 = fig.add_subplot(1, 3, 3) axes1.set_ylabel('average') axes1.plot(numpy.mean(data, axis=0)) axes2.set_ylabel('max') axes2.plot(numpy.max(data, axis=0)) axes3.set_ylabel('min') axes3.plot(numpy.min(data, axis=0)) fig.tight_layout() matplotlib.pyplot.show() def detect_problems(filename): data = numpy.loadtxt(fname=filename, delimiter=',') if numpy.max(data, axis=0)[0] == 0 and numpy.max(data, axis=0)[20] == 20: print('Suspicious looking maxima!') elif numpy.sum(numpy.min(data, axis=0)) == 0: print('Minima add up to zero!') else: print('Seems OK!') filenames = sorted(glob.glob('data\inflammation*.csv')) for filename in filenames[:3]: print(filename) visualize(filename) detect_problems(filename) ###Output _____no_output_____ ###Markdown Testing and DocumentingOnce we start putting things in functions so that we can re-use them, we need to start testing that those functions are working correctly. 
To see how to do this, let’s write a function to offset a dataset so that it’s mean value shifts to a user-defined value: ###Code def offset_mean(data, target_mean_value): return (data - numpy.mean(data)) + target_mean_value z = numpy.zeros((2,2)) print(offset_mean(z, 3)) # That looks right, so let’s try offset_mean on our real data: data = numpy.loadtxt(fname='data\inflammation-01.csv', delimiter=',') print(offset_mean(data, 0)) ###Output _____no_output_____ ###Markdown It’s hard to tell from the default output whether the result is correct, but there are a few tests that we can run to reassure us: ###Code print('original min, mean, and max are:', numpy.min(data), numpy.mean(data), numpy.max(data)) offset_data = offset_mean(data, 0) print('min, mean, and max of offset data are:', numpy.min(offset_data), numpy.mean(offset_data), numpy.max(offset_data)) ###Output _____no_output_____ ###Markdown That seems almost right: the original mean was about 6.1, so the lower bound from zero is now about -6.1. The mean of the offset data isn’t quite zero — we’ll explore why not in the challenges — but it’s pretty close. We can even go further and check that the standard deviation hasn’t changed: ###Code print('std dev before and after:', numpy.std(data), numpy.std(offset_data)) ###Output _____no_output_____ ###Markdown Those values look the same, but we probably wouldn’t notice if they were different in the sixth decimal place. Let’s do this instead: ###Code print('difference in standard deviations before and after:', numpy.std(data) - numpy.std(offset_data)) ###Output _____no_output_____ ###Markdown Again, the difference is very small. It’s still possible that our function is wrong, but it seems unlikely enough that we should probably get back to doing our analysis. We have one more task first, though: we should write some documentation for our function to remind ourselves later what it’s for and how to use it.The usual way to put documentation in software is to add comments like this: ###Code # offset_mean(data, target_mean_value): # return a new array containing the original data with its mean offset to match the desired value. def offset_mean(data, target_mean_value): return (data - numpy.mean(data)) + target_mean_value ###Output _____no_output_____ ###Markdown There’s a better way, though. If the first thing in a function is a string that isn’t assigned to a variable, that string is attached to the function as its documentation: ###Code def offset_mean(data, target_mean_value): """Return a new array containing the original data with its mean offset to match the desired value.""" return (data - numpy.mean(data)) + target_mean_value ###Output _____no_output_____ ###Markdown This is better because we can now ask Python’s built-in help system to show us the documentation for the function: ###Code help(offset_mean) ###Output _____no_output_____ ###Markdown A string like this is called a docstring. We don’t need to use triple quotes when we write one, but if we do, we can break the string across multiple lines: ###Code def offset_mean(data, target_mean_value): """Return a new array containing the original data with its mean offset to match the desired value. Examples -------- >>> offset_mean([1, 2, 3], 0) array([-1., 0., 1.]) """ return (data - numpy.mean(data)) + target_mean_value help(offset_mean) ###Output _____no_output_____ ###Markdown Errors and Exceptions ###Code # This code has an intentional error. You can type it directly or # use it for reference to understand the error message below. 
def favorite_ice_cream(): ice_creams = [ 'chocolate', 'vanilla', 'strawberry' ] print(ice_creams[3]) favorite_ice_cream() ###Output _____no_output_____ ###Markdown This particular traceback has two levels. You can determine the number of levels by looking for the number of arrows on the left hand side. In this case:1. The first shows code from the cell above, with an arrow pointing to Line 11 (which is `favorite_ice_cream()`).2. The second shows some code in the function `favorite_ice_cream`, with an arrow pointing to Line 9 (which is `print(ice_creams[3])`).The last level is the actual place where the error occurred. The other level(s) show what function the program executed to get to the next level down. So, in this case, the program first performed a function call to the function `favorite_ice_cream`. Inside this function, the program encountered an error on Line 6, when it tried to run the code `print(ice_creams[3]`).So what error did the program actually encounter? In the last line of the traceback, Python helpfully tells us the category or type of error (in this case, it is an IndexError) and a more detailed error message (in this case, it says “list index out of range”).If you encounter an error and don’t know what it means, it is still important to read the traceback closely. That way, if you fix the error, but encounter a new one, you can tell that the error changed. Additionally, sometimes knowing where the error occurred is enough to fix it, even if you don’t entirely understand the message.If you do encounter an error you don’t recognize, try looking at the official documentation on errors. However, note that you may not always be able to find the error there, as it is possible to create custom errors. In that case, hopefully the custom error message is informative enough to help you figure out what went wrong. Syntax ErrorsWhen you forget a colon at the end of a line, accidentally add one space too many when indenting under an `if` statement, or forget a parenthesis, you will encounter a syntax error. This means that Python couldn’t figure out how to read your program. This is similar to forgetting punctuation in English: for example, this text is difficult to read there is no punctuation there is also no capitalization why is this hard because you have to figure out where each sentence ends you also have to figure out where each sentence begins to some extent it might be ambiguous if there should be a sentence break or notPeople can typically figure out what is meant by text with no punctuation, but people are much smarter than computers. If Python doesn’t know how to read the program, it will give up and inform you with an error. For example: ###Code def some_function() msg = 'hello, world!' print(msg) return msg ###Output _____no_output_____ ###Markdown Here, Python tells us that there is a `SyntaxError` on line 1, and even puts a little arrow in the place where there is an issue. In this case the problem is that the function definition is missing a colon at the end.Actually, the function above has two issues with syntax. If we fix the problem with the colon, we see that there is also an `IndentationError`, which means that the lines in the function definition do not all have the same indentation: ###Code def some_function(): msg = 'hello, world!' 
print(msg) return msg ###Output _____no_output_____ ###Markdown Both `SyntaxError` and `IndentationError` indicate a problem with the syntax of your program, but an `IndentationError` is more specific: it always means that there is a problem with how your code is indented.Some indentation errors are harder to spot than others. In particular, mixing spaces and tabs can be difficult to spot because they are both whitespace. In the example below, the first two lines in the body of the function `some_function` are indented with tabs, while the third line — with spaces. If you’re working in a Jupyter notebook, be sure to copy and paste this example rather than trying to type it in manually because Jupyter automatically replaces tabs with spaces. ###Code def some_function(): msg = 'hello, world!' print(msg) return msg ###Output _____no_output_____ ###Markdown Variable Name ErrorsAnother very common type of error is called a NameError, and occurs when you try to use a variable that does not exist. For example: ###Code print(a) ###Output _____no_output_____ ###Markdown Index ErrorsNext up are errors having to do with containers (like lists and strings) and the items within them. If you try to access an item in a list or a string that does not exist, then you will get an error. This makes sense: if you asked someone what day they would like to get coffee, and they answered “caturday”, you might be a bit annoyed. Python gets similarly annoyed if you try to ask it for an item that doesn’t exist: ###Code letters = ['a', 'b', 'c'] print('Letter #1 is', letters[0]) print('Letter #2 is', letters[1]) print('Letter #3 is', letters[2]) print('Letter #4 is', letters[3]) ###Output _____no_output_____ ###Markdown Here, Python is telling us that there is an IndexError in our code, meaning we tried to access a list index that did not exist. File ErrorsThe last type of error we’ll cover today are those associated with reading and writing files: `FileNotFoundError`. If you try to read a file that does not exist, you will receive a `FileNotFoundError` telling you so. If you attempt to write to a file that was opened read-only, Python 3 returns an `UnsupportedOperationError`. More generally, problems with input and output manifest as `IOErrors` or `OSErrors`, depending on the version of Python you use. ###Code file_handle = open('myfile.txt', 'r') ###Output _____no_output_____ ###Markdown One reason for receiving this error is that you specified an incorrect path to the file. For example, if I am currently in a folder called `myproject`, and I have a file in `myproject/writing/myfile.txt`, but I try to open `myfile.txt`, this will fail. The correct path would be `writing/myfile.txt`. It is also possible that the file name or its path contains a typo.A related issue can occur if you use the “read” flag instead of the “write” flag. Python will not give you an error if you try to open a file for writing when the file does not exist. However, if you meant to open a file for reading, but accidentally opened it for writing, and then try to read from it, you will get an `UnsupportedOperation` error telling you that the file was not opened for reading: ###Code file_handle = open('myfile.txt', 'w') file_handle.read() ###Output _____no_output_____
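###Markdown As a closing sketch (not part of the original lesson), errors like the ones above can be anticipated with a `try`/`except` block, so the program can react to a missing file instead of stopping with a traceback:

```python
try:
    file_handle = open('myfile.txt', 'r')
except FileNotFoundError:
    print('myfile.txt does not exist, so there is nothing to read')
else:
    print(file_handle.read())
    file_handle.close()
```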
medidas_de_dispersao_amplitude_amostral_e_diferenca_interquartil.ipynb
###Markdown Total range (amplitude) ###Code import numpy as np import math dados = np.array([160,165,167,164,160,166,160,161,150,152,173,160,155,164,168,162,161,168,163,156,155,169,151,170,164,155,152,163,160,155,157,156,158,158,161,154,161,156,172,153]) amplitude_total = dados.max() - dados.min() amplitude_total ###Output _____no_output_____
###Markdown Interquartile range ###Code q1 = np.quantile(dados, 0.25) q3 = np.quantile(dados, 0.75) q1, q3 diferenca_interquartil = q3 - q1 diferenca_interquartil limite_inferior = q1 - (1.5 * diferenca_interquartil) limite_inferior limite_superior = q3 + (1.5 * diferenca_interquartil) limite_superior ###Output _____no_output_____
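###Markdown As a short follow-up sketch (not in the original notebook), the two limits computed above can be used to flag potential outliers in the same array:

```python
# Observations outside the 1.5*IQR fences are commonly treated as potential outliers.
outliers = dados[(dados < limite_inferior) | (dados > limite_superior)]
print('potential outliers:', outliers)
```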
examples/Classification.ipynb
###Markdown Step 0. Make a sample classification dataset: ###Code # import requires libraries import warnings warnings.filterwarnings("ignore", category=Warning) import numpy as np import pandas as pd from sklearn.datasets import make_classification from matplotlib import pyplot as plt from SKSurrogate import * # make up a classification dataset X, y = make_classification(n_samples=1000, n_features=10, n_informative=6, n_redundant=2) Xy = np.hstack((X, np.reshape(y, (-1, 1)))) # make up some names for columns of the data cols = ['cl%d'%(_+1) for _ in range(10)] + ['target'] # turn it into a pandas DataFrame df = np2df(Xy, cols)() ###Output _____no_output_____ ###Markdown Step 1. Initiate the tracker and register the data: ###Code # initialize the tracker with a task called 'sample' MLTr = mltrack('sample', db_name="sample.db") # register the data MLTr.RegisterData(df, 'target') # modify the description of the task MLTr.UpdateTask({'description': "This is a sample task to demonstrate capabilities of the mltrace."}) ###Output _____no_output_____ ###Markdown Step 2. Get to know the data by visualizing correlations and sensitivities: ###Code from sklearn.gaussian_process.kernels import Matern, Sum, ExpSineSquared from sklearn.kernel_ridge import KernelRidge from sklearn.model_selection import RandomizedSearchCV import warnings warnings.filterwarnings("ignore", category=Warning) # use a regressor to approximate the data param_grid_kr = {"alpha": np.logspace(-4, 1, 20), "kernel": [Sum(Matern(), ExpSineSquared(l, p)) for l in np.logspace(-2, 2, 10) for p in np.logspace(0, 2, 10)]} rgs = RandomizedSearchCV(KernelRidge(), param_distributions=param_grid_kr, n_iter=10, cv=2) # ask for specific weights to be calculated and recorded MLTr.FeatureWeights(regressor=rgs, weights=('pearson', 'sobol', 'morris', 'delta-mmnt')) # visualise plt1 = MLTr.heatmap(sort_by='pearson') #plt1.show() cor = df.corr() plt2 = p = MLTr.heatmap(cor, idx_col=None, cmap='rainbow') ###Output _____no_output_____ ###Markdown Step 3. Examine and log a random forest model and its metrics: ###Code from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import cross_val_score, ShuffleSplit # retrieve data X, y = MLTr.get_data() # init classifier clf = RandomForestClassifier(n_estimators=50) # log the classifier clf = MLTr.LogModel(clf, "RandomForestClassifier(50)") # find the average metrics print(MLTr.LogMetrics(clf, cv=ShuffleSplit(5, .25))) ###Output {'accuracy': 0.9168000000000001, 'auc': 0.9781085630410509, 'precision': 0.9342421726001271, 'f1': 0.9183404537120119, 'recall': 0.9038833038152218, 'mcc': 0.8348771709836751, 'logloss': 2.8736524228522797, 'variance': None, 'max_error': None, 'mse': None, 'mae': None, 'r2': None} ###Markdown Step 4. Search for best classifier in terms of accuracy ###Code from SKSurrogate import * # set up the confic dictionary config = { # estimators 'sklearn.naive_bayes.GaussianNB': { 'var_smoothing': Real(1.e-9, 2.e-1) }, 'sklearn.linear_model.LogisticRegression': { 'penalty': Categorical(["l1", "l2"]), 'C': Real(1.e-6, 10.), "class_weight": HDReal((1.e-5, 1.e-5), (20., 20.)) }, "lightgbm.LGBMClassifier": { "boosting_type": Categorical(['gbdt', 'dart', 'goss', 'rf']), "num_leaves": Integer(2, 100), "learning_rate": Real(1.e-7, 1. 
- 1.e-6), # prior='uniform'), "n_estimators": Integer(5, 250), "min_split_gain": Real(0., 1.), # prior='uniform'), "subsample": Real(1.e-6, 1.), # prior='uniform'), "importance_type": Categorical(['split', 'gain']) }, # preprocessing 'sklearn.preprocessing.StandardScaler': { 'with_mean': Categorical([True, False]), 'with_std': Categorical([True, False]), }, 'sklearn.preprocessing.Normalizer': { 'norm': Categorical(['l1', 'l2', 'max']) }, } # initiate and perform the search A = AML(config=config, length=3, check_point='./', verbose=1) A.eoa_fit(X, y, max_generation=10, num_parents=10) # retrieve and log the best eoa_clf = A.best_estimator_ eoa_clf = MLTr.LogModel(eoa_clf, "Best of EOA Surrogate Search") print(MLTr.LogMetrics(eoa_clf, cv=ShuffleSplit(5, .25))) MLTr.PreserveModel(eoa_clf) ###Output _____no_output_____ ###Markdown Dataset ###Code # initialize the parameters of the dataset n_samples = 10000 noise = 6 random_state = 1 # Create the dataset x, y = make_classification( n_samples=n_samples, random_state=random_state ) x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.4, random_state=1 ) ###Output _____no_output_____ ###Markdown Model ###Code # Initialize the model model = NNet() # Create the model structure model.add(LinearLayer(x.shape[1], 20)) model.add(LeakyReLU()) model.add(LinearLayer(20,7)) model.add(LeakyReLU()) model.add(LinearLayer(7, 5)) model.add(LeakyReLU()) model.add(LinearLayer(5,1)) model.add(Sigmoid()) # set the loss functions and the optimize method loss = BCELoss() optim = Adam() # Train the model costs = [] for epoch in range(7000): model.forward(x_train.T) cost = model.loss(y_train, loss) model.backward() model.optimize(optim) if epoch % 100 == 0: print ("Cost after iteration %epoch: %f" %(epoch, cost)) costs.append(cost) # plot the loss evolution costs_ss = pd.Series(costs[1:]) plt.figure(figsize=(7, 3)) plt.plot(costs_ss) plt.ylabel('cost') plt.xlabel('iterations (per hundreds)') plt.title('Loss per epoch') plt.show() ###Output _____no_output_____ ###Markdown Employee AttritionA dataset from [HR Analytics](https://www.kaggle.com/lnvardanyan/hr-analytics) containing employee information of a company is provided.The following variables are included in the data:* satisfaction_level: The satisfaction of the employee* last_evaluation: How long ago the employee had his last evaluation* number_project: The amount of projects the employee has been involved in * average_montly_hours: The average amount of hours the employee works each month* time_spend_company: The amount of years the employee has worked there* Work_accident: Boolean representing if the employee has been involved in an accident* left: Our target variable, determines if the employee left the company or not* promotion_last_5years: Boolean on whether the employee was promoted in the last 5 years or not* sales: The name of the department the employee works in* salary: The salary of the employee (can be low, medium or high)We want to build a classification model that can determine which employee will likely leave the company in order to make the necessary changes to reduce employee attrition. We will use 80% of the data for training and the remaining 20% for validation of our modeling. OutlineWe separate the project in 3 steps:Data Loading and Exploratory Data Analysis: Load the data and analyze it to obtain an accurate picture of it, its features, its values (and whether they are incomplete or wrong), its data types among others. 
Also, the creation of different types of plots in order to help us understand the data and make the model creation easier.Feature Engineering / Modeling and Pipeline: Once we have the data, we create some features and then the modeling stage begins, making use of different models (and ensembles) and a strong pipeline with different transformers, we will hopefully produce a model that fits our expectations of performance. Once we have that model, a process of tuning it to the training data would be performed.Results and Conclusions: Finally, with our tuned model, we predict against the test set we decided to separate initially, then we review those results against their actual values to determine the performance of the model, and finally, outlining our conclusions. ###Code import warnings import numpy as np import pandas as pd import seaborn as sns from tempfile import mkdtemp from sklearn.base import clone import matplotlib.pyplot as plt from sklearn.cluster import KMeans from ml_helper.helper import Helper from imblearn import FunctionSampler from imblearn.combine import SMOTEENN from sklearn.decomposition import PCA from imblearn.pipeline import Pipeline from sklearn.compose import ColumnTransformer from gplearn.genetic import SymbolicTransformer from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from sklearn.feature_selection import RFE, SelectFromModel from imblearn.over_sampling import RandomOverSampler, SMOTE from sklearn.metrics import accuracy_score as metric_scorer, classification_report from imblearn.under_sampling import RandomUnderSampler, RepeatedEditedNearestNeighbours from sklearn.ensemble import ( ExtraTreesClassifier, RandomForestClassifier, IsolationForest, ) from sklearn.preprocessing import ( PolynomialFeatures, KBinsDiscretizer, PowerTransformer, OneHotEncoder, FunctionTransformer, ) warnings.filterwarnings("ignore") ###Output _____no_output_____ ###Markdown Setting Key ValuesThe following values are used throught the code, this cell gives a central source where they can be managed. ###Code MEMORY = mkdtemp() KEYS = { "SEED": 1, "DATA_PATH": "https://gist.githubusercontent.com/akoury/d5943d9c3dba8dc20a4c0c35027b110c/raw/8f01edb1ce950511cab5f9aa6aafd14bd2fe96db/Turnover", "TARGET": "left", "METRIC": "accuracy", "TIMESERIES": False, "SPLITS": 5, "ESTIMATORS": 150, "ITERATIONS": 500, "MEMORY": MEMORY, } hp = Helper(KEYS) ###Output _____no_output_____ ###Markdown Data LoadingHere we load the necessary data, print its first rows and describe its contents. ###Code def read_data(input_path): return pd.read_csv(input_path) data = read_data(KEYS["DATA_PATH"]) data.head() data.describe() ###Output _____no_output_____ ###Markdown Data typesWe review the data types for each column. ###Code data.dtypes ###Output _____no_output_____ ###Markdown Missing DataWe check if there is any missing data. ###Code hp.missing_data(data) ###Output _____no_output_____ ###Markdown Converting columns to their true categorical typeNow we convert the data types of numerical columns that are actually categorical. ###Code data = hp.convert_to_category(data, data.iloc[:, 5:8]) data.dtypes ###Output _____no_output_____ ###Markdown Defining Holdout Set for Validation80% of the data will be used to train our model, while the remaining data will be used later on to validate the accuracy of our model. 
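The split below relies on scikit-learn's `train_test_split`. If reproducibility across runs matters, the same call can also be seeded with the project seed already stored in `KEYS` and stratified on the target; this is an optional variant, not what the original cell does:

```python
# Optional, reproducible and stratified variant of the split performed below.
train_data, holdout = train_test_split(
    data, test_size=0.2, stratify=data[KEYS["TARGET"]], random_state=KEYS["SEED"]
)
```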
###Code train_data, holdout = train_test_split(data, test_size=0.2) ###Output _____no_output_____ ###Markdown Exploratory Data AnalysisHere we will perform all of the necessary data analysis, with different plots that will help us understand the data and therefore, create a better model.We must specify that all of this analysis is performed only on the training data, so that we do not incur in any sort of bias when modeling.We begin by plotting pairwise relationships between variables, as well as the distribution for each column in the diagonal. ###Code pairplot = sns.pairplot(train_data, hue=KEYS["TARGET"], palette="husl") ###Output _____no_output_____ ###Markdown Boxplot of Numerical VariablesWe review the distribution of scaled numerical data through a boxplot for each variable. ###Code hp.boxplot(data) ###Output _____no_output_____ ###Markdown As we can see, there are only a few outliers in the time spent in company, so outlier treatment does not seem necessary. Coefficient of VariationThe coefficient of variation is a dimensionless meassure of dispersion in data, the lower the value the less dispersion a feature has. We will select columns that have a variance of less than 0.05 since they would probably perform poorly. ###Code invariant = hp.coefficient_variation(data, threshold=0.05) ###Output No invariant columns ###Markdown Data CorrelationNow we analyze correlation in the data for both numerical and categorical columns and plot them, using a threshold of 70%.For the numerical features we use Spearman correlation and for the categorical ones we use Cramér's V. ###Code correlated_cols = hp.correlated(train_data, 0.7) ###Output No correlated columns for the 0.7 threshold ###Markdown Underrepresented FeaturesNow we determine underrepresented features, meaning those that in more than 97% of the records are composed of a single value. ###Code under_rep = hp.under_represented(train_data, 0.97) ###Output ['promotion_last_5years'] underrepresented ###Markdown Principal Component Analysis (PCA)We plot PCA component variance to define the number of components we wish to consider in the pipeline. ###Code hp.plot_pca_components(data, convert=True) ###Output _____no_output_____ ###Markdown Feature ImportanceHere we plot feature importance using a random forest in order to get a sense of which features have the most importance. ###Code hp.feature_importance( data, RandomForestClassifier(n_estimators=KEYS["ESTIMATORS"], random_state=KEYS["SEED"]), convert=True, ) ###Output Feature ranking: ###Markdown Check target variable balanceWe review the distribution of values in the target variable. ###Code hp.target_distribution(train_data) ###Output _____no_output_____ ###Markdown Since 0 is employees that stay and 1 is employees that leave, a rebalancing should be tried since there is a very big difference in the number of values for each option. Feature Engineering / Pipeline / ModelingA number of different combinations of feature engineering steps and transformations will be performed in a pipeline with different models, each one will be cross validated to review the performance of the model.**Some of the steps are commented, the point is for the user to comment/uncomment the steps they wish to try and those pipelines and scores will be saved for later use**, that way you can see what improves the score and what decreases it. 
What is left uncommented is the resulting best pipeline after multiple attempts.A feature called 'avg_time_per_project' is added to determine the average time each employee spends on a project.We also try removing unneeded columns (like invariant, correlated and under represented ones), clustering, removing outliers through isolation forests, quantile binning, polynomial combinations, genetic transformations, one hot encoding, rebalancing techniques, recursive feature elimination, feature selection, PCA and more. Each time we see if the scores improve in order to guide our pipeline creation. ###Code def avg_time_pp(df): df = df.copy() df["avg_time_per_project"] = ( df["average_montly_hours"] * 12 * df["time_spend_company"] ) / df["number_project"] df["avg_time_per_project"] = df["avg_time_per_project"].replace( [np.inf, -np.inf], np.nan ) df["avg_time_per_project"] = df["avg_time_per_project"].fillna(0) return df def drop_features(df, cols): return df[df.columns.difference(cols)] def kmeans(df, clusters=3): clusterer = KMeans(clusters, random_state=KEYS["SEED"]) cluster_labels = clusterer.fit_predict(df) df = np.column_stack([df, cluster_labels]) return df def outlier_rejection(X, y): model = IsolationForest(random_state=KEYS["SEED"], behaviour="new", n_jobs=-1) model.fit(X) y_pred = model.predict(X) return X[y_pred == 1], y[y_pred == 1] num_pipeline = Pipeline( [ ("power_transformer", PowerTransformer(method="yeo-johnson", standardize=True)), # ('binning', KBinsDiscretizer(n_bins = 5, encode = 'onehot-dense')), ("polynomial", PolynomialFeatures(degree=2, include_bias=False)), # ('genetic', SymbolicTransformer(population_size=750, metric='spearman', function_set = ['add', 'sub', 'mul', 'div', 'sqrt', 'log', 'abs', 'neg', 'max', 'min'], parsimony_coefficient = 0.0005, max_samples = 0.9, random_state = KEYS['SEED'])) ] ) categorical_pipeline = Pipeline( [("one_hot", OneHotEncoder(sparse=False, handle_unknown="ignore"))] ) pipe = Pipeline( [ ("avg_time_pp", FunctionTransformer(avg_time_pp, validate=False)), ( "drop_features", FunctionTransformer( drop_features, kw_args={"cols": invariant + correlated_cols + under_rep}, validate=False, ), ), ( "column_transformer", ColumnTransformer( [ ( "numerical_pipeline", num_pipeline, hp.numericals(data, [KEYS["TARGET"]]).columns, ), ("categorical_pipeline", categorical_pipeline, ["sales", "salary"]), ], remainder="passthrough", ), ), # ('kmeans', FunctionTransformer(kmeans, validate=False)), # ('outliers', FunctionSampler(func = outlier_rejection)), # ('rand_under', RandomUnderSampler(random_state = KEYS['SEED'])), # ('ENN_under', RepeatedEditedNearestNeighbours(random_state = KEYS['SEED'])), # ('rand_over', RandomOverSampler(random_state = KEYS['SEED'])), # ('SMOTE_over', SMOTE(random_state = KEYS['SEED'])), # ('combined_sampler', SMOTEENN(random_state = KEYS['SEED'])), # ('rfe', RFE(RandomForestClassifier(n_estimators = KEYS['ESTIMATORS'], random_state = KEYS['SEED']), n_features_to_select = 6)), # ('feature_selection', SelectFromModel(RandomForestClassifier(n_estimators = KEYS['ESTIMATORS'], random_state = KEYS['SEED']), threshold = 0.005)), # ('pca', PCA(n_components = 6)) ] ) models = [ { "name": "logistic_regression", "model": LogisticRegression( solver="lbfgs", max_iter=KEYS["ITERATIONS"], random_state=KEYS["SEED"] ), }, { "name": "random_forest", "model": RandomForestClassifier( n_estimators=KEYS["ESTIMATORS"], random_state=KEYS["SEED"] ), }, {"name": "extra_tree", "model": ExtraTreesClassifier(random_state=KEYS["SEED"])}, ] ###Output 
_____no_output_____ ###Markdown ScoresHere you can see all of the scores for the different models throughout the entire cross validation process for each pipeline, in certain cases errors can happen (for example when a certain fold contains a sparse matrix), therefore you may see errors marked as such in the score. ###Code all_scores = hp.pipeline(train_data, models, pipe, note="Base model") ###Output _____no_output_____ ###Markdown Here we run the different combinations of pipelines, you may run it multiple times with different parameters and they will be queued. ###Code all_scores = hp.pipeline(train_data, models, pipe, all_scores) ###Output _____no_output_____ ###Markdown Pipeline Performance by Model ###Code hp.plot_models(all_scores) ###Output _____no_output_____ ###Markdown Top Pipelines per ModelHere we show the top pipelines per model. ###Code hp.show_scores(all_scores, top=True) ###Output _____no_output_____ ###Markdown Randomized Grid SearchOnce we have a list of models, we perform a cross validated, randomized grid search on the best performing one to define the final model. ###Code grid = { "random_forest__criterion": ["gini", "entropy"], "random_forest__min_samples_leaf": [10, 20], "random_forest__min_samples_split": [5, 8], "random_forest__max_leaf_nodes": [30, 60], } final_scores, grid_pipe = hp.cross_val( train_data, model=clone(hp.top_pipeline(all_scores)), grid=grid ) ###Output _____no_output_____ ###Markdown Best Parameters for the Model ###Code print(grid_pipe.best_params_) final_pipe = grid_pipe.best_estimator_ ###Output {'random_forest__min_samples_split': 8, 'random_forest__min_samples_leaf': 10, 'random_forest__max_leaf_nodes': 60, 'random_forest__criterion': 'entropy'} ###Markdown ResultsWe evaluate the final model with the holdout, obtaining the definitive score of the model. ###Code y, predictions = hp.predict(train_data, holdout, final_pipe) score = metric_scorer(y, predictions) score ###Output _____no_output_____ ###Markdown Receiver Operating Characteristic (ROC) / Area Under the Curve To review the performance of the model, accuracy is not enough, therefore we plot the ROC of the model on the holdout data and print a classification report. ###Code hp.roc(holdout, final_pipe, predictions) ###Output _____no_output_____ ###Markdown Stacked ModelFinally, we create a stacked model using the top 2 models obtained during the modeling phase and obtain the holdout results. ###Code stacked, y_stacked, predictions_stacked = hp.stack_predict( train_data, holdout, all_scores, amount=2 ) score_stacked = metric_scorer(y_stacked, predictions_stacked) score_stacked print(classification_report(y_stacked, predictions_stacked)) ###Output precision recall f1-score support 0 0.99 1.00 0.99 2315 1 0.99 0.96 0.98 685 micro avg 0.99 0.99 0.99 3000 macro avg 0.99 0.98 0.98 3000 weighted avg 0.99 0.99 0.99 3000 ###Markdown Step 5. 
Plot learning curves ###Code # the best of EOA Surrogate Search MLTr.plot_learning_curve(eoa_clf, "Best of Surrogate Search", cv=ShuffleSplit(5, .25), measure='accuracy') MLTr.plot_learning_curve(eoa_clf, "Best of Surrogate Search", cv=ShuffleSplit(5, .25), measure='f1') MLTr.plot_learning_curve(eoa_clf, "Best of Surrogate Search", cv=ShuffleSplit(5, .25), measure='roc_auc') MLTr.plot_calibration_curve(eoa_clf, "Best of Surrogate Search") MLTr.plot_cumulative_gain(eoa_clf, title="Best: Cumulative Gains Curve") MLTr.plot_lift_curve(eoa_clf, title="Best: Lift Curve") # Random Forest MLTr.plot_learning_curve(clf, "Random Forest", cv=ShuffleSplit(5, .25), measure='accuracy') MLTr.plot_learning_curve(clf, "Random Forest", cv=ShuffleSplit(5, .25), measure='f1') MLTr.plot_learning_curve(clf, "Random Forest", cv=ShuffleSplit(5, .25), measure='roc_auc') MLTr.plot_calibration_curve(clf, "Random Forest") MLTr.plot_cumulative_gain(clf, title="Random Forest: Cumulative Gains Curve") MLTr.plot_lift_curve(clf, title="Random Forest: Lift Curve") ###Output _____no_output_____ ###Markdown SIM: Quadratic Function ###Code np.random.seed(2020) n = int(1e4) x = np.random.normal(0, 0.3, size=(n, 6)) beta = np.array([3, -2.5, 2, -1.5, 1.5, -1.0])/5 z = np.dot(x.reshape(-1,6),beta) f = z**2 noise = np.random.randn(n) y = 1 / (1 + np.exp(-f)) + 0.05 * np.random.randn(n) y = y - np.mean(y) y[y <= 0] = 0 y[y > 0] = 1 clf = PPRClassifier(nterms=1,optlevel=2) clf.fit(x,y) clf.projection_indices_ clf.visualize() ###Output _____no_output_____ ###Markdown AIM: Quadratic + Linear ###Code np.random.seed(2020) n = int(1e4) x = np.random.normal(0, 0.3, size=(n, 6)) beta = np.array([3, -2.5, 2, -1.5, 1.5, -1.0])/5 z1 = beta[0] * x[:, 0] + beta[1] * x[:, 1] + beta[2] * x[:, 2] z2 = beta[3] * x[:, 3] + beta[4] * x[:, 4] + beta[5] * x[:, 5] f = 0.5*z1+z2**2 y = 1 / (1 + np.exp(-f)) + 0.05 * np.random.randn(n) y = y - np.mean(y) y[y <= 0] = 0 y[y > 0] = 1 clf2 = PPRClassifier(nterms=2,optlevel=2) clf2.fit(x,y) clf2.projection_indices_ clf2.visualize() ###Output _____no_output_____ ###Markdown Non-additive Model ###Code np.random.seed(2020) n = int(1e4) x = np.random.normal(0, 0.3, size=(n, 6)) f = 3*np.pi**(x[:,0]*x[:,1])*np.sqrt(2*np.abs(x[:,2]+1)) y = 1 / (1 + np.exp(-f)) + 0.05 * np.random.randn(n) y = y - np.mean(y) y[y <= 0] = 0 y[y > 0] = 1 clf3 = PPRClassifier(nterms=2,optlevel=2) clf3.fit(x,y) clf3.projection_indices_ clf3.visualize() ###Output _____no_output_____ ###Markdown This examples show how to use MiniSom to solve a classification problem. The classification mechanism will be implemented with MiniSom and the evaluation will make use of sklearn.First, let's load a dataset (in this case the famous Iris dataset) and apply normalization: ###Code from minisom import MiniSom import numpy as np data = np.genfromtxt('iris.csv', delimiter=',', usecols=(0, 1, 2, 3)) data = np.apply_along_axis(lambda x: x/np.linalg.norm(x), 1, data) labels = np.genfromtxt('iris.csv', delimiter=',', usecols=(4), dtype=str) ###Output _____no_output_____ ###Markdown Here's naive classification function that classifies a sample in `data` using the label assigned to the associated winning neuron. A label $c$ is associated to a neuron if the majority of samples mapped in that neuron have label $c$. The function will assign the most common label in the dataset in case that a sample is mapped to a neuron for which no class is assigned. 
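The implementation below leans on the fact that each value in the dictionary returned by `labels_map` behaves like a `collections.Counter` over the training labels of the samples mapped to that neuron, so `most_common()[0][0]` picks out the majority label. A tiny standalone illustration of that idiom:

```python
from collections import Counter

votes = Counter(['setosa', 'setosa', 'versicolor'])
print(votes.most_common()[0][0])  # -> 'setosa'
```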
###Code def classify(som, data): """Classifies each sample in data in one of the classes definited using the method labels_map. Returns a list of the same length of data where the i-th element is the class assigned to data[i]. """ winmap = som.labels_map(X_train, y_train) default_class = np.sum(list(winmap.values())).most_common()[0][0] result = [] for d in data: win_position = som.winner(d) if win_position in winmap: result.append(winmap[win_position].most_common()[0][0]) else: result.append(default_class) return result ###Output _____no_output_____
###Markdown Now we can 1) split the data in train and test set, 2) train the som, 3) print the classification report that contains all the metrics to evaluate the results of the classification. ###Code from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report X_train, X_test, y_train, y_test = train_test_split(data, labels, stratify=labels) som = MiniSom(7, 7, 4, sigma=3, learning_rate=0.5, neighborhood_function='triangle', random_seed=10) som.pca_weights_init(X_train) som.train_random(X_train, 500, verbose=False) print(classification_report(y_test, classify(som, X_test))) ###Output precision recall f1-score support setosa 1.00 1.00 1.00 13 versicolor 0.92 1.00 0.96 12 virginica 1.00 0.92 0.96 13 accuracy 0.97 38 macro avg 0.97 0.97 0.97 38 weighted avg 0.98 0.97 0.97 38
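###Markdown A confusion matrix is a handy complement to the report above; a minimal sketch reusing the split, the trained SOM and the `classify` function from the previous cells:

```python
from sklearn.metrics import confusion_matrix

print(confusion_matrix(y_test, classify(som, X_test)))
```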
HypoDD/visulization.ipynb
###Markdown - Download and compile [HypoDD](https://www.ldeo.columbia.edu/~felixw/hypoDD.html)- Download test files from [Zhu et al. (2021)](https://arxiv.org/abs/2109.09008)```bashcurl -O -J -L https://osf.io/aw53b/downloadcurl -O -J -L https://osf.io/y879e/download```- Convert GaMMA catalog and run HypoDD relocation```python gamma2hypodd.py``` ###Code region_name = "Ridgecrest" xlim = (-117.8, -117.3) ylim = (35.5, 36.0) zlim = (0, 15) zlim_special = (0, 20) size = 1.0 alpha = 0.5 max_sigma = 0.6 # region_name = "PuertoRico" # # xlim = (-68, -65) # # ylim = (17, 19) # xlim = (-67.2, -66.6) # ylim = (17.75, 18.1) # zlim = (0, 25) # zlim_special = None # size = 0.5 # alpha = 0.3 # max_sigma = 0.5 # region_name = "Hawaii" # xlim = (-156.00, -154.75) # ylim = (18.9, 19.9) # zlim = (0, 40) # zlim_special = None # size = 1.0 # alpha = 0.5 # max_sigma = 1.5 # catalog_hypoinverse = pd.read_csv("catOut.sum", sep="\s+") catalog_hypoDD = pd.read_csv(f"./{region_name}/hypoDD_catalog.txt", sep="\s+", names=["ID", "LAT", "LON", "DEPTH", "X", "Y", "Z", "EX", "EY", "EZ", "YR", "MO", "DY", "HR", "MI", "SC", "MAG", "NCCP", "NCCS", "NCTP", "NCTS", "RCC", "RCT", "CID"]) catalog_hypoDD["time"] = catalog_hypoDD.apply(lambda x: f'{x["YR"]:04.0f}-{x["MO"]:02.0f}-{x["DY"]:02.0f}T{x["HR"]:02.0f}:{x["MI"]:02.0f}:{min(x["SC"], 59.999):05.3f}', axis=1) catalog_hypoDD["time"] = catalog_hypoDD["time"].apply(lambda x: datetime.strptime(x, "%Y-%m-%dT%H:%M:%S.%f")) catalog_gamma = pd.read_csv(f"./{region_name}/gamma_catalog.csv", sep="\t") catalog_gamma["time"] = catalog_gamma["time"].apply(lambda x: datetime.strptime(x, "%Y-%m-%dT%H:%M:%S.%f")) catalog_hypoDD["latitude"] = catalog_hypoDD["LAT"] catalog_hypoDD["longitude"] = catalog_hypoDD["LON"] catalog_hypoDD["depth(m)"] = catalog_hypoDD["DEPTH"] * 1e3 catalog_hypoDD["magnitude"] = catalog_hypoDD["MAG"] picks_gamma = pd.read_csv(f"./{region_name}/gamma_picks.csv", sep="\t") print(f"Number of stations: {len(set(picks_gamma['id']))}") catalog_gamma["sigma"] = catalog_gamma["covariance"].apply(lambda x: float(x.split(",")[0])) catalog_gamma_selected = catalog_gamma[catalog_gamma["sigma"] < max_sigma] c_gamma = (np.array(catalog_gamma_selected["time"]) - np.datetime64('1970-01-01T00:00:00Z')) / np.timedelta64(1, 's') c_hypodd = (np.array(catalog_hypoDD["time"]) - np.datetime64('1970-01-01T00:00:00Z')) / np.timedelta64(1, 's') t_gamma = catalog_gamma_selected["time"] t_hypodd = catalog_hypoDD["time"] import pygmt region = xlim + ylim # region = np.array(region) + np.array([-1, 1, -1, 1])*0.2 stations = pd.read_csv("stations.csv", sep="\t") fig = pygmt.Figure() fig.basemap(region=region, projection="M6i", frame=True) fig.grdimage("@earth_relief_15s", cmap="topo", shading=True) # fig.grdimage("@earth_relief_03s", cmap="sealand", shading=True) # fig.colorbar() # grid = pygmt.datasets.load_earth_relief(resolution="15s", region=region) # dgrid = pygmt.grdgradient(grid=grid, radiance=[270, 30]) # pygmt.makecpt(cmap="gray", series=[-50000, 20000, 1000], continuous=True) # fig.grdimage(grid=grid, cmap=True, shading=True) # fig.grdimage(grid=dgrid, cmap=True, shading=True) # fig.colorbar(truncate=[-4000, 4001]) # fig.plot(x=stations["longitude"], y=stations["latitude"], style="t0.5", color="blue", pen="black", label="Station") # fig.plot(x=catalog_hypoDD["longitude"], y=catalog_hypoDD["latitude"], style="c", size=1, color="black") fig.savefig(f"{region_name}/topography.pdf") fig.savefig(f"{region_name}/topography.png") fig.show() dgrid.max(), dgrid.min() plt.figure(figsize=(15, 6)) 
grid = pygmt.datasets.load_earth_relief(resolution="03s", region=region) dgrid = pygmt.grdgradient(grid=grid, radiance=[135, 25]) # dgrid = pygmt.grdgradient(grid=grid) xgrid = np.linspace(xlim[0], xlim[1], grid.shape[1]) ygrid = np.linspace(ylim[0], ylim[1], grid.shape[0]) im_ratio = (ylim[1]-ylim[0])/(xlim[1]-xlim[0]) plt.subplot(121) plt.pcolormesh(xgrid, ygrid, dgrid, shading="gouraud", cmap="gray", alpha=0.3, vmin=-1.5, rasterized=True) im = plt.scatter(catalog_gamma_selected["longitude"], catalog_gamma_selected["latitude"], s=size, c=c_gamma, alpha=alpha, marker=",", cmap=palette, linewidth=0, rasterized=True) # plt.pcolormesh(xgrid, ygrid, dgrid, shading="gouraud", cmap="gray", alpha=0.1, rasterized=True) plt.title(f"GaMMA ($\sigma$ < {max_sigma:.1f}s): {len(catalog_gamma_selected)}") plt.axis("scaled") plt.xlim(xlim) plt.ylim(ylim) plt.xlabel("Latitude") plt.ylabel("Longitude") cbar = plt.colorbar(im, fraction=0.047*im_ratio) cbar.set_ticks(np.linspace(c_gamma.min(), c_gamma.max(), 4)) cbar.ax.set_yticklabels([pd.to_datetime(x, unit='s').strftime('%b %d %Y') for x in np.linspace(t_gamma.min().timestamp(), t_gamma.max().timestamp(), 4)]) plt.subplot(122) plt.pcolormesh(xgrid, ygrid, dgrid, shading="gouraud", cmap="gray", alpha=0.3, vmin=-1.5, rasterized=True) im = plt.scatter(catalog_hypoDD["LON"], catalog_hypoDD["LAT"], s=size, c=c_hypodd, alpha=alpha, marker=",", cmap=palette, linewidth=0, rasterized=True) # plt.pcolormesh(xgrid, ygrid, dgrid, shading="gouraud", cmap="gray", alpha=0.1, rasterized=True) plt.title(f"HypoDD: {len(catalog_hypoDD)}") plt.axis("scaled") plt.xlim(xlim) plt.ylim(ylim) plt.xlabel("Latitude") plt.ylabel("Longitude") cbar = plt.colorbar(im, fraction=0.047*im_ratio) cbar.set_ticks(np.linspace(c_hypodd.min(), c_hypodd.max(), 4)) cbar.ax.set_yticklabels([pd.to_datetime(x, unit='s').strftime('%b %d %Y') for x in np.linspace(t_hypodd.min().timestamp(), t_hypodd.max().timestamp(), 4)]) plt.tight_layout() plt.savefig(f"{region_name}/GaMMA2HypoDD_latitude_vs_longitude_color_by_time.pdf", bbox_inches="tight", dpi=600) plt.savefig(f"{region_name}/GaMMA2HypoDD_latitude_vs_longitude_color_by_time.png", bbox_inches="tight", dpi=600) plt.show() plt.figure(figsize=(15, 6)) im_ratio = (ylim[1]-ylim[0])/(xlim[1]-xlim[0]) c = catalog_gamma_selected["depth(m)"].copy()/1e3 c[c<zlim[0]] = zlim[0] if zlim_special is None: c[c>zlim[1]] = zlim[1] else: c[c>zlim_special[1]] = zlim_special[1] plt.subplot(121) plt.pcolormesh(xgrid, ygrid, dgrid, shading="gouraud", cmap="gray", alpha=0.3, vmin=-1.5, rasterized=True) plt.scatter(catalog_gamma_selected["longitude"], catalog_gamma_selected["latitude"], s=size, c=c, alpha=alpha, marker=",", cmap=palette, linewidth=0, rasterized=True) plt.title(f"GaMMA ($\sigma$ < {max_sigma:.1f}s): {len(catalog_gamma_selected)}") plt.axis("scaled") plt.xlim(xlim) plt.ylim(ylim) plt.xlabel("Latitude") plt.ylabel("Longitude") plt.colorbar(label="Depth (km)", fraction=0.047*im_ratio) c = catalog_hypoDD["DEPTH"].copy() c[c<zlim[0]] = zlim[0] c[c>zlim[1]] = zlim[1] plt.subplot(122) plt.pcolormesh(xgrid, ygrid, dgrid, shading="gouraud", cmap="gray", alpha=0.3, vmin=-1.5, rasterized=True) im = plt.scatter(catalog_hypoDD["LON"], catalog_hypoDD["LAT"], s=size, c=c, alpha=alpha, marker=",", cmap=palette, linewidth=0, rasterized=True) plt.title(f"HypoDD: {len(catalog_hypoDD)}") plt.axis("scaled") plt.xlim(xlim) plt.ylim(ylim) plt.xlabel("Latitude") plt.ylabel("Longitude") plt.colorbar(label="Depth (km)", fraction=0.047*im_ratio) plt.tight_layout() 
plt.savefig(f"{region_name}/GaMMA2HypoDD_latitude_vs_longitude_color_by_depth.pdf", bbox_inches="tight", dpi=600) plt.savefig(f"{region_name}/GaMMA2HypoDD_latitude_vs_longitude_color_by_depth.png", bbox_inches="tight", dpi=600) plt.show() sns.set_theme() # plt.figure(figsize=(15, 6)) fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(15, 6)) c = catalog_gamma_selected["latitude"].copy() c[c < ylim[0]] = ylim[0] c[c > ylim[1]] = ylim[1] ax = axes[0] im = ax.scatter(catalog_gamma_selected["longitude"], catalog_gamma_selected["depth(m)"]/1e3, s=size, c=c, alpha=alpha, marker=",", cmap=palette, linewidth=0, rasterized=True) ax.set_title(f"GaMMA ($\sigma$ < {max_sigma:.1f}s): {len(catalog_gamma_selected)}") ax.set_xlim(xlim) if zlim_special is None: ax.set_ylim(zlim) else: ax.set_ylim(zlim_special) ax.set_xlabel("Longitude") ax.set_ylabel("Depth (km)") ax.invert_yaxis() fig.colorbar(im, ax=ax, label="Latitude") c = catalog_hypoDD["LAT"].copy() c[c < ylim[0]] = ylim[0] c[c > ylim[1]] = ylim[1] ax = axes[1] im = ax.scatter(catalog_hypoDD["LON"], catalog_hypoDD["DEPTH"], s=size, c=c, alpha=alpha, marker=",", cmap=palette, linewidth=0, rasterized=True) ax.set_title(f"HypoDD: {len(catalog_hypoDD)}") ax.set_xlim(xlim) ax.set_ylim(zlim) ax.set_xlabel("Longitude") ax.set_ylabel("Depth (km)") ax.invert_yaxis() fig.colorbar(im, ax=ax, label="Latitude") # fig.colorbar(im, ax=axes.ravel().tolist(), label="Latitude") plt.savefig(f"{region_name}/GaMMA2HypoDD_depth_vs_longitude_color_by_latitude.pdf", bbox_inches="tight", dpi=600) plt.savefig(f"{region_name}/GaMMA2HypoDD_depth_vs_longitude_color_by_latitude.png", bbox_inches="tight", dpi=600) plt.show() plt.figure(figsize=(15, 6)) c = catalog_gamma_selected["longitude"].copy() c[c < xlim[0]] = xlim[0] c[c > xlim[1]] = xlim[1] plt.subplot(121) plt.scatter(catalog_gamma_selected["latitude"], catalog_gamma_selected["depth(m)"]/1e3, c=c, s=size, alpha=alpha, marker=",", cmap=palette, linewidth=0, rasterized=True) plt.title(f"GaMMA ($\sigma$ < {max_sigma:.1f}s): {len(catalog_gamma_selected)}") plt.xlim(ylim) if zlim_special is None: plt.ylim(zlim) else: plt.ylim(zlim_special) plt.gca().invert_yaxis() plt.xlabel("Latitude") plt.ylabel("Depth (km)") plt.colorbar(label="Longitude") c = catalog_hypoDD["LON"].copy() c[c < xlim[0]] = xlim[0] c[c > xlim[1]] = xlim[1] plt.subplot(122) plt.scatter(catalog_hypoDD["LAT"], catalog_hypoDD["DEPTH"], s=size, c=c, alpha=alpha, marker=",", cmap=palette, linewidth=0, rasterized=True) plt.title(f"HypoDD: {len(catalog_hypoDD)}") plt.xlim(ylim) plt.ylim(zlim) plt.gca().invert_yaxis() plt.xlabel("Latitude") plt.ylabel("Depth (km)") plt.colorbar(label="Longitude") plt.savefig(f"{region_name}/GaMMA2HypoDD_depth_vs_latitude_color_by_longitude.pdf", bbox_inches="tight", dpi=600) plt.savefig(f"{region_name}/GaMMA2HypoDD_depth_vs_latitude_color_by_longitude.png", bbox_inches="tight", dpi=600) plt.show() import matplotlib.dates as mdates plt.figure(figsize=(15, 6)) # t_gamma = mdates.epoch2num(t_gamma) # t_hypodd = mdates.epoch2num(t_hypodd) # t_gamma = mdates.datetime(t_gamma) c = catalog_gamma_selected["latitude"].copy() c[c < ylim[0]] = ylim[0] c[c > ylim[1]] = ylim[1] plt.subplot(121) plt.scatter(catalog_gamma_selected["longitude"], catalog_gamma_selected["depth(m)"]/1e3, s=size, c=c_gamma, alpha=alpha, marker=",", cmap=palette, linewidth=0, rasterized=True) plt.title(f"GaMMA ($\sigma$ < {max_sigma:.1f}s): {len(catalog_gamma_selected)}") plt.xlim(xlim) if zlim_special is None: plt.ylim(zlim) else: plt.ylim(zlim_special) 
plt.gca().invert_yaxis() plt.xlabel("Longitude") plt.ylabel("Depth (km)") cbar = plt.colorbar() cbar.set_ticks(np.linspace(c_gamma.min(), c_gamma.max(), 4)) cbar.ax.set_yticklabels([pd.to_datetime(x, unit='s').strftime('%b %d %Y') for x in np.linspace(t_gamma.min().timestamp(), t_gamma.max().timestamp(), 4)]) c = catalog_hypoDD["LAT"].copy() c[c < ylim[0]] = ylim[0] c[c > ylim[1]] = ylim[1] plt.subplot(122) plt.scatter(catalog_hypoDD["LON"], catalog_hypoDD["DEPTH"], s=size, c=c_hypodd, alpha=alpha, marker=",", cmap=palette, linewidth=0, rasterized=True) plt.title(f"HypoDD: {len(catalog_hypoDD)}") plt.xlim(xlim) plt.ylim(zlim) plt.gca().invert_yaxis() plt.xlabel("Longitude") plt.ylabel("Depth (km)") cbar = plt.colorbar() cbar.set_ticks(np.linspace(c_hypodd.min(), c_hypodd.max(), 4)) cbar.ax.set_yticklabels([pd.to_datetime(x, unit='s').strftime('%b %d %Y') for x in np.linspace(t_hypodd.min().timestamp(), t_hypodd.max().timestamp(), 4)]) plt.savefig(f"{region_name}/GaMMA2HypoDD_depth_vs_longitude_color_by_time.pdf", bbox_inches="tight", dpi=600) plt.savefig(f"{region_name}/GaMMA2HypoDD_depth_vs_longitude_color_by_time.png", bbox_inches="tight", dpi=600) plt.show() plt.figure(figsize=(15, 6)) c = catalog_gamma_selected["longitude"].copy() c[c < xlim[0]] = xlim[0] c[c > xlim[1]] = xlim[1] plt.subplot(121) plt.scatter(catalog_gamma_selected["latitude"], catalog_gamma_selected["depth(m)"]/1e3, c=c_gamma, s=size, alpha=alpha, marker=",", cmap=palette, linewidth=0, rasterized=True) plt.title(f"GaMMA ($\sigma$ < {max_sigma:.1f}s): {len(catalog_gamma_selected)}") plt.xlim(ylim) if zlim_special is None: plt.ylim(zlim) else: plt.ylim(zlim_special) plt.gca().invert_yaxis() cbar = plt.colorbar() cbar.set_ticks(np.linspace(c_gamma.min(), c_gamma.max(), 4)) cbar.ax.set_yticklabels([pd.to_datetime(x, unit='s').strftime('%b %d %Y') for x in np.linspace(t_gamma.min().timestamp(), t_gamma.max().timestamp(), 4)]) c = catalog_hypoDD["LON"].copy() c[c < xlim[0]] = xlim[0] c[c > xlim[1]] = xlim[1] plt.subplot(122) plt.scatter(catalog_hypoDD["LAT"], catalog_hypoDD["DEPTH"], s=size, c=c_hypodd, alpha=alpha, marker=",", cmap=palette, linewidth=0, rasterized=True) plt.title(f"HypoDD: {len(catalog_hypoDD)}") plt.xlim(ylim) plt.ylim(zlim) plt.gca().invert_yaxis() cbar = plt.colorbar() cbar.set_ticks(np.linspace(c_hypodd.min(), c_hypodd.max(), 4)) cbar.ax.set_yticklabels([pd.to_datetime(x, unit='s').strftime('%b %d %Y') for x in np.linspace(t_hypodd.min().timestamp(), t_hypodd.max().timestamp(), 4)]) plt.savefig(f"{region_name}/GaMMA2HypoDD_depth_vs_latitude_color_by_time.pdf", bbox_inches="tight", dpi=600) plt.savefig(f"{region_name}/GaMMA2HypoDD_depth_vs_latitude_color_by_time.png", bbox_inches="tight", dpi=600) plt.show() ###Output _____no_output_____
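###Markdown A small numerical summary can accompany the figures above; this sketch only reuses objects already defined in this notebook (`catalog_gamma_selected`, `catalog_hypoDD`, `max_sigma`):

```python
# Compare the two catalogs with a few headline numbers.
print(f"GaMMA events (sigma < {max_sigma:.1f} s): {len(catalog_gamma_selected)}")
print(f"HypoDD events: {len(catalog_hypoDD)}")
print(f"GaMMA median depth (km): {(catalog_gamma_selected['depth(m)'] / 1e3).median():.2f}")
print(f"HypoDD median depth (km): {catalog_hypoDD['DEPTH'].median():.2f}")
```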
Taylor_Diagrams.ipynb
###Markdown test ###Code #Dry Season r = [1,0.628639686,0.633058677,0.59947913,0.515011574,0.646316167,0.606872309,0.539420422,0.652494695,0.605709943,0.390130376,0.331499019,0.420672395] sd = [12.82,14.53837243,14.38700013,15.58594709,11.7015734,11.26121453,11.60097358,15.09805752,15.04336632,16.25893186,11.82638154,11.58799636,10.8307417] rsm = [0,15.69,14.91,17.55,11.65,9.97,10.51,18.30,16.97,19.82,13.24,13.56,11.97] r = [1,0.628639686,0.633058677,0.59947913,0.515011574,0.646316167,0.606872309,0.539420422,0.652494695,0.605709943,0.390130376,0.331499019,0.420672395] sd = [12.82,14.53837243,14.38700013,15.58594709,11.7015734,11.26121453,11.60097358,15.09805752,15.04336632,16.25893186,11.82638154,11.58799636,10.8307417] rsm = [0,15.69,14.91,17.55,11.65,9.97,10.51,18.30,16.97,19.82,13.24,13.56,11.97] rcParams["figure.figsize"] = [8.0, 6.4] rcParams['lines.linewidth'] = 1 # line width for plots rcParams.update({'font.size': 10}) # font size of axes text # Close any previously open graphics windows # ToDo: fails to work within Eclipse plt.close('all') ccoef = np.array(r) sdev = np.array(sd) crmsd = np.array(rsm) label = (['Obs','A','B','C','D','E','F','G','H','I','J','K','L']) # sm.taylor_diagram(sdev,crmsd,ccoef, styleOBS = '-', colOBS = 'r', markerobs = 'o',titleOBS = 'observation') sm.taylor_diagram(sdev,crmsd,ccoef, markerLabel = para, markerLabelColor = 'r',markerLegend = 'on', markerColor = 'r', styleOBS = '--', colOBS = 'black', markerobs = 'o', markerSize = 5, tickRMSangle = 120, showlabelsRMS = 'on', titleRMS = 'on', titleOBS = 'Ref', checkstats = 'on') #Dry Season r = [1,0.628639686,0.633058677,0.59947913,0.515011574,0.646316167,0.606872309,0.539420422,0.652494695,0.605709943,0.390130376,0.331499019,0.420672395] sd = [12.82,14.53837243,14.38700013,15.58594709,11.7015734,11.26121453,11.60097358,15.09805752,15.04336632,16.25893186,11.82638154,11.58799636,10.8307417] rsm = [0,15.69,14.91,17.55,11.65,9.97,10.51,18.30,16.97,19.82,13.24,13.56,11.97] rcParams["figure.figsize"] = [8.0, 6.4] rcParams['lines.linewidth'] = 1 # line width for plots rcParams.update({'font.size': 10}) # font size of axes text # Close any previously open graphics windows # ToDo: fails to work within Eclipse plt.close('all') ccoef = np.array(r) sdev = np.array(sd) crmsd = np.array(rsm) label = (['Obs','A','B','C','D','E','F','G','H','I','J','K','L']) # sm.taylor_diagram(sdev,crmsd,ccoef, styleOBS = '-', colOBS = 'r', markerobs = 'o',titleOBS = 'observation') sm.taylor_diagram(sdev,crmsd,ccoef, markerLabel = label, markerLabelColor = 'r',markerLegend = 'on', markerColor = 'r', styleOBS = '--', colOBS = 'black', markerobs = 'o', markerSize = 5, tickRMSangle = 120, showlabelsRMS = 'on', titleRMS = 'on', titleOBS = 'Ref', checkstats = 'on') plt.savefig('taylor_Wettesttt_average.png')plt.savefig('taylor_Wet_average.png') # sm.taylor_diagram() #Dry Season r = [1,0.628639686,0.633058677,0.59947913,0.515011574,0.646316167,0.606872309,0.539420422,0.652494695,0.605709943,0.390130376,0.331499019,0.420672395] sd = [12.82,14.53837243,14.38700013,15.58594709,11.7015734,11.26121453,11.60097358,15.09805752,15.04336632,16.25893186,11.82638154,11.58799636,10.8307417] rsm = [0,15.69,14.91,17.55,11.65,9.97,10.51,18.30,16.97,19.82,13.24,13.56,11.97] r2 = [1,0.63,0.63,0.60,0.52,0.65,0.61,0.54,0.65,0.61,0.39,0.33,0.42] sd2 = [12.88,14.30,14.41,15.95,10.46,9.39,10.23,14.53,14.50,15.90,10.64,10.12,8.40] rsm2 = [0,15.69,14.91,17.55,11.65,9.97,10.51,18.30,16.97,19.82,13.24,13.56,11.97] rcParams["figure.figsize"] = [4.5, 3.2] 
rcParams['lines.linewidth'] = 1 # line width for plots rcParams.update({'font.size': 7}) # font size of axes text # Close any previously open graphics windows # ToDo: fails to work within Eclipse plt.close('all') ccoef = np.array(r) sdev = np.array(sd) crmsd = np.array(rsm) ccoef2 = np.array(r2) sdev2 = np.array(sd2) crmsd2 = np.array(rsm2) label = (['Obs','A','B','C','D','E','F','G','H','I','J','K','L']) # sm.taylor_diagram(sdev,crmsd,ccoef, styleOBS = '-', colOBS = 'r', markerobs = 'o',titleOBS = 'observation') a1 = sm.taylor_diagram(sdev,crmsd,ccoef, markerLabel = para, markerSize = 3, markerLegend = 'on', styleOBS = '--', colOBS = 'red', markerobs = 'o',titleOBS = 'Ref', colRMS='g', styleRMS=':', widthRMS=2.0, titleRMS='on', colSTD='b', styleSTD='-.', widthSTD=1.0, titleSTD ='on', colCOR='k', styleCOR='--', widthCOR=1.0, titleCOR='on') # a2 = sm.taylor_diagram(sdev2,crmsd2,ccoef2, markerLabel = label, # markerSize = 6, markerLegend = 'on', # styleOBS = '--', colOBS = 'red', markerobs = 'o',titleOBS = 'Ref', # colRMS='g', styleRMS=':', widthRMS=2.0, titleRMS='on', # colSTD='b', styleSTD='-.', widthSTD=1.0, titleSTD ='on', # colCOR='k', styleCOR='--', widthCOR=1.0, titleCOR='on') ###Output _____no_output_____ ###Markdown Wet Season ###Code #Wet Season r = [1,0.63,0.63,0.60,0.52,0.65,0.61,0.54,0.65,0.61,0.39,0.33,0.42] sd = [12.88,14.30,14.41,15.95,10.46,9.39,10.23,14.53,14.50,15.90,10.64,10.12,8.40] rsm = [0,15.69,14.91,17.55,11.65,9.97,10.51,18.30,16.97,19.82,13.24,13.56,11.97] rcParams["figure.figsize"] = [5, 3] rcParams['lines.linewidth'] = 0.2 # line width for plots rcParams.update({'font.size': 5}) # font size of axes text # Close any previously open graphics windows # ToDo: fails to work within Eclipse plt.close('all') ccoef = np.array(r) sdev = np.array(sd) crmsd = np.array(rsm) label = (['Obs','A','B','C','D','E','F','G','H','I','J','K','L']) # sm.taylor_diagram(sdev,crmsd,ccoef, styleOBS = '-', colOBS = 'r', markerobs = 'o',titleOBS = 'observation') sm.taylor_diagram(sdev,crmsd,ccoef, markerLabel = para, markerSize = 2, markerLegend = 'on', styleOBS = '--', colOBS = 'red', markerobs = 'o',titleOBS = 'Ref', colRMS='g', styleRMS=':', widthRMS=0.5, titleRMS='on', colSTD='b', styleSTD='-.', widthSTD=0.5, titleSTD ='on', colCOR='k', styleCOR='--', widthCOR=0.5, titleCOR='on') plt.savefig('taylor_Wet_average-testt.png', dpi = 200, facecolor = 'w') plt.show() ###Output _____no_output_____ ###Markdown Dry Season ###Code #Dry Season r = [1,0.76,0.74,0.72,0.75,0.85,0.84,0.71,0.75,0.75,0.80,0.83,0.78] sd = [2.34,3.97,3.88,3.73,3.42,3.73,3.30,3.70,3.76,3.96,3.12,3.25,3.03] rsm = [0,2.75,2.76,2.67,2.26,2.16,1.87,2.69,2.59,2.74,1.90,1.86,1.89] rcParams["figure.figsize"] = [9.0, 6.4] rcParams['lines.linewidth'] = 1 # line width for plots rcParams.update({'font.size': 10}) # font size of axes text # Close any previously open graphics windows # ToDo: fails to work within Eclipse plt.close('all') ccoef = np.array(r) sdev = np.array(sd) crmsd = np.array(rsm) label = (['Obs','A','B','C','D','E','F','G','H','I','J','K','L']) # sm.taylor_diagram(sdev,crmsd,ccoef, styleOBS = '-', colOBS = 'r', markerobs = 'o',titleOBS = 'observation') sm.taylor_diagram(sdev,crmsd,ccoef, markerLabel = para, markerSize = 6, markerLegend = 'on', styleOBS = '--', colOBS = 'red', markerobs = 'o',titleOBS = 'Ref', colRMS='g', styleRMS=':', widthRMS=2.0, titleRMS='on', colSTD='b', styleSTD='-.', widthSTD=1.0, titleSTD ='on', colCOR='k', styleCOR='--', widthCOR=1.0, titleCOR='on') plt.savefig('taylor_Dry_average.png', dpi = 
150, facecolor = 'w') plt.show() # Wet_North r = [1,0.39,0.41,0.35,0.41,0.38,0.45,0.35,0.47,0.37,0.28,0.27,0.28] sd = [12.80,16.13,17.10,18.23,13.33,13.41,11.33,16.50,14.16,16.71,15.04,13.21,10.83] rsm = [0,18.93,9.11,22.10,15.33,16.41,13.99,20.28,17.63,21.33,18.09,16.88,14.56] rcParams["figure.figsize"] = [9.0, 6.4] rcParams['lines.linewidth'] = 1 # line width for plots rcParams.update({'font.size': 10}) # font size of axes text # Close any previously open graphics windows # ToDo: fails to work within Eclipse plt.close('all') ccoef = np.array(r) sdev = np.array(sd) crmsd = np.array(rsm) label = (['Obs','A','B','C','D','E','F','G','H','I','J','K','L']) # sm.taylor_diagram(sdev,crmsd,ccoef, styleOBS = '-', colOBS = 'r', markerobs = 'o',titleOBS = 'observation') sm.taylor_diagram(sdev,crmsd,ccoef, markerLabel = para, markerSize = 6, markerLegend = 'on', styleOBS = '--', colOBS = 'red', markerobs = 'o',titleOBS = 'Ref', colRMS='g', styleRMS=':', widthRMS=2.0, titleRMS='on', colSTD='b', styleSTD='-.', widthSTD=1.0, titleSTD ='on', colCOR='k', styleCOR='--', widthCOR=1.0, titleCOR='on') plt.savefig('taylor_NorthWet_average.png', dpi = 150, facecolor = 'w') plt.show() # Wet_Center r = [1,0.51,0.49,0.47,0.40,0.57,0.47,0.44,0.48,0.35,0.44,0.34,0.31] sd = [12.80,16.13,17.10,18.23,13.33,13.41,11.33,16.50,14.16,16.71,15.04,13.21,10.83] rsm = [0,21.02,21.01,22.96,17.19,14.52,16.47,25.26,23.25,29.95,16.39,17.88,16.52] rcParams["figure.figsize"] = [9.0, 6.4] rcParams['lines.linewidth'] = 1 # line width for plots rcParams.update({'font.size': 10}) # font size of axes text # Close any previously open graphics windows # ToDo: fails to work within Eclipse plt.close('all') ccoef = np.array(r) sdev = np.array(sd) crmsd = np.array(rsm) label = (['Obs','A','B','C','D','E','F','G','H','I','J','K','L']) # sm.taylor_diagram(sdev,crmsd,ccoef, styleOBS = '-', colOBS = 'r', markerobs = 'o',titleOBS = 'observation') sm.taylor_diagram(sdev,crmsd,ccoef, markerLabel = para, markerSize = 6, markerLegend = 'on', styleOBS = '--', colOBS = 'red', markerobs = 'o',titleOBS = 'Ref', colRMS='g', styleRMS=':', widthRMS=2.0, titleRMS='on', colSTD='b', styleSTD='-.', widthSTD=1.0, titleSTD ='on', colCOR='k', styleCOR='--', widthCOR=1.0, titleCOR='on') plt.savefig('taylor_CenterWet_average.png', dpi = 150, facecolor = 'w') plt.show() # Wet_South r = [1,0.64,0.53,0.49,0.21,0.30,0.20,0.40,0.57,0.53,0.18,0.22,0.33] sd = [30.13,26.31,25.25,25.76,18.36,18.81,15.19,22.77,24.35,23.05,14.84,15.91,19.14] rsm = [0,26.01,27.77,30.33,32.75,31.49,32.24,31.00,28.45,28.82,31.42,31.19,30.37] rcParams["figure.figsize"] = [9.0, 6.4] rcParams['lines.linewidth'] = 1 # line width for plots rcParams.update({'font.size': 10}) # font size of axes text # Close any previously open graphics windows # ToDo: fails to work within Eclipse plt.close('all') ccoef = np.array(r) sdev = np.array(sd) crmsd = np.array(rsm) label = (['Obs','A','B','C','D','E','F','G','H','I','J','K','L']) # sm.taylor_diagram(sdev,crmsd,ccoef, styleOBS = '-', colOBS = 'r', markerobs = 'o',titleOBS = 'observation') sm.taylor_diagram(sdev,crmsd,ccoef, markerLabel = para, markerSize = 6, markerLegend = 'on', styleOBS = '--', colOBS = 'red', markerobs = 'o',titleOBS = 'Ref', colRMS='g', styleRMS=':', widthRMS=2.0, titleRMS='on', colSTD='b', styleSTD='-.', widthSTD=1.0, titleSTD ='on', colCOR='k', styleCOR='--', widthCOR=1.0, titleCOR='on') plt.savefig('taylor_SouthWet_average.png', dpi = 150, facecolor = 'w') plt.show() # Dry_North r = 
[1,0.81,0.79,0.73,0.70,0.85,0.87,0.75,0.80,0.79,0.84,0.86,0.80] sd = [4.99,8.14,7.96,7.49,5.31,6.69,6.60,7.64,7.78,7.25,6.67,7.02,6.31] rsm = [0,5.27,5.34,5.25,3.98,3.69,3.44,5.30,5.02,4.64,3.81,3.89,3.85] rcParams["figure.figsize"] = [9.0, 6.4] rcParams['lines.linewidth'] = 1 # line width for plots rcParams.update({'font.size': 10}) # font size of axes text # Close any previously open graphics windows # ToDo: fails to work within Eclipse plt.close('all') ccoef = np.array(r) sdev = np.array(sd) crmsd = np.array(rsm) label = (['Obs','A','B','C','D','E','F','G','H','I','J','K','L']) # sm.taylor_diagram(sdev,crmsd,ccoef, styleOBS = '-', colOBS = 'r', markerobs = 'o',titleOBS = 'observation') sm.taylor_diagram(sdev,crmsd,ccoef, markerLabel = para, markerSize = 6, markerLegend = 'on', styleOBS = '--', colOBS = 'red', markerobs = 'o',titleOBS = 'Ref', colRMS='g', styleRMS=':', widthRMS=2.0, titleRMS='on', colSTD='b', styleSTD='-.', widthSTD=1.0, titleSTD ='on', colCOR='k', styleCOR='--', widthCOR=1.0, titleCOR='on') plt.savefig('taylor_NorthDry_average.png', dpi = 150, facecolor = 'w') plt.show() # Dry_Center r = [1,0.86,0.85,0.86,0.79,0.79,0.75,0.80,0.81,0.87,0.75,0.72,0.76] sd = [1.66,3.63,3.62,3.61,4.34,4.44,2.65,3.17,3.20,4.42,1.93,2.00,1.93] rsm = [0,2.41,2.44,2.38,3.20,3.31,1.79,2.15,2.14,3.15,1.29,1.41,1.27] rcParams["figure.figsize"] = [9.0, 6.4] rcParams['lines.linewidth'] = 1 # line width for plots rcParams.update({'font.size': 10}) # font size of axes text # Close any previously open graphics windows # ToDo: fails to work within Eclipse plt.close('all') ccoef = np.array(r) sdev = np.array(sd) crmsd = np.array(rsm) label = (['Obs','A','B','C','D','E','F','G','H','I','J','K','L']) # sm.taylor_diagram(sdev,crmsd,ccoef, styleOBS = '-', colOBS = 'r', markerobs = 'o',titleOBS = 'observation') sm.taylor_diagram(sdev,crmsd,ccoef, markerLabel = para, markerSize = 6, markerLegend = 'on', styleOBS = '--', colOBS = 'red', markerobs = 'o',titleOBS = 'Ref', colRMS='g', styleRMS=':', widthRMS=2.0, titleRMS='on', colSTD='b', styleSTD='-.', widthSTD=1.0, titleSTD ='on', colCOR='k', styleCOR='--', widthCOR=1.0, titleCOR='on') plt.savefig('taylor_CenterDry_average.png', dpi = 150, facecolor = 'w') plt.show() # Dry_South r = [1,0.03,0.04,0.03,0.03,0.01,0.02,0.03,0.02,0.03,0.04,0.07,0.03] sd = [3.08,2.19,2.94,1.91,1.36,1.30,1.45,2.26,1.76,1.77,1.77,1.26,1.79] rsm = [0,3.70,4.14,3.55,3.32,3.32,3.37,3.74,3.50,3.49,3.47,3.24,3.49] rcParams["figure.figsize"] = [7, 6.4] rcParams["figure.titlesize"] = 10 rcParams['lines.linewidth'] = 1 # line width for plots rcParams.update({'font.size': 10}) # font size of axes text # Close any previously open graphics windows # ToDo: fails to work within Eclipse plt.close('all') ccoef = np.array(r) sdev = np.array(sd) crmsd = np.array(rsm) label = (['Obs','A','B','C','D','E','F','G','H','I','J','K','L']) # sm.taylor_diagram(sdev,crmsd,ccoef, styleOBS = '-', colOBS = 'r', markerobs = 'o',titleOBS = 'observation') sm.taylor_diagram(sdev,crmsd,ccoef, markerLabel = para, markerSize = 6, markerLegend = 'on', styleOBS = '--', colOBS = 'red', markerobs = 'o',titleOBS = 'Ref', colRMS='g', styleRMS=':', widthRMS=2.0, titleRMS='on', colSTD='b', styleSTD='-.', widthSTD=1.0, titleSTD ='on', colCOR='k', styleCOR='--', widthCOR=1.0, titleCOR='on') # plt.title('Dry Season-South') # plt.savefig('SouthDry_average.png', dpi = 150, facecolor = 'w') # plt.show() ###Output _____no_output_____
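###Markdown The cells above repeat the same ~20-line `sm.taylor_diagram` call for every season/region, changing only the `r`, `sd` and `rsm` inputs. The sketch below is a hedged helper, not part of the original notebook: `plot_taylor` is a name I introduce, and it assumes `sm`, `plt`, `np`, `rcParams` and the marker labels `para` are already defined in earlier cells (skill_metrics expects the standard deviations, centred RMS differences and correlation coefficients in that order). ###Code
# Hypothetical helper (not in the original notebook): wraps the repeated
# sm.taylor_diagram(...) call so each season/region only supplies r, sd, rsm.
def plot_taylor(r, sd, rsm, outfile, marker_size=6, line_width=1.0, font_size=10):
    rcParams["figure.figsize"] = [9.0, 6.4]
    rcParams['lines.linewidth'] = line_width   # line width for plots
    rcParams.update({'font.size': font_size})  # font size of axes text
    plt.close('all')
    ccoef, sdev, crmsd = np.array(r), np.array(sd), np.array(rsm)
    sm.taylor_diagram(sdev, crmsd, ccoef, markerLabel=para,
                      markerSize=marker_size, markerLegend='on',
                      styleOBS='--', colOBS='red', markerobs='o', titleOBS='Ref',
                      colRMS='g', styleRMS=':', widthRMS=2.0, titleRMS='on',
                      colSTD='b', styleSTD='-.', widthSTD=1.0, titleSTD='on',
                      colCOR='k', styleCOR='--', widthCOR=1.0, titleCOR='on')
    plt.savefig(outfile, dpi=150, facecolor='w')
    plt.show()

# Example: the Dry Season panel above becomes a single call.
# r = [1,0.76,0.74,0.72,0.75,0.85,0.84,0.71,0.75,0.75,0.80,0.83,0.78]
# sd = [2.34,3.97,3.88,3.73,3.42,3.73,3.30,3.70,3.76,3.96,3.12,3.25,3.03]
# rsm = [0,2.75,2.76,2.67,2.26,2.16,1.87,2.69,2.59,2.74,1.90,1.86,1.89]
# plot_taylor(r, sd, rsm, 'taylor_Dry_average.png')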
tutorials/background_and_color_space.ipynb
###Markdown Redner's rendering functions can also output alpha channel. You can use this to composite rendered images with arbitrary background images. ###Code !pip install --upgrade redner-gpu import torch import pyredner ###Output _____no_output_____ ###Markdown We will download a famous test image in signal processing literature from the [SIPI image database](http://sipi.usc.edu/database/database.php?volume=misc&image=10top). An important thing to keep in mind is that alpha blending is only *correct* in a [linear color space](https://www.kinematicsoup.com/news/2016/6/15/gamma-and-linear-space-what-they-are-how-they-differ). Natural 8-bit images you download from the internet is usually gamma compressed. Redner's `imread` function automatically converts the image to linear space (assuming gamma=2.2), so when displaying them you want to convert them back to the gamma compressed space. ###Code import urllib filedata = urllib.request.urlretrieve('http://sipi.usc.edu/database/download.php?vol=misc&img=4.2.03', 'mandrill.tiff') background = pyredner.imread('mandrill.tiff') # Visualize background from matplotlib.pyplot import imshow %matplotlib inline # Redner's imread automatically gamma decompress the image to linear space. # You'll have to compress it back to sRGB space for display. imshow(torch.pow(background, 1.0/2.2)) # Convert background to current device background = background.to(pyredner.get_device()) ###Output _____no_output_____ ###Markdown This time we'll use a simpler geometry -- a sphere. We can use generate_sphere to procedurally generate the triangle mesh geometry of a sphere. ###Code # The steps arguments decide how many triangles are used to represent the sphere. vertices, indices, uvs, normals = pyredner.generate_sphere(theta_steps = 64, phi_steps = 128) m = pyredner.Material(diffuse_reflectance = torch.tensor([0.5, 0.5, 0.5], device = pyredner.get_device())) sphere = pyredner.Object(vertices = vertices, indices = indices, uvs = uvs, normals = normals, material = m) cam = pyredner.automatic_camera_placement(shapes=[sphere], resolution=(background.shape[0], background.shape[1])) scene = pyredner.Scene(camera=cam, objects=[sphere]) lights = [pyredner.PointLight(cam.position.to(pyredner.get_device()), torch.tensor([10.0, 10.0, 10.0], device = pyredner.get_device()))] img = pyredner.render_deferred(scene=scene, lights=lights, alpha=True) imshow(torch.pow(img, 1.0/2.2).cpu()) ###Output Scene construction, time: 0.00518 s Forward pass, time: 0.07917 s ###Markdown Note that we set `alpha=True`. `img` is now a 4-channels image. ###Code print(img.shape) ###Output torch.Size([512, 512, 4]) ###Markdown We can alpha blend the image and the background like the following. ###Code alpha = img[:, :, 3:4] blend_img = img[:, :, :3] * alpha + background * (1 - alpha) imshow(torch.pow(blend_img, 1.0/2.2).cpu()) ###Output _____no_output_____
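###Markdown As a small aside, the gamma handling described above can be made explicit. The sketch below is hedged and is not redner API code: it only assumes the gamma = 2.2 convention stated in this tutorial, and the function names are mine. ###Code
import torch

GAMMA = 2.2  # assumption stated above: redner's imread decodes with gamma 2.2

def to_linear(gamma_compressed):
    # 8-bit images downloaded from the web are usually gamma-compressed;
    # decode them to linear values before any blending or lighting math.
    return torch.pow(gamma_compressed.clamp(min=0.0), GAMMA)

def to_display(linear):
    # Encode back to gamma space only at the very end, for display or saving.
    return torch.pow(linear.clamp(min=0.0), 1.0 / GAMMA)

# Compositing happens on linear values, exactly as in the cell above:
#   composite = foreground_linear * alpha + background_linear * (1 - alpha)
# and only the final composite is passed through to_display() / pow(..., 1/2.2).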
Keras/Mnist_ch1.3.2.ipynb
###Markdown Define a simple neural network with Keras ###Code from __future__ import print_function import numpy as np from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Activation from keras.optimizers import SGD from keras.utils import np_utils np.random.seed(1671) ###Output Using Theano backend. WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions. ###Markdown Define the network and training ###Code NB_EPOCH = 200 BATCH_SIZE = 128 VERBOSE = 1 NB_CLASSES = 10 # number of outputs = number of digit classes OPTIMIZER = SGD() N_HIDDEN = 128 VALIDATION_SPLIT = 0.2 # fraction of the training data held out for validation # Data: shuffle and split into training and test sets (X_train, y_train), (X_test, y_test) = mnist.load_data() RESHAPED = 28 * 28 X_train = X_train.reshape(60000, RESHAPED).astype('float32') X_test = X_test.reshape(10000, RESHAPED).astype('float32') # Normalize the inputs X_train /= 255 X_test /= 255 print(X_train.shape[0], 'train samples') print(X_test.shape[0], 'test samples') y_train = np_utils.to_categorical(y_train, NB_CLASSES) y_test = np_utils.to_categorical(y_test, NB_CLASSES) model = Sequential() model.add(Dense(NB_CLASSES, input_shape=(RESHAPED, ))) model.add(Activation('softmax')) model.summary() # Compile the model model.compile(optimizer=OPTIMIZER, loss='categorical_crossentropy', metrics=['accuracy']) # Start training history = model.fit(X_train, y_train, \ batch_size=BATCH_SIZE, epochs=NB_EPOCH, \ verbose=VERBOSE, validation_split=VALIDATION_SPLIT) score = model.evaluate(X_test, y_test, verbose=VERBOSE) print('Test score: ', score[0]) print('Test accuracy: ', score[1]) ###Output 60000 train samples 10000 test samples Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_1 (Dense) (None, 10) 7850 _________________________________________________________________ activation_1 (Activation) (None, 10) 0 ================================================================= Total params: 7,850 Trainable params: 7,850 Non-trainable params: 0 _________________________________________________________________ Train on 48000 samples, validate on 12000 samples Epoch 1/200 48000/48000 [==============================] - 1s 26us/step - loss: 1.3782 - accuracy: 0.6575 - val_loss: 0.8976 - val_accuracy: 0.8175 Epoch 2/200 48000/48000 [==============================] - 1s 12us/step - loss: 0.7963 - accuracy: 0.8215 - val_loss: 0.6600 - val_accuracy: 0.8518 Epoch 3/200 48000/48000 [==============================] - 1s 12us/step - loss: 0.6462 - accuracy: 0.8465 - val_loss: 0.5638 - val_accuracy: 0.8664 Epoch 4/200 48000/48000 [==============================] - 1s 12us/step - loss: 0.5733 - accuracy: 0.8586 - val_loss: 0.5105 - val_accuracy: 0.8758 Epoch 5/200 48000/48000 [==============================] - 1s 12us/step - loss: 0.5288 - accuracy: 0.8665 - val_loss: 0.4761 - val_accuracy: 0.8810 Epoch 6/200 48000/48000 [==============================] - 1s 18us/step - loss: 0.4982 - accuracy: 0.8719 - val_loss: 0.4516 - val_accuracy: 0.8849 Epoch 7/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.4756 - accuracy: 0.8759 - val_loss: 0.4334 - val_accuracy: 0.8876 Epoch 8/200 48000/48000 [==============================] - 1s 13us/step - loss: 0.4581 - accuracy: 0.8793 - val_loss: 0.4188 - val_accuracy: 0.8903 Epoch 9/200 48000/48000 [==============================] - 1s 13us/step - loss: 0.4439 - accuracy: 0.8819 - val_loss: 0.4073 - val_accuracy: 0.8937 Epoch 10/200 48000/48000 [==============================] - 1s 13us/step - loss: 0.4322 -
accuracy: 0.8844 - val_loss: 0.3975 - val_accuracy: 0.8952 Epoch 11/200 48000/48000 [==============================] - 1s 14us/step - loss: 0.4223 - accuracy: 0.8865 - val_loss: 0.3894 - val_accuracy: 0.8969 Epoch 12/200 48000/48000 [==============================] - 1s 15us/step - loss: 0.4138 - accuracy: 0.8883 - val_loss: 0.3824 - val_accuracy: 0.8977 Epoch 13/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.4064 - accuracy: 0.8897 - val_loss: 0.3763 - val_accuracy: 0.8991 Epoch 14/200 48000/48000 [==============================] - 1s 18us/step - loss: 0.3999 - accuracy: 0.8914 - val_loss: 0.3709 - val_accuracy: 0.9005 Epoch 15/200 48000/48000 [==============================] - 1s 14us/step - loss: 0.3941 - accuracy: 0.8923 - val_loss: 0.3660 - val_accuracy: 0.9020 Epoch 16/200 48000/48000 [==============================] - 1s 14us/step - loss: 0.3888 - accuracy: 0.8936 - val_loss: 0.3617 - val_accuracy: 0.9029 Epoch 17/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.3841 - accuracy: 0.8943 - val_loss: 0.3579 - val_accuracy: 0.9032 Epoch 18/200 48000/48000 [==============================] - 1s 13us/step - loss: 0.3798 - accuracy: 0.8950 - val_loss: 0.3542 - val_accuracy: 0.9051 Epoch 19/200 48000/48000 [==============================] - 1s 13us/step - loss: 0.3759 - accuracy: 0.8960 - val_loss: 0.3510 - val_accuracy: 0.9047 Epoch 20/200 48000/48000 [==============================] - 1s 14us/step - loss: 0.3722 - accuracy: 0.8970 - val_loss: 0.3481 - val_accuracy: 0.9057 Epoch 21/200 48000/48000 [==============================] - 1s 15us/step - loss: 0.3688 - accuracy: 0.8976 - val_loss: 0.3453 - val_accuracy: 0.9064 Epoch 22/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.3657 - accuracy: 0.8988 - val_loss: 0.3427 - val_accuracy: 0.9071 Epoch 23/200 48000/48000 [==============================] - 1s 16us/step - loss: 0.3628 - accuracy: 0.8990 - val_loss: 0.3403 - val_accuracy: 0.9068 Epoch 24/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.3601 - accuracy: 0.8996 - val_loss: 0.3381 - val_accuracy: 0.9076 Epoch 25/200 48000/48000 [==============================] - 1s 15us/step - loss: 0.3575 - accuracy: 0.9007 - val_loss: 0.3360 - val_accuracy: 0.9086 Epoch 26/200 48000/48000 [==============================] - 1s 16us/step - loss: 0.3552 - accuracy: 0.9015 - val_loss: 0.3340 - val_accuracy: 0.9087 Epoch 27/200 48000/48000 [==============================] - 1s 18us/step - loss: 0.3528 - accuracy: 0.9021 - val_loss: 0.3321 - val_accuracy: 0.9089 Epoch 28/200 48000/48000 [==============================] - 1s 18us/step - loss: 0.3507 - accuracy: 0.9030 - val_loss: 0.3306 - val_accuracy: 0.9099 Epoch 29/200 48000/48000 [==============================] - 1s 14us/step - loss: 0.3488 - accuracy: 0.9029 - val_loss: 0.3289 - val_accuracy: 0.9104 Epoch 30/200 48000/48000 [==============================] - 1s 14us/step - loss: 0.3468 - accuracy: 0.9040 - val_loss: 0.3272 - val_accuracy: 0.9108 Epoch 31/200 48000/48000 [==============================] - 1s 15us/step - loss: 0.3450 - accuracy: 0.9043 - val_loss: 0.3259 - val_accuracy: 0.9105 Epoch 32/200 48000/48000 [==============================] - 1s 15us/step - loss: 0.3433 - accuracy: 0.9047 - val_loss: 0.3244 - val_accuracy: 0.9112 Epoch 33/200 48000/48000 [==============================] - 1s 15us/step - loss: 0.3416 - accuracy: 0.9051 - val_loss: 0.3231 - val_accuracy: 0.9121 Epoch 34/200 48000/48000 [==============================] - 1s 
15us/step - loss: 0.3400 - accuracy: 0.9054 - val_loss: 0.3218 - val_accuracy: 0.9120 Epoch 35/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.3385 - accuracy: 0.9056 - val_loss: 0.3206 - val_accuracy: 0.9125 Epoch 36/200 48000/48000 [==============================] - 1s 16us/step - loss: 0.3371 - accuracy: 0.9062 - val_loss: 0.3194 - val_accuracy: 0.9127 Epoch 37/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.3356 - accuracy: 0.9068 - val_loss: 0.3182 - val_accuracy: 0.9124 Epoch 38/200 48000/48000 [==============================] - 1s 13us/step - loss: 0.3343 - accuracy: 0.9070 - val_loss: 0.3172 - val_accuracy: 0.9133 Epoch 39/200 48000/48000 [==============================] - 1s 15us/step - loss: 0.3330 - accuracy: 0.9073 - val_loss: 0.3161 - val_accuracy: 0.9136 Epoch 40/200 48000/48000 [==============================] - 1s 14us/step - loss: 0.3317 - accuracy: 0.9079 - val_loss: 0.3154 - val_accuracy: 0.9130 Epoch 41/200 48000/48000 [==============================] - 1s 13us/step - loss: 0.3306 - accuracy: 0.9078 - val_loss: 0.3143 - val_accuracy: 0.9140 Epoch 42/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.3294 - accuracy: 0.9083 - val_loss: 0.3133 - val_accuracy: 0.9147 Epoch 43/200 48000/48000 [==============================] - 1s 14us/step - loss: 0.3283 - accuracy: 0.9089 - val_loss: 0.3125 - val_accuracy: 0.9143 Epoch 44/200 48000/48000 [==============================] - 1s 14us/step - loss: 0.3273 - accuracy: 0.9089 - val_loss: 0.3116 - val_accuracy: 0.9138 Epoch 45/200 48000/48000 [==============================] - 1s 15us/step - loss: 0.3262 - accuracy: 0.9090 - val_loss: 0.3108 - val_accuracy: 0.9143 Epoch 46/200 48000/48000 [==============================] - 1s 16us/step - loss: 0.3252 - accuracy: 0.9095 - val_loss: 0.3101 - val_accuracy: 0.9153 Epoch 47/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.3242 - accuracy: 0.9097 - val_loss: 0.3093 - val_accuracy: 0.9158 Epoch 48/200 48000/48000 [==============================] - 1s 16us/step - loss: 0.3233 - accuracy: 0.9097 - val_loss: 0.3085 - val_accuracy: 0.9153 Epoch 49/200 48000/48000 [==============================] - 1s 17us/step - loss: 0.3224 - accuracy: 0.9102 - val_loss: 0.3078 - val_accuracy: 0.9160 Epoch 50/200 48000/48000 [==============================] - 1s 13us/step - loss: 0.3215 - accuracy: 0.9102 - val_loss: 0.3071 - val_accuracy: 0.9159 Epoch 51/200 48000/48000 [==============================] - 1s 13us/step - loss: 0.3206 - accuracy: 0.9105 - val_loss: 0.3065 - val_accuracy: 0.9161 Epoch 52/200 48000/48000 [==============================] - 1s 13us/step - loss: 0.3197 - accuracy: 0.9108 - val_loss: 0.3058 - val_accuracy: 0.9161 Epoch 53/200 48000/48000 [==============================] - 1s 14us/step - loss: 0.3190 - accuracy: 0.9109 - val_loss: 0.3052 - val_accuracy: 0.9165 Epoch 54/200 48000/48000 [==============================] - 1s 14us/step - loss: 0.3182 - accuracy: 0.9112 - val_loss: 0.3045 - val_accuracy: 0.9172 Epoch 55/200 48000/48000 [==============================] - 1s 15us/step - loss: 0.3174 - accuracy: 0.9115 - val_loss: 0.3040 - val_accuracy: 0.9171 Epoch 56/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.3167 - accuracy: 0.9115 - val_loss: 0.3033 - val_accuracy: 0.9172 Epoch 57/200 48000/48000 [==============================] - 1s 15us/step - loss: 0.3159 - accuracy: 0.9118 - val_loss: 0.3029 - val_accuracy: 0.9168 Epoch 58/200 48000/48000 
[==============================] - 1s 16us/step - loss: 0.3152 - accuracy: 0.9119 - val_loss: 0.3024 - val_accuracy: 0.9174 Epoch 59/200 48000/48000 [==============================] - 1s 17us/step - loss: 0.3145 - accuracy: 0.9121 - val_loss: 0.3018 - val_accuracy: 0.9172 Epoch 60/200 48000/48000 [==============================] - 1s 16us/step - loss: 0.3138 - accuracy: 0.9123 - val_loss: 0.3012 - val_accuracy: 0.9174 Epoch 61/200 48000/48000 [==============================] - 1s 17us/step - loss: 0.3132 - accuracy: 0.9124 - val_loss: 0.3007 - val_accuracy: 0.9171 Epoch 62/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.3125 - accuracy: 0.9126 - val_loss: 0.3003 - val_accuracy: 0.9173 Epoch 63/200 48000/48000 [==============================] - 1s 16us/step - loss: 0.3119 - accuracy: 0.9131 - val_loss: 0.2998 - val_accuracy: 0.9176 Epoch 64/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.3113 - accuracy: 0.9129 - val_loss: 0.2992 - val_accuracy: 0.9176 Epoch 65/200 48000/48000 [==============================] - 1s 17us/step - loss: 0.3107 - accuracy: 0.9129 - val_loss: 0.2988 - val_accuracy: 0.9177 Epoch 66/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.3101 - accuracy: 0.9138 - val_loss: 0.2984 - val_accuracy: 0.9178 Epoch 67/200 48000/48000 [==============================] - 1s 16us/step - loss: 0.3095 - accuracy: 0.9136 - val_loss: 0.2979 - val_accuracy: 0.9177 Epoch 68/200 48000/48000 [==============================] - 1s 27us/step - loss: 0.3090 - accuracy: 0.9137 - val_loss: 0.2974 - val_accuracy: 0.9180 Epoch 69/200 48000/48000 [==============================] - 1s 17us/step - loss: 0.3084 - accuracy: 0.9138 - val_loss: 0.2972 - val_accuracy: 0.9180 Epoch 70/200 48000/48000 [==============================] - 2s 31us/step - loss: 0.3079 - accuracy: 0.9142 - val_loss: 0.2967 - val_accuracy: 0.9181 Epoch 71/200 48000/48000 [==============================] - 1s 22us/step - loss: 0.3073 - accuracy: 0.9143 - val_loss: 0.2962 - val_accuracy: 0.9180 Epoch 72/200 48000/48000 [==============================] - 1s 22us/step - loss: 0.3069 - accuracy: 0.9148 - val_loss: 0.2959 - val_accuracy: 0.9184 Epoch 73/200 48000/48000 [==============================] - 1s 24us/step - loss: 0.3063 - accuracy: 0.9147 - val_loss: 0.2956 - val_accuracy: 0.9183 Epoch 74/200 48000/48000 [==============================] - 1s 14us/step - loss: 0.3058 - accuracy: 0.9148 - val_loss: 0.2952 - val_accuracy: 0.9179 Epoch 75/200 48000/48000 [==============================] - 1s 15us/step - loss: 0.3053 - accuracy: 0.9146 - val_loss: 0.2948 - val_accuracy: 0.9181 Epoch 76/200 48000/48000 [==============================] - 1s 14us/step - loss: 0.3049 - accuracy: 0.9150 - val_loss: 0.2946 - val_accuracy: 0.9185 Epoch 77/200 48000/48000 [==============================] - 1s 16us/step - loss: 0.3044 - accuracy: 0.9151 - val_loss: 0.2941 - val_accuracy: 0.9184 Epoch 78/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.3039 - accuracy: 0.9151 - val_loss: 0.2938 - val_accuracy: 0.9186 Epoch 79/200 48000/48000 [==============================] - 1s 23us/step - loss: 0.3035 - accuracy: 0.9153 - val_loss: 0.2935 - val_accuracy: 0.9187 Epoch 80/200 48000/48000 [==============================] - 1s 14us/step - loss: 0.3030 - accuracy: 0.9155 - val_loss: 0.2931 - val_accuracy: 0.9189 Epoch 81/200 48000/48000 [==============================] - 1s 26us/step - loss: 0.3026 - accuracy: 0.9155 - val_loss: 0.2929 - val_accuracy: 0.9188 
Epoch 82/200 48000/48000 [==============================] - 1s 16us/step - loss: 0.3022 - accuracy: 0.9155 - val_loss: 0.2925 - val_accuracy: 0.9193 Epoch 83/200 48000/48000 [==============================] - 1s 17us/step - loss: 0.3017 - accuracy: 0.9159 - val_loss: 0.2922 - val_accuracy: 0.9193 Epoch 84/200 48000/48000 [==============================] - 1s 18us/step - loss: 0.3013 - accuracy: 0.9160 - val_loss: 0.2919 - val_accuracy: 0.9198 Epoch 85/200 48000/48000 [==============================] - 1s 23us/step - loss: 0.3009 - accuracy: 0.9160 - val_loss: 0.2916 - val_accuracy: 0.9193 Epoch 86/200 48000/48000 [==============================] - 1s 17us/step - loss: 0.3005 - accuracy: 0.9163 - val_loss: 0.2914 - val_accuracy: 0.9198 Epoch 87/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.3001 - accuracy: 0.9161 - val_loss: 0.2910 - val_accuracy: 0.9195 Epoch 88/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.2997 - accuracy: 0.9164 - val_loss: 0.2907 - val_accuracy: 0.9197 Epoch 89/200 48000/48000 [==============================] - 1s 18us/step - loss: 0.2994 - accuracy: 0.9162 - val_loss: 0.2904 - val_accuracy: 0.9200 Epoch 90/200 48000/48000 [==============================] - 1s 17us/step - loss: 0.2990 - accuracy: 0.9163 - val_loss: 0.2902 - val_accuracy: 0.9198 Epoch 91/200 48000/48000 [==============================] - 2s 32us/step - loss: 0.2986 - accuracy: 0.9166 - val_loss: 0.2899 - val_accuracy: 0.9201 Epoch 92/200 48000/48000 [==============================] - 1s 23us/step - loss: 0.2983 - accuracy: 0.9167 - val_loss: 0.2896 - val_accuracy: 0.9207 Epoch 93/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.2979 - accuracy: 0.9169 - val_loss: 0.2894 - val_accuracy: 0.9202 Epoch 94/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.2975 - accuracy: 0.9170 - val_loss: 0.2891 - val_accuracy: 0.9197 Epoch 95/200 48000/48000 [==============================] - 1s 28us/step - loss: 0.2972 - accuracy: 0.9167 - val_loss: 0.2890 - val_accuracy: 0.9201 Epoch 96/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.2969 - accuracy: 0.9170 - val_loss: 0.2887 - val_accuracy: 0.9203 Epoch 97/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.2965 - accuracy: 0.9171 - val_loss: 0.2885 - val_accuracy: 0.9202 Epoch 98/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.2962 - accuracy: 0.9170 - val_loss: 0.2882 - val_accuracy: 0.9206 Epoch 99/200 48000/48000 [==============================] - 1s 24us/step - loss: 0.2959 - accuracy: 0.9175 - val_loss: 0.2880 - val_accuracy: 0.9210 Epoch 100/200 48000/48000 [==============================] - 1s 28us/step - loss: 0.2956 - accuracy: 0.9173 - val_loss: 0.2878 - val_accuracy: 0.9208 Epoch 101/200 48000/48000 [==============================] - 1s 22us/step - loss: 0.2952 - accuracy: 0.9175 - val_loss: 0.2875 - val_accuracy: 0.9203 Epoch 102/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.2949 - accuracy: 0.9176 - val_loss: 0.2873 - val_accuracy: 0.9209 Epoch 103/200 48000/48000 [==============================] - 1s 23us/step - loss: 0.2946 - accuracy: 0.9179 - val_loss: 0.2871 - val_accuracy: 0.9208 Epoch 104/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.2943 - accuracy: 0.9179 - val_loss: 0.2869 - val_accuracy: 0.9206 Epoch 105/200 48000/48000 [==============================] - 1s 27us/step - loss: 0.2940 - accuracy: 0.9180 - val_loss: 
0.2866 - val_accuracy: 0.9208 Epoch 106/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.2937 - accuracy: 0.9179 - val_loss: 0.2864 - val_accuracy: 0.9206 Epoch 107/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.2934 - accuracy: 0.9185 - val_loss: 0.2863 - val_accuracy: 0.9208 Epoch 108/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.2931 - accuracy: 0.9179 - val_loss: 0.2860 - val_accuracy: 0.9206 Epoch 109/200 48000/48000 [==============================] - 1s 24us/step - loss: 0.2929 - accuracy: 0.9183 - val_loss: 0.2859 - val_accuracy: 0.9211 Epoch 110/200 48000/48000 [==============================] - 1s 31us/step - loss: 0.2925 - accuracy: 0.9186 - val_loss: 0.2857 - val_accuracy: 0.9209 Epoch 111/200 48000/48000 [==============================] - 1s 25us/step - loss: 0.2923 - accuracy: 0.9184 - val_loss: 0.2854 - val_accuracy: 0.9210 Epoch 112/200 48000/48000 [==============================] - 2s 40us/step - loss: 0.2920 - accuracy: 0.9188 - val_loss: 0.2853 - val_accuracy: 0.9208 Epoch 113/200 48000/48000 [==============================] - 1s 17us/step - loss: 0.2917 - accuracy: 0.9188 - val_loss: 0.2850 - val_accuracy: 0.9208 Epoch 114/200 48000/48000 [==============================] - 1s 23us/step - loss: 0.2915 - accuracy: 0.9187 - val_loss: 0.2849 - val_accuracy: 0.9207 Epoch 115/200 48000/48000 [==============================] - 1s 18us/step - loss: 0.2912 - accuracy: 0.9187 - val_loss: 0.2848 - val_accuracy: 0.9213 Epoch 116/200 48000/48000 [==============================] - 1s 17us/step - loss: 0.2910 - accuracy: 0.9187 - val_loss: 0.2845 - val_accuracy: 0.9212 Epoch 117/200 48000/48000 [==============================] - 1s 16us/step - loss: 0.2907 - accuracy: 0.9189 - val_loss: 0.2843 - val_accuracy: 0.9209 Epoch 118/200 48000/48000 [==============================] - 1s 18us/step - loss: 0.2904 - accuracy: 0.9190 - val_loss: 0.2842 - val_accuracy: 0.9211 Epoch 119/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.2902 - accuracy: 0.9190 - val_loss: 0.2840 - val_accuracy: 0.9212 Epoch 120/200 48000/48000 [==============================] - 1s 25us/step - loss: 0.2899 - accuracy: 0.9191 - val_loss: 0.2838 - val_accuracy: 0.9208 Epoch 121/200 48000/48000 [==============================] - 1s 22us/step - loss: 0.2897 - accuracy: 0.9193 - val_loss: 0.2837 - val_accuracy: 0.9212 Epoch 122/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.2894 - accuracy: 0.9192 - val_loss: 0.2836 - val_accuracy: 0.9213 Epoch 123/200 48000/48000 [==============================] - 1s 23us/step - loss: 0.2892 - accuracy: 0.9196 - val_loss: 0.2834 - val_accuracy: 0.9214 Epoch 124/200 48000/48000 [==============================] - 2s 32us/step - loss: 0.2890 - accuracy: 0.9193 - val_loss: 0.2833 - val_accuracy: 0.9220 Epoch 125/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.2887 - accuracy: 0.9193 - val_loss: 0.2831 - val_accuracy: 0.9212 Epoch 126/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.2885 - accuracy: 0.9199 - val_loss: 0.2830 - val_accuracy: 0.9211 Epoch 127/200 48000/48000 [==============================] - 1s 18us/step - loss: 0.2883 - accuracy: 0.9196 - val_loss: 0.2828 - val_accuracy: 0.9218 Epoch 128/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.2881 - accuracy: 0.9196 - val_loss: 0.2826 - val_accuracy: 0.9213 Epoch 129/200 48000/48000 [==============================] - 1s 21us/step 
- loss: 0.2878 - accuracy: 0.9198 - val_loss: 0.2824 - val_accuracy: 0.9213 Epoch 130/200 48000/48000 [==============================] - 1s 27us/step - loss: 0.2876 - accuracy: 0.9199 - val_loss: 0.2822 - val_accuracy: 0.9215 Epoch 131/200 48000/48000 [==============================] - 2s 32us/step - loss: 0.2873 - accuracy: 0.9200 - val_loss: 0.2823 - val_accuracy: 0.9218 Epoch 132/200 48000/48000 [==============================] - 1s 18us/step - loss: 0.2872 - accuracy: 0.9201 - val_loss: 0.2820 - val_accuracy: 0.9211 Epoch 133/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.2869 - accuracy: 0.9196 - val_loss: 0.2819 - val_accuracy: 0.9216 Epoch 134/200 48000/48000 [==============================] - 1s 25us/step - loss: 0.2868 - accuracy: 0.9200 - val_loss: 0.2818 - val_accuracy: 0.9214 Epoch 135/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.2865 - accuracy: 0.9200 - val_loss: 0.2816 - val_accuracy: 0.9213 Epoch 136/200 48000/48000 [==============================] - 1s 22us/step - loss: 0.2863 - accuracy: 0.9199 - val_loss: 0.2815 - val_accuracy: 0.9214 Epoch 137/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.2861 - accuracy: 0.9203 - val_loss: 0.2814 - val_accuracy: 0.9215 Epoch 138/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.2859 - accuracy: 0.9200 - val_loss: 0.2812 - val_accuracy: 0.9218 Epoch 139/200 48000/48000 [==============================] - 1s 28us/step - loss: 0.2857 - accuracy: 0.9206 - val_loss: 0.2810 - val_accuracy: 0.9210 Epoch 140/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.2855 - accuracy: 0.9204 - val_loss: 0.2809 - val_accuracy: 0.9212 Epoch 141/200 48000/48000 [==============================] - 1s 23us/step - loss: 0.2853 - accuracy: 0.9204 - val_loss: 0.2808 - val_accuracy: 0.9212 Epoch 142/200 48000/48000 [==============================] - 1s 25us/step - loss: 0.2851 - accuracy: 0.9203 - val_loss: 0.2807 - val_accuracy: 0.9214 Epoch 143/200 48000/48000 [==============================] - 1s 30us/step - loss: 0.2849 - accuracy: 0.9208 - val_loss: 0.2805 - val_accuracy: 0.9214 Epoch 144/200 48000/48000 [==============================] - 1s 24us/step - loss: 0.2847 - accuracy: 0.9206 - val_loss: 0.2804 - val_accuracy: 0.9217 Epoch 145/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.2846 - accuracy: 0.9206 - val_loss: 0.2803 - val_accuracy: 0.9215 Epoch 146/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.2843 - accuracy: 0.9208 - val_loss: 0.2802 - val_accuracy: 0.9219 Epoch 147/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.2842 - accuracy: 0.9206 - val_loss: 0.2800 - val_accuracy: 0.9216 Epoch 148/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.2840 - accuracy: 0.9211 - val_loss: 0.2800 - val_accuracy: 0.9222 Epoch 149/200 48000/48000 [==============================] - 1s 24us/step - loss: 0.2838 - accuracy: 0.9209 - val_loss: 0.2798 - val_accuracy: 0.9222 Epoch 150/200 48000/48000 [==============================] - 1s 22us/step - loss: 0.2836 - accuracy: 0.9211 - val_loss: 0.2796 - val_accuracy: 0.9221 Epoch 151/200 48000/48000 [==============================] - 1s 22us/step - loss: 0.2835 - accuracy: 0.9211 - val_loss: 0.2795 - val_accuracy: 0.9222 Epoch 152/200 48000/48000 [==============================] - 1s 22us/step - loss: 0.2833 - accuracy: 0.9210 - val_loss: 0.2795 - val_accuracy: 0.9222 Epoch 153/200 48000/48000 
[==============================] - 1s 23us/step - loss: 0.2831 - accuracy: 0.9211 - val_loss: 0.2793 - val_accuracy: 0.9220 Epoch 154/200 48000/48000 [==============================] - 1s 28us/step - loss: 0.2829 - accuracy: 0.9211 - val_loss: 0.2792 - val_accuracy: 0.9221 Epoch 155/200 48000/48000 [==============================] - 1s 28us/step - loss: 0.2827 - accuracy: 0.9209 - val_loss: 0.2791 - val_accuracy: 0.9220 Epoch 156/200 48000/48000 [==============================] - 1s 30us/step - loss: 0.2826 - accuracy: 0.9213 - val_loss: 0.2790 - val_accuracy: 0.9221 Epoch 157/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.2824 - accuracy: 0.9212 - val_loss: 0.2789 - val_accuracy: 0.9222 Epoch 158/200 48000/48000 [==============================] - 1s 27us/step - loss: 0.2822 - accuracy: 0.9214 - val_loss: 0.2788 - val_accuracy: 0.9227 Epoch 159/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.2821 - accuracy: 0.9212 - val_loss: 0.2787 - val_accuracy: 0.9225 Epoch 160/200 48000/48000 [==============================] - 1s 24us/step - loss: 0.2819 - accuracy: 0.9216 - val_loss: 0.2786 - val_accuracy: 0.9226 Epoch 161/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.2817 - accuracy: 0.9214 - val_loss: 0.2785 - val_accuracy: 0.9225 Epoch 162/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.2816 - accuracy: 0.9216 - val_loss: 0.2785 - val_accuracy: 0.9223 Epoch 163/200 48000/48000 [==============================] - 2s 36us/step - loss: 0.2814 - accuracy: 0.9215 - val_loss: 0.2783 - val_accuracy: 0.9225 Epoch 164/200 48000/48000 [==============================] - 1s 24us/step - loss: 0.2813 - accuracy: 0.9215 - val_loss: 0.2782 - val_accuracy: 0.9226 Epoch 165/200 48000/48000 [==============================] - 1s 24us/step - loss: 0.2811 - accuracy: 0.9217 - val_loss: 0.2781 - val_accuracy: 0.9225 Epoch 166/200 48000/48000 [==============================] - 1s 23us/step - loss: 0.2809 - accuracy: 0.9216 - val_loss: 0.2781 - val_accuracy: 0.9227 Epoch 167/200 48000/48000 [==============================] - 1s 29us/step - loss: 0.2808 - accuracy: 0.9219 - val_loss: 0.2780 - val_accuracy: 0.9227 Epoch 168/200 48000/48000 [==============================] - 2s 38us/step - loss: 0.2806 - accuracy: 0.9219 - val_loss: 0.2779 - val_accuracy: 0.9225 Epoch 169/200 48000/48000 [==============================] - 1s 18us/step - loss: 0.2805 - accuracy: 0.9219 - val_loss: 0.2778 - val_accuracy: 0.9227 Epoch 170/200 48000/48000 [==============================] - 1s 18us/step - loss: 0.2803 - accuracy: 0.9220 - val_loss: 0.2777 - val_accuracy: 0.9222 Epoch 171/200 48000/48000 [==============================] - 1s 24us/step - loss: 0.2802 - accuracy: 0.9218 - val_loss: 0.2776 - val_accuracy: 0.9227 Epoch 172/200 48000/48000 [==============================] - 1s 20us/step - loss: 0.2800 - accuracy: 0.9217 - val_loss: 0.2775 - val_accuracy: 0.9227 Epoch 173/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.2799 - accuracy: 0.9218 - val_loss: 0.2774 - val_accuracy: 0.9225 Epoch 174/200 48000/48000 [==============================] - 1s 21us/step - loss: 0.2797 - accuracy: 0.9218 - val_loss: 0.2773 - val_accuracy: 0.9226 Epoch 175/200 48000/48000 [==============================] - 1s 19us/step - loss: 0.2796 - accuracy: 0.9221 - val_loss: 0.2772 - val_accuracy: 0.9231 Epoch 176/200 48000/48000 [==============================] - 1s 27us/step - loss: 0.2794 - accuracy: 0.9222 - val_loss: 0.2771 - 
val_accuracy: 0.9226 Epoch 177/200 48000/48000 [==============================] - 1s 24us/step - loss: 0.2793 - accuracy: 0.9218 - val_loss: 0.2770 - val_accuracy: 0.9229 Epoch 178/200 48000/48000 [==============================] - 1s 23us/step - loss: 0.2791 - accuracy: 0.9219 - val_loss: 0.2771 - val_accuracy: 0.9228 Epoch 179/200 48000/48000 [==============================] - 1s 28us/step - loss: 0.2790 - accuracy: 0.9222 - val_loss: 0.2769 - val_accuracy: 0.9230 Epoch 180/200 48000/48000 [==============================] - 2s 33us/step - loss: 0.2789 - accuracy: 0.9221 - val_loss: 0.2768 - val_accuracy: 0.9231 Epoch 181/200 48000/48000 [==============================] - 1s 27us/step - loss: 0.2787 - accuracy: 0.9224 - val_loss: 0.2767 - val_accuracy: 0.9227 Epoch 182/200 48000/48000 [==============================] - 1s 27us/step - loss: 0.2786 - accuracy: 0.9222 - val_loss: 0.2766 - val_accuracy: 0.9228 Epoch 183/200 48000/48000 [==============================] - 1s 29us/step - loss: 0.2784 - accuracy: 0.9224 - val_loss: 0.2766 - val_accuracy: 0.9227 Epoch 184/200 48000/48000 [==============================] - 2s 41us/step - loss: 0.2783 - accuracy: 0.9224 - val_loss: 0.2764 - val_accuracy: 0.9226 Epoch 185/200 48000/48000 [==============================] - 1s 26us/step - loss: 0.2782 - accuracy: 0.9226 - val_loss: 0.2764 - val_accuracy: 0.9235 Epoch 186/200 48000/48000 [==============================] - 1s 22us/step - loss: 0.2780 - accuracy: 0.9222 - val_loss: 0.2762 - val_accuracy: 0.9233 Epoch 187/200 48000/48000 [==============================] - 1s 23us/step - loss: 0.2779 - accuracy: 0.9222 - val_loss: 0.2762 - val_accuracy: 0.9229 Epoch 188/200 48000/48000 [==============================] - 2s 34us/step - loss: 0.2778 - accuracy: 0.9227 - val_loss: 0.2761 - val_accuracy: 0.9233 Epoch 189/200 48000/48000 [==============================] - 1s 30us/step - loss: 0.2776 - accuracy: 0.9226 - val_loss: 0.2761 - val_accuracy: 0.9233 Epoch 190/200 48000/48000 [==============================] - 1s 24us/step - loss: 0.2775 - accuracy: 0.9224 - val_loss: 0.2760 - val_accuracy: 0.9241 Epoch 191/200 48000/48000 [==============================] - 1s 29us/step - loss: 0.2774 - accuracy: 0.9222 - val_loss: 0.2759 - val_accuracy: 0.9235 Epoch 192/200 48000/48000 [==============================] - 1s 30us/step - loss: 0.2772 - accuracy: 0.9223 - val_loss: 0.2758 - val_accuracy: 0.9232 Epoch 193/200 48000/48000 [==============================] - 1s 30us/step - loss: 0.2771 - accuracy: 0.9227 - val_loss: 0.2757 - val_accuracy: 0.9232 Epoch 194/200 48000/48000 [==============================] - 1s 25us/step - loss: 0.2770 - accuracy: 0.9225 - val_loss: 0.2757 - val_accuracy: 0.9237 Epoch 195/200 48000/48000 [==============================] - 1s 29us/step - loss: 0.2769 - accuracy: 0.9227 - val_loss: 0.2756 - val_accuracy: 0.9239 Epoch 196/200 48000/48000 [==============================] - 2s 38us/step - loss: 0.2767 - accuracy: 0.9226 - val_loss: 0.2755 - val_accuracy: 0.9232 Epoch 197/200 48000/48000 [==============================] - 2s 33us/step - loss: 0.2766 - accuracy: 0.9227 - val_loss: 0.2755 - val_accuracy: 0.9239 Epoch 198/200 48000/48000 [==============================] - 1s 26us/step - loss: 0.2765 - accuracy: 0.9227 - val_loss: 0.2754 - val_accuracy: 0.9235 Epoch 199/200 48000/48000 [==============================] - 2s 36us/step - loss: 0.2764 - accuracy: 0.9227 - val_loss: 0.2754 - val_accuracy: 0.9237 Epoch 200/200 48000/48000 [==============================] - 2s 31us/step - loss: 
0.2762 - accuracy: 0.9228 - val_loss: 0.2753 - val_accuracy: 0.9234 10000/10000 [==============================] - 0s 41us/step Test score: 0.27798669055998326 Test accuracy: 0.921999990940094
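###Markdown Note that `N_HIDDEN = 128` is defined above but never used: the trained model is a single softmax layer. The sketch below is my own variation, not part of the original notebook, showing one way the constant could be used to add hidden layers with the same old-style `keras` imports; no accuracy figure is claimed for it. ###Code
# Sketch only: a deeper variant reusing the constants defined above.
deep_model = Sequential()
deep_model.add(Dense(N_HIDDEN, input_shape=(RESHAPED,)))
deep_model.add(Activation('relu'))
deep_model.add(Dense(N_HIDDEN))
deep_model.add(Activation('relu'))
deep_model.add(Dense(NB_CLASSES))
deep_model.add(Activation('softmax'))
deep_model.summary()
deep_model.compile(optimizer=OPTIMIZER, loss='categorical_crossentropy', metrics=['accuracy'])
# history = deep_model.fit(X_train, y_train, batch_size=BATCH_SIZE, epochs=NB_EPOCH,
#                          verbose=VERBOSE, validation_split=VALIDATION_SPLIT)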
code_basics.ipynb
###Markdown Scripts to do the Code Basics for PH136 ###Code # Import libraries from __future__ import print_function, absolute_import, division, unicode_literals import numpy as np import glob, os, sys ###Output _____no_output_____ ###Markdown Variables ###Code aa = 2 print('aa = {:d}'.format(aa)) bb = 2 * aa print('bb = 2*aa = {:d}'.format(bb)) cc = 3. print('Info on variable cc={:g}'.format(cc)) print(np.finfo(cc)) dd = np.sin(cc) print(dd) ###Output 0.14112000806 ###Markdown Strings ###Code ss = str('wakemeup') print('My string is {:s}'.format(ss)) new_ss = ss+'z' print('My new string is {:s}'.format(new_ss)) ipos = new_ss.find('e') print('The position is {:d}'.format(ipos)) flg = new_ss == 'big' print('new_ss == big?? {!r:^}'.format(flg)) new_ss = new_ss.replace('e','i',1) print('My final string is {:s}'.format(new_ss)) ###Output My final string is wakimeupz ###Markdown Arrays ###Code aa = np.zeros(100) aa = np.ones(100) aa += 10 # for ii in range(100): # aa[ii] += 10 print(aa) aa[0:50] += 100 print(aa) bb = np.zeros(100) bb[:] = 50 print(bb) cc = aa + bb print(cc) dd = aa * bb print(dd) ee = np.arange(100) print(ee) idx = np.arange(50)*2 + 1 ee[idx] = -1 # ee[1::2] = -1 print(idx) print(ee) ###Output [ 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 81 83 85 87 89 91 93 95 97 99] [-999 -1 -999 -1 -999 -1 -999 -1 -999 -1 -40 -1 -38 -1 -36 -1 -34 -1 -32 -1 -30 -1 -28 -1 -26 -1 -24 -1 -22 -1 -20 -1 -18 -1 -16 -1 -14 -1 -12 -1 -10 -1 -8 -1 -6 -1 -4 -1 -2 -1 0 -1 2 -1 4 -1 6 -1 8 -1 10 -1 12 -1 14 -1 16 -1 18 -1 20 -1 22 -1 24 -1 26 -1 28 -1 30 -1 32 -1 34 -1 36 -1 38 -1 40 -1 -999 -1 -999 -1 -999 -1 -999 -1 -999] ###Markdown Plotting ###Code from matplotlib import pyplot as plt %matplotlib inline if True: plt.figure(dpi=700) plt.clf() plt.plot(aa) plt.plot(bb, 'g.') plt.plot(cc, 'r:') plt.plot(dd, 'k--') plt.show() ###Output _____no_output_____ ###Markdown Where ###Code #ee = np.arange(101) - 50 ee = np.arange(-50, 51) idx = np.where( np.abs(ee) > 40 ) ee[idx] = -999. print(idx) #print(ee) ###Output [ 0 1 2 3 4 5 6 7 8 9 91 92 93 94 95 96 97 98 99 100] ###Markdown 2D Arrays ###Code aa = np.zeros( (100,100) ) #for kk in range(100): # aa[kk,:] = kk aa[:,] = np.arange(100) aa = np.transpose(aa) #aa = np.mgrid[0:100,0:100][0] #aa # Find a non for loop if True: plt.clf() plt.imshow(aa, origin='lower') plt.xlabel('x') plt.colorbar() plt.show() bb = np.ones( (100,100) ) cc = aa + bb dd = aa * bb if True: plt.clf() plt.plot(cc[:,2]) plt.show() ###Output _____no_output_____ ###Markdown Save a figure ###Code from matplotlib.backends.backend_pdf import PdfPages outfil = 'tmp.pdf' if False: print('Saving a figure to the Desktop: {:s}'.format(outfil)) plt.clf() plt.imshow(aa, origin='lower') plt.colorbar() plt.xlabel('row') plt.ylabel('column') plt.savefig('/home/aalabi/Desktop/ASTR136/wk1_class/Exercise/'+outfil) ###Output _____no_output_____ ###Markdown Module (see plot_sine.py) Random numbers ###Code rr = np.random.randn(1000) + 1. 
if True: plt.clf() plt.hist(rr) plt.show() bins = np.arange(-5, 5, 0.1) #print(bins) if True: plt.clf() plt.hist(rr, bins=bins) plt.show() ###Output _____no_output_____ ###Markdown Simple stats ###Code mu = np.mean(rr) med = np.median(rr) sig = np.std(rr) print('Mean = {:g}, Median = {:g}, RMS = {:g}'.format(mu, med, sig)) gdv = np.where( np.abs(rr-mu) < (2*sig) )[0] mu2 = np.mean(rr[gdv]) med2 = np.median(rr[gdv]) sig2 = np.std(rr[gdv]) print('Mean = {:g}, Median = {:g}, RMS = {:g}'.format(mu2, med2, sig2)) ###Output Mean = 0.841022, Median = 0.785754, RMS = 0.770237
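###Markdown The cell above clips outliers at 2 sigma in a single pass. A hedged generalization, not part of the original exercise, is to iterate the clipping until no further points are rejected; `sigma_clip_stats` is a name I introduce. ###Code
# Hypothetical helper: iterative sigma clipping around the mean.
def sigma_clip_stats(values, nsigma=2.0, max_iter=5):
    vals = np.asarray(values)
    keep = np.ones(vals.size, dtype=bool)
    for _ in range(max_iter):
        mu, sig = np.mean(vals[keep]), np.std(vals[keep])
        new_keep = np.abs(vals - mu) < nsigma * sig
        if np.array_equal(new_keep, keep):  # converged: no further rejections
            break
        keep = new_keep
    return np.mean(vals[keep]), np.median(vals[keep]), np.std(vals[keep])

mu3, med3, sig3 = sigma_clip_stats(rr, nsigma=2.0)
print('Clipped: Mean = {:g}, Median = {:g}, RMS = {:g}'.format(mu3, med3, sig3))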
miscellaneous.ipynb
###Markdown Risk Premia in REITs and its application in a quantitative value strategy Dissertation for Master's degree in Economics from [Insper](https://www.insper.edu.br/en/graduate/masters-of-science/) Advisor: [Prof. Dr. Gustavo B. Soares](https://github.com/gustavobsoares/) ([CV](http://lattes.cnpq.br/8491228979459078)) Student: Lucas L. Sanches ([Resume](http://lattes.cnpq.br/2528322802099316)) Miscellaneous Notebook Imports ###Code from utils.database import * from statsmodels.regression.rolling import RollingOLS from tqdm import tqdm import talib as ta import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown Optimal portfolios with international ETFs ###Code start_date = START_DATES['SPBDU1ST Index'] data_ports = data[list(ETFS.values())].copy() data_ports[['USDJPY Curncy','USDEUR Curncy']] = data_rates[['USDJPY Curncy','USDEUR Curncy']] data_ports['D5BK B2 Equity'] /= data_ports['USDEUR Curncy'] data_ports['1476 JT Equity'] /= data_ports['USDJPY Curncy'] monthly_data_ports = np.log(data_ports[list(ETFS.values())].resample('M').last()).diff().dropna() monthly_data_ports['IDUP LN Equity'] -= monthly_data_rates['US0003M Index'] monthly_data_ports['D5BK B2 Equity'] -= monthly_data_rates['EUR003M Equity'] monthly_data_ports['1476 JT Equity'] -= monthly_data_rates['JY0003M Index'] etf_portfolios = monthly_data_ports.loc[monthly_data_ports.index>START_DATES['D5BK B2 Equity']] print(etf_portfolios.corr()) # statistics of monthly data placeholders = list(range(len(ETFS.values()))) summary_statistics_etfs = pd.DataFrame({'Obs': placeholders, 'Mean': placeholders, 'Std': placeholders, 'Max': placeholders, 'Min': placeholders}, index=list(ETFS.values())) for c in list(ETFS.values()): aux = monthly_data_ports[c].loc[monthly_data_ports.index>START_DATES[c]] obs_ = len(aux) mean_ = aux.mean() std_ = aux.std() max_ = aux.max() min_ = aux.min() summary_statistics_etfs.loc[c] = [obs_, mean_, std_, max_, min_] summary_statistics_etfs data[summary_statistics_etfs.index].plot(figsize=(15,6),grid=True) # Using code from https://www.analyticsvidhya.com/blog/2021/04/portfolio-optimization-using-mpt-in-python/ port_returns = [] port_vols = [] port_weights = [] num_assets = len(etf_portfolios.columns) num_portfolios = 10000 asset_returns = np.log(data[list(ETFS.values())].loc[data.index>start_date].resample('Y').last()).diff().dropna().mean() var_matrix = etf_portfolios.cov() * 12 np.random.seed(42) for port in range(num_portfolios): weights = np.random.random(num_assets) weights /= np.sum(weights) port_weights.append(weights) returns = np.dot(weights, asset_returns) port_returns.append(returns) var = var_matrix.mul(weights, axis=0).mul(weights, axis=1).sum().sum() sd = np.sqrt(var) port_vols.append(sd) ports = {'Return': port_returns, 'Vol': port_vols} for i, symbol in enumerate(etf_portfolios.columns.tolist()): ports[symbol+' weight'] = [w[i] for w in port_weights] df_ports = pd.DataFrame(ports) min_vol_port = df_ports.iloc[df_ports['Vol'].idxmin()] max_sharpe_port = df_ports.iloc[(df_ports['Return']/df_ports['Vol']).idxmax()] df_ports.plot.scatter(x='Vol', y='Return', marker='o', s=15, alpha=0.5, grid=True, figsize=(6,6)) plt.scatter(min_vol_port[1], min_vol_port[0], color='y', marker='*', s=500) plt.scatter(max_sharpe_port[1], max_sharpe_port[0], color='r', marker='*', s=500) plt.xlabel('Volatility') plt.ylabel('Return') print(max_sharpe_port) print(f'\nSharpe = {max_sharpe_port[0]/max_sharpe_port[1]}') print(f'Var Matrix: {var_matrix}') asset_returns print('US 
Sharpe:', 0.040195/np.sqrt(0.043685)) print('Europe Sharpe:', 0.013801/np.sqrt(0.070772)) print('Japan Sharpe:', 0.023455/np.sqrt(0.027203)) ###Output US Sharpe: 0.19231176908190553 Europe Sharpe: 0.05187759345796491 Japan Sharpe: 0.14220898493965298 ###Markdown Optimal portfolios in Brazil ###Code # Using code from https://www.analyticsvidhya.com/blog/2021/04/portfolio-optimization-using-mpt-in-python/ start_date = datetime(2015,1,1) brazil_xs = ['IBOV Index (ER)','IFIX Index (ER)','BZAD10Y Index (ER)'] df_brazil = pd.read_excel('data/data.xlsx', sheet_name='total_return', index_col=0)[brazil_xs].loc[bz_factors.index] df_brazil = np.log(df_brazil.resample('M').last()).diff().dropna() all_time_series = df_brazil.loc[df_brazil.index>start_date] port_returns = [] port_vols = [] port_weights = [] num_assets = len(all_time_series.columns) num_portfolios = 10000 asset_returns = np.log(data[brazil_xs].loc[data.index>start_date].resample('Y').last()).diff().dropna().mean() var_matrix = all_time_series.cov() * 12 np.random.seed(42) for port in range(num_portfolios): weights = np.random.random(num_assets) weights /= np.sum(weights) port_weights.append(weights) returns = np.dot(weights, asset_returns) port_returns.append(returns) var = var_matrix.mul(weights, axis=0).mul(weights, axis=1).sum().sum() sd = np.sqrt(var) port_vols.append(sd) ports = {'Return': port_returns, 'Vol': port_vols} for i, symbol in enumerate(all_time_series.columns.tolist()): ports[symbol+' weight'] = [w[i] for w in port_weights] df_ports = pd.DataFrame(ports) min_vol_port = df_ports.iloc[df_ports['Vol'].idxmin()] max_sharpe_port = df_ports.iloc[(df_ports['Return']/df_ports['Vol']).idxmax()] df_ports.plot.scatter(x='Vol', y='Return', marker='o', s=15, alpha=0.5, grid=True, figsize=(6,6)) plt.scatter(min_vol_port[1], min_vol_port[0], color='y', marker='*', s=500) plt.scatter(max_sharpe_port[1], max_sharpe_port[0], color='r', marker='*', s=500) plt.xlabel('Volatility') plt.ylabel('Return') print(max_sharpe_port) print(f'\nSharpe = {max_sharpe_port[0]/max_sharpe_port[1]}') print(f'Var Matrix: {var_matrix}') ###Output Return 0.017411 Vol 0.036368 IBOV Index (ER) weight 0.085138 IFIX Index (ER) weight 0.104272 BZAD10Y Index (ER) weight 0.810590 Name: 1355, dtype: float64 Sharpe = 0.47875952254816484 Var Matrix: IBOV Index (ER) IFIX Index (ER) BZAD10Y Index (ER) IBOV Index (ER) 0.062090 0.021168 0.000576 IFIX Index (ER) 0.021168 0.014657 0.000036 BZAD10Y Index (ER) 0.000576 0.000036 0.000383 ###Markdown virtualenv - a tool to create isolated Python environments
- An environment has its own installation directories
- It doesn't share libraries with other virtualenv environments
- It helps avoid the problem of upgrading libraries for apps that aren't yet ready to be upgraded
- It lets you pass along the entire virtualenv with exactly the libraries needed to run it

1) Installation:
pip install virtualenv
virtualenv .venv    (.venv is a standard name)
2) Locating the .venv folder and activating the virtual environment:
dir (or ls)
.venv\scripts\activate
3) Installing the libraries in the virtual environment:
pip install pandas
pip install seaborn
...
4) Documenting the requirements of this virtual environment:
pip freeze > requirements.txt
5) Visualizing the requirements:
requirements.txt opens the .txt file

Jupyter notebook structure ###Code # creating new dfs from previous ones on each step of the data cleanup: df1 = df_raw # renaming columns (step 1.2 (1.1 in my notation), after importing the data and before formatting data in columns): df.columns # consult column names new_cols = ['column1_name', 'column2_name',
'column3_name'] # rules: always lowercase, underscore between words, full words instead of abbreviations (?) df.columns = new_cols # rename columns # Split columns between categorical and numerical num_features = df.select_dtypes(include=['int64', 'float64', 'datetime64']) cat_features = df.select_dtypes(exclude=['int64', 'float64', 'datetime64']) # First filter rows, later columns! # Select only subset of data to work with ## 1. Department Millennium == Apparel ## 2. Product Classification == In House df2 = df1[(df1['department_millennium'] == 'apparel' ) & #( df1['product_classification'] == 'in_house' ) & ( df1['number_of_items_shipped'] > 0 ) ] # Remove MKT and DEM orders df2 = df2[~df2['order_channel_code'].isin( ['MKT', 'DEM'] )] # Remove duplicated orders #df2 = df2.drop_duplicates( subset='order_id' ) ###Output _____no_output_____
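###Markdown As a cross-check on the 10,000-portfolio random searches earlier in this notebook, the unconstrained maximum-Sharpe (tangency) portfolio has the closed form w proportional to the inverse covariance matrix times the mean excess returns. The sketch below is mine, not part of the dissertation code: it assumes the excess-return convention already used above, no short-sale or weight constraints, and reuses the `asset_returns` and `var_matrix` objects from the Brazilian example, so its answer can legitimately differ from the simulated maximum. ###Code
# Hedged sketch: closed-form tangency weights w ~ inv(Sigma) @ mu (unconstrained).
def tangency_weights(mu, cov):
    mu = mu.reindex(cov.columns)                   # align ordering with Sigma
    raw = np.linalg.solve(cov.values, mu.values)   # inv(Sigma) @ mu
    w = pd.Series(raw, index=cov.columns)
    return w / w.sum()                             # scale weights to sum to 1

w_tan = tangency_weights(asset_returns, var_matrix)
ret_tan = float(w_tan.values @ asset_returns.reindex(w_tan.index).values)
vol_tan = float(np.sqrt(w_tan.values @ var_matrix.values @ w_tan.values))
print(w_tan)
print('Tangency Sharpe =', ret_tan / vol_tan)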
experimental/facenet/facenet_test.ipynb
###Markdown FaceNet ###Code import tensorflow as tf import os import matplotlib.pyplot as plt import numpy as np from skimage.transform import resize FACENET_DIR = '/home/gaston/workspace/two-face/facenet' MODEL_PATH = os.path.join(FACENET_DIR, 'tensorflow-101/model/facenet_model.json') MODEL_128_PATH = os.path.join(FACENET_DIR, 'tensorflow-101/model/facenet_model_128.json') WEIGHTS_PATH = os.path.join(FACENET_DIR, 'facenet_weights.h5') PERSON_1_IMG_1 = '/home/gaston/workspace/datasets/CASIA-WebFace/CASIA-WebFace/data/test/0000045/001.jpg' PERSON_1_IMG_2 = '/home/gaston/workspace/datasets/CASIA-WebFace/CASIA-WebFace/data/test/0000045/002.jpg' PERSON_2_IMG_1 = '/home/gaston/workspace/datasets/CASIA-WebFace/CASIA-WebFace/data/test/0000099/001.jpg' PERSON_2_IMG_2 = '/home/gaston/workspace/datasets/CASIA-WebFace/CASIA-WebFace/data/test/0000099/002.jpg' PERSON_3_IMG_1 = '/home/gaston/workspace/datasets/CASIA-WebFace/CASIA-WebFace/data/test/0000157/007.jpg' #facenet model structure: https://github.com/serengil/tensorflow-101/blob/master/model/facenet_model.json model = tf.keras.models.model_from_json(open(MODEL_PATH, "r").read()) #pre-trained weights https://drive.google.com/file/d/1971Xk5RwedbudGgTIrGAL4F7Aifu7id1/view?usp=sharing model.load_weights(WEIGHTS_PATH) model.summary() #facenet model structure: https://github.com/serengil/tensorflow-101/blob/master/model/facenet_model.json model_128 = tf.keras.models.model_from_json(open(MODEL_128_PATH, "r").read()) #pre-trained weights https://drive.google.com/file/d/1971Xk5RwedbudGgTIrGAL4F7Aifu7id1/view?usp=sharing model_128.load_weights(WEIGHTS_PATH) model_128.summary() def fix_image_encoding(image): if (image.ndim == 2): # Add new dimension for channels image = image[:, :, np.newaxis] if (image.shape[-1] == 1): # Convert greyscale to RGB image = np.concatenate((image,) * 3, axis=-1) return image def preprocess_image(image_file_path, size): image_file = tf.gfile.GFile(image_file_path, mode='rb') image = plt.imread(image_file) image = fix_image_encoding(image) image = resize(image, (size, size)) image = np.expand_dims(image, axis=0) image = image*2.0 - 1.0 return image img_1_1_128 = preprocess_image(PERSON_1_IMG_1, 128) img_1_2_128 = preprocess_image(PERSON_1_IMG_2, 128) img_2_1_128 = preprocess_image(PERSON_2_IMG_1, 128) img_2_2_128 = preprocess_image(PERSON_2_IMG_2, 128) img_3_1_128 = preprocess_image(PERSON_3_IMG_1, 128) print(img_1_1_128) pred_1_128 = model_128.predict(img_1_1_128)[0,:] pred_2_128 = model_128.predict(img_1_2_128)[0,:] pred_3_128 = model_128.predict(img_2_1_128)[0,:] pred_4_128 = model_128.predict(img_2_2_128)[0,:] pred_5_128 = model_128.predict(img_3_1_128)[0,:] print(len(pred_5_128)) def findEuclideanDistance(source_representation, test_representation): euclidean_distance = source_representation - test_representation euclidean_distance = np.sum(np.multiply(euclidean_distance, euclidean_distance)) euclidean_distance = np.sqrt(euclidean_distance) return euclidean_distance / len(source_representation) print(findEuclideanDistance(pred_1_128, pred_2_128)) print(findEuclideanDistance(pred_3_128, pred_4_128)) print() print(findEuclideanDistance(pred_1_128, pred_3_128)) print(findEuclideanDistance(pred_1_128, pred_4_128)) print() print(findEuclideanDistance(pred_5_128, pred_1_128)) print(findEuclideanDistance(pred_5_128, pred_2_128)) print() print(findEuclideanDistance(pred_5_128, pred_3_128)) print(findEuclideanDistance(pred_5_128, pred_4_128)) img_1_1 = preprocess_image(PERSON_1_IMG_1, 160) img_1_2 = preprocess_image(PERSON_1_IMG_2, 
160) img_2_1 = preprocess_image(PERSON_2_IMG_1, 160) img_2_2 = preprocess_image(PERSON_2_IMG_2, 160) img_3_1 = preprocess_image(PERSON_3_IMG_1, 160) print(img_1_1) pred_1 = model.predict(img_1_1)[0,:] pred_2 = model.predict(img_1_2)[0,:] pred_3 = model.predict(img_2_1)[0,:] pred_4 = model.predict(img_2_2)[0,:] pred_5 = model.predict(img_3_1)[0,:] print(len(pred_1)) print(findEuclideanDistance(pred_1, pred_2)) print(findEuclideanDistance(pred_3, pred_4)) print() print(findEuclideanDistance(pred_1, pred_3)) print(findEuclideanDistance(pred_1, pred_4)) print() print(findEuclideanDistance(pred_5, pred_1)) print(findEuclideanDistance(pred_5, pred_2)) print() print(findEuclideanDistance(pred_5, pred_3)) print(findEuclideanDistance(pred_5, pred_4)) ###Output 0.07255291193723679 0.07758000493049622 0.1185019388794899 0.12128172069787979 0.1163187101483345 0.10997657477855682 0.10901172459125519 0.10875829309225082
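###Markdown In the distances printed above, the same-person pairs sit around 0.07-0.08 while the different-person pairs sit around 0.11-0.12 for the 160x160 model. The sketch below turns that gap into a decision; it is not part of the original experiment, and the 0.09 threshold is purely an illustrative assumption based on these few pairs, not a calibrated value. ###Code
# Sketch only: verification decision on top of the distance defined above.
# The threshold is an assumption chosen between the observed same/different ranges.
def same_person(embedding_a, embedding_b, threshold=0.09):
    return findEuclideanDistance(embedding_a, embedding_b) < threshold

print(same_person(pred_1, pred_2))  # two images of person 1 -> expected True
print(same_person(pred_1, pred_3))  # person 1 vs person 2   -> expected False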
002-Methods/004-Dashboards/001-Dash/001-Getting_Started/getting_started.ipynb
###Markdown JupyterDashThe `jupyter-dash` package makes it easy to develop Plotly Dash apps from the Jupyter Notebook and JupyterLab.Just replace the standard `dash.Dash` class with the `jupyter_dash.JupyterDash` subclass. ###Code from jupyter_dash import JupyterDash import dash import dash_core_components as dcc import dash_html_components as html import pandas as pd ###Output _____no_output_____ ###Markdown When running in JupyterHub or Binder, call the `infer_jupyter_config` function to detect the proxy configuration. ###Code JupyterDash.infer_jupyter_proxy_config() ###Output _____no_output_____ ###Markdown Load and preprocess data ###Code df = pd.read_csv('https://plotly.github.io/datasets/country_indicators.csv') available_indicators = df['Indicator Name'].unique() ###Output _____no_output_____ ###Markdown Construct the app and callbacks ###Code external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css'] app = JupyterDash(__name__, external_stylesheets=external_stylesheets) # Create server variable with Flask server object for use with gunicorn server = app.server app.layout = html.Div([ html.Div([ html.Div([ dcc.Dropdown( id='crossfilter-xaxis-column', options=[{'label': i, 'value': i} for i in available_indicators], value='Fertility rate, total (births per woman)' ), dcc.RadioItems( id='crossfilter-xaxis-type', options=[{'label': i, 'value': i} for i in ['Linear', 'Log']], value='Linear', labelStyle={'display': 'inline-block'} ) ], style={'width': '49%', 'display': 'inline-block'}), html.Div([ dcc.Dropdown( id='crossfilter-yaxis-column', options=[{'label': i, 'value': i} for i in available_indicators], value='Life expectancy at birth, total (years)' ), dcc.RadioItems( id='crossfilter-yaxis-type', options=[{'label': i, 'value': i} for i in ['Linear', 'Log']], value='Linear', labelStyle={'display': 'inline-block'} ) ], style={'width': '49%', 'float': 'right', 'display': 'inline-block'}) ], style={ 'borderBottom': 'thin lightgrey solid', 'backgroundColor': 'rgb(250, 250, 250)', 'padding': '10px 5px' }), html.Div([ dcc.Graph( id='crossfilter-indicator-scatter', hoverData={'points': [{'customdata': 'Japan'}]} ) ], style={'width': '49%', 'display': 'inline-block', 'padding': '0 20'}), html.Div([ dcc.Graph(id='x-time-series'), dcc.Graph(id='y-time-series'), ], style={'display': 'inline-block', 'width': '49%'}), html.Div(dcc.Slider( id='crossfilter-year--slider', min=df['Year'].min(), max=df['Year'].max(), value=df['Year'].max(), marks={str(year): str(year) for year in df['Year'].unique()}, step=None ), style={'width': '49%', 'padding': '0px 20px 20px 20px'}) ]) @app.callback( dash.dependencies.Output('crossfilter-indicator-scatter', 'figure'), [dash.dependencies.Input('crossfilter-xaxis-column', 'value'), dash.dependencies.Input('crossfilter-yaxis-column', 'value'), dash.dependencies.Input('crossfilter-xaxis-type', 'value'), dash.dependencies.Input('crossfilter-yaxis-type', 'value'), dash.dependencies.Input('crossfilter-year--slider', 'value')]) def update_graph(xaxis_column_name, yaxis_column_name, xaxis_type, yaxis_type, year_value): dff = df[df['Year'] == year_value] return { 'data': [dict( x=dff[dff['Indicator Name'] == xaxis_column_name]['Value'], y=dff[dff['Indicator Name'] == yaxis_column_name]['Value'], text=dff[dff['Indicator Name'] == yaxis_column_name]['Country Name'], customdata=dff[dff['Indicator Name'] == yaxis_column_name]['Country Name'], mode='markers', marker={ 'size': 25, 'opacity': 0.7, 'color': 'orange', 'line': {'width': 2, 'color': 'purple'} } )], 'layout': 
dict( xaxis={ 'title': xaxis_column_name, 'type': 'linear' if xaxis_type == 'Linear' else 'log' }, yaxis={ 'title': yaxis_column_name, 'type': 'linear' if yaxis_type == 'Linear' else 'log' }, margin={'l': 40, 'b': 30, 't': 10, 'r': 0}, height=450, hovermode='closest' ) } def create_time_series(dff, axis_type, title): return { 'data': [dict( x=dff['Year'], y=dff['Value'], mode='lines+markers' )], 'layout': { 'height': 225, 'margin': {'l': 20, 'b': 30, 'r': 10, 't': 10}, 'annotations': [{ 'x': 0, 'y': 0.85, 'xanchor': 'left', 'yanchor': 'bottom', 'xref': 'paper', 'yref': 'paper', 'showarrow': False, 'align': 'left', 'bgcolor': 'rgba(255, 255, 255, 0.5)', 'text': title }], 'yaxis': {'type': 'linear' if axis_type == 'Linear' else 'log'}, 'xaxis': {'showgrid': False} } } @app.callback( dash.dependencies.Output('x-time-series', 'figure'), [dash.dependencies.Input('crossfilter-indicator-scatter', 'hoverData'), dash.dependencies.Input('crossfilter-xaxis-column', 'value'), dash.dependencies.Input('crossfilter-xaxis-type', 'value')]) def update_y_timeseries(hoverData, xaxis_column_name, axis_type): country_name = hoverData['points'][0]['customdata'] dff = df[df['Country Name'] == country_name] dff = dff[dff['Indicator Name'] == xaxis_column_name] title = '<b>{}</b><br>{}'.format(country_name, xaxis_column_name) return create_time_series(dff, axis_type, title) @app.callback( dash.dependencies.Output('y-time-series', 'figure'), [dash.dependencies.Input('crossfilter-indicator-scatter', 'hoverData'), dash.dependencies.Input('crossfilter-yaxis-column', 'value'), dash.dependencies.Input('crossfilter-yaxis-type', 'value')]) def update_x_timeseries(hoverData, yaxis_column_name, axis_type): dff = df[df['Country Name'] == hoverData['points'][0]['customdata']] dff = dff[dff['Indicator Name'] == yaxis_column_name] return create_time_series(dff, axis_type, yaxis_column_name) ###Output _____no_output_____ ###Markdown Serve the app using `run_server`. Unlike the standard `Dash.run_server` method, the `JupyterDash.run_server` method doesn't block execution of the notebook. It serves the app in a background thread, making it possible to run other notebook calculations while the app is running.This makes it possible to iterativly update the app without rerunning the potentially expensive data processing steps. ###Code app.run_server() ###Output Dash app running on https://jupyter-jsc.fz-juelich.de/user/[email protected]/jureca_login/proxy/8050/ ###Markdown By default, `run_server` displays a URL that you can click on to open the app in a browser tab. The `mode` argument to `run_server` can be used to change this behavior. Setting `mode="inline"` will display the app directly in the notebook output cell. ###Code app.run_server(mode="inline") ###Output _____no_output_____
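###Markdown Besides `mode="inline"`, jupyter-dash can also open the app in a separate browser tab or a JupyterLab panel. A short sketch of those options, assuming the standard `jupyter-dash` `run_server` keywords; the port and sizes below are illustrative, adjust them to your environment: ###Code
# Open the app in a separate browser tab (debug=True enables hot reloading)
app.run_server(mode="external", port=8050, debug=True)

# Embed the app in the notebook output with an explicit size
# app.run_server(mode="inline", width="100%", height=650)

# Inside JupyterLab, show the app in its own workspace tab
# app.run_server(mode="jupyterlab")
###Output _____no_output_____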
DAP_Lab4/.ipynb_checkpoints/DAP_Lab4-checkpoint.ipynb
###Markdown 1. Exception Handling a) Create a text file and manually add some data to the file b) Write Python code to • open the file for write only access • attempt to read the contents of the file c) Note the type of Error that has been raised. d) Modify your code to • use a try / except / finally construct that will catch the exception, print a user-friendly error message, and clean up the file resource e) Investigate how you would create your own Exception class. Then create your own Exception class and use it in your code from the previous exercise. ###Code # 1.a f = open("demo.txt", "w") f.write("Mary Jones, 34, [email protected], manager \n") f.write("Peter Adams, 45, [email protected], sales \n") f.close() ###Output _____no_output_____ ###Markdown Open the file and read it back ###Code f = open("demo.txt", "r") print(f.read()) f.close() ###Output _____no_output_____ ###Markdown b) Write Python code to- open the file for write only access - attempt to read the contents of the file ###Code f = open("demo.txt", "w") print(f.read()) f.close() ###Output _____no_output_____ ###Markdown c) Note the type of Error that has been raised. That is UnsupportedOperation. d) Modify your code to- use a try / except / finally construct that will catch the exception, - print a user-friendly error message, and clean up the file resource ###Code import sys import io try: f = open("demo.txt", "w") print(f.read()) except io.UnsupportedOperation: print("A type of 'UnsupportedOperation' exception was triggered.\nThis is because you have opened your file in a write mode, while you are trying to read it !") except: print("Other type of error {}".format(sys.exc_info()[2])) finally: f.close() ###Output _____no_output_____ ###Markdown e) Investigate how you would create your own Exception class. Then create your own Exception class and use it in your code from the previous exercise. ###Code class MyCustomException(io.UnsupportedOperation): pass try: f = open("demo.txt", "w") raise MyCustomException('You have opened your file in an incompatible mode') print(f.read()) except io.UnsupportedOperation: print("Handled by '{}': {}".format(sys.exc_info()[0], sys.exc_info()[1])) except: print("Other type of error {}".format(sys.exc_info()[2])) finally: f.close() ###Output _____no_output_____ ###Markdown 2. Numpy Exercise A a) Create an array with the arange function and reshape the array as follows: b = arange(24).reshape(2,3,4) This gives us a 3-dimensional data structure – you can think of it as being like 2 spreadsheet sheets where each sheet contains 3 rows of data and each row contains 4 columns. Using indexing and slicing perform the following tasks: i) Choose the first set of 3 rows and 4 columns of data ii) Choose the second row of data from the second set of 3 rows of data iii) Choose all the data from the second column for both the first and second sets of rows and columns of data b) Use the ravel function to flatten the data. What's the difference between ravel and flatten? c) Reshape the data so that there are 6 rows of 4 columns per row. d) Get the transpose of the new data structure. e) Restack the rows of the transposed data structure in reverse order (hint: look at the row_stack function). f) Split the resulting data structure horizontally (hint: look at the hsplit function).
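###Markdown Hint for part b): `ravel` returns a view of the original array whenever it can (no data is copied, so writing to the result can change `b`), while `flatten` always returns an independent copy. A quick check, as a sketch: ###Code
import numpy as np

b = np.arange(24).reshape(2, 3, 4)
r = b.ravel()    # a view for a contiguous array like this one
f = b.flatten()  # always a new copy

r[0] = 99        # writes through to b
f[1] = 77        # does not touch b
print(b[0, 0, 0], b[0, 0, 1])  # 99 1
###Output _____no_output_____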
###Code # 2.a import numpy as np b = np.arange(24).reshape(2,3,4) print(f"2.a.i => {b[0]}") print(f"2.a.ii => {b[1,1,:]}") print(f"2.a.iii => {b[:,1,:]}") # 2.b b = b.ravel() print(f"2.b => {b}") # 2.c b = b.reshape(6,4) print(f"2.c => {b}") #2.d b = b.T print(f"2.d => {b}") # 2.e b = np.flip(b) print(f"2.e => {b}") # 2.f b = np.hsplit(b,2) print(f"2.f => {b}") ###Output 2.a.i => [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] 2.a.ii => [16 17 18 19] 2.a.iii => [[ 4 5 6 7] [16 17 18 19]] 2.b => [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23] 2.c => [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11] [12 13 14 15] [16 17 18 19] [20 21 22 23]] 2.d => [[ 0 4 8 12 16 20] [ 1 5 9 13 17 21] [ 2 6 10 14 18 22] [ 3 7 11 15 19 23]] 2.e => [[23 19 15 11 7 3] [22 18 14 10 6 2] [21 17 13 9 5 1] [20 16 12 8 4 0]] 2.f => [array([[23, 19, 15], [22, 18, 14], [21, 17, 13], [20, 16, 12]]), array([[11, 7, 3], [10, 6, 2], [ 9, 5, 1], [ 8, 4, 0]])] ###Markdown 3. NumPy Exercise BNOTE:The AAPL.csv contains some stock price data for Apple. The MSFT.csv contains some stock price data for Microsoft.a) Use the loadtxt command to load data from AAPL.csv from columns 5 and 7 (i.e., the close price and the volume).b) Based on thedataprovided,calculatethevolumeweightedaveragepriceforthe stock (i.e., calculate the average price using the volume as weight values).c) Calculate the median value of the closing prices (hint: use the median function).d) Calculatethevariancevalueoftheclosingprices.e) Again, use the loadtxt command to load data from columns 3 and 4 (i.e., the highprices and the low prices).f) Use the max and min functions to get the highest high and the lowest low value.g) Load data from column 5 of AAPL.csv. Also, load data from column 5 of MSFT.csv.h) CalculatethecovariancematrixoftheclosingpricesofAAPLandMSFT(hint:usethe cov function).i) View the values on the diagonal (hint: diagonal).j) Calculate the correlation coefficient of the closing prices of AAPL and MSFT (hint:corrcoef). 
###Code import numpy as np # 3.a aapl = np.loadtxt(fname="/Users/sobil/Documents/MSC/Sem 1/Database & Analytical Programming/Lab/Lab - 4/AAPL.csv", delimiter=",", usecols=(4,6), skiprows=1) print(f"3.a :: aapl[5,7] => {aapl[0:5,:]}") # 3.b aapl_weighted_price = np.average(aapl[:,0], axis= 0, weights= aapl[:,1]) print(f"3.b :: aapl_weighted_price => {aapl_weighted_price}") # 3.c print(f"3.c :: median => {np.median(aapl[:,0])}") # 3.d print(f"3.d :: variance => {np.var(aapl[:,0])}") # 3.e aapl = np.loadtxt("/Users/sobil/Documents/MSC/Sem 1/Database & Analytical Programming/Lab/Lab - 4/AAPL.csv", delimiter=",", usecols=(2,3), skiprows=1) print(f"3.e :: aapl[3,4] => {aapl[0:5,:]}") # 3.f print(f"3.f :: highest high => {np.max(aapl[:,0])}") print(f"3.f :: & lowest low => {np.min(aapl[:,1])}") # 3.g aapl = np.loadtxt(fname="/Users/sobil/Documents/MSC/Sem 1/Database & Analytical Programming/Lab/Lab - 4/AAPL.csv", delimiter=",", usecols=(4), skiprows=1) msft = aapl = np.loadtxt(fname="/Users/sobil/Documents/MSC/Sem 1/Database & Analytical Programming/Lab/Lab - 4/MSFT.csv", delimiter=",", usecols=(4), skiprows=1) print(f"3.g :: aapl[5] => {aapl[0:5]}") print(f"3.g :: & msft[5] => {aapl[0:5]}") # 3.h print(f"3.h :: covariance => {np.cov(aapl,msft)}") # 3.i print(f"3.i :: diagnol => {np.cov(aapl,msft).diagonal()}") # 3.j print(f"3.h :: correlation => {np.cov(aapl,msft)}") ###Output 3.a :: aapl[5,7] => [[2.23770004e+02 2.96639000e+07] [2.26869995e+02 2.68910000e+07] [2.16360001e+02 4.19906000e+07] [2.14449997e+02 5.31244000e+07] [2.22110001e+02 4.03379000e+07]] 3.b :: aapl_weighted_price => 190.02952578964312 3.c :: median => 197.0 3.d :: variance => 432.8804568188714 3.e :: aapl[3,4] => [[224.800003 220.199997] [227.270004 222.25 ] [226.350006 216.050003] [219.5 212.320007] [222.880005 216.839996]] 3.f :: highest high => 229.929993 3.f :: & lowest low => 142.0 3.g :: aapl[5] => [110.849998 112.260002 106.160004 105.910004 109.57 ] 3.g :: & msft[5] => [110.849998 112.260002 106.160004 105.910004 109.57 ] 3.h :: covariance => [[184.73421932 184.73421932] [184.73421932 184.73421932]] 3.i :: diagnol => [184.73421932 184.73421932] ###Markdown 4. Regular ExpresssionsWrite a Python program that will identify URLs using regular expressions. ###Code import re regex = r"http[s]?:\/\/\w+\.com?" testData = "https://google.comhttp://google.comhtts://google.comhttp:/google.com" x = re.findall(regex, testData) print(x) ###Output ['https://google.com', 'http://google.com']
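###Markdown Note that the pattern above only catches bare `.com`-style hosts with a single-word domain, which is why two of the four test URLs are missed. A slightly broader pattern is sketched below; the pattern and test string are illustrative only, and matching every legal URL would need a far more involved expression: ###Code
import re

# Sketch of a broader pattern: scheme, dotted host, optional path/query
url_regex = r"https?://[\w.-]+\.[a-zA-Z]{2,}(?:/[^\s]*)?"
test_data = "See https://google.com http://sub.example.co.uk/path?q=1 and htts://broken.com"
print(re.findall(url_regex, test_data))
# ['https://google.com', 'http://sub.example.co.uk/path?q=1']
###Output _____no_output_____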
Numbers_and_operators/Numbers_and_operators.ipynb
###Markdown [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Center-for-Health-Data-Science/PythonTsunami/blob/intro/Numbers_and_operators/Numbers_and_operators.ipynb) Numerical Operators*Prepared by [Katarina Nastou](https://www.cpr.ku.dk/staff/?pure=en/persons/672471)* Objectives- understand differences between `int`s and `float`s- work with simple math operators- add comments to your code NumbersTwo main types of numbers:- Integers: `56, 3, -90`- Floating Points: `5.666, 0.0, -8.9` Operators - addition: `+`- subtraction: `-`- multiplication: `*`- division: `/`- exponentiation, power: `**`- modulo: `%`- integer division: `//` (what does it return?) ###Code # playground ###Output _____no_output_____ ###Markdown Qestions: Ints and Floats- Question 1: Which of the following numbers is NOT a float? (a) 0 (b) 2.3 (c) 23.0 (d) -23.0 (e) 0.0 - Question 2: What type does the following expression result in? ```python3.0 + 5``` Operators 1- Question 3: How can we add parenthesis to the following expression to make it equal 100? ```python1 + 9 * 10``` - Question 4: What is the result of the following expression?```python3 + 14 * 2 + 4 * 5```- Question 5: What is the result of the following expression```python5 * 9 / 4 ** 3 - 6 * 7``` ###Code ###Output _____no_output_____ ###Markdown Comments- Question 6: What is the result of running this code? ```python15 / 3 * 2 + 1 ``` ###Code ###Output _____no_output_____ ###Markdown [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Center-for-Health-Data-Science/PythonTsunami/blob/intro/Numbers_and_operators/Numbers_and_operators.ipynb) Numerical Operators*Prepared by [Katarina Nastou](https://www.cpr.ku.dk/staff/?pure=en/persons/672471)* Objectives- understand differences between `int`s and `float`s- work with simple math operators- add comments to your code NumbersTwo main types of numbers:- Integers: `56, 3, -90`- Floating Points: `5.666, 0.0, -8.9` Operators - addition: `+`- subtraction: `-`- multiplication: `*`- division: `/`- exponentiation, power: `**`- modulo: `%`- integer division: `//` (what does it return?) ###Code # playground ###Output _____no_output_____ ###Markdown Qestions: Ints and Floats- Question 1: Which of the following numbers is NOT a float? (a) 0 (b) 2.3 (c) 23.0 (d) -23.0 (e) 0.0 - Question 2: What type does the following expression result in? ```python3.0 + 5``` Operators 1- Question 3: How can we add parenthesis to the following expression to make it equal 100? ```python1 + 9 * 10``` - Question 4: What is the result of the following expression?```python3 + 14 * 2 + 4 * 5```- Question 5: What is the result of the following expression```python5 * 9 / 4 ** 3 - 6 * 7``` ###Code ###Output _____no_output_____ ###Markdown Comments- Question 6: What is the result of running this code? 
```python15 / 3 * 2 + 1 ``` ###Code ###Output _____no_output_____ ###Markdown [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Center-for-Health-Data-Science/PythonTsunami/blob/fall2021/Numbers_and_operators/Numbers_and_operators.ipynb) Numerical Operators*Prepared by [Katarina Nastou](https://www.cpr.ku.dk/staff/?pure=en/persons/672471)* Objectives- understand differences between `int`s and `float`s- work with simple math operators- add comments to your code NumbersTwo main types of numbers:- Integers: `56, 3, -90`- Floating Points: `5.666, 0.0, -8.9` Operators - addition: `+`- subtraction: `-`- multiplication: `*`- division: `/`- exponentiation, power: `**`- modulo: `%`- integer division: `//` (what does it return?) ###Code # playground ###Output _____no_output_____ ###Markdown Qestions: Ints and Floats - Question 1: What type does the following expression result in? ```python3.0 + 5``` Operators 1- Question 2: How can we add parenthesis to the following expression to make it equal 100? ```python1 + 9 * 10``` - Question 3: What is the result of the following expression?```python3 + 14 * 2 + 4 * 5```- Question 4: What is the result of the following expression```python5 * 9 / 4 ** 3 - 6 * 7``` ###Code ###Output _____no_output_____ ###Markdown Comments- Question 5: What is the result of running this code? ```python15 / 3 * 2 + 1 ``` ###Code ###Output _____no_output_____ ###Markdown [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Center-for-Health-Data-Science/PythonTsunami/blob/intro/Numbers_and_operators/Numbers_and_operators.ipynb) Numerical Operators*Prepared by [Katarina Nastou](https://www.cpr.ku.dk/staff/?pure=en/persons/672471)* Objectives- understand differences between `int`s and `float`s- work with simple math operators- add comments to your code NumbersTwo main types of numbers:- Integers: `56, 3, -90`- Floating Points: `5.666, 0.0, -8.9` Operators - addition: `+`- subtraction: `-`- multiplication: `*`- division: `/`- exponentiation, power: `**`- modulo: `%`- integer division: `//` (what does it return?) ###Code # playground ###Output _____no_output_____ ###Markdown Qestions: Ints and Floats- Question 1: Which of the following numbers is NOT a float? (a) 0 (b) 2.3 (c) 23.0 (d) -23.0 (e) 0.0 - Question 2: What type does the following expression result in? ```python3.0 + 5``` Operators 1- Question 3: How can we add parenthesis to the following expression to make it equal 100? ```python1 + 9 * 10``` - Question 4: What is the result of the following expression?```python3 + 14 * 2 + 4 * 5```- Question 5: What is the result of the following expression```python5 * 9 / 4 ** 3 - 6 * 7``` ###Code ###Output _____no_output_____ ###Markdown Comments- Question 6: What is the result of running this code? ```python15 / 3 * 2 + 1 ``` ###Code ###Output _____no_output_____
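###Markdown If you want to check your answers in the playground, the quick experiments below show what each operator returns (plain Python, nothing beyond the operators listed above): ###Code
print(7 / 2)     # 3.5  -> / always gives a float
print(7 // 2)    # 3    -> // is integer (floor) division; two ints give an int
print(-7 // 2)   # -4   -> it floors, i.e. rounds toward negative infinity
print(7.0 // 2)  # 3.0  -> a float as soon as one operand is a float
print(7 % 2)     # 1    -> modulo gives the remainder
print(2 ** 10)   # 1024 -> exponentiation
print(3.0 + 5)   # 8.0  -> mixing an int with a float gives a float
###Output _____no_output_____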
practicum/ps04_networks_from_text.ipynb
###Markdown Practice Session 04: Networks from text In this session we will learn to construct a network from a set of implicit relationships. The relationships that we will study are between accounts in Twitter, a micro-blogging service.We will create two networks: one directed and one undirected.* In the **directed mention network**, we will say that there is a link of weight *w* from account *x* to account *y*, if account *x* has re-tweeted (re-posted) or mentioned *w* times account *y*.* In the **undirected co-mention network**, we will say that there is a link of weight *w* between accounts *x* and *y*, if both accounts have been mentioned together in *w* tweets.The input material you will use is a file named `CovidLockdownCatalonia.json.gz` available in the [data/](data/) directory. This is a gzip-compressed file, which you can de-compress using the `gunzip` command. The file contain about 35,500 messages ("tweets") posted between March 13th, 2020, and March 14th, 2020, containing a hashtag or keyword related to COVID-19, and posted by a user declaring a location in Catalonia.The tweets are in a format known as [JSON](https://en.wikipedia.org/wiki/JSONExample). Python's JSON library takes care of translating it into a dictionary.**How was this file obtained?** This file was obtained from the [CrisisNLP](https://crisisnlp.qcri.org/covid19). This is a website that provides COVID-19 collections of tweets, however, they only provide the identifier of the tweet, known as a tweet-id. To recover the entire tweet, a process commonly known as *re-hydration* was used, which involves querying an API from Twitter, giving the tweet-id, and obtaining the tweet. This can be done with a little bit of programming or using a software such as [twarc](https://github.com/DocNow/twarcdehydrate).(Remove this cell when delivering.) Author: Your name hereE-mail: Your e-mail hereDate: The current date here 1. Create the directed mention network Create the **directed mention network**, which has a weighted edge (source, target, weight) if user *source* mentioned user *target* at least once; with *weight* indicating the number of mentions.Create two files: one containing all edges, and one containing all edges having *count* greater or equal than 2.(Remove this cell when delivering.) ###Code import io import json import gzip import csv import re # Leave this code as-is # Input file COMPRESSED_INPUT_FILENAME = "CovidLockdownCatalonia.json.gz" # These are the output files, leave as-is OUTPUT_ALL_EDGES_FILENAME = "CovidLockdownCatalonia.csv" OUTPUT_FILTERED_EDGES_FILENAME = "CovidLockdownCatalonia-min-weight-filtered.csv" OUTPUT_CO_MENTIONS_FILENAME = "CovidLockdownCatalonia-co-mentions.csv" ###Output _____no_output_____ ###Markdown 1.1. Extract mentions The `extract_mentions(text)` functions is used to extract mentions, so that if we give, for instance `RT @Jordi: check this post by @Xavier`, it returns the list `["Jordi", "Xavier"]`.Note that you will need an `import re` command must be at the beginning of the file, together with the other imports. You may need to execute the cell that contains the import by pressing `Shift-Enter` on it.You can now print all the people mentioned in a tweet by doing:```pythonmentions = extract_mentions(message)for mention in mentions: print("%s mentioned %s" % (author, mention))```(Remove this cell when delivering.) 
###Code # Leave this code as-is def extract_mentions(text): return re.findall("@([a-zA-Z0-9_]{5,20})", text) print(extract_mentions("RT @Jordi: check this post by @Xavier")) ###Output ['Jordi', 'Xavier'] ###Markdown 1.2. Count mentions We do not need to uncompress this file (it is about 236 MB uncompressed, but only 31 MB compressed), but we can read it directly while it is compressed.```pythonwith gzip.open(COMPRESSED_INPUT_FILENAME, "rt", encoding="utf-8") as input_file: for line in input_file: tweet = json.loads(line) author = tweet["user"]["screen_name"] message = tweet["full_text"] print("%s: '%s'" % (author, message))```To count how many times a mention happen, you will keep a dictionary:```pythonmentions_counter = {}```Each key in the dictionary will be a tuple `(author, mention)` where `author` is the username of the person who writes the message, and `mention` the username of someone who is mentioned in the message. To update the dictionary, use this code while you are reading the input file:```pythonfor mention in mentions: key = (author, mention) if key in mentions_counter: mentions_counter[key] += 1 else: mentions_counter[key] = 1```(Remove this cell when delivering.) Replace this cell with your code to read the compressed input file and create the mentions_counter dictionary. Print the number of times the account `joanmariapique` mentioned `catalangov`. It should be 9.(Remove this cell when delivering.) Replace this cell with your code to print the number of times the account `joanmariapique` mentioned `catalangov`. Now we write a file with all the edges in this graph (Source, Target, Weight) as a tab-separated file.(Remove this cell when delivering.) ###Code # Leave this code as-is with io.open(OUTPUT_ALL_EDGES_FILENAME, "w") as output_file: writer = csv.writer(output_file, delimiter='\t', quotechar='"') writer.writerow(["Source", "Target", "Weight"]) for key in mentions_counter: author = key[0] mention = key[1] weight = mentions_counter[key] writer.writerow([author, mention, weight]) ###Output _____no_output_____ ###Markdown Practice Session 04: Networks from text In this session we will learn to construct a network from a set of implicit relationships. The relationships that we will study are between accounts in Twitter, a micro-blogging service.We will create two networks: one directed and one undirected.* In the **directed mention network**, we will say that there is a link of weight *w* from account *x* to account *y*, if account *x* has re-tweeted (re-posted) or mentioned *w* times account *y*.* In the **undirected co-mention network**, we will say that there is a link of weight *w* between accounts *x* and *y*, if both accounts have been mentioned together in *w* tweets.The input material you will use is a file named `CovidLockdownCatalonia.json.gz` available in the [data/](data/) directory. This is a gzip-compressed file, which you can de-compress using the `gunzip` command. The file contain about 35,500 messages ("tweets") posted between March 13th, 2020, and March 14th, 2020, containing a hashtag or keyword related to COVID-19, and posted by a user declaring a location in Catalonia.The tweets are in a format known as [JSON](https://en.wikipedia.org/wiki/JSONExample). Python's JSON library takes care of translating it into a dictionary.**How was this file obtained?** This file was obtained from the [CrisisNLP](https://crisisnlp.qcri.org/covid19). 
This is a website that provides COVID-19 collections of tweets, however, they only provide the identifier of the tweet, known as a tweet-id. To recover the entire tweet, a process commonly known as *re-hydration* was used, which involves querying an API from Twitter, giving the tweet-id, and obtaining the tweet. This can be done with a little bit of programming or using a software such as [twarc](https://github.com/DocNow/twarcdehydrate).(Remove this cell when delivering.) Author: Your name hereE-mail: Your e-mail hereDate: The current date here 1. Create the directed mention network Create the **directed mention network**, which has a weighted edge (source, target, weight) if user *source* mentioned user *target* at least once; with *weight* indicating the number of mentions.Create two files: one containing all edges, and one containing all edges having *count* greater or equal than 2.(Remove this cell when delivering.) ###Code import io import json import gzip import csv import re # Leave this code as-is # Input file COMPRESSED_INPUT_FILENAME = "CovidLockdownCatalonia.json.gz" # These are the output files, leave as-is OUTPUT_ALL_EDGES_FILENAME = "CovidLockdownCatalonia.csv" OUTPUT_FILTERED_EDGES_FILENAME = "CovidLockdownCatalonia-min-weight-filtered.csv" OUTPUT_CO_MENTIONS_FILENAME = "CovidLockdownCatalonia-co-mentions.csv" ###Output _____no_output_____ ###Markdown 1.1. Extract mentions The `extract_mentions(text)` functions is used to extract mentions, so that if we give, for instance `RT @Jordi: check this post by @Xavier`, it returns the list `["Jordi", "Xavier"]`.Note that you will need an `import re` command must be at the beginning of the file, together with the other imports. You may need to execute the cell that contains the import by pressing `Shift-Enter` on it.You can now print all the people mentioned in a tweet by doing:```pythonmentions = extract_mentions(message)for mention in mentions: print("%s mentioned %s" % (author, mention))```(Remove this cell when delivering.) ###Code # Leave this code as-is def extract_mentions(text): return re.findall("@([a-zA-Z0-9_]{5,20})", text) print(extract_mentions("RT @Jordi: check this post by @Xavier")) ###Output ['Jordi', 'Xavier'] ###Markdown 1.2. Count mentions We do not need to uncompress this file (it is about 236 MB uncompressed, but only 31 MB compressed), but we can read it directly while it is compressed.```pythonwith gzip.open(COMPRESSED_INPUT_FILENAME, "rt", encoding="utf-8") as input_file: for line in input_file: tweet = json.loads(line) author = tweet["user"]["screen_name"] message = tweet["full_text"] print("%s: '%s'" % (author, message))```To count how many times a mention happen, you will keep a dictionary:```pythonmentions_counter = {}```Each key in the dictionary will be a tuple `(author, mention)` where `author` is the username of the person who writes the message, and `mention` the username of someone who is mentioned in the message. To update the dictionary, use this code while you are reading the input file:```pythonfor mention in mentions: key = (author, mention) if key in mentions_counter: mentions_counter[key] += 1 else: mentions_counter[key] = 1```(Remove this cell when delivering.) Replace this cell with your code to read the compressed input file and create the mentions_counter dictionary. Print the number of times the account `joanmariapique` mentioned `catalangov`. It should be 9.(Remove this cell when delivering.) 
Replace this cell with your code to print the number of times the account `joanmariapique` mentioned `catalangov`. Now we write a file with all the edges in this graph (Source, Target, Weight) as a tab-separated file.(Remove this cell when delivering.) ###Code # Leave this code as-is with io.open(OUTPUT_ALL_EDGES_FILENAME, "w") as output_file: writer = csv.writer(output_file, delimiter='\t', quotechar='"', lineterminator='\n') writer.writerow(["Source", "Target", "Weight"]) for key in mentions_counter: author = key[0] mention = key[1] weight = mentions_counter[key] writer.writerow([author, mention, weight]) ###Output _____no_output_____
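###Markdown The assignment also asks for a second file keeping only the edges with weight greater or equal than 2. A sketch that follows the same pattern as the cell above, assuming `mentions_counter` and the imports from the earlier cells: ###Code
MIN_WEIGHT = 2

with io.open(OUTPUT_FILTERED_EDGES_FILENAME, "w") as output_file:
    writer = csv.writer(output_file, delimiter='\t', quotechar='"')
    writer.writerow(["Source", "Target", "Weight"])
    for (author, mention), weight in mentions_counter.items():
        if weight >= MIN_WEIGHT:
            writer.writerow([author, mention, weight])
###Output _____no_output_____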
example/text_alignment.ipynb
###Markdown `genalog.text` module:This module is responsible for:- Text alignment- NER label propagation using text alignment results`genalog` provides two methods of alignment:1. `genalog.text.anchor.align_w_anchor()`1. `genalog.text.alignment.align()``align_w_anchor()` implements the Recursive Text Alignment Scheme (RETAS) from the paper [A Fast Alignment Scheme for Automatic OCR Evaluation of Books](https://ieeexplore.ieee.org/abstract/document/6065412) and works best on longer text strings, while `align()` implement the [Needleman-Wunsch algorithm](https://en.wikipedia.org/wiki/Needleman%E2%80%93Wunsch_algorithm) and works best on shorter strings. We recommend using the `align_w_anchor()` method on inputs longer than **200 characters**. Both methods share the same function contract and are interchangeable. We use [Biopython](https://biopython.org/)'s implementation of the Needleman-Wunsch algorithm for text alignment.This algorithm is an exhaustive search for all possible candidates with dynamic programming. It produces weighted score for each candidate and returns those having the highest score. (**NOTE** that multiple candidates can share the same score)This algorithm has 4 hyperparameters for tuning candidate scores:1. **Match Reward** - how much the algorithm rewards matching characters1. **Mismatch Penalty** - how much the algorithm penalizes mismatching characters1. **Gap Penalty** - how much the algorithm penalizes for creating a gap with a GAP_CHAR (defaults to '@')1. **Gap Extension Penalty** - how much the algorithm penalizes for extending a gap (ex "@@@@")You can find the default values for these four parameters as a constant in the package:1. `genalog.text.alignment.MATCH_REWARD`1. `genalog.text.alignment.MISMATCH_PENALTY`1. `genalog.text.alignment.GAP_PENALTY`1. `genalog.text.alignment.GAP_EXT_PENALTY`We will demonstrate text alignment here. ###Code gt_txt = "New York is big" noise_txt = "New Yo rkis" # RETAS method from genalog.text import anchor # Extra whitespaces are removed aligned_gt, aligned_noise = anchor.align_w_anchor(gt_txt, noise_txt) print(f"Aligned ground truth: {aligned_gt}") print(f"Aligned noise: {aligned_noise}") # Needleman-Wunsch alignment ONLY from genalog.text import alignment aligned_gt, aligned_noise = alignment.align(gt_txt, noise_txt) print(f"Aligned ground truth: {aligned_gt}") print(f"Aligned noise: {aligned_noise}") from genalog.text import alignment # Process the aligned strings to find out how the tokens are related gt_to_noise_mapping, noise_to_gt_mapping = alignment.parse_alignment(aligned_gt, aligned_noise, gap_char="@") print(f"gt_to_noise: {gt_to_noise_mapping}") print(f"noise_to_gt: {noise_to_gt_mapping}") # Format aligned string for better display print(alignment._format_alignment(aligned_gt, aligned_noise)) ###Output New Yo@rk is @ big@ ||||||.||.||||||||. New Yo rk@is @ big
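###Markdown To make the four scores above concrete, here is a sketch that calls Biopython's classic `pairwise2` interface directly (the machinery genalog builds on). The numeric values below are illustrative only, not genalog's defaults: ###Code
from Bio import pairwise2
from Bio.pairwise2 import format_alignment

gt_txt = "New York is big"
noise_txt = "New Yo rkis"

# globalms(seqA, seqB, match, mismatch, gap_open, gap_extend)
alignments = pairwise2.align.globalms(
    gt_txt, noise_txt,
    2,      # match reward
    -0.5,   # mismatch penalty
    -1,     # gap penalty (opening a gap)
    -0.5,   # gap extension penalty
    gap_char="@",
    one_alignment_only=True,
)
print(format_alignment(*alignments[0]))
###Output _____no_output_____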
Course-02-Advanced-ML-and-Signal-Processing/AssignmentML3.ipynb
###Markdown This is the third assignment for the Coursera course "Advanced Machine Learning and Signal Processing"Just execute all cells one after the other and you are done - just note that in the last one you must update your email address (the one you've used for coursera) and obtain a submission token, you get this from the programming assignment directly on coursera.Please fill in the sections labelled with "YOUR_CODE_GOES_HERE" ###Code !wget https://github.com/IBM/coursera/raw/master/coursera_ml/a2.parquet from pyspark.sql import SparkSession # initialise sparkContext spark = SparkSession.builder \ .master('local') \ .appName('myAppName') \ .config('spark.executor.memory', '1gb') \ .config("spark.cores.max", "2") \ .getOrCreate() sc = spark.sparkContext # using SQLContext to read parquet file from pyspark.sql import SQLContext sqlContext = SQLContext(sc) import pyspark.sql.functions as F ###Output _____no_output_____ ###Markdown Now it’s time to have a look at the recorded sensor data. You should see data similar to the one exemplified below…. ###Code df=spark.read.load('a2.parquet') df.createOrReplaceTempView("df") spark.sql("SELECT * from df").show() ###Output +-----+-----------+-------------------+-------------------+-------------------+ |CLASS| SENSORID| X| Y| Z| +-----+-----------+-------------------+-------------------+-------------------+ | 0| 26| 380.66434005495194| -139.3470983812975|-247.93697521077704| | 0| 29| 104.74324299209692| -32.27421440203938|-25.105013725863852| | 0| 8589934658| 118.11469236129976| 45.916682927433534| -87.97203782706572| | 0|34359738398| 246.55394030642543|-0.6122810693132044|-398.18662513951506| | 0|17179869241|-190.32584900181487| 234.7849657520335|-206.34483804019288| | 0|25769803830| 178.62396382387422| -47.07529438881511| 84.38310769821979| | 0|25769803831| 85.03128805189493|-4.3024316644854546|-1.1841857567516714| | 0|34359738411| 26.786262674736566| -46.33193951911338| 20.880756008396055| | 0| 8589934592|-16.203752396859194| 51.080957032176954| -96.80526656416971| | 0|25769803852| 47.2048142440404| -78.2950899652916| 181.99604091494786| | 0|34359738369| 15.608872398939273| -79.90322809181754| 69.62150711098005| | 0| 19|-4.8281721129789315| -67.38050508399905| 221.24876396496404| | 0| 54| -98.40725712852762|-19.989364074314732| -302.695196085276| | 0|17179869313| 22.835845394816594| 17.1633660118843| 32.877914832011385| | 0|34359738454| 84.20178070080324| -32.81572075916947| -48.63517643958031| | 0| 0| 56.54732521345129| -7.980106018032676| 95.05162719436447| | 0|17179869201| -57.6008655247749| 5.135393798773895| 236.99158698947267| | 0|17179869308| -65.59264738389012| -48.92660057215126| -61.58970715383383| | 0|25769803790| 34.82337351291005| 9.483542084393937| 197.6066372962772| | 0|25769803825| 39.80573823439121|-0.7955236412785212| -79.66652640650325| +-----+-----------+-------------------+-------------------+-------------------+ only showing top 20 rows ###Markdown Let’s check if we have balanced classes – this means that we have roughly the same number of examples for each class we want to predict. 
This is important for classification but also helpful for clustering ###Code spark.sql("SELECT count(class), class from df group by class").show() ###Output +------------+-----+ |count(class)|class| +------------+-----+ | 1416| 1| | 1626| 0| +------------+-----+ ###Markdown Let's create a VectorAssembler which consumes columns X, Y and Z and produces a column “features” ###Code from pyspark.ml.feature import VectorAssembler vectorAssembler = VectorAssembler(inputCols=["X","Y","Z"], outputCol="features") ###Output _____no_output_____ ###Markdown Please instantiate a clustering algorithm from the SparkML package and assign it to the clust variable. Here we don’t need to take care of the “CLASS” column since we are in unsupervised learning mode – so let’s pretend to not even have the “CLASS” column for now – but it will become very handy later in assessing the clustering performance. PLEASE NOTE – IN REAL-WORLD SCENARIOS THERE IS NO CLASS COLUMN – THEREFORE YOU CAN’T ASSESS CLASSIFICATION PERFORMANCE USING THIS COLUMN ###Code from pyspark.ml.clustering import KMeans ###YOUR_CODE_GOES_HERE### # Trains a k-means model. clust = KMeans().setK(2).setSeed(1) ###Output _____no_output_____ ###Markdown Let’s train... ###Code from pyspark.ml import Pipeline pipeline = Pipeline(stages=[vectorAssembler, clust]) model = pipeline.fit(df) ###Output _____no_output_____ ###Markdown ...and evaluate... ###Code prediction = model.transform(df) prediction.show() prediction.createOrReplaceTempView('prediction') spark.sql(''' select max(correct)/max(total) as accuracy from ( select sum(correct) as correct, count(correct) as total from ( select case when class != prediction then 1 else 0 end as correct from prediction ) union select sum(correct) as correct, count(correct) as total from ( select case when class = prediction then 1 else 0 end as correct from prediction ) ) ''').rdd.map(lambda row: row.accuracy).collect()[0] ###Output _____no_output_____ ###Markdown If you reached at least 55% accuracy you are fine to submit your predictions to the grader. Otherwise please experiment with the parameter settings of your clustering algorithm, use a different algorithm, or just re-record your data and try to obtain a higher accuracy. In case you are stuck, please use the Coursera Discussion Forum. Please note again – in a real-world scenario there is no way of doing this – since there is no class label in your data.
Please have a look at this further reading on clustering performance evaluation https://en.wikipedia.org/wiki/Cluster_analysis#Evaluation_and_assessment ###Code !rm -f rklib.py !wget https://raw.githubusercontent.com/IBM/coursera/master/rklib.py !rm -Rf a2_m3.json prediction= prediction.repartition(1) prediction.write.json('a2_m3.json') import os import zipfile def zipdir(path, ziph): for root, dirs, files in os.walk(path): for file in files: ziph.write(os.path.join(root, file)) zipf = zipfile.ZipFile('a2_m3.json.zip', 'w', zipfile.ZIP_DEFLATED) zipdir('a2_m3.json', zipf) zipf.close() !base64 a2_m3.json.zip > a2_m3.json.zip.base64 from rklib import submit key = "pPfm62VXEeiJOBL0dhxPkA" part = "EOTMs" email = None###YOUR_CODE_GOES_HERE### token = None###YOUR_CODE_GOES_HERE### # (have a look here if you need more information on how to obtain the token https://youtu.be/GcDo0Rwe06U?t=276) with open('a2_m3.json.zip.base64', 'r') as myfile: data=myfile.read() submit(email, token, key, part, [part], data) ###Output Submission successful, please check on the coursera grader page for the status ------------------------- {"elements":[{"itemId":"Cu6KW","id":"f_F-qCtuEei_fRLwaVDk3g~Cu6KW~Bmv5VEoGEeqwMRKT0cFzMQ","courseId":"f_F-qCtuEei_fRLwaVDk3g"}],"paging":{},"linked":{}} -------------------------
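###Markdown As noted above, in a real-world clustering task there is no CLASS column to check against. An intrinsic measure such as the silhouette score works without labels; a sketch using Spark's built-in evaluator on the `prediction` DataFrame from the evaluation step above: ###Code
from pyspark.ml.evaluation import ClusteringEvaluator

evaluator = ClusteringEvaluator(featuresCol="features", predictionCol="prediction")
silhouette = evaluator.evaluate(prediction)  # squared-Euclidean silhouette, in [-1, 1]
print("Silhouette with squared Euclidean distance = %g" % silhouette)
###Output _____no_output_____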
activitysim/examples/example_estimation/notebooks/14_joint_tour_scheduling.ipynb
###Markdown Estimating Joint Tour SchedulingThis notebook illustrates how to re-estimate the joint tour scheduling component for ActivitySim. This process includes running ActivitySim in estimation mode to read household travel survey files and write outthe estimation data bundles used in this notebook. To review how to do so, please visit the othernotebooks in this directory. Load libraries ###Code import os import larch # !conda install larch -c conda-forge # for estimation import pandas as pd ###Output _____no_output_____ ###Markdown We'll work in our `test` directory, where ActivitySim has saved the estimation data bundles. ###Code os.chdir('test') ###Output _____no_output_____ ###Markdown Load data and prep model for estimation ###Code modelname = "joint_tour_scheduling" from activitysim.estimation.larch import component_model model, data = component_model(modelname, return_data=True) ###Output _____no_output_____ ###Markdown Review data loaded from the EDBThe next (optional) step is to review the EDB, including the coefficients, utilities specification, and chooser and alternative data. Coefficients ###Code data.coefficients ###Output _____no_output_____ ###Markdown Utility specification ###Code data.spec ###Output _____no_output_____ ###Markdown Chooser data ###Code data.chooser_data ###Output _____no_output_____ ###Markdown Alternatives data ###Code data.alt_values ###Output _____no_output_____ ###Markdown EstimateWith the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large enough household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has a built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the `scipy` package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters. ###Code model.estimate() ###Output req_data does not request avail_ca or avail_co but it is set and being provided ###Markdown Estimated coefficients ###Code model.parameter_summary() ###Output _____no_output_____ ###Markdown Output Estimation Results ###Code from activitysim.estimation.larch import update_coefficients result_dir = data.edb_directory/"estimated" update_coefficients( model, data, result_dir, output_file=f"{modelname}_coefficients_revised.csv", ); ###Output _____no_output_____ ###Markdown Write the model estimation report, including coefficient t-statistic and log likelihood ###Code model.to_xlsx( result_dir/f"{modelname}_model_estimation.xlsx", data_statistics=False, ) ###Output _____no_output_____ ###Markdown Next StepsThe final step is to either manually or automatically copy the `*_coefficients_revised.csv` file to the configs folder, rename it to `*_coefficients.csv`, and run ActivitySim in simulation mode. ###Code pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv") ###Output _____no_output_____ ###Markdown Estimating Joint Tour SchedulingThis notebook illustrates how to re-estimate the joint tour scheduling component for ActivitySim. This process includes running ActivitySim in estimation mode to read household travel survey files and write outthe estimation data bundles used in this notebook. To review how to do so, please visit the othernotebooks in this directory. 
Load libraries ###Code import os import larch # !conda install larch -c conda-forge # for estimation import pandas as pd ###Output _____no_output_____ ###Markdown We'll work in our `test` directory, where ActivitySim has saved the estimation data bundles. ###Code os.chdir('test') ###Output _____no_output_____ ###Markdown Load data and prep model for estimation ###Code modelname = "joint_tour_scheduling" from activitysim.estimation.larch import component_model model, data = component_model(modelname, return_data=True) ###Output _____no_output_____ ###Markdown Review data loaded from the EDBThe next (optional) step is to review the EDB, including the coefficients, utilities specification, and chooser and alternative data. Coefficients ###Code data.coefficients ###Output _____no_output_____ ###Markdown Utility specification ###Code data.spec ###Output _____no_output_____ ###Markdown Chooser data ###Code data.chooser_data ###Output _____no_output_____ ###Markdown Alternatives data ###Code data.alt_values ###Output _____no_output_____ ###Markdown EstimateWith the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large enough household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has a built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the `scipy` package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters. ###Code model.estimate() ###Output req_data does not request avail_ca or avail_co but it is set and being provided ###Markdown Estimated coefficients ###Code model.parameter_summary() ###Output _____no_output_____ ###Markdown Output Estimation Results ###Code from activitysim.estimation.larch import update_coefficients result_dir = data.edb_directory/"estimated" update_coefficients( model, data, result_dir, output_file=f"{modelname}_coefficients_revised.csv", ); ###Output _____no_output_____ ###Markdown Write the model estimation report, including coefficient t-statistic and log likelihood ###Code model.to_xlsx( result_dir/f"{modelname}_model_estimation.xlsx", data_statistics=False, ) ###Output _____no_output_____ ###Markdown Next StepsThe final step is to either manually or automatically copy the `*_coefficients_revised.csv` file to the configs folder, rename it to `*_coefficients.csv`, and run ActivitySim in simulation mode. ###Code pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv") ###Output _____no_output_____
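###Markdown That copy-and-rename step can also be scripted. A minimal sketch, where `CONFIGS_DIR` is a hypothetical path that should point at your own ActivitySim configs folder: ###Code
import shutil
from pathlib import Path

CONFIGS_DIR = Path("configs")  # hypothetical location, adjust to your simulation setup
shutil.copyfile(
    result_dir / f"{modelname}_coefficients_revised.csv",
    CONFIGS_DIR / f"{modelname}_coefficients.csv",
)
###Output _____no_output_____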
tutorials/cdr/4- Discriminative model with low dimensional feature outputs.ipynb
###Markdown Now, we're going to try to incorporate the additional information in the true dev set labels directly to the final discriminative model using multi-fidelity modeling. For simplicity and interpretability, our end classifier(s) will be simpler than a neural network, but they'll use features from a trained neural network. Because of this, I've modified the neural network architecture to provide an additional layer before class probabilities with 16 nodes. The values that this layer will be used as features of each candidate. Before going too far: what model do I want to use? GPC doesn't seem to offer many benefits, is there anything else I could use? Why not just use a NN? Might be worth talking to Aidan for this one-- some of the same ideas come into play for the other GP project. Lastly, might be useful to actually read one or two of the papers about MF classification and see what they did, see if I can recreate it . ###Code %load_ext autoreload %autoreload 2 %matplotlib inline import numpy as np from snorkel import SnorkelSession session = SnorkelSession() from snorkel.models import candidate_subclass ChemicalDisease = candidate_subclass('ChemicalDisease', ['chemical', 'disease']) train = session.query(ChemicalDisease).filter(ChemicalDisease.split == 0).all() dev = session.query(ChemicalDisease).filter(ChemicalDisease.split == 1).all() test = session.query(ChemicalDisease).filter(ChemicalDisease.split == 2).all() print('Training set:\t{0} candidates'.format(len(train))) print('Dev set:\t{0} candidates'.format(len(dev))) print('Test set:\t{0} candidates'.format(len(test))) train_marginals_orig = np.fromfile("train_marginals_orig.txt") train_marginals = train_marginals_orig[:8433].copy() total = train.copy() total.extend(dev.copy()) from snorkel.annotations import load_gold_labels L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1) L_gold_test = load_gold_labels(session, annotator_name='gold', split=2) from snorkel.learning.pytorch import LSTM # includes an extra layer for smaller set of features train_kwargs = { 'lr': 0.01, 'embedding_dim': 100, 'hidden_dim': 100, 'n_epochs': 20, 'dropout': 0.5, 'rebalance': 0.25, 'print_freq': 5, 'seed': 1701 } lstm = LSTM(n_threads=None) lstm.train(total, train_marginals_orig, X_dev=dev, Y_dev=L_gold_dev, **train_kwargs) lstm.save(model_name="lstm_with_features") lstm.score(test,L_gold_test) dev_features = lstm.feature_outputs(dev, 100).detach().numpy().reshape(920,10) train_features = lstm.feature_outputs(train, 100).detach().numpy().reshape(8433,10) test_features = lstm.feature_outputs(test, 100).detach().numpy().reshape(4683,10) train_preds = train_marginals_orig[:8433].copy() dev_true = L_gold_dev.toarray().reshape(920,) dev_true[dev_true == -1] = 0 train_preds import torch.nn as nn import torch class SimpleNet(nn.Module): def __init__(self): super().__init__() self.l1 = nn.Linear(10, 50) self.l2 = nn.Linear(50, 50) self.l3 = nn.Linear(50,50) self.l4 = nn.Linear(50,2) def forward(self,x): x = nn.functional.relu(self.l1(x)) x = nn.functional.relu(self.l2(x)) x = nn.functional.relu(self.l3(x)) x = nn.functional.softmax(self.l4(x)) return x train_marginals = np.stack([1-train_marginals,train_marginals], axis=1) from torch.optim import Adam model = SimpleNet() criterion = nn.BCEWithLogitsLoss() optimizer = Adam(model.parameters(),lr = 0.001) for i in range(500): model.zero_grad() output = model(torch.from_numpy(train_features)) loss = criterion(output,torch.from_numpy(train_marginals).type(torch.float)) loss.backward() if i 
% 50 == 0: print(loss) optimizer.step() probs = model(torch.from_numpy(test_features)) probs = probs.detach().numpy() probs.mean(axis=0) devmodel = SimpleNet() criterion = nn.CrossEntropyLoss() optimizer = Adam(devmodel.parameters(),lr = 0.001) for i in range(50): devmodel.zero_grad() output = devmodel(torch.from_numpy(dev_features)) loss = criterion(output,torch.from_numpy(dev_true).type(torch.long)) loss.backward() optimizer.step() probsdev = devmodel(torch.from_numpy(test_features)).detach().numpy() c = 0 res = L_gold_test.toarray() tp,tn,fp,fn = 0,0,0,0 for i in range(len(probsdev)): pred = 0 dc = max(probsdev[i]) tc = max(probs[i]) if dc > tc: c += 1 if probsdev[i][0] > 0.5: pred = 0 else: pred = 1 else: if probs[i][0] > .5: pred = 0 else: pred = 1 if res[i][0] == 1: if pred == 1: tp += 1 else: fn += 1 else: if pred == 1: fp += 1 else: tn += 1 print (tp,fp,tn,fn) prec = tp / (tp + fp) rec = tp / (tp + fn) 2/(1/prec + 1/rec) print ((tp + tn)/(tp+tn+fp+fn)) c from sklearn.linear_model import LogisticRegression dev_model = LogisticRegression() train_model = LogisticRegression() dev_model.fit(dev_features, dev_true) train_model.fit(train_features, train_preds) probsdev = dev_model.predict_proba(test_features) probstrain = train_model.predict_proba(train_features) res = L_gold_test.toarray() tp,tn,fp,fn = 0,0,0,0 for i in range(len(probsdev)): pred = 0 if max(probsdev[i]) > max(probstrain[i]): if probstrain[i][0] > .5: pred = 0 else: pred = 1 else: if probstrain[i][0] > .5: pred = 0 else: pred = 1 if res[i][0] == 1: if pred == 1: tp += 1 else: fn += 1 else: if pred == 1: fp += 1 else: tn += 1 print (tp,fp,tn,fn) prec = tp / (tp + fp) rec = tp / (tp + fn) 2/(1/prec + 1/rec) print ((tp + tn)/(tp+tn+fp+fn)) ###Output 0.40657698056801195
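###Markdown Side note on the logistic-regression combination above: the final loop reads `probstrain` in both branches, so the dev model never actually contributes a prediction there. A vectorized sketch of the intended "trust whichever model is more confident" rule, with the metrics computed by sklearn instead of by hand: ###Code
import numpy as np
from sklearn.metrics import precision_recall_fscore_support, accuracy_score

y_true = (res.ravel() == 1).astype(int)                  # gold labels as 0/1
use_dev = probsdev.max(axis=1) > probstrain.max(axis=1)  # which model is more confident
combined = np.where(use_dev[:, None], probsdev, probstrain)
y_pred = (combined[:, 1] > 0.5).astype(int)

prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(prec, rec, f1, accuracy_score(y_true, y_pred))
###Output _____no_output_____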
cloud detection2.ipynb
###Markdown ###Code import numpy as np import pandas as pd from keras.preprocessing import image from os.path import join from PIL import Image from scipy import misc from sklearn.model_selection import train_test_split from keras.models import Sequential import tensorflow as tf import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from keras.preprocessing.image import ImageDataGenerator from keras.preprocessing import image from sklearn.model_selection import train_test_split from keras.layers import Input, Dense, Convolution2D, MaxPooling2D, UpSampling2D, Softmax from tensorflow.keras import datasets, layers, models from tensorflow import keras from tensorflow.keras import layers data = pd.read_csv("/content/drive/MyDrive/Colab Notebooks/SWIMSEG/metadata.csv") #data.head() number = data["Number"] #print(number) train_datagen = ImageDataGenerator( rescale=1./255, shear_range=0.2, zoom_range=0.2, vertical_flip=False, horizontal_flip=True, rotation_range=90, width_shift_range=0.1, height_shift_range=0.1, validation_split=0.3) test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( '/content/drive/MyDrive/Colab Notebooks/SWIMSEG/train/', target_size=(150, 150), batch_size=32, class_mode='categorical') validation_generator = test_datagen.flow_from_directory( '/content/drive/MyDrive/Colab Notebooks/SWIMSEG/train/', target_size=(150, 150), batch_size=32, class_mode='categorical') #print(train_generator) #print(validation_generator) model = keras.Sequential() model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.Flatten()) model.add(layers.Dense(2,activation = "softmax" )) model.summary() model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit( train_generator, steps_per_epoch=32, epochs=10, validation_data=validation_generator, validation_steps=800) #steps_per_epoch = len(X_train) ## Next, you can optimize the model by adding dropout and batch normalization ###Output Found 2026 images belonging to 2 classes. Found 2026 images belonging to 2 classes.
Model: "sequential_19" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_56 (Conv2D) (None, 148, 148, 32) 896 _________________________________________________________________ max_pooling2d_38 (MaxPooling (None, 74, 74, 32) 0 _________________________________________________________________ conv2d_57 (Conv2D) (None, 72, 72, 64) 18496 _________________________________________________________________ max_pooling2d_39 (MaxPooling (None, 36, 36, 64) 0 _________________________________________________________________ conv2d_58 (Conv2D) (None, 34, 34, 64) 36928 _________________________________________________________________ flatten_14 (Flatten) (None, 73984) 0 _________________________________________________________________ dense_16 (Dense) (None, 2) 147970 ================================================================= Total params: 204,290 Trainable params: 204,290 Non-trainable params: 0 _________________________________________________________________ Epoch 1/10 32/32 [==============================] - ETA: 0s - loss: 0.5998 - accuracy: 0.7838WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 800 batches). You may need to use the repeat() function when building your dataset. 32/32 [==============================] - 308s 10s/step - loss: 0.5897 - accuracy: 0.7875 - val_loss: 0.0081 - val_accuracy: 0.9970 Epoch 2/10 32/32 [==============================] - 45s 1s/step - loss: 0.0541 - accuracy: 0.9810 Epoch 3/10 32/32 [==============================] - 45s 1s/step - loss: 0.0559 - accuracy: 0.9916 Epoch 4/10 32/32 [==============================] - 46s 1s/step - loss: 0.0088 - accuracy: 0.9969 Epoch 5/10 32/32 [==============================] - 46s 1s/step - loss: 0.0083 - accuracy: 0.9968 Epoch 6/10 32/32 [==============================] - 45s 1s/step - loss: 0.0061 - accuracy: 0.9991 Epoch 7/10 32/32 [==============================] - 45s 1s/step - loss: 0.0116 - accuracy: 0.9982 Epoch 8/10 32/32 [==============================] - 46s 1s/step - loss: 0.0098 - accuracy: 0.9957 Epoch 9/10 32/32 [==============================] - 45s 1s/step - loss: 7.5625e-04 - accuracy: 0.9992 Epoch 10/10 32/32 [==============================] - 46s 1s/step - loss: 0.0025 - accuracy: 1.0000
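###Markdown As the closing comment of the training cell suggests, dropout and batch normalization are the natural next step. A sketch of a regularized variant of the same architecture (not trained here; the layer placement shown is one common choice, not the only one): ###Code
from tensorflow.keras import layers, models

model_reg = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dropout(0.5),          # randomly drop units to reduce overfitting
    layers.Dense(2, activation='softmax'),
])
model_reg.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model_reg.summary()
###Output _____no_output_____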
XAI/xai_tensorfuzz.ipynb
###Markdown https://github.com/brain-research/tensorfuzz ###Code # Clone the entire repo. !git clone https://github.com/brain-research/tensorfuzz.git %cd tensorfuzz !ls !pip install -r /content/requirements.txt !export PYTHONPATH="$PYTHONPATH:$HOME/tensorfuzz" !pip install 2to3 !pip install pyflann !2to3 -w /usr/local/lib/python3.6/dist-packages/pyflann !python examples/dcgan/dcgan_fuzzer.py --total_inputs_to_fuzz=1000000 --mutations_per_corpus_item=64 --alsologtostderr --strategy=ann --ann_threshold=0.1 mv /content/dcgan_fuzzer.py /content/tensorfuzz/examples/dcgan/dcgan_fuzzer.py ###Output _____no_output_____
paper/4_experts_vs_uncertainties.ipynb
###Markdown deepflash2 - Relationship between uncertainty and expert agreement> This notebook reproduces the results of the deepflash2 [paper](https://arxiv.org/abs/2111.06693) for the relationship between pixel-wise uncertainty and expert agreement.- **Data and models**: Data and trained models are available on [Google Drive](https://drive.google.com/drive/folders/1r9AqP9qW9JThbMIvT0jhoA5mPxWEeIjs?usp=sharing). To use the data in Google Colab, create a [shortcut](https://support.google.com/drive/answer/9700156?hl=en&co=GENIE.Platform%3DDesktop) of the data folder in your personal Google Drive.*Source files created with this notebook*:`experts_vs_uncertainties.csv`*References*:Griebel, M., Segebarth, D., Stein, N., Schukraft, N., Tovote, P., Blum, R., & Flath, C. M. (2021). Deep-learning in the bioimaging wild: Handling ambiguous data with deepflash2. arXiv preprint arXiv:2111.06693. Setup- Install dependecies- Connect to drive ###Code !pip install deepflash2 # Imports import numpy as np import pandas as pd from pathlib import Path import zarr from deepflash2.all import * from deepflash2.data import _read_msk # Connect to drive from google.colab import drive drive.mount('/gdrive') ###Output _____no_output_____ ###Markdown Settings ###Code DATASETS = ['PV_in_HC', 'cFOS_in_HC', 'mScarlet_in_PAG', 'YFP_in_CTX', 'GFAP_in_HC'] OUTPUT_PATH = Path("/content") DATA_PATH = Path('/gdrive/MyDrive/deepflash2-paper/data') TRAINED_MODEL_PATH= Path('/gdrive/MyDrive/deepflash2-paper/models/') MODEL_NO = '1' UNCERTAINTY_BINS = np.linspace(0, 0.25, 26) ###Output _____no_output_____ ###Markdown Analysis1. Predict segmentations and uncertainties on the test set2. Calculate expert agreement from the expert segmentations3. Postprocess results See `deepflash2_figures-and-tables.ipynb` for plots of the data. 
###Code result_list = [] for dataset in DATASETS: test_data_path = DATA_PATH/dataset/'test' ensemble_path = TRAINED_MODEL_PATH/dataset/MODEL_NO el_pred = EnsembleLearner('images', path=test_data_path, ensemble_path=ensemble_path) # Predict and save semantic segmentation masks el_pred.get_ensemble_results(el_pred.files, use_tta=True) # Load expert masks gt_est = GTEstimator(exp_dir='masks_experts', path=test_data_path) exp_averages = {} for m, exps in gt_est.masks.items(): file_id = m.split('_')[0] exp_masks = [_read_msk(gt_est.mask_fn(exp,m), instance_labels=gt_est.instance_labels) for exp in exps] exp_averages[file_id] = np.mean(exp_masks, axis=0) for idx, r in el_pred.df_ens.iterrows(): file_id = r.file.split('.')[0] # Get prediction from softmax smx = zarr.load(r.softmax_path) pred = np.argmax(smx, axis=-1) # Get uncertainty maps unc = zarr.load(r.uncertainty_path) # Get expert average annotations exp_average = exp_averages[file_id] # Calculate "soft" error map error_map = np.abs(pred-exp_average) # Calculate error means (error rate) digitized = np.digitize(unc.flatten(), UNCERTAINTY_BINS) error_means = [error_map.flatten()[digitized == i].mean() for i in range(1, len(UNCERTAINTY_BINS))] # Calculate expert agreement expert_agreement = [] for i in range(1, len(UNCERTAINTY_BINS)): bin_error = error_map.flatten()[digitized == i] expert_agreement.append((np.sum(bin_error==0) + np.sum(bin_error==1))/len(bin_error)) df_tmp = pd.DataFrame({ 'dataset':dataset, 'file':r.file, 'uncertainty_bins': UNCERTAINTY_BINS[:-1], 'error_rate': error_means, 'expert_agreement': expert_agreement }) result_list.append(df_tmp) df = pd.concat(result_list).reset_index(drop=True) df.to_csv(OUTPUT_PATH/'experts_vs_uncertainties.csv', index=False) ###Output _____no_output_____
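###Markdown For a quick sanity check of the exported table (the full figures are produced in the paper's figure notebook), the per-dataset relationship can be plotted directly, for example: ###Code
import matplotlib.pyplot as plt

df_mean = df.groupby(['dataset', 'uncertainty_bins'], as_index=False)['expert_agreement'].mean()
fig, ax = plt.subplots(figsize=(6, 4))
for ds, grp in df_mean.groupby('dataset'):
    ax.plot(grp['uncertainty_bins'], grp['expert_agreement'], marker='o', label=ds)
ax.set_xlabel('pixel-wise uncertainty (bin lower edge)')
ax.set_ylabel('mean expert agreement')
ax.legend(fontsize=8)
plt.show()
###Output _____no_output_____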
Codes/TopicModeling/Topic_modeling-lda_tfidf_direct_application.ipynb
###Markdown This section contains the code building LDA topic model using documents tf-idf vectors. ###Code import pandas as pd import gensim import pickle from gensim.models import CoherenceModel n_topics = 10 with open('dictionary_r.pkl', 'rb') as fp: dictionary_r = pickle.load(fp) # dictionary_r.filter_extremes(no_below=10, no_above=0.5) with open('corpus_r.pkl', 'rb') as fp: corpus_r = pickle.load(fp) from gensim.models import TfidfModel model = TfidfModel(corpus_r) corpus_new = [model[i] for i in corpus_r] with open('dictionary_r.pkl', 'rb') as fp: dictionary_r = pickle.load(fp) len(dictionary_r) %%time senlda_r_8 = gensim.models.ldamodel.LdaModel(corpus=corpus_new, id2word=dictionary_r, num_topics=8, alpha='auto', eta='auto') #senlda_r.save('Models/Model_10') %%time senlda_r_21 = gensim.models.ldamodel.LdaModel(corpus=corpus_new, id2word=dictionary_r, num_topics=21, alpha='auto', eta='auto') import pyLDAvis import pyLDAvis.gensim pyLDAvis.enable_notebook() vis8 = pyLDAvis.gensim.prepare(senlda_r_8, corpus_r, dictionary_r) #pyLDAvis.save_html(vis, 'topic_model_2.html') vis8 pyLDAvis.enable_notebook() vis21 = pyLDAvis.gensim.prepare(senlda_r_21, corpus_r, dictionary_r) #pyLDAvis.save_html(vis, 'topic_model_2.html') vis21 model = gensim.models.ldamodel.LdaModel.load('Model_10') print('topic_1') model.show_topic(1, topn=20) print('topic_0') model.show_topic(0, topn=20) print('topic_2') model.show_topic(2, topn=20) print('topic_3') model.show_topic(3, topn=20) print('topic_4') model.show_topic(4, topn=20) print('topic_5') model.show_topic(5, topn=20) print('topic_6') model.show_topic(6, topn=20) print('topic_7') model.show_topic(7, topn=20) print('topic_8') model.show_topic(8, topn=20) print('topic_9') model.show_topic(9, topn=20) # Visualize the topics import pyLDAvis import pyLDAvis.gensim pyLDAvis.enable_notebook() vis = pyLDAvis.gensim.prepare(model, corpus_r, dictionary_r) pyLDAvis.save_html(vis, 'topic_model_10_tfidf.html') vis import pandas as pd ldaDF = pd.DataFrame({ 'id' : list(range(len(corpus_r))), 'topics' : [model.get_document_topics(bow) for bow in corpus_r] }) ldaDF #Dict to temporally hold the probabilities topicsProbDict = {i : [0] * len(ldaDF) for i in range(10)} #Load them into the dict for index, topicTuples in enumerate(ldaDF['topics']): for topicNum, prob in topicTuples: topicsProbDict[topicNum][index] = prob #Update the DataFrame for topicNum in range(10): ldaDF['topic_{}'.format(topicNum)] = topicsProbDict[topicNum] ldaDF with open('topic_change_10_topics.pickle', 'wb') as handle: pickle.dump(ldaDF, handle, protocol=pickle.HIGHEST_PROTOCOL) with open('../indus_wanted.pickle', 'rb') as fp: indus_wanted = pickle.load(fp) indus_wanted ldaDF.index ldaDF_wanted = ldaDF[ldaDF.index.isin(indus_wanted.index)] ldaDF_wanted.shape ldaDF_wanted ldaDF_wanted['year'] = list(indus_wanted['year']) ldaDF_wanted['category'] = list(indus_wanted['category']) indus_wanted ldaDF_wanted ldaDF_wanted['gind'] = indus_wanted['gind'] ldaDF_wanted['FDATE'] = indus_wanted['FDATE'] ldaDF_wanted year = list(range(2010, 2021)) colors = ['blue','olive','red','green','yellow','orange','black','purple','grey','navy','pink','cyan','magenta'] topic_words = {} for i in range(10): topic_words['topic_{}'.format(i)] = model.show_topic(i, topn=20) topic_words lda_topic_compo = ldaDF_wanted.groupby(['category','gind','FDATE']).mean() lda_topic_compo.head(130) sum_ = 0 for i in list(lda_topic_compo.index): if i[1] == '451010.0': sum_+=1 sum_ row = lda_topic_compo[7:18] row import matplotlib.pyplot as plt 
row = lda_topic_compo[0:130] plt.title('Topic Change for IT company 451020') for i,k in enumerate(list(topic_words.keys())): plt.plot(list(range(130)), row[k],color=colors[i], label = k) plt.legend() ###Output /Users/daphne/opt/anaconda3/lib/python3.7/site-packages/matplotlib/axes/_base.py:278: FutureWarning: Support for multi-dimensional indexing (e.g. `obj[:, None]`) is deprecated and will be removed in a future version. Convert to a numpy array before indexing instead. y = y[:, np.newaxis]
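###Markdown The cells above depend on the pickled `corpus_r` and `dictionary_r` files. As a minimal, self-contained sketch of the same tf-idf to LDA pipeline (the toy documents below are made up for illustration):
###Code
from gensim.corpora import Dictionary
from gensim.models import TfidfModel, LdaModel

docs = [["topic", "model", "text", "corpus"],
        ["tfidf", "weight", "text", "corpus"],
        ["topic", "tfidf", "lda", "model"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]   # bag-of-words counts
tfidf = TfidfModel(corpus)
corpus_tfidf = [tfidf[bow] for bow in corpus]    # re-weight the counts by tf-idf
lda = LdaModel(corpus=corpus_tfidf, id2word=dictionary,
               num_topics=2, alpha='auto', eta='auto')
print(lda.show_topic(0, topn=3))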
DeepLearning/ipython(guide)/classification_1_no_binary_2CNN_4fc.ipynb
###Markdown A neural network consist of 2 cnn layers and 4 fully connected layers. Source: https://github.com/jojonki/cnn-for-sentence-classification ###Code from google.colab import drive drive.mount('/content/drive') import os os.chdir('/content/drive/MyDrive/sharif/DeepLearning/ipython(guide)') import numpy as np import codecs import os import random import pandas from keras import backend as K from keras.models import Model from keras.layers.embeddings import Embedding from keras.layers import Input, Dense, Lambda, Permute, Dropout from keras.layers import Conv2D, MaxPooling1D,Conv1D from keras.optimizers import SGD import ast import re from sklearn.preprocessing import MultiLabelBinarizer from sklearn.model_selection import train_test_split import gensim from keras.models import load_model from keras.callbacks import EarlyStopping, ModelCheckpoint limit_number = 750 data = pandas.read_csv('../Data/limited_to_'+str(limit_number)+'.csv',index_col=0,converters={'body': eval}) data = data.dropna().reset_index(drop=True) X = data["body"].values.tolist() y = pandas.read_csv('../Data/limited_to_'+str(limit_number)+'.csv') labels = [] tag=[] for item in y['tag']: labels += [i for i in re.sub('\"|\[|\]|\'| |=','',item.lower()).split(",") if i!='' and i!=' '] tag.append([i for i in re.sub('\"|\[|\]|\'| |=','',item.lower()).split(",") if i!='' and i!=' ']) labels = list(set(labels)) mlb = MultiLabelBinarizer() Y=mlb.fit_transform(tag) len(labels) sentence_maxlen = max(map(len, (d for d in X))) print('sentence maxlen', sentence_maxlen) freq_dist = pandas.read_csv('../Data/FreqDist_sorted.csv',index_col=False) vocab=[] for item in freq_dist["word"]: try: word=re.sub(r"[\u200c-\u200f]","",item.replace(" ","")) vocab.append(word) except: pass print(vocab[10]) vocab = sorted(vocab) vocab_size = len(vocab) print('vocab size', len(vocab)) w2i = {w:i for i,w in enumerate(vocab)} # i2w = {i:w for i,w in enumerate(vocab)} print(w2i["زبان"]) def vectorize(data, sentence_maxlen, w2i): vec_data = [] for d in data: vec = [w2i[w] for w in d if w in w2i] pad_len = max(0, sentence_maxlen - len(vec)) vec += [0] * pad_len vec_data.append(vec) # print(d) vec_data = np.array(vec_data) return vec_data vecX = vectorize(X, sentence_maxlen, w2i) vecY=Y X_train, X_test, y_train, y_test = train_test_split(vecX, vecY, test_size=0.2) X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25) print('train: ', X_train.shape , '\ntest: ', X_test.shape , '\nval: ', X_val.shape ,"\ny_tain:",y_train.shape ) # print(vecX[0]) embd_dim = 300 ###Output _____no_output_____ ###Markdown ***If the word2vec model is not generated before, we should run the next block.*** ###Code # embed_model = gensim.models.Word2Vec(X, size=embd_dim, window=5, min_count=5) # embed_model.save('word2vec_model') ###Output _____no_output_____ ###Markdown ***Otherwise, we can run the next block.*** ###Code embed_model=gensim.models.Word2Vec.load('word2vec_model') word2vec_embd_w = np.zeros((vocab_size, embd_dim)) for word, i in w2i.items(): if word in embed_model.wv.vocab: embedding_vector =embed_model[word] # words not found in embedding index will be all-zeros. 
word2vec_embd_w[i] = embedding_vector def Net(vocab_size, embd_size, sentence_maxlen, glove_embd_w): sentence = Input((sentence_maxlen,), name='SentenceInput') # embedding embd_layer = Embedding(input_dim=vocab_size, output_dim=embd_size, weights=[word2vec_embd_w], trainable=False, name='shared_embd') embd_sentence = embd_layer(sentence) embd_sentence = Permute((2,1))(embd_sentence) embd_sentence = Lambda(lambda x: K.expand_dims(x, -1))(embd_sentence) # cnn cnn = Conv2D(1, kernel_size=(5, sentence_maxlen), activation='relu')(embd_sentence) print(cnn.shape) cnn = Lambda(lambda x: K.sum(x, axis=3))(cnn) print(cnn.shape) cnn = MaxPooling1D(3)(cnn) print(cnn.shape) cnn1 = Conv1D(1, kernel_size=(3), activation='relu')(cnn) print(cnn1.shape) # cnn1 = Lambda(lambda x: K.sum(x, axis=3))(cnn1) print(cnn1.shape) cnn1 = MaxPooling1D(3)(cnn1) print(cnn1.shape) cnn1 = Lambda(lambda x: K.sum(x, axis=2))(cnn1) print(cnn1.shape) hidden1=Dense(400,activation="relu")(cnn1) hidden2=Dense(300,activation="relu")(hidden1) hidden3=Dense(200,activation="relu")(hidden2) hidden4=Dense(150,activation="relu")(hidden3) out = Dense(len(labels), activation='sigmoid')(hidden4) sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True) model = Model(inputs=sentence, outputs=out, name='sentence_claccification') model.compile(optimizer=sgd, loss='binary_crossentropy',metrics=["accuracy","categorical_accuracy"]) return model model = Net(vocab_size, embd_dim, sentence_maxlen,word2vec_embd_w) print(model.summary()) es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5) # Model stop training after 5 epoch where validation loss didnt decrease mc = ModelCheckpoint('best_2cnn_4fc.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True) #You save model weight at the epoch where validation loss is minimal model.fit(X_train, y_train, batch_size=32,epochs=250,verbose=1,validation_data=(X_val, y_val),callbacks=[es,mc])#you can run for 1000 epoch btw model will stop after 50 epoch without better validation loss ###Output Epoch 1/250 405/405 [==============================] - 52s 128ms/step - loss: 0.1096 - accuracy: 0.0247 - categorical_accuracy: 0.0247 - val_loss: 0.1103 - val_accuracy: 0.0313 - val_categorical_accuracy: 0.0313 Epoch 00001: val_loss improved from inf to 0.11033, saving model to best_2cnn_4fc.h5 Epoch 2/250 405/405 [==============================] - 52s 130ms/step - loss: 0.1084 - accuracy: 0.0239 - categorical_accuracy: 0.0239 - val_loss: 0.1097 - val_accuracy: 0.0174 - val_categorical_accuracy: 0.0174 Epoch 00002: val_loss improved from 0.11033 to 0.10968, saving model to best_2cnn_4fc.h5 Epoch 3/250 405/405 [==============================] - 52s 129ms/step - loss: 0.1078 - accuracy: 0.0268 - categorical_accuracy: 0.0268 - val_loss: 0.1092 - val_accuracy: 0.0144 - val_categorical_accuracy: 0.0144 Epoch 00003: val_loss improved from 0.10968 to 0.10917, saving model to best_2cnn_4fc.h5 Epoch 4/250 405/405 [==============================] - 52s 129ms/step - loss: 0.1073 - accuracy: 0.0312 - categorical_accuracy: 0.0312 - val_loss: 0.1088 - val_accuracy: 0.0362 - val_categorical_accuracy: 0.0362 Epoch 00004: val_loss improved from 0.10917 to 0.10880, saving model to best_2cnn_4fc.h5 Epoch 5/250 405/405 [==============================] - 54s 132ms/step - loss: 0.1069 - accuracy: 0.0356 - categorical_accuracy: 0.0356 - val_loss: 0.1084 - val_accuracy: 0.0376 - val_categorical_accuracy: 0.0376 Epoch 00005: val_loss improved from 0.10880 to 0.10838, saving model to best_2cnn_4fc.h5 Epoch 6/250 
405/405 [==============================] - 52s 129ms/step - loss: 0.1065 - accuracy: 0.0367 - categorical_accuracy: 0.0367 - val_loss: 0.1082 - val_accuracy: 0.0455 - val_categorical_accuracy: 0.0455 Epoch 00006: val_loss improved from 0.10838 to 0.10817, saving model to best_2cnn_4fc.h5 Epoch 7/250 405/405 [==============================] - 52s 129ms/step - loss: 0.1062 - accuracy: 0.0438 - categorical_accuracy: 0.0438 - val_loss: 0.1079 - val_accuracy: 0.0401 - val_categorical_accuracy: 0.0401 Epoch 00007: val_loss improved from 0.10817 to 0.10794, saving model to best_2cnn_4fc.h5 Epoch 8/250 405/405 [==============================] - 52s 129ms/step - loss: 0.1058 - accuracy: 0.0427 - categorical_accuracy: 0.0427 - val_loss: 0.1077 - val_accuracy: 0.0501 - val_categorical_accuracy: 0.0501 Epoch 00008: val_loss improved from 0.10794 to 0.10774, saving model to best_2cnn_4fc.h5 Epoch 9/250 405/405 [==============================] - 52s 130ms/step - loss: 0.1053 - accuracy: 0.0479 - categorical_accuracy: 0.0479 - val_loss: 0.1074 - val_accuracy: 0.0415 - val_categorical_accuracy: 0.0415 Epoch 00009: val_loss improved from 0.10774 to 0.10744, saving model to best_2cnn_4fc.h5 Epoch 10/250 405/405 [==============================] - 52s 129ms/step - loss: 0.1049 - accuracy: 0.0538 - categorical_accuracy: 0.0538 - val_loss: 0.1070 - val_accuracy: 0.0605 - val_categorical_accuracy: 0.0605 Epoch 00010: val_loss improved from 0.10744 to 0.10704, saving model to best_2cnn_4fc.h5 Epoch 11/250 405/405 [==============================] - 53s 130ms/step - loss: 0.1044 - accuracy: 0.0626 - categorical_accuracy: 0.0626 - val_loss: 0.1065 - val_accuracy: 0.0645 - val_categorical_accuracy: 0.0645 Epoch 00011: val_loss improved from 0.10704 to 0.10653, saving model to best_2cnn_4fc.h5 Epoch 12/250 405/405 [==============================] - 52s 128ms/step - loss: 0.1037 - accuracy: 0.0663 - categorical_accuracy: 0.0663 - val_loss: 0.1061 - val_accuracy: 0.0622 - val_categorical_accuracy: 0.0622 Epoch 00012: val_loss improved from 0.10653 to 0.10606, saving model to best_2cnn_4fc.h5 Epoch 13/250 405/405 [==============================] - 52s 129ms/step - loss: 0.1029 - accuracy: 0.0687 - categorical_accuracy: 0.0687 - val_loss: 0.1051 - val_accuracy: 0.0589 - val_categorical_accuracy: 0.0589 Epoch 00013: val_loss improved from 0.10606 to 0.10511, saving model to best_2cnn_4fc.h5 Epoch 14/250 405/405 [==============================] - 52s 130ms/step - loss: 0.1017 - accuracy: 0.0813 - categorical_accuracy: 0.0813 - val_loss: 0.1038 - val_accuracy: 0.0788 - val_categorical_accuracy: 0.0788 Epoch 00014: val_loss improved from 0.10511 to 0.10382, saving model to best_2cnn_4fc.h5 Epoch 15/250 405/405 [==============================] - 53s 130ms/step - loss: 0.0999 - accuracy: 0.0939 - categorical_accuracy: 0.0939 - val_loss: 0.1019 - val_accuracy: 0.0870 - val_categorical_accuracy: 0.0870 Epoch 00015: val_loss improved from 0.10382 to 0.10192, saving model to best_2cnn_4fc.h5 Epoch 16/250 405/405 [==============================] - 53s 130ms/step - loss: 0.0976 - accuracy: 0.1197 - categorical_accuracy: 0.1197 - val_loss: 0.0993 - val_accuracy: 0.1347 - val_categorical_accuracy: 0.1347 Epoch 00016: val_loss improved from 0.10192 to 0.09934, saving model to best_2cnn_4fc.h5 Epoch 17/250 405/405 [==============================] - 53s 131ms/step - loss: 0.0949 - accuracy: 0.1535 - categorical_accuracy: 0.1535 - val_loss: 0.0967 - val_accuracy: 0.1744 - val_categorical_accuracy: 0.1744 Epoch 00017: val_loss improved from 
0.09934 to 0.09674, saving model to best_2cnn_4fc.h5 Epoch 18/250 405/405 [==============================] - 52s 129ms/step - loss: 0.0923 - accuracy: 0.1873 - categorical_accuracy: 0.1873 - val_loss: 0.0941 - val_accuracy: 0.1964 - val_categorical_accuracy: 0.1964 Epoch 00018: val_loss improved from 0.09674 to 0.09409, saving model to best_2cnn_4fc.h5 Epoch 19/250 405/405 [==============================] - 53s 130ms/step - loss: 0.0899 - accuracy: 0.2165 - categorical_accuracy: 0.2165 - val_loss: 0.0917 - val_accuracy: 0.2205 - val_categorical_accuracy: 0.2205 Epoch 00019: val_loss improved from 0.09409 to 0.09168, saving model to best_2cnn_4fc.h5 Epoch 20/250 405/405 [==============================] - 53s 130ms/step - loss: 0.0877 - accuracy: 0.2373 - categorical_accuracy: 0.2373 - val_loss: 0.0904 - val_accuracy: 0.2444 - val_categorical_accuracy: 0.2444 Epoch 00020: val_loss improved from 0.09168 to 0.09045, saving model to best_2cnn_4fc.h5 Epoch 21/250 405/405 [==============================] - 52s 129ms/step - loss: 0.0857 - accuracy: 0.2592 - categorical_accuracy: 0.2592 - val_loss: 0.0883 - val_accuracy: 0.2537 - val_categorical_accuracy: 0.2537 Epoch 00021: val_loss improved from 0.09045 to 0.08831, saving model to best_2cnn_4fc.h5 Epoch 22/250 405/405 [==============================] - 52s 130ms/step - loss: 0.0840 - accuracy: 0.2744 - categorical_accuracy: 0.2744 - val_loss: 0.0868 - val_accuracy: 0.2706 - val_categorical_accuracy: 0.2706 Epoch 00022: val_loss improved from 0.08831 to 0.08684, saving model to best_2cnn_4fc.h5 Epoch 23/250 405/405 [==============================] - 52s 129ms/step - loss: 0.0826 - accuracy: 0.2858 - categorical_accuracy: 0.2858 - val_loss: 0.0853 - val_accuracy: 0.2683 - val_categorical_accuracy: 0.2683 Epoch 00023: val_loss improved from 0.08684 to 0.08528, saving model to best_2cnn_4fc.h5 Epoch 24/250 405/405 [==============================] - 55s 135ms/step - loss: 0.0813 - accuracy: 0.3003 - categorical_accuracy: 0.3003 - val_loss: 0.0846 - val_accuracy: 0.2750 - val_categorical_accuracy: 0.2750 Epoch 00024: val_loss improved from 0.08528 to 0.08462, saving model to best_2cnn_4fc.h5 Epoch 25/250 405/405 [==============================] - 55s 135ms/step - loss: 0.0803 - accuracy: 0.3058 - categorical_accuracy: 0.3058 - val_loss: 0.0840 - val_accuracy: 0.2876 - val_categorical_accuracy: 0.2876 Epoch 00025: val_loss improved from 0.08462 to 0.08396, saving model to best_2cnn_4fc.h5 Epoch 26/250 405/405 [==============================] - 54s 132ms/step - loss: 0.0793 - accuracy: 0.3140 - categorical_accuracy: 0.3140 - val_loss: 0.0835 - val_accuracy: 0.2915 - val_categorical_accuracy: 0.2915 Epoch 00026: val_loss improved from 0.08396 to 0.08349, saving model to best_2cnn_4fc.h5 Epoch 27/250 405/405 [==============================] - 54s 134ms/step - loss: 0.0785 - accuracy: 0.3217 - categorical_accuracy: 0.3217 - val_loss: 0.0827 - val_accuracy: 0.3047 - val_categorical_accuracy: 0.3047 Epoch 00027: val_loss improved from 0.08349 to 0.08272, saving model to best_2cnn_4fc.h5 Epoch 28/250 405/405 [==============================] - 54s 132ms/step - loss: 0.0776 - accuracy: 0.3261 - categorical_accuracy: 0.3261 - val_loss: 0.0818 - val_accuracy: 0.3105 - val_categorical_accuracy: 0.3105 Epoch 00028: val_loss improved from 0.08272 to 0.08179, saving model to best_2cnn_4fc.h5 Epoch 29/250 405/405 [==============================] - 53s 132ms/step - loss: 0.0770 - accuracy: 0.3332 - categorical_accuracy: 0.3332 - val_loss: 0.0816 - val_accuracy: 0.3010 - 
val_categorical_accuracy: 0.3010 Epoch 00029: val_loss improved from 0.08179 to 0.08161, saving model to best_2cnn_4fc.h5 Epoch 30/250 405/405 [==============================] - 53s 132ms/step - loss: 0.0763 - accuracy: 0.3359 - categorical_accuracy: 0.3359 - val_loss: 0.0811 - val_accuracy: 0.3207 - val_categorical_accuracy: 0.3207 Epoch 00030: val_loss improved from 0.08161 to 0.08106, saving model to best_2cnn_4fc.h5 Epoch 31/250 405/405 [==============================] - 53s 132ms/step - loss: 0.0758 - accuracy: 0.3416 - categorical_accuracy: 0.3416 - val_loss: 0.0807 - val_accuracy: 0.3258 - val_categorical_accuracy: 0.3258 Epoch 00031: val_loss improved from 0.08106 to 0.08069, saving model to best_2cnn_4fc.h5 Epoch 32/250 405/405 [==============================] - 53s 132ms/step - loss: 0.0751 - accuracy: 0.3438 - categorical_accuracy: 0.3438 - val_loss: 0.0803 - val_accuracy: 0.3286 - val_categorical_accuracy: 0.3286 Epoch 00032: val_loss improved from 0.08069 to 0.08029, saving model to best_2cnn_4fc.h5 Epoch 33/250 405/405 [==============================] - 54s 133ms/step - loss: 0.0746 - accuracy: 0.3498 - categorical_accuracy: 0.3498 - val_loss: 0.0798 - val_accuracy: 0.3203 - val_categorical_accuracy: 0.3203 Epoch 00033: val_loss improved from 0.08029 to 0.07978, saving model to best_2cnn_4fc.h5 Epoch 34/250 405/405 [==============================] - 53s 132ms/step - loss: 0.0742 - accuracy: 0.3530 - categorical_accuracy: 0.3530 - val_loss: 0.0803 - val_accuracy: 0.3219 - val_categorical_accuracy: 0.3219 Epoch 00034: val_loss did not improve from 0.07978 Epoch 35/250 405/405 [==============================] - 53s 131ms/step - loss: 0.0736 - accuracy: 0.3580 - categorical_accuracy: 0.3580 - val_loss: 0.0795 - val_accuracy: 0.3305 - val_categorical_accuracy: 0.3305 Epoch 00035: val_loss improved from 0.07978 to 0.07946, saving model to best_2cnn_4fc.h5 Epoch 36/250 405/405 [==============================] - 54s 133ms/step - loss: 0.0732 - accuracy: 0.3568 - categorical_accuracy: 0.3568 - val_loss: 0.0793 - val_accuracy: 0.3330 - val_categorical_accuracy: 0.3330 Epoch 00036: val_loss improved from 0.07946 to 0.07929, saving model to best_2cnn_4fc.h5 Epoch 37/250 405/405 [==============================] - 54s 132ms/step - loss: 0.0727 - accuracy: 0.3644 - categorical_accuracy: 0.3644 - val_loss: 0.0789 - val_accuracy: 0.3293 - val_categorical_accuracy: 0.3293 Epoch 00037: val_loss improved from 0.07929 to 0.07886, saving model to best_2cnn_4fc.h5 Epoch 38/250 405/405 [==============================] - 54s 132ms/step - loss: 0.0724 - accuracy: 0.3674 - categorical_accuracy: 0.3674 - val_loss: 0.0798 - val_accuracy: 0.3442 - val_categorical_accuracy: 0.3442 Epoch 00038: val_loss did not improve from 0.07886 Epoch 39/250 405/405 [==============================] - 53s 131ms/step - loss: 0.0719 - accuracy: 0.3707 - categorical_accuracy: 0.3707 - val_loss: 0.0785 - val_accuracy: 0.3423 - val_categorical_accuracy: 0.3423 Epoch 00039: val_loss improved from 0.07886 to 0.07852, saving model to best_2cnn_4fc.h5 Epoch 40/250 405/405 [==============================] - 54s 132ms/step - loss: 0.0716 - accuracy: 0.3722 - categorical_accuracy: 0.3722 - val_loss: 0.0780 - val_accuracy: 0.3356 - val_categorical_accuracy: 0.3356 Epoch 00040: val_loss improved from 0.07852 to 0.07799, saving model to best_2cnn_4fc.h5 Epoch 41/250 405/405 [==============================] - 54s 132ms/step - loss: 0.0711 - accuracy: 0.3729 - categorical_accuracy: 0.3729 - val_loss: 0.0781 - val_accuracy: 0.3393 - 
val_categorical_accuracy: 0.3393 Epoch 00041: val_loss did not improve from 0.07799 Epoch 42/250 405/405 [==============================] - 53s 131ms/step - loss: 0.0708 - accuracy: 0.3777 - categorical_accuracy: 0.3777 - val_loss: 0.0780 - val_accuracy: 0.3458 - val_categorical_accuracy: 0.3458 Epoch 00042: val_loss did not improve from 0.07799 Epoch 43/250 405/405 [==============================] - 53s 131ms/step - loss: 0.0705 - accuracy: 0.3785 - categorical_accuracy: 0.3785 - val_loss: 0.0779 - val_accuracy: 0.3386 - val_categorical_accuracy: 0.3386 Epoch 00043: val_loss improved from 0.07799 to 0.07788, saving model to best_2cnn_4fc.h5 Epoch 44/250 405/405 [==============================] - 53s 132ms/step - loss: 0.0701 - accuracy: 0.3790 - categorical_accuracy: 0.3790 - val_loss: 0.0784 - val_accuracy: 0.3437 - val_categorical_accuracy: 0.3437 Epoch 00044: val_loss did not improve from 0.07788 Epoch 45/250 405/405 [==============================] - 53s 130ms/step - loss: 0.0698 - accuracy: 0.3811 - categorical_accuracy: 0.3811 - val_loss: 0.0776 - val_accuracy: 0.3472 - val_categorical_accuracy: 0.3472 Epoch 00045: val_loss improved from 0.07788 to 0.07756, saving model to best_2cnn_4fc.h5 Epoch 46/250 405/405 [==============================] - 53s 132ms/step - loss: 0.0695 - accuracy: 0.3803 - categorical_accuracy: 0.3803 - val_loss: 0.0775 - val_accuracy: 0.3558 - val_categorical_accuracy: 0.3558 Epoch 00046: val_loss improved from 0.07756 to 0.07747, saving model to best_2cnn_4fc.h5 Epoch 47/250 405/405 [==============================] - 54s 132ms/step - loss: 0.0692 - accuracy: 0.3917 - categorical_accuracy: 0.3917 - val_loss: 0.0770 - val_accuracy: 0.3495 - val_categorical_accuracy: 0.3495 Epoch 00047: val_loss improved from 0.07747 to 0.07697, saving model to best_2cnn_4fc.h5 Epoch 48/250 405/405 [==============================] - 54s 132ms/step - loss: 0.0689 - accuracy: 0.3832 - categorical_accuracy: 0.3832 - val_loss: 0.0773 - val_accuracy: 0.3532 - val_categorical_accuracy: 0.3532 Epoch 00048: val_loss did not improve from 0.07697 Epoch 49/250 405/405 [==============================] - 53s 131ms/step - loss: 0.0685 - accuracy: 0.3899 - categorical_accuracy: 0.3899 - val_loss: 0.0775 - val_accuracy: 0.3530 - val_categorical_accuracy: 0.3530 Epoch 00049: val_loss did not improve from 0.07697 Epoch 50/250 405/405 [==============================] - 53s 131ms/step - loss: 0.0684 - accuracy: 0.3893 - categorical_accuracy: 0.3893 - val_loss: 0.0765 - val_accuracy: 0.3604 - val_categorical_accuracy: 0.3604 Epoch 00050: val_loss improved from 0.07697 to 0.07647, saving model to best_2cnn_4fc.h5 Epoch 51/250 405/405 [==============================] - 54s 134ms/step - loss: 0.0679 - accuracy: 0.3917 - categorical_accuracy: 0.3917 - val_loss: 0.0766 - val_accuracy: 0.3571 - val_categorical_accuracy: 0.3571 Epoch 00051: val_loss did not improve from 0.07647 Epoch 52/250 405/405 [==============================] - 53s 131ms/step - loss: 0.0676 - accuracy: 0.3947 - categorical_accuracy: 0.3947 - val_loss: 0.0774 - val_accuracy: 0.3472 - val_categorical_accuracy: 0.3472 Epoch 00052: val_loss did not improve from 0.07647 Epoch 53/250 405/405 [==============================] - 53s 131ms/step - loss: 0.0674 - accuracy: 0.4002 - categorical_accuracy: 0.4002 - val_loss: 0.0767 - val_accuracy: 0.3476 - val_categorical_accuracy: 0.3476 Epoch 00053: val_loss did not improve from 0.07647 Epoch 54/250 405/405 [==============================] - 53s 131ms/step - loss: 0.0672 - accuracy: 0.3991 - 
categorical_accuracy: 0.3991 - val_loss: 0.0769 - val_accuracy: 0.3479 - val_categorical_accuracy: 0.3479 Epoch 00054: val_loss did not improve from 0.07647 Epoch 55/250 405/405 [==============================] - 53s 131ms/step - loss: 0.0669 - accuracy: 0.3945 - categorical_accuracy: 0.3945 - val_loss: 0.0777 - val_accuracy: 0.3444 - val_categorical_accuracy: 0.3444 Epoch 00055: val_loss did not improve from 0.07647 Epoch 00055: early stopping ###Markdown ***If the model is generated before:*** ###Code model = load_model('best_2cnn_4fc.h5') # model.save('CNN_1_no_binary.h5') pred=model.predict(X_test) # For evaluation: If the probability > 0.5 you can say that it belong to the class. print(pred[0])#example y_pred=[] measure = np.mean(pred[0]) + 1.15*np.sqrt(np.var(pred[0])) for l in pred: temp=[] for value in l: if value >= measure: temp.append(1) else: temp.append(0) y_pred.append(temp) measure from sklearn.metrics import classification_report,accuracy_score print("accuracy=",accuracy_score(y_test, y_pred)) print(classification_report(y_test, y_pred)) from sklearn.metrics import classification_report,accuracy_score print("accuracy=",accuracy_score(y_test, y_pred)) print(classification_report(y_test, y_pred)) ###Output _____no_output_____
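###Markdown A note on the evaluation above: the in-code comment describes a fixed 0.5 cut-off, but the loop actually derives an adaptive threshold from the first test sample (`mean + 1.15 * std` of `pred[0]`) and applies it to every sample. A minimal sketch of both variants on a synthetic prediction matrix (the numbers are random stand-ins):
###Code
import numpy as np

pred = np.random.rand(4, 6)    # stand-in for model.predict(X_test), shape (samples, labels)
threshold = np.mean(pred[0]) + 1.15 * np.sqrt(np.var(pred[0]))   # adaptive, as in the loop above
y_pred_adaptive = (pred >= threshold).astype(int)
y_pred_fixed = (pred >= 0.5).astype(int)                         # the fixed cut-off the comment describes
print(threshold)
print(y_pred_adaptive)
print(y_pred_fixed)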
notebooks/fooling.ipynb
###Markdown from datasets import SpecialTokens"""input_str = f"{tokenizer.bos_token}"input_str = "cornernouna point or space in a hierarchy that is within the order to which it moves along the axis."input = tokenizer.encode(input_str, return_tensors="pt").to("cuda")max_length = 512generated = model.generate( input_ids=input, max_length=max_length, num_return_sequences=5, temperature=1.0, top_k=1000, pad_token_id=tokenizer.pad_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_ids=tokenizer.eos_token_id, do_sample=True,)break_specials = [ SpecialTokens.BOS_TOKEN, SpecialTokens.EOS_TOKEN, SpecialTokens.DEFINITION_SEP, SpecialTokens.EXAMPLE_SEP, SpecialTokens.TOPIC_SEP, SpecialTokens.POS_SEP ]break_special_ids = [tokenizer.encode(e, add_prefix_space=False)[0] for e in break_specials]break_special_token_map = {s: i for s, i in zip(break_specials, break_special_ids)}for i in range(generated.size()[0]): sentence_tokens = generated[i, :].tolist() accum = [] last_special = None sep_map = {} for token_id in sentence_tokens: if token_id in break_special_ids: if last_special is not None: sep_map[last_special] = accum accum = [] last_special = token_id else: last_special = token_id else: accum.append(token_id) sep_map[last_special] = accum accum = [] decode_sep_map = { tokenizer.decode([k]): tokenizer.decode(v) for k, v in sep_map.items() } print(decode_sep_map) decoded = tokenizer.decode([e for e in sentence_tokens if e != tokenizer.pad_token_id]) print(decoded) """ ###Code tokenizer.decode(tokenizer.encode("a bc", add_prefix_space=False)) tokenizer.special_tokens_map blacklist = set(e.title for e in pickle.load(open("data/all_words.pickle", "rb")).values()) model = modeling.GPT2LMHeadWithWeightedLossModel.from_pretrained( "models/urban_dictionary_cleaned_top_def_mu02_lr_0_000005_tw40" ).to("cuda") tw40_words = urban_dictionary_scraper.generate_words( tokenizer, model, blacklist=blacklist, num=100, ) pickle.dump(tw1_words, open("data/labeling/tw1_words.pickle", "wb"), protocol=pickle.HIGHEST_PROTOCOL) pickle.dump(tw40_words, open("data/labeling/tw40_words.pickle", "wb"), protocol=pickle.HIGHEST_PROTOCOL) df = pd.DataFrame( [ ( word.word, word.definition, word.example,  # the original had a truncated .replace( call here; its arguments are not recoverable "tw1" if i < len(tw1_words) else "tw2", ) for i, word in enumerate(itertools.chain( tw1_words, tw40_words )) ], columns=("word", "definition", "example", "dataset") ) sample = df.sample(frac=1) sample_no_dataset = sample[:] sample_no_dataset.to_csv("fun.csv", index=False, columns=["word", "definition", "example"]) interact() ###Output _____no_output_____
jupyter_notebooks/Rich Output.ipynb
###Markdown Rich Output In Python, objects can declare their textual representation using the `__repr__` method. IPython expands on this idea and allows objects to declare other, rich representations including:* HTML* JSON* PNG* JPEG* SVG* LaTeXA single object can declare some or all of these representations; all are handled by IPython's *display system*. This Notebook shows how you can use this display system to incorporate a broad range of content into your Notebooks. Basic display imports The `display` function is a general purpose tool for displaying different representations of objects. Think of it as `print` for these rich representations. ###Code from IPython.display import display ###Output _____no_output_____ ###Markdown A few points:* Calling `display` on an object will send **all** possible representations to the Notebook.* These representations are stored in the Notebook document.* In general the Notebook will use the richest available representation.If you want to display a particular representation, there are specific functions for that: ###Code from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) ###Output _____no_output_____ ###Markdown Images To work with images (JPEG, PNG) use the `Image` class. ###Code from IPython.display import Image i = Image(filename='../images/ipython_logo.png') ###Output _____no_output_____ ###Markdown Returning an `Image` object from an expression will automatically display it: ###Code i ###Output _____no_output_____ ###Markdown Or you can pass an object with a rich representation to `display`: ###Code display(i) ###Output _____no_output_____ ###Markdown An image can also be displayed from raw data or a URL. ###Code Image(url='http://python.org/images/python-logo.gif') ###Output _____no_output_____ ###Markdown SVG images are also supported out of the box. ###Code from IPython.display import SVG SVG(filename='../images/python_logo.svg') ###Output _____no_output_____ ###Markdown Embedded vs non-embedded Images By default, image data is embedded in the notebook document so that the images can be viewed offline. However it is also possible to tell the `Image` class to only store a *link* to the image. Let's see how this works using a webcam at Berkeley. ###Code from IPython.display import Image img_url = 'http://www.lawrencehallofscience.org/static/scienceview/scienceview.berkeley.edu/html/view/view_assets/images/newview.jpg' # by default Image data are embedded Embed = Image(img_url) # if kwarg `url` is given, the embedding is assumed to be false SoftLinked = Image(url=img_url) # In each case, embed can be specified explicitly with the `embed` kwarg # ForceEmbed = Image(url=img_url, embed=True) ###Output _____no_output_____ ###Markdown Here is the embedded version. Note that this image was pulled from the webcam when this code cell was originally run and stored in the Notebook. Unless we rerun this cell, this is not todays image. ###Code Embed ###Output _____no_output_____ ###Markdown Here is today's image from same webcam at Berkeley, (refreshed every minutes, if you reload the notebook), visible only with an active internet connection, that should be different from the previous one. Notebooks saved with this kind of image will be smaller and always reflect the current version of the source, but the image won't display offline. ###Code SoftLinked ###Output _____no_output_____ ###Markdown Of course, if you re-run this Notebook, the two images will be the same again. 
HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the `HTML` class. ###Code from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) ###Output _____no_output_____ ###Markdown You can also use the `%%html` cell magic to accomplish the same thing. ###Code %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> ###Output _____no_output_____ ###Markdown JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as [d3.js](http://d3js.org) for output. ###Code from IPython.display import Javascript ###Output _____no_output_____ ###Markdown Pass a string of JavaScript source code to the `JavaScript` object and then display it. ###Code js = Javascript('alert("hi")'); display(js) ###Output _____no_output_____ ###Markdown The same thing can be accomplished using the `%%javascript` cell magic: ###Code %%javascript alert("hi"); ###Output _____no_output_____ ###Markdown Here is a more complicated example that loads `d3.js` from a CDN, uses the `%%html` magic to load CSS styles onto the page and then runs ones of the `d3.js` examples. ###Code Javascript( """$.getScript('//cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("data/flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); ###Output _____no_output_____ ###Markdown LaTeX The IPython display system also has builtin support for the display of mathematical expressions typeset in LaTeX, which is rendered in the browser using [MathJax](http://mathjax.org). 
You can pass raw LaTeX text as a string to the `Math` object: ###Code from IPython.display import Math Math(r'F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx') ###Output _____no_output_____ ###Markdown With the `Latex` class, you have to include the delimiters yourself. This allows you to use other LaTeX modes such as `eqnarray`: ###Code from IPython.display import Latex Latex(r"""\begin{eqnarray} \nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\ \nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\ \nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\ \nabla \cdot \vec{\mathbf{B}} & = 0 \end{eqnarray}""") ###Output _____no_output_____ ###Markdown Or you can enter LaTeX directly with the `%%latex` cell magic: ###Code %%latex \begin{align} \nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\ \nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\ \nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\ \nabla \cdot \vec{\mathbf{B}} & = 0 \end{align} ###Output _____no_output_____ ###Markdown Audio IPython makes it easy to work with sounds interactively. The `Audio` display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the `Image` display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. ###Code from IPython.display import Audio Audio(url="http://www.nch.com.au/acm/8k16bitpcm.wav") ###Output _____no_output_____ ###Markdown A NumPy array can be auralized automatically. The `Audio` class normalizes and encodes the data and embeds the resulting audio in the Notebook. For instance, when two sine waves with almost the same frequency are superimposed, a phenomenon known as [beats](https://en.wikipedia.org/wiki/Beat_%28acoustics%29) occurs. This can be auralized as follows: ###Code import numpy as np max_time = 3 f1 = 220.0 f2 = 224.0 rate = 8000.0 L = 3 times = np.linspace(0,L,int(rate*L)) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) ###Output _____no_output_____ ###Markdown Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: ###Code from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') ###Output _____no_output_____ ###Markdown Using the nascent video capabilities of modern browsers, you may also be able to display local videos. At the moment this doesn't work very well in all browsers, so it may or may not work for you; we will continue testing this and looking for ways to make it more robust. The following cell loads a local file called `animation.m4v`, encodes the raw video as base64 for http transport, and uses the HTML5 video tag to load it. On Chrome 15 it works correctly, displaying a control bar at the bottom with a play/pause button and a location slider.
###Code from IPython.display import HTML from base64 import b64encode video = open("../images/animation.m4v", "rb").read() video_encoded = b64encode(video).decode('ascii') video_tag = '<video controls alt="test" src="data:video/x-m4v;base64,{0}">'.format(video_encoded) HTML(data=video_tag) ###Output _____no_output_____ ###Markdown External sites You can even embed an entire page from another site in an iframe; for example, here is the Jupyter project homepage: ###Code from IPython.display import IFrame IFrame('http://jupyter.org', width='100%', height=350) ###Output _____no_output_____ ###Markdown Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the `FileLink` object: ###Code from IPython.display import FileLink, FileLinks FileLink('Cell Magics.ipynb') ###Output _____no_output_____ ###Markdown Alternatively, to generate links to all of the files in a directory, use the `FileLinks` object, passing `'.'` to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, `FileLinks` would work in a recursive manner creating links to files in all sub-directories as well. ###Code FileLinks('.') ###Output _____no_output_____
Notebooks/gaussian_processes.ipynb
###Markdown Gaussian ProcessesA demonstration of how to sample from, and fit to, a Gaussian Process.If a function $f(x)$ is drawn from a Gaussian process$$f(x) \sim \mathcal{GP}(m(x)=0,k(x,x'))$$then a finite subset of function values $\mathbf{f}=(f(x_1),f(x_1),\dots,f(x_n))^T$ are distributed such that$$\mathbf{f}\sim \mathcal{N}(0,\Sigma)$$where $\Sigma_{ij}=k(x_i,x_j)$Based on lectures from Machine Learning Summer School, Cambridge 2009, see http://videolectures.net/mlss09uk_rasmussen_gp/Author: Juvid Aryaman ###Code import numpy as np import matplotlib.pyplot as plt import matplotlib.lines as mlines import utls utls.reset_plots() %matplotlib inline ###Output _____no_output_____ ###Markdown Sample from a Gaussian Process ###Code def cov_matrix_function(x1,x2,l): """Use a squared exponential covariance with a fixed length scale :param x1: A double, parameter of the covariance function :param x2: A double, parameter of the covariance function :param l: A double, hyperparameter of the GP determining the length scale over which the correlation between neighbouring points decays Returns: Squared exponential covariance function """ return np.exp(-(x1-x2)*(x1-x2)/l) D = 90 # number of points along x where we will evaluate the GP. D = dimension of the cov matrix x = np.linspace(-5,5,D) ndraws = 5 # number of functions to draw from GP cmap = plt.cm.jet def sample_from_gp(l): """ Sample from a Gaussian Process :param l: The length scale of the squared exponential GP Returns: A numpy array of length (D) as a draw from the GP """ sigma = np.zeros((D,D)) for i in range(D): for j in range(D): sigma[i,j] = cov_matrix_function(x[i],x[j],l) return sigma, np.random.multivariate_normal(np.zeros(D),sigma) # sample from the GP def add_GP_draws_to_plot(ax, l): """Add a number of samples from a Gaussian process to a plot :param ax: A AxesSubplot object, the axes to plot on :param l: The length scale of the squared exponential GP """ for k in range(ndraws): sigma, y = sample_from_gp(l) col = cmap(int(round((k+1)/float(ndraws)*(cmap.N)))) ax.plot(x,y,'-',alpha=0.5,color=col, linewidth = 2) ax.set_xlabel('Input, $x$') ax.set_ylabel('Output, $f(x)$') ax.set_title('$l={}$'.format(l),fontsize=20) fig, axs = plt.subplots(1,3,figsize=(3*5,5)) axs = axs.ravel() add_GP_draws_to_plot(axs[0],0.1) add_GP_draws_to_plot(axs[1],1) add_GP_draws_to_plot(axs[2],10) plt.tight_layout() ###Output _____no_output_____ ###Markdown Each panel shows 5 draws from a different Gaussian process. All of the panels use a covariance function of the same form, namely a squared exponential covariance function:$$k(x,x')=\exp\left(-\frac{1}{l}(x-x')^2\right)$$The function has a *hyperparameter* $l$ which determines the length scale over which the correlation between neighbouring points decays. Here we show what happens as $l$ is increased: the curvature of each function reduces. A large $l$ means that $k(x,x')$ reduces slowly with $x$ for some fixed $x'$, so neighbouring points have a high correlation, and therefore the sampled function $f(x)$ changes slowly with $x$.Each colored line is a single sample from a Gaussian process. Each panel has a fixed $\Sigma_{i,j}$. However, every time we draw from the multivariate Gaussian $\mathcal{N}(0,\Sigma)$, we get a different vector $\mathbf{f}$, and therefore a different shaped curve.Notice that we only evaluate the Gaussian process at `D` different points. If we wanted to evaluate the Gaussian process everywhere in $x$, we would need `D` to become infinity (which is impossible!). 
It is in this sense that we can consider a Gaussian process as a generalisation of a multivariate Gaussian distribution to infinitely many variables, because $\Sigma$ would need to be an $(\infty,\infty)$ matrix for us to evaluate $f(x)$ everywhere. Bayesian Inference with Gaussian Processes One of the great things about Gaussian processes is that we can do Bayesian inference with them analytically (i.e. we can write down the posterior distribution, and the posterior predictive distribution, in terms of the data mathematically without needing to resort to expensive Monte Carlo algorithms). The problem setting is that we have some data $\mathcal{D}=(\mathbf{x},\mathbf{y})$ and we want to make a prediction of the value of $y^*$ at some value of $x^*$ where we have no data. We do not know the functional form of $\mathbf{y}$, so we will use a Gaussian process. We model the data as having a Gaussian likelihood$$\mathbf{y}|\mathbf{x},f(x),M \sim \mathcal{N}(\mathbf{f},\sigma_{\text{noise}}^2)$$where $M$ is our choice of model (namely a Gaussian process, with its associated hyperparameters) and $\sigma_{\text{noise}}$ is the noise in our data. We then use a Gaussian process prior$$f(x)|M\sim \mathcal{GP}(m(x)\equiv0,k(x,x'))$$It turns out that this is a conjugate prior, where the posterior is also a Gaussian process. Note that, in this language, $f(x)$ takes the position of the parameters ($\theta$) in Bayes rule$$p(\theta|\mathcal{D},M)=\frac{p(\mathcal{D}|\theta,M) p(\theta|M)}{p(\mathcal{D}|M)}$$where $p(\theta|\mathcal{D},M)$ is the posterior, $p(\mathcal{D}|\theta,M)$ is the likelihood, $p(\theta|M)$ is the prior and $p(\mathcal{D}|M)$ is the marginal likelihood. So, in this sense, a Gaussian process is a parametric model with an infinite number of parameters (since a function has an infinite number of values in any given range of $x$). Make some pseudo-data We will generate some data as a draw from a GP. For this demo, we will assume that 1. The data really were generated from a Gaussian process, and we know what the appropriate covariance function $k(x,x')$ is to use. In practice, this is unavoidable and is a modelling choice. 2. We know the values of the hyperparameters of the Gaussian process which generated our data. This is somewhat contrived for the sake of demonstration. Whilst we may sometimes know parameters like the noise in our data (`var_noise` below), we will probably not know parameters such as $l$ in the above example. In practice, we can maximize the marginal likelihood to learn 'best fit' values of the hyperparameters of our Gaussian process. ###Code l = 1 var_noise = 0.01 sigma_true, y_true = sample_from_gp(l) # The true function, a sample from a GP data_n = 10 data_indicies = np.random.choice(np.arange(int(round(0.1*D)),int(round(0.9*D))),data_n,replace=False) data_y = y_true[data_indicies] + np.random.normal(loc=0.0,scale=np.sqrt(var_noise),size=data_n) data_x = x[data_indicies] ###Output _____no_output_____ ###Markdown So we have our data, `data_y` and `data_x`. We now want to make predictions about their values over all values in the variable `x`.
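For reference, the next cell implements the standard Gaussian process regression predictive equations, written here in the notation above (with $K_{ij}=k(x_i,x_j)$ on the training inputs, $\mathbf{k}_*$ the vector of covariances between a test point $x_*$ and the training inputs, and $\sigma_{\text{noise}}^2$ corresponding to `var_noise`):$$\mu_* = \mathbf{k}_*^T\left(K+\sigma_{\text{noise}}^2 I\right)^{-1}\mathbf{y}$$$$\sigma_*^2 = k(x_*,x_*) + \sigma_{\text{noise}}^2 - \mathbf{k}_*^T\left(K+\sigma_{\text{noise}}^2 I\right)^{-1}\mathbf{k}_*$$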
Compute the posterior predictive distribution of the Gaussian process ###Code K = np.zeros((data_n,data_n)) # make a covariance matrix for i in range(data_n): for j in range(data_n): K[i,j] = cov_matrix_function(data_x[i],data_x[j],l) # squared exponential GP means = np.zeros(D) variances = np.zeros(D) for i, xs in enumerate(x): k = cov_matrix_function(xs, data_x, l) K_inv_n = np.linalg.inv( K + var_noise*np.identity(data_n) ) v = np.dot(K_inv_n, data_y) mean = np.dot(k, v) v2 = np.dot(K_inv_n, k) var = cov_matrix_function(xs, xs, l) + var_noise - np.dot(k, v2) means[i] = mean variances[i] = var p2 = plt.Rectangle((0, 0), 0.1, 0.1, fc="red", alpha = 0.3, ec = 'red') p3 = mlines.Line2D([], [], color='red') # Plot a 95% BCI using the 2 sigma rule for Normal distributions fig, ax = plt.subplots() ax.fill_between(x, means+2*np.sqrt(variances), means-2*np.sqrt(variances), color='red', alpha=0.3) p1=ax.plot(data_x, data_y, 'kx') ax.plot(x, y_true,'-r') ax.set_xlabel('input, x') ax.set_ylabel('output, y') ax.legend([p1[0],p2, p3], ['Data', 'Posterior predictive distribution', 'True function'], prop={'size':8}); ###Output _____no_output_____
ex7-kmeans and PCA/K-means and PCA.ipynb
###Markdown 1 K-means Clustering In this exercise, you will implement the K-means algorithm and use it for image compression. By reducing the number of colors that appear in an image, only the colors that are most common in the image remain. 1.1 Implementing K-means 1.1.1 Finding closest centroids In the cluster-assignment phase of the K-means algorithm, the algorithm assigns every training example $x_i$ to the closest cluster centroid. ![image.png](../img/7_1.png) $c^{(i)}$ denotes the cluster centroid closest to example $x_i$, and $u_j$ is the position (value) of the $j$-th cluster centroid. ###Code %matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.io import loadmat def findClosestCentroids(X, centroids): """ output a one-dimensional array idx that holds the index of the closest centroid to every training example. """ idx = [] max_dist = 1000000 # cap the maximum distance for i in range(len(X)): minus = X[i] - centroids # here use numpy's broadcasting dist = minus[:,0]**2 + minus[:,1]**2 if dist.min() < max_dist: ci = np.argmin(dist) idx.append(ci) return np.array(idx) ###Output _____no_output_____ ###Markdown Next, use the example provided with the assignment: define the centroids as [3, 3], [6, 2], [8, 5]; the computed result idx[0:3] should be [0, 2, 1]. ###Code mat = loadmat('data/ex7data2.mat') # print(mat) X = mat['X'] init_centroids = np.array([[3, 3], [6, 2], [8, 5]]) idx = findClosestCentroids(X, init_centroids) print(idx[0:3]) ###Output [0 2 1] ###Markdown 1.1.2 Computing centroid means Once every point has been assigned to a cluster centroid, the next step is to recompute each centroid as the mean of the positions of all points belonging to that cluster. ![image.png](../img/7_2.png) $C_k$ is the set of examples assigned to centroid $k$. ###Code def computeCentroids(X, idx): centroids = [] for i in range(len(np.unique(idx))): # Returns the sorted unique elements of an array, i.e. the K clusters u_k = X[idx==i].mean(axis=0) # column-wise mean; idx==i selects the examples assigned to this centroid centroids.append(u_k) return np.array(centroids) computeCentroids(X, idx) ###Output _____no_output_____ ###Markdown 1.2 K-means on example dataset ###Code def plotData(X, centroids, idx=None): """ Visualize the data, coloring each cluster automatically. idx: the idx vector from the final iteration, storing the centroid assigned to each example centroids: the history of centroid positions over the iterations """ colors = ['b','g','gold','darkorange','salmon','olivedrab', 'maroon', 'navy', 'sienna', 'tomato', 'lightgray', 'gainsboro', 'coral', 'aliceblue', 'dimgray', 'mintcream', 'mintcream'] assert len(centroids[0]) <= len(colors), 'colors not enough ' subX = [] # the samples grouped by cluster if idx is not None: for i in range(centroids[0].shape[0]): x_i = X[idx == i] subX.append(x_i) else: subX = [X] # wrap X in a single-element list so the plotting loop below treats it as one cluster # plot the points of each cluster in a different color plt.figure(figsize=(8,5)) for i in range(len(subX)): xx = subX[i] plt.scatter(xx[:,0], xx[:,1], c=colors[i], label='Cluster %d'%i) plt.legend() plt.grid(True) plt.xlabel('x1',fontsize=14) plt.ylabel('x2',fontsize=14) plt.title('Plot of X Points',fontsize=16) # plot the trajectory of the cluster centroids xx, yy = [], [] for centroid in centroids: xx.append(centroid[:,0]) yy.append(centroid[:,1]) plt.plot(xx, yy, 'rx--', markersize=8) plotData(X, [init_centroids]) def runKmeans(X, centroids, max_iters): K = len(centroids) centroids_all = [] centroids_all.append(centroids) centroid_i = centroids for i in range(max_iters): idx = findClosestCentroids(X, centroid_i) centroid_i = computeCentroids(X, idx) centroids_all.append(centroid_i) return idx, centroids_all idx, centroids_all = runKmeans(X, init_centroids, 20) plotData(X, centroids_all, idx) ###Output _____no_output_____ ###Markdown 1.3 Random initialization In practice, a good strategy for initializing the cluster centroids is to pick random examples from the training set. ###Code def initCentroids(X, K): """Random initialization""" m, n = X.shape idx = np.random.choice(m, K) centroids = X[idx] return centroids ###Output _____no_output_____ ###Markdown Run the random initialization three times and compare the results. You may find that the third run is not ideal; this is normal, it has fallen into a local optimum. ###Code for i in range(3): centroids = initCentroids(X, 3) idx, centroids_all = runKmeans(X, centroids, 10) plotData(X, centroids_all, idx) ###Output _____no_output_____ ###Markdown The three random initializations above show that different initializations can give different results. 1.4 Image compression with K-means
K-means这部分你将用Kmeans来进行图片压缩。在一个简单的24位颜色表示图像。每个像素被表示为三个8位无符号整数(从0到255),指定了红、绿和蓝色的强度值。这种编码通常被称为RGB编码。我们的图像包含数千种颜色,在这一部分的练习中,你将把颜色的数量减少到16种颜色。这可以有效地压缩照片。具体地说,您只需要存储16个选中颜色的RGB值,而对于图中的每个像素,现在只需要将该颜色的索引存储在该位置(只需要4 bits就能表示16种可能性)。接下来我们要用K-means算法选16种颜色,用于图片压缩。你将把原始图片的每个像素看作一个数据样本,然后利用K-means算法去找分组最好的16种颜色。 1.4.1 K-means on pixels ###Code from skimage import io A = io.imread('data/bird_small.png') print(A.shape) plt.imshow(A); A = A/255. # Divide by 255 so that all values are in the range 0 - 1 ###Output (128, 128, 3) ###Markdown https://stackoverflow.com/questions/18691084/what-does-1-mean-in-numpy-reshape ###Code # Reshape the image into an (N,3) matrix where N = number of pixels. # Each row will contain the Red, Green and Blue pixel values # This gives us our dataset matrix X that we will use K-Means on. X = A.reshape(-1, 3) K = 16 centroids = initCentroids(X, K) idx, centroids_all = runKmeans(X, centroids, 10) img = np.zeros(X.shape) centroids = centroids_all[-1] for i in range(len(centroids)): img[idx == i] = centroids[i] img = img.reshape((128, 128, 3)) fig, axes = plt.subplots(1, 2, figsize=(12,6)) axes[0].imshow(A) axes[1].imshow(img) ###Output _____no_output_____ ###Markdown 2 Principal Component Analysis这部分,你将运用PCA来实现降维。您将首先通过一个2D数据集进行实验,以获得关于PCA如何工作的直观感受,然后在一个更大的图像数据集上使用它。 2.1 Example Dataset为了帮助您理解PCA是如何工作的,您将首先从一个二维数据集开始,该数据集有一个大的变化方向和一个较小的变化方向。在这部分练习中,您将看到使用PCA将数据从2D减少到1D时会发生什么。 ###Code mat = loadmat('data/ex7data1.mat') X = mat['X'] print(X.shape) plt.scatter(X[:,0], X[:,1], facecolors='none', edgecolors='b') ###Output (50, 2) ###Markdown 2.2 Implementing PCAPCA由两部分组成:1. 计算数据的方差矩阵2. 用SVD计算特征向量$(U_1, U_2, ..., U_n)$在PCA之前,记得标准化数据。然后计算方差矩阵,如果你的每条样本数据是以行的形式表示,那么计算公式如下:![image.png](../img/7_3.png)接着就可以用SVD计算主成分![image.png](../img/7_4.png)U包含了主成分,**每一列**就是我们数据要映射的向量,S为对角矩阵,为奇异值。 ###Code def featureNormalize(X): means = X.mean(axis=0) stds = X.std(axis=0, ddof=1) X_norm = (X - means) / stds return X_norm, means, stds ###Output _____no_output_____ ###Markdown 由于我们的协方差矩阵为X.T@X, X中每行为一条数据,我们是想要对列(特征)做压缩。这里由于是对协方差矩阵做SVD(), 所以得到的入口基其实为 V‘,出口基为V,可以打印出各自的shape来判断。故我们这里是对 数据集的列 做压缩。 ###Code def pca(X): sigma = (X.T @ X) / len(X) U, S, V = np.linalg.svd(sigma) return U, S, V X_norm, means, stds = featureNormalize(X) U, S, V = pca(X_norm) print(U[:,0]) plt.figure(figsize=(7, 5)) plt.scatter(X[:,0], X[:,1], facecolors='none', edgecolors='b') # 没看懂 S*U=? plt.plot([means[0], means[0] + 1.5*S[0]*U[0,0]], [means[1], means[1] + 1.5*S[0]*U[0,1]], c='r', linewidth=3, label='First Principal Component') plt.plot([means[0], means[0] + 1.5*S[1]*U[1,0]], [means[1], means[1] + 1.5*S[1]*U[1,1]], c='g', linewidth=3, label='Second Principal Component') plt.grid() # changes limits of x or y axis so that equal increments of x and y have the same length plt.axis("equal") plt.legend() ###Output [-0.70710678 -0.70710678] ###Markdown 2.3 Dimensionality Reduction with PCA 2.3.1 Projecting the data onto the principal components ###Code def projectData(X, U, K): Z = X @ U[:,:K] return Z # project the first example onto the first dimension # and you should see a value of about 1.481 Z = projectData(X_norm, U, 1) Z ###Output _____no_output_____ ###Markdown 2.3.2 Reconstructing an approximation of the data重建数据 ###Code def recoverData(Z, U, K): X_rec = Z @ U[:,:K].T return X_rec # you will recover an approximation of the first example and you should see a value of # about [-1.047 -1.047]. 
X_rec = recoverData(Z, U, 1) X_rec[0] ###Output _____no_output_____ ###Markdown 2.3.3 Visualizing the projections ###Code plt.figure(figsize=(7,5)) plt.axis("equal") plot = plt.scatter(X_norm[:,0], X_norm[:,1], s=30, facecolors='none', edgecolors='b',label='Original Data Points') plot = plt.scatter(X_rec[:,0], X_rec[:,1], s=30, facecolors='none', edgecolors='r',label='PCA Reduced Data Points') plt.title("Example Dataset: Reduced Dimension Points Shown",fontsize=14) plt.xlabel('x1 [Feature Normalized]',fontsize=14) plt.ylabel('x2 [Feature Normalized]',fontsize=14) plt.grid(True) for x in range(X_norm.shape[0]): plt.plot([X_norm[x,0],X_rec[x,0]],[X_norm[x,1],X_rec[x,1]],'k--') # 输入第一项全是X坐标,第二项都是Y坐标 plt.legend() ###Output _____no_output_____ ###Markdown 2.4 Face Image Dataset在这部分练习中,您将人脸图像上运行PCA,看看如何在实践中使用它来减少维度。 ###Code mat = loadmat('data/ex7faces.mat') X = mat['X'] print(X.shape) def displayData(X, row, col): fig, axs = plt.subplots(row, col, figsize=(8,8)) for r in range(row): for c in range(col): axs[r][c].imshow(X[r*col + c].reshape(32,32).T, cmap = 'Greys_r') axs[r][c].set_xticks([]) axs[r][c].set_yticks([]) displayData(X, 10, 10) ###Output _____no_output_____ ###Markdown 2.4.1 PCA on Faces ###Code X_norm, means, stds = featureNormalize(X) U, S, V = pca(X_norm) U.shape, S.shape displayData(U[:,:36].T, 6, 6) ###Output _____no_output_____ ###Markdown 2.4.2 Dimensionality Reduction ###Code z = projectData(X_norm, U, K=36) X_rec = recoverData(z, U, K=36) displayData(X_rec, 10, 10) ###Output _____no_output_____
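###Markdown One extra check worth adding here (it is not part of the original ex7 hand-out): since our `pca()` runs SVD on the covariance matrix, the entries of `S` are the variances along each principal direction, so we can ask how much variance the K = 36 projection above actually retains. The 99% threshold below is just a common rule of thumb, not a number taken from the assignment. ###Code
# Fraction of variance kept by the K components used above.
K = 36
retained = S[:K].sum() / S.sum()
print('Variance retained with K = %d components: %.2f%%' % (K, 100 * retained))

# Smallest K that keeps at least 99% of the variance (99% is an assumed threshold).
cumulative = np.cumsum(S) / S.sum()
K_99 = np.argmax(cumulative >= 0.99) + 1
print('Smallest K retaining at least 99%% of the variance: %d' % K_99)
###Output _____no_output_____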
Data Science Fundamentals for Data Analysts/ipynb/1.1.3 Lab - Hands-on with Databricks.ipynb
###Markdown d-sandbox Hands-on with Databricks**Objective**: *Familiarize yourself with the Databricks platform, the use of notebooks, and basic SQL operations in Databricks.*In this lab, you will complete a series of exercises to familiarize yourself with the content covered in Lesson 0.1. Exercise 1In order to execute code with Databricks, you need to have your notebook attached to an active cluster. Ensure that:1. You have created a cluster following the walkthrough of the video in this lesson.2. Your cluster's Databricks Runtime Version is 7.2 ML.3. Your cluster is active and running.4. This notebook is attached to your cluster. Exercise 2The fundamental piece of a Databricks notebook is the command cell. We use command cells to write and run our code. Complete the following:1. Insert a command cell beneath this one.2. Write `1 + 1` in the command cell.3. Run the command cell.4. Verify that the output of the executed code is `2`. ###Code 1 + 1 ###Output _____no_output_____ ###Markdown Exercise 3Command cells can also be used to add comments using a lightweight markup language named *markdown*. (That's how these command cells are written).Complete the following:1. Double-click on this command cell.2. Notice the *magic command* at the top of the command cell that enables the use of markdown.3. Insert a command cell beneath this one and add the magic command to the first line.4. Write `THE MAGIC COMMAND FOR MARKDOWN IS _____` with the magic command filling the blank. `THE MAGIC COMMAND FOR MARKDOWN IS %md` Exercise 4Throughout this course, we will be using a setup file in each of our notebooks that connects Databricks to our data.Complete the following:1. Run the below command cell to execute the setup file.2. Insert a SQL command cell beneath the command cell containg the setup file.3. Query all of the data in the table **`dsfda.ht_daily_metrics`** using the query `SELECT * FROM dsfda.ht_daily_metrics`.4. Examine the displayed table to learn about its columns and rows. ###Code %run "../../Includes/Classroom-Setup" %sql SELECT * FROM dsfda.ht_daily_metrics ###Output _____no_output_____ ###Markdown Exercise 5Throughout this course, we will need to manipulate data and save it as new tables using Delta, just as we did in the video during the lesson.Complete the following:1. Insert a new SQL command cell beneath this one.2. Write a SQL query to return rows from the **dsfda.ht_users** table where the individual's lifestyle is `"Sedentary"`.3. Use the SQL query to create a new Delta table named **dsfda.ht_users_sedentary** and store the data in the following location: `"/dsfda/ht-users-sedentary"`. ###Code %sql CREATE OR REPLACE TABLE dsfda.ht_users_sedentary USING DELTA LOCATION "/dsfda/ht-users-sedentary" AS ( SELECT * FROM dsfda.ht_users WHERE lifestyle = 'Sedentary' ) %sql SELECT * FROM dsfda.ht_users_sedentary ###Output _____no_output_____
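###Markdown Databricks notebooks can mix languages, so the Exercise 5 query can also be issued from Python if you prefer. The sketch below is an illustration rather than part of the lab: it assumes a Python cell in the same notebook, where Databricks provides the `spark` SparkSession and the `display` helper automatically, and it reuses the `dsfda.ht_users` table created by the setup script. ###Code
# Hypothetical PySpark version of the Exercise 5 query.
sedentary_df = spark.sql(
    "SELECT * FROM dsfda.ht_users WHERE lifestyle = 'Sedentary'"
)

print(sedentary_df.count())  # number of sedentary users returned
display(sedentary_df)        # Databricks' built-in rich table rendering
###Output _____no_output_____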
notebooks/stm.ipynb
###Markdown In this notebook, we'll examine computing ciliary beat frequency (CBF) from a couple example videos using the core techniques from the [2015 Quinn *et al* paper in *Science Translational Medicine*](http://dx.doi.org/10.1126/scitranslmed.aaa1233). CBF is a quantity that clinicians and researchers have used for some time as an objective measure of ciliary motion. It is precisely what it sounds like: the frequency at which cilia beat. This can be easily done in a GUI-viewer like ImageJ (now Fiji) by clicking on a single pixel of the video and asking for the frequency, but in Python this requires some additional work. With any spectral analysis of a time series, we'll be presented with a range of frequencies present at any given location. In our paper, we limited the scope of these frequencies to only the *dominant* frequency that was present *at each pixel*. In essence, we compute the frequency spectra at each pixel of a video of cilia, then strip out all the frequencies at each pixel except for the one with the greatest power. There are three main ways in which we computed CBF. Each of these is implemented in `stm.py`. 0: Preliminaries Here are some basic imports we'll need for the rest of the notebook. ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt import scipy.signal as signal import stm # Our package. # Our two example videos. v_norm = np.load("../data/normal.npy") v_dysk = np.load("../data/dyskinetic.npy") # We'll plot the first frame of these two videos to give a sense of them. plt.figure() plt.subplot(1, 2, 1) plt.imshow(v_norm[0], cmap = "gray") plt.subplot(1, 2, 2) plt.imshow(v_dysk[0], cmap = "gray") ###Output _____no_output_____ ###Markdown 1: "Raw" FFT-based CBF The title is something of a misnomer: the computed CBF is not "raw" in any sense, and all our CBF computations use the FFT in some regard. This technique, however, is the only that *explicitly* uses the FFT. It's also the most basic technique, as it doesn't involve any shifting or windowing of the original signal. As a result, it's very fast, but can produce a lot of noise. Here's what it looks like. ###Code h1_norm = stm.cbf(v_norm, method = "fft") h1_dysk = stm.cbf(v_dysk, method = "fft") plt.figure() plt.subplot(1, 2, 1) plt.title("Normal") plt.imshow(h1_norm, cmap = "Blues") plt.colorbar() plt.subplot(1, 2, 2) plt.title("Dyskinetic") plt.imshow(h1_dysk, cmap = "Reds") plt.colorbar() ###Output _____no_output_____ ###Markdown This is a pretty noisy estimation but still gives a good idea of where certain frequencies are present. Note that in some locations around the cilia in both cases, there is saturation of the signal: large pixel areas that are indicating maximal CBF. These are likely noise as well. A common post-processing step we would perform is a median filter to dampen spurious signals. The only drawback of this approach is that it assumes a very small amount of noise relative to signal; the reality is likely that there is more noise than this approach implicitly assumes. Nonetheless it is still worthwhile: ###Code h1_norm_filt = signal.medfilt2d(h1_norm, 5) # Kernel size of 5x5. 
h1_dysk_filt = signal.medfilt2d(h1_dysk, 5) plt.figure() plt.subplot(1, 2, 1) plt.title("Normal") plt.imshow(h1_norm_filt, cmap = "Blues") plt.colorbar() plt.subplot(1, 2, 2) plt.title("Dyskinetic") plt.imshow(h1_dysk_filt, cmap = "Reds") plt.colorbar() ###Output _____no_output_____ ###Markdown It was also useful to look at histograms of the frequencies that are present, discarding the spatial representation in favor of a distribution of frequencies. ###Code plt.figure() plt.subplot(2, 2, 1) plt.title("Normal") _ = plt.hist(h1_norm.flatten(), bins = 20) plt.subplot(2, 2, 2) plt.title("Dyskinetic") _ = plt.hist(h1_dysk.flatten(), bins = 20) plt.subplot(2, 2, 3) plt.title("Normal (Median Filtered)") _ = plt.hist(h1_norm_filt.flatten(), bins = 20) plt.subplot(2, 2, 4) plt.title("Dyskinetic (Median Filtered)") _ = plt.hist(h1_dysk_filt.flatten(), bins = 20) ###Output _____no_output_____ ###Markdown 2: Periodogram A periodogram is an estimate of the power spectral density (PSD, hence the name) of the signal, and is a step up from pixel-based FFT...but only 1 step. It performs a lot of the same steps as in the first method under-the-hood, and thus the code in the attached module is considerably shorter. In theory, this method is a bit more robust to noise. ###Code h2_norm = stm.cbf(v_norm, method = "psd") h2_dysk = stm.cbf(v_dysk, method = "psd") plt.figure() plt.subplot(1, 2, 1) plt.title("Normal") plt.imshow(h2_norm, cmap = "Blues") plt.colorbar() plt.subplot(1, 2, 2) plt.title("Dyskinetic") plt.imshow(h2_dysk, cmap = "Reds") plt.colorbar() ###Output _____no_output_____ ###Markdown There are some minute differences from the first method, but not much. ###Code plt.figure() plt.subplot(2, 2, 1) plt.title("Normal (Method 1)") plt.imshow(h1_norm, cmap = "Blues") plt.colorbar() plt.subplot(2, 2, 2) plt.title("Dyskinetic (Method 1)") plt.imshow(h1_dysk, cmap = "Reds") plt.colorbar() plt.figure() plt.subplot(2, 2, 3) plt.title("Normal (Method 2)") plt.imshow(h2_norm, cmap = "Blues") plt.colorbar() plt.subplot(2, 2, 4) plt.title("Dyskinetic (Method 2)") plt.imshow(h2_dysk, cmap = "Reds") plt.colorbar() ###Output _____no_output_____ ###Markdown We can do our post-processing. ###Code h2_norm_filt = signal.medfilt2d(h2_norm, 5) # Kernel size of 5x5. h2_dysk_filt = signal.medfilt2d(h2_dysk, 5) plt.figure() plt.subplot(1, 2, 1) plt.title("Normal") plt.imshow(h2_norm_filt, cmap = "Blues") plt.colorbar() plt.subplot(1, 2, 2) plt.title("Dyskinetic") plt.imshow(h2_dysk_filt, cmap = "Reds") plt.colorbar() plt.figure() plt.subplot(2, 2, 1) plt.title("Normal") _ = plt.hist(h2_norm.flatten(), bins = 20) plt.subplot(2, 2, 2) plt.title("Dyskinetic") _ = plt.hist(h2_dysk.flatten(), bins = 20) plt.subplot(2, 2, 3) plt.title("Normal (Median Filtered)") _ = plt.hist(h2_norm_filt.flatten(), bins = 20) plt.subplot(2, 2, 4) plt.title("Dyskinetic (Median Filtered)") _ = plt.hist(h2_dysk_filt.flatten(), bins = 20) ###Output _____no_output_____ ###Markdown 3: Welch Periodogram Think of Welch's algorithm as a post-processing of the periodogram: it performs window-based smoothing on the resulting frequency spectra, dampening noise at the expense of frequency resolution. Given the propensity of frequency-based noise to appear in the resulting spectra, this trade-off is often preferred. 
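###Markdown The `stm.py` module itself is not reproduced in this notebook, so as a rough illustration only, a per-pixel Welch estimate of the dominant frequency could look something like the sketch below. The frame rate `fps` is an assumed placeholder (the real value comes from the recording), and the packaged `stm.cbf` implementation used next may differ in its details. ###Code
# Sketch (not the actual stm.py code): Welch periodogram at every pixel,
# keeping only the dominant frequency. The double loop is slow but clear.
def cbf_welch_sketch(video, fps=200.0):
    n_frames, rows, cols = video.shape
    heatmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            freqs, power = signal.welch(video[:, i, j], fs=fps)
            heatmap[i, j] = freqs[np.argmax(power)]
    return heatmap
###Output _____no_output_____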
###Code h3_norm = stm.cbf(v_norm, method = "welch") h3_dysk = stm.cbf(v_dysk, method = "welch") plt.figure() plt.subplot(1, 2, 1) plt.title("Normal") plt.imshow(h3_norm, cmap = "Blues") plt.colorbar() plt.subplot(1, 2, 2) plt.title("Dyskinetic") plt.imshow(h3_dysk, cmap = "Reds") plt.colorbar() h3_norm_filt = signal.medfilt2d(h3_norm, 5) # Kernel size of 5x5. h3_dysk_filt = signal.medfilt2d(h3_dysk, 5) plt.figure() plt.subplot(1, 2, 1) plt.title("Normal") plt.imshow(h3_norm_filt, cmap = "Blues") plt.colorbar() plt.subplot(1, 2, 2) plt.title("Dyskinetic") plt.imshow(h3_dysk_filt, cmap = "Reds") plt.colorbar() ###Output _____no_output_____ ###Markdown Strangely, the dyskinetic video seems to see a considerable increase in frequencies across the board once the median filter is applied. We'll look at the histogram for a better view. ###Code plt.figure() plt.subplot(2, 2, 1) plt.title("Normal") _ = plt.hist(h3_norm.flatten(), bins = 20) plt.subplot(2, 2, 2) plt.title("Dyskinetic") _ = plt.hist(h3_dysk.flatten(), bins = 20) plt.subplot(2, 2, 3) plt.title("Normal (Median Filtered)") _ = plt.hist(h3_norm_filt.flatten(), bins = 20) plt.subplot(2, 2, 4) plt.title("Dyskinetic (Median Filtered)") _ = plt.hist(h3_dysk_filt.flatten(), bins = 20) ###Output _____no_output_____
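###Markdown As a closing comparison (an addition to the original analysis, not part of it), we can reduce each median-filtered map to a single summary number per method, which makes the normal-versus-dyskinetic contrast easy to scan. ###Code
# Median dominant frequency of each median-filtered map, per method.
for name, norm_map, dysk_map in [("FFT", h1_norm_filt, h1_dysk_filt),
                                 ("Periodogram", h2_norm_filt, h2_dysk_filt),
                                 ("Welch", h3_norm_filt, h3_dysk_filt)]:
    print("%-12s normal: %6.2f   dyskinetic: %6.2f"
          % (name, np.median(norm_map), np.median(dysk_map)))
###Output _____no_output_____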
Botnets/Phases/Phase 3/Experiments/New Step/.ipynb_checkpoints/Step 5.1 Experiment 2 ML-checkpoint.ipynb
###Markdown Step 5.1: Experiment 1: Machine Learning --- 1. Imports ###Code import warnings warnings.filterwarnings('ignore') import math import numpy as np #operaciones matriciales y con vectores import pandas as pd #tratamiento de datos import random import matplotlib.pyplot as plt #gráficos import seaborn as sns import joblib from sklearn import naive_bayes from sklearn import tree from sklearn.neighbors import KNeighborsClassifier from sklearn.linear_model import LogisticRegression from sklearn import tree from sklearn import linear_model from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import classification_report from sklearn.model_selection import train_test_split #metodo de particionamiento de datasets para evaluación from sklearn import preprocessing from sklearn import metrics from sklearn.model_selection import cross_val_score from sklearn.model_selection import cross_validate from sklearn.model_selection import GridSearchCV ###Output _____no_output_____ ###Markdown --- 2. Load the Standardize B/M Only Stratosphere Dataset ###Code BM_onlyStratosphere = pd.read_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\Standardized\SDTrainExp2.csv", delimiter = ",") BM_onlyStratosphere.head(2) BM_onlyStratosphere.shape ###Output _____no_output_____ ###Markdown --- 3. Let's Create a copy of the original dataset... ###Code BM_onlyStratosphere_copy = BM_onlyStratosphere.copy() BM_onlyStratosphere_copy.shape ###Output _____no_output_____ ###Markdown --- 4. Let's create a Dataframe to save the Accuracies... ###Code acc_Machine_Learning = pd.DataFrame(columns=['Name',"Accuracy_Value","CV"]) ###Output _____no_output_____ ###Markdown --- --- 5. :::::::: MACHINE LEARNING :::::::: 5.1 Gaussian Naive Bayes ###Code x = BM_onlyStratosphere_copy.iloc[:,:-1] y = BM_onlyStratosphere_copy['Type'] gnb = naive_bayes.GaussianNB() params = {} gscv_gnb = GridSearchCV(estimator=gnb, param_grid=params, cv=10, return_train_score=True) gscv_gnb.fit(x,y) gscv_gnb.cv_results_ ###Output _____no_output_____ ###Markdown The **best_score (Mean cross-validated score of the best_estimator)** is : ###Code gscv_gnb.best_score_ ###Output _____no_output_____ ###Markdown The **best estimator (model)** is : ###Code gnb = gscv_gnb.best_estimator_ gnb acc_Machine_Learning= acc_Machine_Learning.append({'Name' : 'GaussianNB ', 'Accuracy_Value' : gscv_gnb.best_score_, 'CV' : 10}, ignore_index=True) acc_Machine_Learning ###Output _____no_output_____ ###Markdown --- 5.2 Decision Tree Classifier ###Code dtc = tree.DecisionTreeClassifier() tree_params = {'criterion':['gini','entropy'], 'max_depth':[3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20], 'random_state' : [1234] } gscv_dtc = GridSearchCV(dtc, tree_params, cv=10) gscv_dtc.fit(x,y) ###Output _____no_output_____ ###Markdown The **best_score (Mean cross-validated score of the best_estimator)** is : ###Code gscv_dtc.best_score_ ###Output _____no_output_____ ###Markdown The **best estimator (model)** is : ###Code dtc = gscv_dtc.best_estimator_ dtc acc_Machine_Learning= acc_Machine_Learning.append({'Name' : dtc, 'Accuracy_Value' : gscv_dtc.best_score_, 'CV' : 10}, ignore_index=True) acc_Machine_Learning ###Output _____no_output_____ ###Markdown --- 5.3 KNN ###Code knn = KNeighborsClassifier() knn_params = {'n_neighbors':[1,3,5], 'weights' : ['uniform','distance'], 'metric':['euclidean','manhattan']} # gscv_knn = GridSearchCV(knn, knn_params, cv=5, n_jobs=-1) # gscv_knn.fit(x,y) ###Output _____no_output_____ ###Markdown The **best_score (Mean 
cross-validated score of the best_estimator)** is : ###Code # gscv_knn.best_score_ ###Output _____no_output_____ ###Markdown The **best estimator (model)** is : ###Code # knn = gscv_knn.best_estimator_ # knn # acc_Machine_Learning= acc_Machine_Learning.append({'Name' : knn, 'Accuracy_Value' : gscv_knn.best_score_, 'CV' :5}, # ignore_index=True) # acc_Machine_Learning ###Output _____no_output_____ ###Markdown --- 5.4 Logistic Regression ###Code logreg = LogisticRegression()
params = {}
gscv_lg = GridSearchCV(logreg, params, cv=10)
gscv_lg.fit(x,y) ###Output _____no_output_____ ###Markdown The **best_score (Mean cross-validated score of the best_estimator)** is : ###Code gscv_lg.best_score_ ###Output _____no_output_____ ###Markdown The **best estimator (model)** is : ###Code logreg = gscv_lg.best_estimator_
logreg
acc_Machine_Learning= acc_Machine_Learning.append({'Name' : logreg, 'Accuracy_Value' : gscv_lg.best_score_, 'CV' :10},
                                                  ignore_index=True)
acc_Machine_Learning ###Output _____no_output_____ ###Markdown --- 5.5 Random Forest Classifier ###Code clf = RandomForestClassifier()
clf_param = {
    'n_estimators': [64, 128],
    'max_features': ['auto', 'sqrt', 'log2'],
    'max_depth' : [4,5,6,7,8,9,10,11,12,13,14,15],
    'criterion' :['gini', 'entropy'],
    'random_state' : [1234]
}
gscv_rfc = GridSearchCV(clf, clf_param, cv=10)
gscv_rfc.fit(x,y) ###Output _____no_output_____ ###Markdown The **best_score (Mean cross-validated score of the best_estimator)** is : ###Code gscv_rfc.best_score_ ###Output _____no_output_____ ###Markdown The **best estimator (model)** is : ###Code clf = gscv_rfc.best_estimator_
clf
acc_Machine_Learning= acc_Machine_Learning.append({'Name' : clf, 'Accuracy_Value' : gscv_rfc.best_score_, 'CV' :10},
                                                  ignore_index=True)
acc_Machine_Learning ###Output _____no_output_____ ###Markdown --- 6. Let's save the accuracies ###Code acc_Machine_Learning = acc_Machine_Learning.sort_values(by=['Accuracy_Value'], ascending=False)
acc_Machine_Learning
acc_Machine_Learning.to_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\Accuracies\MLAccuraciesExp1.csv",sep=',',index=False) ###Output _____no_output_____ ###Markdown --- 7. Let's choose the best ML Algorithm ###Code acc_Machine_Learning.iloc[0,:] ###Output _____no_output_____ ###Markdown --- --- 8.
::::::::::::::::: TEST WITH REAL DATA ::::::::::::::::::::: ###Code b = pd.read_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\TEST\BTestExp2.csv", delimiter = ",") b.shape ###Output _____no_output_____ ###Markdown -- ###Code m = malign_dataset = pd.read_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\TEST\MTestExp2.csv", delimiter = ",") m.shape ###Output _____no_output_____ ###Markdown --- ###Code frames = [b, m] test_dataset = pd.concat(frames) ###Output _____no_output_____ ###Markdown --- ###Code le = joblib.load('./Tools/label_encoder_type_exp2.encoder') test_dataset.Type.unique() test_dataset.Type = le.transform(test_dataset.Type) test_dataset.Type.unique() types = test_dataset.Type test_dataset = test_dataset.drop(['Type'], axis=1) test_dataset.columns ###Output _____no_output_____ ###Markdown --- ###Code test_dataset = test_dataset[['Avg_bps','Avg_pps' ,'Bytes','p2_ib','duration','number_sp','number_dp','First_Protocol' ,'first_sp','p3_ib','first_dp','p1_ib','p3_d']] ###Output _____no_output_____ ###Markdown -- ###Code test_dataset.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 700 entries, 0 to 349 Data columns (total 13 columns): Avg_bps 700 non-null int64 Avg_pps 700 non-null int64 Bytes 700 non-null int64 p2_ib 700 non-null float64 duration 700 non-null float64 number_sp 700 non-null int64 number_dp 700 non-null int64 First_Protocol 700 non-null object first_sp 700 non-null int64 p3_ib 700 non-null float64 first_dp 700 non-null int64 p1_ib 700 non-null float64 p3_d 700 non-null float64 dtypes: float64(5), int64(7), object(1) memory usage: 76.6+ KB ###Markdown --- First_Protocol ###Code le = joblib.load('./Tools/label_encoder_first_protocol_exp2.encoder') test_dataset.First_Protocol.unique() test_dataset.First_Protocol = le.transform(test_dataset.First_Protocol) test_dataset.First_Protocol.unique() ###Output _____no_output_____ ###Markdown --- ###Code scaler = joblib.load("./Tools/scalerExp2.save") test_dataset[['Avg_bps','Avg_pps' ,'Bytes','p2_ib','duration','number_sp','number_dp' ,'p3_ib','p1_ib','p3_d']] = scaler.transform(test_dataset[['Avg_bps','Avg_pps' ,'Bytes','p2_ib','duration','number_sp','number_dp' ,'p3_ib','p1_ib','p3_d']]) test_dataset.head(2) clf y_pred= clf.predict(test_dataset) y_pred unique, counts = np.unique(y_pred, return_counts=True) dict(zip(unique, counts)) y_pred types = types.astype(np.int64) cm= metrics.confusion_matrix(types, y_pred) plt.imshow(cm, cmap=plt.cm.Blues) plt.title("Matriz de confusión") plt.colorbar() tick_marks = np.arange(3) plt.xticks(tick_marks, ['0','1']) plt.yticks(tick_marks, ['0','1']) target_names = ['1', '0'] print(classification_report(types, y_pred, target_names=target_names)) ###Output precision recall f1-score support 1 1.00 1.00 1.00 350 0 1.00 1.00 1.00 350 accuracy 1.00 700 macro avg 1.00 1.00 1.00 700 weighted avg 1.00 1.00 1.00 700 ###Markdown --- ---- Let's save the 3 best models models... ###Code joblib.dump(clf,"./Models/clf.save") joblib.dump(dtc,"./Models/dtc.save") joblib.dump(gnb,"./Models/gnb.save") ###Output _____no_output_____
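###Markdown As a brief sketch of how the persisted models could be used in a later session (this cell is an addition, not part of the original experiment), we can reload them with `joblib.load` and re-score the same held-out test set. ###Code
# Reload the saved models and re-check accuracy on the prepared test set;
# the paths mirror the joblib.dump calls above.
clf_loaded = joblib.load("./Models/clf.save")
dtc_loaded = joblib.load("./Models/dtc.save")
gnb_loaded = joblib.load("./Models/gnb.save")

for name, model in [("RandomForest", clf_loaded),
                    ("DecisionTree", dtc_loaded),
                    ("GaussianNB", gnb_loaded)]:
    preds = model.predict(test_dataset)
    print(name, "accuracy:", accuracy_score(types, preds))
###Output _____no_output_____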
Summer-19/Section/Section 3.ipynb
###Markdown EEP/IAS 118 - Section 3 Manipulating (more) Data, Attractive Figures, and Practice Problems! July 11, 2019Today's coding portion of the section will help get us familiar with a few packages that will help us improve the quality of our output tables and figures. ###Code library(tidyverse) library(haven) library(xtable) sleepdata <- read_dta("sleep75.dta") ###Output _____no_output_____ ###Markdown Working with IndexesWe've seen how to manipulate datasets by adding in variables or removing certain observations, but what if we want to obtain one element/a set of elements from a known location? VectorsLet's start by working with a vector: ###Code vec <- rnorm(10, mean =4, sd = 2) vec ###Output _____no_output_____ ###Markdown We created a vector of length 10 of random draws from a N(4,4) distribution. Now if we were interested in getting just the third element of this vector, we can do that like so: ###Code vec[3] ###Output _____no_output_____ ###Markdown The `[]` lets __R__ know that you want to select on position, while the 3 is our instruction for which position to pull from. (note that since we're working with a vector and not a dataframe, we can't use `$` to call a certain column). If we were interested in elements 5 through 7, we can pull them with the use of `:` ###Code vec[5:7] ###Output _____no_output_____ ###Markdown Finally, if we wanted to pull the first, fourth, and ninth elements we can do that using `c()`: ###Code vec[c(1,4,9)] ###Output _____no_output_____ ###Markdown What `c()` is doing is combining all the elements given to it into a vector themselves. We can see that by running it on its own. ###Code newvec <- c(30,34,38,42) newvec is.vector(newvec) ###Output _____no_output_____ ###Markdown Matrices and Data FramesWhat happens when we are working multidimensional objects? Largely the same thing! Now we just need to refer to position by specifying `[row, column]`. It is the same process for whether we're working with a matrix or a data frame. ###Code # make a matrix mat40 <- matrix(1:40, nrow = 4, ncol = 10) mat40 is.matrix(mat40) # Get the first element (1) mat40[1,1] # Get the element from the 3rd row and 6th column mat40[3,6] # Get the fifth, sixth, and seventh elements from the 2nd row mat40[2, 5:7] # Get all of column five mat40[, 5] # Get all of row four mat40[4,] # Get the fifth, sixth, and seventh elements from the first AND 2nd rows mat40[1:2, 5:7] # Get the first and fourth elements from the third row mat40[3,c(1,4)] ###Output _____no_output_____ ###Markdown We have a bunch of flexibility here to call one element or multiple elements at the same time, the only restriction being that we follow the `[row, col]` syntax.The process for data frames is pretty similar, albeit with one extension. Now that we have variables, we can combine a position call with the `$` for a specific variable. ###Code sleepdf <- sleepdata %>% select(age, educ, exper, hrwage) head(sleepdf) nrow(sleepdf) ncol(sleepdf) dim(sleepdf) is.data.frame(sleepdf) # Get the first row sleepdf[1,] # Get the head of the age variable head(sleepdf$age) # Get the fourth row element of column 4 (hrwage) sleepdf[4,4] # Alternatively, we can do the same thing by refering to the specific variable/column sleepdf$hrwage[4] ###Output _____no_output_____ ###Markdown Note that when we use the `$` to call a specific variable, __R__ now treats that variable as a vector, so we can refer to its elements with `[]` in one dimension. 
In that case, our call `sleepdf$hrwage[4]` gives us just a number, whereas the previous call of `sleepdf[4,4]` gives us the same value but presented in a 1x1 table. ggplot2One of the sad facts about (most) economic research papers is that they don't always have the most aesthetically pleasing figures. For many data visualization applications or our own work we might want to have more control over the visuals and step them up a notch, making sure they convey useful information and have informative labels/captions. This is where the __ggplot2__ package comes in.We started off using __R's__ built-in plot function, which let us produce scatterplots and construct histograms of all sorts of variables. However, it doesn't look the best and has some ugly naming conventions. __ggplot2__ will give us complete control over our figure and allow us to get as in depth with it as we want. ggplot2 Basic SyntaxLet's start by getting familiar with the basic syntax of __ggplot2__. It's syntax is a little bit different than some of the functions we've used before, but once we figure it out it makes thing nice and easy as we make more and more professional-looking figures.To start a plot, we start with the function `ggplot()`This function initializes an empty plot and passes data to other plots that we'll add on top. We can also use this function to define our dataset or specify what our x and y variables are. ###Code ggplot() ###Output _____no_output_____ ###Markdown Okay, so not the most impressive yet. We get a little bit more if we specify our data and our x/y variables. To specify the data, we add the argument `data = "dataname"` to the function. To specify which variable is on the x axis and which is on the y, we use the `aes(x= "xvar", y= "yvar")` argument. `aes()` is short for "aesthetics" and allows us to automatically pass these variables along as our x and y variables for the plots we add.Let's say we're interested in using our `sleepdata` to see the relationship between age and hourly wage in our sample ###Code ggplot(data = sleepdata, aes(x = age, y = hrwage)) ###Output _____no_output_____ ###Markdown That is a start! Now we have labels on both of our axes corresponding to the assigned variable, and a grid corresponding to possible values of those variables. We will add geometries (sets of points, histograms, lines, etc.) by adding what we call "layers" - let's take a look at a few of the options. ScatterplotsNow let's add some points! If we want to get a sense of how age and hourly wage vary in our data, we can do that by just plotting the points. We can add points using the `geom_point()` function.Since we already declared our two variables, all we need to add the function with `+ geom_point()` to our existing code: ###Code ggplot(data = sleepdata, aes(x = age, y = hrwage)) + geom_point() ###Output _____no_output_____ ###Markdown And we get a a plot of all our points (note that we were warned that there are some missing values that get dropped). LabelsSometimes we might want to change the labels from the variable names to a more descriptive label, and possibly add a title. We can do that! We do this by adding the `labs()` function to our plot. ###Code ggplot(data = sleepdata, aes(x = age, y = hrwage)) + geom_point() + labs(title = "Relationship between Age and Hourly Wage", subtitle = "Nonmissing Sample", x = "Age (years)", y = "Hourly Wage ($)") ###Output _____no_output_____ ###Markdown Let's take a look at what we added to `labs()`. First, `title` gives us the main title at the top. 
Second, `subtitle` gives us another line in a smaller font below the main title. `x` and `y` correspond to our x and y labels, respectively. Changing PointsWhat if we want to change the color/shape/transparency of our points? We can do that by using arguments of `geom_point()`. ###Code ggplot(data = sleepdata, aes(x = age, y = hrwage)) + geom_point(colour = "blue", alpha = 0.4, size = 0.8) + labs(title = "Relationship between Age and Hourly Wage", subtitle = "Nonmissing Sample", x = "Age (years)", y = "Hourly Wage ($)") ###Output _____no_output_____ ###Markdown By adding `colour="blue"` we changed the color to blue. There are [a toooooon](http://sape.inf.usi.ch/sites/default/files/ggplot2-colour-names.png) of named colors that we could use instead (this gets really useful when we start splitting our data by group levels).`alpha = 0.4` is changing the transparency of our points to 40%. `size = 0.8` is reducing the size of the points to 80% of their original size. Splitting by GroupsWhat if we wanted to change the color of our points according to whether the individual is male or not? We can do that! ###Code ggplot(data = sleepdata, aes(x = age, y = hrwage)) + geom_point(aes(colour = factor(male))) + labs(title = "Relationship between Age and Hourly Wage", subtitle = "Nonmissing Sample", x = "Age (years)", y = "Hourly Wage ($)") ###Output _____no_output_____ ###Markdown By adding an aesthestic to our `geom_point` we can set the color to be determined by the value of $male$. By default, the zero value (i.e. female) gets a red color while a 1 value (female) gets a light green. We specify the variable as a `factor()` so that ggplot knows it is a discrete variable. What if we instead wanted to change color on a continuous scale? ###Code ggplot(data = sleepdata, aes(x = age, y = hrwage)) + geom_point(aes(colour = age)) + labs(title = "Relationship between Age and Hourly Wage", subtitle = "Nonmissing Sample", x = "Age (years)", y = "Hourly Wage ($)") ###Output _____no_output_____ ###Markdown Here the color is now a function of our continuous variable $age$, taking increasingly lighter values for higher ages.(note that __ggplot2__ lets you specify the color scale or color levels if you want, as well as nitpick the labels in the legend. In reality we can change anything that appears in the plot - we just have to choose the right option). One thing to note is that we can make other options conditional on variables in our data frame too. What if we wanted the shape of our points to depend on union participation, the color to vary with gender, and the size of the points to depend on the total minutes worked per week? We can do all that - even if it might look real gross. ###Code ggplot(data = sleepdata, aes(x = age, y = hrwage)) + geom_point(aes(colour = factor(male), shape = factor(union), size = totwrk)) + labs(title = "Relationship between Age and Hourly Wage", subtitle = "Nonmissing Sample", x = "Age (years)", y = "Hourly Wage ($)") ###Output _____no_output_____ ###Markdown While the above example is cluttered, it shows how we can take a simple scatterplot and use it to convey additional information in just one plot. LinesWe can add lines to our figure in a couple different ways. First, if we wanted to connect all the points in our data with a line, we would use the `geom_line()` function. ###Code sleepdata %>% group_by(age) %>% filter(row_number() == 1) %>% ggplot(aes(x=age, y = hrwage)) + geom_line() ###Output _____no_output_____ ###Markdown We can also add points just by adding another layer! 
###Code sleepdata %>% group_by(age) %>% filter(row_number() == 1) %>% ggplot(aes(x=age, y = hrwage)) + geom_line()+ geom_point(colour = "gray40") ###Output _____no_output_____ ###Markdown What if instead we wanted to add a vertical, horizontal, or sloped line in our plot? We use the layers `vline()`, `hline()`, and `abline()` for that.`vline()` is simple and really only needs the `xintercept` argument. Similarly, `hline` takes the `yintercept` argument. `abline` requires us to specify both a `slope` and an `intercept`.Let's say we wanted to add lines to the previous set of points (not connected): ###Code sleepdata %>% group_by(age) %>% filter(row_number() == 1) %>% ggplot(aes(x=age, y = hrwage)) + geom_point(colour = "gray40") + geom_vline(xintercept = 40, colour = "orchid4") + geom_hline(yintercept = 10) + geom_abline(intercept = 25, slope = -0.5, colour = "grey60", linetype = "dashed") ###Output _____no_output_____ ###Markdown Histograms and DistributionsSometimes we want to get information about one variable on its own. We can use __ggplot2__ to make histograms as well as predicted distributions!We use the function `geom_histogram()` to produce histograms. To get a basic histogram of $age$, ###Code ggplot(data = sleepdata, aes(x = age)) + geom_histogram() ###Output _____no_output_____ ###Markdown Notice that __ggplot2__ chooses a bin width by default, but we can change this by adding `binwidth`. We can also add labels as before.Note that if we want to change color, we now have two different options. `colour` now changes the outline color, while `fill` changes the interior color. ###Code ggplot(data = sleepdata, aes(x = age)) + geom_histogram(binwidth = 10, colour = "seagreen4") + labs(title = "Age Histogram", x = "Age (years)", y = "Count") ggplot(data = sleepdata, aes(x = age)) + geom_histogram(binwidth = 10, fill = "midnightblue") + labs(title = "Age Histogram", x = "Age (years)", y = "Count") ggplot(data = sleepdata, aes(x = age)) + geom_histogram(binwidth = 10, colour = "grey60", fill = "darkolivegreen1") + labs(title = "Age Histogram", x = "Age (years)", y = "Count") ggplot(data = sleepdata, aes(x = age)) + geom_histogram(aes(fill = factor(male)), binwidth = 10) + labs(title = "Age Histogram", x = "Age (years)", y = "Count") ###Output _____no_output_____ ###Markdown What if we wanted to get a sense of the estimated distribution of age rather than look at the histogram? We can do that with the `geom_density()` function! ###Code ggplot(data = sleepdata, aes(x = age)) + geom_density(fill = "gray60", colour= "navy") + labs(title = "Age Density", x = "Age (years)", y = "Density") ggplot(data = sleepdata, aes(x = age)) + geom_density(aes(colour = factor(male))) + labs(title = "Age Density", x = "Age (years)", y = "Density") ###Output _____no_output_____ ###Markdown RegressionOne cool thing that we can do with __ggplot2__ is produce a simple linear regression line directly in our plot! We use the `geom_smooth(method = "lm")` layer for that. ###Code wagereg <- lm(hrwage ~ age, data = sleepdata) summary(wagereg) ggplot(data = sleepdata, aes(x=age, y = hrwage)) + geom_point()+ geom_smooth(method = "lm") ###Output _____no_output_____ ###Markdown Notice that by default it gives us the 95% confidence interval too! We can change the confidence interval using the `level` argument. Multiple Linear Regression in ggplot2How would we go about plotting the results of a multiple linear regression? In this case we have to combine output from our regression with the `abline` function. 
###Code wagereg2 <- lm(hrwage ~ age + educ + male, data = sleepdata) summary(wagereg2) int <- wagereg2$coefficients[1] slope_age <- wagereg2$coefficients[2] ggplot(data = sleepdata, aes(x=age, y = hrwage)) + geom_point()+ geom_abline(intercept = int, slope = slope_age) + ylim(-20,40) ###Output _____no_output_____ ###Markdown I had to add the `ylim(-20,40)` to change the y limits so that we could see the line... because it now doesn't pass through the data! Recall that our slope coefficient $\hat\beta_{age}$ is now the _partial_ effect of age on hourly wage, holding education level and gender constant. As a result, the plot isn't quite as informative on top of the data points in a single set of dimensions. FacetsSometimes we might want to produce different panels of a plot for different _values_ of another variable. For instance, instead of changing the color of our points for males vs females earlier, we could have produced separate plots for data where males = 0 and females = 0 right next to each other. We do that using the `facet_grid()` layer. ###Code ggplot(data = sleepdata, aes(x=age, y = hrwage)) + geom_point()+ facet_grid(. ~ male) ###Output _____no_output_____ ###Markdown Here we put the panels next to each other, first for female ($male=0$) on the left and then for males on the left. We can also arrange them vertically by changing how we write the argument. ###Code ggplot(data = sleepdata, aes(x=age, y = hrwage)) + geom_point()+ facet_grid(male ~ .) ###Output _____no_output_____ ###Markdown Notice that when we put `male ~ .` we get the plots stacked vertically by age, whereas `. ~ male` splits them side by side. xtableThe package __xtable__ allows us to obtain high-quality formatted versions of our summary statistics tables, regression tables, and raw data to improve the look of our __R__ output. This is especially useful for generating professional-looking tables that can be added to a research paper... once we get into __RStudio__ on its own. Right now it's not as useful, since our Jupyter notebook already formats results in a specific way.One way we can get a sense of how it formats is by using it on our regression tables in our Jupyter notebook. ###Code reg <- lm(hrwage ~ educ + age + union + exper, data = sleepdata) summary(reg) xtable(reg) ###Output _____no_output_____ ###Markdown We'll spend more time with __xtable__ (and eventually __stargazer__ once we switch over to __RStudio__). Practice with ggplot!Let's try producing a couple of different plots. First, let's load in a new dataset - the _autos.dta_ file again. ###Code autodata <- read_dta("autos.dta") head(autodata) ###Output _____no_output_____
03_Investment_Valuations/investment-valuations.ipynb
###Markdown Investment ValuationsIn this activity, you’ll use the Alpaca API to get the pricing information for two stocks.Instructions:1. Create your environment file (`.env`) in your project folder. Make sure that this file holds your Alpaca API and secret keys.2. Import the Alpaca API and secret keys into the `investment_valuations.ipynb` notebook.3. Create the Alpaca API `REST` object by calling the Alpaca `tradeapi.REST` function and then setting the `alpaca_api_key`, `alpaca_secret_key`, and `api_version`.4. Review the two-stock `portfolio_df` DataFrame that we created for you in the starter notebook. Run this cell as you work through the remaining steps in this activity.5. Get the closing prices of the prior business day for the two stocks in question, Apple and Microsoft, by using the Alpaca `get_barset` function. Note that this requires values for `tickers`, `timeframe`, and the `start` and `end` dates. Add the `df` property to the end of this API call to automatically convert the response to a DataFrame.> **Note** The solution notebook uses `"2020-06-30"` for both the `start` and the `end` date.6. Get the closing prices for both stocks. Convert the values to floating point numbers so that you can use them in a future calculation.> **Hint** A floating point number is a numerical value that has decimal places. To convert a number to a `float`, call the [float function](https://docs.python.org/3/library/functions.htmlfloat) and pass the closing price as a parameter.7. Calculate the current value, in dollars, of the portfolio. To do so, multiply the closing price of each stock by the shares that the `portfolio_df` DataFrame supplies for you. Print the current value of each stock, and then add the values to get the total value of the portfolio.8. Create a Pandas DataFrame named `portfolio_value_df` that includes the current value, in dollars, of each stock. Plot a bar chart that visualizes the DataFrame based on the calculated values of each stock.9. Review the code in the cell provided in the starter notebook to learn how a pie chart is created using the current valuations of Apple and Microsoft. Run the cell so that you can visualize the information.> **Challenge Connection** An terrific way to visualize the value of each stock in a portfolio is by using a [Pandas pie chart](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.pie.html) You’ll need to create a pie chart in this week’s Challenge.References:[Alpaca API Docs](https://alpaca.markets/docs/api-documentation/)[Pandas pie plot](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.pie.html) Import the required libraries and dependencies ###Code # Import the required libraries and dependencies import os import requests import pandas as pd from dotenv import load_dotenv import alpaca_trade_api as tradeapi %matplotlib inline ###Output _____no_output_____ ###Markdown Step 1: Create your environment file (`.env`) in your project folder. Make sure that this file holds your Alpaca API and secret keys. Step 2: Import the Alpaca API and secret keys into the `investment_valuations.ipynb` notebook.* Load the environment variable by calling the `load_dotenv()` function.* Set the value of the variables `alpaca_api_key` and `alpaca_secret_key` equal to their respective environment variables. * Confirm the variables are available by checking the `type` of each. 
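###Markdown For reference (this is not part of the starter notebook), the `.env` file from Step 1 is just a plain-text file of KEY=VALUE pairs. The variable names below match the `os.getenv` calls used later in this notebook; the values are placeholders you replace with your own Alpaca keys. ###Code
# Hypothetical .env contents (placeholders only):
#
#   ALPACA_API_KEY=YOUR-ALPACA-API-KEY
#   ALPACA_SECRET_KEY=YOUR-ALPACA-SECRET-KEY
#
# Quick guard that both variables are actually picked up before continuing.
import os
from dotenv import load_dotenv

load_dotenv()
for name in ("ALPACA_API_KEY", "ALPACA_SECRET_KEY"):
    if os.getenv(name) is None:
        print(f"Warning: {name} was not found -- check your .env file")
###Output _____no_output_____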
###Code # Load the environment variables by calling the load_dotenv function load_dotenv() # Set Alpaca API key and secret by calling the os.getenv function and referencing the environment variable names # Set each environment variable to a notebook variable of the same name alpaca_api_key = os.getenv("ALPACA_API_KEY") alpaca_secret_key = os.getenv("ALPACA_SECRET_KEY") # Check the values were imported correctly by evaluating the type of each display(type(alpaca_api_key)) display(type(alpaca_secret_key)) ###Output _____no_output_____ ###Markdown Step 3: Create the Alpaca API `REST` object by calling the Alpaca `tradeapi.REST` function and then setting the `alpaca_api_key`, `alpaca_secret_key`, and `api_version`. ###Code # Create your Alpaca API REST object by calling Alpaca's tradeapi.REST function # Set the parameters to your alpaca_api_key, alpaca_secret_key and api_version="v2" alpaca = tradeapi.REST( alpaca_api_key, alpaca_secret_key, api_version="v2") ###Output _____no_output_____ ###Markdown Step 4: Review the two-stock `portfolio_df` DataFrame that we created for you in the starter notebook. Run this cell as you work through the remaining steps in this activity. ###Code # Set current amount of shares data shares_data = { "shares": [200, 320] } # Set the tickers tickers = ["MSFT", "AAPL"] # Create the shares DataFrame portfolio_df = pd.DataFrame(shares_data, index=tickers) # Display shares data portfolio_df ###Output _____no_output_____ ###Markdown Step 5: Get the closing prices of the prior business day for the two stocks in question, Apple and Microsoft, by using the Alpaca `get_barset` function. Note that this requires values for `tickers`, `timeframe`, and the `start` and `end` dates. Add the `df` property to the end of this API call to automatically convert the response to a DataFrame.* Confirm the value for `tickers` from a the prior step* Set the values for `start_date` and `end_date` using the `pd.Timestamp` function.* Set the `timeframe` value to 1 day.* Create the `portfolio_prices_df` DataFrame by setting it equal to the `alpaca.get_barset` function. ###Code # Confirm the values of the `tickers` variable created in the prior step tickers # Set the values for start_date and end_date using the pd.Timestamp function # Inside the function set the date parameter to the prior business day # Both the start_date and end_date should contain the same date value, as we looking for the closing price # of the prior business day. # Set the parameter tz to "America/New_York", # Set this all to the ISO format by calling the isoformat function start_date = pd.Timestamp("2020-06-30", tz="America/New_York").isoformat() end_date = pd.Timestamp("2020-06-30", tz="America/New_York").isoformat() # Set timeframe to one day (1D) for the Alpaca API timeframe = "1D" # Use the Alpaca get_barset function to gather the price information for each ticker # Include the function parameters: tickers, timeframe, start, and end # Be sure to call the df property to ensure that the returned information is set as a DataFrame portfolio_prices_df = alpaca.get_barset( tickers, timeframe, start = start_date, end = end_date ).df # Review the resulting `portfolio_prices_df` DataFrame. portfolio_prices_df ###Output _____no_output_____ ###Markdown Step 6: Get the closing prices for both stocks. Convert the values to floating point numbers so that you can use them in a future calculation. 
###Code # Fetch the current closing prices for Apple and Microsoft from the portfolio_prices_df DataFrame # Remember that the DataFrame generated from the Alpaca call incorporates multi-indexing # Be sure to set the values from the DataFrame to a float by calling the `float` function aapl_price = float(portfolio_prices_df["AAPL"]["close"]) msft_price = float(portfolio_prices_df["MSFT"]["close"]) print(aapl_price) print(type(msft_price)) ###Output 364.6 <class 'float'> ###Markdown Step 7: Calculate the current value, in dollars, of the portfolio. To do so, multiply the closing price of each stock by the shares that the `portfolio_df` DataFrame supplies for you. Print the current value of each stock, and then add the values to get the total value of the portfolio.1. Multipy the current price of each stock by the shares indicated in the `portfolio_df` DataFrame.2. Print the current value of each stock.3. Add the values together and print the current total vaue of the portfolio. ###Code # Compute the current value in dollars of each of the stock's in the portfolio # This is done by multiplying the price from the portfolio_prices_df DataFrame # and the shares from the portfolio_df DataFrame. msft_value = msft_price * portfolio_df.loc["MSFT"]["shares"] aapl_value = aapl_price * portfolio_df.loc["AAPL"]["shares"] # Print the current value of each stock in the stocks portfolio print(f"The current value of the {portfolio_df.loc['MSFT']['shares']} MSFT shares is ${msft_value:,.2f}") print(f"The current value of the {portfolio_df.loc['AAPL']['shares']} AAPL shares is ${aapl_value:,.2f}") # Print the total value of the current portfolio. print(f"The current value of the entire portfolio is ${(aapl_value + msft_value):,.2f}") ###Output The current value of the 200 MSFT shares is $40,700.00 The current value of the 320 AAPL shares is $116,672.00 The current value of the entire portfolio is $157,372.00 ###Markdown Step 8: Create a Pandas DataFrame named `portfolio_value_df` that includes the current value, in dollars, of each stock. Plot a bar chart that visualizes the DataFrame based on the calculated values of each stock.1. Create a portfolio_value_df DataFrame that reflects the current value of shares.2. Create a bar chart visualizing the values of the portfolio_value_df DataFrame. ###Code # Create a Pandas DataFrame that includes the current value of both MSFT and AAPL. portfolio_value_df = pd.DataFrame( {"MSFT": [msft_value], "AAPL": [aapl_value]} ) # Display portfolio_value_df DataFrame portfolio_value_df # Create a bar chart to show the value of shares # Give the plot a title and adjust the figure size portfolio_value_df.plot(kind="bar", title="Current Value in Dollars of Apple & Microsoft") ###Output _____no_output_____ ###Markdown Step 9: Review the code in the cell provided in the starter notebook to learn how a pie chart is created using the current valuations of Apple and Microsoft. Run the cell so that you can visualize the information.1. Create the DataFrame to use in the pie chart. 2. Use Pandas `plot.pie` to visualize the current value of each of the two stocks relative to the total portfolio. ###Code # Using the DataFrame created below: pie_values_df = pd.DataFrame( {'Value':[aapl_value, msft_value]}, index=['Apple', 'MSFT'] ) pie_values_df # Create a pie chart to visualize the proportion each stock is of the portfolio as a whole # Give the plot a title pie_values_df.plot.pie(y='Value', title='Portfolio Composition - 2020-07-14 ') ###Output _____no_output_____
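###Markdown A small optional extension (not required by the activity) is to express each position as a percentage weight of the total portfolio; this is the same information the pie chart conveys, just in numeric form, reusing the values computed in Step 7. ###Code
# Portfolio weights as percentages of total value.
total_value = aapl_value + msft_value
weights_df = pd.DataFrame(
    {"Weight (%)": [100 * aapl_value / total_value, 100 * msft_value / total_value]},
    index=["Apple", "MSFT"]
)
weights_df
###Output _____no_output_____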
notebooks/senzing-examples/Windows/senzing-G2Engine-exportConfig.ipynb
###Markdown G2Engine Guide - Export configurationThe `exportConfig()` method creates a JSON string with information about the Senzing engine's configuration. G2EngineThe G2Engine API... ###Code from G2Engine import G2Engine ###Output _____no_output_____ ###Markdown G2Engine initialization ###Code g2_engine = G2Engine() try: g2_engine.initV2(module_name, senzing_config_json, verbose_logging) except G2Exception.G2ModuleGenericException as err: print(g2_engine.getLastException()) ###Output _____no_output_____ ###Markdown exportConfig()Call G2 Module's `exportConfig()` method and print results. ###Code response_bytearray = bytearray() config_id = bytearray() try: g2_engine.exportConfig(response_bytearray, config_id) print("Configuration ID: {0}".format(config_id.decode())) print(response_bytearray.decode()) except G2Exception.G2ModuleGenericException as err: print(g2_engine.getLastException()) ###Output _____no_output_____
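###Markdown Because `exportConfig()` fills `response_bytearray` with a JSON document, an optional follow-up (not part of the original guide) is to parse it with the standard-library `json` module and list its top-level keys; no particular key names are assumed, and this presumes the call above completed successfully. ###Code
import json

# Parse the exported configuration and peek at its top-level structure.
config_dict = json.loads(response_bytearray.decode())
for key in config_dict:
    print(key)
###Output _____no_output_____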
.ipynb_checkpoints/ML for Diagnosing Breast Cancer - Steven Smiley-checkpoint.ipynb
###Markdown Using Machine Learning to Diagnose Breast Cancer in Python by: Steven Smiley Problem Statement:Find a Machine Learning (ML) model that accurately predicts breast cancer based on the 30 features described below. 1. Background:Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. n the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].This database is also available through the UW CS ftp server: ftp ftp.cs.wisc.edu cd math-prog/cpo-dataset/machine-learn/WDBC/Also can be found on UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29Attribute Information:1) ID number 2) Diagnosis (M = malignant, B = benign) 3-32)Ten real-valued features are computed for each cell nucleus:* a) radius (mean of distances from center to points on the perimeter) * b) texture (standard deviation of gray-scale values) * c) perimeter * d) area * e) smoothness (local variation in radius lengths) * f) compactness (perimeter^2 / area - 1.0) * g) concavity (severity of concave portions of the contour) * h) concave points (number of concave portions of the contour) * i) symmetry * j) fractal dimension ("coastline approximation" - 1)The mean, standard error and "worst" or largest (mean of the three largest values) of these features were computed for each image, resulting in 30 features. For instance, field 3 is Mean Radius, field 13 is Radius SE, field 23 is Worst Radius.All feature values are recoded with four significant digits.Missing attribute values: noneClass distribution: 357 benign, 212 malignant 2. Abstract: When it comes to diagnosing breast cancer, we want to make sure we don't have too many false-positives (you don't have cancer, but told you do and go on treatment) or false-negatives (you have cancer, but told you don't and don't get treatment). Therefore, the highest overall accuracy model is chosen. The Data was split into 80% training (~455 people) and 20% testing (~114 people). Several different models were evaluated through k-fold Cross-Validation with GridSearchCV, which iterates on different algorithm's hyperparameters: * Logistic Regression * Support Vector Machine * Neural Network * Random Forest * Gradient Boost * eXtreme Gradient Boost All of the models performed well after fine tunning their hyperparameters, but the best model is the one the highest overall accuracy. Out of the 20% of data witheld in this test (114 random individuals), only a handful were misdiagnosed. No model is perfect, but I am happy about how accurate my model is here. If on average less than a handful of people out of 114 are misdiagnosed, that is a good start for making a model. Furthermore, the Feature Importance plots show that the "concave points worst" and "concave points mean" were the significant features. Therefore, I recommend the concave point features should be extracted from each future biopsy as a strong predictor for diagnosing breast cancer. 3. Import Libraries ###Code import warnings import os # Get Current Directory from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV from sklearn.metrics import accuracy_score, precision_score, recall_score import pandas as pd # data processing, CSV file I/O (e.i. 
pd.read_csv) import numpy as np import matplotlib.pyplot as plt import seaborn as sns import joblib from time import time from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC from sklearn.neural_network import MLPClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier from xgboost import XGBClassifier from sklearn.decomposition import PCA from scipy import stats import subprocess from sklearn.metrics import classification_report, confusion_matrix, accuracy_score from sklearn.utils.multiclass import unique_labels import itertools from sklearn.preprocessing import StandardScaler ###Output _____no_output_____ ###Markdown Hide Warnings ###Code warnings.filterwarnings("ignore") pd.set_option('mode.chained_assignment', None) ###Output _____no_output_____ ###Markdown Get Current Directory ###Code currentDirectory=os.getcwd() print(currentDirectory) ###Output /Users/stevensmiley/Desktop/GraduateSchool/Python/PythonCodes/BreastCancer ###Markdown 4. Import and View Data ###Code #data= pd.read_csv('/kaggle/input/breast-cancer-wisconsin-data/data.csv') data=os.path.join(currentDirectory,'data.csv') data= pd.read_csv(data) data.head(10) # view the first 10 columns ###Output _____no_output_____ ###Markdown 4.1 Import and View Data: Check for Missing ValuesAs the background stated, no missing values should be present. The following verifies that. The last column doesn't hold any information and should be removed. In addition, the diagnosis should be changed to a binary classification of 0= benign and 1=malignant. ###Code data.isnull().sum() # Drop Unnamed: 32 variable that has NaN values. data.drop(['Unnamed: 32'],axis=1,inplace=True) # Convert Diagnosis for Cancer from Categorical Variable to Binary diagnosis_num={'B':0,'M':1} data['diagnosis']=data['diagnosis'].map(diagnosis_num) # Verify Data Changes, look at first 5 rows data.head(5) ###Output _____no_output_____ ###Markdown 4.2 Heatmap with Pearson Correlation Coefficient for FeaturesA strong correlation is indicated by a Pearson Correlation Coefficient value near 1. Therefore, when looking at the Heatmap, we want to see what correlates most with the first column, "diagnosis." It appears that the features of "concave points worst" [0.79] has the strongest correlation with "diagnosis". ###Code #fix,ax = plt.subplots(figsize=(25,25)) fix,ax = plt.subplots(figsize=(22,22)) heatmap_data = data.drop(['id'],axis=1) sns.heatmap(heatmap_data.corr(),vmax=1,linewidths=0.01,square=True,annot=True,linecolor="white") bottom,top=ax.get_ylim() ax.set_ylim(bottom+0.5,top-0.5) heatmap_title='Figure 1: Heatmap with Pearson Correlation Coefficient for Features' ax.set_title(heatmap_title) plt.savefig('Figure1.Heatmap.png',dpi=300,bbox_inches='tight') plt.show() ###Output _____no_output_____ ###Markdown 5. Split Data for Training 5.1 Split Data for Training : Standardize and Split the Data ###Code X = data.drop(['id','diagnosis'], axis= 1) y = data.diagnosis #Standardize Data scaler = StandardScaler() X=StandardScaler().fit_transform(X.values) X = pd.DataFrame(X) X.columns=(data.drop(['id','diagnosis'], axis= 1)).columns ###Output _____no_output_____ ###Markdown A good rule of thumb is to hold out 20 percent of the data for testing. ###Code X_train, X_test, y_train, y_test = train_test_split(X, y, test_size= 0.2, random_state= 42) #Standardize Data scaler = StandardScaler() #Fit on training set only. 
scaler.fit(X_train) #Apply transform to both the training and test set X_train=scaler.transform(X_train) X_test=scaler.transform(X_test) ###Output _____no_output_____ ###Markdown 5.2 Split Data for Training: Feature Extraction with PCA ###Code # Feature Extraction: Principal Component Analysis: PC1, PC2 pca = PCA(n_components=2, random_state=42) # Only fit to the training set pca.fit((X_train)) # transform with PCA model from training principalComponents_train = pca.transform(X_train) principalComponents_test = pca.transform(X_test) # Use Pandas DataFrame X_train = pd.DataFrame(X_train) X_test=pd.DataFrame(X_test) X_train.columns=(data.drop(['id','diagnosis'], axis= 1)).columns X_test.columns=(data.drop(['id','diagnosis'], axis= 1)).columns y_train = pd.DataFrame(y_train) y_test=pd.DataFrame(y_test) X_train['PC1']=principalComponents_train[:,0] X_train['PC2']=principalComponents_train[:,1] X_test['PC1']=principalComponents_test[:,0] X_test['PC2']=principalComponents_test[:,1] tr_features=X_train tr_labels=y_train val_features = X_test val_labels=y_test ###Output _____no_output_____ ###Markdown 5.3 Split Data for Training: Verify the Split Verify the data was split correctly ###Code print('X_train - length:',len(X_train), 'y_train - length:',len(y_train)) print('X_test - length:',len(X_test),'y_test - length:',len(y_test)) print('Percent heldout for testing:', round(100*(len(X_test)/len(data)),0),'%') ###Output X_train - length: 455 y_train - length: 455 X_test - length: 114 y_test - length: 114 Percent heldout for testing: 20.0 % ###Markdown 6. Machine Learning: In order to find a good model, several algorithms are tested on the training dataset. A sensitivity study over each algorithm's hyperparameters is carried out with GridSearchCV in order to optimize each model. The best model is the one with the highest accuracy that does not overfit, judged by looking at both the training and validation results. Computation time does not appear to be an issue for these models, so it carries little weight in deciding between them. GridSearch CV class sklearn.model_selection.GridSearchCV(estimator, param_grid, scoring=None, n_jobs=None, iid='deprecated', refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', error_score=nan, return_train_score=False) [source] Exhaustive search over specified parameter values for an estimator. Important members are fit, predict. GridSearchCV implements a “fit” and a “score” method. It also implements “predict”, “predict_proba”, “decision_function”, “transform” and “inverse_transform” if they are implemented in the estimator used. The parameters of the estimator used to apply these methods are optimized by cross-validated grid-search over a parameter grid.
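To make the pattern used in each of the following subsections explicit, here is a minimal, hypothetical sketch of how GridSearchCV is applied (the estimator and grid below are placeholders for illustration, not the tuned settings used later): ###Code
# Minimal GridSearchCV sketch (illustrative only; the grid values here are assumptions)
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression

example_grid = {'C': [0.1, 1, 10]}                                    # hypothetical hyperparameter grid
example_cv = GridSearchCV(LogisticRegression(), example_grid, cv=5)   # 5-fold cross-validation
example_cv.fit(tr_features, tr_labels.values.ravel())                 # exhaustively fits every grid point
print(example_cv.best_params_)                                        # best hyperparameter combination found
print(round(example_cv.best_score_, 3))                               # mean cross-validated accuracy of that combination
###Output _____no_output_____ ###Markdown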
Function: print_results ###Code def print_results(results,name,filename_pr): with open(filename_pr, mode='w') as file_object: print(name,file=file_object) print(name) print('BEST PARAMS: {}\n'.format(results.best_params_),file=file_object) print('BEST PARAMS: {}\n'.format(results.best_params_)) means = results.cv_results_['mean_test_score'] stds = results.cv_results_['std_test_score'] for mean, std, params in zip(means, stds, results.cv_results_['params']): print('{} {} (+/-{}) for {}'.format(name,round(mean, 3), round(std * 2, 3), params),file=file_object) print('{} {} (+/-{}) for {}'.format(name,round(mean, 3), round(std * 2, 3), params)) print(GridSearchCV) ###Output <class 'sklearn.model_selection._search.GridSearchCV'> ###Markdown 6.1 Machine Learning Models: Logistic Regression Logistic Regression: Hyperparameter used in GridSearchCV HP1, C: float, optional (default=1.0) Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization. Details Regularization is when a penalty is applied with increasing value to prevent overfitting. The inverse of regularization strength means as the value of C goes up, the value of the regularization strength goes down and vice versa. Values chosen 'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000] ###Code LR_model_dir=os.path.join(currentDirectory,'LR_model.pkl') if os.path.exists(LR_model_dir) == False: lr = LogisticRegression() parameters = { 'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000] } cv=GridSearchCV(lr, parameters, cv=5) cv.fit(tr_features,tr_labels.values.ravel()) print_results(cv,'Logistic Regression (LR)','LR_GridSearchCV_results.txt') cv.best_estimator_ LR_model_dir=os.path.join(currentDirectory,'LR_model.pkl') joblib.dump(cv.best_estimator_,LR_model_dir) else: print('Already have LR') ###Output Logistic Regression (LR) BEST PARAMS: {'C': 0.1} Logistic Regression (LR) 0.947 (+/-0.037) for {'C': 0.001} Logistic Regression (LR) 0.969 (+/-0.016) for {'C': 0.01} Logistic Regression (LR) 0.978 (+/-0.028) for {'C': 0.1} Logistic Regression (LR) 0.976 (+/-0.029) for {'C': 1} Logistic Regression (LR) 0.976 (+/-0.038) for {'C': 10} Logistic Regression (LR) 0.96 (+/-0.049) for {'C': 100} Logistic Regression (LR) 0.956 (+/-0.039) for {'C': 1000} ###Markdown 6.2 Machine Learning Models: Support Vector Machine Support Vector Machine: Hyperparameter used in GridSearchCV HP1, kernel: string, optional (default=’rbf’) Specifies the kernel type to be used in the algorithm. It must be one of ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’ or a callable. If none is given, ‘rbf’ will be used. If a callable is given it is used to pre-compute the kernel matrix from data matrices; that matrix should be an array of shape (n_samples, n_samples). Details A linear kernel type is good when the data is linearly separable, which means it can be separated by a single line. A radial basis function (rbf) kernel type is an exponential function of the squared Euclidean distance between two vectors and a constant. Since the value of the RBF kernel decreases with distance and ranges between zero and one, it has a ready interpretation as a similarity measure. Values chosen 'kernel': ['linear','rbf'] HP2, C: float, optional (default=1.0) Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty. Details Regularization is when a penalty is applied with increasing value to prevent overfitting.
The inverse of regularization strength means as the value of C goes up, the value of the regularization strength goes down and vice versa. Values chosen 'C': [0.1, 1, 10] ###Code print(SVC()) SVM_model_dir=os.path.join(currentDirectory,'SVM_model.pkl') if os.path.exists(SVM_model_dir) == False: svc = SVC() parameters = { 'kernel': ['linear','rbf'], 'C': [0.1, 1, 10] } cv=GridSearchCV(svc,parameters, cv=5) cv.fit(tr_features, tr_labels.values.ravel()) print_results(cv,'Support Vector Machine (SVM)','SVM_GridSearchCV_results.txt') cv.best_estimator_ SVM_model_dir=os.path.join(currentDirectory,'SVM_model.pkl') joblib.dump(cv.best_estimator_,SVM_model_dir) else: print('Already have SVM') ###Output Support Vector Machine (SVM) BEST PARAMS: {'C': 0.1, 'kernel': 'linear'} Support Vector Machine (SVM) 0.976 (+/-0.017) for {'C': 0.1, 'kernel': 'linear'} Support Vector Machine (SVM) 0.93 (+/-0.032) for {'C': 0.1, 'kernel': 'rbf'} Support Vector Machine (SVM) 0.971 (+/-0.03) for {'C': 1, 'kernel': 'linear'} Support Vector Machine (SVM) 0.971 (+/-0.036) for {'C': 1, 'kernel': 'rbf'} Support Vector Machine (SVM) 0.967 (+/-0.037) for {'C': 10, 'kernel': 'linear'} Support Vector Machine (SVM) 0.969 (+/-0.036) for {'C': 10, 'kernel': 'rbf'} ###Markdown 6.3 Machine Learning Models: Neural Network Neural Network: (sklearn) Hyperparameter used in GridSearchCV HP1, hidden_layer_sizes: tuple, length = n_layers - 2, default (100,) The ith element represents the number of neurons in the ith hidden layer. Details A rule of thumb is (2/3)*(number of input features) = neurons per hidden layer. Values chosen 'hidden_layer_sizes': [(10,),(50,),(100,)] HP2, activation: {‘identity’, ‘logistic’, ‘tanh’, ‘relu’}, default ‘relu’ Activation function for the hidden layer. Details * ‘identity’, no-op activation, useful to implement linear bottleneck, returns f(x) = x * ‘logistic’, the logistic sigmoid function, returns f(x) = 1 / (1 + exp(-x)). * ‘tanh’, the hyperbolic tan function, returns f(x) = tanh(x). * ‘relu’, the rectified linear unit function, returns f(x) = max(0, x) Values chosen 'activation': ['relu','tanh','logistic'] HP3, learning_rate: {‘constant’, ‘invscaling’, ‘adaptive’}, default ‘constant’ Learning rate schedule for weight updates. Details * ‘constant’ is a constant learning rate given by ‘learning_rate_init’. * ‘invscaling’ gradually decreases the learning rate at each time step ‘t’ using an inverse scaling exponent of ‘power_t’. effective_learning_rate = learning_rate_init / pow(t, power_t) * ‘adaptive’ keeps the learning rate constant to ‘learning_rate_init’ as long as training loss keeps decreasing. Each time two consecutive epochs fail to decrease training loss by at least tol, or fail to increase validation score by at least tol if ‘early_stopping’ is on, the current learning rate is divided by 5. Only used when solver='sgd'.
Values chosen'learning_rate': ['constant','invscaling','adaptive'] ###Code print(MLPClassifier()) MLP_model_dir=os.path.join(currentDirectory,'MLP_model.pkl') if os.path.exists(MLP_model_dir) == False: mlp = MLPClassifier() parameters = { 'hidden_layer_sizes': [(10,),(50,),(100,)], 'activation': ['relu','tanh','logistic'], 'learning_rate': ['constant','invscaling','adaptive'] } cv=GridSearchCV(mlp, parameters, cv=5) cv.fit(tr_features, tr_labels.values.ravel()) print_results(cv,'Neural Network (MLP)','MLP_GridSearchCV_results.txt') cv.best_estimator_ MLP_model_dir=os.path.join(currentDirectory,'MLP_model.pkl') joblib.dump(cv.best_estimator_,MLP_model_dir) else: print('Already have MLP') ###Output Neural Network (MLP) BEST PARAMS: {'activation': 'relu', 'hidden_layer_sizes': (50,), 'learning_rate': 'invscaling'} Neural Network (MLP) 0.971 (+/-0.018) for {'activation': 'relu', 'hidden_layer_sizes': (10,), 'learning_rate': 'constant'} Neural Network (MLP) 0.971 (+/-0.043) for {'activation': 'relu', 'hidden_layer_sizes': (10,), 'learning_rate': 'invscaling'} Neural Network (MLP) 0.971 (+/-0.041) for {'activation': 'relu', 'hidden_layer_sizes': (10,), 'learning_rate': 'adaptive'} Neural Network (MLP) 0.971 (+/-0.036) for {'activation': 'relu', 'hidden_layer_sizes': (50,), 'learning_rate': 'constant'} Neural Network (MLP) 0.98 (+/-0.017) for {'activation': 'relu', 'hidden_layer_sizes': (50,), 'learning_rate': 'invscaling'} Neural Network (MLP) 0.978 (+/-0.034) for {'activation': 'relu', 'hidden_layer_sizes': (50,), 'learning_rate': 'adaptive'} Neural Network (MLP) 0.978 (+/-0.028) for {'activation': 'relu', 'hidden_layer_sizes': (100,), 'learning_rate': 'constant'} Neural Network (MLP) 0.978 (+/-0.02) for {'activation': 'relu', 'hidden_layer_sizes': (100,), 'learning_rate': 'invscaling'} Neural Network (MLP) 0.978 (+/-0.02) for {'activation': 'relu', 'hidden_layer_sizes': (100,), 'learning_rate': 'adaptive'} Neural Network (MLP) 0.971 (+/-0.041) for {'activation': 'tanh', 'hidden_layer_sizes': (10,), 'learning_rate': 'constant'} Neural Network (MLP) 0.978 (+/-0.02) for {'activation': 'tanh', 'hidden_layer_sizes': (10,), 'learning_rate': 'invscaling'} Neural Network (MLP) 0.978 (+/-0.034) for {'activation': 'tanh', 'hidden_layer_sizes': (10,), 'learning_rate': 'adaptive'} Neural Network (MLP) 0.969 (+/-0.033) for {'activation': 'tanh', 'hidden_layer_sizes': (50,), 'learning_rate': 'constant'} Neural Network (MLP) 0.971 (+/-0.041) for {'activation': 'tanh', 'hidden_layer_sizes': (50,), 'learning_rate': 'invscaling'} Neural Network (MLP) 0.976 (+/-0.026) for {'activation': 'tanh', 'hidden_layer_sizes': (50,), 'learning_rate': 'adaptive'} Neural Network (MLP) 0.971 (+/-0.033) for {'activation': 'tanh', 'hidden_layer_sizes': (100,), 'learning_rate': 'constant'} Neural Network (MLP) 0.971 (+/-0.039) for {'activation': 'tanh', 'hidden_layer_sizes': (100,), 'learning_rate': 'invscaling'} Neural Network (MLP) 0.971 (+/-0.03) for {'activation': 'tanh', 'hidden_layer_sizes': (100,), 'learning_rate': 'adaptive'} Neural Network (MLP) 0.967 (+/-0.028) for {'activation': 'logistic', 'hidden_layer_sizes': (10,), 'learning_rate': 'constant'} Neural Network (MLP) 0.969 (+/-0.036) for {'activation': 'logistic', 'hidden_layer_sizes': (10,), 'learning_rate': 'invscaling'} Neural Network (MLP) 0.971 (+/-0.026) for {'activation': 'logistic', 'hidden_layer_sizes': (10,), 'learning_rate': 'adaptive'} Neural Network (MLP) 0.976 (+/-0.026) for {'activation': 'logistic', 'hidden_layer_sizes': (50,), 'learning_rate': 
'constant'} Neural Network (MLP) 0.974 (+/-0.023) for {'activation': 'logistic', 'hidden_layer_sizes': (50,), 'learning_rate': 'invscaling'} Neural Network (MLP) 0.976 (+/-0.026) for {'activation': 'logistic', 'hidden_layer_sizes': (50,), 'learning_rate': 'adaptive'} Neural Network (MLP) 0.974 (+/-0.03) for {'activation': 'logistic', 'hidden_layer_sizes': (100,), 'learning_rate': 'constant'} Neural Network (MLP) 0.978 (+/-0.031) for {'activation': 'logistic', 'hidden_layer_sizes': (100,), 'learning_rate': 'invscaling'} Neural Network (MLP) 0.976 (+/-0.026) for {'activation': 'logistic', 'hidden_layer_sizes': (100,), 'learning_rate': 'adaptive'} ###Markdown 6.4 Machine Learning Models: Random Forest Random Forest: Hyperparameter used in GridSearchCV HP1, n_estimators: integer, optional (default=100)The number of trees in the forest.Changed in version 0.22: The default value of n_estimators changed from 10 to 100 in 0.22. DetailsUsually 500 does the trick and the accuracy and out of bag error doesn't change much after. Values chosen'n_estimators': [500], HP2, max_depth: integer or None, optional (default=None)The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples. DetailsNone usually does the trick, but a few shallow trees are tested. Values chosen'max_depth': [5,7,9, None] ###Code print(RandomForestClassifier()) RF_model_dir=os.path.join(currentDirectory,'RF_model.pkl') if os.path.exists(RF_model_dir) == False: rf = RandomForestClassifier(oob_score=False) parameters = { 'n_estimators': [500], 'max_depth': [5,7,9, None] } cv = GridSearchCV(rf, parameters, cv=5) cv.fit(tr_features, tr_labels.values.ravel()) print_results(cv,'Random Forest (RF)','RF_GridSearchCV_results.txt') cv.best_estimator_ RF_model_dir=os.path.join(currentDirectory,'RF_model.pkl') joblib.dump(cv.best_estimator_,RF_model_dir) else: print('Already have RF') ###Output Random Forest (RF) BEST PARAMS: {'max_depth': 7, 'n_estimators': 500} Random Forest (RF) 0.949 (+/-0.027) for {'max_depth': 5, 'n_estimators': 500} Random Forest (RF) 0.956 (+/-0.024) for {'max_depth': 7, 'n_estimators': 500} Random Forest (RF) 0.956 (+/-0.028) for {'max_depth': 9, 'n_estimators': 500} Random Forest (RF) 0.952 (+/-0.03) for {'max_depth': None, 'n_estimators': 500} ###Markdown 6.5 Machine Learning Models: Gradient Boosting Gradient Boosting: Hyperparameter used in GridSearchCV HP1, n_estimators: int (default=100)The number of boosting stages to perform. Gradient boosting is fairly robust to over-fitting so a large number usually results in better performance. DetailsUsually 500 does the trick and the accuracy and out of bag error doesn't change much after. Values chosen'n_estimators': [5, 50, 250, 500], HP2, max_depth: integer, optional (default=3)maximum depth of the individual regression estimators. The maximum depth limits the number of nodes in the tree. Tune this parameter for best performance; the best value depends on the interaction of the input variables. DetailsA variety of shallow trees are tested. Values chosen'max_depth': [1, 3, 5, 7, 9], HP3, learning_rate: float, optional (default=0.1)learning rate shrinks the contribution of each tree by learning_rate. There is a trade-off between learning_rate and n_estimators. DetailsA variety was chosen because of the trade-off. 
Values chosen'learning_rate': [0.01, 0.1, 1] ###Code print(GradientBoostingClassifier()) GB_model_dir=os.path.join(currentDirectory,'GB_model.pkl') if os.path.exists(GB_model_dir) == False: gb = GradientBoostingClassifier() parameters = { 'n_estimators': [5, 50, 250, 500], 'max_depth': [1, 3, 5, 7, 9], 'learning_rate': [0.01, 0.1, 1] } cv=GridSearchCV(gb, parameters, cv=5) cv.fit(tr_features, tr_labels.values.ravel()) print_results(cv,'Gradient Boost (GB)','GR_GridSearchCV_results.txt') cv.best_estimator_ GB_model_dir=os.path.join(currentDirectory,'GB_model.pkl') joblib.dump(cv.best_estimator_,GB_model_dir) else: print('Already have GB') ###Output Gradient Boost (GB) BEST PARAMS: {'learning_rate': 0.1, 'max_depth': 1, 'n_estimators': 500} Gradient Boost (GB) 0.629 (+/-0.006) for {'learning_rate': 0.01, 'max_depth': 1, 'n_estimators': 5} Gradient Boost (GB) 0.892 (+/-0.092) for {'learning_rate': 0.01, 'max_depth': 1, 'n_estimators': 50} Gradient Boost (GB) 0.936 (+/-0.049) for {'learning_rate': 0.01, 'max_depth': 1, 'n_estimators': 250} Gradient Boost (GB) 0.96 (+/-0.036) for {'learning_rate': 0.01, 'max_depth': 1, 'n_estimators': 500} Gradient Boost (GB) 0.629 (+/-0.006) for {'learning_rate': 0.01, 'max_depth': 3, 'n_estimators': 5} Gradient Boost (GB) 0.947 (+/-0.047) for {'learning_rate': 0.01, 'max_depth': 3, 'n_estimators': 50} Gradient Boost (GB) 0.949 (+/-0.043) for {'learning_rate': 0.01, 'max_depth': 3, 'n_estimators': 250} Gradient Boost (GB) 0.949 (+/-0.036) for {'learning_rate': 0.01, 'max_depth': 3, 'n_estimators': 500} Gradient Boost (GB) 0.629 (+/-0.006) for {'learning_rate': 0.01, 'max_depth': 5, 'n_estimators': 5} Gradient Boost (GB) 0.952 (+/-0.043) for {'learning_rate': 0.01, 'max_depth': 5, 'n_estimators': 50} Gradient Boost (GB) 0.941 (+/-0.027) for {'learning_rate': 0.01, 'max_depth': 5, 'n_estimators': 250} Gradient Boost (GB) 0.934 (+/-0.028) for {'learning_rate': 0.01, 'max_depth': 5, 'n_estimators': 500} Gradient Boost (GB) 0.629 (+/-0.006) for {'learning_rate': 0.01, 'max_depth': 7, 'n_estimators': 5} Gradient Boost (GB) 0.93 (+/-0.04) for {'learning_rate': 0.01, 'max_depth': 7, 'n_estimators': 50} Gradient Boost (GB) 0.927 (+/-0.026) for {'learning_rate': 0.01, 'max_depth': 7, 'n_estimators': 250} Gradient Boost (GB) 0.925 (+/-0.025) for {'learning_rate': 0.01, 'max_depth': 7, 'n_estimators': 500} Gradient Boost (GB) 0.629 (+/-0.006) for {'learning_rate': 0.01, 'max_depth': 9, 'n_estimators': 5} Gradient Boost (GB) 0.934 (+/-0.024) for {'learning_rate': 0.01, 'max_depth': 9, 'n_estimators': 50} Gradient Boost (GB) 0.925 (+/-0.025) for {'learning_rate': 0.01, 'max_depth': 9, 'n_estimators': 250} Gradient Boost (GB) 0.927 (+/-0.026) for {'learning_rate': 0.01, 'max_depth': 9, 'n_estimators': 500} Gradient Boost (GB) 0.895 (+/-0.099) for {'learning_rate': 0.1, 'max_depth': 1, 'n_estimators': 5} Gradient Boost (GB) 0.958 (+/-0.038) for {'learning_rate': 0.1, 'max_depth': 1, 'n_estimators': 50} Gradient Boost (GB) 0.965 (+/-0.032) for {'learning_rate': 0.1, 'max_depth': 1, 'n_estimators': 250} Gradient Boost (GB) 0.976 (+/-0.035) for {'learning_rate': 0.1, 'max_depth': 1, 'n_estimators': 500} Gradient Boost (GB) 0.945 (+/-0.042) for {'learning_rate': 0.1, 'max_depth': 3, 'n_estimators': 5} Gradient Boost (GB) 0.954 (+/-0.029) for {'learning_rate': 0.1, 'max_depth': 3, 'n_estimators': 50} Gradient Boost (GB) 0.958 (+/-0.032) for {'learning_rate': 0.1, 'max_depth': 3, 'n_estimators': 250} Gradient Boost (GB) 0.958 (+/-0.032) for {'learning_rate': 0.1, 'max_depth': 3, 
'n_estimators': 500} Gradient Boost (GB) 0.949 (+/-0.041) for {'learning_rate': 0.1, 'max_depth': 5, 'n_estimators': 5} Gradient Boost (GB) 0.938 (+/-0.018) for {'learning_rate': 0.1, 'max_depth': 5, 'n_estimators': 50} Gradient Boost (GB) 0.941 (+/-0.03) for {'learning_rate': 0.1, 'max_depth': 5, 'n_estimators': 250} Gradient Boost (GB) 0.938 (+/-0.023) for {'learning_rate': 0.1, 'max_depth': 5, 'n_estimators': 500} Gradient Boost (GB) 0.934 (+/-0.028) for {'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 5} Gradient Boost (GB) 0.925 (+/-0.026) for {'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 50} Gradient Boost (GB) 0.932 (+/-0.029) for {'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 250} Gradient Boost (GB) 0.927 (+/-0.023) for {'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 500} Gradient Boost (GB) 0.921 (+/-0.037) for {'learning_rate': 0.1, 'max_depth': 9, 'n_estimators': 5} Gradient Boost (GB) 0.93 (+/-0.017) for {'learning_rate': 0.1, 'max_depth': 9, 'n_estimators': 50} Gradient Boost (GB) 0.925 (+/-0.021) for {'learning_rate': 0.1, 'max_depth': 9, 'n_estimators': 250} Gradient Boost (GB) 0.934 (+/-0.014) for {'learning_rate': 0.1, 'max_depth': 9, 'n_estimators': 500} Gradient Boost (GB) 0.934 (+/-0.056) for {'learning_rate': 1, 'max_depth': 1, 'n_estimators': 5} Gradient Boost (GB) 0.971 (+/-0.046) for {'learning_rate': 1, 'max_depth': 1, 'n_estimators': 50} Gradient Boost (GB) 0.969 (+/-0.038) for {'learning_rate': 1, 'max_depth': 1, 'n_estimators': 250} Gradient Boost (GB) 0.969 (+/-0.038) for {'learning_rate': 1, 'max_depth': 1, 'n_estimators': 500} Gradient Boost (GB) 0.919 (+/-0.044) for {'learning_rate': 1, 'max_depth': 3, 'n_estimators': 5} Gradient Boost (GB) 0.947 (+/-0.043) for {'learning_rate': 1, 'max_depth': 3, 'n_estimators': 50} Gradient Boost (GB) 0.945 (+/-0.037) for {'learning_rate': 1, 'max_depth': 3, 'n_estimators': 250} Gradient Boost (GB) 0.947 (+/-0.043) for {'learning_rate': 1, 'max_depth': 3, 'n_estimators': 500} Gradient Boost (GB) 0.93 (+/-0.041) for {'learning_rate': 1, 'max_depth': 5, 'n_estimators': 5} Gradient Boost (GB) 0.949 (+/-0.022) for {'learning_rate': 1, 'max_depth': 5, 'n_estimators': 50} Gradient Boost (GB) 0.956 (+/-0.032) for {'learning_rate': 1, 'max_depth': 5, 'n_estimators': 250} Gradient Boost (GB) 0.952 (+/-0.036) for {'learning_rate': 1, 'max_depth': 5, 'n_estimators': 500} Gradient Boost (GB) 0.927 (+/-0.029) for {'learning_rate': 1, 'max_depth': 7, 'n_estimators': 5} Gradient Boost (GB) 0.938 (+/-0.033) for {'learning_rate': 1, 'max_depth': 7, 'n_estimators': 50} Gradient Boost (GB) 0.938 (+/-0.035) for {'learning_rate': 1, 'max_depth': 7, 'n_estimators': 250} Gradient Boost (GB) 0.936 (+/-0.009) for {'learning_rate': 1, 'max_depth': 7, 'n_estimators': 500} Gradient Boost (GB) 0.921 (+/-0.016) for {'learning_rate': 1, 'max_depth': 9, 'n_estimators': 5} Gradient Boost (GB) 0.932 (+/-0.025) for {'learning_rate': 1, 'max_depth': 9, 'n_estimators': 50} Gradient Boost (GB) 0.934 (+/-0.013) for {'learning_rate': 1, 'max_depth': 9, 'n_estimators': 250} Gradient Boost (GB) 0.932 (+/-0.037) for {'learning_rate': 1, 'max_depth': 9, 'n_estimators': 500} ###Markdown 6.6 Machine Learning Models: eXtreme Gradient Boosting eXtreme Gradient Boosting: Hyperparameter used in GridSearchCV HP1, n_estimators: (int) – Number of trees to fit. DetailsUsually 500 does the trick and the accuracy and out of bag error doesn't change much after. 
Values chosen'n_estimators': [5, 50, 250, 500], HP2, max_depth: (int) – Maximum tree depth for base learners. DetailsA variety of shallow trees are tested. Values chosen'max_depth': [1, 3, 5, 7, 9], HP3, learning_rate: (float) – Boosting learning rate (xgb’s “eta”) DetailsA variety was chosen because of the trade-off. Values chosen'learning_rate': [0.01, 0.1, 1] ###Code XGB_model_dir=os.path.join(currentDirectory,'XGB_model.pkl') if os.path.exists(XGB_model_dir) == False: xgb = XGBClassifier() parameters = { 'n_estimators': [5, 50, 250, 500], 'max_depth': [1, 3, 5, 7, 9], 'learning_rate': [0.01, 0.1, 1] } cv=GridSearchCV(xgb, parameters, cv=5) cv.fit(tr_features, tr_labels.values.ravel()) print_results(cv,'eXtreme Gradient Boost (XGB)','XGB_GridSearchCV_results.txt') cv.best_estimator_ XGB_model_dir=os.path.join(currentDirectory,'XGB_model.pkl') joblib.dump(cv.best_estimator_,XGB_model_dir) else: print('Already have XGB') ###Output eXtreme Gradient Boost (XGB) BEST PARAMS: {'learning_rate': 0.1, 'max_depth': 1, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.888 (+/-0.047) for {'learning_rate': 0.01, 'max_depth': 1, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.914 (+/-0.065) for {'learning_rate': 0.01, 'max_depth': 1, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.936 (+/-0.04) for {'learning_rate': 0.01, 'max_depth': 1, 'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.96 (+/-0.041) for {'learning_rate': 0.01, 'max_depth': 1, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.93 (+/-0.043) for {'learning_rate': 0.01, 'max_depth': 3, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.936 (+/-0.026) for {'learning_rate': 0.01, 'max_depth': 3, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.958 (+/-0.029) for {'learning_rate': 0.01, 'max_depth': 3, 'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.969 (+/-0.026) for {'learning_rate': 0.01, 'max_depth': 3, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.941 (+/-0.045) for {'learning_rate': 0.01, 'max_depth': 5, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.934 (+/-0.019) for {'learning_rate': 0.01, 'max_depth': 5, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.958 (+/-0.026) for {'learning_rate': 0.01, 'max_depth': 5, 'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.965 (+/-0.029) for {'learning_rate': 0.01, 'max_depth': 5, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.941 (+/-0.045) for {'learning_rate': 0.01, 'max_depth': 7, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.936 (+/-0.021) for {'learning_rate': 0.01, 'max_depth': 7, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.956 (+/-0.024) for {'learning_rate': 0.01, 'max_depth': 7, 'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.965 (+/-0.029) for {'learning_rate': 0.01, 'max_depth': 7, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.941 (+/-0.045) for {'learning_rate': 0.01, 'max_depth': 9, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.936 (+/-0.021) for {'learning_rate': 0.01, 'max_depth': 9, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.956 (+/-0.024) for {'learning_rate': 0.01, 'max_depth': 9, 'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.965 (+/-0.029) for {'learning_rate': 0.01, 'max_depth': 9, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.934 (+/-0.032) for {'learning_rate': 0.1, 'max_depth': 1, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.956 (+/-0.04) for {'learning_rate': 0.1, 'max_depth': 1, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.963 (+/-0.03) for {'learning_rate': 0.1, 'max_depth': 1, 
'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.976 (+/-0.026) for {'learning_rate': 0.1, 'max_depth': 1, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.932 (+/-0.025) for {'learning_rate': 0.1, 'max_depth': 3, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.965 (+/-0.035) for {'learning_rate': 0.1, 'max_depth': 3, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.969 (+/-0.026) for {'learning_rate': 0.1, 'max_depth': 3, 'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.967 (+/-0.028) for {'learning_rate': 0.1, 'max_depth': 3, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.938 (+/-0.017) for {'learning_rate': 0.1, 'max_depth': 5, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.963 (+/-0.03) for {'learning_rate': 0.1, 'max_depth': 5, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.965 (+/-0.032) for {'learning_rate': 0.1, 'max_depth': 5, 'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.965 (+/-0.032) for {'learning_rate': 0.1, 'max_depth': 5, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.938 (+/-0.017) for {'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.963 (+/-0.03) for {'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.965 (+/-0.032) for {'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.965 (+/-0.032) for {'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.938 (+/-0.017) for {'learning_rate': 0.1, 'max_depth': 9, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.963 (+/-0.03) for {'learning_rate': 0.1, 'max_depth': 9, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.965 (+/-0.032) for {'learning_rate': 0.1, 'max_depth': 9, 'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.965 (+/-0.032) for {'learning_rate': 0.1, 'max_depth': 9, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.956 (+/-0.04) for {'learning_rate': 1, 'max_depth': 1, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.971 (+/-0.046) for {'learning_rate': 1, 'max_depth': 1, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.974 (+/-0.048) for {'learning_rate': 1, 'max_depth': 1, 'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.974 (+/-0.048) for {'learning_rate': 1, 'max_depth': 1, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.952 (+/-0.036) for {'learning_rate': 1, 'max_depth': 3, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.958 (+/-0.033) for {'learning_rate': 1, 'max_depth': 3, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.958 (+/-0.033) for {'learning_rate': 1, 'max_depth': 3, 'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.958 (+/-0.033) for {'learning_rate': 1, 'max_depth': 3, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.96 (+/-0.027) for {'learning_rate': 1, 'max_depth': 5, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.963 (+/-0.018) for {'learning_rate': 1, 'max_depth': 5, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.963 (+/-0.018) for {'learning_rate': 1, 'max_depth': 5, 'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.963 (+/-0.018) for {'learning_rate': 1, 'max_depth': 5, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.96 (+/-0.027) for {'learning_rate': 1, 'max_depth': 7, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.963 (+/-0.018) for {'learning_rate': 1, 'max_depth': 7, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.963 (+/-0.018) for {'learning_rate': 1, 'max_depth': 7, 'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.963 (+/-0.018) for {'learning_rate': 1, 
'max_depth': 7, 'n_estimators': 500} eXtreme Gradient Boost (XGB) 0.96 (+/-0.027) for {'learning_rate': 1, 'max_depth': 9, 'n_estimators': 5} eXtreme Gradient Boost (XGB) 0.963 (+/-0.018) for {'learning_rate': 1, 'max_depth': 9, 'n_estimators': 50} eXtreme Gradient Boost (XGB) 0.963 (+/-0.018) for {'learning_rate': 1, 'max_depth': 9, 'n_estimators': 250} eXtreme Gradient Boost (XGB) 0.963 (+/-0.018) for {'learning_rate': 1, 'max_depth': 9, 'n_estimators': 500} ###Markdown 7. Evaluate Models ###Code ## all models models = {} #for mdl in ['LR', 'SVM', 'MLP', 'RF', 'GB','XGB']: for mdl in ['LR', 'SVM', 'MLP', 'RF', 'GB','XGB']: model_path=os.path.join(currentDirectory,'{}_model.pkl') models[mdl] = joblib.load(model_path.format(mdl)) ###Output _____no_output_____ ###Markdown Function: evaluate_model ###Code def evaluate_model(name, model, features, labels, y_test_ev, fc): start = time() pred = model.predict(features) end = time() y_truth=y_test_ev accuracy = round(accuracy_score(labels, pred), 3) precision = round(precision_score(labels, pred), 3) recall = round(recall_score(labels, pred), 3) print('{} -- Accuracy: {} / Precision: {} / Recall: {} / Latency: {}ms'.format(name, accuracy, precision, recall, round((end - start)*1000, 1))) pred=pd.DataFrame(pred) pred.columns=['diagnosis'] # Convert Diagnosis for Cancer from Binary to Categorical diagnosis_name={0:'Benign',1:'Malginant'} y_truth['diagnosis']=y_truth['diagnosis'].map(diagnosis_name) pred['diagnosis']=pred['diagnosis'].map(diagnosis_name) class_names = ['Benign','Malginant'] cm = confusion_matrix(y_test_ev, pred, class_names) FP_L='False Positive' FP = cm[0][1] FN_L='False Negative' FN = cm[1][0] TP_L='True Positive' TP = cm[1][1] TN_L='True Negative' TN = cm[0][0] #TPR_L= 'Sensitivity, hit rate, recall, or true positive rate' TPR_L= 'Sensitivity' TPR = round(TP/(TP+FN),3) #TNR_L= 'Specificity or true negative rate' TNR_L= 'Specificity' TNR = round(TN/(TN+FP),3) #PPV_L= 'Precision or positive predictive value' PPV_L= 'Precision' PPV = round(TP/(TP+FP),3) #NPV_L= 'Negative predictive value' NPV_L= 'NPV' NPV = round(TN/(TN+FN),3) #FPR_L= 'Fall out or false positive rate' FPR_L= 'FPR' FPR = round(FP/(FP+TN),3) #FNR_L= 'False negative rate' FNR_L= 'FNR' FNR = round(FN/(TP+FN),3) #FDR_L= 'False discovery rate' FDR_L= 'FDR' FDR = round(FP/(TP+FP),3) ACC_L= 'Accuracy' ACC = round((TP+TN)/(TP+FP+FN+TN),3) stats_data = {'Name':name, ACC_L:ACC, FP_L:FP, FN_L:FN, TP_L:TP, TN_L:TN, TPR_L:TPR, TNR_L:TNR, PPV_L:PPV, NPV_L:NPV, FPR_L:FPR, FNR_L:FDR} fig = plt.figure() ax = fig.add_subplot(111) cax = ax.matshow(cm,cmap=plt.cm.gray_r) plt.title('Figure {}.A: {} Confusion Matrix on Unseen Test Data'.format(fc,name),y=1.08) fig.colorbar(cax) ax.set_xticklabels([''] + class_names) ax.set_yticklabels([''] + class_names) # Loop over data dimensions and create text annotations. 
for i in range(len(class_names)): for j in range(len(class_names)): text = ax.text(j, i, cm[i, j], ha="center", va="center", color="r") plt.xlabel('Predicted') plt.ylabel('True') plt.savefig('Figure{}.A_{}_Confusion_Matrix.png'.format(fc,name),dpi=400,bbox_inches='tight') #plt.show() if name == 'RF' or name == 'GB' or name == 'XGB': # Get numerical feature importances importances = list(model.feature_importances_) importances=100*(importances/max(importances)) feature_list = list(features.columns) sorted_ID=np.argsort(importances) plt.figure(figsize=[10,10]) plt.barh(sort_list(feature_list,importances),importances[sorted_ID],align='center') plt.title('Figure {}.B: {} Variable Importance Plot'.format(fc,name)) plt.xlabel('Relative Importance') plt.ylabel('Feature') plt.savefig('Figure{}.B_{}_Variable_Importance_Plot.png'.format(fc,name),dpi=300,bbox_inches='tight') #plt.show() return accuracy,name, model, stats_data ###Output _____no_output_____ ###Markdown Function: sort_list ###Code def sort_list(list1, list2): zipped_pairs = zip(list2, list1) z = [x for _, x in sorted(zipped_pairs)] return z ###Output _____no_output_____ ###Markdown Search for best model using test features ###Code ev_accuracy=[None]*len(models) ev_name=[None]*len(models) ev_model=[None]*len(models) ev_stats=[None]*len(models) count=1 for name, mdl in models.items(): y_test_ev=y_test ev_accuracy[count-1],ev_name[count-1],ev_model[count-1], ev_stats[count-1] = evaluate_model(name,mdl,val_features, val_labels, y_test_ev,count+1) diagnosis_name={'Benign':0,'Malginant':1} y_test['diagnosis']=y_test['diagnosis'].map(diagnosis_name) count=count+1 best_name=ev_name[ev_accuracy.index(max(ev_accuracy))] #picks the maximum accuracy print('Best Model:',best_name,'with Accuracy of ',max(ev_accuracy)) best_model=ev_model[ev_accuracy.index(max(ev_accuracy))] #picks the maximum accuracy if best_name == 'RF' or best_name == 'GB' or best_name == 'XGB': # Get numerical feature importances importances = list(best_model.feature_importances_) importances=100*(importances/max(importances)) feature_list = list(X.columns) sorted_ID=np.argsort(importances) plt.figure(figsize=[10,10]) plt.barh(sort_list(feature_list,importances),importances[sorted_ID],align='center') plt.title('Figure 8: Variable Importance Plot -- {}'.format(best_name)) plt.xlabel('Relative Importance') plt.ylabel('Feature') plt.savefig('Figure8.png',dpi=300,bbox_inches='tight') plt.show() ###Output Best Model: LR with Accuracy of 0.991 ###Markdown 8. Conclusions When it comes to diagnosing breast cancer, we want to make sure we don't have too many false-positives (you don't have cancer, but told you do and go on treatment) or false-negatives (you have cancer, but told you don't and don't get treatment). Therefore, the highest overall accuracy model is chosen. All of the models performed well after fine tunning their hyperparameters, but the best model is the one the highest overall accuracy. Out of the 20% of data witheld in this test (114 random individuals), only a handful were misdiagnosed. No model is perfect, but I am happy about how accurate my model is here. If on average less than a handful of people out of 114 are misdiagnosed, that is a good start for making a model. Furthermore, the Feature Importance plots show that the "concave points worst" and "concave points mean" were the significant features. Therefore, I recommend the concave point features should be extracted from each future biopsy as a strong predictor for diagnosing breast cancer. 
###Code ev_stats=pd.DataFrame(ev_stats) print(ev_stats.head(10)) ###Output Name Accuracy False Positive False Negative True Positive \ 0 LR 0.991 0 1 42 1 SVM 0.982 0 2 41 2 MLP 0.982 1 1 42 3 RF 0.965 1 3 40 4 GB 0.956 2 3 40 5 XGB 0.965 2 2 41 True Negative Sensitivity Specificity Precision NPV FPR FNR 0 71 0.977 1.000 1.000 0.986 0.000 0.000 1 71 0.953 1.000 1.000 0.973 0.000 0.000 2 70 0.977 0.986 0.977 0.986 0.014 0.023 3 70 0.930 0.986 0.976 0.959 0.014 0.024 4 69 0.930 0.972 0.952 0.958 0.028 0.048 5 69 0.953 0.972 0.953 0.972 0.028 0.047
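###Markdown As a quick usage sketch (all names come from the cells above): the selected model can be applied to a single held-out case to obtain a 0/1 (benign/malignant) prediction. ###Code
# Illustrative only: predict the diagnosis for the first held-out test case
print(best_model.predict(val_features.iloc[[0]]))
###Output _____no_output_____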
SR Overestimation.ipynb
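###Markdown The cells below rely on helper functions (WN_Returns, df_ewma, df_lags, TSCV, SR) and imports that are not defined anywhere in this excerpt. The sketch below gives one possible set of minimal implementations, written only so the later cells can be read in context; the author's actual definitions may differ. ###Code
# Hypothetical minimal helpers assumed by the cells below -- not the author's originals.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def WN_Returns(n, scale=0.01):
    # white-noise "returns": n independent draws from N(0, scale)
    return pd.Series(np.random.normal(0.0, scale, n))

def df_ewma(s, spans):
    # one exponentially weighted moving-average column per span
    return pd.concat({"ewma_%d" % k: s.ewm(span=k).mean() for k in spans}, axis=1)

def df_lags(s, lags):
    # one lagged copy of the series per lag value
    return pd.concat({"lag_%d" % l: s.shift(l) for l in lags}, axis=1)

def SR(returns):
    # plain (non-annualized) Sharpe ratio of a return series
    returns = np.asarray(returns)
    return returns.mean() / returns.std()

def TSCV(index, window_size, warmup_size, expanding=True):
    # walk-forward cross-validation folds keyed by fold number (as strings)
    folds = {"insample": {}, "outsample": {}}
    start, k = warmup_size, 0
    while start + window_size <= len(index):
        ins = index[:start] if expanding else index[start - warmup_size:start]
        folds["insample"][str(k)] = ins
        folds["outsample"][str(k)] = index[start:start + window_size]
        start += window_size
        k += 1
    return folds
###Output _____no_output_____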
###Markdown Mult Hyp Test vs Perf Eval Diff ###Code # evaluating well performance, or estimating better generalization error can be gamed, # mainly if it is a fixed approached (window based) ###Output _____no_output_____ ###Markdown Data and Params ###Code # data params data_size = 12000 wn_scale = 0.01 ts_shift = 1 # cv params holdout_size = 2000 warmup_size = 5000 window_size = 1000 expanding = True # model params pred_model = LinearRegression() ewmas_span = np.arange(3, 100).tolist() lags_values = np.arange(0, 100).tolist() # data ts = WN_Returns(data_size, scale=wn_scale) ts_target = ts[ts_shift:].copy() ts_input = ts.shift(ts_shift).dropna() # ts_input = df_ewma(ts_input, ewmas_span) ts_input = df_lags(ts_input, lags_values) ts = pd.concat([ts_target, ts_input], axis=1).dropna() avail_featlist = list(ts_input.columns) ts.columns = ["target"] + avail_featlist # cv folds ts_folds = TSCV(list(ts.index)[:-holdout_size], window_size, warmup_size, expanding) ts_folds["holdout"] = list(ts.index)[-holdout_size:] ###Output _____no_output_____ ###Markdown Feature Selection using Lags - Fixed TSCV ###Code av_featlist = np.copy(avail_featlist).tolist() tentative_list = [] fixed_list = [] max_fslist = 20 k = 0 df_results = pd.DataFrame(index=[0], columns=["Iter", "InCV SR", "OutCV SR", "Hold SR", "Max InCV SR", "Max Hold SR", "Long-only SR", "FS"]) while len(av_featlist) != 0 and len(fixed_list) <= max_fslist: insr_fs, outsr_fs, holdsr_fs = [], [], [] for fs in av_featlist: # feature list feat_list = fixed_list + [fs] outsample_pred, outsample_obs = np.array([]), np.array([]) # model and prediction for fold_num in range(len(ts_folds["insample"].keys())): # fit model ml = pred_model.fit(ts.loc[ts_folds["insample"][str(fold_num)], feat_list], ts.loc[ts_folds["insample"][str(fold_num)], "target"]) # prediction and obs outsample_pred = np.concatenate([outsample_pred, ml.predict(ts.loc[ts_folds["outsample"][str(fold_num)], feat_list])]) outsample_obs = np.concatenate([outsample_obs, ts.loc[ts_folds["outsample"][str(fold_num)], "target"]]) # compute performance insr_fs.append(SR(ml.predict(ts.loc[ts_folds["insample"][str(fold_num)], feat_list]) * ts.loc[ts_folds["insample"][str(fold_num)], "target"])) outsr_fs.append(SR(outsample_pred * outsample_obs)) holdsr_fs.append(SR(ml.predict(ts.loc[ts_folds["holdout"], feat_list]) * ts.loc[ts_folds["holdout"], "target"])) # get best feature and remove from available list get_fs = av_featlist[np.argmax(outsr_fs)] fixed_list += [get_fs] av_featlist.pop(np.argmax(outsr_fs)) # store results df_results.loc[k, "Iter"] = k df_results.loc[k, "InCV SR"] = insr_fs[np.argmax(outsr_fs)] df_results.loc[k, "OutCV SR"] = np.max(outsr_fs) df_results.loc[k, "Hold SR"] = holdsr_fs[np.argmax(outsr_fs)] df_results.loc[k, "FS"] = np.copy(fixed_list) df_results.loc[k, "Max InCV SR"] = np.max(insr_fs) df_results.loc[k, "Max Hold SR"] = np.max(holdsr_fs) df_results.loc[k, "Long-only SR"] = SR(ts["target"].values) k += 1 print(k, insr_fs[np.argmax(outsr_fs)], np.max(outsr_fs), holdsr_fs[np.argmax(outsr_fs)]) print(fixed_list) df_results.loc[:20, ["InCV SR", "OutCV SR", "Hold SR"]].plot() ###Output _____no_output_____ ###Markdown Feature Selection with Lags - Random Subsets for Every Turn ###Code av_featlist = np.copy(avail_featlist).tolist() tentative_list = [] fixed_list = [] rsubsets = 4 max_fslist = 20 k = 0 df_results = pd.DataFrame(index=[0], columns=["Iter", "InCV SR", "OutCV SR", "Hold SR", "Max InCV SR", "Max Hold SR", "Long-only SR", "FS"]) while len(av_featlist) != 0 and 
len(fixed_list) <= max_fslist: insr_fs, outsr_fs, holdsr_fs = [], [], [] random_tscv = np.random.permutation(range(len(ts_folds["insample"].keys())))[:rsubsets].tolist() for fs in av_featlist: # feature list feat_list = fixed_list + [fs] outsample_pred, outsample_obs = np.array([]), np.array([]) # model and prediction for fold_num in np.sort(random_tscv): # fit model ml = pred_model.fit(ts.loc[ts_folds["insample"][str(fold_num)], feat_list], ts.loc[ts_folds["insample"][str(fold_num)], "target"]) # prediction and obs outsample_pred = np.concatenate([outsample_pred, ml.predict(ts.loc[ts_folds["outsample"][str(fold_num)], feat_list])]) outsample_obs = np.concatenate([outsample_obs, ts.loc[ts_folds["outsample"][str(fold_num)], "target"]]) # compute performance insr_fs.append(SR(ml.predict(ts.loc[ts_folds["insample"][str(fold_num)], feat_list]) * ts.loc[ts_folds["insample"][str(fold_num)], "target"])) outsr_fs.append(SR(outsample_pred * outsample_obs)) holdsr_fs.append(SR(ml.predict(ts.loc[ts_folds["holdout"], feat_list]) * ts.loc[ts_folds["holdout"], "target"])) # get best feature and remove from available list get_fs = av_featlist[np.argmax(outsr_fs)] fixed_list += [get_fs] av_featlist.pop(np.argmax(outsr_fs)) # store results df_results.loc[k, "Iter"] = k df_results.loc[k, "InCV SR"] = insr_fs[np.argmax(outsr_fs)] df_results.loc[k, "OutCV SR"] = np.max(outsr_fs) df_results.loc[k, "Hold SR"] = holdsr_fs[np.argmax(outsr_fs)] df_results.loc[k, "FS"] = np.copy(fixed_list) df_results.loc[k, "Max InCV SR"] = np.max(insr_fs) df_results.loc[k, "Max Hold SR"] = np.max(holdsr_fs) df_results.loc[k, "Long-only SR"] = SR(ts["target"].values) k += 1 print(k, insr_fs[np.argmax(outsr_fs)], np.max(outsr_fs), holdsr_fs[np.argmax(outsr_fs)]) print(fixed_list) df_results.loc[:20, ["InCV SR", "OutCV SR", "Hold SR"]].plot() ###Output _____no_output_____ ###Markdown Feature Selection using Lags - Random subsets different for every feature attempted ###Code av_featlist = np.copy(avail_featlist).tolist() tentative_list = [] fixed_list = [] rsubsets = 4 max_fslist = 20 k = 0 df_results = pd.DataFrame(index=[0], columns=["Iter", "InCV SR", "OutCV SR", "Hold SR", "Max InCV SR", "Max Hold SR", "Long-only SR", "FS"]) while len(av_featlist) != 0 and len(fixed_list) <= max_fslist: insr_fs, outsr_fs, holdsr_fs = [], [], [] for fs in av_featlist: # feature list feat_list = fixed_list + [fs] outsample_pred, outsample_obs = np.array([]), np.array([]) # model and prediction random_tscv = np.random.permutation(range(len(ts_folds["insample"].keys())))[:rsubsets].tolist() for fold_num in np.sort(random_tscv): # fit model ml = pred_model.fit(ts.loc[ts_folds["insample"][str(fold_num)], feat_list], ts.loc[ts_folds["insample"][str(fold_num)], "target"]) # prediction and obs outsample_pred = np.concatenate([outsample_pred, ml.predict(ts.loc[ts_folds["outsample"][str(fold_num)], feat_list])]) outsample_obs = np.concatenate([outsample_obs, ts.loc[ts_folds["outsample"][str(fold_num)], "target"]]) # compute performance insr_fs.append(SR(ml.predict(ts.loc[ts_folds["insample"][str(fold_num)], feat_list]) * ts.loc[ts_folds["insample"][str(fold_num)], "target"])) outsr_fs.append(SR(outsample_pred * outsample_obs)) holdsr_fs.append(SR(ml.predict(ts.loc[ts_folds["holdout"], feat_list]) * ts.loc[ts_folds["holdout"], "target"])) # get best feature and remove from available list get_fs = av_featlist[np.argmax(outsr_fs)] fixed_list += [get_fs] av_featlist.pop(np.argmax(outsr_fs)) # store results df_results.loc[k, "Iter"] = k df_results.loc[k, "InCV 
SR"] = insr_fs[np.argmax(outsr_fs)] df_results.loc[k, "OutCV SR"] = np.max(outsr_fs) df_results.loc[k, "Hold SR"] = holdsr_fs[np.argmax(outsr_fs)] df_results.loc[k, "FS"] = np.copy(fixed_list) df_results.loc[k, "Max InCV SR"] = np.max(insr_fs) df_results.loc[k, "Max Hold SR"] = np.max(holdsr_fs) df_results.loc[k, "Long-only SR"] = SR(ts["target"].values) k += 1 print(k, insr_fs[np.argmax(outsr_fs)], np.max(outsr_fs), holdsr_fs[np.argmax(outsr_fs)]) print(fixed_list) df_results.loc[:20, ["InCV SR", "OutCV SR", "Hold SR"]].plot() ###Output _____no_output_____
ML0120EN_5_1_Review_Autoencoders.ipynb
###Markdown AUTOENCODERS Welcome to this notebook about autoencoders. In this notebook you will find an explanation of what an autoencoder is, how it works, and see an implementation of an autoencoder in TensorFlow. Table of Contents - Introduction - Feature Extraction and Dimensionality Reduction - Autoencoder Structure - Performance - Training: Loss Function - Code By the end of this notebook, you should be able to create simple autoencoders and know how to apply them to problems. ---------------- Introduction An autoencoder, also known as an autoassociator or Diabolo network, is an artificial neural network employed to recreate its input. It takes a set of **unlabeled** inputs, encodes them and then tries to extract the most valuable information from them. Autoencoders are used for feature extraction, for learning generative models of data, for dimensionality reduction and for compression. A 2006 paper named Reducing the Dimensionality of Data with Neural Networks, by G. E. Hinton and R. R. Salakhutdinov, showed better results than years of refining other types of network, and was a breakthrough in the field of Neural Networks, a field that had been "stagnant" for 10 years. Now autoencoders, based on Restricted Boltzmann Machines, are employed in some of the largest deep learning applications. They are the building blocks of Deep Belief Networks (DBN). Feature Extraction and Dimensionality Reduction An example given by Nikhil Buduma in KdNuggets (link) explains the utility of this type of neural network well. Say that you want to extract the emotion that the person in a photograph is expressing. Using as an example the following 256x256 grayscale picture: But then we start facing a bottleneck! A 256x256 image corresponds to an input vector of 65,536 dimensions! If we used an image produced by a conventional cellphone camera, which generates images of 4000 x 3000 pixels, we would have 12 million dimensions to analyse. This bottleneck is made worse by the fact that machine learning problems become harder as more dimensions are involved. According to a 1982 study by C.J. Stone (link), the time it takes to fit a model is, at best, proportional to: $m^{-p/(2p+d)}$ Where: m: Number of data points d: Dimensionality of the data p: Parameter that depends on the model As you can see, the problem gets much harder as the dimensionality d grows! Returning to our example, we don't need to use all of the 65,536 dimensions to classify an emotion. A human identifies emotions from a few specific facial expressions, some **key features**, like the shape of the mouth and the eyebrows. -------------------------------------- Autoencoder Structure An autoencoder can be divided into two parts, the **encoder** and the **decoder**. The encoder compresses the representation of the input. In this case we are going to compress the face of our actor, which consists of 2000-dimensional data, to only 30 dimensions, taking several intermediate steps along the way. The decoder is a reflection of the encoder network. It works to recreate the input as closely as possible, and it plays an important role during training: it forces the autoencoder to select the most important features in the compressed representation.
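As a rough sketch of this mirrored layout (the layer sizes below are only an illustration of the face example, not values used later in this notebook): ###Code
# Illustrative only: an encoder that compresses step by step, and a decoder that mirrors it
encoder_layer_sizes = [2000, 500, 100, 30]        # input -> ... -> compressed code (assumed sizes)
decoder_layer_sizes = encoder_layer_sizes[::-1]   # 30 -> ... -> 2000, recreating the input
print(encoder_layer_sizes, decoder_layer_sizes)
###Output _____no_output_____ ###Markdown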
-------------------------------------- Performance After the training has been done, you can use the encoded data as a reliable dimensionally-reduced representation, applying it to any problem where dimensionality reduction seems to fit. This image was extracted from the Hinton paper comparing the two-dimensional reduction of 500 digits of the MNIST dataset, with PCA on the left and the autoencoder on the right. We can see that the autoencoder gives us a better separation of the data. Training: Loss function An autoencoder uses a loss function to properly train the network. The loss function calculates the difference between our output and the expected result. After that, we can minimize this error with gradient descent. There is more than one type of loss function; it depends on the type of data. Binary Values:$$l(f(x)) = - \sum_{k} (x_k \log(\hat{x}_k) + (1 - x_k) \log (1 - \hat{x}_k) \ )$$ For binary values, we can use an equation based on the sum of Bernoulli cross-entropies. $x_k$ is one of our inputs and $\hat{x}_k$ is the respective output. We use this function so that if $x_k$ equals one, we want to push $\hat{x}_k$ as close as possible to one, and the same if $x_k$ equals zero. If the value is one, we just need to calculate the first part of the formula, that is, $- x_k \log(\hat{x}_k)$, which turns out to be just $- \log(\hat{x}_k)$. And if the value is zero, we need to calculate just the second part, $- (1 - x_k) \log (1 - \hat{x}_k)$, which turns out to be $- \log (1 - \hat{x}_k)$. Real values:$$l(f(x)) = \frac{1}{2}\sum_{k} (\hat{x}_k - x_k)^2$$ As the above function would behave badly with inputs that are not 0 or 1, we can use the sum of squared differences for our loss function. If you use this loss function, it's necessary that you use a linear activation function for the output layer. As with the above example, $x_k$ is one of our inputs and $\hat{x}_k$ is the respective output, and we want to make our output as similar as possible to our input. Loss Gradient:$$\nabla_{\hat{a}(x^{(t)})} \ l( \ f(x^{(t)})) = \hat{x}^{(t)} - x^{(t)} $$ We use gradient descent to reach the local minimum of our function $l(f(x^{(t)}))$, taking steps towards the negative of the gradient of the function at the current point. The gradient above is taken with respect to the preactivation of the output layer, $\hat{a}(x^{(t)})$, of the loss $l(f(x^{(t)}))$. It's actually a simple formula: it just calculates the difference between our output $\hat{x}^{(t)}$ and our input $x^{(t)}$. This gradient $\nabla_{\hat{a}(x^{(t)})} \ l( \ f(x^{(t)}))$ is then propagated back through the network using **backpropagation**. ------------------- Code For this part, we walk through a lot of Python 2.7.11 code. We are going to use the MNIST dataset for our example. The following code was created by Aymeric Damien. You can find some of his code [here](https://github.com/aymericdamien). There are just some modifications for us to import the datasets to Jupyter Notebooks. Let's call our imports and make the MNIST data available to use. ###Code from __future__ import division, print_function, absolute_import import tensorflow as tf import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Import MNIST data from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("/tmp/data/", one_hot=True) ###Output WARNING: Logging before flag parsing goes to stderr.
W0813 03:43:24.762173 140638140483456 deprecation.py:323] From <ipython-input-1-47aa9ec4e012>:10: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version. Instructions for updating: Please use alternatives such as official/mnist/dataset.py from tensorflow/models. W0813 03:43:24.763981 140638140483456 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version. Instructions for updating: Please write your own downloading logic. W0813 03:43:24.770944 140638140483456 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:252: wrapped_fn (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version. Instructions for updating: Please use urllib or similar directly. W0813 03:43:29.874236 140638140483456 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version. Instructions for updating: Please use tf.data to implement this functionality. ###Markdown Now, let's give the parameters that are going to be used by our NN. ###Code learning_rate = 0.01 training_epochs = 20 batch_size = 256 display_step = 1 examples_to_show = 10 # Network Parameters n_hidden_1 = 256 # 1st layer num features n_hidden_2 = 128 # 2nd layer num features n_input = 784 # MNIST data input (img shape: 28*28) # tf Graph input (only pictures) X = tf.placeholder("float", [None, n_input]) weights = { 'encoder_h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])), 'encoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])), 'decoder_h1': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_1])), 'decoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_input])), } biases = { 'encoder_b1': tf.Variable(tf.random_normal([n_hidden_1])), 'encoder_b2': tf.Variable(tf.random_normal([n_hidden_2])), 'decoder_b1': tf.Variable(tf.random_normal([n_hidden_1])), 'decoder_b2': tf.Variable(tf.random_normal([n_input])), } ###Output Tensor("Placeholder_1:0", shape=(?, 784), dtype=float32) ###Markdown Now we need to create our encoder. For this, we are going to use sigmoidal functions. Sigmoidal functions continue to deliver great results with this type of networks. This is due to having a good derivative that is well-suited to backpropagation. We can create our encoder using the sigmoidal function like this: ###Code # Building the encoder def encoder(x): # Encoder first layer with sigmoid activation #1 layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']), biases['encoder_b1'])) # Encoder second layer with sigmoid activation #2 layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']), biases['encoder_b2'])) return layer_2 ###Output _____no_output_____ ###Markdown And the decoder:You can see that the layer_1 in the encoder is the layer_2 in the decoder and vice-versa. 
###Code # Building the decoder def decoder(x): # Decoder first layer with sigmoid activation #1 layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['decoder_h1']), biases['decoder_b1'])) # Decoder second layer with sigmoid activation #2 layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['decoder_h2']), biases['decoder_b2'])) return layer_2 ###Output _____no_output_____ ###Markdown Let's construct our model.In the variable `cost` we have the loss function and in the `optimizer` variable we have our gradient used for backpropagation. ###Code # Construct model encoder_op = encoder(X) decoder_op = decoder(encoder_op) # Prediction y_pred = decoder_op # Targets (Labels) are the input data. y_true = X # Define loss and optimizer, minimize the squared error cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2)) optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost) # Initializing the variables init = tf.global_variables_initializer() ###Output _____no_output_____ ###Markdown The training will run for 20 epochs. ###Code # Launch the graph # Using InteractiveSession (more convenient while using Notebooks) sess = tf.InteractiveSession() sess.run(init) total_batch = int(mnist.train.num_examples/batch_size) # Training cycle for epoch in range(training_epochs): # Loop over all batches for i in range(total_batch): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # Run optimization op (backprop) and cost op (to get loss value) _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs}) # Display logs per epoch step if epoch % display_step == 0: print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c)) print("Optimization Finished!") ###Output /usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py:1735: UserWarning: An interactive session is already active. This can cause out-of-memory errors in some cases. You must explicitly call `InteractiveSession.close()` to release resources held by the other session(s). warnings.warn('An interactive session is already active. This can ' ###Markdown Now, let's apply encode and decode for our tests. ###Code # Applying encode and decode over test set encode_decode = sess.run( y_pred, feed_dict={X: mnist.test.images[:examples_to_show]}) ###Output [[1.0000000e+00 1.9937754e-05 1.1399388e-04 ... 2.9104948e-04 1.4421344e-04 8.2025826e-03] [9.9999988e-01 8.7618828e-06 7.0533156e-04 ... 9.7155571e-06 2.8014183e-05 3.5762787e-07] [1.0000000e+00 7.1436167e-05 7.2360039e-05 ... 1.1652708e-05 9.8049641e-06 1.3858080e-05] ... [1.0000000e+00 3.0100346e-05 2.4437904e-06 ... 2.3788214e-04 2.2917986e-04 5.3405762e-05] [9.9999988e-01 1.4457107e-04 2.7176738e-04 ... 3.2800436e-04 7.1462989e-04 4.3809414e-06] [1.0000000e+00 7.1771145e-03 1.1523664e-03 ... 1.5407801e-04 3.8427114e-04 8.7872148e-04]] ###Markdown Let's simply visualize our graphs! ###Code # Compare original images with their reconstructions f, a = plt.subplots(2, 10, figsize=(10, 2)) for i in range(examples_to_show): a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28))) a[1][i].imshow(np.reshape(encode_decode[i], (28, 28))) ###Output _____no_output_____
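###Markdown The Performance section above mentions using the encoded data directly as a dimensionally-reduced representation. As a small follow-up sketch (using only variables already defined in this notebook), the 128-dimensional codes for the same test images can be pulled out of the encoder: ###Code
# Extract the compressed representation (the encoder output) for a few test images
encoded_repr = sess.run(encoder_op, feed_dict={X: mnist.test.images[:examples_to_show]})
print(encoded_repr.shape)  # expected: (10, 128) -- one 128-dimensional code per image
###Output _____no_output_____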
3_Methods.ipynb
###Markdown Methods 3 - the biogeochemical and transport model parameters identification, validation The resulting biogeochemical model has 51 parameters in total whose values need identification.Also, we have to establish the parameters required by the transport model:1) the advective exchange coefficient $K_{h}$, which to a large degree limits OM production in the 1-dimensional model ($K_{h}$ defines nutrient inputs required for local primary production) and2) the sediment dispersion coefficient ($kz_{\text{dispersion}}$) and sediment porosity ($\phi$), which determine vertical mixing in the sediment domain. Identification of organic matter production and degradation parameter values The Non-Linear Least-Squares Fitting method is applied to find the horizontal diffusivity coefficient $K_{h}$,photosynthetic efficiency at low irradiance ($\alpha$),the maximum hourly rate of photosynthesis normalized to chlorophyll biomass ($p_{m}^{B}$),half-saturation constants for nutrient uptake by autotrophs ($Ks_{PO_{4}^{3 -}}$, $Ks_{\text{Si}}$, $Ks_{NH_{4}^{+}}$, $Ks_{NO_{x}}$), autotrophs mortality coefficient ($K_{phy\_ mortality}$), and three coefficients controlling heterotroph rates of grazing and dying: heterotroph grazing on autotrophs$K_{het\_ phy}$, a half-saturation constant of heterotroph to autotroph ratio $Ks_{het\_ phy\_ ratio}$, a heterotroph rate of mortality $K_{het\_ mortality}$.A chi-square statistic (a cost function) is constructed using $\text{Chlorophyll a}$ data as a target variable (output, or $y$).These parameters are responsible for the autotrophs' seasonality and primary production.The values of the heterotroph grazing on the $\text{POM}$ rate $K_{het\_ pom}$ and a half-saturation constant for the heterotroph to $\text{POM}$ ratio$Ks_{het\_ pom\_ ratio}$ are not determined separately, but the corresponding values for phytoplankton are adopted.To use the Non-Linear Least-Squares Fitting method, the biogeochemical model is implemented in Python as a box model (which consists of only one layer) and then the LMFIT module ([Newville et al., 2014]) is applied.The biogeochemical model is in `src/brom_functions.py` file.Biogeochemical model parameters identification routines are in `s_3_biogeochemical_model_parameters_identification.ipynb`.In the Python box model, the same reactions responsible for autotroph growth are present as in the original model, but the grid is restricted to a single layer permanently mixed box.The parameters are identified for the box model and then applied to the multilayer model (which is written in Fortran).All results from multilayer model are tested to fit the Wadden Sea total OM production estimation approximately 309 $\text{g m}^{- 2}\ \text{year}^{- 1}$ according to ([van Beusekom et al., 1999]) - `s_4_OM_production_validation.ipynb` (if not the production is adjusted by changing the maximum hourly rate of photosynthesis ($p_{m}^{B}$)).Along with additional advective input of OM (110 $\text{g m}^{- 2}\ \text{year}^{- 1}$) according to ([van Beusekom et al., 1999]) the total OM input into the water domain during a year equals the total remineralization (419 $\text{g m}^{2}\ \text{year}^{- 1}$) reported in a carbon budget of the Sylt-Rømø basin ([van Beusekom et al., 1999]).We use this value as an approximation of the total OM available for remineralization in the Wadden Sea.Thus, the model parameters related to organic matter production are identified to fit the seasonality of $\text{Chlorophyll a}$ concentrations and the total OM input to the Wadden 
Sea.[Newville et al., 2014]: https://dx.doi.org/10.5281/zenodo.11813[van Beusekom et al., 1999]: https://link.springer.com/article/10.1007/BF02764176 | Parameter | Notation | Units | Value (Range) | Source ||:-----------:|:-----------:|:-----------:|:------------------:|:-----------:|| Photosynthetic effeciency at low irradiance | $\alpha$ | $$\text{mg}\ \text{C}\ (\text{mg}\ \text{Chl a}\ \text{h})^{- 1}\ (\mu M\ \text{quanta}\ m^{- 2}\ s^{- 1})^{- 1}$$ | 0.089 | LMFIT || Maximum hourly rate of photosynthesis | $p_{m}^{B}$ | $$\text{mg}\ C\ (\text{mg}\ \text{Chl a}\ h)^{- 1}$$ | 2.6 - 2.96 | LMFIT || Half-saturation constant of $\text{PO}_{4}^{3 -}$ uptake by $\text{Phy}$ | $Ks_{\text{PO}_{4}^{3 -}}$ | $${\text{mM }\text{P m}}^{- 3}$$ | 0.1 | LMFIT || Half-saturation constant of $\text{Si}$ uptake by $\text{Phy}$ | $Ks_{\text{Si}}$ | $$\text{mM Si m}^{-3}$$ | 0.1 | LMFIT || Half-saturation constant of $\text{NH}_{4}^{+}$ uptake by $\text{Phy}$ | $Ks_{\text{NH}_{4 }^{+}}$ | $${\text{mM}\text{N m}}^{- 3}$$ | 7 | LMFIT || Half-saturation constant of $\text{NO}_{2}^{-}$ and $\text{NO}_{3}^{-}$ uptake by $\text{Phy}$ | $Ks_{NO_{x}}$ | $${\text{mM}\text{N m}}^{- 3}$$ | 14.9 | LMFIT || $\text{Phy}$ rate of mortality | $K_{phy\_mortality}$ | $d^{- 1}$ | 1e-5 | LMFIT || $\text{Phy}$ rate of excretion | $K_{phy\_excrete}$ | $d^{- 1}$ | 0.015 | ([Yakushev et al., 2017]) || $\text{Het}$ grazing on $\text{Phy}$ | $K_{het\_phy}$ | $d^{- 1}$ | 0.2 | LMFIT || Half-saturation constant of $\text{Het}$ to $\text{Phy}$ ratio | $Ks_{het\_phy\_ ratio}$ | - | 0.3 | LMFIT || $\text{Het}$ grazing on $\text{POM}$ rate | $K_{het\_pom}$ | $d^{- 1}$ | 0.2 | LMFIT || Half-saturation constant of $\text{Het}$ to $\text{POM}$ ratio | $Ks_{het\_pom\_ ratio}$ | - | 0.3 | LMFIT || $\text{Het}$ rate of respiration | $K_{het\_mortality}$ | $d^{- 1}$ | 0.015 | ([Yakushev et al., 2017]) || $\text{Het}$ rate of mortality | $K_{het\_mortality}$ | $d^{- 1}$ | 0.0225 | LMFIT || $\text{Het}$ food absorbency | $\text{Uz}$ | - | 0.5 | ([Yakushev et al., 2017]) || $\text{Het}$ ratio between dissolved and particulate excretion | $\text{Hz}$ | - | 0.5 | ([Yakushev et al., 2017]) |[Yakushev et al., 2017]: https://doi.org/10.5194/gmd-10-453-2017 OM oxygen respiration rate and sulfate reduction rate coefficients are adjusted to fit oxygen consumption rate and sulfate reduction rate measured in sandy intertidal sediments of Sylt-Rømø Basin, Wadden Sea reported by [de Beer et al. (2005)].Denitrification rate coefficients are adopted from relative cell yield values from [Krumins et al., (2013)].[de Beer et al. 
(2005)]: https://doi.org/10.4319/lo.2005.50.1.0113[Krumins et al., (2013)]: https://doi.org/10.5194/bg-10-371-2013 | Parameter | Notation | Units | Value (Range) | Source ||:-----------:|:-----------:|:-----------:|:------------------:|:-----------:|| $\text{POM}$ to $\text{DOM}$ autolysis | $$K_{pom\_ dom}$$ | $$d^{- 1}$$ | 0.15 | ([Yakushev et al., 2017]) || $\text{DOM}$ oxygen respiration | $$K_{O_{2}dom\_ hydrolysis}$$ | $$d^{- 1}$$ | 0.1 | see text || $\text{POM}$ oxygen respiration | $$K_{O_{2}pom\_ hydrolysis}$$ | $$d^{- 1}$$ | 0.002 | see text || Half-saturation constant of $\text{O}_{2}$ for OM oxygen respiration | $$Ks_{O_{2}}$$ | $$\text{mM O}_{2}\ m^{- 3}$$ | 1 | ([Yakushev et al., 2017]) || $\text{DOM}$ denitrification 1st stage | $$K_{NO_{3}^{-}dom\_ hydrolysis}$$ | $$d^{- 1}$$ | 0.075 | see text || $\text{POM}$ denitrification 1st stage | $$K_{NO_{3}^{-}pom\_ hydrolysis}$$ | $$d^{- 1}$$ | 0.0015 | see text || Half-saturation constant of $\text{NO}_{3}^{-}$ for OM denitrification | $$Ks_{NO_{3}^{-}}$$ | $$\text{mM NO}_{3}^{-}\ m^{- 3}$$ | 0.1 | ([Yakushev et al., 2017]) || Half-saturation constant of $\text{O}_{2}$ for OM denitrification | $$Ks_{O_{2}\text{forN}O_{3}^{-}}$$ | $$\text{mM O}_{2}\ m^{- 3}$$ | 10 | ([Yakushev et al., 2017]) || $\text{DOM}$ denitrification 2st stage | $$K_{NO_{2}^{-}dom\_ hydrolysis}$$ | $$d^{- 1}$$ | 0.075 | see text || $\text{POM}$ denitrification 2st stage | $$K_{NO_{2}^{-}pom\_ hydrolysis}$$ | $$d^{- 1}$$ | 0.0015 | see text || Half-saturation constant of $\text{NO}_{2}^{-}$ for $\text{OM}$ denitrification | $$Ks_{NO_{2}^{-}}$$ | $$\text{mM NO}_{2}^{-}\ m^{- 3}$$ | 0.1 | ([Yakushev et al., 2017]) || $\text{DOM}$ sulfate reduction | $$K_{SO_{4}^{2 -}dom\_ hydrolysis}$$ | $$d^{- 1}$$ | 0.1 | see text || $\text{POM}$ sulfate reduction | $$K_{SO_{4}^{2 -}pom\_ hydrolysis}$$ | $$d^{- 1}$$ | 0.002 | see text || Half-saturation constant of $\text{SO}_{4}^{2 -}$ for OM sulfate reduction | $$Ks_{SO_{4}^{2 -}}$$ | $$\text{mM SO}_{4}^{2 -}\ m^{- 3}$$ | 1 | ([Yakushev et al., 2017]) || Half-saturation constant of $\text{O}_{2}$ for OM sulfate reduction | $$Ks_{O_{2}\text{forS}O_{4}^{2 -}}$$ | $$\text{mM O}_{2}\ m^{- 3}$$ | 25 | ([Yakushev et al., 2017]) || Half-saturation constant of $\text{NO}_{3}^{-}$ for OM sulfate reduction | $$Ks_{NO_{3}\text{forS}O_{4}^{2 -}}$$ | $$\text{mM NO}_{3}^{-}\ m^{- 3}$$ | 5 | ([Yakushev et al., 2017]) || Reference temperature | $$T_{\text{ref}}$$ | $^{\circ}$C | 2 | || Temperature factor | $$q_{10}$$ | - | 2 | ([Soetaert and Herman, 2009]) |[Yakushev et al., 2017]: https://doi.org/10.5194/gmd-10-453-2017[Soetaert and Herman, 2009]: https://www.springer.com/gp/book/9781402086236 Identification of dispersion coefficient $\mathbf{k}\mathbf{z}_{\mathbf{\text{dispersion}}}$ and other parameter values The Wadden sea sediments can be roughly separated into two zones with different permeability: sands cover approximately 70$\%$, and muds cover approximately 30$\%$ ([de Beer et al., 2005]).About 50$\%$ of the sediments in the Wadden Sea are exposed during low tide, and tidal flats consist mostly of sands ([de Beer et al., 2005]).While the muddy environment is reported to have higher OM content, the sands are more permeable for electron acceptors and for new organic material from the overlying water ([de Beer et al., 2005]).Alkalinity generation, which needs both supply of electron acceptors and OM occurs mostly in sandy environments ([de Beer et al., 2005]).Thus, according to the goal of the study to reproduce conditions in the 
Wadden Sea, which favors the maximum amount of alkalinity generation, we assume our sediments consist of sand.[de Beer et al., 2005]: https://doi.org/10.4319/lo.2005.50.1.0113 We do not include explicit tidal dynamics into our calculations.Instead, we introduce a range of dispersion coefficients ([Boudreau, 1997]) in the sediment domain to reproduce different vertical mixing conditions.The average porosity ($\phi$) of the upper 10 cm of sandy sediments in the Wadden Sea is approximately 0.43 - roughly the average value of all porosity values found in ([Jensen et al., 1996]; [de Beer et al., 2005]).Many different vertical mixing regimes in sandy sediments of the Wadden Sea exist.[Neumann et al., (2017)] using estimations of the vertical advective fluxes of nitrates in sediments of the German Bight calculated Peclet numbers for permeable sediments, which varied from 1 to 1000.Using a relation between dispersion coefficients and Peclet numbers ([Boudreau, 1997]) it is possible to evaluate the range for dispersion coefficient values.For Peclet numbers around 1, the dispersion coefficient is approximately equal to the molecular diffusion coefficient of approximately 1 $\cdot$ 10$^{- 9}\text{m}^{2}\text{sec}^{- 1}$.For Peclet numbers around 1000, the dispersion coefficient is approximately 2500 times larger than the molecular diffusion coefficient.Therefore, we applied a wide range for $kz_{\text{dispersion}}$ in our series of runs, simulating different vertical mixing conditions in sediments starting from 0.Hence, we can reproduce different alkalinity fluxes from those regions of the Wadden Sea with mostly advective vertical mixing conditions in sediments to regions with diffusive vertical mixing.We cannot apply these calculations to specific regions since the vertical advective conditions can change significantly within short distances.Using this approach, we can determine the possible range of values of $\text{TA}$ and $\text{TA}$ fluxes at the SWI in the Wadden Sea.[Boudreau, 1997]: https://www.academia.edu/3695121/Diagenetic_Models_and_Their_Implementation[de Beer et al., 2005]: https://doi.org/10.4319/lo.2005.50.1.0113[Jensen et al., 1996]: https://doi.org/10.3354/ame011181[Neumann et al., (2017)]: https://doi.org/10.1016/j.seares.2017.06.012 | Parameter | Notation | Units | Value (Range) | Source ||:-----------:|:-----------:|:-----------:|:------------------:|:-----------:|| Horizontal diffusivity coefficient | $$K_{h}$$ | $$\mathrm{m}^{\mathrm{2}}\mathrm{s}^{\mathrm{- 1}}$$ | 713 | LMFIT || Vertical dispersion coefficient in sediments | $$kz_{\text{dispersion}}$$ | $$\mathrm{m}^{\mathrm{2}}\mathrm{s}^{\mathrm{- 1}}$$ | 1e-9 - 35e-9 | see text || Porosity | $$\phi$$ | - | 0.43 | see text | The reaction parameters for anammox, nitrification and the sulfur cycle are taken from ([Yakushev et al., 2017]).Anammox does not change $\text{TA}$ directly (it does not affect the total charge in $\text{TA}_{\text{ec}}$), the loss of nitrogen compounds is compensated by the horizontal advection reproduced in the transport model.The rest of the parameters are adapted from ([Yakushev et al., 2017]).Uncertainties due to adaptation of some parameters from literature are compensated by using parameter values identified explicitly by the Non-Linear Least-Squares Fitting method.[Yakushev et al., 2017]: https://doi.org/10.5194/gmd-10-453-2017 Validation According to the reasoning provided in the Methods 1 section the most important reactions for alkalinity generation are OM degradation reactions.For the 
proper alkalinity evaluation, apart from the rates of OM degradation rates we should have the correct values of OM production and its timing.As mentioned previously, we identified the parameters (and forcing) of the transport and biogeochemical models to fit the seasonality of $\text{Chlorophyll a}$ concentrations and the total OM input to the Wadden Sea.The rates of OM degradation reactions are also identified to fit the reported values.However, there are still several factors that can influence a maximum alkalinity generation due to biogeochemical reactions in the Wadden Sea assessment.The Wadden Sea has diverse morphology and hydrodynamics.Tidal basins of the Wadden Sea are composed of sands so, as mentioned previously, they are the main candidates for the most important TA generators.Instead of modeling complicated tidal basins hydrodynamics we apply the variety of vertical mixing conditions in sediments and use the Wadden Sea average depth (which is 2.5 meters) to calculate corresponding TA concentrations for different mixing conditions.Thus, we normalize resulting TA values to the average depth of the Wadden Sea.For the sake of simplicity, we skip the changing water levels and different depths during high tide in different tidal basins.The actual process of sedimentary alkalinity generation in the coastal area, such as the Wadden Sea is split into different stages depending on the tidal phase.Alkalinity generation requires new organics and oxidizers, which incoming tide delivers.During air exposure, there are stagnant conditions in the sediments ([de Beer et al., 2005]; [Al-Raei et al., 2009]), which means that no additional organic matter and electron acceptors are available.Therefore, low tide means sedimentary biogeochemical processes get fewer reagents for reactions so that it can cause less extensive OM degradation rates.The simplification to skip low tides dynamics should not underestimate alkalinity generation, so it is in the scope of the goals of the study.The summary of simplifications applied in the multilayer box model is presented in the following table.[de Beer et al., 2005]: https://doi.org/10.4319/lo.2005.50.1.0113[Al-Raei et al., 2009]: https://doi.org/10.1007/s10236-009-0186-5 | The Wadden Sea | The multilayer box ||:--------------:|:------------------:||Extensive mixing with the surrounding North Sea|Horizontal diffusive exchange with an external box||Different mixing in sediments in different spots of the Wadden Sea and in different tidal phases | Separate calculations for the range of dispersion coefficients to reproduce different mixing regimes in sediments||Varying depth due to tides|A constant depth of 2.5 meters| To check whether the applied simplifications do not underestimate TA production we can evaluate the actual Wadden Sea morphology and hydrodynamics influence on the alkalinity generation considering the several setups with different depths and mixing in sediments.The tidal amplitude in the Wadden Sea is about 1.5 meters at the northern and western edges of the region and about 3 to 4 m in the inner German Bight, average tidal basins' tides are up to approximately 2.5 meters high ([van Beusekom et al., 2001]).Thus, to understand the influence of the changing water level in tidal basins of the Wadden Sea with different depths, let's consider three setups: where the seawater depth during high tide reaches 0.5 meters, 1.5 meters, and 2.5 meters.To implement a low tide behavior we introduce a periodic mixing in sediments when there is no mixing in 
sediments or between the sediments and the water column during part of each day of the year. For example, in a tidal flat that is 0.5 meters deep during high tide, the sediments are exposed to the air for most of the day (say 2/3 of it), so mixing takes place during 1/3 of a day. During the no-mixing period all other processes (biogeochemistry in both the sediments and the water column, and mixing between the layers of the water column) remain active. For a depth of 2.5 meters, mixing between seawater and sediments lasts the whole day. Therefore, we perform three runs for three setups with different depths and different mixing timing, but with otherwise identical parametrisation, to verify that our basic simplification does not underestimate TA generation.

[van Beusekom et al., 2001]: https://www.waddensea-worldheritage.org/resources/ecosystem-14-wadden-sea-specific-eutrophication-criteria 

|mixing period: |0.5 meters|1.5 meters|2.5 meters|
|:-|:-:|:-:|:-:|
|1/3 of a day|x|o|o|
|1/2 of a day|o|x|o|
|a whole day |o|o|x|

###Code 
import src.plot_functions as pf

result = list(map(pf.extract_alk,
                  (('data/validation/d_0p5_om+prod_0p33mix/water.nc', 0.125),
                   ('data/validation/d_1p5_om+prod_0p5mix/water.nc', 0.375),
                   ('data/validation/d_2p5_om+prod_mix/water.nc', 0.625))))
results = list(zip(result, ('0.5 meters', '1.5 meters', '2.5 meters')))
pf.show_alk(results)
###Output _____no_output_____
###Markdown **Figure M3-1**. Alkalinity profiles for the three setups with different depths during high tide and different mixing conditions. Blue line - mixing during 1/3 of a day. Orange line - 1/2 of a day. Green line - a whole day. Figure M3-1 shows that our simplifications do not underestimate TA generation. Shallower areas with less extensive mixing between sediments and seawater generate less alkalinity, while deeper areas generate more alkalinity. These differences are due to the different amounts of OM available for degradation: deeper areas receive more organic matter, so more organic matter is available for the denitrification and sulfate reduction reactions. ###Code 
import src.plot_functions as pf

result = list(map(pf.extract_alk,
                  (('data/validation/d_2p5_om+prod_0p33mix/water.nc', 0.625),
                   ('data/validation/d_2p5_om+prod_0p5mix/water.nc', 0.625),
                   ('data/validation/d_2p5_om+prod_mix/water.nc', 0.625))))
results = list(zip(result, ('0.33', '0.5', '1')))
pf.show_alk(results)
###Output _____no_output_____
KochSnowflake.ipynb
###Markdown Koch SnowflakeM. Lamoureux. March 4, 2019.This is a Javascript piece of code, that creates a long SVG string, then displays it.The SVG is just to draw a Koch snowflake. It is done by recursion. Given a line with start point (x1,y1) and endpoint (x2,y2), the code chops up the line into three segments, then inserts two additional edges to a "triangle" sitting on the middle segment. This gives us four segments.Then the recursion repeats on each of the four subsegments. We can go down a certain number of levels. I suggest 6 is good. Use level 1 to see the basic subsegments. We start with an initial triangle. ###Code %%html <html> <body> <h1>My recursive SVG example</h1> <div id="links1"> </div> <script> function drawLine(x1,y1,x2,y2){ var aString; aString = '<line '; aString += 'x1= "' + x1 + '" '; aString += 'y1= "' + y1 + '" '; aString += 'x2= "' + x2 + '" '; aString += 'y2= "' + y2 + '" '; aString += 'style="stroke:rgb(0,255,0);stroke-width:3" />\n'; return aString; } function drawKoch(x1,y1,x2,y2,nLevels) { var aString; var x2,x3,x4,x5,y1,y2,y3,y4,y5; // this variable declartion is necessary to get the recursion to work right. if (nLevels < 1) { return drawLine(x1,y1,x2,y2); } else { x3 = x1 + (x2-x1)/3; y3 = y1 + (y2-y1)/3; x4 = x1 + (x2-x1)/2 + (y2-y1)/Math.sqrt(12); y4 = y1 + (y2-y1)/2 - (x2-x1)/Math.sqrt(12); x5 = x1 + 2*(x2-x1)/3; y5 = y1 + 2*(y2-y1)/3; aString = drawKoch(x1,y1,x3,y3,nLevels-1); aString += drawKoch(x3,y3,x4,y4,nLevels-1); aString += drawKoch(x4,y4,x5,y5,nLevels-1); aString += drawKoch(x5,y5,x2,y2,nLevels-1); return aString; } } // Coordinates of an initial equilateral triangle // We put one Koch curve on each side of the triangle L = 500; Offset = 150; x1 = 0; y1 = Offset; x2 = L; y2 = Offset; x3 = L/2; y3 = Offset + L*Math.sqrt(3)/2; var strLinks = '<svg width="500" height="600">\n'; strLinks += drawKoch(x1,y1,x2,y2,6); strLinks += drawKoch(x2,y2,x3,y3,6); strLinks += drawKoch(x3,y3,x1,y1,6); strLinks += '</svg>'; document.getElementById("links1").innerHTML = strLinks; </script> </body> </html> ###Output _____no_output_____ ###Markdown Koch Snowflake IntroductionA Koch Snowflake is a fractal that has been known for over 100 years (see the [Wikipedia article](https://en.wikipedia.org/wiki/Koch_snowflake Wikipedia article)for history).![Koch Snowflake](https://upload.wikimedia.org/wikipedia/commons/thumb/d/d9/KochFlake.svg/500px-KochFlake.svg.png)The shaped is formed by starting from a triangle. For each line segment, remove the middle thirdand replace it by two equal pieces that form a triangle.To program this in python, we simply need a function that turns a line segment into four shortersegments. We'll use a pair of tuples to represent a line segment: $((x_a,y_a), (x_b,y_b))$.The current shape will be a list of segments. All we need is a function that takes one line segment and expands it into four smaller segments.We'll call the original segment ae, and create new points b, c, d, so that the four segemntsare ab, bc, cd, and de. First, let's work this out by hand, then we'll make the function. ###Code a = (0.0, 0.0) e = (1.0, 0.0) ae = (a,e) ###Output _____no_output_____ ###Markdown It's helpful to be able to make a plot of a list of segments. 
###Code 
# Imports used by the helpers below (matplotlib for plotting, math for the geometry).
import math
import matplotlib.pyplot as plt
from matplotlib import collections as mc

def plot_segments(segments):
    fig, ax = plt.subplots()
    lines = mc.LineCollection(segments)
    ax.add_collection(lines)
    ax.margins(0.2)
    ax.set_aspect('equal')
    ax.autoscale()
    return ax

plot_segments([ae]);
###Output _____no_output_____
###Markdown Next we figure out formulas for points b, c, and d. Points b and d are easy, because they are 1/3 and 2/3 of the way along the segment ae. ###Code 
b = ((2*a[0]+e[0])/3, (2*a[1]+e[1])/3)
d = ((a[0]+2*e[0])/3, (a[1]+2*e[1])/3)
###Output _____no_output_____
###Markdown Point c is trickier, because it doesn't lie directly on the line segment. It is the vertex of an equilateral triangle with side length |ae|/3. To get to point c, find the midpoint of ae, then go out perpendicularly a distance $\sqrt{3}/6$. To move perpendicularly, we use the trick that the point (-y, x) is rotated 90° CCW from (x, y). ###Code 
k = math.sqrt(3)/6
c = ((a[0]+e[0])/2 - k * (e[1]-a[1]), (a[1]+e[1])/2 + k *(e[0]-a[0]))
plt.gcf().clear()
plot_segments([(a,b), (b,c), (c,d), (d,e)]);
###Output _____no_output_____
###Markdown Now we make this into a function. ###Code 
def f(seg):
    a = seg[0]
    e = seg[1]
    b = ((2*a[0]+e[0])/3, (2*a[1]+e[1])/3)
    d = ((a[0]+2*e[0])/3, (a[1]+2*e[1])/3)
    k = math.sqrt(3)/6
    c = ((a[0]+e[0])/2 - k * (e[1]-a[1]), (a[1]+e[1])/2 + k *(e[0]-a[0]))
    return [(a,b), (b,c), (c,d), (d,e)]
###Output _____no_output_____
###Markdown We'll test this function on some different line segments. ###Code 
plot_segments(f(((0,0),(1,0))));
plot_segments(f(((0,0),(0,1))));
plot_segments(f(((2, 3), (2 + math.cos(math.pi/3), 3 + math.sin(math.pi/3)))));
###Output _____no_output_____
###Markdown Finally, we make a function to apply f to every segment in a list. We use some elegant notation called a “list comprehension” here. ###Code 
def recurse(segments):
    return [x for s in segments for x in f(s)]

recurse([(a,e)])
plot_segments(recurse([(a,e)]));
###Output _____no_output_____
###Code 
segments = [(a,e)]
for i in range(2):
    segments = recurse(segments)
plot_segments(segments);
###Output _____no_output_____
###Markdown Finally, we'll make the full snowflake by starting from an equilateral triangle. ###Code 
def snowflake(n):
    p = -math.cos(math.pi/6), math.sin(math.pi/6)
    q = math.cos(math.pi/6), math.sin(5*math.pi/6)
    r = 0.0, -1.0
    segments = [(p,q), (q,r), (r,p)]
    for i in range(n):
        segments = recurse(segments)
    plot_segments(segments)

snowflake(0)
snowflake(1)
snowflake(2)
snowflake(6)
###Output _____no_output_____
###Markdown Length of the perimeter line

Note that the length of the line grows by a factor of 4/3 at each iteration. The original triangle has a perimeter of 3, so after $n$ iterations the curve has a length $3(4/3)^n$. We can evaluate this for several values of n. ###Code 
[(n, 3*(4/3)**n) for n in range(11)]
###Output _____no_output_____
###Markdown Note that the true fractal curve, with $n\rightarrow\infty$, has an infinite length!

Area

For the total area of the fractal, consider the additional area at each iteration. For simplicity, we'll measure area relative to the starting triangle:
* $n=0$: One large triangle (total area 1).
* $n=1$: Add three smaller triangles. Each triangle is $1/9$ the size of the original triangle. Now the total area is $1 + 3/9 = 4/3 \approx 1.3333$.
* $n=2$: Add 12 smaller triangles. Each triangle is $1/9^2$ the size of the original triangle. Now the total area is $1+3/9+12/81 = 40/27 \approx 1.4815$.

To continue this, we need to be more systematic. At level $n$, one triangle is added on each segment from the previous iteration ($n-1$).
The number of segments at level $n$ is $3\cdot 4^n$, so the number of triangles added at level $n$ is $3\cdot 4^{n-1}$. Each triangle added at level $n$ has a relative area of $1/9^n$, so the total area is a sum:$$ A_n = 1 + \sum_{i=1}^n \frac{3\cdot 4^{i-1}}{9^i} = 1 + \frac{1}{3}\sum_{i=0}^{n-1}\left(\frac{4}{9}\right)^i.$$ ###Markdown Next we simplify the expression and evaluate the geometric sum using the formula$$ \sum_{i=0}^n s^i = \frac{1-s^{n+1}}{1-s}.$$We find$$A_n = 1 + \frac{1}{3}\sum_{i=0}^{n-1} \left(\frac{4}{9}\right)^i= 1 + \frac{1}{3}\cdot\frac{1-(4/9)^n}{1-(4/9)}= \frac{8}{5} - \frac{3}{5}\left(\frac{4}{9}\right)^n$$ ###Code [(n, 8/5 - 3/5*(4/9)**n) for n in range(11)] ###Output _____no_output_____
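###Markdown As an optional cross-check (an addition, not in the original notebook), we can compare the closed-form expression for $A_n$ with the explicit partial sum $1 + \sum_{i=1}^n 3\cdot 4^{i-1}/9^i$; the two should agree for every $n$, and both approach the limiting area $8/5$. ###Code 
# Optional check: the closed form should reproduce the explicit partial sum.
for n in range(6):
    direct = 1 + sum(3*4**(i-1)/9**i for i in range(1, n+1))
    closed = 8/5 - 3/5*(4/9)**n
    print(n, round(direct, 6), round(closed, 6))
###Output _____no_output_____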
notebooks/templates/SEIRSAgeModel_demo.ipynb
###Markdown Table of Contents 1&nbsp;&nbsp;Covid-19: From model prediction to model predictive control1.1&nbsp;&nbsp;A demo of the deterministic modeling framework1.2&nbsp;&nbsp;Introduction1.2.1&nbsp;&nbsp;Model dynamics1.2.2&nbsp;&nbsp;Deterministic vs. Stochastic framework1.2.3&nbsp;&nbsp;Model parameters1.2.4&nbsp;&nbsp;Social interaction data1.3&nbsp;&nbsp;Performing simulations1.3.1&nbsp;&nbsp;Without age-structuring1.3.2&nbsp;&nbsp;Meta-population simulations1.4&nbsp;&nbsp;Calibrating &amp;x03B2;" role="presentation">𝛽β\beta in a business-as-usual scenario (Nc=11.2" role="presentation">𝑁𝑐=11.2Nc=11.2N_c = 11.2)1.4.1&nbsp;&nbsp;Performing a least-squares fit1.4.2&nbsp;&nbsp;Visualising the fit1.5&nbsp;&nbsp;Model Predictive control (MPC)1.5.1&nbsp;&nbsp;Optimising government policy1.5.2&nbsp;&nbsp;Visualising the effect of government policy1.6&nbsp;&nbsp;Specific methods1.6.1&nbsp;&nbsp;realTimeScenario1.6.2&nbsp;&nbsp;realTimeMPC Covid-19: From model prediction to model predictive control A demo of the deterministic modeling framework*Original code by Ryan S. McGee. Modified by T.W. Alleman in consultation with the BIOMATH research unit headed by prof. Ingmar Nopens.*Copyright (c) 2020 by T.W. Alleman, BIOMATH, Ghent University. All Rights Reserved.Our code implements a SEIRS infectious disease dynamics models with extensions to model the effect quarantining detected cases. Using the concept of 'classes' in Python 3, the code was integrated with our previous work and allows to quickly perform Monte Carlo simulations, calibrate model parameters and calculate an *optimal* government policies using a model predictive controller (MPC). A white paper and souce code of our previous work can be found on the Biomath website. https://biomath.ugent.be/covid-19-outbreak-modelling-and-control ###Code import numpy as np import matplotlib.pyplot as plt from IPython.display import Image from ipywidgets import interact,fixed,FloatSlider,IntSlider,ToggleButtons import pandas as pd import datetime import scipy from scipy.integrate import odeint import matplotlib.dates as mdates import matplotlib import scipy.stats as st import networkx # to install networkx in your environment: conda install networkx ###Output _____no_output_____ ###Markdown Load the covid 19 custom development code ###Code from covid19model.models import models # OPTIONAL: Load the "autoreload" extension so that package code can change %load_ext autoreload # OPTIONAL: always reload modules so that as you change code in src, it gets loaded %autoreload 2 ###Output _____no_output_____ ###Markdown Introduction Model dynamics GeneralThe SEIR model was first proposed in 1929 by two Scottish scientists. It is a compartmental model that subdivides the human population in four types of people : 1) healthy individuals susceptible to the infectious disease, 2) exposed individuals in a latent phase (partially the incubation period), 3) infectious individuals able to transmit the disease and 4) individuals removed from the population either through immunisation or death. Despite being a simple and idealised reality, the SEIR model is used extensively to predict the outbreak of infectious diseases and this was no different during the outbreak in China earlier this year. In this work, we extended the SEIR model to incorporate more expert knowledge on SARS-Cov-2 into the model. The infectious pool is split into four parts. The first is a period of pre-symptomatic infectiousness. 
Several studies have shown that pre-symptomatic transmission is a dominant transmission mechanism of SARS-Cov-2. After the period of pre-symptomatic transmission, three possible infectious outcomes are modelled. 1) asymptomatic outcome, for patients who show no symptoms at all 2) mild outcome, for patients with mild symptoms, these patients recover at home 3) a mild infection can degress to the point where a hospitalision is needed. The pool of *recovered* individuals from the classical SEIR model is split into an recovered and dead pool. People from the susceptible, exposed, pre-symptomatic infectious, asymptomatic infectious, mild infectious and recovered pool can be quarantined after having tested positive for Covid-19. Note that for individuals in the susceptible and recovered pools, this corresponds to a *false positive* test. The dynamics of our extended SEIR model are presented in the flowchart below. We make the following assumptions with regard to the general SEIRS dynamics,We make the following assumptions with regard to the SEIRS dynamics,1. There is no connection between the severity of the disease and the infectiousness of an individual. Only the duration of infectiousness can differ.2. All patients experience a brief pre-symptomatic, infectious period.3. All deaths come from intensive care units in hospitals, meaning no patients die outside a hospital. Of the 7703 diseased (01/05/2020), 46\% died in a hospital while 53\% died in an elderly home. All hospital deaths are confirmed Covid-19 cases while only 16\% of elderly home deaths were confirmed. When taking the elderly homes out of the model scope, the assumption that deaths only arise in hospitals is true due to the fact that only 0.3\% died at home and 0.4\% died someplace else. Asymptomatic and mild cases automatically lead to recovery and in no case to death (https://www.info-coronavirus.be/nl/news/trends-laatste-dagen-zetten-zich-door/).4. We implement no testing and quarantining in the hospital. Hospitalised persons are assumed to be incapable of infecting susceptibles, so the implementation of a quarantine would not change the dynamics but slow down calculations.5. Recovered patients are assumed to be immune, seasonality is deemed out of scope of this work. Hospital subystem (preliminary)The hospital subsystem is a simplification of actual hospital dynamics. The dynamics and estimated parameters were obtained by interviewing Ghent University Hospital staff and presenting the resulting meeting notes to the remaining three Ghent hospitals for verification.At the time of writing (30/04/2020) every admitted patient is tested for Covid-19. Roughly 10% of all Covid-19 patients at UZ Ghent originally came to the hospital for some other medical condition. The remaining 90% of all Covid-19 arrives in the emergency room or comes from hospitals in heavily struck regions. The fraction of people the hospital when getting infected with Covid-19 are reported to authorities as ‘new hospitalisations’. There are three hospital wards for Covid-19 patients: 1) Cohort, which should be seen like a regular hospital ward with Covid-19 patients. Patients are not monitored permanently in this ward. 2) Midcare, a ward where more severe cases are monitored more cosely than in Cohort. Midcare is more closely related to ICU than to Cohort and is usually lumped with the number of ICU patients when reporting to the officials. 3) Intensive care, for patients with the most severe symptoms. 
Intensive care needs can include the use of a ventilator to supply the patient with oxygen. It was noted that the fraction Cohort vs. Midcare and ICU is roughly 50-50%.Generally, patients can switch between any of the wards depending on how the disease progresses. However, some dominant *flows* exist. Usually, it is apparent upon a patients arrival to which ward he or she will be assigned. On average patients who don’t degress stay in Cohort for 6 days, with values spread between 3 and 8 days. The average ICU stay is 14 days when a patient doesn’t need ventilation. If the patient needs ventilation the stay is slightly longer. After being in ICU, patients return to Cohort for an another 6 to 7 days of recovery. Based on these dominant *flows*, the hospital subsystem was simplified by making the following assumptions,1. Assume people arriving at the hospital are instantly distributed between Cohort, Midcare or ICU.2. Merge ventilator and non-ventilator ICU.3. Assume deaths can only arise in ICU.4. Assume all patients in midcare and ICU pass via Cohort on their way to recovery.5. Assume that the 10% of the patients that come from hospital actually come from the population. Deterministic vs. Stochastic framework The extended SEIR model is implemented using two frameworks: a deterministic and a stochastic (network based) framework. **This Jupyter Notebooks is a demo of the deterministic model,** a demo of the stochastic network simulator is available in *SEIRSNetworkModel_Demo*. A deterministic implementation of the extended SEIRS model captures important features of infectious disease dynamics, but it assumes uniform mixing of the population (i.e. every individual in the population is equally likely to interact with every other individual). The deterministic approach results in a set of N ordinary differential equations, one for every of the N ’population pools’ considered. The main advantage of a deterministic model is that a low amount of computational resources are required while still maintaining an acceptable accuracy. The deterministic framework allows to rapidly explore scenarios and perform optimisations which require thousands of function evaluations. However, it is often important to consider the structure of contact networks when studying disease transmission and the effect of interventions such as social distancing and contact tracing. The main drawback of the deterministic approach is the inability to simulate contact tracing, which is one of the most promising measures against the spread of SARS-Cov-2. For this reason, the SEIRS dynamics depicted in on the above flowchart can be simulated on a Barabasi-Albert network. This advantages include a more detailed analysis of the relationship between social network structure and effective transmission rates, including the effect of network-based interventions such as social distancing, quarantining, and contact tracing. The added value comes at a high price in terms of computational resources. It is not possible to perform optimisations of parameters in the stochastic network model on a personal computer. Instead, high performance computing infrastructure is needed. The second drawback is the need for more data and/or assumptions on social interactions and how government measures affect these social interactions. Deterministic equations The dynamics of the deterministic system are mathematically formulated as the rate of change of each population pool shown in the above flowchart. 
This results in the following system of ordinary differential equations (the parameters are explained in the next section):\begin{eqnarray}\dot{S} &=& - \beta N_c \cdot S \cdot \Big( \frac{I+A}{N} \Big) - \theta_{\text{S}} \psi_{\text{FP}} \cdot S + SQ/d_{\text{q,FP}} + \zeta \cdot R,\\\dot{E} &=& \beta \cdot N_c \cdot S \Big( \frac{I+A}{N} \Big) - (1/\sigma) \cdot E - \theta_{\text{E}} \psi_{\text{PP}} \cdot E,\\\dot{I} &=& (1/\sigma) \cdot E - (1/\omega) \cdot I - \theta_I \psi_{PP} \cdot I,\\\dot{A} &=& (\text{a}/\omega) \cdot I - A/d_{\text{a}} - \theta_{\text{A}} \psi_{\text{PP}} \cdot A,\\ \dot{M} &=& (\text{m} / \omega ) \cdot I - M \cdot ((1-h)/d_m) - M \cdot h/d_{\text{hospital}} - \theta_{\text{M}} \psi_{\text{PP}} \cdot M,\\\dot{C} &=& c \cdot (M+MQ) \cdot (h/d_{\text{hospital}}) - c \cdot(1/d_c)\cdot C,\\\dot{C}_{\text{mi,rec}} &=& Mi/d_{\text{mi}} - C_{\text{mi,rec}} \cdot (1/d_{\text{mi,rec}}),\\\dot{C}_{\text{ICU,rec}} &=& (1-m_0)/d_{\text{ICU}} \cdot \text{ICU} - C_{\text{ICU,rec}} \cdot (1/d_{\text{ICU,rec}}),\\C_{\text{tot}} &=& C + C_{\text{mi,rec}} + C_{\text{ICU,rec}}, \\\dot{\text{Mi}} &=& mi \cdot (M+MQ) \cdot (h/d_{\text{hospital}}) - \text{Mi} / d_{\text{mi}}\\ \dot{\text{ICU}} &=& (1-c-mi) \cdot (M+MQ) \cdot (h/d_{\text{hospital}}) - \text{ICU}/d_{\text{ICU}},\\H &=& C_{\text{tot}} + \text{Mi} + \text{ICU},\\\dot{D} &=& m_0 \cdot \text{ICU}/d_{\text{ICU}},\\\dot{R} &=& A/d_a + M \cdot ((1-h)/d_m) + (1/d_c) \cdot c \cdot M \cdot (h/d_{\text{hospital}}) \\ && + (M_i/d_{mi}) \cdot (1/d_{\tiny{\text{mi,recovery}}})+ ((1-m_0)/ d_{\text{ICU}}) \cdot (1/d_{\text{\tiny{ICU,recovery}}}) \cdot ICU \\&& + AQ/d_a + MQ \cdot ((1-h)/d_m)+ RQ/d_{\text{q,FP}} - \zeta \cdot R, \\\dot{SQ} &=& \theta_{\text{S}} \psi_{\text{FP}} \cdot S - (1/d_{\text{q,FP}}) \cdot SQ, \\\dot{EQ} &=& \theta_{\text{E}} \psi_{\text{PP}} \cdot E - (1/\sigma) \cdot EQ, \\\dot{IQ} &=& \theta_{\text{I}} \psi_{\text{PP}} \cdot I + (1/\sigma) \cdot EQ - (1/\omega) \cdot IQ, \\\dot{AQ} &=& \theta_{\text{A}} \psi_{\text{PP}} \cdot A + (a/\omega) \cdot EQ - AQ/d_a, \\\dot{MQ} &=& \theta_{\text{M}} \psi_{\text{PP}} \cdot M + (m/\omega) \cdot EQ - MQ \cdot ((1-h)/d_m) - MQ \cdot h/d_{\text{hospital}}, \\\dot{RQ} &=& \theta_{\text{R}} \psi_{\text{FP}} - (1/d_{\text{q,FP}}) \cdot RQ,\end{eqnarray} Model parameters In the above equations, S stands for susceptible, E for exposed, A for asymptomatic, M for mild, H for hospitalised, C for cohort, Mi for midcare, ICU for intensive care unit, D for dead, R for recovered. The quarantined states are denoted with a Q suffix, for instance AQ stands for asymptomatic and quarantined. The states S, E, A, M and R can be quarantined. The disease dynamics when quarantined are identical to the non quarantined dynamics. For instance, EQ will evolve into AQ or MQ with the same probability as E evolves into A or M. Individuals from the MQ pool can end up in the hospital. N stands for the total population. The clinical parameters are: a, m: the chance of having an asymptomatic or mild infection. h: the fraction of mildly infected which require hospitalisation. c: fraction of the hospitalised which remain in Cohort, mi: fraction of hospitalised which end up in midcare. Based on reported cases in China and travel data, Li et al. (2020b) estimated that 86 % of coronavirus infections in the country were "undocumented" in the weeks before officials instituted stringent quarantines. 
This figure thus includes the asymptomatic cases and an unknown number of mildly symptomatic cases and is thus an overestimation of the asymptotic fraction. In Iceland, citizens were invited for testing regardless of symptoms. Of all people with positive test results, 43% were asymptomatic (Gudbjartsson et al., 2020). The actual number of asymptomatic infections might be even higher since it seemed that symptomatic persons were more likely to respond to the invitation (Sciensano, 2020). In this work it is assumed that 43 % of all infected cases are asymptomatic. This figure can later be corrected in light of large scale immunity testing in the Belgian population. Hence,$$ a = 0.43 .$$Wu and McGoogan (2020) estimated that the distribution between mild, severe and critical cases is equal to 81%, 15% and 4%. As a rule of thumb, one can assume that one third of all hospitalised patients ends up in an ICU. Based on interviews with Ghent University hospital staff, midcare is merged with ICU in the offical numbers. For now, it is assumed that the distribution between midcare and ICU is 50-50 %. The sum of both pools is one third of the hospitalisations. Since the average time a patient spends in midcare is equal to ICU, this corresponds to seeing midcare and ICU as 'ICU'. $\sigma$: length of the latent period. Assumed four days based on a modeling study by Davies et al. (2020) .$\omega$: length of the pre-symptomatic infectious period, assumed 1.5 days (Davies et al. 2020). The sum of $\omega$ and $\sigma$ is the total incubation period, and is equal to 5.5 days. Several estimates of the incubation period have been published and range from 3.6 to 6.4 days, with the majority of estimates around 5 days (Park et. al 2020).$d_{a}$ , $d_{m}$ , $d_{h}$ : the duration of infection in case of a asymptomatic or mild infection. Assumed to be 6.5 days. Toghether with the length of the pre-symptomatic infectious period, this accounts to a total of 8 days of infectiousness. $d_{c}$ , $d_{\text{mi}}$ , $d_{\text{ICU}}$: average length of a Cohort, Midcare and ICU stay. Equal to one week, two weeks and two weeks respectively.$d_{\text{mi,recovery}}$ , $d_{\text{ICU,recovery}}$: lengths of recovery stays in Cohort after being in Midcare or IC. Equal to one week.Zhou et al. (2020) performed a retrospective study on 191 Chinese hospital patients and determined that the time from illness onset to discharge or death was 22.0 days (18.0-25.0, IQR) and 18.5 days (15.0-22.0, IQR) for survivors and victims respectively. Using available preliminary data, the World Health Organisation estimated the median time from onset to clinical recovery for mild cases to be approximately 2 weeks and to be 3-6 weeks for patients with severe or critical disease (WHO, 2020). Based on this report, we assume a recovery time of three weeks for heavy infections.$d_{hospital}$ : the time before heavily or critically infected patients reach the hospital. Assumed 5-9 days (Linton et al. 2020). Still waiting on hospital input here.$m_0$ : the mortality in ICU, which is roughly 50\% (Wu and McGoogan, 2020). $\zeta$: can be used to model the effect of re-susceptibility and seasonality of a disease. Throughout this demo, we assume $\zeta = 0$ because data on seasonality is not yet available at the moment. We thus assume permanent immunity after recovering from the infection. Social interaction data Social Contact Rates (SOCRATES) Data Tool https://lwillem.shinyapps.io/socrates_rshiny/1. 
What is the average number of daily human-to-human contacts of the Belgian population? Include all ages, all genders and both physical and non-physical interactions of any duration. To include all ages, type: *0,60+* in the *Age Breaks* dialog box.
2. What is the average number of physical human-to-human contacts of the Belgian population? Include all ages, all genders and all durations of physical contact.
3. What is the average number of physical human-to-human contacts of at least 1 hour of the Belgian population?
4. Based on the above results, how would you estimate $N_c$ in the deterministic model?
5. Based on the above results, how would you estimate $p$ in the stochastic model? Recall that $p$ is the fraction of *random contacts* a person has on a daily basis, while $(1-p)$ is the fraction of *inner circle contacts* a person has on a daily basis. Google COVID-19 Community Mobility Reports https://www.google.com/covid19/mobility/ London School of Hygiene https://www.thelancet.com/journals/lanpub/article/PIIS2468-2667(20)30073-6/fulltext Performing simulations Without age-structuring The 'SEIRSAgeModel' class The basic concept of object oriented programming in Python 3 is illustrated schematically below. An object, created by calling a class, should be seen as a 'box' containing 'tools'. First, the model parameters, the initial condition and the monte-carlo simulation settings are put inside. The toolbox 'SEIRSAgeModel' not only contains the necessary code to simulate the model, but also contains several convenient functions for data visualisation, calibration of model parameters and model predictive control. The advantage of using an object instead of nested functions is that function arguments don't have to be passed explicitly to the helper functions every time these are called by the user. Rather, the parameters are stored in our 'SEIRSAgeModel' toolbox and can be used by the class functions at any time. This drastically enhances the readability of the code. <img src="../figs/SEIRSAgeModel.jpg" alt="class" height="600" width="700" style="float: left; margin-right: 500px;" /> As of 18/04/2020, the SEIRSAgeModel contains 9 functions which can be grouped in three parts: 1) functions to run and visualise simulations, 2) functions to perform parameter estimations and visualise the results, and 3) functions to optimize future policies using model predictive control (MPC). Creating a SEIRSAgeModel object We start our demo with the initialisation of our 'toolbox', as shown in the cell below. This is done by calling the SEIRSAgeModel class and defining all clinical and testing parameters, the initial condition, and whether we want to enable monte-carlo simulations. If the arguments monteCarlo and n_samples are omitted from the object initialisation, monte-carlo sampling of the incubation period is switched off by default. **It is essential that Nc and all initial conditions (denoted 'initX') are numpy arrays. The initial conditions must be 1D arrays with the same size as the number of age categories considered in the metapopulation model. Nc must be a square 2D array with the same size as the number of age categories considered in the metapopulation model. For now, we omit age-structuring as this is demonstrated later on.** We conveniently name our object 'model'.
###Code model = models.SEIRSAgeModel( initN = np.array([11.43e6]), #must be a numpy array; size of the Belgian population beta = 0.07, # probability of infection when encountering infected person sigma = 3.2, # latent period (days) omega = 2.0, # pre-symptomatic infectiousness (days) Nc = np.array([11.2]), #must be a numpy array; average number of human-to-human interactions per day a = 0.43, # probability of an asymptotic (supermild) infection m = 1-0.43, # probability of a mild infection h = 0.20, # probability of hospitalisation for a mild infection c = 3/4, # probability of hospitalisation in cohort mi = 1/8, # probability of hospitalisation in midcare da = 7, # days of infection when asymptomatic (supermild) dm = 7, # days of infection when mild dc = 7, dmi = 14, dICU = 14, dICUrec = 7, dmirec = 7, dhospital = 5, # days before reaching the hospital when heavy or critical m0 = 0.49, # mortality in ICU maxICU = 2000, totalTests = 0, psi_FP = 0, # probability of a false positive psi_PP = 1, # probability of a correct test dq = 14, # days in quarantaine initE = np.array([1]), #must be a numpy array initA = np.zeros(1), initM = np.zeros(1), initC = np.zeros(1), initCmirec = np.zeros(1), initCicurec = np.zeros(1), initMi = np.zeros(1), initICU = np.zeros(1), initR = np.zeros(1), initD = np.zeros(1), initSQ = np.zeros(1), initEQ = np.zeros(1), initIQ = np.zeros(1), initAQ = np.zeros(1), initMQ = np.zeros(1), initRQ = np.zeros(1), monteCarlo = False, n_samples = 1 ) ###Output _____no_output_____ ###Markdown Extract Sciensano data ###Code [index,data] = model.obtainData() ICUvect = np.transpose(data[0]) hospital = np.transpose(data[1]) print(ICUvect.shape) ###Output (1, 65) ###Markdown Changing an object variable after intialisationAfter initialising our 'model' it is still possible to change variables using the following syntax. ###Code model.beta = 0.079 ###Output _____no_output_____ ###Markdown Running your first simulationA simulation is run by using the attribute function *sim*, which uses one argument, the simulation time T, as its input. ###Code y = model.sim(200) ###Output _____no_output_____ ###Markdown For advanced users: the numerical results of the simulation can be accessed directly be calling *object.X* or *object.sumX* where X is the name of the desired population pool. Both are numpy arrays. *Ojbect.X* is a 3D array of the following dimensions:- x-dimension: number of age categories,- y-dimesion: tN: total number of timesteps taken (one per day),- z-dimension: n_samples: total number of monte-carlo simulations performed.Object.sumX is a 2D array containing only the results summed over all age categorie and has the following dimensions,- x-dimesion: tN: total number of timesteps taken (one per day),- y-dimension: n_samples: total number of monte-carlo simulations performed. Visualising the resultsTo quickly visualise simulation results, two attribute functions were created. The first function, *plotPopulationStatus*, visualises the number of susceptible, exposed, infected and recovered individuals in the population. The second function, *plotInfected*, by default visualises the number of heavily and critically infected individuals. Both functions require no user input to work but both have some optional arguments,> plotPopulationStatus(filename),> - filename: string with a filename + extension to save the figure. 
The figure is not saved per default.> plotInfected(asymptotic, mild, filename),> - asymptotic: set to *True* to include the supermild pool in the visualisation.> - mild: set to *True* to include the mild pool in the visualisation.> - filename: string with a filename + extension to save the figure. The figure is not saved per default.> - getfig: True to return ax, fig (default: False) ###Code model.plotPopulationStatus() model.plotInfected() ###Output _____no_output_____ ###Markdown The use of checkpoints to change parameters on the flyA cool feature of the original SEIRSplus package by Ryan McGee was the use of so-called *checkpoints* dictionary to change simulation parameters on the fly. In our modification, this feature is preserved. Below you can find an example of a *checkpoints* dictionary. The simulation will be started with the previously initialised parameters. After 40 days, social interaction will be limited by changing $N_c$ to 0.50 contacts per day. After 80 days, social restrictions are lifted and beta once more assumes its *business-as-usual* value. *checkpoints* is the only optional argument of the *sim* functions and is set to *None* per default. ###Code # Create checkpoints dictionary chk = {'t': [62,120], 'Nc': [np.array([1]),np.array([11.2])] } # Run simulation y = model.sim(140,checkpoints=chk) # Visualise model.plotPopulationStatus() model.plotInfected() ###Output _____no_output_____ ###Markdown Meta-population simulations Creating an age-structured SEIRSAgeModel objectA first important challenge when using a deterministic model is to link the discrete levels of the control handle Nc (number of contacts) to specific government policies. A model extension that could be used to facilitate this is age-structuring. In this approach, all population pools are split in age-bins and the interactions between the age-bins are governed by a so-called interaction matrix. This modeling approach was recently used by a team of the London School of Hygiene and details can be found here [1]. Our model is written in such a way that incorporating age-structuring is only a matter of change the initial conditions and Nc-matrix sizes. We can already run simulations using the Belgian interaction matrix shown below and we will run the controller using this model in the very near future.We now initialise a second, age-structured, 'toolbox', as shown in the cell below. This is done in exactly the same way as before but instead of size-one numpy arrays we will now use a 16x16 interaction interaction matrix and 16x0 size initial conditions, as was demonstrated by the London School of Hygiene. Interaction matrices are publicly available here [2]. 
We conveniently name our object 'ageModel'.[1] https://www.thelancet.com/journals/lanpub/article/PIIS2468-2667(20)30073-6/fulltext [2] https://github.com/kieshaprem/covid19-agestructureSEIR-wuhan-social-distancing/tree/master/data<img src="../figs/BELinteractPlot.png" alt="interact" height="1000" width="1000" style="float: left; margin-right: 500px;" /> ###Code # Load interaction matrices Nc_home = np.loadtxt("../../data/raw/Interaction_matrices/Belgium/BELhome.txt", dtype='f', delimiter='\t') Nc_work = np.loadtxt("../../data/raw/Interaction_matrices/Belgium/BELwork.txt", dtype='f', delimiter='\t') Nc_schools = np.loadtxt("../../data/raw/Interaction_matrices/Belgium/BELschools.txt", dtype='f', delimiter='\t') Nc_transport = np.loadtxt("../../data/raw/Interaction_matrices/Belgium/BELtransport.txt", dtype='f', delimiter='\t') Nc_leisure = np.loadtxt("../../data/raw/Interaction_matrices/Belgium/BELleisure.txt", dtype='f', delimiter='\t') Nc_others = np.loadtxt("../../data/raw/Interaction_matrices/Belgium/BELothers.txt", dtype='f', delimiter='\t') Nc_total = np.loadtxt("../../data/raw/Interaction_matrices/Belgium/BELtotal.txt", dtype='f', delimiter='\t') initN = np.loadtxt("../../data/raw/Interaction_matrices/Belgium/BELagedist_10year.txt", dtype='f', delimiter='\t') h = np.array([[0.0205,0.0205,0.1755,0.1755,0.2115,0.2503,0.3066,0.4033,0.4770]]) icu = np.array([0,0,0.0310,0.0310,0.055,0.077,0.107,0.1685,0.1895]) r = icu/h ageModel = models.SEIRSAgeModel(initN = initN, #16x0 numpy array beta = 0.0622, # probability of infection when encountering infected person sigma = 3.2, # latent period omega = 2.0, # pre-symptomatic infectious period Nc = Nc_total, #must be a numpy array; average number of human-to-human interactions per day a = 0.43, # probability of an asymptotic (supermild) infection m = 1-0.43, # probability of a mild infection h = h, # probability of hospitalisation for a mild infection c = 1-r, # probability of hospitalisation in cohort mi = 0.5*r, # probability of hospitalisation in midcare da = 7, # days of infection when asymptomatic (supermild) dm = 7, # days of infection when mild dc = 8, dmi = 8, dICU = 8, dICUrec = 7, dmirec = 7, dhospital = 4, # days before reaching the hospital when heavy or critical #m0 = np.transpose(np.array([0.000094,0.00022,0.00091,0.0018,0.004,0.013,0.046,0.098,0.18])), # mortality in ICU m0 = np.ones(9)*0.50, maxICU = 2000, totalTests = 0, psi_FP = 0, # probability of a false positive psi_PP = 1, # probability of a correct test dq = 14, # days in quarantaine initE = np.ones(9), #must be a numpy array initI = np.zeros(9), initA = np.zeros(9), initM = np.zeros(9), initC = np.zeros(9), initCmirec = np.zeros(9), initCicurec = np.zeros(9), initMi = np.zeros(9), initICU = np.zeros(9), initR = np.zeros(9), initD = np.zeros(9), initSQ = np.zeros(9), initEQ = np.zeros(9), initIQ = np.zeros(9), initAQ = np.zeros(9), initMQ = np.zeros(9), initRQ = np.zeros(9), monteCarlo = False, n_samples = 1, ) # Create checkpoints dictionary chk = {'t': [36], 'Nc': [Nc_home+0.50*Nc_work+0.50*Nc_transport] } # Run simulation y = ageModel.sim(100,checkpoints=chk) # Visualise ageModel.plotPopulationStatus() ageModel.plotInfected() ###Output _____no_output_____ ###Markdown Calibrating $\beta$ in a *business-as-usual* scenario ($N_c = 11.2$) Performing a least-squares fitThe 'SEIRSAgeModel' class contains a function to fit the model to selected data (*fit*) and one function to visualise the result (*plotFit*). 
Our code uses a particle swarm (PSO) algorithm to perform the optimisation. The *fit* function has the following basic syntax,> fit(data, parNames, positions, bounds, weights)> - data: a list containing the dataseries (dtype=np array) to fit the model to. For now, dataseries must be of equal length.> - parNames: a list containing the names (dtype=string) of the model parameters to be fitted.> - positions: a list containing the model states (dtype=np.array) used to calculate the sum of least squares. Each dataseries must be matched to a certain (sum of) model state(s). If multiple entries are provided these are added together. The order of the states is given according to the following vector, where S has index 0: (S, E, I, A, M, C, Mi, ICU, R, F, SQ, EQ, IQ, AQ, MQ, RQ).The following arguments are optional,> - checkpoints: a checkpoints dictionary can be used to calibrate under specific scenarios such as policy changes (default: None).> - setvar: True to replace fitted values in the model object after the fit is performed (default: False).> - disp: Show sum-of-least-squares after each optimisation iteration (default: True).> - polish: True to use a Nelder–Mead simplex to polish the final result (default: True).> - maxiter: Maximum number of iterations (default: 30).> - popsize: Population size of genetic algorithm (default: 10).The PSO algorithm will by default use all cores available for the optimisation. Using the *fit* attribute, it is possible to calibrate any number of model parameters to any set of data. We do note that fitting the parameters a, m, h and c requires modification of the source code. In the example below, the transmission parameter $\beta$ is estimated using two dataseries. The first is the number of patients in need of intensive care and the second is the total number of people in the hospital. ###Code
# vector with dates
index=pd.date_range('2020-03-15', freq='D', periods=ICUvect.size)
# data series used to calibrate model must be given to function 'plotFit' as a list
idx = -57
index = index[0:idx]
data=[np.transpose(ICUvect[:,0:idx]),np.transpose(hospital[:,0:idx])]
# set optimisation settings
parNames = ['beta'] # must be a list!
positions = [np.array([5,6]),np.array([4,5,6])] # must be a list!
bounds=((10,100),(0.01,0.12)) # must be a list!
weights = np.array([1,0])
# run optimisation
theta = model.fit(data,parNames,positions,bounds,weights,setvar=True,maxiter=30,popsize=120)
###Output No constraints given. Best after iteration 1: [28.44778319 0.06722428] 4653.313521422119 ###Markdown Visualising the fitVisualising the resulting fit is easy and can be done using the plotFit function. The function uses the following basic syntax,plotFit(index,data,positions)> - index: vector with timestamps corresponding to data.> - data: list containing the dataseries (dtype=np array) to fit the model to. For now, dataseries must be of equal length.> - positions: list containing the model states (dtype=np.array) used to calculate the sum of least squares.The following arguments are optional,> - dataMkr: list containing the markers (dtype=str) to be used to visualise the data. Default value works up to five dataseries and is equal to: ['o','v','s','*','^'].> - modelClr: list containing the colors (dtype=str) to be used to visualise the model fit. Default value works up to five dataseries and is equal to: ['green','orange','red','black','blue'].> - legendText: tuple containing the legend entries. Disabled per default.> - titleText: string containing the fit title.
Disable per default.> - filename: string with a filename + extension to save the figure. The figure is not saved per default.> - getfig: True to return ax, fig (default: False) ###Code # plot result ageModel.plotFit(index,data,positions,modelClr=['red','orange'],legendText=('ICU (model)','Hospitalized (model)','ICU (data)','Hospitalized (data)'),titleText='Belgium') ###Output _____no_output_____ ###Markdown Model Predictive control (MPC) Optimising government policy Process control for the laymanAs we have the impression that the control part, which we see as our main addition to the problem, is more difficult to grasp for the layman, here is a short intro to process control. Experts in control are welcome to skip this section.A predictive model consists of a set of equations and aims to predict how the system will behave in the future given a certain input. Process control flips this around and aims at determining what input is needed to achieve a desired system behavior (= goal). It is a tool that helps us in “controlling” how we want a system to behave. It is commonly applied in many industries, but also in our homes (e.g. central heating, washing machine). It's basically everywhere. Here's how it works. An algorithm monitors the deviation between the goal and the true system value and then computes the necessary action to "drive" the system to its goal by means of an actuator (in industry this is typically a pump or a valve). Applying this to Covid-19, the government wants to "control" the spread of the virus in the population by imposing measures (necessary control actions) on the public (which is the actuator here) and achieve the goal that the number of severely sick people does not become larger than can be handled by the health care system. However, the way the population behaves is a lot more complex compared to the heating control in our homes since not only epidemiology (virus spread) but also different aspects of human behavior on both the individual and the societal level (sociology, psychology, economy) are involved. This leads to multiple criteria we want to ideally control simultaneously and we want to use the "smartest" algorithm we can get our hands on. The optimizePolicy functionThe 'SEIRSAgeModel' class contains an implementation of the MPC controller in the function *optimizePolicy*. For now, the controller minimises a weighted squared sum-of-errors between multiple setpoints and model predictions. The algorithm can use any variable to control the virus outbreak, but we recommend sticking with the number of random daily contacts $N_c$ and the total number of random tests ('totalTests') as only these have been tested. We also recommend disabling age-structuring in the model before running the MPC as this feature requires discretisation of the interaction matrix to work which is not yet implemented. Future work will extend the MPC controller to work with age-structuring feature inherent to the model. Future work is also aimed at including an economic cost function to discriminate between control handles. Our MPC uses a PSO algorithm to perform the optimisation, we recommend using at least a swarmsize of 20 and at least 100 iterations to ensure that the trajectory is 'optimal'. 
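To make the receding-horizon idea concrete before listing the function arguments, the toy example below applies the same logic to a simple scalar system instead of the epidemiological model: at every policy interval a sequence of future control moves is optimised over the prediction horizon, only the first move is applied, and the optimisation is repeated one interval later. All names and numbers in this sketch are illustrative and a local optimiser is used in place of the package's PSO. ###Code
import numpy as np
from scipy.optimize import minimize

# Toy plant: x[k+1] = a*x[k] + b*u[k]; think of x as 'load on the system'
a, b = 1.05, -2.0
setpoint = 50.0
N, P = 3, 8            # control horizon and prediction horizon (in intervals)

def predict(x_now, u_seq):
    """Simulate the toy plant over P steps; controls beyond N are held constant."""
    x, traj = x_now, []
    for k in range(P):
        u = u_seq[min(k, N - 1)]
        x = a * x + b * u
        traj.append(x)
    return np.array(traj)

def cost(u_seq, x_now):
    """Sum of squared deviations from the setpoint over the prediction horizon."""
    return np.sum((predict(x_now, u_seq) - setpoint) ** 2)

x = 120.0
for interval in range(10):                      # receding-horizon loop
    res = minimize(cost, x0=np.zeros(N), args=(x,),
                   bounds=[(0.0, 10.0)] * N, method="L-BFGS-B")
    u_now = res.x[0]                            # apply only the first move
    x = a * x + b * u_now                       # plant evolves one interval
    print(f"interval {interval}: u = {u_now:.2f}, x = {x:.1f}")
###Output _____no_output_____ ###Markdown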
The *optimizePolicy* function has the following basic syntax,optimizePolicy(parNames, bounds, setpoints, positions, weights)> - parNames: a list containing the names (dtype=string) of the model parameters to be used as a control handle.> - bounds: A list containing the lower- and upper boundaries of each parameter to be used as a control handle. Each entry in the list should be a 1D numpy array containing the lower- and upper bound for the respective control handle.> - setpoints: A list with the numerical values of the desired model output.> - positions: a list containing the model states (dtype=np.array) used to calculate the sum of least squares. Each modelouput in the given position is matched with a provided setpoint. If multiple position entries are provided, the output in these positions is added togheter. The order of the states is given according to the following vector, where S has index 0: (S, E, I, A, M, C, Mi, ICU, R, F, SQ, EQ, IQ, AQ, MQ, RQ).> - weights: a list containing the weighting fractions of each population pool ouput in the sum-of-squared errors.The following arguments are optional,> - policy_period: length of one policy interval (default: 7 days).> - N: number of future policy intervals to be optimised, also called 'control horizon' (default: 6).> - P: number of policy intervals over which the sum of squared errors is calculated, also called 'prediction horizon' (default:12).> - disp: Show sum-of-least-squares after each optimisation iteration (default: True).> - polish: True to use a Nelder–Mead simplex to polish the final result (default: True).> - maxiter: Maximum number of iterations (default: 100).> - popsize: Population size of genetic algorithm (default: 20).The function returns a one-dimensional list containing the optimal values of the control handles. The length of this list is equal to the length of the control horizon (N) times the number of control handles. The list thus lists all control handles and their optimal values in their respective order. **The optimal policy is assigned to the SEIRSAgeModel object and is only overwritten when a new optimisation is performed. Future work could include the creation of a new object for every optimal policy.** The genetic algorithm will by default use all cores available for the optimisation. ###Code parNames = ['Nc','totalTests'] bounds = [np.array([0,11.2]),np.array([0,1e6])] setpoints = [1200,5000] positions = [np.array([6]),np.array([4,5,6])] weights = [1,0] model.optimizePolicy(parNames,bounds,setpoints,positions,weights,policy_period=14,N=6,P=6,polish=False,maxiter=120,popsize=144) ###Output _____no_output_____ ###Markdown Visualising the effect of government policy Visualising the resulting optimal policy is easy and can be done using the plotOptimalPolicy function. We note that the functionality of*plotOptimalPolicy** is for now, very basic and will be extended in the future. The function is heavily based on the *plotInfected* visualisation. 
The function uses the following basic syntax,plotOptimalPolicy(parNames,setpoints,policy_period)> - parNames: a list containing the names (dtype=string) of the model parameters to be used as a control handle.> - setpoints: A list with the numerical values of the desired model output.> - policy_period: length of one policy interval (default: 7 days).The following arguments are optional,> - asymptotic: set to *True* to include the supermild pool in the visualisation.> - mild: set to *True* to include the mild pool in the visualisation.> - filename: string with a filename + extension to save the figure. The figure is not saved per default. ###Code model.plotOptimalPolicy(parNames,setpoints,policy_period=14) ###Output _____no_output_____ ###Markdown Specific methods *realTimeScenario*The 'SEIRSAgeModel' class contains one function to quickly perform and visualise scenario analysis for a given country. The user is obligated to supply the function with: 1) a set of dataseries, 2) the date at which the data starts, 3) the positions in the model output that correspond with the dataseries and 4) a checkpoints dictionary containing the past governement actions, from hereon referred to as the *pastPolicy* dictionary. If no additional arguments are provided, the data and the corresponding model fit are visualised from the user supplied start date up until the end date of the data plus 14 days. The end date of the visualisation can be altered by defining the optional keyworded argument *T_extra* (default: 14 days). Optionally a dictionary of future policies can be used to simulate scenarios starting on the first day after the end date of the dataseries. The function *realTimeScenario* accomplishes this by merging both the *pastPolicy* and *futurePolicy* dictionaries using the backend function *mergeDict()*. The syntax without optional arguments is as follows,realTimeScenario(startDate, data, positions, pastPolicy)> - startDate: a string with the date corresponding to the first entry of the dataseries (format: 'YYYY-MM-DD'). > - data: a list containing the dataseries (dtype=np array) to fit the model to. For now, dataseries must be of equal length and start on the same day.> - positions: a list containing the model states (dtype=np.array) used to calculate the sum of least squares. Each dataseries must be matched to a certain (sum of) model state(s). If multiple entries are provided these are added togheter. The order of the states is given according to the following vector, where S has index 0: (S, E, I, A, M, C, Mi, ICU, R, F, SQ, EQ, IQ, AQ, MQ, RQ).> - pastPolicy: a checkpoints dictionary containing past government actions.The following (simulation) arguments are optional,> - futurePolicy: a checkpoint dictionary used to simulate scenarios in the future (default: None). By default, time '1' in this dictionary is the date of the first day after the end of the data.> - T_extra: Extra simulation time after last date of the data if no futurePolicy dictionary is provided. Extra simulation time after last time in futurePolicy dictionary.The following arguments are for visualisation,> - dataMkr: list containing the markers (dtype=str) to be used to visualise the data. Default value works up to five dataseries and is equal to: ['o','v','s','*','^'].>- modelClr: list containing the colors (dtype=str) to be used to visualise the model fit. Default value works up to five dateseries and is equal to: ['green','orange','red','black','blue'].> - legendText: tuple containing the legend entries. 
Disabled per default.> - titleText: string containing the fit title. Disable per default.> - filename: string with a filename + extension to save the figure. The figure is not saved per default.> - getfig: True to return ax, fig (default: False) ###Code # Define data as a list containing data timeseries data=[np.transpose(ICUvect),np.transpose(hospital)] # Create a dictionary of past policies pastPolicy = {'t': [7], 'Nc': [0.5*Nc_home] } # Create a dictionary of future policies futurePolicy = {'t': [1], 'Nc': [Nc_home+Nc_work], } # Define the date corresponding to the first data entry startDate='2020-03-15' # Run realTimeScenario ageModel.realTimeScenario(startDate,data,positions,pastPolicy,futurePolicy=futurePolicy,T_extra=62, modelClr=['red','orange'],legendText=('ICU (model)','Hospital (model)','ICU (data)','Hospital (data)'), titleText='Belgium',filename='test.svg') ###Output _____no_output_____ ###Markdown *realTimeMPC*The 'SEIRSAgeModel' class contains one function to quickly optimise the policy for a given country using Model Predictive Control. The user is obligated to supply the function with: 1) a set of dataseries, 2) the date at which the data starts, 3) the positions in the model output that correspond with the dataseries, 4) a checkpoints dictionary containing the past governement actions, from hereon referred to as the *pastPolicy* dictionary and 5) Additional MPC arguments. The source code of *realTimeMPC* consists of seven distinct steps. The syntax without optional arguments is as follows,realTimeMPC(startDate, data, positions, pastPolicy,parNames,bounds,setpoints,weights)> - startDate: a string with the date corresponding to the first entry of the dataseries (format: 'YYYY-MM-DD'). > - data: a list containing the dataseries (dtype=np array) to fit the model to. For now, dataseries must be of equal length and start on the same day.> - positions: a list containing the model states (dtype=np.array) used to calculate the sum of least squares. Each dataseries must be matched to a certain (sum of) model state(s). If multiple entries are provided these are added togheter. The order of the states is given according to the following vector, where S has index 0: (S, E, I, A, M, C, Mi, ICU, R, F, SQ, EQ, IQ, AQ, MQ, RQ).> - pastPolicy: a checkpoints dictionary containing past government actions.> - parNames: a list containing the names (dtype=string) of the model parameters to be used as a control handle.> - bounds: A list containing the lower- and upper boundaries of each parameter to be used as a control handle. 
Each entry in the list should be a 1D numpy array containing the lower- and upper bound for the respective control handle.> - setpoints: A list with the numerical values of the desired model output.> - weights: a list containing the weighting fractions of each population pool ouput in the sum-of-squared errors.The following arguments are optional,> - policy_period: length of one policy interval (default: 7 days).> - N: number of future policy intervals to be optimised, also called 'control horizon' (default: 6).> - P: number of policy intervals over which the sum of squared errors is calculated, also called 'prediction horizon' (default:12).> - disp: Show sum-of-least-squares after each optimisation iteration (default: True).> - polish: True to use a Nelder–Mead simplex to polish the final result (default: True).> - maxiter: Maximum number of iterations (default: 100).> - popsize: Population size of genetic algorithm (default: 20).The following arguments are for visualisation,> - dataMkr: list containing the markers (dtype=str) to be used to visualise the data. Default value works up to five dataseries and is equal to: ['o','v','s','*','^'].>- modelClr: list containing the colors (dtype=str) to be used to visualise the model fit. Default value works up to five dateseries and is equal to: ['green','orange','red','black','blue'].> - legendText: tuple containing the legend entries. Disabled per default.> - titleText: string containing the fit title. Disable per default.> - filename: string with a filename + extension to save the figure. The figure is not saved per default.> - getfig: True to return ax, fig (default: False)Note that the control handles are not yet incorporated in the visualisation, this will be done in the near future. ###Code parNames = ['Nc'] bounds = [np.array([1.5,11.2])] setpoints = [400,4000] positions = [np.array([6]),np.array([4,5,6])] weights = [1,0] model.realTimeMPC(startDate,data,positions,pastPolicy,parNames,bounds,setpoints,weights, policy_period=7,N=6,P=8,disp=True,polish=False,maxiter=60,popsize=24, dataMkr=['o','v','s','*','^'],modelClr=['orange','red'], legendText=('ICU (model)','Hospital (model)','ICU (data)','Hospital (data)'), titleText='Belgium',filename=None) ###Output _____no_output_____ ###Markdown Table of Contents 1&nbsp;&nbsp;Covid-19: From model prediction to model predictive control1.1&nbsp;&nbsp;A demo of the deterministic modeling framework1.2&nbsp;&nbsp;Introduction1.2.1&nbsp;&nbsp;Model dynamics1.2.2&nbsp;&nbsp;Deterministic vs. Stochastic framework1.2.3&nbsp;&nbsp;Model parameters1.2.4&nbsp;&nbsp;Social interaction data1.3&nbsp;&nbsp;Performing simulations1.3.1&nbsp;&nbsp;Without age-structuring1.3.2&nbsp;&nbsp;Meta-population simulations1.4&nbsp;&nbsp;Calibrating &amp;x03B2;" role="presentation">𝛽β\beta in a business-as-usual scenario (Nc=11.2" role="presentation">𝑁𝑐=11.2Nc=11.2N_c = 11.2)1.4.1&nbsp;&nbsp;Performing a least-squares fit1.4.2&nbsp;&nbsp;Visualising the fit1.5&nbsp;&nbsp;Model Predictive control (MPC)1.5.1&nbsp;&nbsp;Optimising government policy1.5.2&nbsp;&nbsp;Visualising the effect of government policy1.6&nbsp;&nbsp;Specific methods1.6.1&nbsp;&nbsp;realTimeScenario1.6.2&nbsp;&nbsp;realTimeMPC Covid-19: From model prediction to model predictive control A demo of the deterministic modeling framework*Original code by Ryan S. McGee. Modified by T.W. Alleman in consultation with the BIOMATH research unit headed by prof. Ingmar Nopens.*Copyright (c) 2020 by T.W. Alleman, BIOMATH, Ghent University. 
All Rights Reserved. Our code implements a SEIRS infectious disease dynamics model with extensions to model the effect of quarantining detected cases. Using the concept of 'classes' in Python 3, the code was integrated with our previous work and allows us to quickly perform Monte Carlo simulations, calibrate model parameters and calculate *optimal* government policies using a model predictive controller (MPC). A white paper and source code of our previous work can be found on the Biomath website. https://biomath.ugent.be/covid-19-outbreak-modelling-and-control ###Code
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Image
from ipywidgets import interact,fixed,FloatSlider,IntSlider,ToggleButtons
import pandas as pd
import datetime
import scipy
from scipy.integrate import odeint
import matplotlib.dates as mdates
import matplotlib
import scipy.stats as st
import networkx # to install networkx in your environment: conda install networkx
###Output _____no_output_____ ###Markdown Load the covid 19 custom development code ###Code
from covid19model.models import models
from covid19model.data import sciensano
from covid19model.data import google
# OPTIONAL: Load the "autoreload" extension so that package code can change
%load_ext autoreload
# OPTIONAL: always reload modules so that as you change code in src, it gets loaded
%autoreload 2
###Output _____no_output_____ ###Markdown Introduction Model dynamics GeneralThe SEIR model was first proposed in 1929 by two Scottish scientists. It is a compartmental model that subdivides the human population into four types of people: 1) healthy individuals susceptible to the infectious disease, 2) exposed individuals in a latent phase (partially the incubation period), 3) infectious individuals able to transmit the disease and 4) individuals removed from the population either through immunisation or death. Despite being a simplified and idealised representation of reality, the SEIR model is used extensively to predict the outbreak of infectious diseases and this was no different during the outbreak in China earlier this year. In this work, we extended the SEIR model to incorporate more expert knowledge on SARS-Cov-2 into the model. The infectious pool is split into four parts. The first is a period of pre-symptomatic infectiousness. Several studies have shown that pre-symptomatic transmission is a dominant transmission mechanism of SARS-Cov-2. After the period of pre-symptomatic transmission, three possible infectious outcomes are modelled: 1) an asymptomatic outcome, for patients who show no symptoms at all, 2) a mild outcome, for patients with mild symptoms who recover at home, and 3) a mild infection that worsens to the point where hospitalisation is needed. The pool of *recovered* individuals from the classical SEIR model is split into a recovered pool and a dead pool. People from the susceptible, exposed, pre-symptomatic infectious, asymptomatic infectious, mild infectious and recovered pools can be quarantined after having tested positive for Covid-19. Note that for individuals in the susceptible and recovered pools, this corresponds to a *false positive* test. The dynamics of our extended SEIR model are presented in the flowchart below. We make the following assumptions with regard to the general SEIRS dynamics,1. There is no connection between the severity of the disease and the infectiousness of an individual. Only the duration of infectiousness can differ.2.
All patients experience a brief pre-symptomatic, infectious period.3. All deaths come from intensive care units in hospitals, meaning no patients die outside a hospital. Of the 7703 diseased (01/05/2020), 46\% died in a hospital while 53\% died in an elderly home. All hospital deaths are confirmed Covid-19 cases while only 16\% of elderly home deaths were confirmed. When taking the elderly homes out of the model scope, the assumption that deaths only arise in hospitals is true due to the fact that only 0.3\% died at home and 0.4\% died someplace else. Asymptomatic and mild cases automatically lead to recovery and in no case to death (https://www.info-coronavirus.be/nl/news/trends-laatste-dagen-zetten-zich-door/).4. We implement no testing and quarantining in the hospital. Hospitalised persons are assumed to be incapable of infecting susceptibles, so the implementation of a quarantine would not change the dynamics but slow down calculations.5. Recovered patients are assumed to be immune, seasonality is deemed out of scope of this work. Hospital subystem (preliminary)The hospital subsystem is a simplification of actual hospital dynamics. The dynamics and estimated parameters were obtained by interviewing Ghent University Hospital staff and presenting the resulting meeting notes to the remaining three Ghent hospitals for verification.At the time of writing (30/04/2020) every admitted patient is tested for Covid-19. Roughly 10% of all Covid-19 patients at UZ Ghent originally came to the hospital for some other medical condition. The remaining 90% of all Covid-19 arrives in the emergency room or comes from hospitals in heavily struck regions. The fraction of people the hospital when getting infected with Covid-19 are reported to authorities as ‘new hospitalisations’. There are three hospital wards for Covid-19 patients: 1) Cohort, which should be seen like a regular hospital ward with Covid-19 patients. Patients are not monitored permanently in this ward. 2) Midcare, a ward where more severe cases are monitored more cosely than in Cohort. Midcare is more closely related to ICU than to Cohort and is usually lumped with the number of ICU patients when reporting to the officials. 3) Intensive care, for patients with the most severe symptoms. Intensive care needs can include the use of a ventilator to supply the patient with oxygen. It was noted that the fraction Cohort vs. Midcare and ICU is roughly 50-50%.Generally, patients can switch between any of the wards depending on how the disease progresses. However, some dominant *flows* exist. Usually, it is apparent upon a patients arrival to which ward he or she will be assigned. On average patients who don’t degress stay in Cohort for 6 days, with values spread between 3 and 8 days. The average ICU stay is 14 days when a patient doesn’t need ventilation. If the patient needs ventilation the stay is slightly longer. After being in ICU, patients return to Cohort for an another 6 to 7 days of recovery. Based on these dominant *flows*, the hospital subsystem was simplified by making the following assumptions,1. Assume people arriving at the hospital are instantly distributed between Cohort, Midcare or ICU.2. Merge ventilator and non-ventilator ICU.3. Assume deaths can only arise in ICU.4. Assume all patients in midcare and ICU pass via Cohort on their way to recovery.5. Assume that the 10% of the patients that come from hospital actually come from the population. Deterministic vs. 
Stochastic framework The extended SEIR model is implemented using two frameworks: a deterministic and a stochastic (network based) framework. **This Jupyter Notebook is a demo of the deterministic model;** a demo of the stochastic network simulator is available in *SEIRSNetworkModel_Demo*. A deterministic implementation of the extended SEIRS model captures important features of infectious disease dynamics, but it assumes uniform mixing of the population (i.e. every individual in the population is equally likely to interact with every other individual). The deterministic approach results in a set of N ordinary differential equations, one for each of the N 'population pools' considered. The main advantage of a deterministic model is that few computational resources are required while an acceptable accuracy is maintained. The deterministic framework allows us to rapidly explore scenarios and perform optimisations which require thousands of function evaluations. However, it is often important to consider the structure of contact networks when studying disease transmission and the effect of interventions such as social distancing and contact tracing. The main drawback of the deterministic approach is the inability to simulate contact tracing, which is one of the most promising measures against the spread of SARS-Cov-2. For this reason, the SEIRS dynamics depicted in the above flowchart can also be simulated on a Barabasi-Albert network. Its advantages include a more detailed analysis of the relationship between social network structure and effective transmission rates, including the effect of network-based interventions such as social distancing, quarantining, and contact tracing. The added value comes at a high price in terms of computational resources: it is not possible to perform optimisations of parameters in the stochastic network model on a personal computer. Instead, high performance computing infrastructure is needed. The second drawback is the need for more data and/or assumptions on social interactions and on how government measures affect these social interactions. Deterministic equations The dynamics of the deterministic system are mathematically formulated as the rate of change of each population pool shown in the above flowchart; the cell below first illustrates this approach numerically on a stripped-down SEIR core, before the full system is written out.
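The sketch uses scipy's *odeint* (already imported above) on a minimal SEIR subset with purely illustrative parameter values; it is not the full extended model, which adds the hospital, quarantine and mortality pools in exactly the same way. ###Code
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, Nc, sigma, gamma, N):
    # Minimal SEIR core: only S, E, I, R; the extended model adds more pools
    S, E, I, R = y
    dS = -beta * Nc * S * I / N
    dE = beta * Nc * S * I / N - E / sigma
    dI = E / sigma - I / gamma
    dR = I / gamma
    return dS, dE, dI, dR

N = 11.43e6                       # population size (Belgium)
y0 = (N - 1, 1, 0, 0)             # one initial exposed individual
t = np.linspace(0, 200, 201)      # simulate 200 days
# beta, Nc, latent period and infectious period below are illustrative values
sol = odeint(seir, y0, t, args=(0.07, 11.2, 5.2, 7.0, N))
print("peak number of infectious individuals:", int(sol[:, 2].max()))
###Output _____no_output_____ ###Markdown Applying the same bookkeeping to every pool of the extended model gives the full set of equations.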
This results in the following system of ordinary differential equations (the parameters are explained in the next section):\begin{eqnarray}\dot{S} &=& - \beta N_c \cdot S \cdot \Big( \frac{I+A}{N} \Big) - \theta_{\text{S}} \psi_{\text{FP}} \cdot S + SQ/d_{\text{q,FP}} + \zeta \cdot R,\\\dot{E} &=& \beta \cdot N_c \cdot S \Big( \frac{I+A}{N} \Big) - (1/\sigma) \cdot E - \theta_{\text{E}} \psi_{\text{PP}} \cdot E,\\\dot{I} &=& (1/\sigma) \cdot E - (1/\omega) \cdot I - \theta_I \psi_{PP} \cdot I,\\\dot{A} &=& (\text{a}/\omega) \cdot I - A/d_{\text{a}} - \theta_{\text{A}} \psi_{\text{PP}} \cdot A,\\ \dot{M} &=& (\text{m} / \omega ) \cdot I - M \cdot ((1-h)/d_m) - M \cdot h/d_{\text{hospital}} - \theta_{\text{M}} \psi_{\text{PP}} \cdot M,\\\dot{C} &=& c \cdot (M+MQ) \cdot (h/d_{\text{hospital}}) - c \cdot(1/d_c)\cdot C,\\\dot{C}_{\text{mi,rec}} &=& Mi/d_{\text{mi}} - C_{\text{mi,rec}} \cdot (1/d_{\text{mi,rec}}),\\\dot{C}_{\text{ICU,rec}} &=& (1-m_0)/d_{\text{ICU}} \cdot \text{ICU} - C_{\text{ICU,rec}} \cdot (1/d_{\text{ICU,rec}}),\\C_{\text{tot}} &=& C + C_{\text{mi,rec}} + C_{\text{ICU,rec}}, \\\dot{\text{Mi}} &=& mi \cdot (M+MQ) \cdot (h/d_{\text{hospital}}) - \text{Mi} / d_{\text{mi}}\\ \dot{\text{ICU}} &=& (1-c-mi) \cdot (M+MQ) \cdot (h/d_{\text{hospital}}) - \text{ICU}/d_{\text{ICU}},\\H &=& C_{\text{tot}} + \text{Mi} + \text{ICU},\\\dot{D} &=& m_0 \cdot \text{ICU}/d_{\text{ICU}},\\\dot{R} &=& A/d_a + M \cdot ((1-h)/d_m) + (1/d_c) \cdot c \cdot M \cdot (h/d_{\text{hospital}}) \\ && + (M_i/d_{mi}) \cdot (1/d_{\tiny{\text{mi,recovery}}})+ ((1-m_0)/ d_{\text{ICU}}) \cdot (1/d_{\text{\tiny{ICU,recovery}}}) \cdot ICU \\&& + AQ/d_a + MQ \cdot ((1-h)/d_m)+ RQ/d_{\text{q,FP}} - \zeta \cdot R, \\\dot{SQ} &=& \theta_{\text{S}} \psi_{\text{FP}} \cdot S - (1/d_{\text{q,FP}}) \cdot SQ, \\\dot{EQ} &=& \theta_{\text{E}} \psi_{\text{PP}} \cdot E - (1/\sigma) \cdot EQ, \\\dot{IQ} &=& \theta_{\text{I}} \psi_{\text{PP}} \cdot I + (1/\sigma) \cdot EQ - (1/\omega) \cdot IQ, \\\dot{AQ} &=& \theta_{\text{A}} \psi_{\text{PP}} \cdot A + (a/\omega) \cdot EQ - AQ/d_a, \\\dot{MQ} &=& \theta_{\text{M}} \psi_{\text{PP}} \cdot M + (m/\omega) \cdot EQ - MQ \cdot ((1-h)/d_m) - MQ \cdot h/d_{\text{hospital}}, \\\dot{RQ} &=& \theta_{\text{R}} \psi_{\text{FP}} - (1/d_{\text{q,FP}}) \cdot RQ,\end{eqnarray} Model parameters In the above equations, S stands for susceptible, E for exposed, A for asymptomatic, M for mild, H for hospitalised, C for cohort, Mi for midcare, ICU for intensive care unit, D for dead, R for recovered. The quarantined states are denoted with a Q suffix, for instance AQ stands for asymptomatic and quarantined. The states S, E, A, M and R can be quarantined. The disease dynamics when quarantined are identical to the non quarantined dynamics. For instance, EQ will evolve into AQ or MQ with the same probability as E evolves into A or M. Individuals from the MQ pool can end up in the hospital. N stands for the total population. The clinical parameters are: a, m: the chance of having an asymptomatic or mild infection. h: the fraction of mildly infected which require hospitalisation. c: fraction of the hospitalised which remain in Cohort, mi: fraction of hospitalised which end up in midcare. Based on reported cases in China and travel data, Li et al. (2020b) estimated that 86 % of coronavirus infections in the country were "undocumented" in the weeks before officials instituted stringent quarantines. 
This figure thus includes the asymptomatic cases and an unknown number of mildly symptomatic cases and is thus an overestimation of the asymptotic fraction. In Iceland, citizens were invited for testing regardless of symptoms. Of all people with positive test results, 43% were asymptomatic (Gudbjartsson et al., 2020). The actual number of asymptomatic infections might be even higher since it seemed that symptomatic persons were more likely to respond to the invitation (Sciensano, 2020). In this work it is assumed that 43 % of all infected cases are asymptomatic. This figure can later be corrected in light of large scale immunity testing in the Belgian population. Hence,$$ a = 0.43 .$$Wu and McGoogan (2020) estimated that the distribution between mild, severe and critical cases is equal to 81%, 15% and 4%. As a rule of thumb, one can assume that one third of all hospitalised patients ends up in an ICU. Based on interviews with Ghent University hospital staff, midcare is merged with ICU in the offical numbers. For now, it is assumed that the distribution between midcare and ICU is 50-50 %. The sum of both pools is one third of the hospitalisations. Since the average time a patient spends in midcare is equal to ICU, this corresponds to seeing midcare and ICU as 'ICU'. $\sigma$: length of the latent period. Assumed four days based on a modeling study by Davies et al. (2020) .$\omega$: length of the pre-symptomatic infectious period, assumed 1.5 days (Davies et al. 2020). The sum of $\omega$ and $\sigma$ is the total incubation period, and is equal to 5.5 days. Several estimates of the incubation period have been published and range from 3.6 to 6.4 days, with the majority of estimates around 5 days (Park et. al 2020).$d_{a}$ , $d_{m}$ , $d_{h}$ : the duration of infection in case of a asymptomatic or mild infection. Assumed to be 6.5 days. Toghether with the length of the pre-symptomatic infectious period, this accounts to a total of 8 days of infectiousness. $d_{c}$ , $d_{\text{mi}}$ , $d_{\text{ICU}}$: average length of a Cohort, Midcare and ICU stay. Equal to one week, two weeks and two weeks respectively.$d_{\text{mi,recovery}}$ , $d_{\text{ICU,recovery}}$: lengths of recovery stays in Cohort after being in Midcare or IC. Equal to one week.Zhou et al. (2020) performed a retrospective study on 191 Chinese hospital patients and determined that the time from illness onset to discharge or death was 22.0 days (18.0-25.0, IQR) and 18.5 days (15.0-22.0, IQR) for survivors and victims respectively. Using available preliminary data, the World Health Organisation estimated the median time from onset to clinical recovery for mild cases to be approximately 2 weeks and to be 3-6 weeks for patients with severe or critical disease (WHO, 2020). Based on this report, we assume a recovery time of three weeks for heavy infections.$d_{hospital}$ : the time before heavily or critically infected patients reach the hospital. Assumed 5-9 days (Linton et al. 2020). Still waiting on hospital input here.$m_0$ : the mortality in ICU, which is roughly 50\% (Wu and McGoogan, 2020). $\zeta$: can be used to model the effect of re-susceptibility and seasonality of a disease. Throughout this demo, we assume $\zeta = 0$ because data on seasonality is not yet available at the moment. We thus assume permanent immunity after recovering from the infection. Social interaction data Social Contact Rates (SOCRATES) Data Tool https://lwillem.shinyapps.io/socrates_rshiny/1. 
What is the average number of daily human-to-human contacts of the Belgian population? Include all ages, all genders and both physical and non-physical interactions of any duration. To include all ages, type: *0,60+* in the *Age Breaks* dialog box.2. What is the average number of physical human-to-human contacts of the Belgian population? Include all ages, all genders and all durations of physical contact.3. What is the average number of physical human-to-human contacts of at least 1 hour of the Belgian population?4. Based on the above results, how would you estimate $N_c$ in the deterministic model?5. Based on the above results, how would you estimate $p$ in the stochastic model? Recall that $p$ is the fraction of *random contacts* a person has on a daily basis, while $(1-p)$ is the fraction of *inner circle contacts* a person has on a daily basis. Google COVID-19 Community Mobility Reports https://www.google.com/covid19/mobility/ London School of Hygiene https://www.thelancet.com/journals/lanpub/article/PIIS2468-2667(20)30073-6/fulltext Performing simulations Without age-structuring The 'SEIRSAgeModel' class The basic concept of object oriented programming in Python 3 is illustrated schematically below. An object, created by calling a class, should be seen as a 'box' containing 'tools'. First, the model parameters, the initial condition and the monte-carlo simulation settings are put inside. The toolbox 'SEIRSAgeModel' not only contains the necessary code to simulate the model, but also contains several convenient functions for data visualisation, calibration of model parameters and model predictive control. The advantage of using an object instead of nested functions is the fact that function arguments don't explicitly have to be passed to the helper functions every time these are called by the user. Rather, the parameters are stored in our 'SEIRSAgeModel' toolbox and can be used by the class functions at any time. This drastically enhances the readability of the code.<img src="../../docs/_static/figs/SEIRSAgeModel.jpg" alt="class" height="600" width="700" style="float: left; margin-right: 500px;" /> As of 18/04/2020, the SEIRSAgeModel contains 9 functions which can be grouped in three parts: 1) functions to run and visualise simulations, 2) functions to perform parameter estimations and visualise the results, and 3) functions to optimise future policies using model predictive control (MPC). Creating a SEIRSAgeModel object We start our demo with the initialisation of our 'toolbox', as shown in the cell below. This is done by calling the SEIRSAgeModel class and defining all clinical and testing parameters, the initial condition and whether we want to enable monte-carlo simulations. If the arguments monteCarlo and n_samples are omitted from the object initialisation, monte-carlo sampling of the incubation period is switched off by default. **It is essential that Nc and all initial conditions (denoted 'initX') are numpy arrays. The initial conditions must be 1D arrays with the same size as the number of age categories considered in the metapopulation model. Nc must be a square 2D array with the same size as the number of age categories considered in the metapopulation model. For now, we omit age-structuring as this is demonstrated later on.** We conveniently name our object 'model'.
###Code model = models.SEIRSAgeModel( initN = np.array([11.43e6]), #must be a numpy array; size of the Belgian population beta = 0.07, # probability of infection when encountering infected person sigma = 3.2, # latent period (days) omega = 2.0, # pre-symptomatic infectiousness (days) Nc = np.array([11.2]), #must be a numpy array; average number of human-to-human interactions per day a = 0.43, # probability of an asymptotic (supermild) infection m = 1-0.43, # probability of a mild infection h = 0.20, # probability of hospitalisation for a mild infection c = 3/4, # probability of hospitalisation in cohort da = 7, # days of infection when asymptomatic (supermild) dm = 7, # days of infection when mild dc = 7, dICU = 14, dICUrec = 7, dhospital = 5, # days before reaching the hospital when heavy or critical m0 = 0.49, # mortality in ICU totalTests = 0, psi_FP = 0, # probability of a false positive psi_PP = 1, # probability of a correct test dq = 14, # days in quarantaine initE = np.array([1]), #must be a numpy array initA = np.zeros(1), initM = np.zeros(1), initC = np.zeros(1), initCicurec = np.zeros(1), initICU = np.zeros(1), initR = np.zeros(1), initD = np.zeros(1), initSQ = np.zeros(1), initEQ = np.zeros(1), initIQ = np.zeros(1), initAQ = np.zeros(1), initMQ = np.zeros(1), initRQ = np.zeros(1), monteCarlo = False, n_samples = 1 ) ###Output _____no_output_____ ###Markdown Extract Sciensano data ###Code df_sciensano = sciensano.get_sciensano_COVID19_data() ###Output _____no_output_____ ###Markdown Extract Google Community Mobility Report data ###Code df_google = google.get_google_mobility_data(update=False, plot=True) ###Output _____no_output_____ ###Markdown Changing an object variable after intialisationAfter initialising our 'model' it is still possible to change variables using the following syntax. ###Code model.beta = 0.079 ###Output _____no_output_____ ###Markdown Running your first simulationA simulation is run by using the attribute function *sim*, which uses one argument, the simulation time T, as its input. ###Code y = model.sim(200) ###Output _____no_output_____ ###Markdown For advanced users: the numerical results of the simulation can be accessed directly be calling *object.X* or *object.sumX* where X is the name of the desired population pool. Both are numpy arrays. *Ojbect.X* is a 3D array of the following dimensions:- x-dimension: number of age categories,- y-dimesion: tN: total number of timesteps taken (one per day),- z-dimension: n_samples: total number of monte-carlo simulations performed.Object.sumX is a 2D array containing only the results summed over all age categorie and has the following dimensions,- x-dimesion: tN: total number of timesteps taken (one per day),- y-dimension: n_samples: total number of monte-carlo simulations performed. Visualising the resultsTo quickly visualise simulation results, two attribute functions were created. The first function, *plotPopulationStatus*, visualises the number of susceptible, exposed, infected and recovered individuals in the population. The second function, *plotInfected*, by default visualises the number of heavily and critically infected individuals. Both functions require no user input to work but both have some optional arguments,> plotPopulationStatus(filename),> - filename: string with a filename + extension to save the figure. 
The figure is not saved per default.> plotInfected(asymptotic, mild, filename),> - asymptotic: set to *True* to include the supermild pool in the visualisation.> - mild: set to *True* to include the mild pool in the visualisation.> - filename: string with a filename + extension to save the figure. The figure is not saved per default.> - getfig: True to return ax, fig (default: False) ###Code
model.plotPopulationStatus()
model.plotInfected()
###Output _____no_output_____ ###Markdown The use of checkpoints to change parameters on the flyA cool feature of the original SEIRSplus package by Ryan McGee was the use of a so-called *checkpoints* dictionary to change simulation parameters on the fly. In our modification, this feature is preserved. Below you can find an example of a *checkpoints* dictionary. The simulation is started with the previously initialised parameters. After 60 days, social interaction is limited by changing $N_c$ to 1 contact per day. After 120 days, social restrictions are lifted and $N_c$ once more assumes its *business-as-usual* value of 11.2 contacts per day. *checkpoints* is the only optional argument of the *sim* function and is set to *None* per default. ###Code
# Create checkpoints dictionary
chk = {'t': [60,120],
       'Nc': [np.array([1]),np.array([11.2])]
      }
# Run simulation
y = model.sim(140,checkpoints=chk)
# Visualise
model.plotPopulationStatus()
model.plotInfected()
###Output _____no_output_____ ###Markdown Meta-population simulations Creating an age-structured SEIRSAgeModel objectA first important challenge when using a deterministic model is to link the discrete levels of the control handle Nc (number of contacts) to specific government policies. A model extension that can be used to facilitate this is age-structuring. In this approach, all population pools are split into age bins and the interactions between the age bins are governed by a so-called interaction matrix. This modeling approach was recently used by a team of the London School of Hygiene and details can be found here [1]. Our model is written in such a way that incorporating age-structuring is only a matter of changing the initial conditions and the size of the Nc matrix. We can already run simulations using the Belgian interaction matrix shown below and we will run the controller using this model in the very near future.We now initialise a second, age-structured, 'toolbox', as shown in the cell below. This is done in exactly the same way as before, but instead of size-one numpy arrays we now use a 16x16 interaction matrix and 16x0 size initial conditions, as was demonstrated by the London School of Hygiene. Interaction matrices are publicly available here [2].
We conveniently name our object 'ageModel'.[1] https://www.thelancet.com/journals/lanpub/article/PIIS2468-2667(20)30073-6/fulltext [2] https://github.com/kieshaprem/covid19-agestructureSEIR-wuhan-social-distancing/tree/master/data<img src="../figs/BELinteractPlot.png" alt="interact" height="1000" width="1000" style="float: left; margin-right: 500px;" /> ###Code # Load interaction matrices Nc_home = np.loadtxt("../../data/raw/polymod/interaction_matrices/Belgium/BELhome.txt", dtype='f', delimiter='\t') Nc_work = np.loadtxt("../../data/raw/polymod/interaction_matrices/Belgium/BELwork.txt", dtype='f', delimiter='\t') Nc_schools = np.loadtxt("../../data/raw/polymod/interaction_matrices/Belgium/BELschools.txt", dtype='f', delimiter='\t') Nc_transport = np.loadtxt("../../data/raw/polymod/interaction_matrices/Belgium/BELtransport.txt", dtype='f', delimiter='\t') Nc_leisure = np.loadtxt("../../data/raw/polymod/interaction_matrices/Belgium/BELleisure.txt", dtype='f', delimiter='\t') Nc_others = np.loadtxt("../../data/raw/polymod/interaction_matrices/Belgium/BELothers.txt", dtype='f', delimiter='\t') Nc_total = np.loadtxt("../../data/raw/polymod/interaction_matrices/Belgium/BELtotal.txt", dtype='f', delimiter='\t') initN = np.loadtxt("../../data/raw/polymod/interaction_matrices/Belgium/BELagedist_10year.txt", dtype='f', delimiter='\t') h = np.array([[0.0001,0.0003,0.012,0.032,0.049,0.102,0.166,0.243,0.273]]) icu = np.array([0.05,0.05,0.05,0.05,0.063,0.122,0.274,0.432,0.709]) ageModel = models.SEIRSAgeModel(initN = initN, #16x0 numpy array beta = 0.0622, # probability of infection when encountering infected person sigma = 3.2, # latent period omega = 2.0, # pre-symptomatic infectious period Nc = Nc_total, #must be a numpy array; average number of human-to-human interactions per day a = 0.43, # probability of an asymptotic (supermild) infection m = 1-0.43, # probability of a mild infection h = h, # probability of hospitalisation for a mild infection c = 1-icu, # probability of hospitalisation in cohort da = 7, # days of infection when asymptomatic (supermild) dm = 7, # days of infection when mild dc = 8, dICU = 14, dICUrec = 7, dhospital = 7.5, # days before reaching the hospital when heavy or critical #m0 = np.transpose(np.array([0.000094,0.00022,0.00091,0.0018,0.004,0.013,0.046,0.098,0.18])), # mortality in ICU m0 = np.ones(9)*0.50, totalTests = 0, psi_FP = 0, # probability of a false positive psi_PP = 1, # probability of a correct test dq = 14, # days in quarantaine initE = np.ones(9), #must be a numpy array initI = np.zeros(9), initA = np.zeros(9), initM = np.zeros(9), initC = np.zeros(9), initCicurec = np.zeros(9), initICU = np.zeros(9), initR = np.zeros(9), initD = np.zeros(9), initSQ = np.zeros(9), initEQ = np.zeros(9), initIQ = np.zeros(9), initAQ = np.zeros(9), initMQ = np.zeros(9), initRQ = np.zeros(9), monteCarlo = False, n_samples = 1, ) # Create checkpoints dictionary chk = {'t': [31], 'Nc': [0.3*Nc_home] } # Run simulation y = ageModel.sim(100,checkpoints=chk) # Visualise ageModel.plotPopulationStatus() ageModel.plotInfected() ###Output _____no_output_____ ###Markdown Calibrating $\beta$ in a *business-as-usual* scenario ($N_c = 11.2$) Performing a least-squares fitThe 'SEIRSAgeModel' class contains a function to fit the model to selected data (*fit*) and one function to visualise the result (*plotFit*). Our code uses a particle swarm (PSO) algorithm to perform the optimisation. 
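For readers unfamiliar with particle swarm optimisation, the cell below sketches the textbook PSO update rule on a simple two-dimensional test function; it is a generic, self-contained illustration and not the exact variant or settings used inside the package. ###Code
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimiser (textbook form, for illustration only)."""
    rng = np.random.default_rng(1)
    lb, ub = np.array(bounds).T                        # lower/upper bounds per dimension
    dim = lb.size
    x = rng.uniform(lb, ub, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)                     # keep particles inside the bounds
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Example: minimise a shifted quadratic; the optimum lies at (3, -2)
best_x, best_f = pso(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2,
                     bounds=[(-10, 10), (-10, 10)])
print(best_x, best_f)
###Output _____no_output_____ ###Markdown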
The *fit* function has the following basic syntax,> sim(data, parNames, positions, bounds, weights)> - data: a list containing the dataseries (dtype=np array) to fit the model to. For now, dataseries must be of equal length.> - parNames: a list containing the names (dtype=string) of the model parameters to be fitted.> - positions: a list containing the model states (dtype=np.array) used to calculate the sum of least squares. Each dataseries must be matched to a certain (sum of) model state(s). If multiple entries are provided these are added togheter. The order of the states is given according to the following vector, where S has index 0: (S, E, I, A, M, C, ICU, R, F, SQ, EQ, IQ, AQ, MQ, RQ).The following arguments are optional,> - checkpoints: checkpoint dictionary can be used to calibrate under specific scenarios such as policy changes (default: None).> - setvar: True to replace fitted values in model object after fit is performed (default: False).> - disp: Show sum-of-least-squares after each optimisation iteration (default: True).> - polish: True to use a Nelder–Mead simplex to polish the final result (default: True).> - maxiter: Maximum number of iterations (default: 30).> - popsize: Population size of genetic algorithm (default: 10).The PSO algorithm will by default use all cores available for the optimisation. Using the *fit* attribute, it is possible to calibrate any number of model parameters to any sets of data. We do note that fitting the parameters a,m,h and c requires modification of the source code. In the example below, the transmission parameter $\beta$ is sought after using two dataseries. The first is the number of patients in need of intensive care and the second is the total number of people in the hospital. ###Code # data series used to calibrate model must be given to function 'plotFit' as a list index = index[0:8] data=[np.transpose(ICUvect[:,0:8]),np.transpose(hospital[:,0:8])] # set optimisation settings parNames = ['beta','dICU','dc'] # must be a list! positions = [np.array([6]),np.array([5,6])] # must be a list! bounds=((10,100),(0.01,0.10),(7,21),(1,6)) # must be a list! weights = np.array([0,1]) # run optimisation theta = model.fit(data,parNames,positions,bounds,weights,setvar=True,maxiter=300,popsize=60) ###Output No constraints given. 
Best after iteration 1: [56.67548839 0.05866957 9.45561852 2.59908764] 169094.4980888419 Best after iteration 2: [56.67548839 0.05866957 9.45561852 2.59908764] 169094.4980888419 Best after iteration 3: [56.67548839 0.05866957 9.45561852 2.59908764] 169094.4980888419 New best for swarm at iteration 4: [47.86697468 0.06721755 8.66250792 4.22904161] 28785.94501891809 Best after iteration 4: [47.86697468 0.06721755 8.66250792 4.22904161] 28785.94501891809 Best after iteration 5: [47.86697468 0.06721755 8.66250792 4.22904161] 28785.94501891809 New best for swarm at iteration 6: [46.69654511 0.06803737 11.92200656 5.26471998] 25001.19760116502 Best after iteration 6: [46.69654511 0.06803737 11.92200656 5.26471998] 25001.19760116502 Best after iteration 7: [46.69654511 0.06803737 11.92200656 5.26471998] 25001.19760116502 Best after iteration 8: [46.69654511 0.06803737 11.92200656 5.26471998] 25001.19760116502 New best for swarm at iteration 9: [46.68197473 0.0685254 9.36732062 4.47487733] 23034.506989004014 Best after iteration 9: [46.68197473 0.0685254 9.36732062 4.47487733] 23034.506989004014 Best after iteration 10: [46.68197473 0.0685254 9.36732062 4.47487733] 23034.506989004014 Best after iteration 11: [46.68197473 0.0685254 9.36732062 4.47487733] 23034.506989004014 Best after iteration 12: [46.68197473 0.0685254 9.36732062 4.47487733] 23034.506989004014 Best after iteration 13: [46.68197473 0.0685254 9.36732062 4.47487733] 23034.506989004014 New best for swarm at iteration 14: [44.09237722 0.0716618 8.99348743 4.24429028] 14627.162056195632 Best after iteration 14: [44.09237722 0.0716618 8.99348743 4.24429028] 14627.162056195632 Best after iteration 15: [44.09237722 0.0716618 8.99348743 4.24429028] 14627.162056195632 Best after iteration 16: [44.09237722 0.0716618 8.99348743 4.24429028] 14627.162056195632 New best for swarm at iteration 17: [44.57786297 0.07168097 8.84704676 4.23485015] 14576.115891763737 Best after iteration 17: [44.57786297 0.07168097 8.84704676 4.23485015] 14576.115891763737 New best for swarm at iteration 18: [44.1772973 0.07174223 9.45889685 4.07385448] 14465.534866342392 Best after iteration 18: [44.1772973 0.07174223 9.45889685 4.07385448] 14465.534866342392 Best after iteration 19: [44.1772973 0.07174223 9.45889685 4.07385448] 14465.534866342392 Best after iteration 20: [44.1772973 0.07174223 9.45889685 4.07385448] 14465.534866342392 Best after iteration 21: [44.1772973 0.07174223 9.45889685 4.07385448] 14465.534866342392 New best for swarm at iteration 22: [44.18971476 0.07174687 9.49982242 4.07573339] 14458.21810665026 Best after iteration 22: [44.18971476 0.07174687 9.49982242 4.07573339] 14458.21810665026 Best after iteration 23: [44.18971476 0.07174687 9.49982242 4.07573339] 14458.21810665026 New best for swarm at iteration 24: [44.14721615 0.07176182 9.5002278 4.05620761] 14439.33072534152 Best after iteration 24: [44.14721615 0.07176182 9.5002278 4.05620761] 14439.33072534152 New best for swarm at iteration 25: [44.16084862 0.07175746 9.52197668 4.05407534] 14438.999668266513 Best after iteration 25: [44.16084862 0.07175746 9.52197668 4.05407534] 14438.999668266513 Best after iteration 26: [44.16084862 0.07175746 9.52197668 4.05407534] 14438.999668266513 Best after iteration 27: [44.16084862 0.07175746 9.52197668 4.05407534] 14438.999668266513 Best after iteration 28: [44.16084862 0.07175746 9.52197668 4.05407534] 14438.999668266513 New best for swarm at iteration 29: [44.3257265 0.07176671 9.42303419 4.05632065] 14434.048177055805 Best after iteration 29: 
[44.3257265 0.07176671 9.42303419 4.05632065] 14434.048177055805 New best for swarm at iteration 30: [44.19735997 0.07176623 9.50695734 4.04859424] 14432.586768384705 Best after iteration 30: [44.19735997 0.07176623 9.50695734 4.04859424] 14432.586768384705 New best for swarm at iteration 31: [44.06379208 0.07177209 9.47817555 4.03031137] 14412.769220877817 Best after iteration 31: [44.06379208 0.07177209 9.47817555 4.03031137] 14412.769220877817 New best for swarm at iteration 32: [44.10399972 0.07177313 9.47475175 4.02792853] 14410.702352560162 Best after iteration 32: [44.10399972 0.07177313 9.47475175 4.02792853] 14410.702352560162 New best for swarm at iteration 33: [44.12410354 0.07177364 9.47303984 4.02673711] 14409.700188690533 Best after iteration 33: [44.12410354 0.07177364 9.47303984 4.02673711] 14409.700188690533 New best for swarm at iteration 34: [44.16164739 0.07177458 9.48201955 4.02567185] 14408.480828801286 Best after iteration 34: [44.16164739 0.07177458 9.48201955 4.02567185] 14408.480828801286 New best for swarm at iteration 35: [44.1121859 0.07178042 9.47785841 4.02485065] 14405.855768486743 Best after iteration 35: [44.1121859 0.07178042 9.47785841 4.02485065] 14405.855768486743 New best for swarm at iteration 36: [44.12219816 0.07177875 9.47457917 4.01631541] 14400.679464729727 Best after iteration 36: [44.12219816 0.07177875 9.47457917 4.01631541] 14400.679464729727 New best for swarm at iteration 37: [44.11366363 0.07178173 9.48278938 4.01625492] 14397.750860375625 Best after iteration 37: [44.11366363 0.07178173 9.48278938 4.01625492] 14397.750860375625 New best for swarm at iteration 38: [44.08922482 0.07178421 9.49166727 4.01329863] 14395.092430778828 Best after iteration 38: [44.08922482 0.07178421 9.49166727 4.01329863] 14395.092430778828 New best for swarm at iteration 39: [44.07700541 0.07178545 9.49610621 4.01182048] 14393.925104373055 Best after iteration 39: [44.07700541 0.07178545 9.49610621 4.01182048] 14393.925104373055 New best for swarm at iteration 40: [44.1162806 0.07178472 9.48197479 4.01089015] 14392.41754089657 Best after iteration 40: [44.1162806 0.07178472 9.48197479 4.01089015] 14392.41754089657 New best for swarm at iteration 41: [44.10811154 0.07178606 9.48212925 4.00951493] 14390.773583073891 Best after iteration 41: [44.10811154 0.07178606 9.48212925 4.00951493] 14390.773583073891 New best for swarm at iteration 42: [44.10018855 0.07178684 9.49245298 4.00650786] 14388.86268678811 Best after iteration 42: [44.10018855 0.07178684 9.49245298 4.00650786] 14388.86268678811 New best for swarm at iteration 43: [44.10443191 0.07178933 9.48340835 4.00578105] 14386.838771222541 Best after iteration 43: [44.10443191 0.07178933 9.48340835 4.00578105] 14386.838771222541 New best for swarm at iteration 44: [44.09712967 0.0717889 9.48436292 4.00349426] 14385.229145182837 Best after iteration 44: [44.09712967 0.0717889 9.48436292 4.00349426] 14385.229145182837 New best for swarm at iteration 45: [44.09347854 0.07178868 9.4848402 4.00235087] 14384.766556334522 Best after iteration 45: [44.09347854 0.07178868 9.4848402 4.00235087] 14384.766556334522 New best for swarm at iteration 46: [44.10608126 0.07178961 9.48429214 4.00267142] 14384.270877779652 Best after iteration 46: [44.10608126 0.07178961 9.48429214 4.00267142] 14384.270877779652 New best for swarm at iteration 47: [44.09943011 0.07178933 9.48391689 4.00152752] 14383.758835264161 Best after iteration 47: [44.09943011 0.07178933 9.48391689 4.00152752] 14383.758835264161 New best for swarm at 
iteration 48: [44.10105114 0.07179011 9.48512933 4.0014663 ] 14383.222319312335 Best after iteration 48: [44.10105114 0.07179011 9.48512933 4.0014663 ] 14383.222319312335 New best for swarm at iteration 49: [44.10222202 0.07178995 9.48410157 4.00081575] 14382.893945685812 Best after iteration 49: [44.10222202 0.07178995 9.48410157 4.00081575] 14382.893945685812 New best for swarm at iteration 50: [44.09979001 0.07179205 9.48052632 3.98731793] 14379.219807773066 Best after iteration 50: [44.09979001 0.07179205 9.48052632 3.98731793] 14379.219807773066 New best for swarm at iteration 51: [44.09553781 0.07179272 9.48022812 3.98973639] 14376.862982281451 Best after iteration 51: [44.09553781 0.07179272 9.48022812 3.98973639] 14376.862982281451 New best for swarm at iteration 52: [44.02511352 0.07180304 9.48155051 3.98622442] 14367.597452363638 Best after iteration 52: [44.02511352 0.07180304 9.48155051 3.98622442] 14367.597452363638 New best for swarm at iteration 53: [44.11570998 0.07179952 9.48132401 3.97338251] 14367.201282658438 Best after iteration 53: [44.11570998 0.07179952 9.48132401 3.97338251] 14367.201282658438 New best for swarm at iteration 54: [44.03407249 0.07180125 9.47868694 3.9778261 ] 14362.235709738221 Best after iteration 54: [44.03407249 0.07180125 9.47868694 3.9778261 ] 14362.235709738221 New best for swarm at iteration 55: [44.09532685 0.07180309 9.47946374 3.9741475 ] 14359.066943276704 Best after iteration 55: [44.09532685 0.07180309 9.47946374 3.9741475 ] 14359.066943276704 New best for swarm at iteration 56: [44.09049387 0.0718041 9.4791033 3.97182462] 14357.343060592319 Best after iteration 56: [44.09049387 0.0718041 9.4791033 3.97182462] 14357.343060592319 New best for swarm at iteration 57: [44.08291158 0.07180501 9.47924578 3.97153749] 14355.840181729638 Best after iteration 57: [44.08291158 0.07180501 9.47924578 3.97153749] 14355.840181729638 New best for swarm at iteration 58: [44.07161193 0.07180582 9.47879656 3.96984975] 14354.44090062118 Best after iteration 58: [44.07161193 0.07180582 9.47879656 3.96984975] 14354.44090062118 New best for swarm at iteration 59: [44.07515124 0.07181495 9.477063 3.96491828] 14345.403256005951 Best after iteration 59: [44.07515124 0.07181495 9.477063 3.96491828] 14345.403256005951 New best for swarm at iteration 60: [44.07684329 0.07181359 9.47692873 3.96285392] 14343.567336896656 Best after iteration 60: [44.07684329 0.07181359 9.47692873 3.96285392] 14343.567336896656 New best for swarm at iteration 61: [44.07768932 0.07181291 9.47686159 3.96182174] 14343.421994990887 Best after iteration 61: [44.07768932 0.07181291 9.47686159 3.96182174] 14343.421994990887 New best for swarm at iteration 62: [44.07183728 0.07181502 9.47714873 3.96189306] 14342.301278368046 Best after iteration 62: [44.07183728 0.07181502 9.47714873 3.96189306] 14342.301278368046 New best for swarm at iteration 63: [44.07980151 0.07181518 9.47672204 3.96021346] 14340.879962313094 Best after iteration 63: [44.07980151 0.07181518 9.47672204 3.96021346] 14340.879962313094 New best for swarm at iteration 64: [44.08039586 0.07181593 9.47661277 3.95938665] 14339.921913578508 Best after iteration 64: [44.08039586 0.07181593 9.47661277 3.95938665] 14339.921913578508 New best for swarm at iteration 65: [44.08069303 0.07181631 9.47655814 3.95897325] 14339.45038694726 Best after iteration 65: [44.08069303 0.07181631 9.47655814 3.95897325] 14339.45038694726 New best for swarm at iteration 66: [44.07906434 0.07181683 9.47653092 3.95824246] 14338.67959719987 Best after 
iteration 66: [44.07906434 0.07181683 9.47653092 3.95824246] 14338.67959719987 New best for swarm at iteration 67: [44.07991664 0.07181747 9.47650459 3.95758698] 14337.938621737938 Best after iteration 67: [44.07991664 0.07181747 9.47650459 3.95758698] 14337.938621737938 New best for swarm at iteration 68: [44.08034279 0.0718178 9.47649143 3.95725924] 14337.575481343963 Best after iteration 68: [44.08034279 0.0718178 9.47649143 3.95725924] 14337.575481343963 New best for swarm at iteration 69: [44.08081028 0.07181773 9.47647041 3.95698204] 14337.356649078027 Best after iteration 69: [44.08081028 0.07181773 9.47647041 3.95698204] 14337.356649078027 New best for swarm at iteration 70: [44.08092497 0.07181776 9.47643448 3.9565262 ] 14336.974280090457 Best after iteration 70: [44.08092497 0.07181776 9.47643448 3.9565262 ] 14336.974280090457 New best for swarm at iteration 71: [44.0822875 0.07181797 9.47686958 3.95549214] 14336.135064866816 Best after iteration 71: [44.0822875 0.07181797 9.47686958 3.95549214] 14336.135064866816 New best for swarm at iteration 72: [44.08067594 0.07181694 9.47676698 3.9512503 ] 14335.54055908996 Best after iteration 72: [44.08067594 0.07181694 9.47676698 3.9512503 ] 14335.54055908996 New best for swarm at iteration 73: [44.08137129 0.07181808 9.47669817 3.95388965] 14335.03408366804 Best after iteration 73: [44.08137129 0.07181808 9.47669817 3.95388965] 14335.03408366804 New best for swarm at iteration 74: [44.0824087 0.07182024 9.47694319 3.94977023] 14331.115715809034 Best after iteration 74: [44.0824087 0.07182024 9.47694319 3.94977023] 14331.115715809034 New best for swarm at iteration 75: [44.08104703 0.07182272 9.47641314 3.94794877] 14328.374329194501 Best after iteration 75: [44.08104703 0.07182272 9.47641314 3.94794877] 14328.374329194501 New best for swarm at iteration 76: [44.0803662 0.07182395 9.47614811 3.94703804] 14327.217147987663 Best after iteration 76: [44.0803662 0.07182395 9.47614811 3.94703804] 14327.217147987663 New best for swarm at iteration 77: [44.08002578 0.07182457 9.4760156 3.94658268] 14326.69190850586 Best after iteration 77: [44.08002578 0.07182457 9.4760156 3.94658268] 14326.69190850586 New best for swarm at iteration 78: [44.08066914 0.07182378 9.47616005 3.94567894] 14326.20062322274 Best after iteration 78: [44.08066914 0.07182378 9.47616005 3.94567894] 14326.20062322274 New best for swarm at iteration 79: [44.08069687 0.07182415 9.47603096 3.94465522] 14325.29344189173 Best after iteration 79: [44.08069687 0.07182415 9.47603096 3.94465522] 14325.29344189173 New best for swarm at iteration 80: [44.08058149 0.07182449 9.47591836 3.94405602] 14324.689428512796 Best after iteration 80: [44.08058149 0.07182449 9.47591836 3.94405602] 14324.689428512796 New best for swarm at iteration 81: [44.0804652 0.07182624 9.4759181 3.94403174] 14324.094479484444 Best after iteration 81: [44.0804652 0.07182624 9.4759181 3.94403174] 14324.094479484444 New best for swarm at iteration 82: [44.08555944 0.07182545 9.47661114 3.9430347 ] 14323.508504385689 Best after iteration 82: [44.08555944 0.07182545 9.47661114 3.9430347 ] 14323.508504385689 New best for swarm at iteration 83: [44.08466888 0.07182595 9.47655919 3.94239644] 14322.806672258792 Best after iteration 83: [44.08466888 0.07182595 9.47655919 3.94239644] 14322.806672258792 New best for swarm at iteration 84: [44.0842236 0.0718262 9.47653321 3.94207731] 14322.457239511712 Best after iteration 84: [44.0842236 0.0718262 9.47653321 3.94207731] 14322.457239511712 New best for swarm at 
iteration 85: [44.08400096 0.07182632 9.47652022 3.94191775] 14322.282893524884 Best after iteration 85: [44.08400096 0.07182632 9.47652022 3.94191775] 14322.282893524884 New best for swarm at iteration 86: [44.0845847 0.0718264 9.47645503 3.94166216] 14322.052504885796 Best after iteration 86: [44.0845847 0.0718264 9.47645503 3.94166216] 14322.052504885796 New best for swarm at iteration 87: [44.08508507 0.07182653 9.47663905 3.9415041 ] 14321.890747830845 Best after iteration 87: [44.08508507 0.07182653 9.47663905 3.9415041 ] 14321.890747830845 New best for swarm at iteration 88: [44.0848329 0.07182666 9.47665121 3.9413222 ] 14321.697441999406 Best after iteration 88: [44.0848329 0.07182666 9.47665121 3.9413222 ] 14321.697441999406 New best for swarm at iteration 89: [44.08470681 0.07182672 9.47665729 3.94123126] 14321.600857608013 Best after iteration 89: [44.08470681 0.07182672 9.47665729 3.94123126] 14321.600857608013 New best for swarm at iteration 90: [44.08493494 0.07182676 9.47668182 3.94112821] 14321.509246273574 Best after iteration 90: [44.08493494 0.07182676 9.47668182 3.94112821] 14321.509246273574 New best for swarm at iteration 91: [44.08473449 0.07182677 9.47666299 3.94098364] 14321.397351500615 Best after iteration 91: [44.08473449 0.07182677 9.47666299 3.94098364] 14321.397351500615 New best for swarm at iteration 92: [44.08467915 0.07182679 9.47667057 3.94086669] 14321.3024678489 Best after iteration 92: [44.08467915 0.07182679 9.47667057 3.94086669] 14321.3024678489 New best for swarm at iteration 93: [44.119884 0.07184515 9.45684564 3.92698297] 14314.051154681016 Best after iteration 93: [44.119884 0.07184515 9.45684564 3.92698297] 14314.051154681016 New best for swarm at iteration 94: [44.09800856 0.07184353 9.46011035 3.92524806] 14307.71445795339 Best after iteration 94: [44.09800856 0.07184353 9.46011035 3.92524806] 14307.71445795339 New best for swarm at iteration 95: [44.08707084 0.07184272 9.4617427 3.9243806 ] 14305.265789011522 Best after iteration 95: [44.08707084 0.07184272 9.4617427 3.9243806 ] 14305.265789011522 New best for swarm at iteration 96: [44.10615054 0.07184233 9.45604197 3.92264209] 14301.473507923474 Best after iteration 96: [44.10615054 0.07184233 9.45604197 3.92264209] 14301.473507923474 New best for swarm at iteration 97: [44.10463615 0.07184403 9.45437186 3.92013566] 14298.870662543743 Best after iteration 97: [44.10463615 0.07184403 9.45437186 3.92013566] 14298.870662543743 New best for swarm at iteration 98: [44.10387896 0.07184488 9.4535368 3.91888245] 14297.5700904017 Best after iteration 98: [44.10387896 0.07184488 9.4535368 3.91888245] 14297.5700904017 New best for swarm at iteration 99: [44.14114035 0.07184972 9.45115194 3.91026605] 14288.334127547592 Best after iteration 99: [44.14114035 0.07184972 9.45115194 3.91026605] 14288.334127547592 New best for swarm at iteration 100: [44.13436785 0.07185522 9.44712319 3.90522998] 14284.919632880445 Best after iteration 100: [44.13436785 0.07185522 9.44712319 3.90522998] 14284.919632880445 New best for swarm at iteration 101: [44.13098161 0.07185797 9.44510881 3.90271194] 14283.79764401166 Best after iteration 101: [44.13098161 0.07185797 9.44510881 3.90271194] 14283.79764401166 New best for swarm at iteration 102: [44.12384657 0.07185264 9.44607406 3.90249018] 14279.52781426927 Best after iteration 102: [44.12384657 0.07185264 9.44607406 3.90249018] 14279.52781426927 New best for swarm at iteration 103: [44.13722747 0.07185543 9.44322905 3.89830371] 14275.029852909123 Best after iteration 
103: [44.13722747 0.07185543 9.44322905 3.89830371] 14275.029852909123 New best for swarm at iteration 104: [44.13641797 0.0718568 9.44191968 3.89594448] 14272.528539462031 Best after iteration 104: [44.13641797 0.0718568 9.44191968 3.89594448] 14272.528539462031 New best for swarm at iteration 105: [44.13143599 0.07185743 9.44502088 3.8940005 ] 14270.859079068905 Best after iteration 105: [44.13143599 0.07185743 9.44502088 3.8940005 ] 14270.859079068905 New best for swarm at iteration 106: [44.13160455 0.07185771 9.44264693 3.89201318] 14269.0087017022 Best after iteration 106: [44.13160455 0.07185771 9.44264693 3.89201318] 14269.0087017022 New best for swarm at iteration 107: [44.13168883 0.07185785 9.44145996 3.89101952] 14268.207700063946 Best after iteration 107: [44.13168883 0.07185785 9.44145996 3.89101952] 14268.207700063946 New best for swarm at iteration 108: [44.13263371 0.07185858 9.44097548 3.8902192 ] 14267.212868503717 Best after iteration 108: [44.13263371 0.07185858 9.44097548 3.8902192 ] 14267.212868503717 New best for swarm at iteration 109: [44.13364683 0.07185885 9.4437001 3.88874684] 14266.273220210918 Best after iteration 109: [44.13364683 0.07185885 9.4437001 3.88874684] 14266.273220210918 New best for swarm at iteration 110: [44.13486939 0.07185956 9.44257843 3.88742948] 14264.951689328896 Best after iteration 110: [44.13486939 0.07185956 9.44257843 3.88742948] 14264.951689328896 New best for swarm at iteration 111: [44.13548066 0.07185992 9.44201759 3.8867708 ] 14264.293357056362 Best after iteration 111: [44.13548066 0.07185992 9.44201759 3.8867708 ] 14264.293357056362 New best for swarm at iteration 112: [44.1357863 0.0718601 9.44173717 3.88644146] 14263.964801877273 Best after iteration 112: [44.1357863 0.0718601 9.44173717 3.88644146] 14263.964801877273 New best for swarm at iteration 113: [44.13593912 0.07186018 9.44159697 3.88627679] 14263.800677360265 Best after iteration 113: [44.13593912 0.07186018 9.44159697 3.88627679] 14263.800677360265 New best for swarm at iteration 114: [44.56194254 0.07199359 9.38147861 3.66740282] 14047.827255315911 Best after iteration 114: [44.56194254 0.07199359 9.38147861 3.66740282] 14047.827255315911 Best after iteration 115: [44.56194254 0.07199359 9.38147861 3.66740282] 14047.827255315911 Best after iteration 116: [44.56194254 0.07199359 9.38147861 3.66740282] 14047.827255315911 New best for swarm at iteration 117: [44.32238427 0.07200709 9.40096774 3.65660402] 14022.820578578789 Best after iteration 117: [44.32238427 0.07200709 9.40096774 3.65660402] 14022.820578578789 New best for swarm at iteration 118: [44.56147438 0.07202939 9.40660033 3.62057162] 13986.608652790896 Best after iteration 118: [44.56147438 0.07202939 9.40660033 3.62057162] 13986.608652790896 New best for swarm at iteration 119: [44.56353622 0.07203931 9.40062205 3.60132069] 13971.510644858952 Best after iteration 119: [44.56353622 0.07203931 9.40062205 3.60132069] 13971.510644858952 New best for swarm at iteration 120: [44.52360763 0.07204924 9.39687541 3.59807095] 13959.250074189225 Best after iteration 120: [44.52360763 0.07204924 9.39687541 3.59807095] 13959.250074189225 New best for swarm at iteration 121: [44.57292659 0.07205784 9.39949167 3.58755077] 13948.772865524194 Best after iteration 121: [44.57292659 0.07205784 9.39949167 3.58755077] 13948.772865524194 New best for swarm at iteration 122: [44.55847412 0.07206253 9.39787905 3.57838381] 13938.419356910466 Best after iteration 122: [44.55847412 0.07206253 9.39787905 3.57838381] 
13938.419356910466 New best for swarm at iteration 123: [44.56581926 0.0720662 9.40325458 3.56924241] 13929.740142105995 Best after iteration 123: [44.56581926 0.0720662 9.40325458 3.56924241] 13929.740142105995 New best for swarm at iteration 124: [44.5705947 0.07207133 9.40395565 3.56154061] 13921.618270741552 Best after iteration 124: [44.5705947 0.07207133 9.40395565 3.56154061] 13921.618270741552 New best for swarm at iteration 125: [44.57298242 0.0720739 9.40430619 3.55768972] 13917.55774209878 Best after iteration 125: [44.57298242 0.0720739 9.40430619 3.55768972] 13917.55774209878 New best for swarm at iteration 126: [44.62328001 0.07208547 9.4003601 3.55091807] 13912.667359539699 Best after iteration 126: [44.62328001 0.07208547 9.4003601 3.55091807] 13912.667359539699 New best for swarm at iteration 127: [44.57317188 0.07209254 9.40621038 3.54151597] 13904.594410960555 Best after iteration 127: [44.57317188 0.07209254 9.40621038 3.54151597] 13904.594410960555 New best for swarm at iteration 128: [44.57698815 0.07209191 9.40475965 3.53744421] 13895.979023425916 Best after iteration 128: [44.57698815 0.07209191 9.40475965 3.53744421] 13895.979023425916 New best for swarm at iteration 129: [44.57746509 0.07209582 9.40507892 3.52575747] 13883.392147646508 Best after iteration 129: [44.57746509 0.07209582 9.40507892 3.52575747] 13883.392147646508 New best for swarm at iteration 130: [44.57770357 0.07209777 9.40523856 3.51991409] 13879.030361859446 Best after iteration 130: [44.57770357 0.07209777 9.40523856 3.51991409] 13879.030361859446 New best for swarm at iteration 131: [44.58188587 0.07210164 9.40435317 3.51652523] 13873.830453804654 Best after iteration 131: [44.58188587 0.07210164 9.40435317 3.51652523] 13873.830453804654 New best for swarm at iteration 132: [44.55643119 0.07210466 9.41115343 3.51436875] 13871.135179639568 Best after iteration 132: [44.55643119 0.07210466 9.41115343 3.51436875] 13871.135179639568 New best for swarm at iteration 133: [44.55578691 0.07210589 9.41006437 3.51033951] 13867.445020751578 Best after iteration 133: [44.55578691 0.07210589 9.41006437 3.51033951] 13867.445020751578 New best for swarm at iteration 134: [44.57716542 0.07210716 9.41052716 3.50208151] 13864.672892871162 Best after iteration 134: [44.57716542 0.07210716 9.41052716 3.50208151] 13864.672892871162 New best for swarm at iteration 135: [44.57347764 0.07211065 9.41199936 3.49708519] 13859.198143926085 Best after iteration 135: [44.57347764 0.07211065 9.41199936 3.49708519] 13859.198143926085 New best for swarm at iteration 136: [44.57163376 0.0721124 9.41273545 3.49458703] 13856.465254355742 Best after iteration 136: [44.57163376 0.0721124 9.41273545 3.49458703] 13856.465254355742 New best for swarm at iteration 137: [44.5718947 0.07211362 9.41258597 3.49708339] 13854.581936001376 Best after iteration 137: [44.5718947 0.07211362 9.41258597 3.49708339] 13854.581936001376 New best for swarm at iteration 138: [44.57331337 0.07211668 9.41307982 3.49516739] 13851.09995648376 Best after iteration 138: [44.57331337 0.07211668 9.41307982 3.49516739] 13851.09995648376 New best for swarm at iteration 139: [44.5740227 0.07211821 9.41332675 3.49420939] 13849.71537727855 Best after iteration 139: [44.5740227 0.07211821 9.41332675 3.49420939] 13849.71537727855 New best for swarm at iteration 140: [44.57263938 0.07211805 9.41349719 3.49274172] 13848.703800714124 Best after iteration 140: [44.57263938 0.07211805 9.41349719 3.49274172] 13848.703800714124 New best for swarm at iteration 141: 
[44.57284713 0.07211829 9.41328016 3.49189652] 13848.031675313148 Best after iteration 141: [44.57284713 0.07211829 9.41328016 3.49189652] 13848.031675313148 New best for swarm at iteration 142: [44.57397169 0.07212072 9.41341305 3.49217926] 13847.40497291591 Best after iteration 142: [44.57397169 0.07212072 9.41341305 3.49217926] 13847.40497291591 New best for swarm at iteration 143: [44.57334369 0.07212014 9.41346503 3.49136543] 13846.679833425636 Best after iteration 143: [44.57334369 0.07212014 9.41346503 3.49136543] 13846.679833425636 New best for swarm at iteration 144: [44.57415613 0.07211956 9.41344482 3.48998285] 13846.022077910085 Best after iteration 144: [44.57415613 0.07211956 9.41344482 3.48998285] 13846.022077910085 New best for swarm at iteration 145: [44.57420184 0.07211982 9.41342914 3.48940253] 13845.512529216263 Best after iteration 145: [44.57420184 0.07211982 9.41342914 3.48940253] 13845.512529216263 New best for swarm at iteration 146: [44.57422777 0.07212049 9.41348351 3.48956778] 13845.138925316032 Best after iteration 146: [44.57422777 0.07212049 9.41348351 3.48956778] 13845.138925316032 New best for swarm at iteration 147: [44.57389686 0.0721208 9.41344583 3.48943658] 13844.878216683901 Best after iteration 147: [44.57389686 0.0721208 9.41344583 3.48943658] 13844.878216683901 New best for swarm at iteration 148: [44.57392621 0.07212078 9.41345713 3.48914579] 13844.686699695201 Best after iteration 148: [44.57392621 0.07212078 9.41345713 3.48914579] 13844.686699695201 ###Markdown Visualising the fitVisualising the resulting fit is easy and can be done using the plotFit function. The functions uses the following basic syntax,plotFit(index,data,positions)> - index: vector with timestamps corresponding to data.> - data: list containing the dataseries (dtype=np array) to fit the model to. For now, dataseries must be of equal length.> - positions: list containing the model states (dtype=np.array) used to calculate the sum of least squares.The following arguments are optional,> - dataMkr: list containing the markers (dtype=str) to be used to visualise the data. Default value works up to five dataseries and is equal to: ['o','v','s','*','^'].>- modelClr: list containing the colors (dtype=str) to be used to visualise the model fit. Default value works up to five dateseries and is equal to: ['green','orange','red','black','blue'].> - legendText: tuple containing the legend entries. Disabled per default.> - titleText: string containing the fit title. Disable per default.> - filename: string with a filename + extension to save the figure. The figure is not saved per default.> - getfig: True to return ax, fig (default: False) ###Code # plot result model.plotFit(index,data,positions,modelClr=['red','orange'],legendText=('ICU (model)','Hospitalized (model)','ICU (data)','Hospitalized (data)'),titleText='Belgium') ###Output _____no_output_____ ###Markdown Model Predictive control (MPC) Optimising government policy Process control for the laymanAs we have the impression that the control part, which we see as our main addition to the problem, is more difficult to grasp for the layman, here is a short intro to process control. Experts in control are welcome to skip this section.A predictive model consists of a set of equations and aims to predict how the system will behave in the future given a certain input. Process control flips this around and aims at determining what input is needed to achieve a desired system behavior (= goal). 
It is a tool that helps us in “controlling” how we want a system to behave. It is commonly applied in many industries, but also in our homes (e.g. central heating, washing machine). It's basically everywhere. Here's how it works. An algorithm monitors the deviation between the goal and the true system value and then computes the necessary action to "drive" the system to its goal by means of an actuator (in industry this is typically a pump or a valve). Applying this to Covid-19, the government wants to "control" the spread of the virus in the population by imposing measures (necessary control actions) on the public (which is the actuator here) and achieve the goal that the number of severely sick people does not become larger than can be handled by the health care system. However, the way the population behaves is a lot more complex compared to the heating control in our homes since not only epidemiology (virus spread) but also different aspects of human behavior on both the individual and the societal level (sociology, psychology, economy) are involved. This leads to multiple criteria we ideally want to control simultaneously, and we want to use the "smartest" algorithm we can get our hands on. The optimizePolicy function The 'SEIRSAgeModel' class contains an implementation of the MPC controller in the function *optimizePolicy*. For now, the controller minimises a weighted squared sum-of-errors between multiple setpoints and model predictions. The algorithm can use any variable to control the virus outbreak, but we recommend sticking with the number of random daily contacts $N_c$ and the total number of random tests ('totalTests') as only these have been tested. We also recommend disabling age-structuring in the model before running the MPC, as this feature requires discretisation of the interaction matrix to work, which is not yet implemented. Future work will extend the MPC controller to work with the age-structuring feature inherent to the model. Future work is also aimed at including an economic cost function to discriminate between control handles. Our MPC uses a PSO algorithm to perform the optimisation; we recommend using a swarmsize of at least 20 and at least 100 iterations to ensure that the trajectory is 'optimal'. The *optimizePolicy* function has the following basic syntax, optimizePolicy(parNames, bounds, setpoints, positions, weights)> - parNames: a list containing the names (dtype=string) of the model parameters to be used as a control handle.> - bounds: A list containing the lower- and upper boundaries of each parameter to be used as a control handle. Each entry in the list should be a 1D numpy array containing the lower- and upper bound for the respective control handle.> - setpoints: A list with the numerical values of the desired model output.> - positions: a list containing the model states (dtype=np.array) used to calculate the sum of least squares. Each model output in the given position is matched with a provided setpoint. If multiple position entries are provided, the output in these positions is added together.
The order of the states is given according to the following vector, where S has index 0: (S, E, I, A, M, C, ICU, R, F, SQ, EQ, IQ, AQ, MQ, RQ).> - weights: a list containing the weighting fractions of each population pool output in the sum-of-squared errors. The following arguments are optional,> - policy_period: length of one policy interval (default: 7 days).> - N: number of future policy intervals to be optimised, also called 'control horizon' (default: 6).> - P: number of policy intervals over which the sum of squared errors is calculated, also called 'prediction horizon' (default: 12).> - disp: Show sum-of-least-squares after each optimisation iteration (default: True).> - polish: True to use a Nelder–Mead simplex to polish the final result (default: True).> - maxiter: Maximum number of iterations (default: 100).> - popsize: Population size of genetic algorithm (default: 20). The function returns a one-dimensional list containing the optimal values of the control handles. The length of this list is equal to the length of the control horizon (N) times the number of control handles. The list thus lists all control handles and their optimal values in their respective order. **The optimal policy is assigned to the SEIRSAgeModel object and is only overwritten when a new optimisation is performed. Future work could include the creation of a new object for every optimal policy.** The genetic algorithm will by default use all cores available for the optimisation. ###Code parNames = ['Nc','totalTests'] bounds = [np.array([0,11.2]),np.array([0,1e6])] setpoints = [1200,5000] positions = [np.array([6]),np.array([5,6])] weights = [1,0] model.optimizePolicy(parNames,bounds,setpoints,positions,weights,policy_period=14,N=6,P=6,polish=False,maxiter=120,popsize=144) ###Output _____no_output_____ ###Markdown Visualising the effect of government policy Visualising the resulting optimal policy is easy and can be done using the plotOptimalPolicy function. We note that the functionality of *plotOptimalPolicy* is, for now, very basic and will be extended in the future. The function is heavily based on the *plotInfected* visualisation. The function uses the following basic syntax, plotOptimalPolicy(parNames,setpoints,policy_period)> - parNames: a list containing the names (dtype=string) of the model parameters to be used as a control handle.> - setpoints: A list with the numerical values of the desired model output.> - policy_period: length of one policy interval (default: 7 days). The following arguments are optional,> - asymptotic: set to *True* to include the supermild pool in the visualisation.> - mild: set to *True* to include the mild pool in the visualisation.> - filename: string with a filename + extension to save the figure. The figure is not saved per default. ###Code model.plotOptimalPolicy(parNames,setpoints,policy_period=14) ###Output _____no_output_____ ###Markdown Specific methods *realTimeScenario* The 'SEIRSAgeModel' class contains one function to quickly perform and visualise scenario analysis for a given country. The user is obligated to supply the function with: 1) a set of dataseries, 2) the date at which the data starts, 3) the positions in the model output that correspond with the dataseries, and 4) a checkpoints dictionary containing the past government actions, from hereon referred to as the *pastPolicy* dictionary. If no additional arguments are provided, the data and the corresponding model fit are visualised from the user-supplied start date up until the end date of the data plus 14 days.
The end date of the visualisation can be altered by defining the optional keyword argument *T_extra* (default: 14 days). Optionally, a dictionary of future policies can be used to simulate scenarios starting on the first day after the end date of the dataseries. The function *realTimeScenario* accomplishes this by merging both the *pastPolicy* and *futurePolicy* dictionaries using the backend function *mergeDict()*. The syntax without optional arguments is as follows, realTimeScenario(startDate, data, positions, pastPolicy)> - startDate: a string with the date corresponding to the first entry of the dataseries (format: 'YYYY-MM-DD'). > - data: a list containing the dataseries (dtype=np array) to fit the model to. For now, dataseries must be of equal length and start on the same day.> - positions: a list containing the model states (dtype=np.array) used to calculate the sum of least squares. Each dataseries must be matched to a certain (sum of) model state(s). If multiple entries are provided, these are added together. The order of the states is given according to the following vector, where S has index 0: (S, E, I, A, M, C, ICU, R, F, SQ, EQ, IQ, AQ, MQ, RQ).> - pastPolicy: a checkpoints dictionary containing past government actions. The following (simulation) arguments are optional,> - futurePolicy: a checkpoint dictionary used to simulate scenarios in the future (default: None). By default, time '1' in this dictionary is the date of the first day after the end of the data.> - T_extra: Extra simulation time after the last date of the data if no futurePolicy dictionary is provided; otherwise, extra simulation time after the last time in the futurePolicy dictionary. The following arguments are for visualisation,> - dataMkr: list containing the markers (dtype=str) to be used to visualise the data. Default value works up to five dataseries and is equal to: ['o','v','s','*','^'].>- modelClr: list containing the colors (dtype=str) to be used to visualise the model fit. Default value works up to five dataseries and is equal to: ['green','orange','red','black','blue'].> - legendText: tuple containing the legend entries. Disabled per default.> - titleText: string containing the fit title. Disabled per default.> - filename: string with a filename + extension to save the figure. The figure is not saved per default.> - getfig: True to return ax, fig (default: False) ###Code # Define data as a list containing data timeseries data=[np.transpose(ICUvect),np.transpose(hospital)] # Create a dictionary of past policies pastPolicy = {'t': [7], 'Nc': [0.5*Nc_home] } # Create a dictionary of future policies futurePolicy = {'t': [1], 'Nc': [Nc_home+Nc_work], } # Define the date corresponding to the first data entry startDate='2020-03-15' # Run realTimeScenario ageModel.realTimeScenario(startDate,data,positions,pastPolicy,futurePolicy=futurePolicy,T_extra=62, modelClr=['red','orange'],legendText=('ICU (model)','Hospital (model)','ICU (data)','Hospital (data)'), titleText='Belgium',filename='test.svg') ###Output _____no_output_____ ###Markdown *realTimeMPC* The 'SEIRSAgeModel' class contains one function to quickly optimise the policy for a given country using Model Predictive Control. The user is obligated to supply the function with: 1) a set of dataseries, 2) the date at which the data starts, 3) the positions in the model output that correspond with the dataseries, 4) a checkpoints dictionary containing the past government actions, from hereon referred to as the *pastPolicy* dictionary, and 5) additional MPC arguments.
The source code of *realTimeMPC* consists of seven distinct steps. The syntax without optional arguments is as follows, realTimeMPC(startDate, data, positions, pastPolicy,parNames,bounds,setpoints,weights)> - startDate: a string with the date corresponding to the first entry of the dataseries (format: 'YYYY-MM-DD'). > - data: a list containing the dataseries (dtype=np array) to fit the model to. For now, dataseries must be of equal length and start on the same day.> - positions: a list containing the model states (dtype=np.array) used to calculate the sum of least squares. Each dataseries must be matched to a certain (sum of) model state(s). If multiple entries are provided, these are added together. The order of the states is given according to the following vector, where S has index 0: (S, E, I, A, M, C, ICU, R, F, SQ, EQ, IQ, AQ, MQ, RQ).> - pastPolicy: a checkpoints dictionary containing past government actions.> - parNames: a list containing the names (dtype=string) of the model parameters to be used as a control handle.> - bounds: A list containing the lower- and upper boundaries of each parameter to be used as a control handle. Each entry in the list should be a 1D numpy array containing the lower- and upper bound for the respective control handle.> - setpoints: A list with the numerical values of the desired model output.> - weights: a list containing the weighting fractions of each population pool output in the sum-of-squared errors. The following arguments are optional,> - policy_period: length of one policy interval (default: 7 days).> - N: number of future policy intervals to be optimised, also called 'control horizon' (default: 6).> - P: number of policy intervals over which the sum of squared errors is calculated, also called 'prediction horizon' (default: 12).> - disp: Show sum-of-least-squares after each optimisation iteration (default: True).> - polish: True to use a Nelder–Mead simplex to polish the final result (default: True).> - maxiter: Maximum number of iterations (default: 100).> - popsize: Population size of genetic algorithm (default: 20). The following arguments are for visualisation,> - dataMkr: list containing the markers (dtype=str) to be used to visualise the data. Default value works up to five dataseries and is equal to: ['o','v','s','*','^'].>- modelClr: list containing the colors (dtype=str) to be used to visualise the model fit. Default value works up to five dataseries and is equal to: ['green','orange','red','black','blue'].> - legendText: tuple containing the legend entries. Disabled per default.> - titleText: string containing the fit title. Disabled per default.> - filename: string with a filename + extension to save the figure. The figure is not saved per default.> - getfig: True to return ax, fig (default: False) Note that the control handles are not yet incorporated in the visualisation; this will be done in the near future. ###Code parNames = ['Nc'] bounds = [np.array([1.5,11.2])] setpoints = [400,4000] positions = [np.array([6]),np.array([5,6])] weights = [1,0] model.realTimeMPC(startDate,data,positions,pastPolicy,parNames,bounds,setpoints,weights, policy_period=7,N=6,P=8,disp=True,polish=False,maxiter=60,popsize=24, dataMkr=['o','v','s','*','^'],modelClr=['orange','red'], legendText=('ICU (model)','Hospital (model)','ICU (data)','Hospital (data)'), titleText='Belgium',filename=None) ###Output _____no_output_____
StanfordAlgorithmSeries/TSP.ipynb
###Markdown Q1 Traveling Salesman ProblemThe first line indicates the number of cities. Each city is a point in the plane, and each subsequent line indicates the x- and y-coordinates of a single city.The distance between two cities is defined as the Euclidean distance --- that is, two cities at locations (x,y) and (z,w) have distance $\sqrt{(x-z)^2 + (y-w)^2}$ between them.In the box below, type in the minimum cost of a traveling salesman tour for this instance, rounded down to the nearest integer.OPTIONAL: If you want bigger data sets to play with, check out the TSP instances from around the world here. The smallest data set (Western Sahara) has 29 cities, and most of the data sets are much bigger than that. What's the largest of these data sets that you're able to solve --- using dynamic programming or, if you like, a completely different method?HINT: You might experiment with ways to reduce the data set size. For example, trying plotting the points. Can you infer any structure of the optimal solution? Can you use that structure to speed up your algorithm? ###Code with open('tsp.txt','r') as f: lines = f.readlines() NC = int(lines[0]) City = list(map(lambda x: tuple(map(float,x.split())), lines[1:])) def eucliean_distance(x,y): return np.sqrt((x[0]-y[0])**2+(x[1]-y[1])**2) City = City[:4] NC = 4 City for i in range(NC): for j in range(NC): print(i,j,eucliean_distance(City[i],City[j])) #initialize City_code = [0b1 << i for i in range(NC)] A_new = {} A_new_set = set([0b1]) A_new[0b1] = np.zeros(NC) for m in range(2,NC+1): print('Subproblem size: ', m) A_old_set = A_new_set.copy() A_old = A_new.copy() #print(A_old.keys()) #making new subsets containing m elements: A_new_set_list = list(filter(lambda x: x & 0b1, A_old_set)) A_new_set_temp = list(map(lambda x: set(map(lambda y: x | y, City_code)), A_new_set_list)) A_new_set = set.union(*A_new_set_temp) A_new_set = A_new_set - A_old_set print(' total number of subsets: ',len(A_new_set)) # initialize A_new A_new = {} for S in A_new_set: A_new[S] = np.full(NC,np.inf) #A_new_set = list(filter(lambda x: x & 0b1, A_new_set)) #print(' total number of subsets containing 1: ',len(A_new_set)) # update A_new for code_j in City_code: j = City_code.index(code_j) print(j) for S in A_new_set: #print(bin(S),bin(S^code_j)) if code_j & S and S^code_j in A_old.keys(): subp_sols = [] code_k_list = list(filter(lambda x: x & S, City_code)) code_k_list.remove(code_j) for code_k in code_k_list: k = City_code.index(code_k) #print(k, j, bin(S^code_j), A_old[S^code_j][k]) subp_sols.append(A_old[S^code_j][k] + eucliean_distance(City[k], City[j])) A_new[S][j] = min(subp_sols) A_new A_last = list(A_new.values())[0] for j in range(1,NC): A_last[j] += eucliean_distance(City[0],City[j]) print('Solution of TSP problem', min(A_last)) ###Output Solution of TSP problem 8356.306798082822 ###Markdown move to a python file for full case Q2 A heuristic approximated solution: visit nearest neighbor In this assignment we will revisit an old friend, the traveling salesman problem (TSP). This week you will implement a heuristic for the TSP, rather than an exact algorithm, and as a result will be able to handle much larger problem sizes. Here is a data file describing a TSP instance (original source: http://www.math.uwaterloo.ca/tsp/world/bm33708.tsp).in 'tsp33708.txt'The first line indicates the number of cities. 
Each city is a point in the plane, and each subsequent line indicates the x- and y-coordinates of a single city.You should implement the nearest neighbor heuristic:Start the tour at the first city.Repeatedly visit the closest city that the tour hasn't visited yet. In case of a tie, go to the closest city with the lowest index. For example, if both the third and fifth cities have the same distance from the first city (and are closer than any other city), then the tour should begin by going from the first city to the third city.Once every city has been visited exactly once, return to the first city to complete the tour.In the box below, enter the cost of the traveling salesman tour computed by the nearest neighbor heuristic for this instance, rounded down to the nearest integer.[Hint: when constructing the tour, you might find it simpler to work with squared Euclidean distances (i.e., the formula above but without the square root) than Euclidean distances. But don't forget to report the length of the tour in terms of standard Euclidean distance.] ###Code with open('tsp33708.txt','r') as f: lines = f.readlines() NC = int(lines[0]) City = np.array([list(map(float,x.split()[1:])) for x in lines[1:]]) # brute force search AllCity = set(range(NC)) cur = 0 TSP = [0] CityVisited = set([0]) i=1 while CityVisited != AllCity: print(i) notVisted = np.array(list(AllCity - CityVisited)) cur_City = City[cur] NV_Cities = City[notVisted] d2 = np.square(NV_Cities-cur_City).sum(axis=1) next_City = notVisted[np.where(d2 == d2.min())].min() TSP.append(next_City) CityVisited.add(next_City) cur = next_City i+=1 TSP.append(0) total_distance = 0 for i in range(1,NC+1): total_distance += eucliean_distance(City[TSP[i-1]],City[TSP[i]]) total_distance # optimize with sorting the x-coodinate # optimize with Voronoi diagram ###Output _____no_output_____
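The comments at the end of the cell above suggest speeding up the neighbour search by sorting on the x-coordinate. The sketch below is an added illustration of that idea, not part of the original assignment solution, and it assumes `City` is the NumPy coordinate array loaded above: after sorting the cities by x, the scan for the nearest unvisited city moves outward from the current city in x-sorted order and stops as soon as the x-gap alone exceeds the best squared distance found so far, so most candidates are never examined.

```python
import numpy as np

def nn_tour_sorted_x(points):
    """Nearest-neighbour tour using the x-sorted pruning idea from the comments above."""
    n = len(points)
    order = np.argsort(points[:, 0])   # city indices sorted by x-coordinate
    pos = np.empty(n, dtype=int)       # position of each city inside `order`
    pos[order] = np.arange(n)
    visited = np.zeros(n, dtype=bool)
    visited[0] = True
    tour, cur = [0], 0
    for _ in range(n - 1):
        cx, cy = points[cur]
        best_d2, best_j = np.inf, -1
        for step in (1, -1):           # scan right, then left, in x-sorted order
            k = pos[cur] + step
            while 0 <= k < n:
                j = order[k]
                dx = points[j, 0] - cx
                if dx * dx > best_d2:  # every further city is even farther away in x
                    break
                if not visited[j]:
                    d2 = dx * dx + (points[j, 1] - cy) ** 2
                    if d2 < best_d2 or (d2 == best_d2 and j < best_j):
                        best_d2, best_j = d2, j   # ties broken by lowest index
                k += step
        visited[best_j] = True
        tour.append(best_j)
        cur = best_j
    tour.append(0)                     # return to the start city
    return tour

tour = nn_tour_sorted_x(City)
length = sum(np.linalg.norm(City[tour[i]] - City[tour[i - 1]]) for i in range(1, len(tour)))
print(length)
```

Because it uses the same lowest-index tie-break and the same squared distances as the full scan above, this sketch should reproduce the same tour while evaluating far fewer distances on typical inputs.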
python/pluralsight-intro2flask/ch06-UserAndAuthentication.ipynb
###Markdown Chapter 6. User and Authentication 6.1 Introduction 6.1.1 Authentication with `Flask-Login`(1) Configuration(2) Adapting the `User` Class for login 6.1.2 Securing viewsUsing the `@login_required` decorator 6.1.3 Adding a login view, form and templateAdd don't forget logout 6.1.4 Adding signup for new users 6.2 Demo: `Flask-Login` setupInstall the Flask extension package `flask-login`:```bash$ pip install flask-login``` ###Code # SAVE AS __init__.py # -*- coding: utf-8 -*- import os from flask import Flask from flask_sqlalchemy import SQLAlchemy from flask_login import LoginManager basedir = os.path.abspath(os.path.dirname(__file__)) app = Flask(__name__) app.config['SECRET_KEY'] = b'c\x04\x14\x00;\xe44 \xf4\xf3-_9B\x1d\x15u\x02g\x1a\xcc\xd8\x04~' app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(basedir, 'thermos.db') app.config['DEBUG'] = True db = SQLAlchemy(app) # Configure authentication login_manager = LoginManager() login_manager.session_protection = 'strong' login_manager.init_app(app) from . import models from . import views # SAVE AS views.py # -*- coding: utf-8 -*- from flask import render_template, flash, redirect, url_for from flask_login import login_required from . import app, db from thermos.forms import BookmarkForm from thermos.models import User, Bookmark # Fake login def logged_in_user(): return User.query.filter_by(username='reindert').first() @app.route('/') @app.route('/index') def index(): return render_template('index.html', new_bookmarks=Bookmark.newest(5)) @app.route('/add', methods=['GET', 'POST']) @login_required def add(): form = BookmarkForm() if form.validate_on_submit(): url = form.url.data description = form.description.data bm = Bookmark(user=logged_in_user(), url=url, description=description) db.session.add(bm) db.session.commit() flash("Stored bookmark '{}' with description '{}'".format(url, description)) return redirect(url_for('index')) return render_template('add.html', form=form) @app.route('/user/<username>') def user(username): user = User.query.filter_by(username=username).first_or_404() return render_template('user.html', user=user) @app.errorhandler(404) def page_not_found(e): return render_template('404.html'), 404 @app.errorhandler(500) def server_error(e): return render_template('500.html'), 500 ###Output _____no_output_____ ###Markdown 6.3 Demo: Preparing the `User` model ###Code # SAVE AS models.py # -*- coding: utf-8 -*- from datetime import datetime from sqlalchemy import desc from flask_login import UserMixin from . import db class Bookmark(db.Model): id = db.Column(db.Integer, primary_key=True) url = db.Column(db.Text, nullable=False) # Pass the function object instead of the function result # as the default method for getting the default time. date = db.Column(db.DateTime, default=datetime.utcnow) description = db.Column(db.String(300)) user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False) @staticmethod def newest(num): return Bookmark.query.order_by(desc(Bookmark.date)).limit(num) def __repr__(self): return "<Bookmark '{}': '{}'>".format(self.description, self.url) class User(db.Model, UserMixin): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(80), unique=True) email = db.Column(db.String(120), unique=True) bookmarks = db.relationship('Bookmark', backref='user', lazy='dynamic') def __repr__(self): return '<User %r>' % self.username ###Output _____no_output_____ ###Markdown 6.4 Demo: Adding the login pageCreate a new template `login.html`. 
###Code # SAVE AS forms.py # -*- coding: utf-8 -*- from flask_wtf import Form from wtforms.fields import StringField, PasswordField, BooleanField, SubmitField from flask.ext.wtf.html5 import URLField from wtforms.validators import DataRequired, url class BookmarkForm(Form): url = URLField('The URL for your bookmark', validators=[DataRequired(), url()]) description = StringField('Add an optional description:') def validate(self): if not (self.url.data.startswith("http://") or\ self.url.data.startswith("https://")): self.url.data = "http://" + self.url.data if not Form.validate(self): return False if not self.description.data: self.description.data = self.url.data return True class LoginForm(Form): username = StringField('Your Username', validators=[DataRequired()]) password = PasswordField('Password', validators=[DataRequired()]) remember_me = BooleanField('Keep me logged in') submit = SubmitField('Log In') # SAVE As views.py # -*- coding: utf-8 -*- from flask import render_template, flash, redirect, url_for, request from flask_login import login_required, login_user from . import app, db, login_manager from thermos.forms import BookmarkForm, LoginForm from thermos.models import User, Bookmark # Fake login def logged_in_user(): return User.query.filter_by(username='reindert').first() @login_manager.user_loader def load_user(userid): return User.query.get(int(userid)) @app.route('/') @app.route('/index') def index(): return render_template('index.html', new_bookmarks=Bookmark.newest(5)) @app.route('/add', methods=['GET', 'POST']) @login_required def add(): form = BookmarkForm() if form.validate_on_submit(): url = form.url.data description = form.description.data bm = Bookmark(user=logged_in_user(), url=url, description=description) db.session.add(bm) db.session.commit() flash("Stored bookmark '{}' with description '{}'".format(url, description)) return redirect(url_for('index')) return render_template('add.html', form=form) @app.route('/user/<username>') def user(username): user = User.query.filter_by(username=username).first_or_404() return render_template('user.html', user=user) @app.route('/login', methods=['GET', 'POST']) def login(): form = LoginForm() if form.validate_on_submit(): # login and validate the user... 
user = User.query.filter_by(username=form.username.data).first() if user is not None: login_user(user, form.remember_me.data) flash('Logged in successfully as {}'.format(user.username)) return redirect(request.args.get('next') or url_for('index')) flash('Incorrect username or password.') return render_template('login.html', form=form) @app.errorhandler(404) def page_not_found(e): return render_template('404.html'), 404 @app.errorhandler(500) def server_error(e): return render_template('500.html'), 500 ###Output _____no_output_____ ###Markdown 6.5 Demo: Redirect after loginWhen an anonymous user is trying to access some page which requires login, he/she will be directed to the login page and will then be redirected to the `next` page if he/she successfully logs in.```pythonreturn redirect(request.args.get('next') or url_for('index'))``` ###Code # SAVE AS __init__.py # -*- coding: utf-8 -*- import os from flask import Flask from flask_sqlalchemy import SQLAlchemy from flask_login import LoginManager basedir = os.path.abspath(os.path.dirname(__file__)) app = Flask(__name__) app.config['SECRET_KEY'] = b'c\x04\x14\x00;\xe44 \xf4\xf3-_9B\x1d\x15u\x02g\x1a\xcc\xd8\x04~' app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(basedir, 'thermos.db') app.config['DEBUG'] = True db = SQLAlchemy(app) # Configure authentication login_manager = LoginManager() login_manager.session_protection = 'strong' login_manager.login_view = 'login' login_manager.init_app(app) from . import models from . import views ###Output _____no_output_____ ###Markdown 6.6 Review: `Flask-Login` overview 6.6.1 Setting up `flask-login`(1) Install```bash$ pip install flask-login```(2) Import and configure```pythonfrom flask_login import LoginManager login_manager = LoginManager()login_manager.session_protection = 'strong'login_manager.login_view = 'login'login_manager.init_app(app)``` 6.6.2 Setting up the `User` class(1) Declare a `user_loader`:* Tells `flask-login` how to retrieve a user by `id`.* `id` is stored in the HTTP session.```python@login_manager.user_loaderdef load_user(userid): return User.query.get(int(userid))```(2) Inherit from UserMixin:Default implementations for `is_authenticated`, `get_id`, etc```pythonclass User(db.Model, UserMixin):``` 6.6.3 Using `flask-login`(1) Mark views with `@login_required`(2) The `current_user` variable holds the currently logged in user(3) Log a user in with `login_user(user)`Optional argument: `remember(bool)`(4) Log them out with `logout_user()`.(5) In a view:`{% if current_user.is_authenticated() %}` 6.7 Demo: Current_user and logoutNote that `is_authenticated` is an attribute of `current_user` not a method. ###Code # SAVE AS views.py # -*- coding: utf-8 -*- from flask import render_template, flash, redirect, url_for, request from flask_login import login_required, login_user, logout_user, current_user from . 
import app, db, login_manager from thermos.forms import BookmarkForm, LoginForm from thermos.models import User, Bookmark @login_manager.user_loader def load_user(userid): return User.query.get(int(userid)) @app.route('/') @app.route('/index') def index(): return render_template('index.html', new_bookmarks=Bookmark.newest(5)) @app.route('/add', methods=['GET', 'POST']) @login_required def add(): form = BookmarkForm() if form.validate_on_submit(): url = form.url.data description = form.description.data bm = Bookmark(user=current_user, url=url, description=description) db.session.add(bm) db.session.commit() flash("Stored bookmark '{}' with description '{}'".format(url, description)) return redirect(url_for('index')) return render_template('add.html', form=form) @app.route('/user/<username>') def user(username): user = User.query.filter_by(username=username).first_or_404() return render_template('user.html', user=user) @app.route('/login', methods=['GET', 'POST']) def login(): form = LoginForm() if form.validate_on_submit(): # login and validate the user... user = User.query.filter_by(username=form.username.data).first() if user is not None: login_user(user, form.remember_me.data) flash('Logged in successfully as {}'.format(user.username)) return redirect(request.args.get('next') or url_for('index')) flash('Incorrect username or password.') return render_template('login.html', form=form) @app.route('/logout') def logout(): logout_user() return redirect(url_for('index')) @app.errorhandler(404) def page_not_found(e): return render_template('404.html'), 404 @app.errorhandler(500) def server_error(e): return render_template('500.html'), 500 ###Output _____no_output_____ ###Markdown 6.8 Demo: Password hashing ###Code # SAVE AS models.py # -*- coding: utf-8 -*- from datetime import datetime from sqlalchemy import desc from flask_login import UserMixin from werkzeug.security import check_password_hash, generate_password_hash from . import db class Bookmark(db.Model): id = db.Column(db.Integer, primary_key=True) url = db.Column(db.Text, nullable=False) # Pass the function object instead of the function result # as the default method for getting the default time. date = db.Column(db.DateTime, default=datetime.utcnow) description = db.Column(db.String(300)) user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False) @staticmethod def newest(num): return Bookmark.query.order_by(desc(Bookmark.date)).limit(num) def __repr__(self): return "<Bookmark '{}': '{}'>".format(self.description, self.url) class User(db.Model, UserMixin): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(80), unique=True) email = db.Column(db.String(120), unique=True) bookmarks = db.relationship('Bookmark', backref='user', lazy='dynamic') password_hash = db.Column(db.String) @property def password(self): raise AttributeError('password: write-only field') @password.setter def password(self, password): self.password_hash = generate_password_hash(password) def check_password(self, password): return check_password_hash(self.password_hash, password) @staticmethod def get_by_username(username): return User.query.filter_by(username=username).first() def __repr__(self): return '<User %r>' % self.username # SAVE AS views.py # -*- coding: utf-8 -*- from flask import render_template, flash, redirect, url_for, request from flask_login import login_required, login_user, logout_user, current_user from . 
import app, db, login_manager from thermos.forms import BookmarkForm, LoginForm from thermos.models import User, Bookmark @login_manager.user_loader def load_user(userid): return User.query.get(int(userid)) @app.route('/') @app.route('/index') def index(): return render_template('index.html', new_bookmarks=Bookmark.newest(5)) @app.route('/add', methods=['GET', 'POST']) @login_required def add(): form = BookmarkForm() if form.validate_on_submit(): url = form.url.data description = form.description.data bm = Bookmark(user=current_user, url=url, description=description) db.session.add(bm) db.session.commit() flash("Stored bookmark '{}' with description '{}'".format(url, description)) return redirect(url_for('index')) return render_template('add.html', form=form) @app.route('/user/<username>') def user(username): user = User.query.filter_by(username=username).first_or_404() return render_template('user.html', user=user) @app.route('/login', methods=['GET', 'POST']) def login(): form = LoginForm() if form.validate_on_submit(): # login and validate the user... user = User.get_by_username(form.username.data) if user is not None and user.check_password(form.password.data): login_user(user, form.remember_me.data) flash('Logged in successfully as {}'.format(user.username)) return redirect(request.args.get('next') or url_for('user', username=user.username)) flash('Incorrect username or password.') return render_template('login.html', form=form) @app.route('/logout') def logout(): logout_user() return redirect(url_for('index')) @app.errorhandler(404) def page_not_found(e): return render_template('404.html'), 404 @app.errorhandler(500) def server_error(e): return render_template('500.html'), 500 # SAVE AS manager.py # Create some initial users with password for testing without a signup page. #!/usr/bin/env python3 # -*- coding: utf-8 -*- from flask.ext.script import Manager, prompt_bool from thermos import app, db from thermos.models import User manager = Manager(app) @manager.command def initdb(): db.create_all() db.session.add(User(username='reindert', email='[email protected]', password='test')) db.session.add(User(username='arjen', email='[email protected]', password='test')) db.session.commit() print('Initialized the database') @manager.command def dropdb(): if prompt_bool( 'Are you sure you want to lose all your data'): db.drop_all() print('Dropped the database') if __name__ == '__main__': manager.run() ###Output _____no_output_____ ###Markdown 6.9 Demo: Adding a signup pageCreate a new template page `signup.html`. 
###Code # SAVE AS forms.py # -*- coding: utf-8 -*- from flask_wtf import Form from wtforms.fields import StringField, PasswordField, BooleanField, SubmitField from flask.ext.wtf.html5 import URLField from wtforms.validators import DataRequired, Length, Email, Regexp, EqualTo,\ url, ValidationError from thermos.models import User class BookmarkForm(Form): url = URLField('The URL for your bookmark', validators=[DataRequired(), url()]) description = StringField('Add an optional description:') def validate(self): if not (self.url.data.startswith("http://") or\ self.url.data.startswith("https://")): self.url.data = "http://" + self.url.data if not Form.validate(self): return False if not self.description.data: self.description.data = self.url.data return True class LoginForm(Form): username = StringField('Your Username', validators=[DataRequired()]) password = PasswordField('Password', validators=[DataRequired()]) remember_me = BooleanField('Keep me logged in') submit = SubmitField('Log In') class SignupForm(Form): username = StringField('Username', validators=[ DataRequired(), Length(3, 80), Regexp('^[A-Za-z0-9_]{3,}$', message='Usernames consist of numbers, letters, ' 'and underscores.')]) password = PasswordField('Password', validators=[ DataRequired(), EqualTo('password2', message='Passwords must match')]) password2 = PasswordField('Confirm password', validators=[DataRequired()]) email = StringField('Email', validators=[DataRequired(), Length(1, 80), Email()]) def validate_email(self, email_field): if User.query.filter_by(email=email_field.data).first(): raise ValidationError('There already is a user with this email address.') def validate_username(self, username_field): if User.query.filter_by(username=username_field.data).first(): raise ValidationError('This username is already taken.') # SAVE AS models.py # -*- coding: utf-8 -*- from datetime import datetime from sqlalchemy import desc from flask_login import UserMixin from werkzeug.security import check_password_hash, generate_password_hash from . import db class Bookmark(db.Model): id = db.Column(db.Integer, primary_key=True) url = db.Column(db.Text, nullable=False) # Pass the function object instead of the function result # as the default method for getting the default time. date = db.Column(db.DateTime, default=datetime.utcnow) description = db.Column(db.String(300)) user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False) @staticmethod def newest(num): return Bookmark.query.order_by(desc(Bookmark.date)).limit(num) def __repr__(self): return "<Bookmark '{}': '{}'>".format(self.description, self.url) class User(db.Model, UserMixin): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(80), unique=True) email = db.Column(db.String(120), unique=True) bookmarks = db.relationship('Bookmark', backref='user', lazy='dynamic') password_hash = db.Column(db.String) @property def password(self): raise AttributeError('password: write-only field') @password.setter def password(self, password): self.password_hash = generate_password_hash(password) def check_password(self, password): return check_password_hash(self.password_hash, password) @staticmethod def get_by_username(username): return User.query.filter_by(username=username).first() def __repr__(self): return '<User %r>' % self.username # SAVE AS views.py # -*- coding: utf-8 -*- from flask import render_template, flash, redirect, url_for, request from flask_login import login_required, login_user, logout_user, current_user from . 
import app, db, login_manager from thermos.forms import BookmarkForm, LoginForm, SignupForm from thermos.models import User, Bookmark @login_manager.user_loader def load_user(userid): return User.query.get(int(userid)) @app.route('/') @app.route('/index') def index(): return render_template('index.html', new_bookmarks=Bookmark.newest(5)) @app.route('/add', methods=['GET', 'POST']) @login_required def add(): form = BookmarkForm() if form.validate_on_submit(): url = form.url.data description = form.description.data bm = Bookmark(user=current_user, url=url, description=description) db.session.add(bm) db.session.commit() flash("Stored bookmark '{}' with description '{}'".format(url, description)) return redirect(url_for('index')) return render_template('add.html', form=form) @app.route('/user/<username>') def user(username): user = User.query.filter_by(username=username).first_or_404() return render_template('user.html', user=user) @app.route('/login', methods=['GET', 'POST']) def login(): form = LoginForm() if form.validate_on_submit(): # login and validate the user... user = User.get_by_username(form.username.data) if user is not None and user.check_password(form.password.data): login_user(user, form.remember_me.data) flash('Logged in successfully as {}'.format(user.username)) return redirect(request.args.get('next') or url_for('user', username=user.username)) flash('Incorrect username or password.') return render_template('login.html', form=form) @app.route('/logout') def logout(): logout_user() return redirect(url_for('index')) @app.route('/signup', methods=['GET', 'POST']) def signup(): form = SignupForm() if form.validate_on_submit(): user = User(email=form.email.data, username=form.username.data, password=form.password.data) db.session.add(user) db.session.commit() flash('Welcome, {}! Please login'.format(user.username)) return redirect(url_for('login')) return render_template('signup.html', form=form) @app.errorhandler(404) def page_not_found(e): return render_template('404.html'), 404 @app.errorhandler(500) def server_error(e): return render_template('500.html'), 500 ###Output _____no_output_____
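###Markdown As a quick check of the new route, here is a minimal smoke test (a sketch, not part of the course code; it assumes the `thermos` package from the listings above is importable with `app` and `db` created in its `__init__.py`, as implied by the `from . import app, db, login_manager` line, and it switches to an in-memory SQLite database just for the test):
###Code
# Minimal smoke test for the /signup route (a sketch; assumptions noted above)
from thermos import app, db

app.config['TESTING'] = True
app.config['WTF_CSRF_ENABLED'] = False               # let the form validate without a CSRF token
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite://'  # throwaway in-memory database

with app.app_context():
    db.create_all()
    client = app.test_client()
    response = client.post('/signup', data={'username': 'testuser',
                                            'email': '[email protected]',
                                            'password': 'secret',
                                            'password2': 'secret'},
                           follow_redirects=True)
    print(response.status_code)                      # expect 200 after the redirect to the login page
###Output
_____no_output_____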
notebooks/DataFiltering.ipynb
###Markdown Data filtering in signal processing> Marcos Duarte > Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) > Federal University of ABC, Brazil Here we will see an introduction to data filtering and the most basic filters typically used in signal processing of biomechanical data. You should be familiar with the [basic properties of signals](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/SignalBasicProperties.ipynb) before proceeding. Filter and smoothingIn data acquisition with an instrument, it's common that the noise has higher frequencies and lower amplitudes than the desired signal. To remove this noise from the signal, a procedure known as filtering or smoothing is employed in the signal processing. Filtering is a process that attenuates unwanted components or features of a signal. A filter usually removes certain frequency components from the data according to its frequency response. [Frequency response](http://en.wikipedia.org/wiki/Frequency_response) is the quantitative measure of the output spectrum of a system or device in response to a stimulus, and is used to characterize the dynamics of the system. [Smoothing](http://en.wikipedia.org/wiki/Smoothing) is the process of removal of local (at short scale) fluctuations in the data while preserving a more global pattern in the data (such local variations could be noise or just a short scale phenomenon that is not interesting). A filter with a low-pass frequency response performs smoothing. With respect to the implementation, a filter can be classified as an [analog filter](http://en.wikipedia.org/wiki/Passive_analogue_filter_development) or a [digital filter](http://en.wikipedia.org/wiki/Digital_filter). An analog filter is an electronic circuit that performs filtering of the input electrical signal (analog data) and outputs a filtered electrical signal (analog data). A simple analog filter can be implemented with an electronic circuit with a resistor and a capacitor. A digital filter is a system that implements the filtering of digital (time-discrete) data. Example: the moving-average filterAn example of a low-pass (smoothing) filter is the moving average, which is performed by taking the arithmetic mean of subsequences of $m$ terms of the data. For instance, the moving averages with window sizes ($m$) equal to 2 and 3 are:$$ \begin{array}{}&y_{MA(2)} = \frac{1}{2}[x_1+x_2,\; x_2+x_3,\; \cdots,\; x_{n-1}+x_n] \\&y_{MA(3)} = \frac{1}{3}[x_1+x_2+x_3,\; x_2+x_3+x_4,\; \cdots,\; x_{n-2}+x_{n-1}+x_n]\end{array} $$which have the general formula:$$ y[i] = \frac{1}{m}\sum_{j=0}^{m-1} x[i+j] \quad for \quad i=1, \; \dots, \; n-m+1 $$Where $n$ is the number (length) of data. Let's implement a simple version of the moving average filter. 
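As a quick worked example with an assumed toy sequence (not data from this notebook), the moving average with $m=3$ of $x = [1,\, 2,\, 3,\, 4,\, 5]$ is:$$ y_{MA(3)} = \frac{1}{3}[1+2+3,\; 2+3+4,\; 3+4+5] = [2,\; 3,\; 4] $$that is, $n-m+1 = 3$ points, each one the mean of three consecutive samples. 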
First, let's import the necessary Python libraries and configure the environment: ###Code # Import the necessary libraries import numpy as np import matplotlib.pyplot as plt %matplotlib inline from IPython.display import HTML, display import sys sys.path.insert(1, r'./../functions') # add to pythonpath ###Output _____no_output_____ ###Markdown A naive moving-average function definition: ###Code def moving_average(x, window): """Moving average of 'x' with window size 'window'.""" y = np.empty(len(x)-window+1) for i in range(len(y)): y[i] = np.sum(x[i:i+window])/window return y ###Output _____no_output_____ ###Markdown Let's generate some data to test this function: ###Code signal = np.zeros(300) signal[100:200] += 1 noise = np.random.randn(300)/10 x = signal + noise window = 11 y = moving_average(x, window) # plot fig, ax = plt.subplots(1, 1, figsize=(8, 4)) ax.plot(x, 'b.-', linewidth=1, label = 'raw data') ax.plot(y, 'r.-', linewidth=2, label = 'moving average') ax.legend(frameon=False, loc='upper right', fontsize=10) ax.set_xlabel("Time [s]") ax.set_ylabel("Amplitude") plt.show() ###Output _____no_output_____ ###Markdown Later we will look on better ways to calculate the moving average. Digital filtersIn signal processing, a digital filter is a system that performs mathematical operations on a signal to modify certain aspects of that signal. A digital filter (in fact, a causal, linear time-invariant (LTI) digital filter) can be seen as the implementation of the following difference equation in the time domain:$$ \begin{array}{}y_n &= \quad b_0x_n + \; b_1x_{n-1} + \cdots + b_Mx_{n-M} - \; a_1y_{n-1} - \cdots - a_Ny_{n-N} \\ & = \quad \sum_{k=0}^M b_kx_{n-k} - \sum_{k=1}^N a_ky_{n-k}\end{array} $$Where the output $y$ is the filtered version of the input $x$, $a_k$ and $b_k$ are the filter coefficients (real values), and the order of the filter is the larger of N or M. This general equation is for a recursive filter where the filtered signal y is calculated based on current and previous values of $x$ and on previous values of $y$ (the own output values, because of this it is said to be a system with feedback). A filter that does not re-use its outputs as an input (and it is said to be a system with only feedforward) is called nonrecursive filter (the $a$ coefficients of the equation are zero). Recursive and nonrecursive filters are also known as infinite impulse response (IIR) and finite impulse response (FIR) filters, respectively. A filter with only the terms based on the previous values of $y$ is also known as an autoregressive (AR) filter. A filter with only the terms based on the current and previous values of $x$ is also known as an moving-average (MA) filter. The filter with all terms is also known as an autoregressive moving-average (ARMA) filter. The moving-average filter can be implemented by making $n$ $b$ coefficients each equals to $1/n$ and the $a$ coefficients equal to zero in the difference equation. Transfer function Another form to characterize a digital filter is by its [transfer function](http://en.wikipedia.org/wiki/Transfer_function). In simple terms, a transfer function is the ratio in the frequency domain between the input and output signals of a filter. 
For continuous-time input signal $x(t)$ and output $y(t)$, the transfer function $H(s)$ is given by the ratio between the [Laplace transforms](http://en.wikipedia.org/wiki/Laplace_transform) of input $x(t)$ and output $y(t)$:$$ H(s) = \frac{Y(s)}{X(s)} $$Where $s = \sigma + j\omega$; $j$ is the imaginary unit and $\omega$ is the angular frequency, $2\pi f$. In the steady-state response case, we can consider $\sigma=0$ and the Laplace transforms with complex arguments reduce to the [Fourier transforms](http://en.wikipedia.org/wiki/Fourier_transform) with real argument $\omega$. For discrete-time input signal $x(t)$ and output $y(t)$, the transfer function $H(z)$ will be given by the ratio between the [z-transforms](http://en.wikipedia.org/wiki/Z-transform) of input $x(t)$ and output $y(t)$, and the formalism is similar. The transfer function of a digital filter (in fact for a linear, time-invariant, and causal filter), obtained by taking the z-transform of the difference equation shown earlier, is given by:$$ H(z) = \frac{Y(z)}{X(z)} = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_M z^{-M}}{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_N z^{-N}} $$$$ H(z) = \frac{\sum_{k=0}^M b_kz^{-k}}{1 + \sum_{k=1}^N a_kz^{-k}} $$And the order of the filter is the larger of N or M. Similar to the difference equation, this transfer function is for a recursive (IIR) filter. If the $a$ coefficients are zero, the denominator is equal to one, and the filter becomes nonrecursive (FIR). The Fourier transformThe [Fourier transform](http://en.wikipedia.org/wiki/Fourier_transform) is a mathematical operation to transform a signal which is a function of time, $g(t)$, into a signal which is a function of frequency, $G(f)$, and it is defined by: $$ \mathcal{F}[g(t)] = G(f) = \int_{-\infty}^{\infty} g(t) e^{-j 2\pi f t} dt $$ Its inverse operation is: $$ \mathcal{F}^{-1}[G(f)] = g(t) = \int_{-\infty}^{\infty} G(f) e^{j 2\pi f t} df $$ The function $G(f)$ is the representation in the frequency domain of the time-domain signal, $g(t)$, and vice-versa. The functions $g(t)$ and $G(f)$ are referred to as a Fourier integral pair, or Fourier transform pair, or simply the Fourier pair. [See this text for an introduction to the Fourier transform](http://www.thefouriertransform.com/transform/fourier.php). Types of filtersIn relation to the frequencies that are not removed from the data (and a boundary is specified by the critical or cutoff frequency), a filter can be low-pass, high-pass, band-pass, or band-stop. The frequency response of such filters is illustrated in the next figure. Frequency response of filters (from Wikipedia). The critical or cutoff frequency for a filter is defined as the frequency where the power (the amplitude squared) of the filtered signal is half of the power of the input signal (or the output amplitude is 0.707 of the input amplitude). For instance, if a low-pass filter has a cutoff frequency of 10 Hz, it means that at 10 Hz the power of the filtered signal is 50% of the power of the original signal (and the output amplitude will be about 71% of the input amplitude). The gain of a filter (the ratio between the output and input powers) is usually expressed in the decibel (dB) unit. Decibel (dB) The decibel (dB) is a logarithmic unit used to express the ratio between two values. 
In the case of the filter gain measured in the decibel unit:$$Gain=10\,log\left(\frac{A_{out}^2}{A_{in}^2}\right)=20\,log\left(\frac{A_{out}}{A_{in}}\right)$$ Where $A_{out}$ and $A_{in}$ are respectively the amplitudes of the output (filtered) and input (raw) signals.For instance, the critical or cutoff frequency for a filter, the frequency where the power (the amplitude squared) of the filtered signal is half of the power of the input signal, is given in decibel as:$$ 10\,log\left(0.5\right) \approx -3 dB $$ If the power of the filtered signal is twice the power of the input signal, because of the logarithm, the gain in decibel is $10\,log\left(2\right) \approx 3 dB$. If the output power is attenuated by ten times, the gain is $10\,log\left(0.1\right) \approx -10 dB$, but if the output amplitude is attenuated by ten times, the gain is $20\,log\left(0.1\right) \approx -20 dB$, and if the output amplitude is amplified by ten times, the gain is $20 dB$. For each 10-fold variation in the amplitude ratio, there is an increase (or decrease) of $20 dB$.The decibel unit is useful to represent large variations in a measurement, for example, $-120 dB$ represents an attenuation of 1,000,000 times. A decibel is one tenth of a bel, a unit named in honor of Alexander Graham Bell. Butterworth filterA common filter employed in biomechanics and motor control fields is the [Butterworth filter](http://en.wikipedia.org/wiki/Butterworth_filter). This filter is used because its simple design, it has a more flat frequency response and linear phase response in the pass and stop bands, and it is simple to use. The Butterworth filter is a recursive filter (IIR) and both $a$ and $b$ filter coefficients are used in its implementation. Let's implement the Butterworth filter. We will use the function `butter` to calculate the filter coefficients: ```pythonbutter(N, Wn, btype='low', analog=False, output='ba')```Where `N` is the order of the filter, `Wn` is the cutoff frequency specified as a fraction of the [Nyquist frequency](http://en.wikipedia.org/wiki/Nyquist_frequency) (half of the sampling frequency), and `btype` is the type of filter (it can be any of {'lowpass', 'highpass', 'bandpass', 'bandstop'}, the default is 'lowpass'). See the help of `butter` for more details. The filtering itself is performed with the function `lfilter`: ```pythonlfilter(b, a, x, axis=-1, zi=None)```Where `b` and `a` are the Butterworth coefficients calculated with the function `butter` and `x` is the variable with the data to be filtered. ###Code from scipy import signal freq = 100 t = np.arange(0, 1, .01) w = 2*np.pi*1 # 1 Hz y = np.sin(w*t) + 0.1*np.sin(10*w*t) # Butterworth filter b, a = signal.butter(2, 5/(freq/2), btype = 'low') y2 = signal.lfilter(b, a, y) # standard filter # plot fig, ax1 = plt.subplots(1, 1, figsize=(9, 4)) ax1.plot(t, y, 'r.-', linewidth=2, label = 'raw data') ax1.plot(t, y2, 'b.-', linewidth=2, label = 'filter @ 5 Hz') ax1.legend(frameon=False, fontsize=14) ax1.set_xlabel("Time [s]") ax1.set_ylabel("Amplitude") plt.show() ###Output _____no_output_____ ###Markdown The plot above shows that the Butterworth filter introduces a phase (a delay or lag in time) between the raw and the filtered signals. We will see how to account for that later. 
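Before that, the size of this lag can already be estimated directly from the two signals; a minimal sketch (assuming the arrays `y` and `y2` and the sampling frequency `freq` from the cell above), using the position of the peak of the cross-correlation between the filtered and the raw data:
###Code
# Estimate the delay introduced by the single-pass filter (a sketch):
# the lag that maximizes the cross-correlation between filtered and raw data
import numpy as np
lags = np.arange(-len(y) + 1, len(y))
xcorr = np.correlate(y2 - np.mean(y2), y - np.mean(y), mode='full')
delay = lags[np.argmax(xcorr)]
print('Estimated delay: %d samples (about %.2f s)' % (delay, delay/freq))
###Output
_____no_output_____
###Markdown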
Let's look at the values of the `b` and `a` Butterworth filter coefficients for different orders and see a characteristic of them; from the general difference equation shown earlier, it follows that the sum of the `b` coefficients minus the sum of the `a` coefficients (excluding the first coefficient of `a`) is one: ###Code from scipy import signal print('Low-pass Butterworth filter coefficients') b, a = signal.butter(1, .1, btype = 'low') print('Order 1:', '\nb:', b, '\na:', a, '\nsum(b)-sum(a):', np.sum(b)-np.sum(a[1:])) b, a = signal.butter(2, .1, btype = 'low') print('Order 2:', '\nb:', b, '\na:', a, '\nsum(b)-sum(a):', np.sum(b)-np.sum(a[1:])) ###Output Low-pass Butterworth filter coefficients Order 1: b: [ 0.13672874 0.13672874] a: [ 1. -0.72654253] sum(b)-sum(a): 1.0 Order 2: b: [ 0.02008337 0.04016673 0.02008337] a: [ 1. -1.56101808 0.64135154] sum(b)-sum(a): 1.0 ###Markdown Bode plotHow much the amplitude of the filtered signal is attenuated in relation to the amplitude of the raw signal (gain or magnitude) as a function of frequency is given in the frequency response plot. The plots of the frequency and phase responses (the [bode plot](http://en.wikipedia.org/wiki/Bode_plot)) of this filter implementation (Butterworth, lowpass at 5 Hz, second-order) is shown below: ###Code from scipy import signal b, a = signal.butter(2, 5/(freq/2), btype = 'low') w, h = signal.freqz(b, a) # compute the frequency response of a digital filter angles = np.rad2deg(np.unwrap(np.angle(h))) # angle of the complex argument w = w/np.pi*freq/2 # angular frequency from radians to Hz h = 20*np.log10(np.absolute(h)) # in decibels fig, (ax1, ax2) = plt.subplots(2, 1, sharex = True, figsize=(9, 6)) ax1.plot(w, h, linewidth=2) ax1.set_ylim(-80, 1) ax1.set_title('Frequency response') ax1.set_ylabel("Magnitude [dB]") ax1.plot(5, -3.01, 'ro') ax11 = plt.axes([.17, .59, .2, .2]) # inset plot ax11.plot(w, h, linewidth=2) ax11.plot(5, -3.01, 'ro') ax11.set_ylim([-6, .5]) ax11.set_xlim([0, 10]) ax2.plot(w, angles, linewidth=2) ax2.set_title('Phase response') ax2.set_xlabel("Frequency [Hz]") ax2.set_ylabel("Phase [degrees]") ax2.plot(5, -90, 'ro') plt.show() ###Output _____no_output_____ ###Markdown The inset plot in the former figure shows that at the cutoff frequency (5 Hz), the power of the filtered signal is indeed attenuated by 3 dB. The phase-response plot shows that at the cutoff frequency, the Butterworth filter presents about 90 degrees of phase between the raw and filtered signals. A 5 Hz signal has a period of 0.2 s and 90 degrees of phase corresponds to 0.05 s of lag. Looking at the plot with the raw and filtered signals employing or not the phase correction, we can see that the delay is indeed about 0.05 s. Order of a filterThe order of a filter is related to the inclination of the 'wall' in the frequency response plot that attenuates or not the input signal at the vicinity of the cutoff frequency. A vertical wall exactly at the cutoff frequency would be ideal but this is impossible to implement. A Butterworth filter of first order attenuates 6 dB of the power of the signal each doubling of the frequency (per octave) or, which is the same, attenuates 20 dB each time the frequency varies by an order of 10 (per decade). In more technical terms, one simply says that a first-order filter rolls off -6 dB per octave or that rolls off -20 dB per decade. A second-order filter rolls off -12 dB per octave (-40 dB per decade), and so on, as shown in the next figure. 
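As a quick numeric check of these rates before the figure, a small sketch using the magnitude response of an analog Butterworth filter of order $N$, $|H(f)|^2 = 1/(1 + (f/f_c)^{2N})$, evaluated well above the cutoff frequency:
###Code
# Numeric check of the roll-off rates (a sketch; analog Butterworth magnitude)
import numpy as np

def gain_db(f_ratio, N):
    """Magnitude (dB) of an order-N analog Butterworth filter at f/fc = f_ratio."""
    return 10*np.log10(1/(1 + f_ratio**(2*N)))

for N in (1, 2):  # filter order
    print('Order %d: %5.1f dB/octave, %6.1f dB/decade'
          % (N, gain_db(16, N) - gain_db(8, N), gain_db(100, N) - gain_db(10, N)))
###Output
_____no_output_____
###Markdown
And here are the frequency response curves of the Butterworth filter for different orders: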
###Code from butterworth_plot import butterworth_plot butterworth_plot() ###Output _____no_output_____ ###Markdown Butterworth filter with zero-phase shiftThe phase introduced by the Butterworth filter can be corrected in the digital implementation by cleverly filtering the data twice, once forward and once backwards. So, the lag introduced in the first filtering is zeroed by the same lag in the opposite direction at the second pass. The result is a zero-phase shift (or zero-phase lag) filtering. However, because after each pass the output power at the cutoff frequency is attenuated by two, by passing the second-order Butterworth filter twice, the final output power will be attenuated by four. We have to correct the actual cutoff frequency value so that when employing the two passes, the filter will attenuate only by two. The following formula gives the correction factor, $C$, for the cutoff frequency of a second-order Butterworth filter according to the number of passes, $n$ (see Winter, 2009):$$ C = \sqrt[4]{2^{\frac{1}{n}} - 1} $$For instance, for two passes, $n=2$, $ C=\sqrt[4]{2^{\frac{1}{2}} - 1} \approx 0.802 $. The actual filter cutoff frequency will be:$$ fc_{actual} = \frac{fc_{desired}}{C} $$For instance, for a second-order Butterworth filter with zero-phase shift and a desired 10 Hz cutoff frequency, the actual cutoff frequency should be 12.47 Hz. Let's implement this forward and backward filtering using the function `filtfilt` and compare it with the single-pass filtering we just did. ###Code from scipy.signal import butter, lfilter, filtfilt freq = 100 t = np.arange(0, 1, .01) w = 2*np.pi*1 # 1 Hz y = np.sin(w*t) + 0.1*np.sin(10*w*t) # Butterworth filter b, a = butter(2, 5/(freq/2), btype = 'low') y2 = lfilter(b, a, y) # standard filter # Correct the cutoff frequency for the number of passes in the filter C = 0.802 b, a = butter(2, (5/C)/(freq/2), btype = 'low') y3 = filtfilt(b, a, y) # filter with phase shift correction # plot fig, ax1 = plt.subplots(1, 1, figsize=(9, 4)) ax1.plot(t, y, 'r.-', linewidth=2, label = 'raw data') ax1.plot(t, y2, 'b.-', linewidth=2, label = 'filter @ 5 Hz') ax1.plot(t, y3, 'g.-', linewidth=2, label = 'filtfilt @ 5 Hz') ax1.legend(frameon=False, fontsize=14) ax1.set_xlabel("Time [s]") ax1.set_ylabel("Amplitude") plt.show() ###Output _____no_output_____ ###Markdown Critically damped digital filterA problem with a lowpass Butterworth filter is that it tends to overshoot or undershoot data with rapid changes (see for example, Winter (2009), Robertson et al. (2013), and Robertson & Dowling (2003)). The Butterworth filter behaves as an underdamped second-order system and a critically damped filter doesn't have this overshoot/undershoot characteristic. The function `critic_damp.py` calculates the coefficients (the b's and a's) for an IIR critically damped digital filter and corrects the cutoff frequency for the number of passes of the filter. The calculation of these coefficients is very similar to the calculation for the Butterworth filter, see the `critic_damp.py` code. This function can also calculate the Butterworth coefficients if this option is chosen. 
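For reference, a minimal sketch of the textbook second-order low-pass recursion (as in Winter, 2009, and Robertson & Dowling, 2003) on which such a function can be based — an illustration of the standard formulas, not necessarily the exact code of `critic_damp.py`; for the parameters used in the example below it should give essentially the same coefficients:
###Code
import numpy as np

def lowpass2(fcut, freq, npass=2, filt='critic'):
    """Sketch: 2nd-order low-pass coefficients (critically damped or Butterworth)
    with the cutoff frequency corrected for the number of passes."""
    if filt == 'butter':
        corr = (2**(1/npass) - 1)**0.25       # Butterworth correction factor
    else:
        corr = (2**(1/(2*npass)) - 1)**0.5    # critically damped correction factor
    fcorr = fcut/corr                         # corrected cutoff frequency
    wc = np.tan(np.pi*fcorr/freq)             # warped cutoff frequency
    k1 = np.sqrt(2)*wc if filt == 'butter' else 2*wc
    k2 = wc**2
    b0 = k2/(1 + k1 + k2)
    k3 = 2*b0/k2
    b = np.array([b0, 2*b0, b0])                       # numerator (b) coefficients
    a = np.array([1, -(k3 - 2*b0), -(1 - 2*b0 - k3)])  # denominator (a), scipy sign convention
    return b, a, fcorr

print(lowpass2(fcut=10, freq=100, npass=2, filt='critic'))
###Output
_____no_output_____
###Markdown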
The signature of `critic_damp.py` function is: ```pythoncritic_damp(fcut, freq, npass=2, fcorr=True, filt='critic')```And here is an example of `critic_damp.py`: ###Code >>> from critic_damp import critic_damp >>> print('Critically damped filter') >>> b_cd, a_cd, fc_cd = critic_damp(fcut=10, freq=100, npass=2, fcorr=True, filt='critic') >>> print('b:', b_cd, '\na:', a_cd, '\nCorrected Fc:', fc_cd) >>> print('Butterworth filter') >>> b_bw, a_bw, fc_bw = critic_damp(fcut=10, freq=100, npass=2, fcorr=True, filt='butter') >>> print('b:', b_bw, '\na:', a_bw, '\nCorrected Fc:', fc_bw) >>> # illustrate the filter in action >>> import numpy as np >>> import matplotlib.pyplot as plt >>> from scipy import signal >>> y = np.hstack((np.zeros(20), np.ones(20))) >>> t = np.linspace(0, 0.39, 40) - .19 >>> y_cd = signal.filtfilt(b_cd, a_cd, y) >>> y_bw = signal.filtfilt(b_bw, a_bw, y) >>> fig, ax = plt.subplots(1, 1, figsize=(9, 4)) >>> ax.plot(t, y, 'k', linewidth=2, label = 'raw data') >>> ax.plot(t, y_cd, 'r', linewidth=2, label = 'Critically damped') >>> ax.plot(t, y_bw, 'b', linewidth=2, label = 'Butterworth') >>> ax.legend() >>> ax.set_xlabel('Time (s)') >>> ax.set_ylabel('Amplitude') >>> ax.set_title('Freq = 100 Hz, Fc = 10 Hz, 2nd order and zero-phase shift filters') >>> plt.show() ###Output Critically damped filter b: [ 0.21937845 0.4387569 0.21937845] a: [ 1. -0.12648588 0.00399967] Corrected Fc: 22.9895922275 Butterworth filter b: [ 0.09718522 0.19437045 0.09718522] a: [ 1. -0.94557029 0.33431119] Corrected Fc: 12.4650470277 ###Markdown Moving-average filterHere are four different versions of a function to implement the moving-average filter: ###Code def moving_averageV1(x, window): """Moving average of 'x' with window size 'window'.""" y = np.empty(len(x)-window+1) for i in range(len(y)): y[i] = np.sum(x[i:i+window])/window return y def moving_averageV2(x, window): """Moving average of 'x' with window size 'window'.""" xsum = np.cumsum(x) xsum[window:] = xsum[window:] - xsum[:-window] return xsum[window-1:]/window def moving_averageV3(x, window): """Moving average of 'x' with window size 'window'.""" return np.convolve(x, np.ones(window)/window, 'same') from scipy.signal import lfilter def moving_averageV4(x, window): """Moving average of 'x' with window size 'window'.""" return lfilter(np.ones(window)/window, 1, x) ###Output _____no_output_____ ###Markdown Let's test these versions: ###Code x = np.random.randn(300)/10 x[100:200] += 1 window = 10 y1 = moving_averageV1(x, window) y2 = moving_averageV2(x, window) y3 = moving_averageV3(x, window) y4 = moving_averageV4(x, window) # plot fig, ax = plt.subplots(1, 1, figsize=(10, 5)) ax.plot(x, 'b-', linewidth=1, label = 'raw data') ax.plot(y1, 'y-', linewidth=2, label = 'moving average V1') ax.plot(y2, 'm--', linewidth=2, label = 'moving average V2') ax.plot(y3, 'r-', linewidth=2, label = 'moving average V3') ax.plot(y4, 'g-', linewidth=2, label = 'moving average V4') ax.legend(frameon=False, loc='upper right', fontsize=12) ax.set_xlabel("Data #") ax.set_ylabel("Amplitude") plt.show() ###Output _____no_output_____ ###Markdown A test of the performance of the four versions (using the magick IPython function `timeit`): ###Code %timeit moving_averageV1(x, window) %timeit moving_averageV2(x, window) %timeit moving_averageV3(x, window) %timeit moving_averageV4(x, window) ###Output 715 µs ± 16.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 5.25 µs ± 182 ns per loop (mean ± std. dev. 
of 7 runs, 100000 loops each) 6.22 µs ± 283 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) 52.1 µs ± 776 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) ###Markdown The version with the cumsum function produces identical results to the first version of the moving average function but it is much faster (the fastest of the four versions). Only the version with the convolution function produces a result without a phase or lag between the input and output data, although we could improve the other versions to account for that (for example, calculating the moving average of `x[i-window/2:i+window/2]` and using `filtfilt` instead of `lfilter`). And avoid as much as possible the use of loops in Python! The version with the for loop is about one hundred times slower than the other versions. Moving-RMS filterThe root-mean square (RMS) is a measure of the absolute amplitude of the data and it is useful when the data have positive and negative values. The RMS is defined as:$$ RMS = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2} $$Similar to the moving-average measure, the moving RMS is defined as:$$ y[i] = \sqrt{\frac{1}{m}\sum_{j=0}^{m-1} (x[i+j])^2} \;\;\;\; for \;\;\; i=1, \; \dots, \; n-m+1 $$Here are two implementations of a moving-RMS filter (very similar to the moving-average filter): ###Code import numpy as np from scipy.signal import filtfilt def moving_rmsV1(x, window): """Moving RMS of 'x' with window size 'window'.""" window = 2*window + 1 return np.sqrt(np.convolve(x*x, np.ones(window)/window, 'same')) def moving_rmsV2(x, window): """Moving RMS of 'x' with window size 'window'.""" return np.sqrt(filtfilt(np.ones(window)/(window), [1], x*x)) ###Output _____no_output_____ ###Markdown Let's filter electromyographic data: ###Code # load data file with EMG signal data = np.loadtxt('./../data/emg.csv', delimiter=',') data = data[300:1000,:] time = data[:, 0] data = data[:, 1] - np.mean(data[:, 1]) window = 50 y1 = moving_rmsV1(data, window) y2 = moving_rmsV2(data, window) # plot fig, ax = plt.subplots(1, 1, figsize=(9, 5)) ax.plot(time, data, 'k-', linewidth=1, label = 'raw data') ax.plot(time, y1, 'r-', linewidth=2, label = 'moving RMS V1') ax.plot(time, y2, 'b-', linewidth=2, label = 'moving RMS V2') ax.legend(frameon=False, loc='upper right', fontsize=12) ax.set_xlabel("Time [s]") ax.set_ylabel("Amplitude") ax.set_ylim(-.1, .1) plt.show() ###Output _____no_output_____ ###Markdown Similar, but not the same, results. An advantage of the filter employing the convolution method is that it handles abrupt changes in the data better, such as when filtering data that change from a baseline at zero to large positive values. The implementation based on the `filtfilt` function can introduce negative values in this case. Another advantage of the convolution method is that it is much faster: ###Code print('Filter with convolution:') %timeit moving_rmsV1(data, window) print('Filter with filtfilt:') %timeit moving_rmsV2(data, window) ###Output Filter with convolution: 27 µs ± 1.21 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) Filter with filtfilt: 343 µs ± 1.79 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) ###Markdown Moving-median filterThe moving-median filter is similar in concept to the other moving filters but uses the median instead. 
This filter has a sharper response to abrupt changes in the data than the moving-average filter: ###Code from scipy.signal import medfilt x = np.random.randn(300)/10 x[100:200] += 1 window = 11 y = np.convolve(x, np.ones(window)/window, 'same') y2 = medfilt(x, window) # plot fig, ax = plt.subplots(1, 1, figsize=(10, 4)) ax.plot(x, 'b-', linewidth=1, label = 'raw data') ax.plot(y, 'r-', linewidth=2, label = 'moving average') ax.plot(y2, 'g-', linewidth=2, label = 'moving median') ax.legend(frameon=False, loc='upper right', fontsize=12) ax.set_xlabel("Data #") ax.set_ylabel("Amplitude") plt.show() ###Output _____no_output_____ ###Markdown More moving filtersThe library [pandas](http://pandas.pydata.org/) has several types of [moving-filter functions](http://pandas.pydata.org/pandas-docs/stable/computation.htmlmoving-rolling-statistics-moments). Numerical differentiation of data with noiseHow to remove noise from a signal is rarely a trivial task and this problem gets worse with numerical differentiation of the data because the amplitudes of the noise with higher frequencies than the signal are amplified with differentiation (for each differentiation step, the SNR decreases). To demonstrate this problem, consider the following function representing some experimental data:$$ f = sin(\omega t) + 0.1sin(10\omega t) $$The first component, with large amplitude (1) and small frequency (1 Hz), represents the signal and the second component, with small amplitude (0.1) and large frequency (10 Hz), represents the noise. The signal-to-noise ratio (SNR) for these data is equal to (1/0.1)$^2$ = 100. Let's see what happens with the SNR for the first and second derivatives of $f$:$$ f\:'\:= \omega cos(\omega t) + \omega cos(10\omega t) $$$$ f\:''= -\omega^2 sin(\omega t) - 10\omega^2 sin(10\omega t) $$For the first derivative, SNR = 1, and for the second derivative, SNR = 0.01! The following plots illustrate this problem: ###Code t = np.arange(0,1,.01) w = 2*np.pi*1 # 1 Hz #signal and noise derivatives: s = np.sin(w*t); n = 0.1*np.sin(10*w*t) sd = w*np.cos(w*t); nd = w*np.cos(10*w*t) sdd = -w*w*np.sin(w*t); ndd = -w*w*10*np.sin(10*w*t) plt.rc('axes', labelsize=16, titlesize=16) plt.rc('xtick', labelsize=12) plt.rc('ytick', labelsize=12) fig, (ax1,ax2,ax3) = plt.subplots(3, 1, sharex = True, figsize=(8, 6)) ax1.set_title('Differentiation of signal and noise') ax1.plot(t, s, 'b.-', linewidth=1, label = 'signal') ax1.plot(t, n, 'g.-', linewidth=1, label = 'noise') ax1.plot(t, s+n, 'r.-', linewidth=2, label = 'signal+noise') ax2.plot(t, sd, 'b.-', linewidth=1) ax2.plot(t, nd, 'g.-', linewidth=1) ax2.plot(t, sd + nd, 'r.-', linewidth=2) ax3.plot(t, sdd, 'b.-', linewidth=1) ax3.plot(t, ndd, 'g.-', linewidth=1) ax3.plot(t, sdd + ndd, 'r.-', linewidth=2) ax1.legend(frameon=False, fontsize=10) ax1.set_ylabel('f') ax2.set_ylabel("f '") ax3.set_ylabel("f ''") ax3.set_xlabel("Time (s)") fig.tight_layout(pad=0) plt.show() ###Output _____no_output_____ ###Markdown Let's see how the use of a low-pass Butterworth filter can attenuate the high-frequency noise and how the derivative is affected. We will also calculate the [Fourier transform](http://en.wikipedia.org/wiki/Fourier_transform) of these data to look at their frequencies content. 
###Code from scipy import signal, fftpack freq = 100 t = np.arange(0,1,.01); w = 2*np.pi*1 # 1 Hz y = np.sin(w*t)+0.1*np.sin(10*w*t) # Butterworth filter # Correct the cutoff frequency for the number of passes in the filter C = 0.802 b, a = signal.butter(2, (5/C)/(freq/2), btype = 'low') y2 = signal.filtfilt(b, a, y) # 2nd derivative of the data ydd = np.diff(y,2)*freq*freq # raw data y2dd = np.diff(y2,2)*freq*freq # filtered data # frequency content yfft = np.abs(fftpack.fft(y))/(y.size/2) # raw data y2fft = np.abs(fftpack.fft(y2))/(y.size/2) # filtered data freqs = fftpack.fftfreq(y.size, 1./freq) yddfft = np.abs(fftpack.fft(ydd))/(ydd.size/2) y2ddfft = np.abs(fftpack.fft(y2dd))/(ydd.size/2) freqs2 = fftpack.fftfreq(ydd.size, 1./freq) ###Output _____no_output_____ ###Markdown And the plots: ###Code fig, ((ax1,ax2),(ax3,ax4)) = plt.subplots(2, 2, figsize=(11, 5)) ax1.set_title('Temporal domain', fontsize=14) ax1.plot(t, y, 'r', linewidth=2, label = 'raw data') ax1.plot(t, y2, 'b', linewidth=2, label = 'filtered @ 5 Hz') ax1.set_ylabel('f') ax1.legend(frameon=False, fontsize=12) ax2.set_title('Frequency domain', fontsize=14) ax2.plot(freqs[:int(yfft.size/4)], yfft[:int(yfft.size/4)],'r', linewidth=2,label='raw data') ax2.plot(freqs[:int(yfft.size/4)],y2fft[:int(yfft.size/4)],'b--',linewidth=2,label='filtered @ 5 Hz') ax2.set_ylabel('FFT(f)') ax2.legend(frameon=False, fontsize=12) ax3.plot(t[:-2], ydd, 'r', linewidth=2, label = 'raw') ax3.plot(t[:-2], y2dd, 'b', linewidth=2, label = 'filtered @ 5 Hz') ax3.set_xlabel('Time [s]') ax3.set_ylabel("f ''") ax4.plot(freqs[:int(yddfft.size/4)], yddfft[:int(yddfft.size/4)], 'r', linewidth=2, label = 'raw') ax4.plot(freqs[:int(yddfft.size/4)],y2ddfft[:int(yddfft.size/4)],'b--',linewidth=2, label = 'filtered @ 5 Hz') ax4.set_xlabel('Frequency [Hz]') ax4.set_ylabel("FFT(f '')"); ###Output _____no_output_____ ###Markdown Pezzack's benchmark dataIn 1977, Pezzack, Norman and Winter published a paper where they investigated the effects of differentiation and filtering processes on experimental data (the angle of a bar manipulated in space). Since then, these data have became a benchmark to test new algorithms. Let's work with these data (available at [http://isbweb.org/data/pezzack/index.html](http://isbweb.org/data/pezzack/index.html)). The data have the angular displacement measured by video and the angular acceleration directly measured by an accelerometer, which we will consider as the true acceleration. ###Code # load data file time, disp, disp2, aacc = np.loadtxt('./../data/Pezzack.txt', skiprows=6, unpack=True) dt = np.mean(np.diff(time)) # plot data fig, (ax1,ax2) = plt.subplots(1, 2, sharex = True, figsize=(11, 4)) plt.suptitle("Pezzack's benchmark data", fontsize=20) ax1.plot(time, disp, 'b.-') ax1.set_xlabel('Time [s]') ax1.set_ylabel('Angular displacement [rad]', fontsize=12) ax2.plot(time, aacc, 'g.-') ax2.set_xlabel('Time [s]') ax2.set_ylabel('Angular acceleration [rad/s$^2$]', fontsize=12) plt.subplots_adjust(wspace=0.3) ###Output _____no_output_____ ###Markdown The challenge is how to obtain the acceleration using the disclacement data dealing with the noise. 
A simple double differentiation of these data will amplify the noise: ###Code # acceleration using the 2-point forward difference algorithm: aacc2 = np.diff(disp,2)/dt/dt # aacc2 has 2 points less than aacc # plot data fig, ax1 = plt.subplots(1, 1, figsize=(11, 4)) plt.suptitle("Pezzack's benchmark data", fontsize=20) ax1.plot(time, aacc, 'g', label='Analog acceleration (true value)') ax1.plot(time[1:-1], aacc2, 'r', label='Acceleration by 2-point difference') ax1.set_xlabel('Time [s]', fontsize=12) ax1.set_ylabel('Angular acceleration [rad/s$^2$]', fontsize=12) plt.legend(frameon=False, fontsize=12, loc='upper left'); ###Output _____no_output_____ ###Markdown The source of noise in these data is due to random small errors in the digitization process which occur at each frame, because that the frequency content of the noise is up to half of the sampling frequency, higher the frequency content of the movement being analyzed. Let's try different filters ([Butterworth](http://en.wikipedia.org/wiki/Butterworth_filter), [Savitzky-Golay](http://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_smoothing_filter), and [spline](http://en.wikipedia.org/wiki/Spline_function)) to attenuate this noise. Both Savitzky-Golay and the spline functions are based on fitting polynomials to the data and they allow to differentiate the polynomials in order to get the derivatives of the data (instead of direct numerical differentiation of the data). The Savitzky-Golay and the spline functions have the following signatures: ```pythonsavgol_filter(x, window_length, polyorder, deriv=0, delta=1.0, axis=-1, mode='interp', cval=0.0) splrep(x, y, w=None, xb=None, xe=None, k=3, task=0, s=None, t=None, full_output=0, per=0, quiet=1)```And to evaluate the spline derivatives: ```pythonsplev(x, tck, der=0, ext=0)```And let's employ the [root-mean-square error (RMSE)](http://en.wikipedia.org/wiki/RMSE) metric to compare their performance: ###Code from scipy import signal, interpolate # Butterworth filter # Correct the cutoff frequency for the number of passes in the filter C = 0.802 b, a = signal.butter(2, (9/C)/((1/dt)/2)) dispBW = signal.filtfilt(b, a, disp) aaccBW = np.diff(dispBW, 2)/dt/dt # aaccBW has 2 points less than aacc # Add (pad) data to the extremities to avoid problems with filtering disp_pad = signal._arraytools.odd_ext(disp, n=11) time_pad = signal._arraytools.odd_ext(time, n=11) # Savitzky-Golay filter aaccSG = signal.savgol_filter(disp_pad,window_length=5,polyorder=3,deriv=2,delta=dt)[11:-11] # Spline smoothing tck = interpolate.splrep(time_pad, disp_pad, k=5, s=0.15*np.var(disp_pad)/np.size(disp_pad)) aaccSP = interpolate.splev(time_pad, tck, der=2)[11:-11] # RMSE: rmseBW = np.sqrt(np.mean((aaccBW-aacc[1:-1])**2)) rmseSG = np.sqrt(np.mean((aaccSG-aacc)**2)) rmseSP = np.sqrt(np.mean((aaccSP-aacc)**2)) ###Output _____no_output_____ ###Markdown And the plots: ###Code # plot data fig, ax1 = plt.subplots(1, 1, figsize=(11, 4)) plt.suptitle("Pezzack's benchmark data", fontsize=20) ax1.plot(time, aacc, 'g', label='Analog acceleration: (True value)') ax1.plot(time[1:-1], aaccBW, 'r', label='Butterworth 9 Hz: RMSE = %0.2f' %rmseBW) ax1.plot(time,aaccSG,'b', label='Savitzky-Golay 5 points: RMSE = %0.2f' %rmseSG) ax1.plot(time,aaccSP,'m', label='Quintic spline, s=0.0005: RMSE = %0.2f' %rmseSP) ax1.set_xlabel('Time [s]') ax1.set_ylabel('Angular acceleration [rad/s$^2$]', fontsize=12) plt.legend(frameon=False, fontsize=12, loc='upper left'); ###Output _____no_output_____ ###Markdown At this case, the Butterworth, 
Savitzky-Golay, and spline filters produced similar results with good fits to the original curve. However, with all of them, particularly with the spline smoothing, it is necessary some degree of tuning for choosing the right parameters. The Butterworth filter is the easiest one because the cutoff frequency choice sound more familiar for human movement analysis. Kinematics of a ball tossLet's now analyse the kinematic data of a ball tossed to the space. These data were obtained using [Tracker](http://www.cabrillo.edu/~dbrown/tracker/), which is a free video analysis and modeling tool built on the [Open Source Physics](http://www.opensourcephysics.org/) (OSP) Java framework. The data are from the analysis of the video *balltossout.mov* from the mechanics video collection which can be obtained in the Tracker website. ###Code t, x, y = np.loadtxt('./../data/balltoss.txt', skiprows=2, unpack=True) dt = np.mean(np.diff(t)) print('Time interval: %f s' %dt) print('x and y values:') x, y plt.rc('axes', labelsize=14) plt.rc('xtick', labelsize=14) plt.rc('ytick', labelsize=14) fig, (ax1,ax2,ax3) = plt.subplots(1, 3, figsize=(12, 3)) plt.suptitle('Kinematics of a ball toss', fontsize=20, y=1.05) ax1.plot(x, y, 'go') ax1.set_ylabel('y [m]') ax1.set_xlabel('x [m]') ax2.plot(t, x, 'bo') ax2.set_ylabel('x [m]') ax2.set_xlabel('Time [s]') ax3.plot(t, y, 'ro') ax3.set_ylabel('y [m]') ax3.set_xlabel('Time [s]') plt.subplots_adjust(wspace=0.35) ###Output _____no_output_____ ###Markdown Calculate the velocity and acceleration numerically: ###Code # forward difference algorithm: vx, vy = np.diff(x)/dt, np.diff(y)/dt ax, ay = np.diff(vx)/dt, np.diff(vy)/dt # central difference algorithm: vx2, vy2 = (x[2:]-x[:-2])/(2*dt), (y[2:]-y[:-2])/(2*dt) ax2, ay2 = (vx2[2:]-vx2[:-2])/(2*dt), (vy2[2:]-vy2[:-2])/(2*dt) fig, axarr = plt.subplots(2, 3, sharex = True, figsize=(11, 6)) axarr[0,0].plot(t, x, 'bo') axarr[0,0].set_ylabel('x [m]') axarr[0,1].plot(t[:-1], vx, 'bo', label='forward difference'); axarr[0,1].set_ylabel('vx [m/s]') axarr[0,1].plot(t[1:-1], vx2, 'm+', markersize=10, label='central difference') axarr[0,1].legend(frameon=False, fontsize=10, loc='upper left', numpoints=1) axarr[0,2].plot(t[:-2], ax, 'bo') axarr[0,2].set_ylabel('ax [m/s$^2$]') axarr[0,2].plot(t[2:-2], ax2, 'm+', markersize=10) axarr[1,0].plot(t, y, 'ro') axarr[1,0].set_ylabel('y [m]') axarr[1,1].plot(t[:-1], vy, 'ro') axarr[1,1].set_ylabel('vy [m/s]') axarr[1,1].plot(t[1:-1], vy2, 'm+', markersize=10) axarr[1,2].plot(t[:-2], ay, 'ro') axarr[1,2].set_ylabel('ay [m/s$^2$]') axarr[1,2].plot(t[2:-2], ay2, 'm+', markersize=10) axarr[1,1].set_xlabel('Time [s]') plt.tight_layout(w_pad=-.5, h_pad=0) plt.suptitle('Kinematics of a ball toss', fontsize=20, y=1.05); ###Output _____no_output_____ ###Markdown We can observe the noise, particularly in the derivatives of the data. For example, the vertical acceleration of the ball should be constant, approximately g=9.8 m/s$^2$. To estimate the acceleration, we can get rid off the noise by filtering the data or, because we know the physics of the phenomenon, we can fit a model to the data. Let's try the latter option. 
###Code # Model: y = y0 + v0*t + 1/2*g*t^2 # fit a second order polynomial to the data p = np.polyfit(t, y, 2) print('g = %0.2f m/s2' % (2*p[0])) ###Output g = -9.98 m/s2
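###Markdown The quadratic model gives a value close to the expected -9.8 m/s$^2$. A possible follow-up (a sketch, assuming the `t` and `y` arrays from the cells above): `np.polyfit` can also return the covariance matrix of the fitted coefficients, which gives a rough uncertainty for this estimate of g:
###Code
# Same quadratic fit, now also asking for the covariance of the coefficients
p, cov = np.polyfit(t, y, 2, cov=True)
g, g_std = 2*p[0], 2*np.sqrt(cov[0, 0])  # g is twice the leading coefficient
print('g = %0.2f +/- %0.2f m/s2' % (g, g_std))
###Output
_____no_output_____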
of 7 runs, 100000 loops each) 6.22 µs ± 283 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) 52.1 µs ± 776 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) ###Markdown The version with the cumsum function produces identical results to the first version of the moving average function but it is much faster (the fastest of the four versions). Only the version with the convolution function produces a result without a phase or lag between the input and output data, although we could improve the other versions to account for that (for example, calculating the moving average of `x[i-window/2:i+window/2]` and using `filtfilt` instead of `lfilter`). And avoid as much as possible the use of loops in Python! The version with the for loop is about one hundred times slower than the other versions. Moving-RMS filterThe root-mean square (RMS) is a measure of the absolute amplitude of the data and it is useful when the data have positive and negative values. The RMS is defined as:$$ RMS = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2} $$Similar to the moving-average measure, the moving RMS is defined as:$$ y[i] = \sqrt{\sum_{j=0}^{m-1} (x[i+j])^2} \;\;\;\; for \;\;\; i=1, \; \dots, \; n-m+1 $$Here are two implementations for a moving-RMS filter (very similar to the moving-average filter): ###Code import numpy as np from scipy.signal import filtfilt def moving_rmsV1(x, window): """Moving RMS of 'x' with window size 'window'.""" window = 2*window + 1 return np.sqrt(np.convolve(x*x, np.ones(window)/window, 'same')) def moving_rmsV2(x, window): """Moving RMS of 'x' with window size 'window'.""" return np.sqrt(filtfilt(np.ones(window)/(window), [1], x*x)) ###Output _____no_output_____ ###Markdown Let's filter electromyographic data: ###Code # load data file with EMG signal data = np.loadtxt('./../data/emg.csv', delimiter=',') data = data[300:1000,:] time = data[:, 0] data = data[:, 1] - np.mean(data[:, 1]) window = 50 y1 = moving_rmsV1(data, window) y2 = moving_rmsV2(data, window) # plot fig, ax = plt.subplots(1, 1, figsize=(9, 5)) ax.plot(time, data, 'k-', linewidth=1, label = 'raw data') ax.plot(time, y1, 'r-', linewidth=2, label = 'moving RMS V1') ax.plot(time, y2, 'b-', linewidth=2, label = 'moving RMS V2') ax.legend(frameon=False, loc='upper right', fontsize=12) ax.set_xlabel("Time [s]") ax.set_ylabel("Amplitude") ax.set_ylim(-.1, .1) plt.show() ###Output _____no_output_____ ###Markdown Similar, but not the same, results.An advantage of the filter employing the convolution method is that it behaves better to abrupt changes in the data, such as when filtering data that change from a baseline at zero to large positive values. The filter with the `filter` or `filtfilt` function would introduce negative values in this case. Another advantage for the convolution method is that it is much faster: ###Code print('Filter with convolution:') %timeit moving_rmsV1(data, window) print('Filter with filtfilt:') %timeit moving_rmsV2(data, window) ###Output Filter with convolution: 27 µs ± 1.21 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) Filter with filtfilt: 343 µs ± 1.79 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) ###Markdown Moving-median filterThe moving-median filter is similar in concept than the other moving filters but uses the median instead. 
This filter has a sharper response to abrupt changes in the data than the moving-average filter: ###Code from scipy.signal import medfilt x = np.random.randn(300)/10 x[100:200] += 1 window = 11 y = np.convolve(x, np.ones(window)/window, 'same') y2 = medfilt(x, window) # plot fig, ax = plt.subplots(1, 1, figsize=(10, 4)) ax.plot(x, 'b-', linewidth=1, label = 'raw data') ax.plot(y, 'r-', linewidth=2, label = 'moving average') ax.plot(y2, 'g-', linewidth=2, label = 'moving median') ax.legend(frameon=False, loc='upper right', fontsize=12) ax.set_xlabel("Data #") ax.set_ylabel("Amplitude") plt.show() ###Output _____no_output_____ ###Markdown More moving filtersThe library [pandas](http://pandas.pydata.org/) has several types of [moving-filter functions](http://pandas.pydata.org/pandas-docs/stable/computation.htmlmoving-rolling-statistics-moments). Numerical differentiation of data with noiseHow to remove noise from a signal is rarely a trivial task and this problem gets worse with numerical differentiation of the data because the amplitudes of the noise with higher frequencies than the signal are amplified with differentiation (for each differentiation step, the SNR decreases). To demonstrate this problem, consider the following function representing some experimental data:$$ f = sin(\omega t) + 0.1sin(10\omega t) $$The first component, with large amplitude (1) and small frequency (1 Hz), represents the signal and the second component, with small amplitude (0.1) and large frequency (10 Hz), represents the noise. The signal-to-noise ratio (SNR) for these data is equal to (1/0.1)$^2$ = 100. Let's see what happens with the SNR for the first and second derivatives of $f$:$$ f\:'\:= \omega cos(\omega t) + \omega cos(10\omega t) $$$$ f\:''= -\omega^2 sin(\omega t) - 10\omega^2 sin(10\omega t) $$For the first derivative, SNR = 1, and for the second derivative, SNR = 0.01! The following plots illustrate this problem: ###Code t = np.arange(0,1,.01) w = 2*np.pi*1 # 1 Hz #signal and noise derivatives: s = np.sin(w*t); n = 0.1*np.sin(10*w*t) sd = w*np.cos(w*t); nd = w*np.cos(10*w*t) sdd = -w*w*np.sin(w*t); ndd = -w*w*10*np.sin(10*w*t) plt.rc('axes', labelsize=16, titlesize=16) plt.rc('xtick', labelsize=12) plt.rc('ytick', labelsize=12) fig, (ax1,ax2,ax3) = plt.subplots(3, 1, sharex = True, figsize=(8, 6)) ax1.set_title('Differentiation of signal and noise') ax1.plot(t, s, 'b.-', linewidth=1, label = 'signal') ax1.plot(t, n, 'g.-', linewidth=1, label = 'noise') ax1.plot(t, s+n, 'r.-', linewidth=2, label = 'signal+noise') ax2.plot(t, sd, 'b.-', linewidth=1) ax2.plot(t, nd, 'g.-', linewidth=1) ax2.plot(t, sd + nd, 'r.-', linewidth=2) ax3.plot(t, sdd, 'b.-', linewidth=1) ax3.plot(t, ndd, 'g.-', linewidth=1) ax3.plot(t, sdd + ndd, 'r.-', linewidth=2) ax1.legend(frameon=False, fontsize=10) ax1.set_ylabel('f') ax2.set_ylabel("f '") ax3.set_ylabel("f ''") ax3.set_xlabel("Time (s)") fig.tight_layout(pad=0) plt.show() ###Output _____no_output_____ ###Markdown Let's see how the use of a low-pass Butterworth filter can attenuate the high-frequency noise and how the derivative is affected. We will also calculate the [Fourier transform](http://en.wikipedia.org/wiki/Fourier_transform) of these data to look at their frequencies content. 
###Code from scipy import signal, fftpack freq = 100 t = np.arange(0,1,.01); w = 2*np.pi*1 # 1 Hz y = np.sin(w*t)+0.1*np.sin(10*w*t) # Butterworth filter # Correct the cutoff frequency for the number of passes in the filter C = 0.802 b, a = signal.butter(2, (5/C)/(freq/2), btype = 'low') y2 = signal.filtfilt(b, a, y) # 2nd derivative of the data ydd = np.diff(y,2)*freq*freq # raw data y2dd = np.diff(y2,2)*freq*freq # filtered data # frequency content yfft = np.abs(fftpack.fft(y))/(y.size/2) # raw data y2fft = np.abs(fftpack.fft(y2))/(y.size/2) # filtered data freqs = fftpack.fftfreq(y.size, 1./freq) yddfft = np.abs(fftpack.fft(ydd))/(ydd.size/2) y2ddfft = np.abs(fftpack.fft(y2dd))/(ydd.size/2) freqs2 = fftpack.fftfreq(ydd.size, 1./freq) ###Output _____no_output_____ ###Markdown And the plots: ###Code fig, ((ax1,ax2),(ax3,ax4)) = plt.subplots(2, 2, figsize=(11, 5)) ax1.set_title('Temporal domain', fontsize=14) ax1.plot(t, y, 'r', linewidth=2, label = 'raw data') ax1.plot(t, y2, 'b', linewidth=2, label = 'filtered @ 5 Hz') ax1.set_ylabel('f') ax1.legend(frameon=False, fontsize=12) ax2.set_title('Frequency domain', fontsize=14) ax2.plot(freqs[:int(yfft.size/4)], yfft[:int(yfft.size/4)],'r', linewidth=2,label='raw data') ax2.plot(freqs[:int(yfft.size/4)],y2fft[:int(yfft.size/4)],'b--',linewidth=2,label='filtered @ 5 Hz') ax2.set_ylabel('FFT(f)') ax2.legend(frameon=False, fontsize=12) ax3.plot(t[:-2], ydd, 'r', linewidth=2, label = 'raw') ax3.plot(t[:-2], y2dd, 'b', linewidth=2, label = 'filtered @ 5 Hz') ax3.set_xlabel('Time [s]') ax3.set_ylabel("f ''") ax4.plot(freqs[:int(yddfft.size/4)], yddfft[:int(yddfft.size/4)], 'r', linewidth=2, label = 'raw') ax4.plot(freqs[:int(yddfft.size/4)],y2ddfft[:int(yddfft.size/4)],'b--',linewidth=2, label = 'filtered @ 5 Hz') ax4.set_xlabel('Frequency [Hz]') ax4.set_ylabel("FFT(f '')"); ###Output _____no_output_____ ###Markdown Pezzack's benchmark dataIn 1977, Pezzack, Norman and Winter published a paper where they investigated the effects of differentiation and filtering processes on experimental data (the angle of a bar manipulated in space). Since then, these data have became a benchmark to test new algorithms. Let's work with these data (available at [http://isbweb.org/data/pezzack/index.html](http://isbweb.org/data/pezzack/index.html)). The data have the angular displacement measured by video and the angular acceleration directly measured by an accelerometer, which we will consider as the true acceleration. ###Code # load data file time, disp, disp2, aacc = np.loadtxt('./../data/Pezzack.txt', skiprows=6, unpack=True) dt = np.mean(np.diff(time)) # plot data fig, (ax1,ax2) = plt.subplots(1, 2, sharex = True, figsize=(11, 4)) plt.suptitle("Pezzack's benchmark data", fontsize=20) ax1.plot(time, disp, 'b.-') ax1.set_xlabel('Time [s]') ax1.set_ylabel('Angular displacement [rad]', fontsize=12) ax2.plot(time, aacc, 'g.-') ax2.set_xlabel('Time [s]') ax2.set_ylabel('Angular acceleration [rad/s$^2$]', fontsize=12) plt.subplots_adjust(wspace=0.3) ###Output _____no_output_____ ###Markdown The challenge is how to obtain the acceleration using the disclacement data dealing with the noise. 
A simple double differentiation of these data will amplify the noise: ###Code # acceleration using the 2-point forward difference algorithm: aacc2 = np.diff(disp,2)/dt/dt # aacc2 has 2 points less than aacc # plot data fig, ax1 = plt.subplots(1, 1, figsize=(11, 4)) plt.suptitle("Pezzack's benchmark data", fontsize=20) ax1.plot(time, aacc, 'g', label='Analog acceleration (true value)') ax1.plot(time[1:-1], aacc2, 'r', label='Acceleration by 2-point difference') ax1.set_xlabel('Time [s]', fontsize=12) ax1.set_ylabel('Angular acceleration [rad/s$^2$]', fontsize=12) plt.legend(frameon=False, fontsize=12, loc='upper left'); ###Output _____no_output_____ ###Markdown The source of noise in these data is due to random small errors in the digitization process which occur at each frame, because that the frequency content of the noise is up to half of the sampling frequency, higher the frequency content of the movement being analyzed. Let's try different filters ([Butterworth](http://en.wikipedia.org/wiki/Butterworth_filter), [Savitzky-Golay](http://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_smoothing_filter), and [spline](http://en.wikipedia.org/wiki/Spline_function)) to attenuate this noise. Both Savitzky-Golay and the spline functions are based on fitting polynomials to the data and they allow to differentiate the polynomials in order to get the derivatives of the data (instead of direct numerical differentiation of the data). The Savitzky-Golay and the spline functions have the following signatures: ```pythonsavgol_filter(x, window_length, polyorder, deriv=0, delta=1.0, axis=-1, mode='interp', cval=0.0) splrep(x, y, w=None, xb=None, xe=None, k=3, task=0, s=None, t=None, full_output=0, per=0, quiet=1)```And to evaluate the spline derivatives: ```pythonsplev(x, tck, der=0, ext=0)```And let's employ the [root-mean-square error (RMSE)](http://en.wikipedia.org/wiki/RMSE) metric to compare their performance: ###Code from scipy import signal, interpolate # Butterworth filter # Correct the cutoff frequency for the number of passes in the filter C = 0.802 b, a = signal.butter(2, (9/C)/((1/dt)/2)) dispBW = signal.filtfilt(b, a, disp) aaccBW = np.diff(dispBW, 2)/dt/dt # aaccBW has 2 points less than aacc # Add (pad) data to the extremities to avoid problems with filtering disp_pad = signal._arraytools.odd_ext(disp, n=11) time_pad = signal._arraytools.odd_ext(time, n=11) # Savitzky-Golay filter aaccSG = signal.savgol_filter(disp_pad,window_length=5,polyorder=3,deriv=2,delta=dt)[11:-11] # Spline smoothing tck = interpolate.splrep(time_pad, disp_pad, k=5, s=0.15*np.var(disp_pad)/np.size(disp_pad)) aaccSP = interpolate.splev(time_pad, tck, der=2)[11:-11] # RMSE: rmseBW = np.sqrt(np.mean((aaccBW-aacc[1:-1])**2)) rmseSG = np.sqrt(np.mean((aaccSG-aacc)**2)) rmseSP = np.sqrt(np.mean((aaccSP-aacc)**2)) ###Output _____no_output_____ ###Markdown And the plots: ###Code # plot data fig, ax1 = plt.subplots(1, 1, figsize=(11, 4)) plt.suptitle("Pezzack's benchmark data", fontsize=20) ax1.plot(time, aacc, 'g', label='Analog acceleration: (True value)') ax1.plot(time[1:-1], aaccBW, 'r', label='Butterworth 9 Hz: RMSE = %0.2f' %rmseBW) ax1.plot(time,aaccSG,'b', label='Savitzky-Golay 5 points: RMSE = %0.2f' %rmseSG) ax1.plot(time,aaccSP,'m', label='Quintic spline, s=0.0005: RMSE = %0.2f' %rmseSP) ax1.set_xlabel('Time [s]') ax1.set_ylabel('Angular acceleration [rad/s$^2$]', fontsize=12) plt.legend(frameon=False, fontsize=12, loc='upper left'); ###Output _____no_output_____ ###Markdown At this case, the Butterworth, 
Savitzky-Golay, and spline filters produced similar results with good fits to the original curve. However, with all of them, particularly with the spline smoothing, it is necessary some degree of tuning for choosing the right parameters. The Butterworth filter is the easiest one because the cutoff frequency choice sound more familiar for human movement analysis. Kinematics of a ball tossLet's now analyse the kinematic data of a ball tossed to the space. These data were obtained using [Tracker](http://www.cabrillo.edu/~dbrown/tracker/), which is a free video analysis and modeling tool built on the [Open Source Physics](http://www.opensourcephysics.org/) (OSP) Java framework. The data are from the analysis of the video *balltossout.mov* from the mechanics video collection which can be obtained in the Tracker website. ###Code t, x, y = np.loadtxt('./../data/balltoss.txt', skiprows=2, unpack=True) dt = np.mean(np.diff(t)) print('Time interval: %f s' %dt) print('x and y values:') x, y plt.rc('axes', labelsize=14) plt.rc('xtick', labelsize=14) plt.rc('ytick', labelsize=14) fig, (ax1,ax2,ax3) = plt.subplots(1, 3, figsize=(12, 3)) plt.suptitle('Kinematics of a ball toss', fontsize=20, y=1.05) ax1.plot(x, y, 'go') ax1.set_ylabel('y [m]') ax1.set_xlabel('x [m]') ax2.plot(t, x, 'bo') ax2.set_ylabel('x [m]') ax2.set_xlabel('Time [s]') ax3.plot(t, y, 'ro') ax3.set_ylabel('y [m]') ax3.set_xlabel('Time [s]') plt.subplots_adjust(wspace=0.35) ###Output _____no_output_____ ###Markdown Calculate the velocity and acceleration numerically: ###Code # forward difference algorithm: vx, vy = np.diff(x)/dt, np.diff(y)/dt ax, ay = np.diff(vx)/dt, np.diff(vy)/dt # central difference algorithm: vx2, vy2 = (x[2:]-x[:-2])/(2*dt), (y[2:]-y[:-2])/(2*dt) ax2, ay2 = (vx2[2:]-vx2[:-2])/(2*dt), (vy2[2:]-vy2[:-2])/(2*dt) fig, axarr = plt.subplots(2, 3, sharex = True, figsize=(11, 6)) axarr[0,0].plot(t, x, 'bo') axarr[0,0].set_ylabel('x [m]') axarr[0,1].plot(t[:-1], vx, 'bo', label='forward difference'); axarr[0,1].set_ylabel('vx [m/s]') axarr[0,1].plot(t[1:-1], vx2, 'm+', markersize=10, label='central difference') axarr[0,1].legend(frameon=False, fontsize=10, loc='upper left', numpoints=1) axarr[0,2].plot(t[:-2], ax, 'bo') axarr[0,2].set_ylabel('ax [m/s$^2$]') axarr[0,2].plot(t[2:-2], ax2, 'm+', markersize=10) axarr[1,0].plot(t, y, 'ro') axarr[1,0].set_ylabel('y [m]') axarr[1,1].plot(t[:-1], vy, 'ro') axarr[1,1].set_ylabel('vy [m/s]') axarr[1,1].plot(t[1:-1], vy2, 'm+', markersize=10) axarr[1,2].plot(t[:-2], ay, 'ro') axarr[1,2].set_ylabel('ay [m/s$^2$]') axarr[1,2].plot(t[2:-2], ay2, 'm+', markersize=10) axarr[1,1].set_xlabel('Time [s]') plt.tight_layout(w_pad=-.5, h_pad=0) plt.suptitle('Kinematics of a ball toss', fontsize=20, y=1.05); ###Output _____no_output_____ ###Markdown We can observe the noise, particularly in the derivatives of the data. For example, the vertical acceleration of the ball should be constant, approximately g=9.8 m/s$^2$. To estimate the acceleration, we can get rid off the noise by filtering the data or, because we know the physics of the phenomenon, we can fit a model to the data. Let's try the latter option. ###Code # Model: y = y0 + v0*t + 1/2*g*t^2 # fit a second order polynomial to the data p = np.polyfit(t, y, 2) print('g = %0.2f m/s2' % (2*p[0])) ###Output g = -9.98 m/s2 ###Markdown Data filtering in signal processingMarcos Duarte Here will see an introduction to data filtering and the most basic filters typically used in signal processing of biomechanical data. 
You should be familiar with the [basic properties of signals](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/SignalBasicProperties.ipynb) before proceeding. Filter and smoothingIn data acquisition with an instrument, it's common that the noise has higher frequencies and lower amplitudes than the desired signal. To remove this noise from the signal, a procedure known as filtering or smoothing is employed in the signal processing. Filtering is a process to attenuate from a signal some unwanted component or feature. A filter usually removes certain frequency components from the data according to its frequency response. [Frequency response](http://en.wikipedia.org/wiki/Frequency_response) is the quantitative measure of the output spectrum of a system or device in response to a stimulus, and is used to characterize the dynamics of the system. [Smoothing](http://en.wikipedia.org/wiki/Smoothing) is the process of removal of local (at short scale) fluctuations in the data while preserving a more global pattern in the data (such local variations could be noise or just a short scale phenomenon that is not interesting). A filter with a low-pass frequency response performs smoothing. With respect to the filter implementation, it can be classified as [analog filter](http://en.wikipedia.org/wiki/Passive_analogue_filter_development) or [digital filter](http://en.wikipedia.org/wiki/Digital_filter). An analog filter is an electronic circuit that performs filtering of the input electrical signal (analog data) and outputs a filtered electrical signal (analog data). A simple analog filter can be implemmented with a electronic circuit with a resistor and a capacitor. A digital filter, is a system that implement the filtering of a digital data (time-discrete data). Example: the moving-average filterAn example of a low-pass (smoothing) filter is the moving average, which is performed taking the arithmetic mean of subsequences of $m$ terms of the data. For instance, the moving averages with window sizes (m) equal to 2 and 3 are:$$ \begin{array}{}&y_{MA(2)} = \frac{1}{2}[x_1+x_2,\; x_2+x_3,\; \cdots,\; x_{n-1}+x_n] \\&y_{MA(3)} = \frac{1}{3}[x_1+x_2+x_3,\; x_2+x_3+x_4,\; \cdots,\; x_{n-2}+x_{n-1}+x_n]\end{array} $$Which has the general formula:$$ y[i] = \sum_{j=0}^{m-1} x[i+j] \;\;\;\; for \;\;\; i=1, \; \dots, \; n-m+1 $$Where $n$ is the number (length) of data.Let's implement a simple version of the moving average filter. 
First, let's import the necessary Python libraries and configure the environment: ###Code # Import the necessary libraries import numpy as np import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns #sns.set() sns.set_context("notebook", font_scale=1.2, rc={"lines.linewidth": 2, "lines.markersize": 10}) from IPython.display import HTML, display import sys sys.path.insert(1, r'./../functions') # add to pythonpath ###Output _____no_output_____ ###Markdown A naive moving-average function definition: ###Code def moving_average(x, window): """Moving average of 'x' with window size 'window'.""" y = np.empty(len(x)-window+1) for i in range(len(y)): y[i] = np.sum(x[i:i+window])/window return y ###Output _____no_output_____ ###Markdown Let's generate some data to test this function: ###Code signal = np.zeros(300) signal[100:200] += 1 noise = np.random.randn(300)/10 x = signal + noise window = 11 y = moving_average(x, window) # plot fig, ax = plt.subplots(1, 1, figsize=(8, 4)) ax.plot(x, 'b.-', linewidth=1, label = 'raw data') ax.plot(y, 'r.-', linewidth=2, label = 'moving average') ax.legend(frameon=False, loc='upper right', fontsize=10) ax.set_xlabel("Time [s]") ax.set_ylabel("Amplitude") plt.show() ###Output _____no_output_____ ###Markdown Later we will look on better ways to calculate the moving average. Digital filtersIn signal processing, a digital filter is a system that performs mathematical operations on a signal to modify certain aspects of that signal. A digital filter (in fact, a causal, linear time-invariant (LTI) digital filter) can be seen as the implementation of the following difference equation in the time domain:$$ \begin{array}{}y_n &= \;\; b_0x_n + \; b_1x_{n-1} + \cdots + b_Mx_{n-M} - \; a_1y_{n-1} - \cdots - a_Ny_{n-N} \\ & = \;\; \sum_{k=0}^M b_kx_{n-k} - \sum_{k=1}^N a_ky_{n-k}\end{array} $$Where the output $y$ is the filtered version of the input $x$, $a_k$ and $b_k$ are the filter coefficients (real values), and the order of the filter is the larger of N or M. This general equation is for a recursive filter where the filtered signal y is calculated based on current and previous values of $x$ and on previous values of $y$ (the own output values, because of this it is said to be a system with feedback). A filter that does not re-use its outputs as an input (and it is said to be a system with only feedforward) is called nonrecursive filter (the $a$ coefficients of the equation are zero). Recursive and nonrecursive filters are also known as infinite impulse response (IIR) and finite impulse response (FIR) filters, respectively. A filter with only the terms based on the previous values of $y$ is also known as an autoregressive (AR) filter. A filter with only the terms based on the current and previous values of $x$ is also known as an moving-average (MA) filter. The filter with all terms is also known as an autoregressive moving-average (ARMA) filter. The moving-average filter can be implemented by making $n$ $b$ coefficients each equals to $1/n$ and the $a$ coefficients equal to zero in the difference equation. Transfer function Another form to characterize a digital filter is by its [transfer function](http://en.wikipedia.org/wiki/Transfer_function). In simple terms, a transfer function is the ratio in the frequency domain between the input and output signals of a filter. 
For continuous-time input signal $x(t)$ and output $y(t)$, the transfer function $H(s)$ is given by the ratio between the [Laplace transforms](http://en.wikipedia.org/wiki/Laplace_transform) of input $x(t)$ and output $y(t)$:$$ H(s) = \frac{Y(s)}{X(s)} $$Where $s = \sigma + j\omega$; $j$ is the imaginary unit and $\omega$ is the angular frequency, $2\pi f$. In the steady-state response case, we can consider $\sigma=0$ and the Laplace transforms with complex arguments reduce to the [Fourier transforms](http://en.wikipedia.org/wiki/Fourier_transform) with real argument $\omega$. For discrete-time input signal $x(t)$ and output $y(t)$, the transfer function $H(z)$ will be given by the ratio between the [z-transforms](http://en.wikipedia.org/wiki/Z-transform) of input $x(t)$ and output $y(t)$, and the formalism is similar.The transfer function of a digital filter (in fact for a linear, time-invariant, and causal filter), obtained by taking the z-transform of the difference equation shown earlier, is given by:$$ H(z) = \frac{Y(z)}{X(z)} = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_N z^{-N}}{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_M z^{-M}} $$$$ H(z) = \frac{\sum_{k=0}^M b_kz^{-k}}{1 + \sum_{k=1}^N a_kz^{-k}} $$And the order of the filter is the larger of N or M. Similar to the difference equation, this transfer function is for a recursive (IIR) filter. If the $a$ coefficients are zero, the denominator is equal to one, and the filter becomes nonrecursive (FIR). The Fourier transformThe [Fourier transform](http://en.wikipedia.org/wiki/Fourier_transform) is a mathematical operation to transform a signal which is function of time, $g(t)$, into a signal which is function of frequency, $G(f)$, and it is defined by: $$ \mathcal{F}[g(t)] = G(f) = \int_{-\infty}^{\infty} g(t) e^{-j 2\pi f t} dt $$ Its inverse operation is: $$ \mathcal{F}^{-1}[G(f)] = g(t) = \int_{-\infty}^{\infty} G(f) e^{j 2\pi f t} df $$ The function $G(f)$ is the representation in the frequency domain of the time-domain signal, $g(t)$, and vice-versa. The functions $g(t)$ and $G(f)$ are referred to as a Fourier integral pair, or Fourier transform pair, or simply the Fourier pair. [See this text for an introduction to Fourier transform](http://www.thefouriertransform.com/transform/fourier.php). Types of filtersIn relation to the frequencies that are not removed from the data (and a boundary is specified by the critical or cutoff frequency), a filter can be a low-pass, high-pass, band-pass, and band-stop. The frequency response of such filters is illustrated in the next figure.Frequency response of filters (from Wikipedia). The critical or cutoff frequency for a filter is defined as the frequency where the power (the amplitude squared) of the filtered signal is half of the power of the input signal (or the output amplitude is 0.707 of the input amplitude). For instance, if a low-pass filter has a cutoff frequency of 10 Hz, it means that at 10 Hz the power of the filtered signal is 50% of the power of the original signal (and the output amplitude will be about 71% of the input amplitude). The gain of a filter (the ratio between the output and input powers) is usually expressed in the decibel (dB) unit. Decibel (dB) The decibel (dB) is a logarithmic unit used to express the ratio between two values. 
In the case of the filter gain measured in the decibel unit:$$Gain=10\,log\left(\frac{A_{out}^2}{A_{in}^2}\right)=20\,log\left(\frac{A_{out}}{A_{in}}\right)$$ Where $A_{out}$ and $A_{in}$ are respectively the amplitudes of the output (filtered) and input (raw) signals.For instance, the critical or cutoff frequency for a filter, the frequency where the power (the amplitude squared) of the filtered signal is half of the power of the input signal, is given in decibel as:$$ 10\,log\left(0.5\right) \approx -3 dB $$ If the power of the filtered signal is twice the power of the input signal, because of the logarithm, the gain in decibel is $10\,log\left(2\right) \approx 3 dB$. If the output power is attenuated by ten times, the gain is $10\,log\left(0.1\right) \approx -10 dB$, but if the output amplitude is attenuated by ten times, the gain is $20\,log\left(0.1\right) \approx -20 dB$, and if the output amplitude is amplified by ten times, the gain is $20 dB$. For each 10-fold variation in the amplitude ratio, there is an increase (or decrease) of $20 dB$.The decibel unit is useful to represent large variations in a measurement, for example, $-120 dB$ represents an attenuation of 1,000,000 times. A decibel is one tenth of a bel, a unit named in honor of Alexander Graham Bell. Butterworth filterA common filter employed in biomechanics and motor control fields is the [Butterworth filter](http://en.wikipedia.org/wiki/Butterworth_filter). This filter is used because its simple design, it has a more flat frequency response and linear phase response in the pass and stop bands, and it is simple to use. The Butterworth filter is a recursive filter (IIR) and both $a$ and $b$ filter coefficients are used in its implementation. Let's implement the Butterworth filter. We will use the function `butter` to calculate the filter coefficients: `butter(N, Wn, btype='low', analog=False, output='ba')` Where `N` is the order of the filter, `Wn` is the cutoff frequency specified as a fraction of the [Nyquist frequency](http://en.wikipedia.org/wiki/Nyquist_frequency) (half of the sampling frequency), and `btype` is the type of filter (it can be any of {'lowpass', 'highpass', 'bandpass', 'bandstop'}, the default is 'lowpass'). See the help of `butter` for more details. The filtering itself is performed with the function `lfilter`: `lfilter(b, a, x, axis=-1, zi=None)`Where `b` and `a` are the Butterworth coefficients calculated with the function `butter` and `x` is the variable with the data to be filtered. ###Code from scipy import signal freq = 100 t = np.arange(0, 1, .01) w = 2*np.pi*1 # 1 Hz y = np.sin(w*t) + 0.1*np.sin(10*w*t) # Butterworth filter b, a = signal.butter(2, 5/(freq/2), btype = 'low') y2 = signal.lfilter(b, a, y) # standard filter # plot fig, ax1 = plt.subplots(1, 1, figsize=(9, 4)) ax1.plot(t, y, 'r.-', linewidth=2, label = 'raw data') ax1.plot(t, y2, 'b.-', linewidth=2, label = 'filter @ 5 Hz') ax1.legend(frameon=False, fontsize=14) ax1.set_xlabel("Time [s]") ax1.set_ylabel("Amplitude") plt.show() ###Output _____no_output_____ ###Markdown The plot above shows that the Butterworth filter introduces a phase (a delay or lag in time) between the raw and the filtered signals. We will see how to account for that later. 
Let's look at the values of the `b` and `a` Butterworth filter coefficients for different orders and see a characteristic of them; from the general difference equation shown earlier, it follows that the sum of the `b` coefficients minus the sum of the `a` coefficients (excluding the first coefficient of `a`) is one: ###Code from scipy import signal print('Low-pass Butterworth filter coefficients') b, a = signal.butter(1, .1, btype = 'low') print('Order 1:', '\nb:', b, '\na:', a, '\nsum(b)-sum(a):', np.sum(b)-np.sum(a[1:])) b, a = signal.butter(2, .1, btype = 'low') print('Order 2:', '\nb:', b, '\na:', a, '\nsum(b)-sum(a):', np.sum(b)-np.sum(a[1:])) ###Output Low-pass Butterworth filter coefficients Order 1: b: [ 0.13672874 0.13672874] a: [ 1. -0.72654253] sum(b)-sum(a): 1.0 Order 2: b: [ 0.02008337 0.04016673 0.02008337] a: [ 1. -1.56101808 0.64135154] sum(b)-sum(a): 1.0 ###Markdown Bode plotHow much the amplitude of the filtered signal is attenuated in relation to the amplitude of the raw signal (gain or magnitude) as a function of frequency is given in the frequency response plot. The plots of the frequency and phase responses (the [bode plot](http://en.wikipedia.org/wiki/Bode_plot)) of this filter implementation (Butterworth, lowpass at 5 Hz, second-order) is shown below: ###Code from scipy import signal b, a = signal.butter(2, 5/(freq/2), btype = 'low') w, h = signal.freqz(b, a) # compute the frequency response of a digital filter angles = np.rad2deg(np.unwrap(np.angle(h))) # angle of the complex argument w = w/np.pi*freq/2 # angular frequency from radians to Hz h = 20*np.log10(np.absolute(h)) # in decibels fig, (ax1, ax2) = plt.subplots(2, 1, sharex = True, figsize=(9, 6)) ax1.plot(w, h, linewidth=2) ax1.set_ylim(-80, 1) ax1.set_title('Frequency response') ax1.set_ylabel("Magnitude [dB]") ax1.plot(5, -3.01, 'ro') ax11 = plt.axes([.16, .59, .2, .2]) # inset plot ax11.plot(w, h, linewidth=2) ax11.plot(5, -3.01, 'ro') ax11.set_ylim([-6, .5]) ax11.set_xlim([0, 10]) ax2.plot(w, angles, linewidth=2) ax2.set_title('Phase response') ax2.set_xlabel("Frequency [Hz]") ax2.set_ylabel("Phase [degrees]") ax2.plot(5, -90, 'ro') plt.show() ###Output _____no_output_____ ###Markdown The inset plot in the former figure shows that at the cutoff frequency (5 Hz), the power of the filtered signal is indeed attenuated by 3 dB. The phase-response plot shows that at the cutoff frequency, the Butterworth filter presents about 90 degrees of phase between the raw and filtered signals. A 5 Hz signal has a period of 0.2 s and 90 degrees of phase corresponds to 0.05 s of lag. Looking at the plot with the raw and filtered signals employing or not the phase correction, we can see that the delay is indeed about 0.05 s. Order of a filterThe order of a filter is related to the inclination of the 'wall' in the frequency response plot that attenuates or not the input signal at the vicinity of the cutoff frequency. A vertical wall exactly at the cutoff frequency would be ideal but this is impossble to implement. A Butterworth filter of first order attenuates 6 dB of the power of the signal each doubling of the frequency (per octave) or, which is the same, attenuates 20 dB each time the frequency varies by an order of 10 (per decade). In more technical terms, one simply says that a first-order filter rolls off -6 dB per octave or that rolls off -20 dB per decade. A second-order filter rolls off -12 dB per octave (-40 dB per decade), and so on, as shown in the next figure. 
###Code from butterworth_plot import butterworth_plot butterworth_plot() ###Output _____no_output_____ ###Markdown Butterworth filter with zero-phase shiftThe phase introduced by the Butterworth filter can be corrected in the digital implementation by cleverly filtering the data twice, once forward and once backwards. So, the lag introduced in the first filtering is zeroed by the same lag in the opposite direction at the second pass. The result is a zero-phase shift (or zero-phase lag) filtering. However, because after each pass the output power at the cutoff frequency is attenuated by two, by passing twice the second order Butterworth filter, the final output power will be attenuated by four. We have to correct the actual cutoff frequency value so that when employing the two passes, the filter will attenuate only by two. The following formula gives the desired cutoff frequency for a second-order Butterworth filter according to the number of passes, $n$, (see Winter, 2009):$$ C = \sqrt[4]{2^{\frac{1}{n}} - 1} $$For instance, for two passes, $n=2$, $ C=\sqrt[4]{2^{\frac{1}{2}} - 1} \approx 0.802 $. The actual filter cutoff frequency will be:$$ fc_{actual} = \frac{fc_{desired}}{C} $$For instance, for a second-order Butterworth filter with zero-phase shift and a desired 10 Hz cutoff frequency, the actual cutoff frequency should be 12.47 Hz. Let's implement this forward and backward filtering using the function `filtfilt` and compare with the single-pass filtering we just did it. ###Code from scipy.signal import butter, lfilter, filtfilt freq = 100 t = np.arange(0, 1, .01) w = 2*np.pi*1 # 1 Hz y = np.sin(w*t) + 0.1*np.sin(10*w*t) # Butterworth filter b, a = butter(2, 5/(freq/2), btype = 'low') y2 = lfilter(b, a, y) # standard filter # Correct the cutoff frequency for the number of passes in the filter C = 0.802 b, a = butter(2, (5/C)/(freq/2), btype = 'low') y3 = filtfilt(b, a, y) # filter with phase shift correction # plot fig, ax1 = plt.subplots(1, 1, figsize=(9, 4)) ax1.plot(t, y, 'r.-', linewidth=2, label = 'raw data') ax1.plot(t, y2, 'b.-', linewidth=2, label = 'filter @ 5 Hz') ax1.plot(t, y3, 'g.-', linewidth=2, label = 'filtfilt @ 5 Hz') ax1.legend(frameon=False, fontsize=14) ax1.set_xlabel("Time [s]") ax1.set_ylabel("Amplitude") plt.show() ###Output _____no_output_____ ###Markdown Moving-average filterHere are four different versions of a function to implement the moving-average filter: ###Code def moving_averageV1(x, window): """Moving average of 'x' with window size 'window'.""" y = np.empty(len(x)-window+1) for i in range(len(y)): y[i] = np.sum(x[i:i+window])/window return y def moving_averageV2(x, window): """Moving average of 'x' with window size 'window'.""" xsum = np.cumsum(x) xsum[window:] = xsum[window:] - xsum[:-window] return xsum[window-1:]/window def moving_averageV3(x, window): """Moving average of 'x' with window size 'window'.""" return np.convolve(x, np.ones(window)/window, 'same') from scipy.signal import lfilter def moving_averageV4(x, window): """Moving average of 'x' with window size 'window'.""" return lfilter(np.ones(window)/window, 1, x) ###Output _____no_output_____ ###Markdown Let's test these versions: ###Code x = np.random.randn(300)/10 x[100:200] += 1 window = 10 y1 = moving_averageV1(x, window) y2 = moving_averageV2(x, window) y3 = moving_averageV3(x, window) y4 = moving_averageV4(x, window) # plot fig, ax = plt.subplots(1, 1, figsize=(10, 5)) ax.plot(x, 'b-', linewidth=1, label = 'raw data') ax.plot(y1, 'y-', linewidth=2, label = 'moving average V1') 
ax.plot(y2, 'm--', linewidth=2, label = 'moving average V2') ax.plot(y3, 'r-', linewidth=2, label = 'moving average V3') ax.plot(y4, 'g-', linewidth=2, label = 'moving average V4') ax.legend(frameon=False, loc='upper right', fontsize=12) ax.set_xlabel("Data #") ax.set_ylabel("Amplitude") plt.show() ###Output _____no_output_____ ###Markdown A test of the performance of the four versions (using the magick IPython function `timeit`): ###Code %timeit moving_averageV1(x, window) %timeit moving_averageV2(x, window) %timeit moving_averageV3(x, window) %timeit moving_averageV4(x, window) ###Output 1000 loops, best of 3: 1.73 ms per loop 100000 loops, best of 3: 12.1 µs per loop 10000 loops, best of 3: 27.7 µs per loop 10000 loops, best of 3: 98 µs per loop ###Markdown The version with the cumsum function produces identical results to the first version of the moving average function but it is much faster (the fastest of the four versions). Only the version with the convolution function produces a result without a phase or lag between the input and output data, although we could improve the other versions to acount for that (for example, calculating the moving average of `x[i-window/2:i+window/2]` and using `filtfilt` instead of `lfilter`). And avoid as much as possible the use of loops in Python! The version with the for loop is about one hundred times slower than the other versions. Moving-RMS filterThe root-mean square (RMS) is a measure of the absolte amplitude of the data and it is useful when the data have positive and negative valuess. The RMS is defined as:$$ RMS = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2} $$Similar to the moving-average measure, the moving RMS is defined as:$$ y[i] = \sqrt{\sum_{j=0}^{m-1} (x[i+j])^2} \;\;\;\; for \;\;\; i=1, \; \dots, \; n-m+1 $$Here are two implementations for a moving-RMS filter (very similar to the moving-average filter): ###Code import numpy as np from scipy.signal import filtfilt def moving_rmsV1(x, window): """Moving RMS of 'x' with window size 'window'.""" window = 2*window + 1 return np.sqrt(np.convolve(x*x, np.ones(window)/window, 'same')) def moving_rmsV2(x, window): """Moving RMS of 'x' with window size 'window'.""" return np.sqrt(filtfilt(np.ones(window)/(window), [1], x*x)) ###Output _____no_output_____ ###Markdown Let's filter electromyographic data: ###Code # load data file with EMG signal data = np.loadtxt('./../data/emg.csv', delimiter=',') data = data[300:1000,:] time = data[:, 0] data = data[:, 1] - np.mean(data[:, 1]) window = 50 y1 = moving_rmsV1(data, window) y2 = moving_rmsV2(data, window) # plot fig, ax = plt.subplots(1, 1, figsize=(9, 5)) ax.plot(time, data, 'k-', linewidth=1, label = 'raw data') ax.plot(time, y1, 'r-', linewidth=2, label = 'moving RMS V1') ax.plot(time, y2, 'b-', linewidth=2, label = 'moving RMS V2') ax.legend(frameon=False, loc='upper right', fontsize=12) ax.set_xlabel("Time [s]") ax.set_ylabel("Amplitude") ax.set_ylim(-.1, .1) plt.show() ###Output _____no_output_____ ###Markdown Similar, but not the same, results.An advantage of the filter empolying the convolution method is that it behaves better to abrupt changes in the data, such as when filtering data that change from a baseline at zero to large positive values. The filter with the `filter` or `filtfilt` function would introduce negative values in this case. 
Another advantage for the convolution method is that it is much faster: ###Code print('Filter with convolution:') %timeit moving_rmsV1(data, window) print('Filter with filtfilt:') %timeit moving_rmsV2(data, window) ###Output Filter with convolution: 10000 loops, best of 3: 61.6 µs per loop Filter with filtfilt: 1000 loops, best of 3: 584 µs per loop ###Markdown Moving-median filterThe moving-median filter is similar in concept than the other moving filters but uses the median instead. This filter has a sharper response to abrupt changes in the data than the moving-average filter: ###Code from scipy.signal import medfilt x = np.random.randn(300)/10 x[100:200] += 1 window = 11 y = np.convolve(x, np.ones(window)/window, 'same') y2 = medfilt(x, window) # plot fig, ax = plt.subplots(1, 1, figsize=(10, 4)) ax.plot(x, 'b-', linewidth=1, label = 'raw data') ax.plot(y, 'r-', linewidth=2, label = 'moving average') ax.plot(y2, 'g-', linewidth=2, label = 'moving median') ax.legend(frameon=False, loc='upper right', fontsize=12) ax.set_xlabel("Data #") ax.set_ylabel("Amplitude") plt.show() ###Output _____no_output_____ ###Markdown More moving filtersThe library [pandas](http://pandas.pydata.org/) has several types of [moving-filter functions](http://pandas.pydata.org/pandas-docs/stable/computation.htmlmoving-rolling-statistics-moments). Numerical differentiation of data with noiseHow to remove noise from a signal is rarely a trivial task and this problem gets worse with numerical differentiation of the data because the amplitudes of the noise with higher frequencies than the signal are amplified with differentiation (for each differentiation step, the SNR decreases). To demonstrate this problem, consider the following function representing some experimental data:$$ f = sin(\omega t) + 0.1sin(10\omega t) $$The first component, with large amplitude (1) and small frequency (1 Hz), represents the signal and the second component, with small amplitude (0.1) and large frequency (10 Hz), represents the noise. The signal-to-noise ratio (SNR) for these data is equal to (1/0.1)$^2$ = 100. Let's see what happens with the SNR for the first and second derivatives of $f$:$$ f\:'\:= \omega cos(\omega t) + \omega cos(10\omega t) $$$$ f\:''= -\omega^2 sin(\omega t) - 10\omega^2 sin(10\omega t) $$For the first derivative, SNR = 1, and for the second derivative, SNR = 0.01! 
The following plots illustrate this problem: ###Code t = np.arange(0,1,.01) w = 2*np.pi*1 # 1 Hz #signal and noise derivatives: s = np.sin(w*t); n = 0.1*np.sin(10*w*t) sd = w*np.cos(w*t); nd = w*np.cos(10*w*t) sdd = -w*w*np.sin(w*t); ndd = -w*w*10*np.sin(10*w*t) plt.rc('axes', labelsize=16, titlesize=16) plt.rc('xtick', labelsize=12) plt.rc('ytick', labelsize=12) fig, (ax1,ax2,ax3) = plt.subplots(3, 1, sharex = True, figsize=(8, 6)) ax1.set_title('Differentiation of signal and noise') ax1.plot(t, s, 'b.-', linewidth=1, label = 'signal') ax1.plot(t, n, 'g.-', linewidth=1, label = 'noise') ax1.plot(t, s+n, 'r.-', linewidth=2, label = 'signal+noise') ax2.plot(t, sd, 'b.-', linewidth=1) ax2.plot(t, nd, 'g.-', linewidth=1) ax2.plot(t, sd + nd, 'r.-', linewidth=2) ax3.plot(t, sdd, 'b.-', linewidth=1) ax3.plot(t, ndd, 'g.-', linewidth=1) ax3.plot(t, sdd + ndd, 'r.-', linewidth=2) ax1.legend(frameon=False, fontsize=10) ax1.set_ylabel('f') ax2.set_ylabel("f '") ax3.set_ylabel("f ''") ax3.set_xlabel("Time (s)") fig.tight_layout(pad=0) plt.show() ###Output _____no_output_____ ###Markdown Let's see how the use of a low-pass Butterworth filter can attenuate the high-frequency noise and how the derivative is affected. We will also calculate the [Fourier transform](http://en.wikipedia.org/wiki/Fourier_transform) of these data to look at their frequencies content. ###Code from scipy import signal, fftpack freq = 100 t = np.arange(0,1,.01); w = 2*np.pi*1 # 1 Hz y = np.sin(w*t)+0.1*np.sin(10*w*t) # Butterworth filter # Correct the cutoff frequency for the number of passes in the filter C = 0.802 b, a = signal.butter(2, (5/C)/(freq/2), btype = 'low') y2 = signal.filtfilt(b, a, y) # 2nd derivative of the data ydd = np.diff(y,2)*freq*freq # raw data y2dd = np.diff(y2,2)*freq*freq # filtered data # frequency content yfft = np.abs(fftpack.fft(y))/(y.size/2) # raw data y2fft = np.abs(fftpack.fft(y2))/(y.size/2) # filtered data freqs = fftpack.fftfreq(y.size, 1./freq) yddfft = np.abs(fftpack.fft(ydd))/(ydd.size/2) y2ddfft = np.abs(fftpack.fft(y2dd))/(ydd.size/2) freqs2 = fftpack.fftfreq(ydd.size, 1./freq) ###Output _____no_output_____ ###Markdown And the plots: ###Code fig, ((ax1,ax2),(ax3,ax4)) = plt.subplots(2, 2, figsize=(11, 5)) ax1.set_title('Temporal domain', fontsize=14) ax1.plot(t, y, 'r', linewidth=2, label = 'raw data') ax1.plot(t, y2, 'b', linewidth=2, label = 'filtered @ 5 Hz') ax1.set_ylabel('f') ax1.legend(frameon=False, fontsize=12) ax2.set_title('Frequency domain', fontsize=14) ax2.plot(freqs[:yfft.size/4], yfft[:yfft.size/4],'r', linewidth=2,label='raw data') ax2.plot(freqs[:yfft.size/4],y2fft[:yfft.size/4],'b--',linewidth=2,label='filtered @ 5 Hz') ax2.set_ylabel('FFT(f)') ax2.legend(frameon=False, fontsize=12) ax3.plot(t[:-2], ydd, 'r', linewidth=2, label = 'raw') ax3.plot(t[:-2], y2dd, 'b', linewidth=2, label = 'filtered @ 5 Hz') ax3.set_xlabel('Time [s]') ax3.set_ylabel("f ''") ax4.plot(freqs[:yddfft.size/4], yddfft[:yddfft.size/4], 'r', linewidth=2, label = 'raw') ax4.plot(freqs[:yddfft.size/4],y2ddfft[:yddfft.size/4],'b--',linewidth=2, label = 'filtered @ 5 Hz') ax4.set_xlabel('Frequency [Hz]') ax4.set_ylabel("FFT(f '')"); ###Output C:\Miniconda3\lib\site-packages\ipykernel\__main__.py:10: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future C:\Miniconda3\lib\site-packages\ipykernel\__main__.py:11: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future 
C:\Miniconda3\lib\site-packages\ipykernel\__main__.py:20: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future C:\Miniconda3\lib\site-packages\ipykernel\__main__.py:21: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future ###Markdown Pezzack's benchmark dataIn 1977, Pezzack, Norman and Winter published a paper where they investigated the effects of differentiation and filtering processes on experimental data (the angle of a bar manipulated in space). Since then, these data have became a benchmark to test new algorithms. Let's work with these data (available at [http://isbweb.org/data/pezzack/index.html](http://isbweb.org/data/pezzack/index.html)). The data have the angular displacement measured by video and the angular acceleration directly measured by an accelerometer, which we will consider as the true acceleration. ###Code # load data file time, disp, disp2, aacc = np.loadtxt('./../data/Pezzack.txt', skiprows=6, unpack=True) dt = np.mean(np.diff(time)) # plot data fig, (ax1,ax2) = plt.subplots(1, 2, sharex = True, figsize=(11, 4)) plt.suptitle("Pezzack's benchmark data", fontsize=20) ax1.plot(time, disp, 'b.-') ax1.set_xlabel('Time [s]') ax1.set_ylabel('Angular displacement [rad]', fontsize=12) ax2.plot(time, aacc, 'g.-') ax2.set_xlabel('Time [s]') ax2.set_ylabel('Angular acceleration [rad/s$^2$]', fontsize=12) plt.subplots_adjust(wspace=0.3) ###Output _____no_output_____ ###Markdown The challenge is how to obtain the acceleration using the disclacement data dealing with the noise. A simple double differentiation of these data will amplify the noise: ###Code # acceleration using the 2-point forward difference algorithm: aacc2 = np.diff(disp,2)/dt/dt # aacc2 has 2 points less than aacc # plot data fig, ax1 = plt.subplots(1, 1, figsize=(11, 4)) plt.suptitle("Pezzack's benchmark data", fontsize=20) ax1.plot(time, aacc, 'g', label='Analog acceleration (true value)') ax1.plot(time[1:-1], aacc2, 'r', label='Acceleration by 2-point difference') ax1.set_xlabel('Time [s]', fontsize=12) ax1.set_ylabel('Angular acceleration [rad/s$^2$]', fontsize=12) plt.legend(frameon=False, fontsize=12, loc='upper left'); ###Output _____no_output_____ ###Markdown The source of noise in these data is due to random small errors in the digitization process which occur at each frame, because that the frequency content of the noise is up to half of the sampling frequency, higher the frequency content of the movement being analyzed. Let's try different filters ([Butterworth](http://en.wikipedia.org/wiki/Butterworth_filter), [Savitzky-Golay](http://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_smoothing_filter), and [spline](http://en.wikipedia.org/wiki/Spline_function)) to attenuate this noise. Both Savitzky-Golay and the spline functions are based on fitting polynomials to the data and they allow to differentiate the polynomials in order to get the derivatives of the data (instead of direct numerical differentiation of the data). 
The Savitzky-Golay and the spline functions have the following signatures: `savgol_filter(x, window_length, polyorder, deriv=0, delta=1.0, axis=-1, mode='interp', cval=0.0)` `splrep(x, y, w=None, xb=None, xe=None, k=3, task=0, s=None, t=None, full_output=0, per=0, quiet=1)` And to evaluate the spline derivatives: `splev(x, tck, der=0, ext=0)`And let's employ the [root-mean-square error (RMSE)](http://en.wikipedia.org/wiki/RMSE) metric to compare their performance: ###Code from scipy import signal, interpolate # Butterworth filter # Correct the cutoff frequency for the number of passes in the filter C = 0.802 b, a = signal.butter(2, (9/C)/((1/dt)/2)) dispBW = signal.filtfilt(b, a, disp) aaccBW = np.diff(dispBW, 2)/dt/dt # aaccBW has 2 points less than aacc # Add (pad) data to the extremities to avoid problems with filtering disp_pad = signal._arraytools.odd_ext(disp, n=11) time_pad = signal._arraytools.odd_ext(time, n=11) # Savitzky-Golay filter aaccSG = signal.savgol_filter(disp_pad,window_length=5,polyorder=3,deriv=2,delta=dt)[11:-11] # Spline smoothing tck = interpolate.splrep(time_pad, disp_pad, k=5, s=0.15*np.var(disp_pad)/np.size(disp_pad)) aaccSP = interpolate.splev(time_pad, tck, der=2)[11:-11] # RMSE: rmseBW = np.sqrt(np.mean((aaccBW-aacc[1:-1])**2)) rmseSG = np.sqrt(np.mean((aaccSG-aacc)**2)) rmseSP = np.sqrt(np.mean((aaccSP-aacc)**2)) ###Output _____no_output_____ ###Markdown And the plots: ###Code # plot data fig, ax1 = plt.subplots(1, 1, figsize=(11, 4)) plt.suptitle("Pezzack's benchmark data", fontsize=20) ax1.plot(time, aacc, 'g', label='Analog acceleration: (True value)') ax1.plot(time[1:-1], aaccBW, 'r', label='Butterworth 9 Hz: RMSE = %0.2f' %rmseBW) ax1.plot(time,aaccSG,'b', label='Savitzky-Golay 5 points: RMSE = %0.2f' %rmseSG) ax1.plot(time,aaccSP,'m', label='Quintic spline, s=0.0005: RMSE = %0.2f' %rmseSP) ax1.set_xlabel('Time [s]') ax1.set_ylabel('Angular acceleration [rad/s$^2$]', fontsize=12) plt.legend(frameon=False, fontsize=12, loc='upper left'); ###Output _____no_output_____ ###Markdown At this case, the Butterworth, Savitzky-Golay, and spline filters produced similar results with good fits to the original curve. However, with all of them, particularly with the spline smoothing, it is necessary some degree of tuning for choosing the right parameters. The Butterworth filter is the easiest one because the cutoff frequency choice sound more familiar for human movement analysis. Kinematics of a ball tossLet's now analyse the kinematic data of a ball tossed to the space. These data were obtained using [Tracker](http://www.cabrillo.edu/~dbrown/tracker/), which is a free video analysis and modeling tool built on the [Open Source Physics](http://www.opensourcephysics.org/) (OSP) Java framework. The data are from the analysis of the video *balltossout.mov* from the mechanics video collection which can be obtained in the Tracker website. 
###Code t, x, y = np.loadtxt('./../data/balltoss.txt', skiprows=2, unpack=True) dt = np.mean(np.diff(t)) print('Time interval: %f s' %dt) print('x and y values:') x, y plt.rc('axes', labelsize=14) plt.rc('xtick', labelsize=14) plt.rc('ytick', labelsize=14) fig, (ax1,ax2,ax3) = plt.subplots(1, 3, figsize=(12, 3)) plt.suptitle('Kinematics of a ball toss', fontsize=20, y=1.05) ax1.plot(x, y, 'go') ax1.set_ylabel('y [m]') ax1.set_xlabel('x [m]') ax2.plot(t, x, 'bo') ax2.set_ylabel('x [m]') ax2.set_xlabel('Time [s]') ax3.plot(t, y, 'ro') ax3.set_ylabel('y [m]') ax3.set_xlabel('Time [s]') plt.subplots_adjust(wspace=0.35) ###Output _____no_output_____ ###Markdown Calculate the velocity and acceleration numerically: ###Code # forward difference algorithm: vx, vy = np.diff(x)/dt, np.diff(y)/dt ax, ay = np.diff(vx)/dt, np.diff(vy)/dt # central difference algorithm: vx2, vy2 = (x[2:]-x[:-2])/(2*dt), (y[2:]-y[:-2])/(2*dt) ax2, ay2 = (vx2[2:]-vx2[:-2])/(2*dt), (vy2[2:]-vy2[:-2])/(2*dt) fig, axarr = plt.subplots(2, 3, sharex = True, figsize=(11, 6)) axarr[0,0].plot(t, x, 'bo') axarr[0,0].set_ylabel('x [m]') axarr[0,1].plot(t[:-1], vx, 'bo', label='forward difference'); axarr[0,1].set_ylabel('vx [m/s]') axarr[0,1].plot(t[1:-1], vx2, 'm+', markersize=10, label='central difference') axarr[0,1].legend(frameon=False, fontsize=10, loc='upper left', numpoints=1) axarr[0,2].plot(t[:-2], ax, 'bo') axarr[0,2].set_ylabel('ax [m/s$^2$]') axarr[0,2].plot(t[2:-2], ax2, 'm+', markersize=10) axarr[1,0].plot(t, y, 'ro') axarr[1,0].set_ylabel('y [m]') axarr[1,1].plot(t[:-1], vy, 'ro') axarr[1,1].set_ylabel('vy [m/s]') axarr[1,1].plot(t[1:-1], vy2, 'm+', markersize=10) axarr[1,2].plot(t[:-2], ay, 'ro') axarr[1,2].set_ylabel('ay [m/s$^2$]') axarr[1,2].plot(t[2:-2], ay2, 'm+', markersize=10) axarr[1,1].set_xlabel('Time [s]') plt.tight_layout(w_pad=-.5, h_pad=0) plt.suptitle('Kinematics of a ball toss', fontsize=20, y=1.05); ###Output _____no_output_____ ###Markdown We can observe the noise, particularly in the derivatives of the data. For example, the vertical acceleration of the ball should be constant, approximately g=9.8 m/s$^2$. To estimate the acceleration, we can get rid of the noise by filtering the data or, because we know the physics of the phenomenon, we can fit a model to the data. Let's try the latter option. ###Code # Model: y = y0 + v0*t + 1/2*g*t^2 # fit a second order polynomial to the data p = np.polyfit(t, y, 2) print('g = %0.2f m/s2' % (2*p[0])) ###Output g = -9.98 m/s2
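###Markdown As a complement, we can differentiate the fitted polynomial itself to obtain smooth, model-based velocity and acceleration values (a minimal sketch, assuming the `t` and `y` arrays loaded above; `np.polyder` returns the coefficients of the derivative polynomial): ###Code
# differentiate the fitted second-order model analytically
p = np.polyfit(t, y, 2)           # y(t) = p[0]*t**2 + p[1]*t + p[2]
p_vel = np.polyder(p, 1)          # first derivative -> velocity model
p_acc = np.polyder(p, 2)          # second derivative -> acceleration model (a constant)
vy_model = np.polyval(p_vel, t)   # smooth vertical velocity at the measured instants
print('vertical velocity at t=0: %0.2f m/s' % vy_model[0])
print('model acceleration: %0.2f m/s2' % np.polyval(p_acc, t)[0])
###Output _____no_output_____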
experiments/02_inspect_saved_model.ipynb
###Markdown Inspect the saved model ###Code #!saved_model_cli show --all --dir saved_model/mobilenetv2 !saved_model_cli show --dir saved_model/mobilenetv2 --tag_set serve --signature_def serving_default ###Output _____no_output_____
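###Markdown The same signature information can also be read programmatically. This is only a sketch; it assumes a TensorFlow 2.x installation and that the SavedModel really lives under `saved_model/mobilenetv2` as above: ###Code
# programmatic alternative to saved_model_cli (assumes TensorFlow 2.x is available)
import tensorflow as tf

loaded = tf.saved_model.load("saved_model/mobilenetv2")
print(list(loaded.signatures.keys()))      # e.g. ['serving_default']

infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)    # expected input tensor specs
print(infer.structured_outputs)            # output tensor specs
###Output _____no_output_____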
API/PCOS prediction/Pcos_analysis.ipynb
###Markdown Data Visualisation and Pre-processing ###Code data_fer.head() data_fer.describe() data_fer.columns data = pd.merge(data_wo_fer,data_fer, on='Patient File No.', suffixes={'','_y'},how='left') data =data.drop(['Unnamed: 44', 'Sl. No_y', 'PCOS (Y/N)_y', ' I beta-HCG(mIU/mL)_y', 'II beta-HCG(mIU/mL)_y', 'AMH(ng/mL)_y'], axis=1) data.head() data.mean(axis = 0, skipna = True) data.columns data=data.drop(["Pulse rate(bpm) "],axis=1) data.info() data['AMH(ng/mL)'].value_counts() data['AMH(ng/mL)'] = pd.to_numeric(data['AMH(ng/mL)'],errors='coerce') data=data.drop(['II beta-HCG(mIU/mL)'],axis=1) def clean_dataset(df): assert isinstance(df, pd.DataFrame), "df needs to be a pd.DataFrame" df.dropna(inplace=True) indices_to_keep = ~df.isin([np.nan, np.inf, -np.inf]).any(1) return df[indices_to_keep].astype(np.float64) clean_dataset(data) X=data.drop(["PCOS (Y/N)","Sl. No","Patient File No."],axis = 1) y=data["PCOS (Y/N)"] from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import chi2 X.info() np.all(np.isfinite(X)) def clean_dataset(df): assert isinstance(df, pd.DataFrame), "df needs to be a pd.DataFrame" df.dropna(inplace=True) indices_to_keep = ~df.isin([np.nan, np.inf, -np.inf]).any(1) return df[indices_to_keep].astype(np.float64) clean_dataset(X) ###Output _____no_output_____ ###Markdown Feature Selection 2 different methods are used to find the best weighted features ###Code bestfeatures = SelectKBest(score_func=chi2, k=10) fit = bestfeatures.fit(X,y) dfscores = pd.DataFrame(fit.scores_) dfcolumns = pd.DataFrame(X.columns) featureScores = pd.concat([dfcolumns,dfscores],axis=1) featureScores.columns = ['Specs','Score'] #naming the dataframe columns featureScores print(featureScores.nlargest(15,'Score')) #print the 15 best features from sklearn.ensemble import ExtraTreesClassifier import matplotlib.pyplot as plt model = ExtraTreesClassifier() X= pd.get_dummies(X) model.fit(X,y) print(model.feature_importances_) #use inbuilt class feature_importances of tree based classifiers #plot graph of feature importances for better visualization feat_importances = pd.Series(model.feature_importances_, index=X.columns) feat_importances.nlargest(15).plot(kind='barh') plt.show() import seaborn as sns #get correlations of each feature in the dataset corrmat = data.corr() top_corr_features = corrmat.index plt.figure(figsize=(20,20)) #plot heat map g=sns.heatmap(data[top_corr_features].corr(),annot=True,cmap="RdYlGn") X=X.drop(["Pregnant(Y/N)","No. of aborptions","Endometrium (mm)","Marraige Status (Yrs)","Hip(inch)","Waist(inch)","PRG(ng/mL)","BP _Systolic (mmHg)","BP _Diastolic (mmHg)"],axis=1) featureScores X.info() X=X.drop(["Avg. 
F size (R) (mm)","PRL(ng/mL)"," I beta-HCG(mIU/mL)"],axis=1) X=X.drop(["Waist:Hip Ratio","Blood Group","RR (breaths/min)","BMI"],axis=1) X=X.drop(["Vit D3 (ng/mL)"],axis=1) X=X.drop(["Hb(g/dl)"],axis=1) ###Output _____no_output_____ ###Markdown Model Building ###Code X_train,X_test, y_train, y_test = train_test_split(X,y, test_size=0.3) import xgboost as xgb model =xgb.XGBClassifier( learning_rate=0.06, colsample_bytree = 0.6, subsample = 0.8, n_estimators=200, max_depth=6, gamma=0) from sklearn import metrics model.fit(X_train, y_train) y_pred = model.predict(X_test) print("XG Boost: Accuracy:",metrics.accuracy_score(y_test, y_pred)) from sklearn.model_selection import train_test_split from sklearn.ensemble import AdaBoostClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.datasets import load_wine from sklearn.linear_model import LogisticRegression rfc1=RandomForestClassifier(criterion= "gini",max_depth= 10,max_features="sqrt",n_estimators=150) rfc1.fit(X_train, y_train) predictions=rfc1.predict(X_test) acccuracy_final = accuracy_score(y_test,predictions) acccuracy_final clf = RandomForestClassifier(criterion= "gini",max_depth= 5,max_features="sqrt",n_estimators=50) abc = AdaBoostClassifier(base_estimator=clf,n_estimators=50,learning_rate=0.8) model = abc.fit(X_train, y_train) y_pred = model.predict(X_test) from sklearn import metrics print("Accuracy:",metrics.accuracy_score(y_test, y_pred)) print("F1 score:",metrics.f1_score(y_test, y_pred,average='weighted')) ###Output Accuracy: 0.9444444444444444 F1 score: 0.94277677315652 ###Markdown Pickle file extraction ###Code import pickle pickle_out=open("classifier.pkl","wb") pickle.dump(model,pickle_out) pickle_out.close() from google.colab import files files.download('classifier.pkl') ###Output _____no_output_____
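###Markdown As a quick sanity check, the pickled classifier can be reloaded and used for a prediction on one held-out sample (a sketch, assuming the `X_test`/`y_test` split created above is still in memory): ###Code
# reload the saved classifier and score a single test sample
with open("classifier.pkl", "rb") as f:
    reloaded_model = pickle.load(f)

sample = X_test.iloc[[0]]                  # double brackets keep the (1, n_features) DataFrame shape
print("predicted label:", reloaded_model.predict(sample)[0])
print("true label     :", y_test.iloc[0])
###Output _____no_output_____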
pytorch_disco/vis_sensor_data.ipynb
###Markdown The aim of this note is to explore what should be done with the sensor depths from the closeup dataset. ###Code %matplotlib inline import numpy as np from archs.VGGNet import Feat2d import matplotlib.pyplot as plt net = Feat2d(touch_emb_dim=32) datapath = "/home/gauravp/new_closeup_dataset/cups/6e884701bfddd1f71e1138649f4c219/touch_data.npy" all_data = np.load(datapath, allow_pickle=True).item() all_data.keys() sensor_depths = all_data.sensor_depths sensor_imgs = all_data.sensor_imgs # now plot a few of those to check how they look fig_size = 2 * np.asarray([10, 2]) fig, axes = plt.subplots(nrows=2, ncols=10, figsize=fig_size, sharex=True, sharey=True) idxs = np.random.permutation(len(sensor_depths)) chosen_depths = sensor_depths[idxs[:10]] chosen_imgs = sensor_imgs[idxs[:10]] for i in range(len(chosen_depths)): axes[0, i].imshow(chosen_imgs[i]) axes[1, i].imshow(chosen_depths[i]) # plot the histogram of depths for these chosen ones; since they are all very close, the histogram should be very close to zero plt.hist(chosen_depths.flatten()) # okay now let's clip the depth between 0 and 10 and see if the histogram changes in value clipped_depths = np.clip(chosen_depths, 0, 10) print(clipped_depths.max()) print(clipped_depths.min()) print(clipped_depths.shape) plt.hist(clipped_depths.flatten()) # Adam suggested rescaling the depth images to size 16x16; I will do it and see the effect on the depth values and images import skimage.transform my_resize = lambda img: skimage.transform.resize(img, (16, 16), anti_aliasing=True) resized_chosen_depths = np.stack(list(map(my_resize, chosen_depths))) print(resized_chosen_depths.shape) ###Output (10, 16, 16) ###Markdown Resize Observations1. The images look the same but are more ragged along the edges; did I not set anti-aliasing to True? ###Code # visualize the images first and then check their histogram fig, axes = plt.subplots(nrows=1, ncols=10, figsize=fig_size, sharex=True, sharey=True) for i in range(len(resized_chosen_depths)): axes[i].imshow(resized_chosen_depths[i]) #plot the histogram too to check the values are not changed drastically # of course the counts are lower since the images were resized, but the data distribution has not changed # drastically, so it is a good sign plt.hist(resized_chosen_depths.flatten()) # how to pass them through the VGG net from torch.utils.data.sampler import BatchSampler, SubsetRandomSampler import torch ###Output _____no_output_____ ###Markdown Strategy 1:Trying to pass all of them together in one whole batch ###Code net = net.cuda() torch_sensor_depths = np.expand_dims(sensor_depths, 1) torch_sensor_depths = torch.from_numpy(torch_sensor_depths).float() # now I will keep the batch size of 1024 and try to do the forward pass sampler = BatchSampler( SubsetRandomSampler(range(len(torch_sensor_depths))), 1024, drop_last=False) %timeit outputs = torch.zeros(len(sensor_depths), 32) for idxs in sampler: c_depths = torch_sensor_depths[idxs] outputs[idxs] = net(c_depths.cuda()) ###Output _____no_output_____
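###Markdown Since this is inference only, the same batched pass can be wrapped in `torch.no_grad()` so no gradients are stored (a sketch, assuming `net`, `torch_sensor_depths` and `sampler` from the cells above; the embeddings are moved back to the CPU before being written into the output tensor): ###Code
# batched forward pass without gradient tracking (saves memory at inference time)
outputs = torch.zeros(len(torch_sensor_depths), 32)
with torch.no_grad():
    for idxs in sampler:
        c_depths = torch_sensor_depths[idxs]
        outputs[idxs] = net(c_depths.cuda()).cpu()
print(outputs.shape)
###Output _____no_output_____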
Python/.ipynb_checkpoints/Matplotlib+Seaborn-checkpoint.ipynb
###Markdown Matplotlib+SeabornMatplotlib is a third-party visualization and plotting library for Python; the name is short for MATLAB+Plot+Library and its plotting style is similar to MATLAB. Seaborn builds on top of Matplotlib: it provides modern defaults for figure styles and colors, defines many simple high-level functions for common statistical charts, and integrates closely with Pandas.DataFrame. Versions ###Code import matplotlib import seaborn matplotlib.__version__,seaborn.__version__ ###Output _____no_output_____ ###Markdown AliasesFollowing convention, we import Matplotlib under the alias `mpl`, Seaborn as `sns`, and matplotlib.pyplot as `plt`, which is the most commonly used plotting interface in matplotlib. ###Code import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns sns.set() %matplotlib inline ###Output _____no_output_____ ###Markdown Adding `%matplotlib inline` enables static figures in the Notebook; adding `sns.set()` gives the figures a better visual style. FigureIn Matplotlib, a plot is a `figure` object, and a `Figure` object can contain one or more `Axes` objects. Each `Axes (ax)` object is a plotting area with its own coordinate system; the relationship is shown in the figure below:![Figure (image from the Matplotlib website)](../image/figure.png)Here, `Title` is the figure title, `Axis` is a coordinate axis, `Label` is an axis label, `Tick` is a tick mark, and `Tick Label` is a tick annotation. ###Code # generate data import numpy as np x = np.linspace(1,10,50) y = np.sin(x) # create an empty figure object fig = plt.figure() # add labels plt.xlabel('x') plt.ylabel('y') plt.title('y = sin(x)') # plot plt.plot(x, y) plt.show() ###Output _____no_output_____ ###Markdown Exporting figuresIn Matplotlib, a figure can be saved to a file with the `savefig()` command: ###Code fig.savefig('sin(x).png') ###Output _____no_output_____ ###Markdown In the Notebook, an image can be loaded with the `IPython.display.Image` command: ###Code from IPython.display import Image Image('sin(x).png') ###Output _____no_output_____ ###Markdown Common plot types Line plotA line plot can be drawn with the `plt.plot()` method: ###Code x = np.linspace(1,10,30) y = np.sin(x) plt.plot(x,y) ###Output _____no_output_____ ###Markdown Scatter plotA scatter plot can be drawn with the `plt.plot()` and `plt.scatter()` methods: ###Code plt.plot(x, y, 'o') #plt.scatter(x, y, marker='o') ###Output _____no_output_____ ###Markdown Bar chartA vertical bar chart can be drawn with the `plt.bar()` method: ###Code x = list('ABCDE') y = [2,4,6,8,10] plt.bar(x,y) ###Output _____no_output_____ ###Markdown Horizontal bar chartA horizontal bar chart can be drawn with the plt.barh() method: ###Code x = list('ABCDE') y = [2,4,6,8,10] plt.barh(x,y) ###Output _____no_output_____ ###Markdown Seaborn high-level methodsSeaborn provides many high-level functions that make data analysis easier. sns.pairplot()The `sns.pairplot()` method draws the pairwise relationships in a dataset. ###Code # load the Iris dataset import pandas as pd iris = pd.read_csv('./data/iris.csv') sns.pairplot(iris) ###Output _____no_output_____ ###Markdown On top of this, the `hue` variable can be used to distinguish the species: ###Code sns.pairplot(iris ,hue ='species') ###Output _____no_output_____
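###Markdown Another commonly used high-level view is a heatmap of the pairwise feature correlations (a small sketch that reuses the `iris` DataFrame loaded above and its `species` column): ###Code
# correlation heatmap of the numeric Iris features
corr = iris.drop(columns=['species']).corr()
sns.heatmap(corr, annot=True, cmap='coolwarm')
plt.title('Iris feature correlations')
plt.show()
###Output _____no_output_____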
GettingStartedwithPython.ipynb
###Markdown Adapted by Sarah Connell from two notebooks created by [Nathan Kelber](http://nkelber.com) and Ted Lawless for [JSTOR Labs](https://labs.jstor.org/) under [Creative Commons CC BY License](https://creativecommons.org/licenses/by/4.0/) See [here](https://ithaka.github.io/tdm-notebooks/book/all-notebooks.html) for the original versions. Some contents were also adapted from a notebook created by [Jen Ferguson](https://library.northeastern.edu/about/library-staff-directory/jen-ferguson) for an earlier version of this workshop, taught at the Northeastern University Library Conference, summer 2020.___ Jupyter basicsWhat is this thing?**Jupyter combines interwoven text, data, and code, in a format that runs in a web browser.*** 'Jupyter' = JUlia, PYthon, and R - but it's really language-agnostic* Jupyter Notebooks let you run code immediately* Jupyter Notebooks can connect to a server that has the right environment/dependencies to execute the code successfully. In this case, we're using a server provided by ITHAKA, parent company to JSTOR/Portico. CellsSimilar to the way an essay is composed of paragraphs, Jupyter notebooks are composed of [cells](https://docs.tdm-pilot.org/key-terms/cell). A cell is like a container for a particular kind of content. There are essentially two kinds of content in Jupyter notebooks:1. [Markdown Cells](https://docs.tdm-pilot.org/key-terms/markdown-cell)—These can contain text, images, video, and the other kinds of explanatory content you might find on a regular website. The cell you're reading right now is a markdown cell.2. [Code Cells](https://docs.tdm-pilot.org/key-terms/code-cell)—These can contain code written in a variety of languages.How does this magic happen? There's a kernel, or computational engine, that runs the code inside your notebook. In this case our kernel is Python 3, as you can see in the top right corner of this page under the 'logout' button.Markdown allows you to have some basic formatting, like bolding and italicization; for more on markdown, see [this guide](https://www.markdownguide.org/cheat-sheet/).A **code cell** can be distinguished from a **markdown cell** by the fact that it contains a pair of brackets with a colon to its left. ###Code # This is a code cell ###Output _____no_output_____ ###Markdown A markdown cell provides information, but a code cell can be executed to perform an action. The code cell above does not contain any executable content, only a text comment. We can tell the text in the code cell is a comment because it is prefixed by a ````. In Python, if a line is prefaced by a ```` then that line is a comment and will not be executed if the code is run. In a code cell, comments are bluish-green in color. Hello World: Your First CodeIt is traditional in programming education to begin with a program that prints ``Hello World``. In Python, this is a simple task using the ``print()`` [function](https://docs.tdm-pilot.org/key-terms/function). A function is a block of code that performs some action—we will cover functions in more detail below. This function simply prints out whatever is inside the parentheses. We will **pass** the quotation "Hello World" to the ``print()`` function like so:```print("Hello World")```The code cell below has the ``print()`` function set up to get you started, so all you need to do is write the text you want to print (in this case, "Hello World") inside the quotation marks—make sure not to delete these! 
To **execute** or **run** our code, we have a couple of options: Option One![Image of play button](https://ithaka-labs.s3.amazonaws.com/static-files/images/tdm/tdmdocs/play_button.png) Click the code cell you wish to run and then push the "Run" button above. Depending on how your notebook is set up, you might not see the word "Run" but you should still see the same triangle symbol. Option TwoClick in the code cell you wish to run and press Ctrl + Enter (Windows) or Control + Return (OS X) on your keyboard.Don't worry, you can't break anything in this notebook! If you get stuck, you can always reset the notebook by resetting the kernel (the 'refresh' symbol in the toolbar). ###Code #Fill in "Hello World!" inside of the quotation marks below and then run this block of code print("") ###Output _____no_output_____ ###Markdown After your code runs, you'll receive any output and a number will appear in the pair of brackets to the left of the code cell to show the order the cell was run. If your code is complicated or takes some time to execute, an asterisk will be displayed in the pair of brackets while the code executes. Execute the code cell below which:1. Prints "Waiting 5 seconds..."2. Waits 5 seconds3. Prints "Done"As the program is running, watch the pair of brackets and you will see the code is running `[*]:`. ###Code print('Waiting 5 seconds...') import time time.sleep(5) print('Done') ###Output _____no_output_____ ###Markdown Notice that each time you run a code cell, the number increases in the pair of brackets. This keeps track of the order in which cells were run. Technically, you can run the cells in any order, but it is usually a good idea to run them sequentially from top to bottom, to avoid errors. Python basicsPython is a computer programming language that is widely used in data science and the digital humanities. We'll cover a few Python basics here, giving you the tools to understand some core concepts and run several pre-constructed analyses. If you'd like to learn more, the Constellate team has published [lessons](https://constellate.org/docs/) running from beginner to intermediate, and there are many additional resources online for learning Python, such as [Python for Everybody](https://www.py4e.com/). Expressions and OperatorsOne very simple form of Python programming is an [expression](https://docs.tdm-pilot.org/key-terms/expression) using an [operator](https://docs.tdm-pilot.org/key-terms/operator). For example, you might have a simple mathematical statement like:> 1 + 3The [operator](https://docs.tdm-pilot.org/key-terms/operator) in this case is `+`, sometimes called "plus" or "addition". This particular **[expression](https://docs.tdm-pilot.org/key-terms/expression)** is a combination of two **values** (1 and 3) and an **operator** (`+`). In Python, **expressions** are combinations of values, operators, functions, and variables (more on these last two soon!). In the code block below, try writing an expression that uses the addition operator. ###Code # Type an expression in this code block, adding your favorite number to the year you were born. #Then, run the code block. ###Output _____no_output_____ ###Markdown You can also do subtraction, multiplication, and division, among other mathematical operations. To multiply in Python, you use an asterisk (\*) and to divide, you use a forward slash (/). You are probably not going to replace the calculator on your phone with Python! 
But, this example is showing you something about how Python works: here, you are creating an **expression** by combining **values** with an **operator** and running the code to produce **output**. Data Types (Integers, Floats, and Strings)In the above examples, our expressions evaluated to single numerical value. Numerical values come in two basic forms:* [integer](https://docs.tdm-pilot.org/key-terms/integer)* [float](https://docs.tdm-pilot.org/key-terms/float) (or floating-point number)An [integer](https://docs.tdm-pilot.org/key-terms/integer), what we sometimes call a "whole number," is a number without a decimal point that can be positive or negative. When a value uses a decimal, it is called a [float](https://docs.tdm-pilot.org/key-terms/float) or floating-point number. Two numbers that are mathematically equivalent could be in two different data types. For example, mathematically 5 is equal to 5.0, yet the former is an integer while the latter is a float. Python can also help us manipulate text. A snippet of text in Python is called a [string](https://docs.tdm-pilot.org/key-terms/string). A string can be written with single or double quotes. A string can use letters, spaces, line breaks, and numbers. So 5 is an integer and 5.0 is a float, but '5' and '5.0' are strings. A string can also be blank, such as ''. |Familiar Name | Programming name | Examples ||---|---|---||Whole number|integer| -3, 0, 2, 534||Decimal|float | 6.3, -19.23, 5.0, 0.01||Text|string| 'Hello world', '1700 butterflies', '', '1823'|The distinction between each of these data types may seem unimportant, but Python treats each one differently. For example, we can ask Python whether an integer is equal to a float, but we cannot ask whether a string is equal to an integer or a float.To evaluate whether two values are equal, we can use two equals signs between them. The expression will evaluate to either `True` or `False`. ###Code # Run this code cell to determine whether the values are equal 42 == 42.0 # Run this code cell to compare an integer with a string 15 == 'fifteen' # Run this code cell to compare an integer with a string 15 == '15' ###Output _____no_output_____ ###Markdown When we use the addition operator on integers or floats, they are added to create a sum. When we use the addition operator on strings, they are combined into a single, longer string. This is called [concatenation](https://docs.tdm-pilot.org/key-terms/concatenation). ###Code # Combine the strings 'Hello' and 'World' 'Hello ' + 'World' ###Output _____no_output_____ ###Markdown Notice that the strings are combined exactly as they are written. There is no space between the strings. If we want to include a space, we need to add the space to the end of 'Hello' or the beginning of 'World'. When we use the addition operator, the values must be all numbers or all strings. Combining them will create an error. ###Code # Try adding a string to an integer '55' + 23 ###Output _____no_output_____ ###Markdown Here, we receive an error because Python doesn't know how to join a string to an integer. Putting this another way, Python is unsure if we want:>'55' + 23 to become>'5523'or >78 Because these data types operate differently, it is very useful to be able to check which type you're working with. You can do this with the `type()` function. Try running the three code blocks below to check the types for 15, 15.0 and "15". 
###Code #Check the type for 15 type(15) #Check the type for 15.0 type(15.0) #Check the type for "15" type("15") ###Output _____no_output_____ ###Markdown VariablesWe noted above that expressions are combinations of values, operators, and variables, and said that we'd be returning to variables. A [variable](https://docs.tdm-pilot.org/key-terms/variable) is like a container that stores information. There are many kinds of information that can be stored in a variable, including the data types we have already discussed (integers, floats, and strings). We create (or **initialize**) a variable with an [assignment statement](https://docs.tdm-pilot.org/key-terms/assignment-statement). The assignment statement gives the variable an initial value. ###Code # Initialize an integer variable (note that this code doesn't produce any output; it just establishes the variable) new_integer_variable = 6 new_integer_variable # Add 22 to our new variable new_integer_variable + 22 ###Output _____no_output_____ ###Markdown The value of a variable can be overwritten with a new value. You can test this by changing the value in the first code block above, and then re-running everything. We can also overwrite the value of a variable using its original value. In the two cells below, we establish a variable and then add 2 to that variable. ###Code # Creating a variable "cats_in_house" cats_in_house = 1 cats_in_house # Adding 2 to our initial variable cats_in_house = cats_in_house + 2 cats_in_house ###Output _____no_output_____ ###Markdown Whenever you create a new variable, you can always confirm what data type it is with the `type()` function. For example: ###Code #Checking the type of the variable cats_in_house type(cats_in_house) ###Output _____no_output_____ ###Markdown You can create a variable with almost any name, but there are a few guidelines that are recommended. First, variable names should be clear and descriptive. For example, if we create a variable that stores the day of the month, it is helpful to give it a name that makes the value stored inside it clear like `day_of_month`. From the computer's perspective, we could call the variable almost anything (`potato`, `bananafish`, `flat_tire`). As long as we are consistent, the code will execute the same. When it comes time to read, modify, and understand the code, however, it will be confusing to you and others. Consider this simple program that lets us change the `days` variable to compute the number of seconds in that many days. ###Code # Compute the number of seconds in 3 days days = 3 hours_in_day = 24 minutes_in_hour = 60 seconds_in_minute = 60 days * hours_in_day * minutes_in_hour * seconds_in_minute ###Output _____no_output_____ ###Markdown We could write a program that is logically the same, but uses confusing variable names. ###Code hotdogs = 60 sasquatch = 24 example = 3 answer = 60 answer * sasquatch * example * hotdogs ###Output _____no_output_____ ###Markdown This code gives us the same answer as the first example, but it is confusing. Not only does this code use variable names that make no sense, it also does not include any comments to explain what the code does. It is not clear that we would change `example` to set a different number of days. It is not even clear what the purpose of the code is. As code gets longer and more complex, having clear variable names and explanatory comments is very important. Variable Naming RulesIn addition to being descriptive, variable names must follow 3 basic rules:1. Must be one word (no spaces allowed)2. 
Only letters, numbers and the underscore character (\_) are allowed3. Cannot begin with a number ###Code # Which of these variable names are acceptable? # Comment out the variables that are not allowed in Python and run this cell to check if the variable assignment works. # If you get an error, the variable name is not allowed in Python. $variable = 1 a variable = 2 a_variable = 3 4variable = 4 variable5 = 5 variable-6 = 6 variAble = 7 Avariable = 8 ###Output _____no_output_____ ###Markdown FunctionsMany different kinds of programs often need to do very similar operations. Instead of writing the same code over and over again, you can use a [function](https://docs.tdm-pilot.org/key-terms/function). Essentially, a function is a small snippet of code that can be quickly referenced and reused, and that does some specific task. One of the most common functions used in Python is the `print()` function, which simply prints a string. Replace the text inside of the quotation marks below with whatever words you would like to print. ###Code # A print function that prints whatever you tell it to print('Your words here!') ###Output _____no_output_____ ###Markdown We could also define a variable with our chosen input string and then pass that variable into the `print()` function. It is common for functions to take an input, called an [argument](https://docs.tdm-pilot.org/key-terms/argument), that is placed inside the parentheses. ###Code # Define a string and then print it our_string = 'Your words here!' print(our_string) ###Output _____no_output_____ ###Markdown **To begin this interactive lesson, click on the rocket ship in the top navigation and then select "Binder" to launch the Jupyter Notebook. It may take a few moments to load the first time you launch it.****When you are in the interactive environment, you will see a Jupyter logo in the upper left-hand corner.**___ What is Python?Python is a programming language that allows us to write instructions for a computer. We call these instructions "code."Behind the scenes of our computers, there are millions of lines of code written in many different programming languages that make it possible for us to use them. For example, if you want to delete a file called "untitled.txt" on your computer, you probably drag-and-drop that file into your desktop trashcan. When this happens, behind the scenes, a programming language like Python runs a piece of code like this:> os.remove("untitled.txt")Today, we will be writing instructions in Python for the virtual computer inside this Jupyter Notebook. What does Python code look like?Python code has two parts:1. Comments2. CodeTake a look at the example below. The `````` symbol denotes a comment, which contains explanatory notes for us as the reader. The ```1 + 1``` is the actual code and contains the instructions for the computer. ###Code # this is a comment 1 + 1 ###Output _____no_output_____ ###Markdown Learning to write Python is a bit like learning to write a foreign language. In the beginning, you will probably make some mistakes while writing your code that will make it impossible for the computer to understand.Helpfully, if you make a mistake, Python will output an error message to try to help you find the problem. Try running the code below to produce an error: ###Code 1 + ###Output _____no_output_____ ###Markdown What can I do with Python?Python can do four main things:1. Basic operations (such as calculator math)1. Save variables1. Built-in functions1. Custom functions 1. 
Basic operationsThe simplest thing you can do in Python is use basic operators (such as ``+``,``-``,``*``, or ``/``) to do calculator math. As an example, try running the code below: ###Code 1 + 1 ###Output _____no_output_____ ###Markdown You should see an output appear with the answer. This whole ```1+1``` line of code is called an [expression](https://constellate.org/docs/key-terms/expression), the ```+``` is called an [operator](https://constellate.org/docs/key-terms/operator), and the numbers are called values.> Note: Spaces do not matter here. You can write ```1+1```, ```1 + 1```, or even ```1+ 1``` and all of them will run successfully.Try writing and running your own basic operation here: ###Code # try your own basic operation here ###Output _____no_output_____ ###Markdown There are also more advanced operators, such as those used to make comparisons. These return an output of "True" or "False" depending on whether or not the comparison is true. These comparison operators include:* ``>`` greater than* ``<`` less than* ``>=`` greater than or equal to* ``<=`` less than or equal to* ``==`` equal toAs an example, try running the code below: ###Code 1 < 4 ###Output _____no_output_____ ###Markdown The output here is "True" because 1 *is* less than 4. Try writing and running your own comparison operation here: ###Code # try your own comparison operation here ###Output _____no_output_____ ###Markdown To learn more about operators, check out this [list of Python operators](https://www.w3schools.com/python/python_operators.asp). 2. Save VariablesPython allows you to create and name containers that you can store your data in. These containers are called [variables](https://www.w3schools.com/python/python_variables.asp). You can name your variable just about anything you want, but the name must follow these rules:* must contain only letters, numbers, and _ (e.g. cannot contain spaces or symbols like )* can't start with a number* is case sensitive (e.g. number, Number, and NUMBER are three different variables)For example, try running the code below that creates a variable called ```myNumber``` and sets its initial value equal to 1. ###Code myNumber = 1 ###Output _____no_output_____ ###Markdown A line of code like this that uses the ```=``` operator is called an assignment statement and sets the value of a variable. Now that the value has been set, try using the variable in the code below: ###Code myNumber * 10 ###Output _____no_output_____ ###Markdown You can also update the value of your variable by using another assignment statement. Try updating the value of your variable below: ###Code myNumber = 2 ###Output _____no_output_____ ###Markdown Now try running this line of code again and see how the output changes: ###Code myNumber * 10 ###Output _____no_output_____ ###Markdown To learn more about variable names, check out this [guide to variable names](https://www.w3schools.com/python/python_variables_names.asp). 3. Built-in FunctionsA function is a command in Python that executes a set of pre-written instructions, without you having to write them out each time. You give the function an input and the function runs its instructions and gives you back an output. 
Python has many built-in functions, including the following for popular math equations:* ``abs(x)`` returns the absolute value of x* ``round(x)`` rounds x* ``pow(x,y)`` returns the value of x to the power of yAs an example, try running the code below: ###Code abs(-1) ###Output _____no_output_____ ###Markdown Now try running your own built-in function example here: ###Code # try running your own built-in function example here ###Output _____no_output_____ ###Markdown To learn more about built-in functions, check out this [list of built-in Python functions](https://www.w3schools.com/python/python_ref_functions.asp). 4. Custom FunctionsYou can also name and write your own custom functions with pre-written instructions that can be saved and used later. For example, see this function that adds one to a number: ###Code def addOne(someInputNumber): outputNumber = someInputNumber + 1 return print(outputNumber) ###Output _____no_output_____ ###Markdown Try using this function to add 1 to the number 10 by running ``addOne(10)`` below: ###Code addOne(10) ###Output _____no_output_____ ###Markdown Now try using the function on your own number here: ###Code # try using the addOne function on your own example number here ###Output _____no_output_____ ###Markdown To learn more about custom functions, check out this [guide to Python functions](https://www.w3schools.com/python/python_functions.asp). What types of data can Python use?Python can work with many different types of data including numbers and text. For a full list, check out this [list of Python data types](https://www.w3schools.com/python/python_datatypes.asp).For this workshop, will mainly be using the three following basic types of data: Data Python "Data Type" Example Whole number int ("integer") 534 Decimal float 6.3 Text str ("string") "hello" To figure out the type of a particular piece of data, use the built-in ```type()``` function, like so: ###Code type("this is some example text") ###Output _____no_output_____ ###Markdown Try testing the data type of your own example below: ###Code # try testing the data type of your own example ###Output _____no_output_____ ###Markdown How does Python work with text?Python has all of the same functionality with text as it has with numbers, with a few subtle changes (e.g. it wouldn't really make sense to use ```abs()``` on text). To get a sense of how Python works with text data, check out the examples below.**Example 1: Basic operators.**For text, the ```+``` operator combines text. This action is called "concatenation." Try concatenating two pieces of text together using the code below: ###Code "Hoo" + "ray!" ###Output _____no_output_____ ###Markdown **Example 2: Comparison operators.**For text, the ```==``` operator can be used to compare two pieces of text, including their capitalization. Try the example below: ###Code "Example" == "example" ###Output _____no_output_____ ###Markdown This returns "False" because, although the two words are the same, their capitalization does not match. **Example 3: Built-in functions.**For text, there are useful built-in functions such as ```len()``` which outputs the length of a piece of text. Try the example below: ###Code len("Hooray!") ###Output _____no_output_____ ###Markdown **Example 4: Custon functions.**For text, custom functions can be defined in the same way they could for functions involving numbers. 
Try using the example function below that makes use of concatenation with the ```+``` operator and makes use of the built-in ```print()``` function to output text. ###Code def sayHello(firstName, lastName): whatToSay = "Hello " + firstName + " " + lastName + "!" print(whatToSay) ###Output _____no_output_____ ###Markdown Try using the function with your own name by editing and running the code below: ###Code # edit this function to use your name sayHello("Jacinda","Ardern") ###Output _____no_output_____
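###Markdown As one more practice example (not part of the original lesson), here is a small custom function that combines the pieces above: the built-in ```len()``` function, a comparison operator, and ```print()```: ###Code
# a custom function that checks whether a piece of text is longer than a given number of characters
def isLongerThan(someText, someNumber):
    result = len(someText) > someNumber
    print(result)

# try it out
isLongerThan("Hooray!", 5)
###Output _____no_output_____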
demo/basics/multiply_matrices.ipynb
###Markdown Matrix MultiplicationThis notebook has been translated from [ImageJ Macro](https://clij.github.io/clij2-docs/md/matrix_multiply/)It shows how to perform a matrix multiplication on the GPU. Initialize GPU ###Code import pyclesperanto_prototype as cle from skimage.io import imread, imsave, imshow import matplotlib import numpy as np # initialize GPU cle.select_device("GTX") print("Used GPU: " + cle.get_device().name) ###Output Used GPU: GeForce RTX 2080 Ti ###Markdown Define two arrays (vectors) and push them to the GPU ###Code array1 = np.asarray([[1, 2, 3, 4, 5]]) array2 = np.asarray([[6, 7, 8, 9, 10]]) vector1 = cle.push(array1) vector2 = cle.push(array2) ###Output _____no_output_____ ###Markdown In order to multiply two matrices, the input matrices must be of size (n * m) and (m * n). Therefore, we transpose one of our vectors: ###Code vector1_t = cle.transpose_xy(vector1) print("Vector 1 (transposed): " + str(vector1_t)) print("Vector 2: " + str(vector2)) matrix = cle.create([vector1_t.shape[0], vector2.shape[1]]) cle.multiply_matrix(vector1_t, vector2, matrix) print(matrix) cle.imshow(matrix) ###Output _____no_output_____ ###Markdown Element by element multiplication of two matrices ###Code # generate another matrix of the same size with random values another_matrix = cle.push_zyx(np.random.random(matrix.shape)) # element by element multiplication matrix_element_wise_multiplied = cle.multiply_images(matrix, another_matrix) print(matrix_element_wise_multiplied) cle.imshow(matrix_element_wise_multiplied) ###Output [[ 4.263666 3.1693225 1.5578088 4.3979287 7.979739 ] [10.103611 6.284955 15.134996 14.420935 11.526558 ] [ 5.338699 19.562077 8.74118 11.574553 17.832518 ] [ 7.229021 26.634712 27.240744 7.702599 29.550472 ] [23.161533 21.474531 1.0065151 13.321719 39.979126 ]] ###Markdown Element by element multiplication of a matrix with a scalar ###Code elements_times_2 = cle.multiply_image_and_scalar(matrix, scalar=2) print(elements_times_2) ###Output [[ 12. 14. 16. 18. 20.] [ 24. 28. 32. 36. 40.] [ 36. 42. 48. 54. 60.] [ 48. 56. 64. 72. 80.] [ 60. 70. 80. 90. 100.]]
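###Markdown To double-check the GPU result, we can compare it against plain NumPy on the CPU (a sketch; it assumes `cle.pull` is available to copy the image back from GPU memory): ###Code
# cross-check the GPU matrix product against NumPy
matrix_cpu = cle.pull(matrix)
expected = array1.T @ array2        # (5,1) x (1,5) outer product
print(np.allclose(matrix_cpu, expected))
###Output _____no_output_____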
notebooks/chapter-5-4-ocsvm-acc_only.ipynb
###Markdown OCSVM Baseline Model*Created by Holger Buech, Q1/2019***Description** Reimplementation of an OCSVM approach to Continuous Authentication described by [1]. Used as a baseline model for further experiments.**Purpose**- Get basic idea about authentication performance using raw data- Verify results of [1]**Data Sources** - [H-MOG Dataset](http://www.cs.wm.edu/~qyang/hmog.html) (Downloaded beforehand using [./src/data/make_dataset.py](./src/data/make_dataset.py), stored in [./data/external/hmog_dataset/](./data/external/hmog_dataset/) and converted to [./data/processed/hmog_dataset.hdf5](./data/processed/hmog_dataset.hdf5))**References** [1] Centeno, M. P. et al. (2018): Mobile Based Continuous Authentication Using Deep Features. Proceedings of the 2^nd International Workshop on Embedded and Mobile Deep Learning (EMDL), 2018, 19-24.**Table of Contents****1 - [Preparations](1)** 1.1 - [Imports](1.1) 1.2 - [Configuration](1.2) 1.3 - [Experiment Parameters](1.3) 1.4 - [Select Approach](1.4) **2 - [Data Preparations](2)** 2.1 - [Load Dataset](2.1) 2.2 - [Normalize Features (if global)](2.2) 2.3 - [Split Dataset for Valid/Test](2.3) 2.4 - [Check Splits](2.4) 2.5 - [Reshape Features](2.5) **3 - [Hyperparameter Optimization](3)** 3.1 - [Load cached Data](3.1) 3.2 - [Search for Parameters](3.2) 3.3 - [Inspect Search Results](3.3) **4 - [Testing](4)** 4.1 - [Load cached Data](4.1) 4.2 - [Evaluate Authentication Performance](4.2) 4.3 - [Evaluate increasing Training Set Size (Training Delay)](4.3) 4.4 - [Evaluate increasing Test Set Size (Detection Delay)](4.4) 1. Preparations &nbsp; 1.1 Imports &nbsp; ###Code # Standard from pathlib import Path import os import sys import dataclasses import math import warnings # Extra import pandas as pd import numpy as np from sklearn.svm import OneClassSVM from sklearn.model_selection import cross_validate, RandomizedSearchCV import statsmodels.stats.api as sms from tqdm.auto import tqdm import seaborn as sns import matplotlib.pyplot as plt from IPython.display import display # Custom `DatasetLoader` class for easier loading and subsetting data from the datasets. module_path = os.path.abspath(os.path.join("..")) # supposed to be parent folder if module_path not in sys.path: sys.path.append(module_path) from src.utility.dataset_loader_hdf5 import DatasetLoader # Global utility functions are in a separate notebook %run utils.ipynb ###Output _____no_output_____ ###Markdown 1.2 Configuration &nbsp; ###Code # Various Settings SEED = 712 # Used for every random function HMOG_HDF5 = Path.cwd().parent / "data" / "processed" / "hmog_dataset.hdf5" EXCLUDE_COLS = ["sys_time"] CORES = -1 # For plots and CSVs OUTPUT_PATH = Path.cwd() / "output" / "chapter-6-1-3-ocsvm" OUTPUT_PATH.mkdir(parents=True, exist_ok=True) REPORT_PATH = Path.cwd().parent / "reports" / "figures" # Figures for thesis # Plotting %matplotlib inline utils_set_output_style() # Workaround to remove ugly spacing between progress bars HTML("<style>.p-Widget.jp-OutputPrompt.jp-OutputArea-prompt:empty{padding: 0;border: 0;} div.output_subarea{padding:0;}</style>") ###Output _____no_output_____ ###Markdown 1.3 Experiment Parameters &nbsp; Selection of parameter sets that have been tested in this notebook. Select one of them to reproduce results. ###Code @dataclasses.dataclass class ExperimentParameters: """Contains all relevant parameters to run an experiment.""" name: str # Name of Parameter set. Used as identifier for charts etc. 
frequency: int max_subjects: int max_test_subjects: int seconds_per_subject_train: float seconds_per_subject_test: float task_types: list # Limit scenarios to [1, 3, 5] for sitting or [2, 4, 6] for walking, or don't limit (None) window_size: int # After resampling step_width: int # After resampling scaler: str # {"std", "robust", "minmax"} scaler_scope: str # {"subject", "session"} scaler_global: bool # fit transform scale on all data (True) or fit on training only (False) ocsvm_nu: float # Best value found in random search, used for final model ocsvm_gamma: float # Best value found in random search, used for final model feature_cols: list # Columns used as features exclude_subjects: list # Don't load data from those users # Calculated values def __post_init__(self): # HDF key of table: self.table_name = f"sensors_{self.frequency}hz" # Number of samples per _session_ used for training: self.samples_per_subject_train = math.ceil( (self.seconds_per_subject_train * 100) / (100 / self.frequency) / self.window_size ) # Number of samples per _session_ used for testing: self.samples_per_subject_test = math.ceil( (self.seconds_per_subject_test * 100) / (100 / self.frequency) / self.window_size ) # INSTANCES # =========================================================== # NAIVE_APPROACH # ----------------------------------------------------------- NAIVE_MINMAX_OCSVM = ExperimentParameters( name="NAIVE-MINMAX_OCSVM", frequency=100, max_subjects=15, max_test_subjects=5, seconds_per_subject_train=67.5, seconds_per_subject_test=67.5, task_types=None, window_size=50, step_width=50, scaler="minmax", scaler_scope="subject", scaler_global=True, ocsvm_nu=0.086, ocsvm_gamma=0.091, feature_cols=[ "acc_x", "acc_y", "acc_z", ], exclude_subjects=[ "733162", # No 24 sessions "526319", # ^ "796581", # ^ "539502", # Least amount of sensor values "219303", # ^ "737973", # ^ "986737", # ^ "256487", # Most amount of sensor values "389015", # ^ "856401", # ^ ], ) # VALID_APPROACH # ----------------------------------------------------------- VALID_MINMAX_OCSVM = dataclasses.replace( NAIVE_MINMAX_OCSVM, name="VALID-MINMAX-OCSVM", scaler_global=False, ocsvm_nu=0.165, ocsvm_gamma=0.039, ) # NAIVE_ROBUST_APPROACH # ----------------------------------------------------------- NAIVE_ROBUST_OCSVM = dataclasses.replace( NAIVE_MINMAX_OCSVM, name="NAIVE-ROBUST-OCSVM", scaler="robust", scaler_global=True, ocsvm_nu=0.153, ocsvm_gamma=0.091, # below median, selected by chart ) # ROBUST_APPROACH (VALID) # ----------------------------------------------------------- VALID_ROBUST_OCSVM = dataclasses.replace( NAIVE_MINMAX_OCSVM, name="VALID-ROBUST-OCSVM", scaler="robust", scaler_global=False, ocsvm_nu=0.098, ocsvm_gamma=0.003, ) ###Output _____no_output_____ ###Markdown 1.4 Select approach &nbsp; Select the parameters to use for current notebook execution here! ###Code P = VALID_ROBUST_OCSVM ###Output _____no_output_____ ###Markdown **Overview of current Experiment Parameters:** ###Code utils_ppp(P) ###Output _____no_output_____ ###Markdown 2. Data Preparation &nbsp; 2.1 Load Dataset &nbsp; ###Code hmog = DatasetLoader( hdf5_file=HMOG_HDF5, table_name=P.table_name, max_subjects=P.max_subjects, task_types=P.task_types, exclude_subjects=P.exclude_subjects, exclude_cols=EXCLUDE_COLS, seed=SEED, ) hmog.data_summary() ###Output _____no_output_____ ###Markdown 2.2 Normalize features (if global) &nbsp; Used here for naive approach (before splitting into test and training sets). Otherwise it's used during generate_pairs() and respects train vs. 
test borders. ###Code if P.scaler_global: print("Normalize all data before splitting into train and test sets...") hmog.all, _ = utils_custom_scale( hmog.all, scale_cols=P.feature_cols, feature_cols=P.feature_cols, scaler_name=P.scaler, scope=P.scaler_scope, plot=True, ) else: print("Skipped, normalize after splitting.") ###Output Skipped, normalize after splitting. ###Markdown 2.3 Split Dataset for Valid/Test &nbsp; In two splits: one used during hyperparameter optimization, and one used during testing.The split is done along the subjects: All sessions of a single subject will either be in the validation split or in the testing split, never in both. ###Code hmog.split_train_test(n_test_subjects=P.max_test_subjects) hmog.data_summary() ###Output _____no_output_____ ###Markdown 2.4 Check Splits &nbsp; ###Code utils_split_report(hmog.train) utils_split_report(hmog.test) ###Output Unique subjects: 5 Unique sessions: 120 Head: ###Markdown 2.5 Reshape Features &nbsp; **Reshape & store Set for Validation:** ###Code df_train_valid = utils_reshape_features( hmog.train, feature_cols=P.feature_cols, window_size=P.window_size, step_width=P.step_width, ) # Clean memory del hmog.train %reset_selective -f hmog.train print("Validation data after reshaping:") display(df_train_valid.head()) # Store iterim data df_train_valid.to_msgpack(OUTPUT_PATH / "df_train_valid.msg") # Clean memory %reset_selective -f df_train_valid df_train_valid.tail() ###Output _____no_output_____ ###Markdown **Reshape & store Set for Testing:** ###Code df_train_test = utils_reshape_features( hmog.test, feature_cols=P.feature_cols, window_size=P.window_size, step_width=P.step_width, ) del hmog.test %reset_selective -f hmog.test print("Testing data after reshaping:") display(df_train_test.head()) # Store iterim data df_train_test.to_msgpack(OUTPUT_PATH / "df_train_test.msg") # Clean memory %reset_selective -f df_train_test # Clean Memory %reset_selective -f df_ ###Output _____no_output_____ ###Markdown 3. 
Hyperparameter Optimization &nbsp; 3.1 Load cached Data &nbsp; Only the split dedicated for hyperparameter optimization is loaded ###Code df_train_valid = pd.read_msgpack(OUTPUT_PATH / "df_train_valid.msg") df_train_valid.head() ###Output _____no_output_____ ###Markdown 3.2 Search for Parameters &nbsp; ###Code param_dist = {"gamma": np.logspace(-3, 3), "nu": np.linspace(0.0001, 0.3)} warnings.filterwarnings("ignore") df_results = None # Will be filled with randomsearch scores for run in tqdm(range(3)): for df_cv_scenarios, owner, impostors in tqdm( utils_generate_cv_scenarios( df_train_valid, samples_per_subject_train=P.samples_per_subject_train, samples_per_subject_test=P.samples_per_subject_test, seed=SEED + run, scaler=P.scaler, scaler_global=P.scaler_global, scaler_scope=P.scaler_scope, feature_cols=P.feature_cols, ), desc="Owner", total=df_train_valid["subject"].nunique(), leave=False, ): X = np.array(df_cv_scenarios["X"].values.tolist()) X = X.reshape(X.shape[-3], -1) # flatten windows y = df_cv_scenarios["label"].values train_valid_cv = utils_create_cv_splits(df_cv_scenarios["mask"].values, SEED) model = OneClassSVM(kernel="rbf") random_search = RandomizedSearchCV( model, param_distributions=param_dist, cv=train_valid_cv, n_iter=80, n_jobs=CORES, refit=False, scoring={"eer": utils_eer_scorer, "accuracy": "accuracy"}, verbose=0, return_train_score=False, iid=False, random_state=SEED, ) random_search.fit(X, y) df_report = utils_cv_report(random_search, owner, impostors) df_report["run"] = run df_results = pd.concat([df_results, df_report], sort=False) df_results.to_csv(OUTPUT_PATH / f"{P.name}_random_search_results.csv", index=False) ###Output _____no_output_____ ###Markdown 3.3 Inspect Search Results &nbsp; **Raw Results & Stats:** ###Code df_results = pd.read_csv(OUTPUT_PATH / f"{P.name}_random_search_results.csv") print("Example from result table (head):") display( df_results[df_results["rank_test_eer"] == 1] .sort_values("mean_test_eer") .head(10) ) print("\n\n\nMost relevant statistics:") display( df_results[df_results["rank_test_eer"] == 1][ [ "mean_fit_time", "param_nu", "param_gamma", "mean_test_accuracy", "std_test_accuracy", "mean_test_eer", "std_test_eer", ] ].describe() ) ###Output Example from result table (head): ###Markdown **Plot parameters of top n of 30 results for every Owner:** ###Code utils_plot_randomsearch_results(df_results, n_top=1) utils_save_plot(plt, REPORT_PATH / f"buech2019-ocsvm-{P.name.lower()}-parameters.pdf") ###Output _____no_output_____ ###Markdown **Note:** Using median to select the best parameters, as mean is strongly influenced by outliers. ###Code # Clean Memory %reset_selective -f df_ ###Output _____no_output_____ ###Markdown 4. Testing &nbsp; 4.1 Load cached Data &nbsp; During testing, a split with different users than used for hyperparameter optimization is used: ###Code df_train_test = pd.read_msgpack(OUTPUT_PATH / "df_train_test.msg") ###Output _____no_output_____ ###Markdown 4.2 Evaluate Authentication Performance &nbsp; - Using Testing Split, Scenario Cross Validation, and multiple runs to lower impact of random session/sample selection. 
###Code df_results = None # Will be filled with cv scores for i in tqdm(range(5), desc="Run", leave=False): # Run whole test 5 times for df_cv_scenarios, owner, impostors in tqdm( utils_generate_cv_scenarios( df_train_test, samples_per_subject_train=P.samples_per_subject_train, samples_per_subject_test=P.samples_per_subject_test, seed=SEED + i, # Change seed for different runs scaler=P.scaler, scaler_global=P.scaler_global, scaler_scope=P.scaler_scope, feature_cols=P.feature_cols, ), desc="Owner", total=df_train_test["subject"].nunique(), leave=False, ): X = np.array(df_cv_scenarios["X"].values.tolist()) X = X.reshape(X.shape[-3], -1) # flatten windows y = df_cv_scenarios["label"].values train_test_cv = utils_create_cv_splits(df_cv_scenarios["mask"].values, SEED) model = OneClassSVM(kernel="rbf", nu=P.ocsvm_nu, gamma=P.ocsvm_gamma) scores = cross_validate( model, X, y, cv=train_test_cv, scoring={ "eer": utils_eer_scorer, "accuracy": "accuracy", "precision": "precision", "recall": "recall", }, n_jobs=CORES, verbose=0, return_train_score=True, ) df_score = pd.DataFrame(scores) df_score["owner"] = owner df_score["train_eer"] = df_score["train_eer"].abs() # Revert scorer's signflip df_score["test_eer"] = df_score["test_eer"].abs() df_results = pd.concat([df_results, df_score], axis=0) df_results.to_csv(OUTPUT_PATH / f"{P.name}_test_results.csv", index=False) df_results.head() ###Output _____no_output_____ ###Markdown **Load Results from "EER & Accuracy" evaluation & prepare for plotting:** ###Code df_results = pd.read_csv(OUTPUT_PATH / f"{P.name}_test_results.csv") df_plot = df_results.rename( columns={"test_accuracy": "Test Accuracy", "test_eer": "Test EER", "owner": "Owner"} ).astype({"Owner": str}) ###Output _____no_output_____ ###Markdown **Plot Distribution of Accuracy per subject:** ###Code fig = utils_plot_acc_eer_dist(df_plot, "Test Accuracy") utils_save_plot(plt, REPORT_PATH / f"buech2019-ocsvm-{P.name.lower()}-acc.pdf") fig = utils_plot_acc_eer_dist(df_plot, "Test EER") utils_save_plot(plt, REPORT_PATH / f"buech2019-ocsvm-{P.name.lower()}-eer.pdf") ###Output Overall mean: 0.3739 ###Markdown 4.3 Evaluate increasing Training Set Size (Training Delay)&nbsp; - Testing different amounts of samples in training set- Using Testing Split, Scenario Cross Validation, and multiple runs to lower impact of random session/sample selection. 
###Code training_set_sizes = [2, 4, 6, 8, 20, 60, 120, 180, 250, 350, 500, 750] df_results = None # Will be filled with cv scores for i in tqdm(range(5), desc="Run", leave=False): for n_train_samples in tqdm(training_set_sizes, desc="Train Size", leave=False): for df_cv_scenarios, owner, impostors in tqdm( utils_generate_cv_scenarios( df_train_test, samples_per_subject_train=P.samples_per_subject_train, samples_per_subject_test=P.samples_per_subject_test, limit_train_samples=n_train_samples, # samples overall seed=SEED + i, # Change seed for different runs scaler=P.scaler, scaler_global=P.scaler_global, scaler_scope=P.scaler_scope, feature_cols=P.feature_cols, ), desc="Owner", total=df_train_test["subject"].nunique(), leave=False, ): X = np.array(df_cv_scenarios["X"].values.tolist()) X = X.reshape(X.shape[-3], -1) # flatten windows y = df_cv_scenarios["label"].values train_test_cv = utils_create_cv_splits(df_cv_scenarios["mask"].values, SEED) model = OneClassSVM(kernel="rbf", nu=P.ocsvm_nu, gamma=P.ocsvm_gamma) scores = cross_validate( model, X, y, cv=train_test_cv, scoring={"eer": utils_eer_scorer}, n_jobs=CORES, verbose=0, return_train_score=True, ) df_score = pd.DataFrame(scores) df_score["owner"] = owner df_score["train_samples"] = n_train_samples df_score["train_eer"] = df_score[ "train_eer" ].abs() # Revert scorer's signflip df_score["test_eer"] = df_score["test_eer"].abs() df_results = pd.concat([df_results, df_score], axis=0) df_results.to_csv(OUTPUT_PATH / f"{P.name}_train_delay_results.csv", index=False) df_results.head() ###Output _____no_output_____ ###Markdown **Load Results from "Training set size" evaluation & prepare for plotting:** ###Code df_results = pd.read_csv(OUTPUT_PATH / f"{P.name}_train_delay_results.csv") df_plot = ( df_results[["test_eer", "owner", "train_samples"]] .groupby(["owner", "train_samples"], as_index=False) .mean() .astype({"owner": "category"}) .rename( columns={ "test_eer": "Test EER", "owner": "Owner", } ) ) df_plot["Training Data in Seconds"] = df_plot["train_samples"] * P.window_size / P.frequency ###Output _____no_output_____ ###Markdown **Plot EER with increasing number of training samples:** ###Code utils_plot_training_delay(df_plot) utils_save_plot(plt, REPORT_PATH / f"buech2019-ocsvm-{P.name.lower()}-train-size.pdf") ###Output _____no_output_____ ###Markdown 4.4 Evaluate increasing Test Set Sizes (Detection Delay)&nbsp; ###Code df_results = None # Will be filled with cv scores for i in tqdm(range(20), desc="Run", leave=False): for df_cv_scenarios, owner, impostors in tqdm( utils_generate_cv_scenarios( df_train_test, samples_per_subject_train=P.samples_per_subject_train, samples_per_subject_test=P.samples_per_subject_test, limit_test_samples=1, # Samples overall seed=SEED + i, # Change seed for different runs scaler=P.scaler, scaler_global=P.scaler_global, scaler_scope=P.scaler_scope, feature_cols=P.feature_cols, ), desc="Owner", total=df_train_test["subject"].nunique(), leave=False, ): X = np.array(df_cv_scenarios["X"].values.tolist()) X = X.reshape(X.shape[-3], -1) # flatten windows y = df_cv_scenarios["label"].values train_test_cv = utils_create_cv_splits(df_cv_scenarios["mask"].values, SEED) model = OneClassSVM(kernel="rbf", nu=P.ocsvm_nu, gamma=P.ocsvm_gamma) scores = cross_validate( model, X, y, cv=train_test_cv, scoring={"eer": utils_eer_scorer}, n_jobs=CORES, verbose=0, return_train_score=True, ) df_score = pd.DataFrame(scores) df_score["owner"] = owner df_score["train_eer"] = df_score["train_eer"].abs() # Revert scorer's signflip 
df_score["test_eer"] = df_score["test_eer"].abs() df_results = pd.concat([df_results, df_score], axis=0) df_results.to_csv(OUTPUT_PATH / f"{P.name}_detect_delay_results.csv", index=False) df_results.head() ###Output _____no_output_____ ###Markdown **Load Results from "Detection Delay" evaluation & prepare for plotting:** ###Code df_results = pd.read_csv(OUTPUT_PATH / f"{P.name}_detect_delay_results.csv") df_results["owner"] = df_results["owner"].astype(str) df_plot = df_results.copy() ###Output _____no_output_____ ###Markdown **Plot Expanding Mean EER and confidence interval:** ###Code utils_plot_detect_delay(df_plot, factor=P.window_size / P.frequency, xlim=160) utils_save_plot(plt, REPORT_PATH / f"buech2019-ocsvm-{P.name.lower()}-detection-delay.pdf") ###Output Mean samples: 29.2 Mean seconds: 14.6
2-EDA/3-Matplotlib/practica/ejercicios_Matplotlib.ipynb
###Markdown Ejercicios de Matplotlib 1. Importa pyplot, numpy y pandas 2. Activa matplotlib de forma estática 3. Sabemos que podemos pintar gráficas de dos formas: la figura incluye los ejes o teniendo figura y ejes por separado.Usando solo una figura, usa numpy para los valores del eje X entre 0 y 5. Pinta dos gráficas en dos cajas distintas, a la izquierda una recta con pendiente positiva de 3 que pase por (0,0) y a la derecha una recta con pendiente negativa de 3 que pase por (0,-5). Elige la precisión en el eje X que desees. 4. Fija el eje X entre 0 y 5 y el eje Y entre -15 y 15 ###Code ###Output _____no_output_____ ###Markdown 5. Llama al eje X "eje X", al eje Y "eje Y" y pon de títulos "recta sube" y "recta baja". Muestra dos etiquetas de ejeX pero solo una de eje Y. Vamos a pintar lo mismo pero accediendo directamente a los ejes 6. Usando una figura Y EJES POR SEPARADO, usa numpy para los valores del eje X entre 0 y 5. Pinta dos gráficas en dos cajas distintas, a la izquierda una recta con pendiente positiva de 3 que pase por (0,0) y a la derecha una recta con pendiente negativa de 3 que pase por (0,-5). Elige la precisión en el eje X que desees. ###Code # obtén solo la figura y los ejes y mira el resultado # basándote en el código previo, cambia solo la manipulación de los ejes # recuerda que hay unos métodos que su nombre cambia un poco ###Output _____no_output_____ ###Markdown 7. Existen distintos modelos de gráficas. Crea una lista de coordenadas X: 20, 22, 24, 26, 28Crea una lista de coordenadas Y: 5, 15, -5, 20, 5Usa un gráfico de barras colocadas en X con alturas Y 8. Crea un gráfico de barras. Pintamos la altura de un grupo de amigos, cada barra representa a una persona. Ana mide 160 cm, Luis mide 180 cm, Pedro mide 175 cm, Sofía mide 190 cm, Carmen mide 170 cm. Las barras serán verdes.Consejo: si no vas a tener que manipular especialmente los ejes, es más sencillo dejarlos dentro de la figura. 9. Basándote en el gráfico anterior, escribe encima de cada barra la altura de cada amigo.Pista: usa un bucle que lea cada barra de barplot = plt.bar(x,y)bar tiene los métodos get_height(), get_x(), get_width()plt.text(x,y,valor, va='bottom') 10. Cambia el tamaño de la figura a (3,5) 11. Basándote en el gráfico anterior, borra la escala del eje Y (ya aparece en la altura)Pista: cuando plt.yticks() recibe una lista vacía no pinta el eje Y 12. Prueba a hacer el gráfico con las barras en horizontal (no es necesario poner el texto al final de la barra). 13. Prueba a invertir los ejes con ax.invert_axis()Necesitarás tener los ejes disponibles fuera de la figura ###Code ###Output _____no_output_____ ###Markdown 14. En el último gráfico, cambia el estilo a 'dark_background' 15. Vamos a introducir pandas. Crea un DataFrame con las columnas "Year" de valores 2015, 2016, 2017, 2018, 2019 y la columna "Sold_items_A" de valores 1000, 3500, 4000, 5500, 7000 ###Code ###Output _____no_output_____ ###Markdown 16. Ahora pintamos un gráfico de línea con las ventas respecto al año. Pon un título y etiquetas en los ejes.Cambia el estilo a 'seaborn-white'. La línea debe ser con rayas y verde. 17. Otro departamento B ha vendido en esos años 2000, 3100, 5000, 4000, 6000 unidades. Incluye esa columna en el DataFrame y pinta en la misma gráfica las dos líneas. B es una línea punteada y roja. Muestra una leyenda abajo a la derecha. ###Code # df ###Output _____no_output_____ ###Markdown 18. 
Haz un scatter del departamento A usando __solo el DataFrame__.Pista: el propio DataFrame tiene un método plot.df.plot('columna X', 'columna Y', 'kind' = 'scatter) ###Code ###Output _____no_output_____ ###Markdown 19. Prueba a cambiar kind a 'pie' 20. Prueba a quitar la leyenda incluyendo legend igual a False, añade labels y quita la etiqueta en Y 21. Vamos a pintar un histograma ###Code np.random.seed(1) # cada vez que le pida N números aleatorios, me dará los mismos mydf = pd.DataFrame({"Altura" : np.random.randint(low=150, high=190, size=300)}) ###Output _____no_output_____ ###Markdown Pista: https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DataFrame.plot.html ###Code # Histogram ###Output _____no_output_____ ###Markdown Nota: con ax = df.plot()se pueden poner las etiquetas conax.set(xlabel="Bins") 22. Contornos de 3D a 2D. Escribe una función que recibiendo x,y devuelva (1 - x / 2 + x ** 5 + y ** 3) * np.exp(-x ** 2 -y ** 2)Tablero: Tanto x como y van de -3 a 3 y usaremos 256 puntos.Usa contourf, con 8 niveles (cortes), una transparencia de 0.75 y un color map de tipo 'jet'. Bonus: pinta las líneas de los contornos también de negro ###Code # ESTE NO ###Output _____no_output_____
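###Markdown One possible solution sketch for the last exercise (22), following the function, grid and styling it asks for:

```python
import numpy as np
import matplotlib.pyplot as plt

def f(x, y):
    return (1 - x / 2 + x ** 5 + y ** 3) * np.exp(-x ** 2 - y ** 2)

n = 256
X, Y = np.meshgrid(np.linspace(-3, 3, n), np.linspace(-3, 3, n))
Z = f(X, Y)

plt.contourf(X, Y, Z, 8, alpha=0.75, cmap="jet")  # 8 levels, 75% opacity
plt.contour(X, Y, Z, 8, colors="black")           # bonus: black contour lines
plt.show()
```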
Adaptive Model.ipynb
###Markdown Simulation Initialize Map ###Code from swarm_mapping.map import Map import random random.seed(27) m = Map(100, 100, space_fill=0.50,hazard_fill=0.25) m.show() ###Output _____no_output_____ ###Markdown Fixed Model ###Code import matplotlib.pyplot as plt import numpy as np from swarm_mapping.world import World import time def measure_agent_loss(world): dead_agents = [agent for agent in world.agents if not agent.alive] return len(dead_agents)/len(world.agents) def measure_map_completion(world): actual = world.map.grid == 0 mapped = world.agents_map == 0 return np.count_nonzero(mapped) / np.count_nonzero(actual) # Parameters radius = 3 num_agents = 100 max_steps = 1000 t = [] completion = [] loss = [] # Start simulation w = World(100,100, num_agents, marker_size=radius, m=m) step = 0 start = time.time() for i in range(max_steps): t.append(i) completion.append(measure_map_completion(w)) loss.append(measure_agent_loss(w)) w.step() if (completion[-1] >= 0.9): break end = time.time() print(f"Average step time: {(end - start)/max_steps}") plt.plot(t, loss) plt.plot(t, completion) print(f"Completion: {completion[-1]}, Loss: {loss[-1]}") ###Output Average step time: 0.013369445562362671 Completion: 0.6044198895027625, Loss: 0.38 ###Markdown Adaptive Model ###Code import numpy as np import matplotlib.pyplot as plt import cv2 from swarm_mapping.world import World DCOMP_LIM = 4 class AdaptiveModel: def __init__(self, world, weights): self.world = world self.radius = world.marker_size self.weights = weights self.loss_history = [] self.dloss_history = [] self.avgdloss_history = [] self.completion_history = [] self.dcompletion_history = [] self.avgdcompletion_history = [] self.radius_history = [] def step(self): # Set radius self.world.set_marker(int(round(self.radius))) # Step world self.world.step() # Calculate metrics loss = self.measure_agent_loss() if len(self.loss_history) > 0: dloss = loss - self.loss_history[-1] else: dloss = 0 self.loss_history.append(loss) self.dloss_history.append(dloss) avgdloss = np.average(self.dloss_history) self.avgdloss_history.append(avgdloss) comp = self.measure_map_completion() if len(self.completion_history) > 0: dcomp = comp - self.completion_history[-1] else: dcomp = 0 self.completion_history.append(comp) self.dcompletion_history.append(dcomp) avgdcomp = np.average(self.dcompletion_history) self.avgdcompletion_history.append(avgdcomp) # Update radius self.radius = self.calc_radius() self.radius_history.append(self.radius) def calc_radius(self): w1 = self.weights[0] w2 = self.weights[1] w3 = self.weights[2] loss = self.loss_history[-1] comp = self.completion_history[-1] dloss = self.avgdloss_history[-1] dcomp = self.avgdcompletion_history[-1] fixed_r = 5.0*(1 - np.exp(-w1*loss)) + 1.0 loss_compensation = w2 * dloss if dcomp == 0: map_compensation = DCOMP_LIM else: map_compensation = w3*comp / dcomp if map_compensation > DCOMP_LIM: map_compensation = DCOMP_LIM radius = fixed_r + loss_compensation - map_compensation if radius < 2: radius = 2 return radius def measure_agent_loss(self): world = self.world dead_agents = [agent for agent in world.agents if not agent.alive] return len(dead_agents)/len(world.agents) def measure_map_completion(self): world = self.world actual = world.map.grid == 0 mapped = world.agents_map == 0 return np.count_nonzero(mapped) / np.count_nonzero(actual) import time # Display size display_width = 1600 display_height = 800 SHOW = True # Parameters init_radius = 3 num_agents = 100 max_steps = 1000 # Adaptive model w1 = 3 w2 = 2000 w3 = 
.005 # Start simulation step = 0 t = [] w = World(100,100, num_agents, marker_size=init_radius, space_fill=0.5, hazard_fill=0.4) model = AdaptiveModel(w, [w1, w2, w3]) start = time.time() for i in range(max_steps): # Show frame if desired if SHOW: frame = w.render() shared_map = w.render(w.agents_map) frame = np.concatenate((frame, shared_map), axis=1) frame = cv2.resize(frame, (display_width, display_height), interpolation = cv2.INTER_AREA) cv2.imshow('Sim',cv2.cvtColor((frame*255).astype(np.uint8), cv2.COLOR_RGB2BGR)) if cv2.waitKey(1) & 0xFF == ord('q'): break # Step simulation model.step() step += 1 t.append(i) if (model.completion_history[-1] >= 0.9): break end = time.time() cv2.destroyAllWindows() print(f"Average step time: {(end - start)/max_steps}") print(f"Completion: {model.completion_history[-1]}, Loss: {model.loss_history[-1]}") fig, ax1 = plt.subplots() ax1.set_ylabel('radius') ax1.plot(t, model.radius_history, color='tab:green', label="radius") fig.tight_layout() ax2 = ax1.twinx() ax2.set_xlabel('time') ax2.set_ylabel('%') ax2.plot(t, model.loss_history, label="loss") ax2.plot(t, model.completion_history, label="completion") fig.legend(loc='center right'); plt.plot(t, model.avgdloss_history, label="Change in Loss") plt.plot(t, model.avgdcompletion_history, label="Change in Completion") plt.xlabel('time') plt.legend() ###Output _____no_output_____ ###Markdown Run Adaptive Model over 100 Maps ###Code import random from swarm_mapping.world import World # Parameters init_radius = 3 num_agents = 100 max_steps = 1000 # Adaptive model w1 = 3 w2 = 2000 w3 = .005 loss_data = [] completion_data = [] radius_data = [] for seed in np.arange(1,101,1): print(f"Running simulation {seed}") random.seed(seed) w = World(100, 100, 100, space_fill=0.5, hazard_fill=0.4, marker_size=3) model = AdaptiveModel(w, [w1, w2, w3]) for i in range(max_steps): # Step simulation model.step() if (model.completion_history[-1] >= 0.9): break loss_data.append(model.loss_history) completion_data.append(model.completion_history) radius_data.append(model.radius_history) # Pad data num_samples = 1000 for i in range(100): loss = loss_data[i] completion = completion_data[i] radius = radius_data[i] loss = np.pad(loss, (0, num_samples - len(loss)), 'constant', constant_values=loss[-1]) completion = np.pad(completion, (0, num_samples - len(completion)), 'constant', constant_values=completion[-1]) radius = np.pad(radius, (0, num_samples - len(radius)), 'constant', constant_values=radius[-1]) loss_data[i] = loss completion_data[i] = completion radius_data[i] = radius avg_losses = np.average(loss_data, 0) avg_completions = np.average(completion_data, 0) avg_radii = np.average(radius_data, 0) # Plot data fig, ax1 = plt.subplots() t = np.arange(0,1000,1) ax1.set_xlabel('time') ax1.set_ylabel('radius') ax1.plot(t, avg_radii, color='tab:green', label="radius") fig.tight_layout() ax2 = ax1.twinx() ax2.set_xlabel('time') ax2.set_ylabel('%') ax2.plot(t, avg_losses, label="loss") ax2.plot(t, avg_completions, label="completion") fig.legend(loc='center right'); plt.title(f"Adaptive Model (Agents: 100, Hazard: 0.25), Loss: {avg_losses[-1]:0.2f}, Completion: {avg_completions[-1]:0.2f}"); ###Output _____no_output_____
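###Markdown For reference, the radius update implemented in `AdaptiveModel.calc_radius` above can be written compactly as

$$
r \;=\; \max\!\Big(2,\;\; 5\big(1 - e^{-w_1 \ell}\big) + 1 \;+\; w_2\,\overline{\Delta \ell} \;-\; \min\!\big(w_3\, c \,/\, \overline{\Delta c},\; 4\big)\Big),
$$

where $\ell$ is the current agent-loss fraction, $c$ is the map-completion fraction, the overlined terms are running averages of their per-step changes, the progress term is pinned to the cap `DCOMP_LIM = 4` whenever $\overline{\Delta c} = 0$, and the result is rounded to an integer marker radius before being applied. The first term grows the radius as losses mount, the second reacts to the loss trend, and the third shrinks the radius once completion outpaces mapping progress. The batch run above uses $w_1 = 3$, $w_2 = 2000$, $w_3 = 0.005$.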
l4e_notebooks/3_model_training.ipynb
###Markdown **Amazon Lookout for Equipment***Part 3 - Model training* Notebook configuration updateLet's make sure that we have access to the latest version of the AWS Python packages. If you see a `pip` dependency error, check that the `boto3` version is ok: if it's greater than 1.17.48 (the first version that includes the `lookoutequipment` API), you can discard this error and move forward with the next cell: ###Code import boto3 print(f'boto3 version: {boto3.__version__} (should be >= 1.17.48 to include Lookout for Equipment API)') # Restart the current notebook to ensure we take into account the previous updates: from IPython.core.display import HTML HTML("<script>Jupyter.notebook.kernel.restart()</script>") ###Output _____no_output_____ ###Markdown Imports ###Code import config import os import pandas as pd import sagemaker import sys import boto3 # Helper functions for managing Lookout for Equipment API calls: sys.path.append('../utils') import lookout_equipment_utils as lookout ROLE_ARN = sagemaker.get_execution_role() REGION_NAME = boto3.session.Session().region_name BUCKET = config.BUCKET PREFIX = config.PREFIX_LABEL DATASET_NAME = config.DATASET_NAME MODEL_NAME = config.MODEL_NAME ###Output _____no_output_____ ###Markdown Based on the label time ranges, we will use the following time ranges:* **Train set:** 1st January 2019 - 31st July 2019: Lookout for Equipment needs at least 180 days of training data and this period contains a few labelled ranges with some anomalies.* **Evaluation set:** 1st August 2019 - 27th October 2019 *(this test set includes both normal and abnormal data to evaluate our model on)* ###Code # Configuring time ranges: training_start = pd.to_datetime('2019-01-01 00:00:00') training_end = pd.to_datetime('2019-07-31 00:00:00') evaluation_start = pd.to_datetime('2019-08-01 00:00:00') evaluation_end = pd.to_datetime('2019-10-27 00:00:00') print(f' Training period | from {training_start} to {training_end}') print(f'Evaluation period | from {evaluation_start} to {evaluation_end}') ###Output _____no_output_____ ###Markdown Model training--- ###Code # Prepare the model parameters: lookout_model = lookout.LookoutEquipmentModel(model_name=MODEL_NAME, dataset_name=DATASET_NAME, region_name=REGION_NAME) # Set the training / evaluation split date: lookout_model.set_time_periods(evaluation_start, evaluation_end, training_start, training_end) # Set the label data location: lookout_model.set_label_data(bucket=BUCKET, prefix=PREFIX, access_role_arn=ROLE_ARN) # This sets up the rate the service will resample the data before # training: we will keep the original sampling rate in this example # (5 minutes), but feel free to use a larger sampling rate to accelerate # the training time: # lookout_model.set_target_sampling_rate(sampling_rate='PT15M') ###Output _____no_output_____ ###Markdown The following method encapsulates a call to the [**CreateModel**](https://docs.aws.amazon.com/lookout-for-equipment/latest/ug/API_CreateModel.html) API: ###Code # Actually create the model and train it: lookout_model.train() ###Output _____no_output_____ ###Markdown A training is now in progress as captured by the console: ![Training in progress](assets/create-model-training-in-progress.png)Use the following cell to capture the model training progress. 
**This model should take around 30-45 minutes to be trained.** Key drivers for training time usually are:* **Number of labels** in the label dataset (if provided)* Number of datapoints: this number depends on the **sampling rate**, the **number of time series** and the **time range**.The following method encapsulate a call to the [**DescribeModel**](https://docs.aws.amazon.com/lookout-for-equipment/latest/ug/API_DescribeModel.html) API and collect the model progress by looking at the `Status` field retrieved from this call: ###Code lookout_model.poll_model_training(sleep_time=60) ###Output _____no_output_____ ###Markdown A model is now trained and we can visualize the results of the back testing on the evaluation window selected at the beginning on this notebook:![Training complete](assets/model-performance.png) In the console, **you can click on each detected event**: Amazon Lookout for Equipment unpacks the ranking and display the top sensors contributing to the detected events.When you open this window, the first event is already selected and this is the detailed view you will get from the console:![Event details](assets/model-diagnostics.png) This dataset contains 30 sensors:* If each sensor contributed the same way to this event, every sensors would **equally contribute** to this event (said otherwise, every sensor would have a similar feature importance of `100% / 30 = 3.33%`).* The top sensors (e.g. **Sensor19** with a **5.67% importance**) have a contribution that is significantly higher than this threshold, which is statistically relevant.* If the model continues outputing detected anomalies with a similar ranking, this might push a maintenance operator to go and have a look at the associated components. Conclusion--- ###Code # Needed for visualizing markdowns programatically from IPython.display import display, Markdown display(Markdown( ''' <span style="color:green"><span style="font-size:50px">**Success!**</span></span> <br/> In this notebook, we use the dataset created in part 2 of this notebook series and trained an Amazon Lookout for Equipment model. From here you can either head: * To the next notebook where we will **extract the evaluation data** for this model and use it to perform further analysis on the model results: this is optional and just gives you some pointers on how to post-process and visualize the data provided by Amazon Lookout for Equipment. * Or to the **inference scheduling notebook** where we will start the model, feed it some new data and catch the results. ''')) ###Output _____no_output_____
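###Markdown As an aside, `poll_model_training()` wraps the DescribeModel call mentioned earlier; without the helper class, the polling loop could look roughly like this (a sketch using the boto3 `lookoutequipment` client and the `Status` field described above, not the helper's actual implementation):

```python
import time
import boto3

client = boto3.client("lookoutequipment", region_name=REGION_NAME)

status = "IN_PROGRESS"
while status == "IN_PROGRESS":
    time.sleep(60)                                      # same sleep_time as above
    response = client.describe_model(ModelName=MODEL_NAME)
    status = response["Status"]                         # IN_PROGRESS / SUCCESS / FAILED
    print(f"Model status: {status}")
```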
notebooks/em/InductionRLcircuit_Transient.ipynb
###Markdown Two Loop TDEM ###Code from geoscilabs.base import widgetify
import geoscilabs.em.InductionLoop as IND
from ipywidgets import interact, FloatSlider, FloatText ###Output _____no_output_____ ###Markdown App Parameter Descriptions

Below are the adjustable parameters for widgets within this notebook:

* $I_p$: Transmitter current amplitude [A]
* $a_{Tx}$: Transmitter loop radius [m]
* $a_{Rx}$: Receiver loop radius [m]
* $x_{Rx}$: Receiver x position [m]
* $z_{Rx}$: Receiver z position [m]
* $\theta$: Receiver normal vector relative to vertical [degrees]
* $R$: Resistance of receiver loop [$\Omega$]
* $L$: Inductance of receiver loop [H]
* $f$: Specific frequency [Hz]
* $t$: Specific time [s]

Background Theory: Induced Currents due to a Step-Off Primary Signal

Consider the case in the image below, where a circular loop of wire ($Tx$) carries a time-varying current $I_p (t)$. According to the Biot-Savart law, this produces a time-varying primary magnetic field. The time-varying nature of the corresponding magnetic flux which passes through the receiver coil ($Rx$) generates an induced secondary current $I_s (t)$, which depends on the coil's resistance ($R$) and inductance ($L$). Here, we provide the final analytic results associated with the app below; a full derivation can be found at the bottom of the page.

For a step-off primary current of the form $I_p (t) = I_0 u(-t)$, the secondary current carried by ($Rx$) is given by:

\begin{equation}
I_s (t) = \frac{I_0 A \beta_n}{L} \, \textrm{e}^{-Rt/L} \, u(t)
\end{equation}

where $A$ is the area of $Rx$, $\beta_n$ contains the geometric information pertaining to the problem and $u(t)$ is the unit-step function. ###Code # RUN TRANSIENT WIDGET
widgetify(IND.fcn_TDEM_Widget,I=FloatSlider(min=0.01, max=100., value=1., step=1., continuous_update=False, description = "$I_0$"),\
          a1=FloatSlider(min=1., max=20., value=10., step=1., continuous_update=False, description = "$a_{Tx}$"),\
          a2=FloatSlider(min=1., max=20., value=5., step=1., continuous_update=False, description = "$a_{Rx}$"),\
          xRx=FloatSlider(min=-15., max=15., value=0., step=1., continuous_update=False, description = "$x_{Rx}$"),\
          zRx=FloatSlider(min=-15., max=15., value=-8., step=1., continuous_update=False, description = "$z_{Rx}$"),\
          azm=FloatSlider(min=-90., max=90., value=0., step=10., continuous_update=False, description = "$\\theta$"),\
          logR=FloatSlider(min=0, max=6, value=2, step=1., continuous_update=False, description = "$log_{10}(R)$"),\
          logL=FloatSlider(min=-7, max=-2, value=-2, step=1., continuous_update=False, description = "$log_{10}(L)$"),\
          logt=FloatSlider(min=-6, max=-2, value=-4, step=1., continuous_update=False, description = "$log_{10}(t)$")) ###Output _____no_output_____
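###Markdown A quick way to build intuition for the widget: the decay is controlled by the time constant $\tau = L/R$ of the receiver loop. A short sketch evaluating the envelope of $I_s(t)$ for one slider setting (illustrative values; $\beta_n$ is left as a placeholder because the geometric factor depends on the loop positions):

```python
import numpy as np

R, L = 1e2, 1e-2                  # log10(R) = 2, log10(L) = -2, matching the slider defaults
I0, a_rx = 1.0, 5.0               # source current [A] and receiver radius [m]
A = np.pi * a_rx**2               # receiver loop area
beta_n = 1e-6                     # placeholder geometric factor (not a computed value)

tau = L / R                       # time constant of the receiver loop
t = np.logspace(-6, -2, 50)
Is = (I0 * A * beta_n / L) * np.exp(-t / tau)   # I_s(t) for t > 0
print(f"tau = L/R = {tau:.1e} s -> I_s falls to 1/e of its peak after one tau")
```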
examples/db_etl_tools.ipynb
###Markdown Database ETL Tools This notebook contains Database ETL examples. In this notebook we will only demo the **Redshift** functions. Similar functions exist in this package for other database types as well:* MySQL* Oracle* Postgres* Teradata* SqlServer Connecting to a Database ###Code import os import pandas as pd import pprint import warnings warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown There are two functions that offer connectivity to Redshift:* **conn_rs_pg**: uses the Postgres **psycopg2** package* **conn_rs_sa**: uses the **sqlalchemy** packageTo do all of the ETL examples, you will need a **cursor** and a **conn** object which each function provides.For the remainder of the exercises in Redshift we will be using the **conn_rs_pg** package, but both examples are below. ###Code #import the Redshift connection class from pymagic.db_conn_tools import Redshift #sqlalchemy cursor_rs, conn_rs = Redshift.conn_rs_sa( host=os.environ['example_rs_host'], db=os.environ['example_rs_db'], user=os.environ['example_rs_user'], pwd=os.environ['example_rs_pwd'], port=os.environ['example_rs_port'] ) #psycopg2 cursor_rs, conn_rs = Redshift.conn_rs_pg( host=os.environ['example_rs_host'], db=os.environ['example_rs_db'], user=os.environ['example_rs_user'], pwd=os.environ['example_rs_pwd'], port=os.environ['example_rs_port'] ) ###Output _____no_output_____ ###Markdown Running a Query The **run_query_rs** executes a SQL statement on the server. All ETL CRUD operations use this under the hood. ###Code #import the Redshift ETL class from pymagic.db_etl_tools import Redshift Redshift.run_query_rs( conn=conn_rs, sql="select 'hello world'" ) ###Output Runtime: 0.0020340333333333334 ###Markdown Creating a Table from Pandas DataFrame For the following exercises, we will go ahead and do a somewhat advanced task. We will create some tables with data for us to play with for the subsequent examples.Let's read in some data to play with. ###Code import seaborn as sns; df = sns.load_dataset('flights') df['month'] = df['month'].astype(str) df['year_month'] = df['year'].astype(str) + "_" + df['month'] df.head() sql = Redshift.make_df_tbl_rs( df=df, tbl_name="flights" ); pprint.pprint(sql) try: #if table already exists, lets drop it Redshift.run_query_rs( sql="drop table flights", conn=conn_rs ) except: conn_rs.commit() #create the table Redshift.run_query_rs( sql=sql, conn=conn_rs ) ###Output Runtime: 0.021303766666666668 Runtime: 0.002461483333333333 ###Markdown Inserting from Pandas DataFrame Now that we've create a table in the database based on our Pandas DataFrame, let's actually insert data into it. ###Code Redshift.insert_df_rs( cursor=cursor_rs, conn=conn_rs, df=df, tbl_name="flights" ) ###Output Runtime: 0.054683 ###Markdown Reading Data Reading data from a database uses Pandas' read_sql_query function with the SQL and database connection objects as parameters.Let's read the data we loaded back in. ###Code df = pd.read_sql_query( sql="select * from flights", con=conn_rs ) df.tail() ###Output _____no_output_____ ###Markdown Value Inserts Sometimes we want to do a simple INSERT of values into a table row. 
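The next cell builds such a statement with `insert_val_rs` and pretty-prints it; since that output is not captured in this copy of the notebook, the generated SQL presumably looks something like this (a hypothetical rendering for orientation; the exact formatting depends on the library):

```python
# Hypothetical rendering of what insert_val_rs might produce for the values below
sql = """
INSERT INTO flights (year, month, passengers, year_month)
VALUES (1961, 'January', 487, '1961_January')
"""
```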
###Code sql = Redshift.insert_val_rs( col_list=["year","month","passengers","year_month"], val_list=[1961, "January", 487,"1961_January"], tbl_name="flights" ) pprint.pprint(sql) Redshift.run_query_rs( sql=sql, conn=conn_rs ) df = pd.read_sql_query( sql="select * from flights", con=conn_rs ) df.tail() ###Output _____no_output_____ ###Markdown Upserts While an UPSERT with a given row of values might be useful from time to time, it is more common to have a "stage" table act as the source for new data to be inserted/updated on a target table.For this example we will create a new table, our target table.This table will have some "outdated" values and also will not have the "latest" data from our "stage" table.The goal is to update the "outdated" or incorrect values from the stage table and insert the latest values into the target table. ###Code # create the 'outdated' target table # the target table only has data through 1959 # the target table's passenger figures are also incorrect: # (80% for months ending in 'y') sql = ''' CREATE table flights_tgt as select src.year, src.month, cast( ( case when right(src.month,1) = 'y' then (src.passengers * 0.8) else src.passengers end ) as integer) as passengers, src.year_month from flights src where src.year < 1960 ''' try: Redshift.run_query_rs( sql="drop table flights_tgt", conn=conn_rs ) except: conn_rs.commit() Redshift.run_query_rs( sql=sql, conn=conn_rs ) df_tgt = pd.read_sql_query( sql="select * from flights_tgt", con=conn_rs ) df_tgt.tail() sql_update, sql_insert = Redshift.upsert_tbl_rs( src_tbl="flights", tgt_tbl="flights_tgt", src_join_cols=["year","month","year_month"], src_insert_cols=["year","month","passengers","year_month"], src_update_cols=["passengers"], update_compare_cols=["passengers"] ) ###Output _____no_output_____ ###Markdown Below we have our two UPSERT sql statements, an UPDATE and an INSERT. ###Code sql_update sql_insert ###Output _____no_output_____ ###Markdown Next we just need to run them in this order to update our 'outdated' records and insert our 'missing' records. ###Code Redshift.run_query_rs( sql=sql_update, conn=conn_rs ) Redshift.run_query_rs( sql=sql_insert, conn=conn_rs ) ###Output Runtime: 0.0032544333333333333 ###Markdown Nowe we see that the two tables reflect the same data! The UPSERT was successful. ###Code df_tgt = pd.read_sql_query( sql="select * from flights_tgt", con=conn_rs ) df_src = pd.read_sql_query( sql="select * from flights", con=conn_rs ) ( df_src.sort_values(by=["year","month"]).reset_index(drop=True) == \ df_tgt.sort_values(by=["year","month"]).reset_index(drop=True) ).all() ###Output _____no_output_____
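###Markdown The generated `sql_update` and `sql_insert` statements are displayed by the cells above, but their output is not captured in this copy of the notebook. For a stage-to-target upsert of this kind they would typically look roughly like the following hand-written equivalents (not the library's actual output):

```python
# Hand-written equivalents of the two upsert statements, for orientation only
sql_update = """
UPDATE flights_tgt
SET passengers = src.passengers
FROM flights src
WHERE flights_tgt.year = src.year
  AND flights_tgt.month = src.month
  AND flights_tgt.year_month = src.year_month
  AND flights_tgt.passengers <> src.passengers
"""

sql_insert = """
INSERT INTO flights_tgt (year, month, passengers, year_month)
SELECT src.year, src.month, src.passengers, src.year_month
FROM flights src
LEFT JOIN flights_tgt tgt
  ON  src.year = tgt.year
  AND src.month = tgt.month
  AND src.year_month = tgt.year_month
WHERE tgt.year IS NULL
"""
```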
02_Building a map of BiciMAD stations.ipynb
###Markdown Modules ###Code import pandas as pd import folium ###Output _____no_output_____ ###Markdown Variables ###Code stations_FilePathCSV='/Users/nicolaesse/Documents/Data science/Py/Analisi biciclette Madrid/Stations analysis/df_stations.csv' ###Output _____no_output_____ ###Markdown Loading data ###Code df_stations = pd.read_csv(stations_FilePathCSV, encoding='latin-1', sep=';') list(df_stations) df_stations.shape ###Output _____no_output_____ ###Markdown Mapping ###Code m = folium.Map([40.417000, -3.703000], zoom_start=13,tiles='http://{s}.tiles.wmflabs.org/bw-mapnik/{z}/{x}/{y}.png', attr="<a href=https://www.simboli.eu/>Simboli.EU</a>") for index, row in df_stations.iterrows(): folium.Marker([float(row['latitude']), float(row['longitude'])], popup='<h4>Station '+row['name']+'</h4>\ <b>Number: </b>'+row['number']+'<br/>\ <b>Neighbours: </b>'+row['neighbours']+'<br/>\ <b>Total bases: </b>'+str(row['total_bases'])+'<br/>\ <b>Address: </b>'+row['address']+'<br/>\ <b>Latitude: </b>'+str(row['latitude'])[0:8]+'<br/>\ <b>Longitude: </b>'+str(row['longitude'])[0:8]+'<br/><br/>\ <a href="http://www.google.com/maps/place/'+str(row['latitude'])+','+str(row['longitude'])+'">Open with Google Maps</a>', icon=folium.Icon(color='red' if row.neighbours == 'Centro' else 'orange', prefix='fa', icon='bicycle'),).add_to(m) legendHTML = ''' <style>@import url('https://fonts.googleapis.com/css?family=Roboto+Slab');</style> <p style="font-family: 'Roboto Slab', sans-serif;color:blue;">This map is part of the article <a href="http://www.simboli.eu/blog/lets-analyze-e-bike-sharing-stations-of-madrid/">'let’s analyze e-bike sharing stations of Madrid'</a>.</p> <div style="position: fixed; background-color:white; bottom: 50px; left: 50px; width: 150px; height: 90px; border:2px solid black; padding: 3px; z-index:9999; font-size:14px; font-family:Aleo, Times New Roman">Legend<br/> <i style="color:red">City center</i><br> <i style="color:orange">Other neighborhoods</i> </div> ''' m.get_root().html.add_child(folium.Element(legendHTML)) titleHTML = ''' <style>@import url('https://fonts.googleapis.com/css?family=Roboto+Slab');</style> <div style="position: fixed; top: 50px; left: 100px; width: 450px; height: 60px; background-color: white border:2px solid red; padding: 3px; z-index:9999; font-size:13px; font-family:font-family: 'Roboto Slab', sans-serif"> <h3 style="text-shadow: 0 0 2px white; color:#0000b3">Map of BiciMAD stations in Madrid</h3> </div> ''' m.get_root().html.add_child(folium.Element(titleHTML)) m.save('map_stations.html') ###Output _____no_output_____
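###Markdown Since the article is about analyzing the stations, a quick tabular complement to the map can help, e.g. counting stations and summing dock capacity per neighbourhood (using the same columns shown in the popups above):

```python
summary = (
    df_stations.groupby("neighbours")
    .agg(stations=("name", "count"), total_bases=("total_bases", "sum"))
    .sort_values("stations", ascending=False)
)
summary.head(10)
```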
ResCSNet_5_2_1_github.ipynb
###Markdown ResCSNetInspired by ConvCSNet. Can I just port the ResNet Multibranch structure in? About this notebookThis notebook is intended to run in [Google Colaboratory](https://colab.research.google.com). It may require a lot changes (mostly deletions) if you want to run it from your local device.In order to view tensorboard plots in the Colab VM during trainning, I have applied some dirty hacks (using `frpc` and a remote VPS running `frpd`). Also I am using `pydrive` module to download dataset from my Google Drive, and upload the model checkpoint at the end of each epoch.**Update**: Tensorboard 2.0 has added a "inline" tensorboard magic for jupyter notebooks. It is recommended that you open **another** notebook which shares the same VM with this notebook and run the following:```!pip install -q tf-nightly-2.0-preview Load the TensorBoard notebook extension%load_ext tensorboard%tensorboard --logdir runs```In this way you don't have to bother with a VPS or `frpc` or something. About the datasetI will load the pictures from the COCO dataset downloaded (and grayscaled and center-cropped already) by myself. You may download it from https://drive.google.com/open?id=12Nje-yhxcIVyz7L_lVxxcfXcTWa5Ba-mSome of the code below may try to download it from (your) Google Drive. It may be better to upload the file to your Google Drive. Install necessary packages and download dataset ###Code !pip install -U -q tensorboardX !pip install -U -q PyDrive from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials # Authenticate and create the PyDrive client. # This only needs to be done once per notebook. auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) # List .gz files in the root. # # Search query reference: # https://developers.google.com/drive/v2/web/search-parameters listed = drive.ListFile({'q': "title contains '.gz' and 'root' in parents"}).GetList() for file in listed: print('title {}, id {}'.format(file['title'], file['id'])) downloaded = drive.CreateFile({'id': "12Nje-yhxcIVyz7L_lVxxcfXcTWa5Ba-m"}) downloaded.GetContentFile("center-crop-100.tar.gz") # Unextract dataset print("Extract dataset") !tar -xzf "center-crop-100.tar.gz" from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials # Authenticate and create the PyDrive client. print("Authenticate and create the PyDrive client") auth.authenticate_user() _gauth = GoogleAuth() _gauth.credentials = GoogleCredentials.get_application_default() # drive = GoogleDrive(_gauth) # Create a file instance to upload checkpoint later # gfile_ckpt = drive.CreateFile() # pytorch-wavelets-1.0.0 is known to be OK. 
Higher versions may work as well !git clone "https://github.com/fbcotter/pytorch_wavelets.git" !pip install ./pytorch_wavelets print("Done") ###Output _____no_output_____ ###Markdown Download exsiting parametersNeeded if you want to resume trainning or test with exsiting parameters.Some pretrained models:[ResCsNet-colab-5_2_1-r0.25_checkpoint.pth](https://drive.google.com/open?id=1QQJJ3c9SlMK03v_v28J_K0ymvvA2vbhG), [ResCsNet-colab-5_2_1-r0.20_checkpoint.pth](https://drive.google.com/open?id=12g8reeeyi4Ei8v9dZudS_jsRE8ImYuq5), [ResCsNet-colab-5_2_1-r0.15_checkpoint.pth](https://drive.google.com/open?id=1J2tOS02BJVBWwOT-p2SDwpKRuociEKPO), [ResCsNet-colab-5_2_1-r0.10_checkpoint.pth](https://drive.google.com/open?id=1oLLB45GWfGxhdxYqNoYPQRcNX5_v9XFE), [ResCsNet-colab-5_2_1-r0.05_checkpoint.pth](https://drive.google.com/open?id=1xJbTgKgjqUwyQJn8gxvWy7rJ5N-9maWu) ###Code drive = GoogleDrive(gauth) # download trained params listed = drive.ListFile({'q': "title contains 'ResCsNet-colab-5_2_1-r0.25_checkpoint.pth' and trashed=False"}).GetList() for file in listed: print('title {}, id {}'.format(file['title'], file['id'])) # listed = drive.ListFile({'q': "title contains 'ResCsNet-colab-5_2_1-r0.20_checkpoint.pth' and trashed=False"}).GetList() # for file in listed: # print('title {}, id {}'.format(file['title'], file['id'])) drive = GoogleDrive(gauth) downloaded2 = drive.CreateFile({'id': "1QQJJ3c9SlMK03v_v28J_K0ymvvA2vbhG"}) downloaded2.GetContentFile("ResCsNet-colab-5_2_1-r0.25_checkpoint.pth") # downloaded3 = drive.CreateFile({'id': "12g8reeeyi4Ei8v9dZudS_jsRE8ImYuq5"}) # downloaded3.GetContentFile("ResCsNet-colab-5_2_1-r0.20_checkpoint.pth") ###Output _____no_output_____ ###Markdown ----------------------------------------------------------------- Runtime configurationsThis section mostly covers hacks and tricks. If you are using tensorflow 2.0 and inline tensorboard you probably do not need to run cells in this section. nvidia-smi ###Code !nvidia-smi ###Output _____no_output_____ ###Markdown tensorboard ###Code !tar -xjvf runs.tbz get_ipython().system_raw("tensorboard --logdir runs --host 127.0.0.1 &") !ps -ef | grep tensorboard ###Output _____no_output_____ ###Markdown python http.server ###Code get_ipython().system_raw("python3 -m http.server 8000 --bind 127.0.0.1 &") !ps -ef | grep http ###Output _____no_output_____ ###Markdown frpc ###Code !wget "https://github.com/fatedier/frp/releases/download/v0.24.1/frp_0.24.1_linux_amd64.tar.gz" !mkdir frp !tar -xvf "frp_0.24.1_linux_amd64.tar.gz" -C frp get_ipython().system_raw("./frp/frp_0.24.1_linux_amd64/frpc -c ./frpc.ini &") #!./frp/frp_0.24.1_linux_amd64/frpc -c ./frpc.ini !ps -ef | grep frpc !cat frpc.ini ###Output _____no_output_____ ###Markdown Control the go and stop of the trainning ###Code # tell the program to stop trainning # !touch _stop # or lift the ban # !rm _stop ###Output _____no_output_____ ###Markdown tensorboard data archive ###Code get_ipython().system_raw("bash get_runs.sh &") # get_ipython().system_raw("python3 upload_runs.py &") #!bash get_runs.sh #!python3 upload_runs.py !ps -ef | grep get_runs # !ps -ef | grep upload_runs.py ###Output _____no_output_____ ###Markdown tail trainning log ###Code # !tail "ResCsNet-colab-5_2_1-r0.10_trainning.log" ###Output _____no_output_____ ###Markdown Google drive re-authI am doing this because sometimes I encouter bugs if not re-auth with Google drive. Not sure if I missed something in pydrive documention. 
###Code # if the upload has to fail I will do that manually from pathlib import Path from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials # !mkdir authentication gauth = GoogleAuth() gauth.LoadCredentialsFile("mycreds.txt") if gauth.credentials is None: # Authenticate if they're not there # self.gauth.LocalWebserverAuth() print("no creds saved") auth.authenticate_user() gauth.credentials = GoogleCredentials.get_application_default() elif gauth.access_token_expired: # Refresh them if expired print("token expired") gauth.Refresh() else: # Initialize the saved creds print("Initialize the saved creds") gauth.Authorize() # Save the current credentials to a file gauth.SaveCredentialsFile("mycreds.txt") ############################################ # the_drive = GoogleDrive(gauth) # ckpt_name = "ResCsNet-colab-5_2_1-r0.10_checkpoint.pth" # id_file = Path('_id') # if id_file.exists(): # print("_id exsits") # fileid = id_file.read_text() # _gfile_ckpt = the_drive.CreateFile({'id': fileid}) # _gfile_ckpt.SetContentFile(ckpt_name) # _gfile_ckpt.Upload() # else: # print("_id not exsits") # _gfile_ckpt = the_drive.CreateFile() # _gfile_ckpt.SetContentFile(ckpt_name) # _gfile_ckpt.Upload() # id_file.write_text(_gfile_ckpt['id']) print("Done") ###Output _____no_output_____ ###Markdown Check uptime ###Code !uptime ###Output _____no_output_____ ###Markdown =========================== The real things ###Code # imports from pathlib import Path from PIL import Image, ImageFile # see https://stackoverflow.com/questions/12984426/python-pil-ioerror-image-file-truncated-with-big-images ImageFile.LOAD_TRUNCATED_IMAGES = True # from six.moves import cPickle as pickle # import platform import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import Dataset, DataLoader, random_split, Subset from torch.utils.data import sampler import torch.nn.functional as F # import torchvision.datasets as dset import torchvision.transforms as T import numpy as np import matplotlib.pyplot as plt from tensorboardX import SummaryWriter # from trainning_func import get_evaluation # import ipdb %matplotlib inline # %matplotlib tk # plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots # plt.rcParams['image.interpolation'] = 'nearest' # plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 print("Done") ###Output _____no_output_____ ###Markdown Dataset classes ###Code input_width_height = 100 # build somthing like an image pyramid. 
For details see class definition of ResCsNet TOTAL_PIXELS = input_width_height*input_width_height # N SAMPLING_RATE = 0.25 SAMPLED_PIXELS = int(TOTAL_PIXELS * SAMPLING_RATE) # M WIDTH_INITIAL = int(np.ceil(np.sqrt(SAMPLED_PIXELS))) WIDTH_HALF = int(input_width_height/4*2) WIDTH_THREE_QUARTERS = int(input_width_height/4*3) class CocoDataset(Dataset): def __init__(self, path_dataset, transform=None): p_iter = Path(path_dataset).iterdir() self.transform = transform self.images_list = [name for name in p_iter] def __len__(self): '''provides the size of the dataset''' return len(self.images_list) def __getitem__(self, idx): ''' supporting integer indexing in range from 0 to len(self) exclusive ''' im = Image.open(self.images_list[idx]).convert("L") # to grayscale if self.transform: im = self.transform(im) return im ###Output _____no_output_____ ###Markdown Dataset instances ###Code # Test if CocoDataset is OK # path_dset_train = '/home/xzc/center-crop-100/train2014' # path_dset_test = '/home/xzc/center-crop-100/val2014' path_dset_train = 'center-crop-100/train2014' path_dset_test = 'center-crop-100/val2014' # path_dset_train = r'D:\dev_workspace\CS-DL\center-crop-100\train2014' # path_dset_test = r'D:\dev_workspace\CS-DL\center-crop-100\val2014' input_width_height = 100 train_set = CocoDataset(path_dset_train, transform=T.Compose( [ T.RandomHorizontalFlip(), T.RandomVerticalFlip(), T.ToTensor(), T.Normalize((0.5,), (0.5,)) ]) ) test_set = CocoDataset(path_dset_test, transform=T.Compose( [ T.ToTensor(), T.Normalize((0.5,), (0.5,)) ]) ) print("Done") # function to denomalize normalized image def denormalize(im, mean, std): ''' im: pytorch tensor view as image ''' assert len(im.size()) == 3 if im.shape[0] == 1: # grayscale # im = im.reshape(im.shape[0], im.shape[1]) im = im * std[0] + mean[0] else: # rgb for ch in range(3): im[:,:,ch] = im[:,:,ch] * std[ch] + mean[ch] return im def test_dataset(): print(len(train_set)) im = train_set[5] print(im.shape) im = denormalize(im, (0.5,), (0.5,)) print(im.shape) _im = im.reshape((im.shape[1], im.shape[2])) # plt.figure() plt.imshow(_im, cmap='gray') plt.show() # test_dataset() print("Done") print(f"{len(train_set)}, {len(test_set)}") ###Output _____no_output_____ ###Markdown Dataloader instances ###Code # make the pytorch loader # final run: use this set TOTAL_SAMPLES = len(train_set) NUM_TRAIN = TOTAL_SAMPLES // 5 * 4 BATCH_SIZE = 64 # mini test: use this set # TOTAL_SAMPLES = 2000 # NUM_TRAIN = TOTAL_SAMPLES // 5 * 4 # BATCH_SIZE = 60 # debug only # TOTAL_SAMPLES = 100 # NUM_TRAIN = TOTAL_SAMPLES // 5 * 4 # BATCH_SIZE = 4 loader_train = DataLoader(train_set, batch_size=BATCH_SIZE, sampler=sampler.SubsetRandomSampler(range(NUM_TRAIN))) loader_subtrain = DataLoader(Subset(train_set, [i for i in range(3)]), batch_size=BATCH_SIZE) # used for checking avg_psnr loader_val = DataLoader(train_set, batch_size=BATCH_SIZE, sampler=sampler.SubsetRandomSampler(range(NUM_TRAIN, TOTAL_SAMPLES))) loader_test = DataLoader(test_set, batch_size=BATCH_SIZE) print(f'len(loader_train)=={len(loader_train)}, len(loader_subtrain)={len(loader_subtrain)}') print(f'len(loader_val)=={len(loader_val)}, len(loader_test)=={len(loader_test)}') print('Done') # Test the usage of dataloaders def test_dataloader(): train_iter = iter(loader_train) original_im = next(train_iter) print(type(original_im)) print(original_im.size()) print("------------------") test_iter = iter(loader_test) im = next(test_iter) print(type(im)) print(im.size()) # test_dataloader() ###Output _____no_output_____ 
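###Markdown For concreteness, with `input_width_height = 100` and `SAMPLING_RATE = 0.25`, the pyramid constants defined at the top of this section work out to:

* $N = 100 \times 100 = 10000$ pixels and $M = \lfloor 0.25\,N \rfloor = 2500$ measurements;
* `WIDTH_INITIAL` $= \lceil \sqrt{2500} \rceil = 50$, so the measurement vector is reshaped to a 50×50 map inside the decoder (at this sampling rate it happens to coincide with `WIDTH_HALF`);
* `WIDTH_HALF` $= 50$ and `WIDTH_THREE_QUARTERS` $= 75$, the two intermediate upsampling targets before the final 100×100 reconstruction.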
###Markdown Set up device ###Code # set up device # will use cuda if available device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') print('using device:', device) ###Output _____no_output_____ ###Markdown Encoder ###Code # original linear encoder class EncoderLinear(nn.Module): def _init_weights(self, m): # print(m) if type(m) == nn.Conv2d: nn.init.kaiming_normal_(m.weight.data) if m.bias is not None: nn.init.constant_(m.bias.data, 0) def __init__(self, N, M): super().__init__() self.linear = nn.Linear(N, M) self.linear.apply(self._init_weights) def forward(self, x): x_vec = x.view(x.shape[0], 1, 1, -1) out = self.linear(x_vec) return out # make instance and test the model def test_encoder(): # train_iter = iter(loader_train) # im = train_iter.next() # encoder = Encoder(1, kernel_size=15, stride=1, padding=0) print(f"input_width_height={input_width_height}") N = input_width_height*input_width_height M = int(N * 0.30) encoder = EncoderLinear(N, M) im = torch.randn(BATCH_SIZE,1,input_width_height,input_width_height) with torch.no_grad(): y = encoder(im) print(y.size()) # test_encoder() ###Output _____no_output_____ ###Markdown The Recovery Network (Decoder) ###Code class TheUpsample(nn.Module): ''' Wrapper for torch.nn.functional.interpolate (why don't they write a ready-for-use 'nn.Interpolate'?) ''' def __init__(self, size): super().__init__() self.size = size def forward(self, x): return nn.functional.interpolate(x, size=self.size, mode='nearest') class ResCsNet(nn.Module): # def _init_weights(self, m): # ''' # used to init weights. # ''' # print(m) # if type(m) == nn.Conv2d: # nn.init.kaiming_normal_(m.weight.data) # nn.init.constant_(m.bias.data, 0) # if type(m) == nn.Linear: # nn.init.kaiming_normal_(m.weight.data) # nn.init.constant_(m.bias.data, 0) # if type(m) == nn.ConvTranspose2d: # nn.init.kaiming_normal_(m.weight.data) # nn.init.constant_(m.bias.data, 0) def __init__(self, N, M): # def __init__(self, encoder_out_ch, encoder_ksize, encoder_stride, encoder_padding): super().__init__() # encoder # self.encoder= Encoder(encoder_out_ch, encoder_ksize, encoder_stride, encoder_padding) self.encoder= EncoderLinear(N, M) # upsample self.initial_width = int(np.ceil(np.sqrt(M))) # should be identical to WIDTH_INITIAL self.upsample0 = TheUpsample((1, int(self.initial_width**2)) ) # upsample1_width = int(input_width_height/4) # self.upsample1 = TheUpsample((upsample1_width, upsample1_width)) upsample2_width = WIDTH_HALF self.upsample2 = TheUpsample((upsample2_width, upsample2_width)) upsample3_width = WIDTH_THREE_QUARTERS self.upsample3 = TheUpsample((upsample3_width, upsample3_width)) # the last upsample should up sample to the original size self.upsample4 = TheUpsample((input_width_height, input_width_height)) # conv_scale self.conv_scale1 = nn.Conv2d(96, 1, kernel_size=1, stride=1, padding=0) self.conv_scale2 = nn.Conv2d(96, 1, kernel_size=1, stride=1, padding=0) self.conv_scale3 = nn.Conv2d(96, 1, kernel_size=1, stride=1, padding=0) self.conv_scale4 = nn.Conv2d(96, 1, kernel_size=1, stride=1, padding=0) # units # unit 1 self.unit1 = nn.Sequential( # nn.Conv2d(encoder_out_ch, 96, kernel_size=3, stride=1, padding=1), nn.Conv2d(1, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96), nn.LeakyReLU() ) # unit2 self.unit2a_branch1 = nn.Sequential( nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96), ) self.unit2a_branch2 = nn.Sequential( nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96), nn.LeakyReLU(), 
nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96), nn.LeakyReLU(), nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96) ) self.unit2b_branch2 = nn.Sequential( nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96), nn.LeakyReLU(), nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96), nn.LeakyReLU(), nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96) ) # unit 3 # branch1 is identical pass self.unit3_branch2 = nn.Sequential( nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96), nn.LeakyReLU(), nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96), nn.LeakyReLU(), nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96) ) # unit 4 self.unit4_branch1 = nn.Sequential( nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96), ) self.unit4_branch2 = nn.Sequential( nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96), nn.LeakyReLU(), nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96), nn.LeakyReLU(), nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96) ) # unit 5 # branch 1 is identical pass self.unit5_branch2 = nn.Sequential( nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96), nn.LeakyReLU(), nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96), nn.LeakyReLU(), nn.Conv2d(96, 96, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(96) ) def forward(self, x): x = self.encoder(x) x_vec = self.upsample0(x) x = x_vec.view(x.shape[0], 1, self.initial_width, self.initial_width) # 23x23, 32x32, 45x45, 55x55 # unit 1 x = self.unit1(x) # x = self.upsample1(x) # unit 2 x2a_1 = self.unit2a_branch1(x) x2a_2 = self.unit2a_branch2(x) x = x2a_1 + x2a_2 x2b_1 = x.clone() x2b_2 = self.unit2b_branch2(x) x = x2b_1 + x2b_2 x = self.upsample2(x) # remove this layer for r=0.3 # unit 3 x3_1 = x.clone() x3_2 = self.unit3_branch2(x) x = x3_1 + x3_2 x = self.upsample3(x) # unit 4 x4_1 = self.unit4_branch1(x) x4_2 = self.unit4_branch2(x) x = x4_1 + x4_2 x = self.upsample4(x) # unit5 x5_1 = x.clone() x5_2 = self.unit5_branch2(x) x = x5_1 + x5_2 im4 = self.conv_scale4(x) return im4 ############################################ def ResCsNet_init_weights(m): ''' used to init weights. This is just a clone of the method in the class above. 
Provided for convenience ''' # print(m) if type(m) == nn.Conv2d: nn.init.kaiming_normal_(m.weight.data) if m.bias is not None: nn.init.constant_(m.bias.data, 0) if type(m) == nn.Linear: nn.init.kaiming_normal_(m.weight.data) nn.init.constant_(m.bias.data, 0) if type(m) == nn.ConvTranspose2d: nn.init.kaiming_normal_(m.weight.data) if m.bias is not None: nn.init.constant_(m.bias.data, 0) # Make instance and check # %debug def test_rescsnet(): print(f'using device: {device}') # 8, kernel_size=11, stride=5, padding=0 for r=0.28 # model = ResCsNet(encoder_out_ch=1, encoder_ksize=15, encoder_stride=1, encoder_padding=0) N = input_width_height*input_width_height M = int(N * 0.05) model = ResCsNet(N, M) model.apply(ResCsNet_init_weights) # train_iter = iter(loader_train) # im = train_iter.next() im = torch.randn(BATCH_SIZE,1,input_width_height,input_width_height).to(device=device) with torch.no_grad(): model.train() model.to(device=device) recovered_im = model(im) print(recovered_im.size()) print("------------------") with torch.no_grad(): model.eval() model.to(device=device) recovered_im = model(im) print(recovered_im.size()) del model # test_rescsnet() print(torch.cuda.memory_cached()) torch.cuda.empty_cache() print(torch.cuda.memory_cached()) ###Output _____no_output_____ ###Markdown Training Experiment setup ###Code exp_name = 'ResCsNet-colab' print(f"input_width_height={input_width_height}") N = input_width_height*input_width_height M = int(N * SAMPLING_RATE) model = ResCsNet(N, M) # model.apply(ResCsNet_init_weights) # not needed if continue trainning or testing ###Output _____no_output_____ ###Markdown Load exsiting paramsThis section is needed if you want to test or continue trainning a model.If you want to train from scratch, this section should be skipped. ###Code ####################################### # Load the parameters to continue trainning if desired # see https://github.com/pytorch/examples/blob/d6b52110bae32cbefeea6d4ffbf8cede98ac16fc/imagenet/main.py#L175 ####################################### want_load_params = True if want_load_params: ####################################### # You need to make this correct fname = 'ResCsNet-colab-5_2_1-r0.25_checkpoint.pth' ####################################### checkpoint = torch.load(fname) tfx_steps = checkpoint['tfx_steps'] print(f"tfx_steps is {tfx_steps}") tfx_epochs_done = checkpoint['tfx_epochs_done'] print(f"tfx_epochs_done is {tfx_epochs_done}") model = ResCsNet(N, M) model.load_state_dict(checkpoint['state_dict']) model.train() model.cuda() optimizer = optim.Adam(model.parameters()) optimizer.load_state_dict(checkpoint['optimizer']) lr_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='max', factor=0.2, patience=10, threshold=5e-3 ,verbose=True) lr_scheduler.load_state_dict(checkpoint['lr_scheduler']) for state in optimizer.state.values(): for k, v in state.items(): if isinstance(v, torch.Tensor): # print("copy to cuda") state[k] = v.cuda() print("Done") # (re)setting learning rate if needed for g in optimizer.param_groups: g['lr'] = 1e-5 print("The current lrs:") for g in optimizer.param_groups: print(g['lr']) ###Output _____no_output_____ ###Markdown Train and validation functionsThis section is for trainning. Please ensure that you have `pytorch_dwt_ssim.py` and `trainning_func.py` placed along with this notebook in the same directory before you run the cells in this section.This section is not needed if you just want to test a model. 
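If the goal is only to test a loaded checkpoint rather than train (as noted above), a minimal sanity check is to reconstruct a single validation image and report its PSNR. This is only a sketch: it assumes the checkpoint-loading cell above has been run so that `model` is on the GPU, and it reuses `loader_val`, `device` and `denormalize` defined earlier.

```python
import torch

model.eval()
with torch.no_grad():
    batch = next(iter(loader_val)).to(device)
    recon = model(batch)

# Undo the (0.5, 0.5) normalization so PSNR is computed on [0, 1] images
orig = denormalize(batch[0].cpu(), (0.5,), (0.5,)).clamp(0, 1)
rec = denormalize(recon[0].cpu(), (0.5,), (0.5,)).clamp(0, 1)

mse = torch.mean((orig - rec) ** 2)
psnr = 10 * torch.log10(1.0 / mse)
print(f"PSNR on one validation image: {psnr.item():.2f} dB")
```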
###Code # The trainning function needs some logging import logging logging.basicConfig(format="[%(asctime)s] %(message)s", filename=exp_name+"_trainning.log",level=logging.INFO) class FrobeniusLoss(nn.Module): def __init__(self): super().__init__() def forward(self, x2, x1, eps=1e-8): ''' x1, x2: both of shape [N, C, H, W]. x1 is the source ''' diff = x1 - x2 num = torch.norm(diff, p='fro') den = torch.norm(x1, p='fro') + eps frob = num / den return frob from pathlib import Path from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials from trainning_func import get_evaluation from pytorch_dwt_ssim import DWT_SSIM stop_file = Path('_stop') def save_checkpoint(model, optimizer, lr_scheduler, tfx_steps, tfx_epochs_done, ckpt_name): # Save the state model, just in case logging.info("Saving the state of model") state = { 'tfx_steps': tfx_steps, 'tfx_epochs_done': tfx_epochs_done, 'state_dict': model.state_dict(), 'optimizer' : optimizer.state_dict(), 'lr_scheduler': lr_scheduler.state_dict() } torch.save(state, ckpt_name) logging.info("State saving done, uploading to google drive") try: gauth = GoogleAuth() gauth.LoadCredentialsFile("mycreds.txt") if gauth.credentials is None: # Authenticate if they're not there # self.gauth.LocalWebserverAuth() print("[!] No creds saved") auth.authenticate_user() gauth.credentials = GoogleCredentials.get_application_default() elif gauth.access_token_expired: # Refresh them if expired print("[!] Token expired") gauth.Refresh() else: # Initialize the saved creds # print("Initialize the saved creds") gauth.Authorize() # Save the current credentials to a file gauth.SaveCredentialsFile("mycreds.txt") the_drive = GoogleDrive(gauth) id_file = Path('_id') if id_file.exists(): fileid = id_file.read_text() _gfile_ckpt = the_drive.CreateFile({'id': fileid}) _gfile_ckpt.SetContentFile(ckpt_name) _gfile_ckpt.Upload() else: _gfile_ckpt = the_drive.CreateFile() _gfile_ckpt.SetContentFile(ckpt_name) _gfile_ckpt.Upload() id_file.write_text(_gfile_ckpt['id']) except: print("??? Some error occured when trying to upload to gdrive") # raise def train(model, optimizer, lr_scheduler, fn_mse=FrobeniusLoss(), fn_cwssim=DWT_SSIM(J=3, wave='haar'), mse_weight=0.3, cwssim_weight=0.7, epochs=1, logdir=None, print_every=10, tfx_steps=0, tfx_epochs_done=0, device=torch.device('cuda'), ckpt_name="checkpoint.pt"): """ Train a model Inputs: - model: A PyTorch Module giving the model to train. - optimizer: An Optimizer object we will use to train the model - lr_scheduler: learning rate scheduler - fn_mse, fn_cwssim: loss functions for L2 loss and wavelet loss - mse_weight, cwssim_weight: weights indicating how important they contribute to the total loss - epochs: A Python integer giving the number of epochs to train for - logdir: string. 
Used to specific the logdir of tensorboard - print_every: after print_every epochs this function will report to logging - tfx_steps, tfx_epochs_done: helps tensorboardX summary writer find what current step and epoch is - device: torch.device('cuda') or torch.device('cpu') Returns: - tfx_steps: the end of the tfx_steps """ try: writer = SummaryWriter(log_dir=logdir) print(f"Run `tensorboard --logdir={logdir} --host=127.0.0.1` to visualize in realtime") fn_mse = fn_mse.to(device=device) fn_cwssim = fn_cwssim.to(device=device) # PSNR scheduler # lr_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='max', factor=0.2, patience=10, threshold=5e-3 ,verbose=True) # CW-SSIM scheduler # lr_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='max', patience=10, # verbose=True) for e in range(epochs): if stop_file.exists(): print("Stop file found. Will stop trainning now") save_checkpoint(model, optimizer, lr_scheduler, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, ckpt_name=ckpt_name) break model.train() # ensure the model is in training mode logging.info('-----------------------------') logging.info(f'* epoch {tfx_epochs_done}') for t, original_im in enumerate(loader_train): original_im = original_im.to(device=device) recovered_im = model(original_im) mse_loss = fn_mse(recovered_im, original_im) cwssim_loss = 1 - fn_cwssim(recovered_im, original_im) # construct the total loss loss = mse_weight*mse_loss + cwssim_weight*cwssim_loss writer.add_scalars('train/loss', { 'mse_loss.item()': mse_loss.item(), 'cwssim_loss.item()': cwssim_loss.item(), 'loss.item()': loss.item() }, tfx_steps) optimizer.zero_grad() loss.backward() optimizer.step() if t % int(print_every) == 0: logging.info('Iteration %d/%d, loss = %.4f' % (t, len(loader_train) , loss.item())) tfx_steps += 1 #end for # after the end of each epoch ## increment tfx_epochs_done counter tfx_epochs_done += 1 ## check the performance of the model logging.info("Checking on subtrain and validation set...") subtrain_psnr, val_psnr, subtrain_mix_psnr, val_mix_psnr = get_evaluation( model, loader_val, loader_subtrain, device, fn_mse=nn.MSELoss(), fn_cwssim=fn_cwssim, mse_weight=mse_weight, cwssim_weight=cwssim_weight) logging.info(f"Average PSNR for subtrain set is {subtrain_psnr} dB") logging.info(f"Average PSNR for validation set is {val_psnr} dB") logging.info(f"Average mixed gain for subtrain set is {subtrain_mix_psnr} dB") logging.info(f"Average mixed gain for validation set is {val_mix_psnr} dB") # loss_subtrain = mse_weight*subtrain_avg_mse + cwssim_weight*subtrain_avg_cwssim_loss # loss_val = mse_weight*val_avg_mse + cwssim_weight*val_avg_cwssim_loss if lr_scheduler is not None: lr_scheduler.step(val_mix_psnr) # check loss and determine if the lr should be decreased writer.add_scalars('train/val_evaluation', { 'subtrain_psnr': subtrain_psnr.item(), 'val_psnr': val_psnr.item(), 'subtrain_mix_psnr': subtrain_mix_psnr.item(), 'val_mix_psnr': val_mix_psnr.item() }, tfx_epochs_done ) # Save the state model, just in case logging.info("Saving the state of model") save_checkpoint(model, optimizer, lr_scheduler, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, ckpt_name=ckpt_name) #end for writer.close() # tensorboardX writer return tfx_steps, tfx_epochs_done except (KeyboardInterrupt, SystemExit): print("KeyboardInterrupt: save the state of model") save_checkpoint(model, optimizer, lr_scheduler, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, ckpt_name=ckpt_name) return tfx_steps, tfx_epochs_done except: 
print("Emergency: save the state of model") save_checkpoint(model, optimizer, lr_scheduler, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, ckpt_name=ckpt_name) raise # old style train, mse only def get_avg_psnr(model, loader_val, loader_subtrain, device, print_every=10): model.eval() # ensure the model is in evaluation mode subtrain_avg_psnr = 0 val_avg_psnr = 0 with torch.no_grad(): for t, val_im in enumerate(loader_val): if t % int(print_every) == 0: logging.info(f"checked {t}/{len(loader_val)} in loader_val") val_original = val_im.to(device) val_recovered = model(val_original) val_mse = F.mse_loss(val_recovered, val_original) # PSNR val_psnr = 10 * np.log10(1 / val_mse.item()) val_avg_psnr += val_psnr val_avg_psnr /= len(loader_val) for t, subtrain_im in enumerate(loader_subtrain): if t % int(print_every) == 0: logging.info(f"checked {t}/{len(loader_subtrain)} in loader_subtrain") subtrain_original = subtrain_im.to(device) subtrain_recovered = model(subtrain_original) subtrain_mse = F.mse_loss(subtrain_recovered, subtrain_original) # PSNR subtrain_psnr = 10 * np.log10(1 / subtrain_mse.item()) subtrain_avg_psnr += subtrain_psnr subtrain_avg_psnr /= len(loader_subtrain) return subtrain_avg_psnr, val_avg_psnr ############################################################# def train_oldstyle(model, optimizer, lr_scheduler, epochs=1, logdir=None, print_every=10, tfx_steps=0, tfx_epochs_done=0, device=torch.device('cuda'), ckpt_name="checkpoint.pt"): """ Train a model Inputs: - model: A PyTorch Module giving the model to train. - optimizer: An Optimizer object we will use to train the model - lr_scheduler: learning rate scheduler - epochs: A Python integer giving the number of epochs to train for - logdir: string. Used to specific the logdir of tensorboard Returns: - tfx_steps: the end of the tfx_steps """ try: writer = SummaryWriter(log_dir=logdir) print(f"Run `tensorboard --logdir={logdir} --host=127.0.0.1` to visualize in realtime") fn_mse = nn.MSELoss() fn_mse = fn_mse.to(device=device) for e in range(epochs): if stop_file.exists(): print("Stop file found. 
Will stop trainning now") save_checkpoint(model, optimizer, lr_scheduler, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, ckpt_name=ckpt_name) break model.train() # ensure the model is in training mode logging.info('-----------------------------') logging.info(f'* epoch {tfx_epochs_done}') for t, original_im in enumerate(loader_train): original_im = original_im.to(device=device) recovered_im = model(original_im) mse_loss = fn_mse(recovered_im, original_im) # construct the total loss loss = mse_loss writer.add_scalars('train/loss', { 'mse_loss.item()': mse_loss.item(), 'loss.item()': loss.item() }, tfx_steps) optimizer.zero_grad() loss.backward() optimizer.step() if t % int(print_every) == 0: logging.info('Iteration %d/%d, loss = %.4f' % (t, len(loader_train) , loss.item())) tfx_steps += 1 #end for # after the end of each epoch ## increment tfx_epochs_done counter tfx_epochs_done += 1 ## check the performance of the model logging.info("Checking on subtrain and validation set...") subtrain_psnr, val_psnr = get_avg_psnr(model, loader_val, loader_subtrain, device) logging.info(f"Average PSNR for subtrain set is {subtrain_psnr} dB") logging.info(f"Average PSNR for validation set is {val_psnr} dB") if lr_scheduler is not None: lr_scheduler.step(val_psnr) # check loss and determine if the lr should be decreased writer.add_scalars('train/val_evaluation', { 'subtrain_psnr': subtrain_psnr.item(), 'val_psnr': val_psnr.item(), }, tfx_epochs_done ) # Save the state model, just in case logging.info("Saving the state of model") save_checkpoint(model, optimizer, lr_scheduler, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, ckpt_name=ckpt_name) #end for writer.close() # tensorboardX writer return tfx_steps, tfx_epochs_done except (KeyboardInterrupt, SystemExit): print("KeyboardInterrupt: save the state of model") save_checkpoint(model, optimizer, lr_scheduler, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, ckpt_name=ckpt_name) return tfx_steps, tfx_epochs_done except: print("Emergency: save the state of model") save_checkpoint(model, optimizer, lr_scheduler, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, ckpt_name=ckpt_name) raise ###Output _____no_output_____ ###Markdown Time consuming part ...It is a good idea to train with `train_oldstyle` which only uses MSE loss as the criterion, then switch to `train` which combines l2 loss and DW-SSIM loss to finetune the model. Running wavelet code is still slower even on GPU. By using this trainning scheme you can save some time. 
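Before launching a long combined-loss run, it can also be worth evaluating the two loss terms once on a dummy batch so their relative magnitudes (and therefore reasonable values for `mse_weight` and `cwssim_weight`) are visible. This is only a sketch: the weights are placeholders, and it assumes `pytorch_dwt_ssim.py` is present as noted above. ###Code
# Sanity-check the combined objective on a random batch (illustrative weights only).
import torch
from pytorch_dwt_ssim import DWT_SSIM

fn_frob = FrobeniusLoss().to(device)
fn_cwssim = DWT_SSIM(J=3, wave='haar').to(device)
mse_weight, cwssim_weight = 0.5, 0.5  # example weights, not a recommendation

x = torch.rand(2, 1, input_width_height, input_width_height, device=device)
y = torch.rand_like(x)

frob_term = fn_frob(y, x)
ssim_term = 1 - fn_cwssim(y, x)
total = mse_weight * frob_term + cwssim_weight * ssim_term
print(f"frobenius={frob_term.item():.4f}  dwt-ssim={ssim_term.item():.4f}  total={total.item():.4f}")
###Output _____no_output_____ ###Markdown The staged training runs used for this model are below.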
###Code learning_rate = 5e-4 tfx_steps = 0 tfx_epochs_done = 0 model = model.to(device=device) # move to proper device before constructing the optimizer optimizer = optim.Adam(model.parameters(), lr=learning_rate) lr_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='max', factor=0.2, patience=10, threshold=5e-3 ,verbose=True) # train_oldstyle only incorporates L2 loss (nn.MSELoss()) # to use wavelet loss function, use the train function to train tfx_steps, tfx_epochs_done = train_oldstyle(model, optimizer, lr_scheduler, epochs=49, logdir='runs/' + exp_name, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, device=device, ckpt_name=exp_name+"_checkpoint.pth") tfx_steps, tfx_epochs_done = train_oldstyle(model, optimizer, lr_scheduler, epochs=35, logdir='runs/' + exp_name, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, device=device, ckpt_name=exp_name+"_checkpoint.pth") tfx_steps, tfx_epochs_done = train(model, optimizer, lr_scheduler, mse_weight=0.5, cwssim_weight=0.5, epochs=22, logdir='runs/' + exp_name, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, print_every=10, device=device, ckpt_name=exp_name+"_checkpoint.pth") tfx_steps, tfx_epochs_done = train(model, optimizer, lr_scheduler, mse_weight=0.5, cwssim_weight=0.5, epochs=22, logdir='runs/' + exp_name, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, print_every=10, device=device, ckpt_name=exp_name+"_checkpoint.pth") # print("The current lrs:") for g in optimizer.param_groups: g['lr'] = 1e-6 tfx_steps, tfx_epochs_done = train(model, optimizer, lr_scheduler, mse_weight=0.8, cwssim_weight=0.2, epochs=22, logdir='runs/' + exp_name, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, print_every=10, device=device, ckpt_name=exp_name+"_checkpoint.pth") lr_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='max', factor=0.2, patience=10, threshold=5e-3 ,verbose=True) tfx_steps, tfx_epochs_done = train(model, optimizer, lr_scheduler, mse_weight=0.8, cwssim_weight=0.2, epochs=13, logdir='runs/' + exp_name, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, print_every=10, device=device, ckpt_name=exp_name+"_checkpoint.pth") tfx_steps, tfx_epochs_done = train(model, optimizer, lr_scheduler, mse_weight=0.5, cwssim_weight=0.5, epochs=13, logdir='runs/' + exp_name, tfx_steps=tfx_steps, tfx_epochs_done=tfx_epochs_done, print_every=10, device=device, ckpt_name=exp_name+"_checkpoint.pth") ###Output _____no_output_____ ###Markdown Save the model again (just in case) ###Code print("Saving the state of model") state = { 'tfx_steps': tfx_steps, 'tfx_epochs_done': tfx_epochs_done, 'state_dict': model.state_dict(), 'optimizer' : optimizer.state_dict(), } torch.save(state, exp_name+"_checkpoint.bak.pth") print("Uploading to google drive") gfile_ckpt.SetContentFile(ckpt_name) gfile_ckpt.Upload() print("Done") ###Output _____no_output_____ ###Markdown Testing ###Code # Visualize the recovered and original image def denormalize(im): if im.shape[2] == 1: # grayscale mean, std = (0.5,), (0.5,) im = im.reshape(im.shape[0], im.shape[1]) im = im * std[0] + mean[0] else: # rgb mean, std = (0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010) for ch in range(3): im[:,:,ch] = im[:,:,ch] * std[ch] + mean[ch] return im # visualize recovered image def vis_show(model, idx=0): model = model.to(device=device) model.eval() test_iter = iter(loader_test) _y = test_iter.next() print(_y.shape) with torch.no_grad(): _y = _y.to(device=torch.device("cuda")) restored_im = model(_y) print(restored_im.shape) _im = 
restored_im[idx].cpu().numpy() _im = _im.transpose(1, 2, 0).astype(np.float) _ori = _y[idx].cpu().numpy() _ori = _ori.transpose(1, 2, 0).astype(np.float) # mean, std = (0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010) # _im[_im < -1] = -1.0 # _im[_im > 1] = 1.0 mse = np.mean((_im - _ori) ** 2 ) psnr = 10 * np.log10(1/mse) print(f"mse is {mse}, psnr is {psnr} dB") _im = denormalize(_im) _ori = denormalize(_ori) # im_mx = np.amax(_im) # im_mn = np.amin(_im) # _im = _im / (im_mx-im_mn) * 1.0 # print(f'_im is {_im}') # print(f'_ori is {_ori}') print(f"the original image: mx = {np.amax(_ori)}, mn={np.amin(_ori)}") print(f"the restored image: mx = {np.amax(_im)}, mn={np.amin(_im)}") plt.subplot(1,2,1) plt.imshow(_ori, cmap="gray"); plt.title('original') plt.grid(False) plt.subplot(1,2,2) plt.imshow(_im, cmap="gray"); plt.title('restored') plt.grid(False) plt.show() vis_show(model, 6) # Calculate PSNR device = torch.device('cuda') model.to(device) avg_psnr = 0 _psnrs = [] with torch.no_grad(): model.eval() for t,batch in enumerate(loader_test): if t % 10 == 0: print(f"{t}/{len(loader_test)}") original = batch.to(device) recovered = model(original) recovered = denormalize(recovered) original = denormalize(original) diff = recovered - original rmse = np.sqrt( np.mean(diff ** 2 ) ) psnr = 20 * np.log10(1/rmse) # mse = F.mse_loss(recovered, original) # psnr = 10 * np.log10(1 / mse.item()) _psnrs.append(psnr) avg_psnr += psnr print("===> Avg. PSNR: {:.4f} dB".format(avg_psnr / len(loader_test))) psnrs = np.array(_psnrs) plt.hist(psnrs, bins=15); plt.xlabel('PSNR/dB'); plt.ylabel('Number of samples') plt.show() ###Output _____no_output_____
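###Markdown As an aside, the two PSNR expressions used in this notebook -- `10 * log10(1 / mse)` in `vis_show` and `20 * log10(1 / rmse)` in the test loop -- are algebraically the same for images scaled to a peak value of 1, since `rmse = sqrt(mse)`. A quick numerical check: ###Code
import numpy as np

mse = 0.0025            # example value
rmse = np.sqrt(mse)
print(10 * np.log10(1 / mse))   # ~26.02 dB
print(20 * np.log10(1 / rmse))  # same value
###Output _____no_output_____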
jupyter-files/GP01.ipynb
###Markdown GP01: Birth Dates In The United States The raw data behind the story ["Some People Are Too Superstitious To Have A Baby On Friday The 13th"](https://fivethirtyeight.com/features/some-people-are-too-superstitious-to-have-a-baby-on-friday-the-13th/). We'll be working with the data set from the Centers for Disease Control and Prevention's National Center for Health Statistics. The data set has the following structure:* year - Year* month - Month* date_of_month - Day number of the month* day_of_week - Day of week, where 1 is Monday and 7 is Sunday* births - Number of births ###Code f = open("../data/GP01/births.csv", 'r') text = f.read() print(text[:193]) lines_list = text.split("\n") lines_list[:10] data_no_header = lines_list[1:len(lines_list)] days_counts = dict() for line in data_no_header: split_line = line.split(",") day_of_week = split_line[3] num_births = int(split_line[4]) if day_of_week in days_counts: days_counts[day_of_week] = days_counts[day_of_week] + num_births else: days_counts[day_of_week] = num_births days_counts ###Output _____no_output_____
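###Markdown As a cross-check, the same day-of-week totals can be computed with pandas; this sketch assumes the CSV header row uses the field names listed above. ###Code
import pandas as pd

births = pd.read_csv("../data/GP01/births.csv")
births.groupby("day_of_week")["births"].sum()
###Output _____no_output_____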
basic_examples/reaction_datasets.ipynb
###Markdown Reaction DatasetsReactionDatasets are datasets where the primary index is made up of linear combinations of individual computations. For example, an interaction energy dataset would have an index of the complex subtracted by the individual monomers to obtain a final interaction energy. This idea can extended to standard reaction energies, conformational defect energies, and more. This dataset type has been developed by the QCArchive Team in conjunction with: - [David Sherrill](http://vergil.chemistry.gatech.edu) - Lori Burns - Daniel Nascimento - Dom SirianniTo begin, we can connect to the MolSSI QCArchive server: ###Code import qcportal as ptl client = ptl.FractalClient() print(client) ###Output FractalClient(server_name='The MolSSI QCArchive Server', address='https://api.qcarchive.molssi.org:443/', username='None') ###Markdown The current `ReactionDataset`s can be explored below: ###Code client.list_collections("ReactionDataset") ###Output _____no_output_____ ###Markdown Exploring a DatasetFor this example, we will explore S22 dataset which is a small interaction energy dataset of 22 common dimers such as the water dimer, methane dimer, and more. To obtain this collection: ###Code ds = client.get_collection("ReactionDataset", "S22") print(ds) ###Output ReactionDataset(name=`S22`, id='5c8159a4b6a2de3bd1e74306', client='https://api.qcarchive.molssi.org:443/') ###Markdown This dataset automatically comes with some ``Contributed Value`` data, or data that has been provided and not explicitly computed. This data is often either experimental data or very costly benchmarks taken from literature. As these Datasets are based off of Pandas DataFrames, we can directly access the underlying DataFrame to see the data provided: ###Code ds.df.head() ###Output _____no_output_____ ###Markdown Here we used `.head()` to access the first five records in the `ReactionDataset`.All `Collection`s that have `Dataset` in the name (including `ReactionDataset`) have a history available to them to list the data that has been computed. In this case we will filter our history by the DFT method `B2PLYP` and the basis set `def2-SVP` ###Code ds.list_history(method="B2PLYP", basis="def2-SVP") ###Output _____no_output_____ ###Markdown Here we can see that there are five primary keys in the computation: - `driver` - The type of computation, this can be energy, gradient, Hessian, and properties. - `program` - The program used in the computation. - `method` - The quantum chemistry, semiempierical, AI-model, or force field used in the computation. - `basis` - The basis used in the computation. - `keywords` - A keywords alias used in the computaiton, these keywords aliases reference KeywordSets (see advanced tutorials).In addition, there is also the `stoichiometry` field which is unique to `ReactionDatasets`. There exists several ways to compute the interaction energy (counterpoise-corrected (`cp`), non-counterpoise-corrected (`default`), and Valiron–Mayer function counterpoise (`vmfc`)) as such the `stoichiometry` field allows for the selection of this particular form. Querying DataTo obtain the data for the various historical computations we must query them from the server. 
Here we will automatically pull all relevant computations that match our query: ###Code ds.get_history(method="b2plyp", basis="def2-SVP") ds.df.head() ###Output _____no_output_____ ###Markdown Stastistics and VisualizationVisual statics and plotting can be generated by the ``visualize`` command: ###Code ds.visualize(method="B2PLYP", basis=["def2-svp", "def2-tzvp"], bench="S220") ds.visualize(method="B2PLYP", basis=["def2-svp", "def2-tzvp"], bench="S220", kind="violin") ###Output _____no_output_____ ###Markdown Reaction DatasetsReactionDatasets are datasets where the primary index represents a chemical reaction, made up of stoichiometrically weighted linear combinations of individual computations. For example, an interaction energy dataset would have an index of the complex subtracted by the individual monomers to obtain a final interaction energy. This idea can extended to standard reaction energies, conformational defect energies, and more. This dataset type has been developed by the QCArchive Team in conjunction with: - [David Sherrill](http://vergil.chemistry.gatech.edu) - Lori Burns - Daniel Nascimento - Dom SirianniTo begin, we can connect to the MolSSI QCArchive server: ###Code import qcportal as ptl client = ptl.FractalClient() client ###Output _____no_output_____ ###Markdown The current `ReactionDataset`s can be explored below: ###Code client.list_collections("ReactionDataset").head() ###Output _____no_output_____ ###Markdown Exploring a DatasetFor this example, we will explore S22 dataset which is a small interaction energy dataset of 22 common dimers such as the water dimer, methane dimer, and more. To obtain this collection: ###Code ds = client.get_collection("ReactionDataset", "S22") print(ds) ###Output ReactionDataset(name=`S22`, id='184', client='https://api.qcarchive.molssi.org:443/') ###Markdown The reactions in the dataset -- dimerization reactions in the case of S22 -- can be listed: ###Code ds.get_index() ###Output _____no_output_____ ###Markdown Datasets contain two types of data, those computed through QCArchive ("native") and those that are provided from external sources ("contributed"). Contributed data often come from experiments or very costly benchmarks taken from literature. `Datasets` and `ReactionDatasets` provide a list of all data that has been computed or contributed through the `list_values` method. ###Code ds.list_values().head() ###Output _____no_output_____ ###Markdown Here, we have listed the first five available data sources. The first three are contributed, marked by `native=False` and correspond to benchmarks. The last two are computed data (`native=True`). There are six primary keys to describe data: - `native` - Whether a computation was done using QCArchive. - `driver` - The type of computation, this can be energy, gradient, Hessian, and properties. - `program` - The program used in the computation. - `method` - The quantum chemistry, semiempirical, AI-model, or force field used in the computation. - `basis` - The basis used in the computation. - `keywords` - A keywords alias used in the computation, specific to the details of the program or procedure.In addition, there is also the `stoichiometry` field which is unique to `ReactionDatasets`. There exist several ways to compute the interaction energy: counterpoise-corrected (`cp`), non-counterpoise-corrected (`default`), and Valiron–Mayer function counterpoise (`vmfc`). The `stoichiometry` field allows for the selection of this particular form. 
Searches in `list_values` may be narrowed by specifying some or all of the keys. In this case, we will filter our history by the DFT method `B2PLYP` and the basis set `def2-SVP`. ###Code ds.list_values(method="B2PLYP", basis="def2-SVP") ###Output _____no_output_____ ###Markdown Querying DataTo obtain the data for the computations we must query them from the server. For example, we can pull all `B3LYP-D3M` interaction energies: ###Code ds.get_values(method="B3LYP-D3M") ###Output _____no_output_____ ###Markdown The units of these energies are stored in `ds.units`: ###Code ds.units ###Output _____no_output_____ ###Markdown Statistics and VisualizationVisual statistics and plotting can be generated by the ``visualize`` command: ###Code ds.visualize(method=["B3LYP", "B3LYP-D3", "B3LYP-D3M"], basis=["def2-tzvp"], groupby="D3") ds.visualize(method=["B3LYP", "B3LYP-D3", "B2PLYP", "B2PLYP-D3"], basis="def2-tzvp", groupby="D3", kind="violin") ###Output _____no_output_____ ###Markdown Reaction DatasetsReactionDatasets are datasets where the primary index is made up of linear combinations of individual computations. For example, an interaction energy dataset would have an index of the complex subtracted by the individual monomers to obtain a final interaction energy. This idea can extended to standard reaction energies, conformational defect energies, and more. This dataset type has been developed by the QCArchive Team in conjunction with: - [David Sherrill](http://vergil.chemistry.gatech.edu) - Lori Burns - Daniel Nascimento - Dom SirianniTo begin, we can connect to the MolSSI QCArchive server: ###Code import qcportal as ptl client = ptl.FractalClient() print(client) ###Output FractalClient(server_name='The MolSSI QCArchive Server', address='https://api.qcarchive.molssi.org:443/', username='None') ###Markdown The current `ReactionDataset`s can be explored below: ###Code client.list_collections("ReactionDataset").head() ###Output _____no_output_____ ###Markdown Exploring a DatasetFor this example, we will explore S22 dataset which is a small interaction energy dataset of 22 common dimers such as the water dimer, methane dimer, and more. To obtain this collection: ###Code ds = client.get_collection("ReactionDataset", "S22") print(ds) ###Output ReactionDataset(name=`S22`, id='5c8159a4b6a2de3bd1e74306', client='https://api.qcarchive.molssi.org:443/') ###Markdown This dataset automatically comes with some ``Contributed Value`` data, or data that has been provided rather than explicitly computed. This data is often either experimental data or very costly benchmarks taken from literature. As these Datasets are based off of Pandas DataFrames, we can directly access the underlying DataFrame to see the data provided: ###Code ds.df.head() ###Output _____no_output_____ ###Markdown Here we used `.head()` to access the first five records in the `ReactionDataset`.All `Collection`s that have `Dataset` in the name (including `ReactionDataset`) have a history available to them to list the data that has been computed. In this case we will filter our history by the DFT method `B2PLYP` and the basis set `def2-SVP` ###Code ds.list_history(method="B2PLYP", basis="def2-SVP") ###Output _____no_output_____ ###Markdown Here we can see that there are five primary keys in the computation: - `driver` - The type of computation, this can be energy, gradient, Hessian, and properties. - `program` - The program used in the computation. - `method` - The quantum chemistry, semiempirical, AI-model, or force field used in the computation. 
- `basis` - The basis used in the computation. - `keywords` - A keyword alias used in the computation -- these keyword aliases reference `KeywordSets` (see advanced tutorials).In addition, there is also the `stoichiometry` field which is unique to `ReactionDatasets`. There exist several ways to compute the interaction energy (counterpoise-corrected (`cp`), non-counterpoise-corrected (`default`), and Valiron–Mayer function counterpoise (`vmfc`)). The `stoichiometry` field allows for the selection of this particular form. Querying DataTo obtain the data for the various historical computations we must query them from the server. Here we will automatically pull all relevant computations that match our query: ###Code ds.get_history(method="B3LYP-D3M") ds.df.head() ###Output _____no_output_____ ###Markdown Statistics and VisualizationVisual statistics and plotting can be generated by the ``visualize`` command: ###Code ds.visualize(method=["B3LYP", "B3LYP-D3", "B3LYP-D3M"], basis=["def2-tzvp"], groupby="D3") ds.visualize(method=["B3LYP", "B3LYP-D3", "B2PLYP", "B2PLYP-D3"], basis="def2-tzvp", groupby="D3", kind="violin") ###Output _____no_output_____
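###Markdown Besides the built-in `visualize` summaries, simple error statistics can be computed by hand with pandas once the desired columns have been pulled into the DataFrame. The snippet below is a standalone illustration on made-up numbers (not real S22 values) just to show the pattern; with real data you would take the corresponding columns from `ds.df` instead. ###Code
import numpy as np
import pandas as pd

# Made-up interaction energies, for illustration only.
df = pd.DataFrame({
    "benchmark": [-3.1, -5.0, -18.6, -16.1],
    "method_a":  [-2.9, -5.3, -18.1, -15.7],
})

err = df["method_a"] - df["benchmark"]
print("MAE :", err.abs().mean())
print("RMSE:", np.sqrt((err ** 2).mean()))
print("MaxE:", err.abs().max())
###Output _____no_output_____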
Solving-recurrence-equations-in-Python.ipynb
###Markdown Methods of Efficiently Solving Recurrence Equations in Python ###Code from itertools import accumulate, chain import numpy as np from platform import python_version python_version() ###Output _____no_output_____ ###Markdown The following problem is based on this stackoverflow question:- https://stackoverflow.com/q/4407984/1609514(with arbitrary data created by me) ###Code def t_next(t, data): Tm, tau = data # Unpack more than one data input return Tm + (t - Tm)**tau assert t_next(2, (0.38, 0)) == 1.38 t0 = 2 # Initial t Tm_values = np.array([0.38, 0.88, 0.56, 0.67, 0.45, 0.98, 0.58, 0.72, 0.92, 0.82]) tau_values = np.linspace(0, 0.9, 10) ###Output _____no_output_____ ###Markdown 1. Basic for loop in Python ###Code t = t0 t_out = [t0] for Tm, tau in zip(Tm_values, tau_values): t = t_next(t, (Tm, tau)) t_out.append(t) t_out = np.array(t_out) t_out ###Output _____no_output_____ ###Markdown 2. Using Python's built-in accumulate function ###Code # Prepare input data in a 2D array data_sequence = np.vstack([Tm_values, tau_values]).T t_out = np.fromiter(accumulate(chain([t0], data_sequence), t_next), dtype=float) print(t_out) # Slightly more readable version possible in Python 3.8+ if python_version()[:3] > '3.8': t_out = np.fromiter(accumulate(data_sequence, t_next, initial=t0), dtype=float) print(t_out) def t_next(t, Tm, tau): return Tm + (t - Tm)**tau assert t_next(2, 0.38, 0) == 1.38 assert t_next(1.38, 0.88, 0.1) == 1.8130329915368075 t_next_ufunc = np.frompyfunc(t_next, 3, 1) assert t_next_ufunc(2, 0.38, 0) == 1.38 assert t_next_ufunc(1.38, 0.88, 0.1) == 1.8130329915368075 assert np.all(t_next_ufunc([2, 1.38], [0.38, 0.88], [0, 0.1]) == [1.38, 1.8130329915368075]) ###Output _____no_output_____ ###Markdown 3. Using Numpy accumulate method and ufuncs ###Code def test_add(x, data): return x + data assert test_add(1, 2) == 3 assert test_add(2, 3) == 5 # Make a Numpy ufunc from my test_add function test_add_ufunc = np.frompyfunc(test_add, 2, 1) assert test_add_ufunc(1, 2) == 3 assert test_add_ufunc(2, 3) == 5 assert np.all(test_add_ufunc([1, 2], [2, 3]) == [3, 5]) data_sequence = np.array([1, 2, 3, 4]) f_out = test_add_ufunc.accumulate(data_sequence, dtype=object) assert np.array_equal(f_out, [1, 3, 6, 10]) ###Output _____no_output_____ ###Markdown However, I have not found a way to make this work for a function with more than two inputs... ###Code def add_with_power(x, data1, data2): return (x + data1) ** data2 assert add_with_power(1, 2, 1) == 3 assert add_with_power(3, 3, 2) == 36 # Make a Numpy ufunc from my test_add function add_with_power_ufunc = np.frompyfunc(add_with_power, 3, 1) assert add_with_power_ufunc(1, 2, 1) == 3 assert add_with_power_ufunc(3, 3, 2) == 36 assert np.all(add_with_power_ufunc([1, 3], [2, 3], [1, 2]) == [3, 36]) data_sequence = np.array([1, 2, 3, 4]) try: f_out = add_with_power_ufunc.accumulate(data_sequence, dtype=object) except ValueError as err: print(err) # Can we trick it by passing more parameters as a tuple? 
def add_with_power(x, data): return (x + data[0]) ** data[1] assert add_with_power(1, (2, 1)) == 3 assert add_with_power(3, (3, 2)) == 36 # Make a Numpy ufunc from my test_add function add_with_power_ufunc = np.frompyfunc(add_with_power, 2, 1) assert add_with_power_ufunc(1, (2, 1)) == 3 assert add_with_power_ufunc(3, (3, 2)) == 36 assert np.all(add_with_power_ufunc([1, 3], [2, 3], [1, 2]) == [3, 36]) data_sequence = np.array([(2, 1), (3, 2), (4, 3), (5, 4)]) try: f_out = add_with_power_ufunc.accumulate(data_sequence, dtype=object) except ValueError as err: print(err) two_dim = np.array([ [1,1,1], [2,2,2], [3,3,3] ]) np.add.accumulate(two_dim) test_add_ufunc.accumulate(two_dim) ###Output _____no_output_____
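###Markdown Since `ufunc.accumulate` only supports binary operations, one workaround for the three-input recurrence is the same trick used in section 2: pack the extra parameters into the `data` argument and fall back to `itertools.accumulate`, which places no restriction on what that argument contains. A sketch: ###Code
from itertools import accumulate, chain

def add_with_power_step(x, data):
    d1, d2 = data
    return (x + d1) ** d2

x0 = 1
data_sequence = [(2, 1), (3, 2), (4, 3), (5, 4)]
out = list(accumulate(chain([x0], data_sequence), add_with_power_step))
print(out)  # [1, 3, 36, 64000, ...]
###Output _____no_output_____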
linear_algebra/00 - Matrix exp.ipynb
###Markdown Matrix exponentiation[**Matrix exponential**](https://en.wikipedia.org/wiki/Matrix_exponential) is defined as:$$\Large e^M = \sum_{k=0}^{\infty} \frac{1}{k!}M^k$$and can be used to solve systems of linear differential equations. Data ###Code # imports needed by the cells below import numpy as np from scipy import linalg A = np.array([ [0, -np.pi], [np.pi, 0] ]) A ###Output _____no_output_____ ###Markdown $exp(M)$ ###Code def matrix_exp(matrix, t): if t == 0: return np.eye(matrix.shape[0]) else: return matrix_exp(matrix, t - 1) + (1 / np.math.factorial(t)) * np.linalg.matrix_power(matrix, t) matrix_exp(A, 100) # Sanity check linalg.expm(A) ###Output _____no_output_____
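###Markdown Because `A` is π times the 2×2 skew-symmetric rotation generator, its exponential should equal the rotation matrix through the angle π, i.e. −I. A quick check of both implementations against that closed form: ###Code
import numpy as np
from scipy import linalg

theta = np.pi
A = np.array([[0, -theta], [theta, 0]])
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])  # equals -I for theta = pi

print(np.allclose(linalg.expm(A), rotation))      # True
print(np.allclose(matrix_exp(A, 100), rotation))  # True (series truncated at k = 100)
###Output _____no_output_____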
lab3/step_11a_transfer_learning.ipynb
###Markdown **Βήμα 11α: Transfer Learning** Για την υλοποίηση transfer learning η βασική ιδέα είναι ότι εκπαιδεύουμε ένα μοντέλο σε dataset το οποίο έχει μεγαλύτερο μέγεθος ώστε να εκπαιδευτεί το μοντέλο καλύτερα στο γενικότερο εύρος της πληροφορίας (τα dataset πρεπει να είναι παρόμοιο περιεχομένου). Στην συνέχεια, μετά την γενική εκπαίδευση του μοντέλου (στην οποία κρατάμε το καλύτερο με χρήση checkpoints) αφαιρούμε τα τελευταία layers τα οποία εμπεριέχουν την ειδική πληροφορία και επανεκπαιδεύουμε το μοντέλο στο δικό μας dataset (για λιγότερες εποχές) κρατώντας ίδια τα βάρη των layers που αφήσαμε και προσθέτοντας στην θέση των τελευταίων που αφαιεσαμε άλλα τα οποία αρχικοποιούνται τυχαία. Έτσι μαθαίνουμε τα τελευταία layers στην ειδική πλροφορία του dataset μας. Παρατηρούμε ότι το transfer learning που εφαρμόσαμε δεν είχε τόσο μεγάλη επιτυχία,καθώς δεν ήταν τόσο καλά τα αποτελέσματα οσο το βήμα 10. ###Code # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the "../input/" directory. # For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory import os print(os.listdir("../input")) # Any results you write to the current directory are saved as output. import numpy as np import gzip import copy from sklearn.preprocessing import LabelEncoder from torch.utils.data import Dataset from torch.utils.data import SubsetRandomSampler, DataLoader import os class_mapping = { 'Rock': 'Rock', 'Psych-Rock': 'Rock', 'Indie-Rock': None, 'Post-Rock': 'Rock', 'Psych-Folk': 'Folk', 'Folk': 'Folk', 'Metal': 'Metal', 'Punk': 'Metal', 'Post-Punk': None, 'Trip-Hop': 'Trip-Hop', 'Pop': 'Pop', 'Electronic': 'Electronic', 'Hip-Hop': 'Hip-Hop', 'Classical': 'Classical', 'Blues': 'Blues', 'Chiptune': 'Electronic', 'Jazz': 'Jazz', 'Soundtrack': None, 'International': None, 'Old-Time': None } def torch_train_val_split( dataset, batch_train, batch_eval, val_size=.2, shuffle=True, seed=42): # Creating data indices for training and validation splits: dataset_size = len(dataset) indices = list(range(dataset_size)) val_split = int(np.floor(val_size * dataset_size)) if shuffle: np.random.seed(seed) np.random.shuffle(indices) train_indices = indices[val_split:] val_indices = indices[:val_split] # Creating PT data samplers and loaders: train_sampler = SubsetRandomSampler(train_indices) val_sampler = SubsetRandomSampler(val_indices) train_loader = DataLoader(dataset, batch_size=batch_train, sampler=train_sampler) val_loader = DataLoader(dataset, batch_size=batch_eval, sampler=val_sampler) return train_loader, val_loader def read_spectrogram(spectrogram_file, chroma=True): with gzip.GzipFile(spectrogram_file, 'r') as f: spectrograms = np.load(f) # spectrograms contains a fused mel spectrogram and chromagram # Decompose as follows return spectrograms.T class LabelTransformer(LabelEncoder): def inverse(self, y): try: return super(LabelTransformer, self).inverse_transform(y) except: return super(LabelTransformer, self).inverse_transform([y]) def transform(self, y): try: return super(LabelTransformer, self).transform(y) except: return super(LabelTransformer, self).transform([y]) class PaddingTransform(object): def __init__(self, max_length, padding_value=0): 
self.max_length = max_length self.padding_value = padding_value def __call__(self, s): if len(s) == self.max_length: return s if len(s) > self.max_length: return s[:self.max_length] if len(s) < self.max_length: s1 = copy.deepcopy(s) pad = np.zeros((self.max_length - s.shape[0], s.shape[1]), dtype=np.float32) s1 = np.vstack((s1, pad)) return s1 class SpectrogramDataset(Dataset): def __init__(self, path, class_mapping=None, train=True, max_length=-1): t = 'train' if train else 'test' p = os.path.join(path, t) self.index = os.path.join(path, "{}_labels.txt".format(t)) #print(self.index) self.files, labels = self.get_files_labels(self.index, class_mapping) self.feats = [read_spectrogram(os.path.join(p, f)) for f in self.files] self.feat_dim = self.feats[0].shape[1] self.lengths = [len(i) for i in self.feats] self.max_length = max(self.lengths) if max_length <= 0 else max_length self.zero_pad_and_stack = PaddingTransform(self.max_length) self.label_transformer = LabelTransformer() if isinstance(labels, (list, tuple)): self.labels = np.array(self.label_transformer.fit_transform(labels)).astype('int64') def get_files_labels(self, txt, class_mapping): with open(txt, 'r') as fd: lines = [l.rstrip().split('\t') for l in fd.readlines()[1:]] files, labels = [], [] for l in lines: label = l[1] if class_mapping: label = class_mapping[l[1]] if not label: continue files.append(l[0]) labels.append(label) return files, labels def __getitem__(self, item): l = min(self.lengths[item], self.max_length) return self.zero_pad_and_stack(self.feats[item]), self.labels[item], l def __len__(self): return len(self.labels) BATCH_SZ=32 specs = SpectrogramDataset('../input/data/data/fma_genre_spectrograms/', train=True, class_mapping=class_mapping, max_length=-1) train_loader, val_loader = torch_train_val_split(specs, BATCH_SZ ,BATCH_SZ, val_size=.33) test_loader = DataLoader(SpectrogramDataset('../input/data/data/fma_genre_spectrograms/', train=False, class_mapping=class_mapping, max_length=-1)) import numpy as np import torch from torch.utils.data import Dataset import torch.nn as nn import torch.nn.functional as F import torch.optim as optim class ConvNet(nn.Module): def __init__(self,input_channels, num_classes): super(ConvNet, self).__init__() self.layer1 = nn.Sequential( nn.Conv2d(input_channels, 4, kernel_size=(3,3), stride=1, padding=1), nn.ReLU(), nn.BatchNorm2d(4), nn.MaxPool2d(kernel_size=2, stride=2) ) self.layer2 = nn.Sequential( nn.Conv2d(4, 16, kernel_size=(3,3), stride=1, padding=1), nn.ReLU(), nn.BatchNorm2d(16), nn.MaxPool2d(kernel_size=2, stride=2) ) self.layer3 = nn.Sequential( nn.Conv2d(16 , 32 , kernel_size=(3,3), stride=1, padding=1), nn.ReLU(), nn.BatchNorm2d(32), nn.MaxPool2d(kernel_size=3, stride=3) ) self.layer4 = nn.Sequential( nn.Conv2d(32, 64, kernel_size=(3,3), stride=1, padding=1), nn.ReLU(), nn.BatchNorm2d(64), nn.MaxPool2d(kernel_size=3, stride=3) ) self.dense1= nn.Linear(6720,500) self.dense2 = nn.Linear(500,10) def forward(self, x): #print(x.shape) x = x.transpose(1, 2) #print(x.shape) x.unsqueeze_(1) #print(x.shape) out1 = self.layer1(x) #print(out1.shape) out2= self.layer2(out1) #print(out2.shape) out3= self.layer3(out2) #print(out3.shape) out4= self.layer4(out3) #print(out4.shape) out_flat=out4.reshape(-1,out4.size(1)*out4.size(2)*out4.size(3)) #print(out_flat.shape) #implementing fully connected layers hidden_out = self.dense1(out_flat) final_out = self.dense2(hidden_out) return final_out class Trainer_with_Checkpoints(): def 
__init__(self,validate_every,metrics,max_epochs,patience=10): self.validate_every=validate_every self.metrics = metrics self.patience=patience self.best_score=None self.max_epochs = max_epochs def validate_accuracy(self,mymodel,validation_batches): with torch.no_grad(): mymodel.eval() num_correct=0 num_samples=0 with torch.no_grad(): for index, instance in enumerate(validation_batches): features = instance[:][0].to(device) labels = instance[:][1].to(device) lengths = instance[:][2].to(device) features = features.type(torch.FloatTensor).to(device) out = mymodel(features) out_scores = F.log_softmax(out,dim=1) value, y_pred = out_scores.max(1) num_correct += (labels == y_pred).sum().detach().item() num_samples += features.shape[0] print("Score for validation set: " ,num_correct / num_samples) return num_correct/num_samples def checkpoint(self,mymodel,myoptimizer,epoch,checkpointdir,myscheduler=None): #if myscheduler is not None: # state = {'epoch': epoch + 1,'state_dict': mymodel.state_dict(), # 'optim_dict' : myoptimizer.state_dict(),'scheduler_dict' : myscheduler.state_dict()} #else: # state = {'epoch': epoch + 1,'state_dict': mymodel.state_dict(),'optim_dict' : myoptimizer.state_dict()} #utils.save_checkpoint(state,checkpoint=self.checkpointdir) # path to folder torch.save({ 'epoch': epoch, 'model_state_dict': mymodel.state_dict(), 'optimizer_state_dict': myoptimizer.state_dict(), }, checkpointdir) return def train_model(self,mymodel,myoptimizer,myloss_function,training_batches,validation_batches, checkpointdir,myscheduler=None): self.best_score=None counter =0 device=torch.device("cuda") if self.patience < 1: raise ValueError("Argument patience should be positive integer") for epoch in range(self.max_epochs): #no need to set requires_grad=True for parameters(weights) as it done by default. Also for input requires_grad is not #always necessary. So we comment the following line. #with torch.autograd(): mymodel.train() if myscheduler is not None: myscheduler.step() running_average_loss = 0 #train model in each epoch for index,instance in enumerate(training_batches): # Step 1. Remember that Pytorch accumulates gradients. 
# We need to clear them out before each instance features = instance[:][0].to(device) labels = instance[:][1].to(device) lengths = instance[:][2].to(device) features = features.type(torch.FloatTensor).to(device) myoptimizer.zero_grad() prediction_vec = mymodel(features) prediction_vec.to(device) myloss = myloss_function(prediction_vec,labels) myloss.backward(retain_graph=True) myoptimizer.step() running_average_loss += myloss.detach().item() if index % 100 == 0: print("Epoch: {} \t Batch: {} \t Training Loss {}".format(epoch, index, float(running_average_loss) / (index + 1))) if epoch==self.max_epochs-1: print("yyyyyeaaaaahhhh") if 'accuracy' in self.metrics: score = self.validate_accuracy(mymodel,validation_batches) if self.best_score is None: self.best_score = score self.checkpoint(mymodel,myoptimizer,epoch,checkpointdir,myscheduler) print("checkpoint done!") elif score < self.best_score: counter += 1 if counter >= self.patience: print("EarlyStopping: Stop training") return else: #found better state in our model self.best_score = score counter = 0 #checkpoint self.checkpoint(mymodel,myoptimizer,epoch,checkpointdir,myscheduler) print("checkpoint done!") if epoch % self.validate_every == 0: if 'accuracy' in self.metrics: score = self.validate_accuracy(mymodel,validation_batches) if self.best_score is None: self.best_score = score #checkpoint self.checkpoint(mymodel,myoptimizer,epoch,checkpointdir,myscheduler) print("checkpoint done!") elif score < self.best_score: counter += 1 if counter >= self.patience: print("EarlyStopping: Stop training") return else: #found better state in our model self.best_score = score counter = 0 #checkpoint self.checkpoint(mymodel,myoptimizer,epoch,checkpointdir,myscheduler) print("checkpoint done!") VALIDATE_EVERY=5 METRICS='accuracy' MAX_EPOCHS=40 PATIENCE=3 CHECKDIR='./model_tranfer.pt' input_channels=1 num_classes=10 device=torch.device("cuda") model3 = ConvNet(input_channels,num_classes) model3.to(device) # Loss and optimizer criterion = nn.CrossEntropyLoss() optimizer = torch.optim.SGD(model3.parameters(),lr=0.01) trainer = Trainer_with_Checkpoints(validate_every=VALIDATE_EVERY,metrics=METRICS,max_epochs=MAX_EPOCHS,patience=PATIENCE) trainer.train_model(mymodel=model3,myoptimizer=optimizer,myloss_function=criterion,training_batches=train_loader, validation_batches=val_loader,checkpointdir=CHECKDIR) import numpy as np import gzip import copy from sklearn.preprocessing import LabelEncoder from torch.utils.data import Dataset from torch.utils.data import SubsetRandomSampler, DataLoader import os class_mapping = { 'Rock': 'Rock', 'Psych-Rock': 'Rock', 'Indie-Rock': None, 'Post-Rock': 'Rock', 'Psych-Folk': 'Folk', 'Folk': 'Folk', 'Metal': 'Metal', 'Punk': 'Metal', 'Post-Punk': None, 'Trip-Hop': 'Trip-Hop', 'Pop': 'Pop', 'Electronic': 'Electronic', 'Hip-Hop': 'Hip-Hop', 'Classical': 'Classical', 'Blues': 'Blues', 'Chiptune': 'Electronic', 'Jazz': 'Jazz', 'Soundtrack': None, 'International': None, 'Old-Time': None } def torch_train_val_split( dataset, batch_train, batch_eval, val_size=.2, shuffle=True, seed=42): # Creating data indices for training and validation splits: dataset_size = len(dataset) indices = list(range(dataset_size)) val_split = int(np.floor(val_size * dataset_size)) if shuffle: np.random.seed(seed) np.random.shuffle(indices) train_indices = indices[val_split:] val_indices = indices[:val_split] # Creating PT data samplers and loaders: train_sampler = SubsetRandomSampler(train_indices) val_sampler = SubsetRandomSampler(val_indices) train_loader = 
DataLoader(dataset, batch_size=batch_train, sampler=train_sampler) val_loader = DataLoader(dataset, batch_size=batch_eval, sampler=val_sampler) return train_loader, val_loader def read_spectrogram(spectrogram_file, chroma=True): with gzip.GzipFile(spectrogram_file, 'r') as f: spectrograms = np.load(f) # spectrograms contains a fused mel spectrogram and chromagram # Decompose as follows return spectrograms.T class LabelTransformer(LabelEncoder): def inverse(self, y): try: return super(LabelTransformer, self).inverse_transform(y) except: return super(LabelTransformer, self).inverse_transform([y]) def transform(self, y): try: return super(LabelTransformer, self).transform(y) except: return super(LabelTransformer, self).transform([y]) class PaddingTransform(object): def __init__(self, max_length, padding_value=0): self.max_length = max_length self.padding_value = padding_value def __call__(self, s): if len(s) == self.max_length: return s if len(s) > self.max_length: return s[:self.max_length] if len(s) < self.max_length: s1 = copy.deepcopy(s) pad = np.zeros((self.max_length - s.shape[0], s.shape[1]), dtype=np.float32) s1 = np.vstack((s1, pad)) return s1 class SpectrogramDataset(Dataset): def __init__(self, path, class_mapping=None, train=True, max_length=-1): t = 'train' if train else 'test' p = os.path.join(path, t) self.index = os.path.join(path, "{}_labels.txt".format(t)) self.files, labels = self.get_files_labels(self.index, class_mapping) #print(self.files) self.feats = [read_spectrogram(os.path.join(p, f+".fused.full.npy.gz")) for f in self.files] self.feat_dim = self.feats[0].shape[1] self.lengths = [len(i) for i in self.feats] self.max_length = max(self.lengths) if max_length <= 0 else max_length self.zero_pad_and_stack = PaddingTransform(self.max_length) #self.label_transformer = LabelTransformer() #if isinstance(labels, (list, tuple)): #self.labels = np.array(self.label_transformer.fit_transform(labels)).astype('int64') self.labels=labels def get_files_labels(self, txt, class_mapping): with open(txt, 'r') as fd: lines = [l.rstrip().split('\t') for l in fd.readlines()[1:]] files, labels = [], [] for l in lines: l=l[0].split(",") b=l[1:] b = list(map(float,b)) files.append(l[0]) labels.append(b) return files, labels def __getitem__(self, item): l = min(self.lengths[item], self.max_length) return self.zero_pad_and_stack(self.feats[item]), self.labels[item], l def __len__(self): return len(self.labels) BATCH_SZ=32 specs = SpectrogramDataset('../input/data/data/multitask_dataset/', train=True, class_mapping=class_mapping, max_length=-1) train_loader, val_loader = torch_train_val_split(specs, BATCH_SZ ,BATCH_SZ, val_size=.33) checkpoint = torch.load(CHECKDIR) model3.load_state_dict(checkpoint['model_state_dict']) optimizer.load_state_dict(checkpoint['optimizer_state_dict']) for param in model3.parameters(): param.requires_grad=False model3.dense1= nn.Linear(6720,500) model3.dense2 = nn.Linear(500,50) model3.dense3 = nn.Linear(50,1) model3.to(device) # Loss and optimizer num_epochs=10 criterion = nn.MSELoss() optimizer = torch.optim.Adam(model3.parameters()) for epoch in range(num_epochs): #no need to set requires_grad=True for parameters(weights) as it done by default. Also for input requires_grad is not #always necessary. So we comment the following line. #with torch.autograd(): model3.train() #scheduler.step() running_average_loss = 0 #train model in each epoch for index,instance in enumerate(train_loader): # Step 1. Remember that Pytorch accumulates gradients. 
# We need to clear them out before each instance #features,labels,lengths=instance features = instance[:][0].to(device) labels = instance[:][1] valence_labels = labels[0].type(torch.FloatTensor).to(device) energy_labels = labels[1].type(torch.FloatTensor).to(device) dance_labels = labels[2].type(torch.FloatTensor).to(device) lengths = instance[:][2].to(device) features = features.type(torch.FloatTensor).to(device) optimizer.zero_grad() # Step 3. Run our forward pass. prediction_vec = model3(features) prediction_vec.to(device) # Step 4. Compute the loss, gradients, and update the parameters by # calling optimizer.step() energy_labels = energy_labels.unsqueeze(1) loss = criterion(prediction_vec,energy_labels) loss.backward(retain_graph=True) optimizer.step() running_average_loss += loss.detach().item() print("Epoch: {} \t \t Training Loss {}".format(epoch, float(running_average_loss) / (index + 1))) from scipy import stats model3.eval() n_samples = 0 SE = 0 spearman=[] running_average_loss=0 with torch.no_grad(): for index, instance in enumerate(val_loader): features = instance[:][0].to(device) labels = instance[:][1] valence_labels = labels[0].type(torch.FloatTensor).to(device) energy_labels = labels[1].type(torch.FloatTensor).to(device) dance_labels = labels[2].type(torch.FloatTensor).to(device) lengths = instance[:][2].to(device) features = features.type(torch.FloatTensor).to(device) out = model3(features) out = out.to(device) #print(out) #print(valence_labels) energy_labels = energy_labels.unsqueeze(1) spearman.append(stats.spearmanr(energy_labels.cpu().squeeze(),out.cpu().squeeze(),axis=0)[0]) print("Spearnman's correlation for CNN-2d in validation set (predicting energy): " , np.mean(spearman) ) ###Output Spearnman's correlation for CNN-2d in validation set (predicting energy): 0.2647716075704687
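###Markdown One quick way to confirm that the freeze/replace step behaved as intended is to count which parameters remain trainable after the new head has been attached; only the newly added dense layers should show up. A small sketch: ###Code
# Count frozen vs. trainable parameters of the fine-tuned model.
trainable = sum(p.numel() for p in model3.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model3.parameters() if not p.requires_grad)
print(f"trainable: {trainable:,}  frozen: {frozen:,}")

for name, p in model3.named_parameters():
    if p.requires_grad:
        print(name, tuple(p.shape))
###Output _____no_output_____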
tutorials/deepsynthbody_ecg.ipynb
###Markdown To run the ECG generator on CPU ###Code ecg.generate(5, "sample_ecgs", start_id=0, device="cpu") ###Output /home/vajira/anaconda3/envs/pytorch18/lib/python3.8/site-packages/deepfakeecg/models/pulse2pulse.py:85: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_. nn.init.kaiming_normal(m.weight.data) 100%|██████████| 5/5 [00:00<00:00, 34.56it/s] ###Markdown To run the ECG generator on GPU ###Code ecg.generate(5, "sample_ecgs", start_id=10, device="cuda") ###Output 100%|██████████| 5/5 [00:00<00:00, 11.25it/s]
00-BasicNNExample.ipynb
###Markdown Table of Contents1&nbsp;&nbsp;Setup2&nbsp;&nbsp;PyTorch's DataLoader class3&nbsp;&nbsp;Defining the network3.1&nbsp;&nbsp;Train the NN, i.e. optimize weights in an attempt to minimize loss4&nbsp;&nbsp;Predictions on the test set5&nbsp;&nbsp;Other considerations (optional) A Basic Neural Network Example in PyTorch I found that [PyTorch's Official Tutorial, the 60 minute blitz](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html), was not at a basic enough level for an introductory tutorial, so I decided to make my own by blending prior knowledge with some other PyTorch tutorials from online. We'll start out in this notebook with a working example from [Sentdex](https://www.youtube.com/channel/UCfzlCWGWYyIQ0aLC5w48gBQ) just to give you working example as well as an overview of some of the concepts to come.I'll take a "show, then tell" approach by first giving you a finished example in this notebook, and then explaining each component of the pipeline for making predictions in "[01 PyTorch Workflow Explained.ipynb](https://github.com/Unique-Divine/PyTorch-Deep-Learning-Tutorials/blob/master/02%20PyTorch%20Pipeline%20Explained.ipynb)". Setup ###Code # import PyTorch import torch import numpy as np import matplotlib.pyplot as plt import seaborn as sns; sns.set() import pandas as pd # embed static images in the ipynb %matplotlib inline # neural network package import torch.nn as nn import torch.nn.functional as F ###Output _____no_output_____ ###Markdown Usually, you'll need both `torch.nn` and `torch.nn.functional` if you're working with NNs. ###Code from torchvision import transforms, datasets ###Output _____no_output_____ ###Markdown Getting data from torchvision.datasets is cheating since most of your time will be spent on preparing your dataset. However, this will make other concepts easier to learn for now. ###Code # import MNIST dataset from torchvision.datasets import MNIST ###Output _____no_output_____ ###Markdown ```pythonMNIST('/data') This will give you the following error``` ```RuntimeError: Dataset not found. You can use download=True to download it``` ###Code train = datasets.MNIST("./data", train=True, download=True, transform = transforms.Compose([transforms.ToTensor()])) # For some reason, the data in torchvision.datasets doesn't come # in tensor form. transforms.Compose([transforms.ToTensor()]) fixes that test = datasets.MNIST("./data", train=False, download=True, transform = transforms.Compose([transforms.ToTensor()])) ###Output _____no_output_____ ###Markdown [PyTorch's DataLoader class](https://pytorch.org/docs/stable/data.html) ###Code trainset = torch.utils.data.DataLoader(train, batch_size=10, shuffle=True) testset = torch.utils.data.DataLoader(test, batch_size=10, shuffle=True) ###Output _____no_output_____ ###Markdown Why the `batch_size` parameter? Normally, datasets are so large that they may not fit on memory. We'll often train the neural networks in batches, which each have `batch_size` number of samples. Using a higher batch size generally helps training time, but there is a sweet spot. [Sentdex](https://www.youtube.com/channel/UCfzlCWGWYyIQ0aLC5w48gBQ) recommends somewhere between 8 and 64. ###Code # prints `batch_size` number of input-output pairs for data in trainset: print(data) break type(data) len(data) for item in data: print(type(item)) ###Output <class 'torch.Tensor'> <class 'torch.Tensor'> ###Markdown So, `data` is a list of tensors. 
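A quick sanity check on the batching: MNIST's training split has 60,000 images, so with `batch_size=10` the loader should yield 6,000 batches. ###Code
print(len(trainset))          # number of batches -> 6000
print(len(trainset.dataset))  # number of underlying samples -> 60000
###Output _____no_output_____ ###Markdown Next, let's inspect the shapes of the tensors inside a batch.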
###Code data[0].shape data[0].size() data[1].shape data[1][1] # label of the second image data[0][1].shape # tensor of the second image ###Output _____no_output_____ ###Markdown The shape of this image is 28 by 28. ###Code import matplotlib.pyplot as plt plt.imshow(data[0][1].view(28, 28)) plt.show() ###Output _____no_output_____ ###Markdown - f-string tutorial: [reference](https://realpython.com/python-f-strings/old-school-string-formatting-in-python)- How to change the number of digits in f-string expression: [reference](https://stackoverflow.com/questions/45310254/fixed-digits-after-decimal-with-f-strings) ###Code data[0][0].shape ###Output _____no_output_____ ###Markdown Defining the network- [nn.Linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.htmltorch.nn.Linear) creates a linear NN layer object that applies a linear transformation to the incoming data: $y = xA^T + b$- [F.relu](https://pytorch.org/docs/stable/generated/torch.nn.ReLU.htmltorch.nn.ReLU) ###Code class Net(nn.Module): # class inherits from nn.Module def __init__(self): super().__init__() # initialize nn.Module # fc1 -> first fully connected layer # apply linear transformation on incoming data self.fc1 = nn.Linear(in_features=28*28, out_features=64) """ nn.Linear(in_features, out_features, bias=True) Args: in_features: size of each input sample out_features: size of each output sample """ # fc2 must take in 64 self.fc2 = nn.Linear(in_features=64, out_features=64) self.fc3 = nn.Linear(in_features=64, out_features=64) # 10 classes -> output layer should have 10 nodes self.fc4 = nn.Linear(in_features=64, out_features=10) def forward(self, x): # defines the forward propagation # relu is an activation function x = F.relu(self.fc1(x)) # relu on first layer x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) # Output layer needs a multiclassifying transformation # log softmax works for this x = self.fc4(x) return F.log_softmax(x, dim=1) net = Net() print(net) ###Output Net( (fc1): Linear(in_features=784, out_features=64, bias=True) (fc2): Linear(in_features=64, out_features=64, bias=True) (fc3): Linear(in_features=64, out_features=64, bias=True) (fc4): Linear(in_features=64, out_features=10, bias=True) ) ###Markdown When you inherit, you inherit the methods and attributes the other module (`nn.Module`), however the initialization does not run. If you want the parent module to initialize too, you run `super().__init__()`. ###Code X = torch.rand((28,28)) ###Output _____no_output_____ ###Markdown You'll get an error from running`output = net(X)` :`RuntimeError: size mismatch, m1: [28 x 28], m2: [784 x 64]` ###Code X = torch.rand((28,28)) X = X.view(-1, 28 * 28) # pass data through the NN and get return output = net(X) output ###Output _____no_output_____ ###Markdown Train the NN, i.e. optimize weights in an attempt to minimize loss ###Code # optimize with Adam algorithm. optimizer = torch.optim.Adam(net.parameters()) """ torch.optim.Adam? 
torch.optim.Adam( params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False,) Args: params (iterable): iterable of parameters to optimize or dicts defining parameter groups lr (float, optional): learning rate (default: 1e-3) betas (Tuple[float, float], optional): coefficients used for computing """ n_epochs = 3 # num of full passes throuagh data for epoch in range(n_epochs): for data in trainset: # data is a batch w/ featurs and targets X, Y = data net.zero_grad() # Zero the gradient buffers of all params output = net(X.view(-1, 28*28)) # Loss metric: nll -> negative log likelihood loss = F.nll_loss(output, Y) # Use nll_loss when data is scalar # Use MSE when data is one-hot loss.backward() # backward optimizer.step() # adjusts weights print(loss) # should see loss decreasing ###Output tensor(0.0029, grad_fn=<NllLossBackward>) tensor(0.0754, grad_fn=<NllLossBackward>) tensor(0.3549, grad_fn=<NllLossBackward>) ###Markdown Q: What is `loss.backward` doing?When you call `loss.backward()`, PyTorch computes the gradient of loss w.r.t all the parameters in loss that have `requires_grad = True` and store them in `parameter.grad` attribute for every parameter.Q: What is `optimizer.step` doing?`optimizer.step()` updates all the parameters based on `parameter.grad` Predictions on the test set ###Code correct = 0 total = 0 # without calculating gradients: with torch.no_grad(): # for data vector in dataset for data in testset: # X, Y are the feature and target vectors X, Y = data output = net(X.view(-1, 28*28)) for idx, i in enumerate(output): if torch.argmax(i) == Y[idx]: correct += 1 total += 1 print(f"Accuracy (Test Set): {(correct/total):.3f}") # display the X[0] image plt.imshow(X[0].view(28,28)) plt.show() # Print prediction -> pass X[0] thru NN print(torch.argmax(net(X[0].view(-1, 28*28))[0])) # X[0].view(-1, 28*28) -> reshapes for NN ###Output tensor(4) ###Markdown Other considerations (optional) In general, we want the dataset to be as balanced as possible so that the model doesn't train itself into a local minima of loss that it cannot get out of. Below is a scheme for checking how balanced the dataset is. ###Code # Build dictionary of target counts counter_dict = {0:0, 1:0, 2:0, 3:0, 4:0, 5:0, 6:0, 7:0, 8:0, 9:0} total = 0 for data in trainset: Xs, Ys = data for Y in Ys: counter_dict[int(Y)] += 1 total += 1 print(counter_dict) def counts_barplot(): x = list(counter_dict.keys()) y = list(counter_dict.values()) ax = sns.barplot(x, y) ax.set(title='Dataset Balance Chart', xlabel='Digit', ylabel='Count') plt.show() counts_barplot() # print dataset balance percentages using f string for i in counter_dict: proportion = counter_dict[i]/total*100 print(f'{i}: {proportion:.2f}') ###Output 0: 9.87 1: 11.24 2: 9.93 3: 10.22 4: 9.74 5: 9.04 6: 9.86 7: 10.44 8: 9.75 9: 9.92
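###Markdown
MNIST turns out to be nearly balanced, but if it were not, one common remedy is to weight the loss by inverse class frequency. A rough sketch, reusing `counter_dict` from the cell above (the weighted call is left as a comment because it belongs inside the training loop):

```python
import torch
import torch.nn.functional as F

# Inverse-frequency weights: rarer classes get larger weights
counts = torch.tensor([counter_dict[i] for i in range(10)], dtype=torch.float)
class_weights = counts.sum() / (len(counts) * counts)
print(class_weights)

# Inside the training loop you would then pass the weights to the loss:
# loss = F.nll_loss(output, Y, weight=class_weights)
```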
tutorials/nuplan_framework.ipynb
###Markdown ![](https://www.nuplan.org/static/media/nuPlan_final.3fde7586.png) Contents1. [Introduction to nuPlan](introduction)2. [Training an ML planner](training)3. [Simulating a planner](simulation)4. [Visualizing metrics and scenarios](dashboard) Introduction to nuPlan Welcome to nuPlan! This notebook will explore the nuPlan simulation framework, training platform as well as the nuBoard metrics/scenarios visualization dashboard. What is nuPlannuPlan is the world’s first closed-loop ML-based planning benchmark for autonomous driving.It provides a high quality dataset with 1500h of human driving data from 4 cities across the US and Asia with widely varying traffic patterns (Boston, Pittsburgh, Las Vegas and Singapore). In addition, it provides a closed-loop simulation framework with reactive agents, a training platform as well as a large set of both general and scenario-specific planning metrics.![](https://www.nuscenes.org/static/media/framework_steps.2d4642df.png) Training & simulation frameworkThe nuPlan training and simulation framework aims to:* create a simulation pipeline to evaluate a planner on large dataset with various scenarios* score planner performance with common and scenario-dependent metrics* compare planners based on measured metrics and provide intuitive visualizations* train planners with the provided framework to allow quick implementation and iteration* support closed-loop simulation and training![](https://www.nuplan.org/static/media/planning_framework.ca3c2969.png) Scenarios in nuPlannuPlan aims to capture challenging yet representative scenarios from real-world encounters. This enables the benchmarking of planning systems both in expert imitation (open-loop) and reactive planning (closed-loop) settings.These scenarios includes:* highly interactive scenes with traffic participants (e.g. tailgating, high-velocity overtakes, double parked cars, jaywalking)* various ego behaviors (e.g. vehicle following, yielding, lane merging) and dynamics (e.g. mixed speed profiles, abrupt braking, speed bumps, high jerk maneuvers)* scene layouts of varied complexity (e.g. pudos, traffic/stop controlled intersections, unprotected turns) and temporary zones (e.g. 
construction areas)The dataset is automatically tagged with scenario labels based on certain primitive attributes.These scenario tags can then be used to extract representative metrics for the planner's evaluation.Example mined scenarios in nuPlan:| | | || :-: | :-: | :-: || Unprotected cross turn | Dense vehicle interactions | Jaywalker in front || ![](https://www.nuscenes.org/static/media/unprotected-cross.51feef7e.webp) | ![](https://www.nuscenes.org/static/media/dense-interactions.16de47ec.webp) | ![](https://www.nuscenes.org/static/media/jaywalker.03083823.webp) || Lane change | Ego at pickup/dropoff area | Ego following vehicle || ![](https://www.nuscenes.org/static/media/lane-change.54bfca1c.webp) | ![](https://www.nuscenes.org/static/media/pickup-dropoff.4dd1c418.webp) | ![](https://www.nuscenes.org/static/media/following-vehicle.4cacd559.webp) | DatabaseDownload a database for training/simulation from [here](https://nuplan.org/nuplandownload).| Database | Size | Duration | Num Logs | Cities | Num Scenarios | Sensor Data | Description || :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- || nuplan_v0.1_mini (recommended) | ~5GB | 2.5h | 8 | Las Vegas | 14 | N/A | nuPlan teaser database (mini version) || nuplan_v0.1 | ~500GB | ~230h | 380 | Las Vegas | 14 | N/A | nuPlan teaser database |The full 1500h dataset with driving data from 4 cities (Boston, Pittsburgh, Las Vegas and Singapore) as well as sensor data will be released in Q1 2022. SetupTo be able to access all resources within this notebook, make sure Jupyter is launched at the root of this repo. The path of the notebook should be `/notebook/`. ###Code # (Optional) Increase notebook width for all embedded cells to display properly from IPython.core.display import display, HTML display(HTML("<style>.output_result { max-width:100% !important; }</style>")) display(HTML("<style>.container { width:100% !important; }</style>")) # Useful imports import os from pathlib import Path import tempfile import hydra ###Output _____no_output_____ ###Markdown Training an ML planner Imitation learningIn the following section we will train an ML planning policy with the aim estimate the ego's future trajectory and control the vehicle.The policy is learned through imitation learning, a supervised learning approach in which - in the context of autonomous driving - the behavior of an expert human driver is used as a target signal to supervise the model. Model features & targetsA planning policy consumes a set of episodic observations and encodes them through a deep neural network to regress a future trajectory.The observations can be historic or present ego and agent poses as well as static/dynamic map information across different map layers.These signals can be encoded through various representations, such as raster or vector format for the map signal, each with their pros and cons for each model flavor.Using these input features the model predicts a discretized future trajectory across a fixed time horizon.The trajectory consists of a set of discrete future states (position, heading and velocity) sampled at fixed intervals which express the likelihood of the vehicle being at that state in the future.For example, a predicted trajectory may consist of 10 future poses sampled at intervals of 0.5s across a 5s horizon. 
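To make that representation concrete, the sketch below builds an illustrative array of the same shape (not the actual nuPlan trajectory class, just a way to picture the output):

```python
import numpy as np

horizon_s, interval_s = 5.0, 0.5
num_poses = int(horizon_s / interval_s)                 # 10 future poses
timestamps = np.arange(1, num_poses + 1) * interval_s   # 0.5s, 1.0s, ..., 5.0s
trajectory = np.zeros((num_poses, 4))                   # columns: x, y, heading, velocity
print(num_poses, timestamps[-1], trajectory.shape)      # 10 5.0 (10, 4)
```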
Learning objectivesThe policy is trained to maximize a set of aggregated objectives such as imitation, collision avoidance, traffic rule violation etc.Imitation is the core training objective which indicates how close the predicted trajectory is to the expert ground truth and penalizes model predictions that deviate in space and time from the demonstration. Training parameters The following parameter categories define the training protocol which includes the model, metrics, objectives etc.A working example composition of these parameters can be found in the next section.--- ML modelsChange the training model with `model=X` where `X` is a config yaml defined in the table below. | Model | Description | Config || --- | --- | --- || Raster model (CNN) | Raster-based model that uses a CNN backbone to encode ego, agent and map information as raster layersAny (pretrained) backbone from the TIMM library can be used (e.g. ResNet50, EfficientNetB3) | `raster_model` || Vector model (LaneGCN) | Vector-based model that uses a series of MLPs to encode ego and agent signals, a lane graph to encode vector-map elements and a fusion network to capture lane & agent intra/inter-interactions through attention layersImplementation of LaneGCN paper ("Learning Lane Graph Representations for Motion Forecasting") | `vector_model` || Simple vector model | Toy vector-based model that consumes ego, agent and lane signals through a series of MLPs | `simple_vector_model` | Training objectivesChange the training objectives with `objective=[X, ...]` where `X` is a config yaml defined in the table below. | Objective | Description | Config || --- | --- | --- || Imitation objective | Penalizes the predicted trajectory that deviates from the expert demonstration | `imitation_objective` | Training metricsChange the training objectives with `training_metric=[X, ...]` where `X` is a config yaml defined in the table below. 
| Metric | Description | Config || --- | --- | --- || Average displacement error | RMSE translation error across full predicted trajectory | `avg_displacement_error` || Average heading error | RMSE heading error across full predicted trajectory | `avg_heading_error` || Final displacement error | L2 error of predicted trajectory's final pose translation | `final_displacement_error` || Final heading error | L2 error of predicted trajectory's final pose heading | `final_heading_error` | Prepare the training config ###Code # Location of path with all training configs CONFIG_PATH = '../nuplan/planning/script/config/training' CONFIG_NAME = 'default_training' # Create a temporary directory to store the cache and experiment artifacts SAVE_DIR = Path(tempfile.gettempdir()) / 'tutorial_nuplan_framework' # optionally replace with persistent dir EXPERIMENT = 'training_raster_experiment' LOG_DIR = str(SAVE_DIR / EXPERIMENT) # Initialize configuration management system hydra.core.global_hydra.GlobalHydra.instance().clear() hydra.initialize(config_path=CONFIG_PATH) # Compose the configuration cfg = hydra.compose(config_name=CONFIG_NAME, overrides=[ f'group={str(SAVE_DIR)}', f'cache_dir={str(SAVE_DIR)}/cache', f'experiment_name={EXPERIMENT}', 'py_func=train', '+training=training_raster_model', # raster model that consumes ego, agents and map raster layers and regresses the ego's trajectory 'scenario_builder=nuplan_mini', # use nuplan mini database 'scenario_builder.nuplan.scenario_filter.limit_scenarios_per_type=500', # Choose 500 scenarios to train with 'scenario_builder.nuplan.scenario_filter.subsample_ratio=0.01', # subsample scenarios from 20Hz to 0.2Hz 'lightning.trainer.params.accelerator=ddp_spawn', # ddp is not allowed in interactive environment, using ddp_spawn instead - this can bottleneck the data pipeline, it is recommended to run training outside the notebook 'lightning.trainer.params.max_epochs=10', 'data_loader.params.batch_size=8', 'data_loader.params.num_workers=8', ]) ###Output _____no_output_____ ###Markdown Launch tensorboard for visualizing training artifacts ###Code %load_ext tensorboard %tensorboard --logdir {LOG_DIR} ###Output _____no_output_____ ###Markdown Launch training (within the notebook) ###Code from nuplan.planning.script.run_training import main as main_train # Run the training loop, optionally inspect training artifacts through tensorboard (above cell) main_train(cfg) ###Output _____no_output_____ ###Markdown Launch training (command line - alternative) A training experiment with the above same parameters can be launched alternatively with:```$ python nuplan/planning/script/run_training.py \ experiment_name=raster_experiment \ py_func=train \ +training=training_raster_model \ scenario_builder=nuplan_mini \ scenario_builder.nuplan.scenario_filter.limit_scenarios_per_type=500 \ scenario_builder.nuplan.scenario_filter.subsample_ratio=0.01 \ lightning.trainer.params.max_epochs=10 \ data_loader.params.batch_size=8 \ data_loader.params.num_workers=8``` Simulating a planner Open-loop simulationOpen-loop simulation aims to evaluate the policy's capabilities to imitate the expert driver's behavior.This is essentially done through log replay as the policy's predictions do not affect the state of the simulation.As the policy is not in full control of the vehicle, this type of simulation can only provide a high-level performance overview. 
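In practice, open-loop scores largely come down to comparing the planned trajectory against the logged expert trajectory. The toy computation below (made-up numbers, not the nuPlan metric implementation) illustrates the idea behind displacement-error style metrics:

```python
import numpy as np

# Made-up (x, y) trajectories: 10 planned poses vs. 10 logged expert poses
planned = np.cumsum(np.full((10, 2), 0.50), axis=0)
expert = np.cumsum(np.full((10, 2), 0.55), axis=0)

per_pose_error = np.linalg.norm(planned - expert, axis=1)  # L2 error per pose
avg_error = per_pose_error.mean()     # average displacement-style error
final_error = per_pose_error[-1]      # final displacement-style error
print(f"average: {avg_error:.2f} m, final: {final_error:.2f} m")
```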
Closed-loop simulationConversely, in closed-loop simulation the policy's actions alter the state of the simulation which tries to closely approximate the real-world system.The simulation's feedback loop enables a more in-depth evaluation of the policy as compounding errors can cause future observations to significantly diverge from the ground truth.This is important in measuring distribution shifts introduced due to lack of variance in training examples through pure imitation learning.Closed-loop simulation is further divided into two categories:* ego closed-loop simulation with agents replayed from log (open-loop, non reactive)* ego closed-loop simulation with agents controlled by a rule-based or learned policy (closed-loop, reactive) Measuring successMeasuring the success of a planning task and comparing various planning policies is a complicated effort that involves defining metrics across different vertical dimensions and scenario categories.These metrics include indicators such as vehicle dynamics, traffic rule violations, expert imitation, navigation success etc.Overall, they aim to capture the policy's ability to control the autonomous vehicle safely yet efficiently without compromising the passenger's comfort. Simulation parameters PlannersChange the planner model with `planner=X` where `X` is a config yaml defined in the table below. | Planner | Description | Config || --- | --- | --- || Simple Planner | Naive planner that only plans a straight path | `simple_planner` || ML Planner | Learning-based planner trained using the nuPlan training framework (see previous section) | `ml_planner` | Prepare the simulation config ###Code # Location of path with all simulation configs CONFIG_PATH = '../nuplan/planning/script/config/simulation' CONFIG_NAME = 'default_simulation' # Select the planner and simulation challenge PLANNER = 'simple_planner' # [simple_planner, ml_planner] CHALLENGE = 'challenge_1_open_loop_boxes' # [challenge_1_open_loop_boxes, challenge_3_closed_loop_nonreactive_agents, challenge_4_closed_loop_reactive_agents] DATASET_PARAMS = [ 'scenario_builder=nuplan_mini', # use nuplan mini database 'scenario_builder/nuplan/scenario_filter=all_scenarios', # initially select all scenarios in the database 'scenario_builder.nuplan.scenario_filter.scenario_types=[nearby_dense_vehicle_traffic, ego_at_pudo, ego_starts_unprotected_cross_turn, ego_high_curvature]', # select scenario types 'scenario_builder.nuplan.scenario_filter.limit_scenarios_per_type=10', # use 10 scenarios per scenario type 'scenario_builder.nuplan.scenario_filter.subsample_ratio=0.05', # subsample 20s scenario from 20Hz to 1Hz ] # Name of the experiment EXPERIMENT = 'simulation_simple_experiment' # Initialize configuration management system hydra.core.global_hydra.GlobalHydra.instance().clear() # reinitialize hydra if already initialized hydra.initialize(config_path=CONFIG_PATH) # Compose the configuration cfg = hydra.compose(config_name=CONFIG_NAME, overrides=[ f'experiment_name={EXPERIMENT}', f'group={SAVE_DIR}', f'planner={PLANNER}', f'+simulation={CHALLENGE}', *DATASET_PARAMS, ]) ###Output _____no_output_____ ###Markdown Launch simulation (within the notebook) ###Code from nuplan.planning.script.run_simulation import main as main_simulation # Run the simulation loop (real-time visualization not yet supported, see next section for visualization) main_simulation(cfg) # Fetch the filesystem location of the simulation results file for visualization in nuBoard (next section) parent_dir = Path(SAVE_DIR) / EXPERIMENT 
results_dir = list(parent_dir.iterdir())[0] # get the child dir nuboard_file_1 = [str(file) for file in results_dir.iterdir() if file.is_file() and file.suffix == '.nuboard'][0] ###Output _____no_output_____ ###Markdown Launch simulation (command line - alternative) A simulation experiment can be launched alternatively with:```$ python nuplan/planning/script/run_simulation.py \ +simulation=challenge_1_open_loop_boxes \ planner=simple_planner \ scenario_builder=nuplan_mini \ scenario_builder/nuplan/scenario_filter=all_scenarios \ scenario_builder.nuplan.scenario_filter.scenario_types="[nearby_dense_vehicle_traffic, ego_at_pudo, ego_starts_unprotected_cross_turn, ego_high_curvature]" \ scenario_builder.nuplan.scenario_filter.limit_scenarios_per_type=10 \ scenario_builder.nuplan.scenario_filter.subsample_ratio=0.05``` Simulate a trained ML planner for comparisonUsing the same simulation settings as before, we can simulate a pretrained ML planner and compare the two.In this example you can take the model you trained earlier. ###Code # Location of path with all simulation configs CONFIG_PATH = '../nuplan/planning/script/config/simulation' CONFIG_NAME = 'default_simulation' # Get the checkpoint of the trained model last_experiment = sorted(os.listdir(LOG_DIR))[-1] train_experiment_dir = sorted(Path(LOG_DIR).iterdir())[-1] checkpoint = sorted((train_experiment_dir / 'checkpoints').iterdir())[-1] MODEL_PATH = str(checkpoint).replace("=", "\=") # Name of the experiment EXPERIMENT = 'simulation_raster_experiment' # Initialize configuration management system hydra.core.global_hydra.GlobalHydra.instance().clear() # reinitialize hydra if already initialized hydra.initialize(config_path=CONFIG_PATH) # Compose the configuration cfg = hydra.compose(config_name=CONFIG_NAME, overrides=[ f'experiment_name={EXPERIMENT}', f'group={SAVE_DIR}', 'planner=ml_planner', 'model=raster_model', 'planner.model_config=${model}', # hydra notation to select model config f'planner.checkpoint_path={MODEL_PATH}', # this path can be replaced by the checkpoint of the model trained in the previous section f'+simulation={CHALLENGE}', *DATASET_PARAMS, ]) # Run the simulation loop main_simulation(cfg) # Fetch the filesystem location of the simulation results file for visualization in nuBoard (next section) parent_dir = Path(SAVE_DIR) / EXPERIMENT results_dir = list(parent_dir.iterdir())[0] # get the child dir nuboard_file_2 = [str(file) for file in results_dir.iterdir() if file.is_file() and file.suffix == '.nuboard'][0] ###Output _____no_output_____ ###Markdown Visualizing metrics and scenarios nuBoard summaryHaving trained and simulated planners across various scenarios and driving behaviors, it's time to evaluate them:* quantitatively, through common and scenario dependent metrics* qualitatively, through visualization of scenario progression nuBoard tabsTo achieve that, nuBoard has 3 core evaluation tabs:1. Overview - Scalar metrics summary of common and scenario metrics across the following categories: * Ego dynamics * Traffic violations * Expert imitation * Planning & navigation * Scenario performance2. Histograms - Histograms over metric statistics for more a granular peek inside each metric focusing on: * Metric statistics (e.g. min, max, p90)3. Scenarios - Low-level scenario visualizations: * Time-series progression of a specific metric across a scenario * Top-down visualization of the scenario across time for comparing predicted vs. 
expert trajectoriesIn addition, there is a main configuration tab for selecting different simulation files for comparing planners/experiments.**NOTE**: nuBoard is under heavy developement, overall functionality and aesthetics do not represent the final product! Prepare the nuBoard config ###Code # Location of path with all nuBoard configs CONFIG_PATH = '../nuplan/planning/script/config/nuboard' CONFIG_NAME = 'default_nuboard' # Initialize configuration management system hydra.core.global_hydra.GlobalHydra.instance().clear() # reinitialize hydra if already initialized hydra.initialize(config_path=CONFIG_PATH) # Compose the configuration cfg = hydra.compose(config_name=CONFIG_NAME, overrides=[ 'scenario_builder=nuplan_mini', # set the database (same as simulation) used to fetch data for visualization f'simulation_path={[nuboard_file_1, nuboard_file_2]}', # nuboard file path(s), if left empty the user can open the file inside nuBoard ]) ###Output _____no_output_____ ###Markdown Launch nuBoard (open in new tab - recommended) ###Code from nuplan.planning.script.run_nuboard import main as main_nuboard # Run nuBoard main_nuboard(cfg) ###Output _____no_output_____ ###Markdown Launch nuBoard (embedded within the notebook - alternative) ###Code from bokeh.io import show, output_notebook from nuplan.planning.script.run_nuboard import initialize_nuboard # Make sure that the notebook working directory is "/notebooks" and that Jupyter was launched at the root of the repo cfg.bokeh.resource_prefix = '/notebooks/nuplan/planning/metrics/board/' # pass CSS resources to the notebook # Run the nuBoard output_notebook() nuboard = initialize_nuboard(cfg) show(nuboard.main_page) ###Output _____no_output_____ ###Markdown ![](https://www.nuplan.org/static/media/nuPlan_final.3fde7586.png) Contents1. [Introduction to nuPlan](introduction)2. [Training an ML planner](training)3. [Simulating a planner](simulation)4. [Visualizing metrics and scenarios](dashboard) Introduction to nuPlan Welcome to nuPlan! This notebook will explore the nuPlan simulation framework, training platform as well as the nuBoard metrics/scenarios visualization dashboard. What is nuPlannuPlan is the world’s first closed-loop ML-based planning benchmark for autonomous driving.It provides a high quality dataset with 1500h of human driving data from 4 cities across the US and Asia with widely varying traffic patterns (Boston, Pittsburgh, Las Vegas and Singapore). In addition, it provides a closed-loop simulation framework with reactive agents, a training platform as well as a large set of both general and scenario-specific planning metrics.![](https://www.nuscenes.org/static/media/framework_steps.2d4642df.png) Training & simulation frameworkThe nuPlan training and simulation framework aims to:* create a simulation pipeline to evaluate a planner on large dataset with various scenarios* score planner performance with common and scenario-dependent metrics* compare planners based on measured metrics and provide intuitive visualizations* train planners with the provided framework to allow quick implementation and iteration* support closed-loop simulation and training![](https://www.nuplan.org/static/media/planning_framework.ca3c2969.png) Scenarios in nuPlannuPlan aims to capture challenging yet representative scenarios from real-world encounters. 
This enables the benchmarking of planning systems both in expert imitation (open-loop) and reactive planning (closed-loop) settings.These scenarios includes:* highly interactive scenes with traffic participants (e.g. tailgating, high-velocity overtakes, double parked cars, jaywalking)* various ego behaviors (e.g. vehicle following, yielding, lane merging) and dynamics (e.g. mixed speed profiles, abrupt braking, speed bumps, high jerk maneuvers)* scene layouts of varied complexity (e.g. pudos, traffic/stop controlled intersections, unprotected turns) and temporary zones (e.g. construction areas)The dataset is automatically tagged with scenario labels based on certain primitive attributes.These scenario tags can then be used to extract representative metrics for the planner's evaluation.Example mined scenarios in nuPlan:| | | || :-: | :-: | :-: || Unprotected cross turn | Dense vehicle interactions | Jaywalker in front || ![](https://www.nuscenes.org/static/media/unprotected-cross.51feef7e.webp) | ![](https://www.nuscenes.org/static/media/dense-interactions.16de47ec.webp) | ![](https://www.nuscenes.org/static/media/jaywalker.03083823.webp) || Lane change | Ego at pickup/dropoff area | Ego following vehicle || ![](https://www.nuscenes.org/static/media/lane-change.54bfca1c.webp) | ![](https://www.nuscenes.org/static/media/pickup-dropoff.4dd1c418.webp) | ![](https://www.nuscenes.org/static/media/following-vehicle.4cacd559.webp) | DatabaseDownload a database for training/simulation from [here](https://nuplan.org/nuplandownload).| Database | Size | Duration | Num Logs | Cities | Num Scenarios | Sensor Data | Description || :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- || nuplan_v0.1_mini (recommended) | ~5GB | 2.5h | 8 | Las Vegas | 14 | N/A | nuPlan teaser database (mini version) || nuplan_v0.1 | ~500GB | ~230h | 380 | Las Vegas | 14 | N/A | nuPlan teaser database |The full 1500h dataset with driving data from 4 cities (Boston, Pittsburgh, Las Vegas and Singapore) as well as sensor data will be released in Q1 2022. SetupTo be able to access all resources within this notebook, make sure Jupyter is launched at the root of this repo. The path of the notebook should be `/notebook/`. ###Code # (Optional) Increase notebook width for all embedded cells to display properly from IPython.core.display import display, HTML display(HTML("<style>.output_result { max-width:100% !important; }</style>")) display(HTML("<style>.container { width:100% !important; }</style>")) # Useful imports import os from pathlib import Path import tempfile import hydra ###Output _____no_output_____ ###Markdown Training an ML planner Imitation learningIn the following section we will train an ML planning policy with the aim estimate the ego's future trajectory and control the vehicle.The policy is learned through imitation learning, a supervised learning approach in which - in the context of autonomous driving - the behavior of an expert human driver is used as a target signal to supervise the model. 
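The heart of that supervision can be sketched as a simple regression loss between predicted and expert trajectories (illustrative tensors only; the framework's actual objectives are configured through the `objective` yaml files described below):

```python
import torch
import torch.nn.functional as F

# Toy batch for illustration: 4 trajectories, 10 poses, (x, y, heading) each
predicted = torch.randn(4, 10, 3, requires_grad=True)
expert = torch.randn(4, 10, 3)

imitation_loss = F.mse_loss(predicted, expert)  # penalize deviation from the expert
imitation_loss.backward()                       # gradients flow back to the model
print(imitation_loss.item())
```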
Model features & targetsA planning policy consumes a set of episodic observations and encodes them through a deep neural network to regress a future trajectory.The observations can be historic or present ego and agent poses as well as static/dynamic map information across different map layers.These signals can be encoded through various representations, such as raster or vector format for the map signal, each with their pros and cons for each model flavor.Using these input features the model predicts a discretized future trajectory across a fixed time horizon.The trajectory consists of a set of discrete future states (position, heading and velocity) sampled at fixed intervals which express the likelihood of the vehicle being at that state in the future.For example, a predicted trajectory may consist of 10 future poses sampled at intervals of 0.5s across a 5s horizon. Learning objectivesThe policy is trained to maximize a set of aggregated objectives such as imitation, collision avoidance, traffic rule violation etc.Imitation is the core training objective which indicates how close the predicted trajectory is to the expert ground truth and penalizes model predictions that deviate in space and time from the demonstration. Training parameters The following parameter categories define the training protocol which includes the model, metrics, objectives etc.A working example composition of these parameters can be found in the next section.--- ML modelsChange the training model with `model=X` where `X` is a config yaml defined in the table below. | Model | Description | Config || --- | --- | --- || Raster model (CNN) | Raster-based model that uses a CNN backbone to encode ego, agent and map information as raster layersAny (pretrained) backbone from the TIMM library can be used (e.g. ResNet50, EfficientNetB3) | `raster_model` || Vector model (LaneGCN) | Vector-based model that uses a series of MLPs to encode ego and agent signals, a lane graph to encode vector-map elements and a fusion network to capture lane & agent intra/inter-interactions through attention layersImplementation of LaneGCN paper ("Learning Lane Graph Representations for Motion Forecasting") | `vector_model` || Simple vector model | Toy vector-based model that consumes ego, agent and lane signals through a series of MLPs | `simple_vector_model` | Training objectivesChange the training objectives with `objective=[X, ...]` where `X` is a config yaml defined in the table below. | Objective | Description | Config || --- | --- | --- || Imitation objective | Penalizes the predicted trajectory that deviates from the expert demonstration | `imitation_objective` | Training metricsChange the training objectives with `training_metric=[X, ...]` where `X` is a config yaml defined in the table below. 
| Metric | Description | Config || --- | --- | --- || Average displacement error | RMSE translation error across full predicted trajectory | `avg_displacement_error` || Average heading error | RMSE heading error across full predicted trajectory | `avg_heading_error` || Final displacement error | L2 error of predicted trajectory's final pose translation | `final_displacement_error` || Final heading error | L2 error of predicted trajectory's final pose heading | `final_heading_error` | Prepare the training config ###Code # Location of path with all training configs CONFIG_PATH = '../nuplan/planning/script/config/training' CONFIG_NAME = 'default_training' # Create a temporary directory to store the cache and experiment artifacts SAVE_DIR = Path(tempfile.gettempdir()) / 'tutorial_nuplan_framework' # optionally replace with persistent dir EXPERIMENT = 'training_raster_experiment' LOG_DIR = str(SAVE_DIR / EXPERIMENT) # Initialize configuration management system hydra.core.global_hydra.GlobalHydra.instance().clear() hydra.initialize(config_path=CONFIG_PATH) # Compose the configuration cfg = hydra.compose(config_name=CONFIG_NAME, overrides=[ f'group={str(SAVE_DIR)}', f'cache.cache_path={str(SAVE_DIR)}/cache', f'experiment_name={EXPERIMENT}', 'py_func=train', '+training=training_raster_model', # raster model that consumes ego, agents and map raster layers and regresses the ego's trajectory 'scenario_builder=nuplan_mini', # use nuplan mini database 'scenario_filter.limit_total_scenarios=500', # Choose 500 scenarios to train with 'lightning.trainer.params.accelerator=ddp_spawn', # ddp is not allowed in interactive environment, using ddp_spawn instead - this can bottleneck the data pipeline, it is recommended to run training outside the notebook 'lightning.trainer.params.max_epochs=10', 'data_loader.params.batch_size=8', 'data_loader.params.num_workers=8', ]) ###Output _____no_output_____ ###Markdown Launch tensorboard for visualizing training artifacts ###Code %load_ext tensorboard %tensorboard --logdir {LOG_DIR} ###Output _____no_output_____ ###Markdown Launch training (within the notebook) ###Code from nuplan.planning.script.run_training import main as main_train # Run the training loop, optionally inspect training artifacts through tensorboard (above cell) main_train(cfg) ###Output _____no_output_____ ###Markdown Launch training (command line - alternative) A training experiment with the above same parameters can be launched alternatively with:```$ python nuplan/planning/script/run_training.py \ experiment_name=raster_experiment \ py_func=train \ +training=training_raster_model \ scenario_builder=nuplan_mini \ scenario_filter.limit_total_scenarios=500 \ lightning.trainer.params.max_epochs=10 \ data_loader.params.batch_size=8 \ data_loader.params.num_workers=8``` Simulating a planner Open-loop simulationOpen-loop simulation aims to evaluate the policy's capabilities to imitate the expert driver's behavior.This is essentially done through log replay as the policy's predictions do not affect the state of the simulation.As the policy is not in full control of the vehicle, this type of simulation can only provide a high-level performance overview. 
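As a mental model (stand-in names, not the nuPlan API): during log replay the planner is still queried at every step, but the next observation always comes from the recorded log, so the planner's output never feeds back into the scene:

```python
# Stand-in objects purely for illustration
logged_observations = [f"logged_frame_{t}" for t in range(5)]

def toy_planner(observation):
    return f"plan_for_{observation}"

for obs in logged_observations:   # observations are replayed from the log
    plan = toy_planner(obs)       # the plan is scored, but never alters `obs`
    print(obs, "->", plan)
```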
Closed-loop simulationConversely, in closed-loop simulation the policy's actions alter the state of the simulation which tries to closely approximate the real-world system.The simulation's feedback loop enables a more in-depth evaluation of the policy as compounding errors can cause future observations to significantly diverge from the ground truth.This is important in measuring distribution shifts introduced due to lack of variance in training examples through pure imitation learning.Closed-loop simulation is further divided into two categories:* ego closed-loop simulation with agents replayed from log (open-loop, non reactive)* ego closed-loop simulation with agents controlled by a rule-based or learned policy (closed-loop, reactive) Measuring successMeasuring the success of a planning task and comparing various planning policies is a complicated effort that involves defining metrics across different vertical dimensions and scenario categories.These metrics include indicators such as vehicle dynamics, traffic rule violations, expert imitation, navigation success etc.Overall, they aim to capture the policy's ability to control the autonomous vehicle safely yet efficiently without compromising the passenger's comfort. Simulation parameters PlannersChange the planner model with `planner=X` where `X` is a config yaml defined in the table below. | Planner | Description | Config || --- | --- | --- || Simple Planner | Naive planner that only plans a straight path | `simple_planner` || ML Planner | Learning-based planner trained using the nuPlan training framework (see previous section) | `ml_planner` | Prepare the simulation config ###Code # Location of path with all simulation configs CONFIG_PATH = '../nuplan/planning/script/config/simulation' CONFIG_NAME = 'default_simulation' # Select the planner and simulation challenge PLANNER = 'simple_planner' # [simple_planner, ml_planner] CHALLENGE = 'open_loop_boxes' # [open_loop_boxes, closed_loop_nonreactive_agents, closed_loop_reactive_agents] DATASET_PARAMS = [ 'scenario_builder=nuplan_mini', # use nuplan mini database 'scenario_filter=all_scenarios', # initially select all scenarios in the database 'scenario_filter.scenario_types=[near_multiple_vehicles, on_pickup_dropoff, starting_unprotected_cross_turn, high_magnitude_jerk]', # select scenario types 'scenario_filter.num_scenarios_per_type=10', # use 10 scenarios per scenario type ] # Name of the experiment EXPERIMENT = 'simulation_simple_experiment' # Initialize configuration management system hydra.core.global_hydra.GlobalHydra.instance().clear() # reinitialize hydra if already initialized hydra.initialize(config_path=CONFIG_PATH) # Compose the configuration cfg = hydra.compose(config_name=CONFIG_NAME, overrides=[ f'experiment_name={EXPERIMENT}', f'group={SAVE_DIR}', f'planner={PLANNER}', f'+simulation={CHALLENGE}', *DATASET_PARAMS, ]) ###Output _____no_output_____ ###Markdown Launch simulation (within the notebook) ###Code from nuplan.planning.script.run_simulation import main as main_simulation # Run the simulation loop (real-time visualization not yet supported, see next section for visualization) main_simulation(cfg) # Fetch the filesystem location of the simulation results file for visualization in nuBoard (next section) parent_dir = Path(SAVE_DIR) / EXPERIMENT results_dir = list(parent_dir.iterdir())[0] # get the child dir nuboard_file_1 = [str(file) for file in results_dir.iterdir() if file.is_file() and file.suffix == '.nuboard'][0] ###Output _____no_output_____ ###Markdown Launch 
simulation (command line - alternative) A simulation experiment can be launched alternatively with:```$ python nuplan/planning/script/run_simulation.py \ +simulation=open_loop_boxes \ planner=simple_planner \ scenario_builder=nuplan_mini \ scenario_filter=all_scenarios \ scenario_filter.scenario_types="[near_multiple_vehicles, on_pickup_dropoff, starting_unprotected_cross_turn, high_magnitude_jerk]" \ scenario_filter.num_scenarios_per_type=10 \``` Simulate a trained ML planner for comparisonUsing the same simulation settings as before, we can simulate a pretrained ML planner and compare the two.In this example you can take the model you trained earlier. ###Code # Location of path with all simulation configs CONFIG_PATH = '../nuplan/planning/script/config/simulation' CONFIG_NAME = 'default_simulation' # Get the checkpoint of the trained model last_experiment = sorted(os.listdir(LOG_DIR))[-1] train_experiment_dir = sorted(Path(LOG_DIR).iterdir())[-1] checkpoint = sorted((train_experiment_dir / 'checkpoints').iterdir())[-1] MODEL_PATH = str(checkpoint).replace("=", "\=") # Name of the experiment EXPERIMENT = 'simulation_raster_experiment' # Initialize configuration management system hydra.core.global_hydra.GlobalHydra.instance().clear() # reinitialize hydra if already initialized hydra.initialize(config_path=CONFIG_PATH) # Compose the configuration cfg = hydra.compose(config_name=CONFIG_NAME, overrides=[ f'experiment_name={EXPERIMENT}', f'group={SAVE_DIR}', 'planner=ml_planner', 'model=raster_model', 'planner.ml_planner.model_config=${model}', # hydra notation to select model config f'planner.ml_planner.checkpoint_path={MODEL_PATH}', # this path can be replaced by the checkpoint of the model trained in the previous section f'+simulation={CHALLENGE}', *DATASET_PARAMS, ]) # Run the simulation loop main_simulation(cfg) # Fetch the filesystem location of the simulation results file for visualization in nuBoard (next section) parent_dir = Path(SAVE_DIR) / EXPERIMENT results_dir = list(parent_dir.iterdir())[0] # get the child dir nuboard_file_2 = [str(file) for file in results_dir.iterdir() if file.is_file() and file.suffix == '.nuboard'][0] ###Output _____no_output_____ ###Markdown Visualizing metrics and scenarios nuBoard summaryHaving trained and simulated planners across various scenarios and driving behaviors, it's time to evaluate them:* quantitatively, through common and scenario dependent metrics* qualitatively, through visualization of scenario progression nuBoard tabsTo achieve that, nuBoard has 3 core evaluation tabs:1. Overview - Scalar metrics summary of common and scenario metrics across the following categories: * Ego dynamics * Traffic violations * Expert imitation * Planning & navigation * Scenario performance2. Histograms - Histograms over metric statistics for more a granular peek inside each metric focusing on: * Metric statistics (e.g. min, max, p90)3. Scenarios - Low-level scenario visualizations: * Time-series progression of a specific metric across a scenario * Top-down visualization of the scenario across time for comparing predicted vs. expert trajectoriesIn addition, there is a main configuration tab for selecting different simulation files for comparing planners/experiments.**NOTE**: nuBoard is under heavy developement, overall functionality and aesthetics do not represent the final product! 
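For intuition, the per-metric statistics the histogram tab aggregates can be reproduced with plain numpy (made-up values, not the nuBoard implementation):

```python
import numpy as np

# Made-up per-scenario metric values, e.g. max jerk recorded in each scenario
metric_values = np.random.default_rng(0).normal(loc=1.0, scale=0.3, size=40)

stats = {
    "min": float(metric_values.min()),
    "max": float(metric_values.max()),
    "p90": float(np.percentile(metric_values, 90)),
}
print(stats)
```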
Prepare the nuBoard config ###Code # Location of path with all nuBoard configs CONFIG_PATH = '../nuplan/planning/script/config/nuboard' CONFIG_NAME = 'default_nuboard' # Initialize configuration management system hydra.core.global_hydra.GlobalHydra.instance().clear() # reinitialize hydra if already initialized hydra.initialize(config_path=CONFIG_PATH) # Compose the configuration cfg = hydra.compose(config_name=CONFIG_NAME, overrides=[ 'scenario_builder=nuplan_mini', # set the database (same as simulation) used to fetch data for visualization f'simulation_path={[nuboard_file_1, nuboard_file_2]}', # nuboard file path(s), if left empty the user can open the file inside nuBoard ]) ###Output _____no_output_____ ###Markdown Launch nuBoard (open in new tab - recommended) ###Code from nuplan.planning.script.run_nuboard import main as main_nuboard # Run nuBoard main_nuboard(cfg) ###Output _____no_output_____ ###Markdown Launch nuBoard (embedded within the notebook - alternative) ###Code from bokeh.io import show, output_notebook from nuplan.planning.script.run_nuboard import initialize_nuboard # Make sure that the notebook working directory is "/notebooks" and that Jupyter was launched at the root of the repo cfg.bokeh.resource_prefix = '/notebooks/nuplan/planning/metrics/board/' # pass CSS resources to the notebook # Run the nuBoard output_notebook() nuboard = initialize_nuboard(cfg) show(nuboard.main_page) ###Output _____no_output_____
Project2/finding_donors.ipynb
###Markdown 机器学习纳米学位 监督学习 项目2: 为*CharityML*寻找捐献者 欢迎来到机器学习工程师纳米学位的第二个项目!在此文件中,有些示例代码已经提供给你,但你还需要实现更多的功能让项目成功运行。除非有明确要求,你无须修改任何已给出的代码。以**'练习'**开始的标题表示接下来的代码部分中有你必须要实现的功能。每一部分都会有详细的指导,需要实现的部分也会在注释中以'TODO'标出。请仔细阅读所有的提示!除了实现代码外,你还必须回答一些与项目和你的实现有关的问题。每一个需要你回答的问题都会以**'问题 X'**为标题。请仔细阅读每个问题,并且在问题后的**'回答'**文字框中写出完整的答案。我们将根据你对问题的回答和撰写代码所实现的功能来对你提交的项目进行评分。>**提示:**Code 和 Markdown 区域可通过**Shift + Enter**快捷键运行。此外,Markdown可以通过双击进入编辑模式。 开始在这个项目中,你将使用1994年美国人口普查收集的数据,选用几个监督学习算法以准确地建模被调查者的收入。然后,你将根据初步结果从中选择出最佳的候选算法,并进一步优化该算法以最好地建模这些数据。你的目标是建立一个能够准确地预测被调查者年收入是否超过50000美元的模型。这种类型的任务会出现在那些依赖于捐款而存在的非营利性组织。了解人群的收入情况可以帮助一个非营利性的机构更好地了解他们要多大的捐赠,或是否他们应该接触这些人。虽然我们很难直接从公开的资源中推断出一个人的一般收入阶层,但是我们可以(也正是我们将要做的)从其他的一些公开的可获得的资源中获得一些特征从而推断出该值。这个项目的数据集来自[UCI机器学习知识库](https://archive.ics.uci.edu/ml/datasets/Census+Income)。这个数据集是由Ron Kohavi和Barry Becker在发表文章_"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_之后捐赠的,你可以在Ron Kohavi提供的[在线版本](https://www.aaai.org/Papers/KDD/1996/KDD96-033.pdf)中找到这个文章。我们在这里探索的数据集相比于原有的数据集有一些小小的改变,比如说移除了特征`'fnlwgt'` 以及一些遗失的或者是格式不正确的记录。 ---- 探索数据运行下面的代码单元以载入需要的Python库并导入人口普查数据。注意数据集的最后一列`'income'`将是我们需要预测的列(表示被调查者的年收入会大于或者是最多50,000美元),人口普查数据中的每一列都将是关于被调查者的特征。 ###Code # 检查你的Python版本 from sys import version_info if version_info.major != 2 and version_info.minor != 7: raise Exception('请使用Python 2.7来完成此项目') # 为这个项目导入需要的库 import numpy as np import pandas as pd from time import time from IPython.display import display # 允许为DataFrame使用display() # 导入附加的可视化代码visuals.py import visuals as vs # 为notebook提供更加漂亮的可视化 %matplotlib inline # 导入人口普查数据 data = pd.read_csv("census.csv") # 成功 - 显示第一条记录 display(data.head(n=1)) ###Output _____no_output_____ ###Markdown 练习:数据探索首先我们对数据集进行一个粗略的探索,我们将看看每一个类别里会有多少被调查者?并且告诉我们这些里面多大比例是年收入大于50,000美元的。在下面的代码单元中,你将需要计算以下量:- 总的记录数量,`'n_records'`- 年收入大于50,000美元的人数,`'n_greater_50k'`.- 年收入最多为50,000美元的人数 `'n_at_most_50k'`.- 年收入大于50,000美元的人所占的比例, `'greater_percent'`.**提示:** 您可能需要查看上面的生成的表,以了解`'income'`条目的格式是什么样的。 ###Code # TODO:总的记录数 n_records = len(data) # TODO:被调查者的收入大于$50,000的人数 n_greater_50k = len(data[data['income']=='>50K']) # TODO:被调查者的收入最多为$50,000的人数 n_at_most_50k = len(data[data['income']=='<=50K']) # TODO:被调查者收入大于$50,000所占的比例 greater_percent = n_greater_50k / float(n_records) * 100 # 打印结果 print "Total number of records: {}".format(n_records) print "Individuals making more than $50,000: {}".format(n_greater_50k) print "Individuals making at most $50,000: {}".format(n_at_most_50k) print "Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent) ###Output Total number of records: 45222 Individuals making more than $50,000: 11208 Individuals making at most $50,000: 34014 Percentage of individuals making more than $50,000: 24.78% ###Markdown ---- 准备数据在数据能够被作为输入提供给机器学习算法之前,它经常需要被清洗,格式化,和重新组织 - 这通常被叫做**预处理**。幸运的是,对于这个数据集,没有我们必须处理的无效或丢失的条目,然而,由于某一些特征存在的特性我们必须进行一定的调整。这个预处理都可以极大地帮助我们提升几乎所有的学习算法的结果和预测能力。 获得特征和标签`income` 列是我们需要的标签,记录一个人的年收入是否高于50K。 因此我们应该把他从数据中剥离出来,单独存放。 ###Code # 将数据切分成特征和对应的标签 income_raw = data['income'] features_raw = data.drop('income', axis = 1) ###Output _____no_output_____ ###Markdown 转换倾斜的连续特征一个数据集有时可能包含至少一个靠近某个数字的特征,但有时也会有一些相对来说存在极大值或者极小值的不平凡分布的的特征。算法对这种分布的数据会十分敏感,并且如果这种数据没有能够很好地规一化处理会使得算法表现不佳。在人口普查数据集的两个特征符合这个描述:'`capital-gain'`和`'capital-loss'`。运行下面的代码单元以创建一个关于这两个特征的条形图。请注意当前的值的范围和它们是如何分布的。 ###Code # 可视化 'capital-gain'和'capital-loss' 两个特征 vs.distribution(features_raw) ###Output _____no_output_____ ###Markdown 
对于高度倾斜分布的特征如`'capital-gain'`和`'capital-loss'`,常见的做法是对数据施加一个对数转换,将数据转换成对数,这样非常大和非常小的值不会对学习算法产生负面的影响。并且使用对数变换显著降低了由于异常值所造成的数据范围异常。但是在应用这个变换时必须小心:因为0的对数是没有定义的,所以我们必须先将数据处理成一个比0稍微大一点的数以成功完成对数转换。运行下面的代码单元来执行数据的转换和可视化结果。再次,注意值的范围和它们是如何分布的。 ###Code # 对于倾斜的数据使用Log转换 skewed = ['capital-gain', 'capital-loss'] features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1)) # 可视化对数转换后 'capital-gain'和'capital-loss' 两个特征 vs.distribution(features_raw, transformed = True) ###Output _____no_output_____ ###Markdown 规一化数字特征除了对于高度倾斜的特征施加转换,对数值特征施加一些形式的缩放通常会是一个好的习惯。在数据上面施加一个缩放并不会改变数据分布的形式(比如上面说的'capital-gain' or 'capital-loss');但是,规一化保证了每一个特征在使用监督学习器的时候能够被平等的对待。注意一旦使用了缩放,观察数据的原始形式不再具有它本来的意义了,就像下面的例子展示的。运行下面的代码单元来规一化每一个数字特征。我们将使用[`sklearn.preprocessing.MinMaxScaler`](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html)来完成这个任务。 ###Code from sklearn.preprocessing import MinMaxScaler # 初始化一个 scaler,并将它施加到特征上 scaler = MinMaxScaler() numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week'] features_raw[numerical] = scaler.fit_transform(data[numerical]) # 显示一个经过缩放的样例记录 display(features_raw.head(n = 1)) ###Output _____no_output_____ ###Markdown 练习:数据预处理从上面的**数据探索**中的表中,我们可以看到有几个属性的每一条记录都是非数字的。通常情况下,学习算法期望输入是数字的,这要求非数字的特征(称为类别变量)被转换。转换类别变量的一种流行的方法是使用**独热编码**方案。独热编码为每一个非数字特征的每一个可能的类别创建一个_“虚拟”_变量。例如,假设`someFeature`有三个可能的取值`A`,`B`或者`C`,。我们将把这个特征编码成`someFeature_A`, `someFeature_B`和`someFeature_C`.| 特征X | | 特征X_A | 特征X_B | 特征X_C || :-: | | :-: | :-: | :-: || B | | 0 | 1 | 0 || C | ----> 独热编码 ----> | 0 | 0 | 1 || A | | 1 | 0 | 0 |此外,对于非数字的特征,我们需要将非数字的标签`'income'`转换成数值以保证学习算法能够正常工作。因为这个标签只有两种可能的类别("50K"),我们不必要使用独热编码,可以直接将他们编码分别成两个类`0`和`1`,在下面的代码单元中你将实现以下功能: - 使用[`pandas.get_dummies()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html?highlight=get_dummiespandas.get_dummies)对`'features_raw'`数据来施加一个独热编码。 - 将目标标签`'income_raw'`转换成数字项。 - 将"50K"转换成`1`。 ###Code # TODO:使用pandas.get_dummies()对'features_raw'数据进行独热编码 features = pd.get_dummies(features_raw) # TODO:将'income_raw'编码成数字值 income = income_raw.map(lambda x: 0 if x == '<=50K' else 1) # print income.head(n=9) # 打印经过独热编码之后的特征数量 encoded = list(features.columns) print "{} total features after one-hot encoding.".format(len(encoded)) # 移除下面一行的注释以观察编码的特征名字 print encoded ###Output 103 total features after one-hot encoding. 
['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week', 'workclass_ Federal-gov', 'workclass_ Local-gov', 'workclass_ Private', 'workclass_ Self-emp-inc', 'workclass_ Self-emp-not-inc', 'workclass_ State-gov', 'workclass_ Without-pay', 'education_level_ 10th', 'education_level_ 11th', 'education_level_ 12th', 'education_level_ 1st-4th', 'education_level_ 5th-6th', 'education_level_ 7th-8th', 'education_level_ 9th', 'education_level_ Assoc-acdm', 'education_level_ Assoc-voc', 'education_level_ Bachelors', 'education_level_ Doctorate', 'education_level_ HS-grad', 'education_level_ Masters', 'education_level_ Preschool', 'education_level_ Prof-school', 'education_level_ Some-college', 'marital-status_ Divorced', 'marital-status_ Married-AF-spouse', 'marital-status_ Married-civ-spouse', 'marital-status_ Married-spouse-absent', 'marital-status_ Never-married', 'marital-status_ Separated', 'marital-status_ Widowed', 'occupation_ Adm-clerical', 'occupation_ Armed-Forces', 'occupation_ Craft-repair', 'occupation_ Exec-managerial', 'occupation_ Farming-fishing', 'occupation_ Handlers-cleaners', 'occupation_ Machine-op-inspct', 'occupation_ Other-service', 'occupation_ Priv-house-serv', 'occupation_ Prof-specialty', 'occupation_ Protective-serv', 'occupation_ Sales', 'occupation_ Tech-support', 'occupation_ Transport-moving', 'relationship_ Husband', 'relationship_ Not-in-family', 'relationship_ Other-relative', 'relationship_ Own-child', 'relationship_ Unmarried', 'relationship_ Wife', 'race_ Amer-Indian-Eskimo', 'race_ Asian-Pac-Islander', 'race_ Black', 'race_ Other', 'race_ White', 'sex_ Female', 'sex_ Male', 'native-country_ Cambodia', 'native-country_ Canada', 'native-country_ China', 'native-country_ Columbia', 'native-country_ Cuba', 'native-country_ Dominican-Republic', 'native-country_ Ecuador', 'native-country_ El-Salvador', 'native-country_ England', 'native-country_ France', 'native-country_ Germany', 'native-country_ Greece', 'native-country_ Guatemala', 'native-country_ Haiti', 'native-country_ Holand-Netherlands', 'native-country_ Honduras', 'native-country_ Hong', 'native-country_ Hungary', 'native-country_ India', 'native-country_ Iran', 'native-country_ Ireland', 'native-country_ Italy', 'native-country_ Jamaica', 'native-country_ Japan', 'native-country_ Laos', 'native-country_ Mexico', 'native-country_ Nicaragua', 'native-country_ Outlying-US(Guam-USVI-etc)', 'native-country_ Peru', 'native-country_ Philippines', 'native-country_ Poland', 'native-country_ Portugal', 'native-country_ Puerto-Rico', 'native-country_ Scotland', 'native-country_ South', 'native-country_ Taiwan', 'native-country_ Thailand', 'native-country_ Trinadad&Tobago', 'native-country_ United-States', 'native-country_ Vietnam', 'native-country_ Yugoslavia'] ###Markdown 混洗和切分数据现在所有的 _类别变量_ 已被转换成数值特征,而且所有的数值特征已被规一化。和我们一般情况下做的一样,我们现在将数据(包括特征和它们的标签)切分成训练和测试集。其中80%的数据将用于训练和20%的数据用于测试。然后再进一步把训练数据分为训练集和验证集,用来选择和优化模型。运行下面的代码单元来完成切分。 ###Code # 导入 train_test_split from sklearn.model_selection import train_test_split # 将'features'和'income'数据切分成训练集和测试集 X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0, stratify = income) # 将'X_train'和'y_train'进一步切分为训练集和验证集 X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=0, stratify = y_train) # 显示切分的结果 print "Training set has {} samples.".format(X_train.shape[0]) print "Validation set has {} samples.".format(X_val.shape[0]) print "Testing set has {} 
samples.".format(X_test.shape[0]) ###Output Training set has 28941 samples. Validation set has 7236 samples. Testing set has 9045 samples. ###Markdown ---- 评价模型性能在这一部分中,我们将尝试四种不同的算法,并确定哪一个能够最好地建模数据。四种算法包含一个*天真的预测器* 和三个你选择的监督学习器。 评价方法和朴素的预测器*CharityML*通过他们的研究人员知道被调查者的年收入大于\$50,000最有可能向他们捐款。因为这个原因*CharityML*对于准确预测谁能够获得\$50,000以上收入尤其有兴趣。这样看起来使用**准确率**作为评价模型的标准是合适的。另外,把*没有*收入大于\$50,000的人识别成年收入大于\$50,000对于*CharityML*来说是有害的,因为他想要找到的是有意愿捐款的用户。这样,我们期望的模型具有准确预测那些能够年收入大于\$50,000的能力比模型去**查全**这些被调查者*更重要*。我们能够使用**F-beta score**作为评价指标,这样能够同时考虑查准率和查全率:$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$尤其是,当 $\beta = 0.5$ 的时候更多的强调查准率,这叫做**F$_{0.5}$ score** (或者为了简单叫做F-score)。 问题 1 - 天真的预测器的性能通过查看收入超过和不超过 \$50,000 的人数,我们能发现多数被调查者年收入没有超过 \$50,000。如果我们简单地预测说*“这个人的收入没有超过 \$50,000”*,我们就可以得到一个 准确率超过 50% 的预测。这样我们甚至不用看数据就能做到一个准确率超过 50%。这样一个预测被称作是天真的。通常对数据使用一个*天真的预测器*是十分重要的,这样能够帮助建立一个模型表现是否好的基准。 使用下面的代码单元计算天真的预测器的相关性能。将你的计算结果赋值给`'accuracy'`, `‘precision’`, `‘recall’` 和 `'fscore'`,这些值会在后面被使用,请注意这里不能使用scikit-learn,你需要根据公式自己实现相关计算。*如果我们选择一个无论什么情况都预测被调查者年收入大于 \$50,000 的模型,那么这个模型在**验证集上**的准确率,查准率,查全率和 F-score是多少?* ###Code #不能使用scikit-learn,你需要根据公式自己实现相关计算。 # 不知道这里是不是将y_val传过来就可以了,请指导下 income_pred = y_val.apply(lambda x : 1) TP = sum(map(lambda x,y:1 if x==1 and y==1 else 0,y_val,income_pred)) FN = sum(map(lambda x,y:1 if x==1 and y==0 else 0,y_val,income_pred)) FP = sum(map(lambda x,y:1 if x==0 and y==1 else 0,y_val,income_pred)) TN = sum(map(lambda x,y:1 if x==0 and y==0 else 0,y_val,income_pred)) print TP print FN print FP print TN #TODO: 计算准确率 accuracy = float(TP + TN)/len(y_val) # TODO: 计算查准率 Precision precision = TP/float(TP + FP) # TODO: 计算查全率 Recall recall = TP/float(TP + FN) # TODO: 使用上面的公式,设置beta=0.5,计算F-score fscore = (1 + 0.5*0.5)*(precision * recall)/(0.5*0.5*precision + recall) # 打印结果 print "Naive Predictor on validation data: \n \ Accuracy score: {:.4f} \n \ Precision: {:.4f} \n \ Recall: {:.4f} \n \ F-score: {:.4f}".format(accuracy, precision, recall, fscore) ###Output 1793 0 5443 0 Naive Predictor on validation data: Accuracy score: 0.2478 Precision: 0.2478 Recall: 1.0000 F-score: 0.2917 ###Markdown 监督学习模型 问题 2 - 模型应用你能够在 [`scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html) 中选择以下监督学习模型- 高斯朴素贝叶斯 (GaussianNB)- 决策树 (DecisionTree)- 集成方法 (Bagging, AdaBoost, Random Forest, Gradient Boosting)- K近邻 (K Nearest Neighbors)- 随机梯度下降分类器 (SGDC)- 支撑向量机 (SVM)- Logistic回归(LogisticRegression)从上面的监督学习模型中选择三个适合我们这个问题的模型,并回答相应问题。 模型1**模型名称**回答:决策树**描述一个该模型在真实世界的一个应用场景。(你需要为此做点研究,并给出你的引用出处)**回答:慢性胃炎中医辨证分型中的应用。(http://www.airitilibrary.com/Publication/alDetailedMesh?docid=0258879x-200409-25-9-1009-1012-a)雷电潜势预报中的应用。(http://www.airitilibrary.com/Publication/alDetailedMesh?docid=16742184-200812-28-4-55-58-a)**这个模型的优势是什么?他什么情况下表现最好?**回答:优势:1. 容易解释、算法简单,可以可视化2. 几乎不需要数据预处理3. 可以同时处理数值变量和输入变量适用于:数据拥有比较清晰的特征(较容易区分),每个可区分的特征都能分出部分数据,最终结果是布尔类型。**这个模型的缺点是什么?什么条件下它表现很差?**回答:缺点:1. 容易被攻击,只需要伪造很少的特征即可瞒过分类器。2. 数据中非常小的变异也会造成一颗完全不同的树3. 当样本的数据特征不能或很难将整个样本分类的话**根据我们当前数据集的特点,为什么这个模型适合这个问题。**回答:决策树作为一个简单的模型,理论上任何数据拿到后都可以使用此模型进行一次尝试。当前数据集可以使用特征来进行分类,最终输出一个二元标签(收入是否大于50K)。 模型2**模型名称**回答:SVM**描述一个该模型在真实世界的一个应用场景。(你需要为此做点研究,并给出你的引用出处)**回答:测试用例生成(http://www.arocmag.com/getarticle/?aid=cff7c760dfdd88ca)基因数据表达分类(http://d.wanfangdata.com.cn/periodical/jsjyyyhx200305004)**这个模型的优势是什么?他什么情况下表现最好?**回答:1. 的分类效果非常好。2. 可以有效地处理高维空间数据。3. 可以有效地处理变量个数大于样本个数的数据。4. 只利用一部分子集来训练模型,所以 SVM 模型不需要太大的内存。当数据比较完善,没有太多噪声,变量较多时表现较好。**这个模型的缺点是什么?什么条件下它表现很差?**回答:1. 无法很好地处理大规模数据集,因为此时它需要较长的训练时间。2. 
无法处理包含太多噪声的数据集。**根据我们当前数据集的特点,为什么这个模型适合这个问题。**回答:当前模型的feature非常多,SVM适合处理这种feature比较多的DataSet。输出Label为二元,符合SVM的分类输出特性 模型3**模型名称**回答:神经网络**描述一个该模型在真实世界的一个应用场景。(你需要为此做点研究,并给出你的引用出处)**回答:神经网络应用于电力变压器故障诊断(http://aeps.alljournals.ac.cn/aeps/ch/reader/create_pdf.aspx?file_no=5586&flag=&journal_id=aeps&year_id=1996)**这个模型的优势是什么?他什么情况下表现最好?**回答:分类的准确度高,并行分布处理能力强,分布存储及学习能力强,对噪声神经有较强的鲁棒性和容错能力,能充分逼近复杂的非线性关系,具备联想记忆的功能等。数据量比较大,参数之间存在联系的时候,表现最好**这个模型的缺点是什么?什么条件下它表现很差?**回答:神经网络需要大量的参数,如网络拓扑结构、权值和阈值的初始值;不能观察之间的学习过程,输出结果难以解释,会影响到结果的可信度和可接受程度;学习时间过长,甚至可能达不到学习的目的。准确率依赖于庞大的训练集,原本受限于计算机的速度。因此在数据集比较小,计算机速度过低时表现较差。**根据我们当前数据集的特点,为什么这个模型适合这个问题。**回答:当前数据是没有那么大,而且训练会在我的个人电脑上进行,所以不太适合。但是可以将此算法作为其他两个的对比。 练习 - 创建一个训练和预测的流水线为了正确评估你选择的每一个模型的性能,创建一个能够帮助你快速有效地使用不同大小的训练集并在验证集上做预测的训练和验证的流水线是十分重要的。你在这里实现的功能将会在接下来的部分中被用到。在下面的代码单元中,你将实现以下功能: - 从[`sklearn.metrics`](http://scikit-learn.org/stable/modules/classes.htmlsklearn-metrics-metrics)中导入`fbeta_score`和`accuracy_score`。 - 用训练集拟合学习器,并记录训练时间。 - 对训练集的前300个数据点和验证集进行预测并记录预测时间。 - 计算预测训练集的前300个数据点的准确率和F-score。 - 计算预测验证集的准确率和F-score。 ###Code # TODO:从sklearn中导入两个评价指标 - fbeta_score和accuracy_score from sklearn.metrics import fbeta_score, accuracy_score def train_predict(learner, sample_size, X_train, y_train, X_val, y_val): ''' inputs: - learner: the learning algorithm to be trained and predicted on - sample_size: the size of samples (number) to be drawn from training set - X_train: features training set - y_train: income training set - X_val: features validation set - y_val: income validation set ''' results = {} # TODO:使用sample_size大小的训练数据来拟合学习器 # TODO: Fit the learner to the training data using slicing with 'sample_size' start = time() # 获得程序开始时间 learner.fit(X_train[:sample_size],y_train[:sample_size]) end = time() # 获得程序结束时间 # TODO:计算训练时间 results['train_time'] = end - start # TODO: 得到在验证集上的预测值 # 然后得到对前300个训练数据的预测结果 start = time() # 获得程序开始时间 predictions_val = learner.predict(X_val) predictions_train = learner.predict(X_train[:300]) end = time() # 获得程序结束时间 # TODO:计算预测用时 results['pred_time'] = end - start # TODO:计算在最前面的300个训练数据的准确率 results['acc_train'] = accuracy_score(y_train[:300],predictions_train) # TODO:计算在验证上的准确率 results['acc_val'] = accuracy_score(y_val,predictions_val) # TODO:计算在最前面300个训练数据上的F-score results['f_train'] = fbeta_score(y_train[:300],predictions_train,beta=0.5) # TODO:计算验证集上的F-score results['f_val'] = fbeta_score(y_val,predictions_val,beta=0.5) # 成功 print "{} trained on {} samples.".format(learner.__class__.__name__, sample_size) # 返回结果 return results ###Output _____no_output_____ ###Markdown 练习:初始模型的评估在下面的代码单元中,您将需要实现以下功能: - 导入你在前面讨论的三个监督学习模型。 - 初始化三个模型并存储在`'clf_A'`,`'clf_B'`和`'clf_C'`中。 - 使用模型的默认参数值,在接下来的部分中你将需要对某一个模型的参数进行调整。 - 设置`random_state` (如果有这个参数)。 - 计算1%, 10%, 100%的训练数据分别对应多少个数据点,并将这些值存储在`'samples_1'`, `'samples_10'`, `'samples_100'`中**注意:**取决于你选择的算法,下面实现的代码可能需要一些时间来运行! 
###Code # TODO:从sklearn中导入三个监督学习模型 from sklearn import tree from sklearn import svm from sklearn.neural_network import MLPClassifier # TODO:初始化三个模型 clf_A = tree.DecisionTreeClassifier(random_state=1) clf_B = svm.SVC(random_state=1) clf_C = MLPClassifier(solver='lbfgs', alpha=1e-5,hidden_layer_sizes=(5, 2), random_state=1) # TODO:计算1%, 10%, 100%的训练数据分别对应多少点 samples_1 = len(X_train)/100 samples_10 = len(X_train)/10 samples_100 = len(X_train) # 收集学习器的结果 results = {} for clf in [clf_A, clf_B, clf_C]: clf_name = clf.__class__.__name__ results[clf_name] = {} for i, samples in enumerate([samples_1, samples_10, samples_100]): results[clf_name][i] = train_predict(clf, samples, X_train, y_train, X_val, y_val) # 对选择的三个模型得到的评价结果进行可视化 vs.evaluate(results, accuracy, fscore) ###Output DecisionTreeClassifier trained on 289 samples. DecisionTreeClassifier trained on 2894 samples. DecisionTreeClassifier trained on 28941 samples. SVC trained on 289 samples. ###Markdown ---- 提高效果在这最后一节中,您将从三个有监督的学习模型中选择 *最好的* 模型来使用学生数据。你将在整个训练集(`X_train`和`y_train`)上使用网格搜索优化至少调节一个参数以获得一个比没有调节之前更好的 F-score。 问题 3 - 选择最佳的模型*基于你前面做的评价,用一到两段话向 *CharityML* 解释这三个模型中哪一个对于判断被调查者的年收入大于 \$50,000 是最合适的。* **提示:**你的答案应该包括评价指标,预测/训练时间,以及该算法是否适合这里的数据。 **回答:**出乎意料,神经网络的各项指标竟然是最好,训练时间短,在测试集上的准确率和FScrore都是三个算法中最高的。算法适用性这边理解比较浅,请助教解答下,应该从那几个方面选择算法,最好提供一些资料可以查阅。 问题 4 - 用通俗的话解释模型*用一到两段话,向 *CharityML* 用外行也听得懂的话来解释最终模型是如何工作的。你需要解释所选模型的主要特点。例如,这个模型是怎样被训练的,它又是如何做出预测的。避免使用高级的数学或技术术语,不要使用公式或特定的算法名词。* **回答: ** 我们使用了多层神经网络去预测捐款者,神经网络主要由一堆神经元构成,每个神经元都会负责一个很小的逻辑判断,接收几个输入参数,然后通过激活函数决定神经元最后的输出,而这个输出又可能作为输入传到下一个不同的神经元中。经过多层神经元的转换,会形成一套体系,这个体系可以接受我们的输入,最后的输出结果就是预测结果。 多层神经网络中的反向传播算法,类似于一个自适应的反馈系统; 就像一个公司要做一些决策,一级领导指示二级领导,二级领导布置任务给底层员工,这是一般的正向决策过程,反向传播就是,当底层员工发现一些问题后,报告给二级领导,二级领导又报告给一级领导,然后一、二级领导都会根据反馈调整自己的决策,以便下次取得更好的结果。 反向传播这块确实还没理解深入,算法也看不懂,还请老师给些资料看看,我自己搜到的都是5000字以内的那种,很粗略,希望有点比较系统的知识。 练习:模型调优调节选择的模型的参数。使用网格搜索(GridSearchCV)来至少调整模型的重要参数(至少调整一个),这个参数至少需尝试3个不同的值。你要使用整个训练集来完成这个过程。在接下来的代码单元中,你需要实现以下功能:- 导入[`sklearn.model_selection.GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) 和 [`sklearn.metrics.make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html).- 初始化你选择的分类器,并将其存储在`clf`中。 - 设置`random_state` (如果有这个参数)。- 创建一个对于这个模型你希望调整参数的字典。 - 例如: parameters = {'parameter' : [list of values]}。 - **注意:** 如果你的学习器有 `max_features` 参数,请不要调节它!- 使用`make_scorer`来创建一个`fbeta_score`评分对象(设置$\beta = 0.5$)。- 在分类器clf上用'scorer'作为评价函数运行网格搜索,并将结果存储在grid_obj中。- 用训练集(X_train, y_train)训练grid search object,并将结果存储在`grid_fit`中。**注意:** 取决于你选择的参数列表,下面实现的代码可能需要花一些时间运行! 
###Code # TODO:导入'GridSearchCV', 'make_scorer'和其他一些需要的库 from sklearn.grid_search import GridSearchCV from sklearn.metrics import fbeta_score, make_scorer from sklearn.neural_network import MLPClassifier # TODO:初始化分类器 clf = MLPClassifier(alpha=1e-5,hidden_layer_sizes=(5, 2), random_state=1) # TODO:创建你希望调节的参数列表 parameters = {'solver':['lbfgs', 'sgd', 'adam'],'learning_rate_init':[0.1,0.01,0.001]} # TODO:创建一个fbeta_score打分对象 scorer = make_scorer(fbeta_score, beta=0.5) # TODO:在分类器上使用网格搜索,使用'scorer'作为评价函数 grid_obj = GridSearchCV(clf, parameters,scoring=scorer) # TODO:用训练数据拟合网格搜索对象并找到最佳参数 grid_obj.fit(X_train, y_train) # 得到estimator best_clf = grid_obj.best_estimator_ # 使用没有调优的模型做预测 predictions = (clf.fit(X_train, y_train)).predict(X_val) best_predictions = best_clf.predict(X_val) # 汇报调参前和调参后的分数 print "Unoptimized model\n------" print "Accuracy score on validation data: {:.4f}".format(accuracy_score(y_val, predictions)) print "F-score on validation data: {:.4f}".format(fbeta_score(y_val, predictions, beta = 0.5)) print "\nOptimized Model\n------" print "Final accuracy score on the validation data: {:.4f}".format(accuracy_score(y_val, best_predictions)) print "Final F-score on the validation data: {:.4f}".format(fbeta_score(y_val, best_predictions, beta = 0.5)) ###Output /Users/rainfool/anaconda2/lib/python2.7/site-packages/sklearn/cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20. "This module will be removed in 0.20.", DeprecationWarning) /Users/rainfool/anaconda2/lib/python2.7/site-packages/sklearn/grid_search.py:42: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. This module will be removed in 0.20. 
DeprecationWarning) ###Markdown 问题 5 - 最终模型评估_你的最优模型在测试数据上的准确率和 F-score 是多少?这些分数比没有优化的模型好还是差?你优化的结果相比于你在**问题 1**中得到的天真预测器怎么样?_ **注意:**请在下面的表格中填写你的结果,然后在答案框中提供讨论。 结果: | 评价指标 | 天真预测器 | 未优化的模型 | 优化的模型 || :------------: | :-----------------: | :---------------: | :-------------: | | 准确率 | 0.2 | 0.8512 | 0.8512 || F-score |0.2917 | 0.7028 | 0.7028 | **回答:**比起天真预测器的低分数,未优化的多层神经网络已经表现很好,优化后的分数没有变化,说明调节的几个参数对于多层神经网络来讲没有什么很大的影响 ---- 特征的重要性在数据上(比如我们这里使用的人口普查的数据)使用监督学习算法的一个重要的任务是决定哪些特征能够提供最强的预测能力。专注于少量的有效特征和标签之间的关系,我们能够更加简单地理解这些现象,这在很多情况下都是十分有用的。在这个项目的情境下这表示我们希望选择一小部分特征,这些特征能够在预测被调查者是否年收入大于\$50,000这个问题上有很强的预测能力。选择一个有 `'feature_importance_'` 属性的scikit学习分类器(例如 AdaBoost,随机森林)。`'feature_importance_'` 属性是对特征的重要性排序的函数。在下一个代码单元中用这个分类器拟合训练集数据并使用这个属性来决定人口普查数据中最重要的5个特征。 问题 6 - 观察特征相关性当**探索数据**的时候,它显示在这个人口普查数据集中每一条记录我们有十三个可用的特征。 _在这十三个记录中,你认为哪五个特征对于预测是最重要的,选择每个特征的理由是什么?你会怎样对他们排序?_ **回答:**- 特征1:age:年龄,年轻的用户经济还未独立,或资产还不充足,收入可能不足50K- 特征2:education-num:教育水平,受教育水平较高的收入可能将较高- 特征3:native-country:国籍,国籍很可能影响人的收入,并且本国居民更易捐款- 特征4:workclass:工作类别,在政府工作或在公益机构工作的人,收入可能大于50K- 特征5:income:收入高的人更易捐款 练习 - 提取特征重要性选择一个`scikit-learn`中有`feature_importance_`属性的监督学习分类器,这个属性是一个在做预测的时候根据所选择的算法来对特征重要性进行排序的功能。在下面的代码单元中,你将要实现以下功能: - 如果这个模型和你前面使用的三个模型不一样的话从sklearn中导入一个监督学习模型。 - 在整个训练集上训练一个监督学习模型。 - 使用模型中的 `'feature_importances_'`提取特征的重要性。 ###Code # TODO:导入一个有'feature_importances_'的监督学习模型 from sklearn.ensemble import AdaBoostClassifier # TODO:在训练集上训练一个监督学习模型 model = AdaBoostClassifier(random_state=0,n_estimators=500).fit(X_train, y_train) # TODO: 提取特征重要性 importances = model.feature_importances_ # 绘图 vs.feature_plot(importances, X_train, y_train) ###Output _____no_output_____ ###Markdown 问题 7 - 提取特征重要性观察上面创建的展示五个用于预测被调查者年收入是否大于\$50,000最相关的特征的可视化图像。_这五个特征的权重加起来是否超过了0.5?__这五个特征和你在**问题 6**中讨论的特征比较怎么样?__如果说你的答案和这里的相近,那么这个可视化怎样佐证了你的想法?__如果你的选择不相近,那么为什么你觉得这些特征更加相关?_ **回答:**超过了有些相似,但是整体不准确我选取的特征,一个是一些基本属性,而且很有可能影响其收入或同理心,但是数据表现的却十分冷酷,是否捐款和赚钱花钱有最大的关系。 特征选择如果我们只是用可用特征的一个子集的话模型表现会怎么样?通过使用更少的特征来训练,在评价指标的角度来看我们的期望是训练和预测的时间会更少。从上面的可视化来看,我们可以看到前五个最重要的特征贡献了数据中**所有**特征中超过一半的重要性。这提示我们可以尝试去**减小特征空间**,简化模型需要学习的信息。下面代码单元将使用你前面发现的优化模型,并**只使用五个最重要的特征**在相同的训练集上训练模型。 ###Code # 导入克隆模型的功能 from sklearn.base import clone # 减小特征空间 X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]] X_val_reduced = X_val[X_val.columns.values[(np.argsort(importances)[::-1])[:5]]] # 在前面的网格搜索的基础上训练一个“最好的”模型 clf_on_reduced = (clone(best_clf)).fit(X_train_reduced, y_train) # 做一个新的预测 reduced_predictions = clf_on_reduced.predict(X_val_reduced) # 对于每一个版本的数据汇报最终模型的分数 print "Final Model trained on full data\n------" print "Accuracy on validation data: {:.4f}".format(accuracy_score(y_val, best_predictions)) print "F-score on validation data: {:.4f}".format(fbeta_score(y_val, best_predictions, beta = 0.5)) print "\nFinal Model trained on reduced data\n------" print "Accuracy on validation data: {:.4f}".format(accuracy_score(y_val, reduced_predictions)) print "F-score on validation data: {:.4f}".format(fbeta_score(y_val, reduced_predictions, beta = 0.5)) ###Output _____no_output_____ ###Markdown 问题 8 - 特征选择的影响*最终模型在只是用五个特征的数据上和使用所有的特征数据上的 F-score 和准确率相比怎么样?* *如果训练时间是一个要考虑的因素,你会考虑使用部分特征的数据作为你的训练集吗?* **回答:**均有下降如果在数据比较大、硬件资源比较匮乏的时候,我会考虑使用,因为选取主要特征的方法会极大提高训练速度但是再中小型数据或者说硬件资源足够时,我会尽量保证其准确性,一个良好准确的模型的训练时间损耗是值得的 问题 9 - 在测试集上测试你的模型终于到了测试的时候,记住,测试集只能用一次。*使用你最有信心的模型,在测试集上测试,计算出准确率和 F-score。**简述你选择这个模型的原因,并分析测试结果* ###Code #TODO test your model on testing data and report accuracy and F score final_predictions = best_clf.predict(X_test) print "最终准确率: 
{:.4f}".format(accuracy_score(y_test, final_predictions)) print "最终F-Score: {:.4f}".format(fbeta_score(y_test, final_predictions, beta = 0.5)) ###Output _____no_output_____
BDMI/WEEK8/logistic_regression_scratch.ipynb
###Markdown Picking a Link FunctionGeneralized linear models usually transform a linear model of the predictors by using a [link function](https://en.wikipedia.org/wiki/Generalized_linear_model#Link_function). In logistic regression, the link function is the [sigmoid](https://en.wikipedia.org/wiki/Sigmoid_function). We can implement this really easily. ###Code def sigmoid(scores): return 1 / (1 + np.exp(-scores)) ###Output _____no_output_____ ###Markdown Maximizing the Likelihood To maximize the likelihood, I need a way to compute the likelihood and the gradient of the likelihood. Fortunately, the likelihood (for binary classification) can be reduced to a fairly intuitive form by switching to the log-likelihood. We're able to do this without affecting the weights parameter estimation because log transformations are [monotonic](https://en.wikipedia.org/wiki/Monotonic_function).For anyone interested in the derivations of the functions I'm using, check out Section 4.4.1 of Hastie, Tibshirani, and Friedman's [Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/). For those less mathematically inclined, Carlos Guestrin (University of Washington) details one possible derivation of the log-likelihood in a series of short lectures on [Coursera](https://www.coursera.org/learn/ml-classification/lecture/1ZeTC/very-optional-expressing-the-log-likelihood) using indicator functions. Calculating the Log-LikelihoodThe log-likelihood can be viewed as a sum over all the training data. Mathematically,$$\begin{equation}ll = \sum_{i=1}^{N}\left[y_{i}\beta^{T}x_{i} - \log\left(1+e^{\beta^{T}x_{i}}\right)\right]\end{equation}$$where $y$ is the target class, $x_{i}$ represents an individual data point, and $\beta$ is the weights vector. The negative of this quantity is the classification loss, so maximizing the log-likelihood is the same as minimizing that loss. I can easily turn that into a function and take advantage of matrix algebra. ###Code # Log-likelihood for the binary classification problem (its negative is the loss to minimize) def log_likelihood(features, target, weights): scores = np.dot(features, weights) ll = np.sum( target*scores - np.log(1 + np.exp(scores)) ) return ll ###Output _____no_output_____ ###Markdown Calculating the GradientNow I need an equation for the gradient of the log-likelihood. By taking the derivative of the equation above and reformulating in matrix form, the gradient becomes: $$\begin{equation}\nabla ll = X^{T}(Y - \text{Predictions})\end{equation}$$Again, this is really easy to implement. It's so simple I don't even need to wrap it into a function. The gradient here looks very similar to the output layer gradient in a neural network (see my [post](https://beckernick.github.io/neural-network-scratch/) on neural networks if you're curious).This shouldn't be too surprising, since a neural network is basically just a series of non-linear link functions applied after linear manipulations of the input data. Building the Logistic Regression FunctionFinally, I'm ready to build the model function. 
I'll add in the option to calculate the model with an intercept, since it's a good option to have. The weight update rule is simply gradient ascent on the log-likelihood: $\beta \leftarrow \beta + \eta\, X^{T}(Y - \text{Predictions})$, where $\eta$ is the learning rate. ###Code # Gradient-ascent optimization of the log-likelihood (equivalently, gradient descent on the negative log-likelihood) def logistic_regression(features, target, num_steps, learning_rate, add_intercept = False): if add_intercept: # prepend an intercept (bias) column intercept = np.ones((features.shape[0], 1)) # a column of ones, one row per sample features = np.hstack((intercept, features)) # np.hstack stacks horizontally, np.vstack vertically weights = np.zeros(features.shape[1]) # one weight per column, including the intercept column for step in range(num_steps): # number of gradient steps scores = np.dot(features, weights) predictions = sigmoid(scores) # Update weights with log likelihood gradient output_error_signal = target - predictions gradient = np.dot(features.T, output_error_signal) weights += learning_rate * gradient # Print log-likelihood every so often if step % 10000 == 0: print(log_likelihood(features, target, weights)) return weights ###Output _____no_output_____ ###Markdown Time to do the regression. ###Code weights = logistic_regression(features, label, num_steps = 50000, learning_rate = 5e-5, add_intercept=True) # the printed log-likelihood stabilizes, i.e. the optimization has converged print(weights) def predict(features, weights): global mean global std features = (features - mean)/std # standardize with the training-set statistics (mean/std are defined in an earlier cell) intercept = np.ones((features.shape[0], 1)) features = np.hstack((intercept, features)) scores = np.dot(features, weights) predictions = sigmoid(scores) return predictions student1 = np.array([[188, 85, 2]]) print(predict(student1, weights)) student2 = np.array([[165, 50, 25]]) print(predict(student2, weights)) ###Output [0.76002054]
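###Markdown As a quick cross-check (an addition to the original notebook, not part of it), the from-scratch weights can be compared against scikit-learn's `LogisticRegression`; with the regularization strength effectively disabled (very large `C`), the two sets of coefficients should land close to each other. This assumes `features` and `label` are the same arrays used for training above. ###Code
# Cross-check against scikit-learn; C is huge so regularization is effectively off
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(fit_intercept=True, C=1e15, max_iter=10000)
clf.fit(features, label)

print(clf.intercept_, clf.coef_)  # scikit-learn's intercept and weights
print(weights)                    # weights from the implementation above (intercept first)
###Output _____no_output_____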
Tutorial/CN/tutorial-single-qubit-calibration-cn.ipynb
###Markdown 单量子比特标定*版权所有 (c) 2021 百度量子计算研究所,保留所有权利。* 内容概要本教程介绍单量子比特频率、弛豫时间 $T_1$ 和失相时间 $T_2$ 的标定方法以及该量子比特上 $\pi$ 脉冲的校准。本教程的大纲如下:- 背景介绍- 准备工作- 构建模拟器- 量子比特频率标定- Rabi 振荡校准 $\pi$ 脉冲- 纵向弛豫标定 $T_1$- Ramsey 振荡标定 $T_2$- 总结 背景介绍由于制造工艺的限制以及实际应用的需要,不同的超导量子比特具有不同的频率、相干时间等特性。因此我们需要对这些参数进行标定,即对量子比特执行一系列操作,并进行测量,从测量结果中获取关于此量子比特的信息,如量子比特频率以及相干时间 $T_1$、$T_2$ 等。其中,量子比特的频率为实现单量子比特门的脉冲信号的驱动频率;相干时间为量子比特保持其信息的持续时间,相干时间越长,量子比特的质量越好,可进行运算的时间就越长。 准备工作在运行此教程前,您首先需要从量脉(Quanlse)和其他常用 Python 库导入必要的包。 ###Code from Quanlse.Simulator.PulseSim1Q import pulseSim1Q from Quanlse.Calibration.SingleQubit import qubitSpec, ampRabi, fitRabi, longRelax, ramsey, fitRamsey import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (7, 5) from numpy import array, pi, exp from scipy.signal import find_peaks ###Output _____no_output_____ ###Markdown 构建模拟器在进行标定演示前,我们首先需要构建一个单量子比特模拟器作为标定对象。在 Quanlse v2.1 中我们内置了设定好参数的单量子比特模拟器 `pulseSim1Q()` (相关参数如量子比特频率以及 $T_1$ 和 $T_2$ 等可进行自定义)。`pulseSim1Q()` 函数需要两个参数:`dt` 表示求解模拟演化时的步长,而 `frameMode` 则表示采用何种坐标系进行仿真(`'lab'`、`'rot'` 分别表示实验室坐标系和旋转坐标系)。完成初始化后,我们将该模拟器视为 "黑箱" 进行标定的演示。 ###Code # AWG sampling time dt = 0.01 # Instantiate the simulator object model = pulseSim1Q(dt=dt, frameMode='lab') # Define system parameters model.qubitFreq = {0: 5.212 * (2 * pi)} model.T1 = {0: 2000} model.T2 = {0: 600} ###Output _____no_output_____ ###Markdown 量子比特频率标定在标定量子比特的其它参数之前,我们首先需要确定量子比特的频率。量子比特频率确定后,我们就可以正确地设置本机振荡器(Local Oscillator)频率,从而使得施加的脉冲与量子比特共振。为了测量量子比特频率,我们利用外加脉冲与量子比特共振激发的原理,改变本机振荡器频率,对量子比特施加一定振幅的脉冲。量子比特被最大程度的激发时的脉冲频率,即为量子比特的频率。而在实际的实验中,量子比特频率的大致范围会提供给实验人员。因此,我们可以在给定的范围内进行频率扫描,并确定较为精确的量子比特频率。我们首先在较大的频率范围内(4.6 GHz 到 5.8 GHz)进行扫描。具体的方法为使用校准模块 `Quanlse.Calibration.SingleQubit` 中的函数 `qubitSpec()`,并输入脉冲模型 `pulseModel`、频率范围 `frequeRange`、样本数量 `sample`、脉冲幅度 `amp` 和脉冲持续时间 `t` 。在完成扫描后,该函数将返回扫描频率和对应的激发态布居数: ###Code # Define frequency range freqRange = [4.1 * (2 * pi), 5.9 * (2 * pi)] # Scan qubit frequency spectrum freqList, popList = qubitSpec(pulseModel=model, freqRange=freqRange, sample=50, amp=0.9, t=20) ###Output _____no_output_____ ###Markdown 激发态布居数与本机振荡器频率关系图如下。 ###Code # Convert unit freq = [x / (2 * pi) for x in freqList] # Plot population graph plt.plot(freq, popList) plt.title("Frequency spectrum", size=17) plt.xlabel("LO frequency (GHz)", size=15) plt.ylabel(r"$|1\rangle$ population)", size=15) plt.show() ###Output _____no_output_____ ###Markdown 从图中我们可以看到量子比特频率大致在 5.1 GHz 和 5.3 GHz 之间。接下来我们缩小扫描范围进行第二次扫描,并绘制激发态布居数与本机振荡器频率关系图。 ###Code # Define new frequency range nFreqRange = [5.1 * (2 * pi), 5.3 * (2 * pi)] # Scan qubit frequency spectrum nFreqList, nPopList = qubitSpec(model, nFreqRange, 30, 0.9, 20) # Convert unit nFreq = [x / (2 * pi) for x in nFreqList] # Plot population graph plt.plot(nFreq, nPopList) plt.title("Frequency spectrum", size=17) plt.xlabel("LO frequency (GHz)", size=15) plt.ylabel(r"$|1\rangle$ population)", size=15) plt.show() ###Output _____no_output_____ ###Markdown 然后,我们使用 `scipy` 中的函数 `find_peak()` 来寻找峰值所对应的频率。 ###Code # Find peak peak = find_peaks(nPopList, height=0.3)[0][0] qubitFreq = nFreq[peak] # Plot peak plt.plot(nFreq, nPopList) plt.title(f'Qubit frequency: {round(qubitFreq, 6)} GHz', size=17) plt.plot(nFreq[peak], nPopList[peak], 'x', mfc=None, mec='red', mew=2, ms=8) plt.xlabel('Frequency (GHz)', size=15) plt.ylabel(r'$|1\rangle$ population', size=15) plt.show() ###Output _____no_output_____ ###Markdown 如上图所示,我们标定得到的量子比特频率为 5.217241 GHz。 Rabi 振荡校准 $\pi$ 脉冲在确定了量子比特的频率后,我们可以校准 $\pi$ 和 $\pi/2$ 脉冲的波形参数。为此,我们进行 Rabi 振荡实验。通常有两种方式进行 Rabi 
振荡:确定其他参数不变,固定脉冲振幅扫描脉冲持续时间或固定脉冲持续时间扫描脉冲振幅。选择适当的范围后,激发态(或基态)的布居数将以正弦波的形式振荡。为进行上述实验,我们从 `Quanlse.Calibration.SingleQubit` 模块导入函数 `ampRabi()`,并输入参数:脉冲模型 `pulseModel`、振幅范围 `ampRange`、脉冲持续时间 `tg` 和样本数量 `sample` 。该函数将返回扫描振幅和相应的激发态布居数列表。另外,`calibration` 模块还包括了通过扫描脉冲的时间的函数 `tRabi()`。该函数通过固定脉冲幅值并且改变脉冲的时间来实现 Rabi 振荡,因此用法与 `ampRabi()` 非常类似。 ###Code # Define amplitude range ampRange = [0, 6] # Scan different amplitudes for Rabi oscillation ampList, popList = ampRabi(pulseModel=model, pulseFreq=qubitFreq * 2 * pi, ampRange=ampRange, tg=20, sample=50) ###Output _____no_output_____ ###Markdown 激发布居数与脉冲振幅关系图如下: ###Code # Plot Rabi Oscillation with different amplitudes plt.plot(ampList, popList, '.') plt.title("Rabi Oscillation", size=17) plt.xlabel('Amplitude', size=15) plt.ylabel(r'$|1\rangle$ population', size=15) plt.show() ###Output _____no_output_____ ###Markdown 在得到布居数的分布之后,我们从 `Quanlse.Calibration.SingleQubit` 模块导入函数 `fitRabi()` 进行图像拟合,并获得能够实现 $\pi$ 和 $\pi/2$ 旋转的脉冲振幅。我们输入 `ampList` 作为 X 轴,并同时输入布居数 `popList` 作为 Y 轴进行拟合,其中拟合函数的形式为:$y=a\cdot \cos(b\cdot x+c)+d$。最终,`fitRabi()` 将返回 $\pi/2$ 和 $\pi$ 脉冲的振幅: ###Code # Fit Rabi halfPiAmp, piAmp = fitRabi(popList=popList, xList=ampList) print("Pi/2-pulse amplitude: ", halfPiAmp) print("Pi-pulse amplitude: ", piAmp) ###Output _____no_output_____ ###Markdown 纵向弛豫标定 $T_1$得到 $\pi$ 和 $\pi/2$ 脉冲的参数后,我们可以进一步标定量子比特的相干时间 $T_1$ 和 $T_2$。我们首先进行 $T_1$ 的标定,将 $\pi$ 脉冲施加到量子比特上,并找到激发态布居数衰减到 $1/e$ 的时间 \[1\]。为了将量子比特激发到激发态并观察其纵向弛豫,我们可以使用 `Quanlse.Calibration.SingleQubit` 模块中的 `longRelax()` 函数。输入参数:模拟器对象 `pulseModel`、AWG 采样时间 `dt`、脉冲频率 `pulseModel`、$\pi$ 脉冲幅度 `piAmp` 和持续时间 `piLen`、最大闲置时间 `maxIdle` 和拟合函数的初始值 `initFit`。随后,运行该函数进行模拟仿真,同时该函数将使用拟合函数 $y=e^{-x/T_1}$ 进行曲线拟合。最终返回 $T_1$、闲置时间、布居数仿真结果以及拟合结果的列表: ###Code # Longitudinal relaxation on a qubit T1, tList, experimental, fitted = longRelax(pulseModel=model, dt=dt, pulseFreq=qubitFreq * 2 * pi, piAmp=piAmp, piLen=20, maxIdle=4000, initFit=[1500]) ###Output _____no_output_____ ###Markdown $T_1$ 以及布居数随闲置时间变化的图像如下: ###Code # Print estimated T1 print("Estimated T1: ", T1, "ns") # Plot fit result plt.plot(tList, experimental, "+", label="Experiment") plt.plot(tList, fitted, "r", label="Fitted", linewidth=2.) 
plt.legend() plt.xlabel("Idling time", size=15) plt.ylabel(r'$|1\rangle$ population', size=15) plt.title("Longitudinal Relaxation", size=17) plt.show() ###Output _____no_output_____ ###Markdown Ramsey 振荡标定 $T_2$在本节中,我们将使用 Ramsey 振荡实验进行失相时间 $T_2$ 的标定。首先,我们在量子比特上输入一个与量子比特频率相差非常小的驱动频率的 $\pi/2$ 脉冲,在等待闲置时间 $t_{\rm idle}$ 之后,再输入另一个 $\pi/2$ 脉冲,并测量量子比特的激发态布居数 \[2\]。此时,测量结果取决于闲置时间 $t_{\rm idle}$ 之后量子态的相位。为进行 Ramsey 实验,我们从 `Quanlse.Calibration.SingleQubit` 模块导入函数 `Ramsey()`,输入参数:模拟器对象 `pulseModel`、脉冲频率 `pulseFreq` 、$\pi/2$ 脉冲持续时间 `tg` 、$\pi/2$ 脉冲幅度 `x90` 、采样数 `sample` 、最大闲置时间 `maxTime` 和脉冲频率与比特频率的失调 `detuning`(该程序运行时间可能会比较久,可以选择减少采样点以及减少运行时间,但是模拟的效果可能也随之下降): ###Code # Scan different idle time for Ramsey oscillation tList, popList = ramsey(pulseModel=model, pulseFreq=5.21 * 2 * pi, tg=20, x90=1.013, sample=50, maxTime=600, detuning=0.07) ###Output _____no_output_____ ###Markdown 该函数返回闲置时间和相应的布居数列表。我们可以使用函数 `fitRamsey()` 来对数据进行拟合,输入参数 $T_1$ `t1`、布居数列表 `popList`、闲置时间列表 `tList` 以及失调 `detuning`,然后,使用函数 $y=\frac{1}{2} \cos(a\cdot x)e^{-b\cdot x}+0.5$ 拟合曲线。根据拟合结果,我们使用表达式 $T_2 = \frac{1}{(b-\frac{1}{2a})}$ 求得 $T_2$: ###Code # Fit Ramsey T2, fitted = fitRamsey(t1=2000, popList=popList, tList=tList, detuning=0.07) ###Output _____no_output_____ ###Markdown `fitRamsey()` 返回测得的 $T_2$ 值和拟合的布居数列表,$T_2$ 以及激发态布居数与闲置时间关系图如下: ###Code # Print estimated T2 print("Estimated T2: ", T2, " ns") # Plot fit result plt.plot(tList, popList, '.') plt.plot(tList, fitted) plt.plot(tList, list(exp(- (1 / 600 + 1 / (2 * 2000)) * array(tList)) * 0.5 + 0.5)) plt.xlabel("Idling time (ns)", size=15) plt.ylabel(r"$|1\rangle$ population", size=15) plt.title("Ramsey Experiment", size=17) plt.show() ###Output _____no_output_____
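###Markdown The sketch below illustrates, in plain NumPy/SciPy and outside of Quanlse, the kind of fit that `fitRamsey()` performs: the Ramsey fringes are fitted with $y=\frac{1}{2}\cos(a\cdot x)e^{-b\cdot x}+\frac{1}{2}$, and the decay rate used for the reference curve above combines the two time constants as $b = 1/T_2 + 1/(2T_1)$ (matching `exp(-(1/600 + 1/(2*2000)) * t)` in the plotting cell), so $T_2$ is recovered as $T_2 = 1/\big(b - 1/(2T_1)\big)$. The helper names `ramsey_model` and `fit_t2` are ours, not part of Quanlse. ###Code
# Minimal sketch of the Ramsey fit (assumed form; not Quanlse's actual implementation)
import numpy as np
from scipy.optimize import curve_fit

def ramsey_model(t, a, b):
    # y = 0.5 * cos(a * t) * exp(-b * t) + 0.5
    return 0.5 * np.cos(a * t) * np.exp(-b * t) + 0.5

def fit_t2(t_list, pop_list, t1, detuning):
    p0 = [2 * np.pi * detuning, 1.0 / 600]  # rough initial guesses
    (a_fit, b_fit), _ = curve_fit(ramsey_model, np.asarray(t_list), np.asarray(pop_list), p0=p0)
    # b = 1/T2 + 1/(2*T1)  =>  T2 = 1 / (b - 1/(2*T1))
    return 1.0 / (b_fit - 1.0 / (2.0 * t1))
###Output _____no_output_____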
AzureChestXRay_AMLWB/Code/02_Model/010_train.ipynb
###Markdown Train Copyright (C) Microsoft Corporation. see license file for details ###Code # Allow multiple displays per cell from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" # AZUREML_NATIVE_SHARE_DIRECTORY mapping to host dir is set by _nativeSharedDirectory_ in .compute file import os try: amlWBSharedDir = os.environ['AZUREML_NATIVE_SHARE_DIRECTORY'] except: amlWBSharedDir = '' print('not using aml services?') amlWBSharedDir # Use the Azure Machine Learning data collector to log various metrics from azureml.logging import get_azureml_logger logger = get_azureml_logger() # Use Azure Machine Learning history magic to control history collection # History is off by default, options are "on", "off", or "show" # %azureml history on # import utlity functions import sys, os paths_to_append = [os.path.join(os.getcwd(), os.path.join(*(['Code', 'src'])))] def add_path_to_sys_path(path_to_append): if not (any(path_to_append in paths for paths in sys.path)): sys.path.append(path_to_append) [add_path_to_sys_path(crt_path) for crt_path in paths_to_append] import azure_chestxray_utils # import azure_chestxray_keras_utils # create the file path variables # paths are tipically container level dirs mapped to a host dir for data persistence. prj_consts = azure_chestxray_utils.chestxray_consts() data_base_input_dir=os.path.join(amlWBSharedDir, os.path.join(*(prj_consts.BASE_INPUT_DIR_list))) data_base_output_dir=os.path.join(amlWBSharedDir, os.path.join(*(prj_consts.BASE_OUTPUT_DIR_list))) # data used for training nih_chest_xray_data_dir=os.path.join(data_base_input_dir, os.path.join(*(prj_consts.ChestXray_IMAGES_DIR_list))) data_partitions_dir=os.path.join(data_base_output_dir, os.path.join(*(prj_consts.DATA_PARTITIONS_DIR_list))) partition_path = os.path.join(data_partitions_dir, 'partition14_unormalized_cleaned.pickle') label_path = os.path.join(data_partitions_dir,'labels14_unormalized_cleaned.pickle') # global variables weights_dir = os.path.join(data_base_output_dir, os.path.join(*(prj_consts.MODEL_WEIGHTS_DIR_list))) !mkdir -p {weights_dir} weights_dir !ls -l {weights_dir} # weights_path = os.path.join( # weights_dir, # prj_consts.PRETRAINED_DENSENET201_IMAGENET_CHESTXRAY_MODEL_FILE_NAME) import os os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152 os.environ["CUDA_VISIBLE_DEVICES"] = "0,1" import imgaug as ia from imgaug import augmenters as iaa ia.seed(1) import cv2 import keras.backend as K from keras.optimizers import Adam from keras.callbacks import ReduceLROnPlateau, Callback, ModelCheckpoint import numpy as np import pickle from keras_contrib.applications.densenet import DenseNetImageNet121 from keras.layers import Dense from keras.models import Model from keras.utils import multi_gpu_model from tensorflow.python.client import device_lib import warnings from keras.utils import Sequence import tensorflow as tf ###Output Using TensorFlow backend. ###Markdown For testing purpose, we just run 1 epoch. It will take around 25 mins to run for one epoch using 2 K80 GPUs and it is usually needed to run around 30~50 epochs for the model to get converge. 
###Code # make force_restart = False if you continue a previous train session, make it True to start from scratch force_restart = False initial_lr = 0.001 resized_height = 224 resized_width = 224 # resized_height = prj_consts.CHESTXRAY_MODEL_EXPECTED_IMAGE_HEIGHT # resized_width = prj_consts.CHESTXRAY_MODEL_EXPECTED_IMAGE_WIDTH num_channel = 3 num_classes = 14 epochs = 1 #200 def get_available_gpus(): """ Returns: number of GPUs available in the system """ local_device_protos = device_lib.list_local_devices() return [x.name for x in local_device_protos if x.device_type == 'GPU'] # get number of available GPUs num_gpu = len(get_available_gpus()) # keras multi_gpu_model slices the data to different GPUs. see https://keras.io/utils/#multi_gpu_model for more details. batch_size = 48 * num_gpu # use Keras multi-gpu model, so we need to make sure the batch_size is divisible by num_gpu. # device_lib.list_local_devices() # !nvidia-smi # use Keras multi-gpu model, so we need to make sure the batch_size is divisible by num_gpu. # multi GPU model checkpoint. copied from https://github.com/keras-team/keras/issues/8463 class MultiGPUCheckpointCallback(Callback): def __init__(self, filepath, base_model, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1): super(MultiGPUCheckpointCallback, self).__init__() self.base_model = base_model self.monitor = monitor self.verbose = verbose self.filepath = filepath self.save_best_only = save_best_only self.save_weights_only = save_weights_only self.period = period self.epochs_since_last_save = 0 if mode not in ['auto', 'min', 'max']: warnings.warn('ModelCheckpoint mode %s is unknown, ' 'fallback to auto mode.' % (mode), RuntimeWarning) mode = 'auto' if mode == 'min': self.monitor_op = np.less self.best = np.Inf elif mode == 'max': self.monitor_op = np.greater self.best = -np.Inf else: if 'acc' in self.monitor or self.monitor.startswith('fmeasure'): self.monitor_op = np.greater self.best = -np.Inf else: self.monitor_op = np.less self.best = np.Inf def on_epoch_end(self, epoch, logs=None): logs = logs or {} self.epochs_since_last_save += 1 if self.epochs_since_last_save >= self.period: self.epochs_since_last_save = 0 filepath = self.filepath.format(epoch=epoch + 1, **logs) if self.save_best_only: current = logs.get(self.monitor) if current is None: warnings.warn('Can save best model only with %s available, ' 'skipping.' 
% (self.monitor), RuntimeWarning) else: if self.monitor_op(current, self.best): if self.verbose > 0: print('Epoch %05d: %s improved from %0.5f to %0.5f,' ' saving model to %s' % (epoch + 1, self.monitor, self.best, current, filepath)) self.best = current if self.save_weights_only: self.base_model.save_weights(filepath, overwrite=True) else: self.base_model.save(filepath, overwrite=True) else: if self.verbose > 0: print('Epoch %05d: %s did not improve' % (epoch + 1, self.monitor)) else: if self.verbose > 0: print('Epoch %05d: saving model to %s' % (epoch + 1, filepath)) if self.save_weights_only: self.base_model.save_weights(filepath, overwrite=True) else: self.base_model.save(filepath, overwrite=True) seq = iaa.Sequential([ iaa.Fliplr(0.5), # horizontal flips iaa.Affine(rotate=(-15, 15)), # random rotate image iaa.Affine(scale=(0.8, 1.1)), # randomly scale the image ], random_order=True) # apply augmenters in random order # generator for train and validation data # use the Sequence class per issue https://github.com/keras-team/keras/issues/1638 class DataGenSequence(Sequence): def __init__(self, labels, image_file_index, current_state): self.batch_size = batch_size self.labels = labels self.img_file_index = image_file_index self.current_state = current_state self.len = len(self.img_file_index) // self.batch_size print("for DataGenSequence", current_state, "total rows are:", len(self.img_file_index), ", len is", self.len) def __len__(self): return self.len def __getitem__(self, idx): # print("loading data segmentation", idx) # make sure each batch size has the same amount of data current_batch = self.img_file_index[idx * self.batch_size: (idx + 1) * self.batch_size] X = np.empty((self.batch_size, resized_height, resized_width, num_channel)) y = np.empty((self.batch_size, num_classes)) for i, image_name in enumerate(current_batch): path = os.path.join(nih_chest_xray_data_dir, image_name) # loading data img = cv2.resize(cv2.imread(path), (resized_height, resized_width)).astype(np.float32) X[i, :, :, :] = img y[i, :] = labels[image_name] # only do random flipping in training status if self.current_state == 'train': x_augmented = seq.augment_images(X) else: x_augmented = X return x_augmented, y # loss function def unweighted_binary_crossentropy(y_true, y_pred): """ Args: y_true: true labels y_pred: predicted labels Returns: the sum of binary cross entropy loss across all the classes """ return K.sum(K.binary_crossentropy(y_true, y_pred)) def build_model(): """ Returns: a model with specified weights """ # define the model, use pre-trained weights for image_net base_model = DenseNetImageNet121(input_shape=(224, 224, 3), weights='imagenet', include_top=False, pooling='avg') x = base_model.output predictions = Dense(14, activation='sigmoid')(x) model = Model(inputs=base_model.input, outputs=predictions) return model if num_gpu > 1: print("using", num_gpu, "GPUs") # build model with tf.device('/cpu:0'): model_single_gpu = build_model() # model_single_gpu.load_weights(weights_path) # convert to multi-gpu model model_multi_gpu = multi_gpu_model(model_single_gpu, gpus=num_gpu) model_checkpoint = MultiGPUCheckpointCallback( os.path.join(weights_dir, 'azure_chest_xray_14_weights_712split_epoch_{epoch:03d}_val_loss_{val_loss:.4f}.hdf5'), model_single_gpu, monitor='val_loss', save_weights_only=False) else: print("using single GPU") model_multi_gpu = build_model() model_checkpoint = ModelCheckpoint( os.path.join(weights_dir, 
'azure_chest_xray_14_weights_712split_epoch_{epoch:03d}_val_loss_{val_loss:.4f}.hdf5'), monitor='val_loss', save_weights_only=False) num_workers = 10 * num_gpu model_multi_gpu.compile(optimizer=Adam(lr=initial_lr), loss=unweighted_binary_crossentropy) reduce_lr_on_plateau = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3, verbose=1, min_lr=1e-6) callbacks = [model_checkpoint, reduce_lr_on_plateau] with open(label_path, 'rb') as f: labels = pickle.load(f) with open(partition_path, 'rb') as f: partition = pickle.load(f) model_multi_gpu.fit_generator(generator=DataGenSequence(labels, partition['train'], current_state='train'), epochs=epochs, verbose=1, callbacks=callbacks, workers=num_workers, # max_queue_size=32, # shuffle=False, validation_data=DataGenSequence(labels, partition['valid'], current_state='validation') # validation_steps=1 ) # jupyter nbconvert --to html .\Code\02_Model\010_train.ipynb ###Output _____no_output_____
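###Markdown One practical note (ours, not from the original notebook): the checkpoints above are saved as full models compiled with the custom `unweighted_binary_crossentropy` loss, so reloading one later requires passing that function via `custom_objects`. The file name below is a placeholder — substitute an actual checkpoint written to `weights_dir`. ###Code
# Sketch: reload a saved checkpoint for later evaluation/inference (file name is a placeholder)
from keras.models import load_model

ckpt_path = os.path.join(weights_dir, 'azure_chest_xray_14_weights_712split_epoch_001_val_loss_0.0000.hdf5')
reloaded_model = load_model(ckpt_path,
                            custom_objects={'unweighted_binary_crossentropy': unweighted_binary_crossentropy})
###Output _____no_output_____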
demos/BMI-demo.ipynb
###Markdown Download data We illustrate the usage of LOG-TRAM by applying it to the GWAS summary statistics of BMI from BBJ **males** and UKBB with 1 Mbp non-overlapping sliding windows as local regions. The GWAS datasets and LDscores files involved in the following example are availabel from [here](https://www.dropbox.com/sh/9asugdlu1lbal8o/AAB0martsgaBoR8B4hq2pc25a?dl=0) Run LOG-TRAM Once the input files are formatted, LOG-TRAM will automatically preprocess the datasets, including SNPs overlapping and minor allele matching. It takes 8 mins to run the following meta-analysis for the whole genome (computing environment: 20 CPU cores of Intel(R) Xeon(R) Gold 6230N CPU @ 2.30GHz processor, 1TB of memory, and a 22 TB solid-state disk). ###Code python <install path>/src/LOG-TRAM.py \ --out BMI_meta \ --sumstats-popu1 BMI_harmonized_pop1_UKB.txt,BMI_UKB \ --sumstats-popu2 BMI_harmonized_pop2_BBJ.txt,BMI_BBJ \ --ldscores ./LDscoresEUR-EAS/ldsc_annot_EUR_EAS_1mb_TGP_hm3_chr@_std ###Output _____no_output_____ ###Markdown Output log ###Code 2022-01-29 18:34:41,453 : INFO : <><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><> <> <> LOG-TRAM: Leveraging the local genetic structure for trans-ancestry association mapping <> Version: 1.0.0 <> MIT License <> <><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><> <> Software-related correspondence: [email protected] or [email protected] <><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><> <> example: python <install path>/src/LOG-TRAM.py \ --out test \ --sumstats-popu1 ss_file1,ss_name1 \ --sumstats-popu2 ss_file2,ss_name2 \ --ldscores ldsc_annot_EUR_EAS_1mb_TGP_hm3_chr@_std \ --out-harmonized <><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><> 2022-01-29 18:34:41,453 : INFO : See full log at: /import/home/jxiaoae/cross-popu/log-tram/test/BMI_meta.log 2022-01-29 18:34:41,454 : INFO : Program executed via: ./LOG-TRAM/src/LOG-TRAM.py \ --out BMI_meta \ --sumstats-popu1 BMI_harmonized_pop1_UKB.txt,BMI_UKB \ --sumstats-popu2 BMI_harmonized_pop2_BBJ.txt,BMI_BBJ \ --ldscores ./LDscoresEUR-EAS/ldsc_annot_EUR_EAS_1mb_TGP_hm3_chr@_std \ --out-reg-coef 2022-01-29 18:34:41,454 : INFO : Reading in the base LD Scores 2022-01-29 18:35:14,125 : INFO : LD score matrix shape: 1023555x4 2022-01-29 18:35:14,125 : INFO : Reading in summary statistics 2022-01-29 18:35:15,922 : INFO : Sumstats BMI_UKB shape: 1095285x11 2022-01-29 18:35:17,682 : INFO : Sumstats BMI_BBJ shape: 1095285x11 2022-01-29 18:35:17,682 : INFO : Matching SNPs from LD scores and GWAS Sumstats 2022-01-29 18:35:19,497 : INFO : Number of SNPS in initial intersection of all sources 983679 2022-01-29 18:35:25,125 : INFO : Removing ambiguous SNPs... 2022-01-29 18:35:26,142 : INFO : Number of ambiguous SNPS have been removed: 0 2022-01-29 18:35:26,716 : INFO : Aligning the minor allele according to a designated reference summary statistics DataFrame: BMI_UKB 2022-01-29 18:35:27,762 : INFO : 0 SNPs need to be flipped for sumstats: BMI_UKB 2022-01-29 18:35:29,822 : INFO : 0 SNPs need to be flipped for sumstats: BMI_BBJ 2022-01-29 18:36:16,018 : INFO : Running LD Score regression for the whole genome. 
2022-01-29 18:36:18,188 : INFO : Regression coefficients (LD): [[2.58344812e-07 1.37599926e-07] [1.37599926e-07 1.71066489e-07]] 2022-01-29 18:36:18,189 : INFO : Regression coefficients (Intercept): [[1.17805552 0.01417345] [0.01417345 1.05501521]] 2022-01-29 18:36:18,194 : INFO : Running LD Score regression and LOG-TRAM for local regions chromosome by chromosome. 2022-01-29 18:36:23,590 : INFO : Processing chromosome 1, SNPs number: 81381 2022-01-29 18:36:23,899 : INFO : find 223 local regions in chromosome 1 2022-01-29 18:37:16,869 : INFO : Processing chromosome 2, SNPs number: 82830 2022-01-29 18:37:17,239 : INFO : find 236 local regions in chromosome 2 2022-01-29 18:38:11,352 : INFO : Processing chromosome 3, SNPs number: 69347 2022-01-29 18:38:11,737 : INFO : find 195 local regions in chromosome 3 2022-01-29 18:38:50,701 : INFO : Processing chromosome 4, SNPs number: 61810 2022-01-29 18:38:51,109 : INFO : find 188 local regions in chromosome 4 2022-01-29 18:39:26,334 : INFO : Processing chromosome 5, SNPs number: 62385 2022-01-29 18:39:26,721 : INFO : find 177 local regions in chromosome 5 2022-01-29 18:39:59,837 : INFO : Processing chromosome 6, SNPs number: 62463 2022-01-29 18:40:00,189 : INFO : find 167 local regions in chromosome 6 2022-01-29 18:40:30,829 : INFO : Processing chromosome 7, SNPs number: 54562 2022-01-29 18:40:31,175 : INFO : find 154 local regions in chromosome 7 2022-01-29 18:40:57,570 : INFO : Processing chromosome 8, SNPs number: 53716 2022-01-29 18:40:57,953 : INFO : find 143 local regions in chromosome 8 2022-01-29 18:41:22,665 : INFO : Processing chromosome 9, SNPs number: 45989 2022-01-29 18:41:23,009 : INFO : find 110 local regions in chromosome 9 2022-01-29 18:41:41,609 : INFO : Processing chromosome 10, SNPs number: 53129 2022-01-29 18:41:41,956 : INFO : find 132 local regions in chromosome 10 2022-01-29 18:42:03,790 : INFO : Processing chromosome 11, SNPs number: 50935 2022-01-29 18:42:04,127 : INFO : find 132 local regions in chromosome 11 2022-01-29 18:42:26,059 : INFO : Processing chromosome 12, SNPs number: 47958 2022-01-29 18:42:26,431 : INFO : find 129 local regions in chromosome 12 2022-01-29 18:42:46,783 : INFO : Processing chromosome 13, SNPs number: 37606 2022-01-29 18:42:47,124 : INFO : find 96 local regions in chromosome 13 2022-01-29 18:43:00,417 : INFO : Processing chromosome 14, SNPs number: 32678 2022-01-29 18:43:00,756 : INFO : find 87 local regions in chromosome 14 2022-01-29 18:43:11,190 : INFO : Processing chromosome 15, SNPs number: 30002 2022-01-29 18:43:11,527 : INFO : find 81 local regions in chromosome 15 2022-01-29 18:43:22,042 : INFO : Processing chromosome 16, SNPs number: 29709 2022-01-29 18:43:22,402 : INFO : find 75 local regions in chromosome 16 2022-01-29 18:43:31,769 : INFO : Processing chromosome 17, SNPs number: 26250 2022-01-29 18:43:32,055 : INFO : find 78 local regions in chromosome 17 2022-01-29 18:43:40,682 : INFO : Processing chromosome 18, SNPs number: 29371 2022-01-29 18:43:41,028 : INFO : find 75 local regions in chromosome 18 2022-01-29 18:43:50,370 : INFO : Processing chromosome 19, SNPs number: 18061 2022-01-29 18:43:50,649 : INFO : find 56 local regions in chromosome 19 2022-01-29 18:43:56,505 : INFO : Processing chromosome 20, SNPs number: 25420 2022-01-29 18:43:56,791 : INFO : find 60 local regions in chromosome 20 2022-01-29 18:44:03,907 : INFO : Processing chromosome 21, SNPs number: 14327 2022-01-29 18:44:04,238 : INFO : find 34 local regions in chromosome 21 2022-01-29 18:44:08,303 : INFO : Processing 
chromosome 22, SNPs number: 13750 2022-01-29 18:44:08,563 : INFO : find 34 local regions in chromosome 22 2022-01-29 18:44:11,092 : INFO : Saving local regions regression coefficients to file BMI_meta_TRAM_reg_coefs.npy 2022-01-29 18:44:11,107 : INFO : Preparing results 2022-01-29 18:44:18,385 : INFO : Running LD Score regression for the whole genome after Meta-GWAS. 2022-01-29 18:44:20,918 : INFO : Regression coefficients (LD): [[2.27339584e-07 2.01271718e-07] [2.01271718e-07 2.86772547e-07]] 2022-01-29 18:44:20,919 : INFO : Regression coefficients (Intercept): [[1.01577234 0.2919746 ] [0.2919746 1.04275966]] 2022-01-29 18:44:20,920 : INFO : Estimating the effective sample size and save output to disk 2022-01-29 18:44:20,920 : INFO : Writing BMI_UKB (Population 1) to disk 2022-01-29 18:44:28,927 : INFO : Writing BMI_BBJ (Population 2) to disk 2022-01-29 18:44:36,831 : INFO : Execution complete ###Output _____no_output_____ ###Markdown LOG-TRAM will output two meta-analysis files, corresponding to EAS and EUR respectively. LOG-TRAM will add the inputed phenotype name after `--out` argument automatically. Usually, we focus on the under-represented populations such as `BMI_meta_TRAM_pop2_BMI_BBJ.txt` for EAS. Visualize results ###Code import pandas as pd import numpy as np eas_gwas = pd.read_csv('BMI_harmonized_pop2_BBJ.txt',sep='\t') eas_meta = pd.read_csv('BMI_meta_TRAM_pop2_BMI_BBJ.txt',sep='\t') eas_gwas eas_meta # N is the original GWAS sample size # N_eff is the computed effective sample size # N_eff should be larger than N as LOG-TRAM can brorrow information from the large-scale auxiliary dataset. import warnings warnings.filterwarnings('ignore') import matplotlib.pyplot as plt from matplotlib import style style.use('ggplot') style.use('seaborn-white') import seaborn as sns import matplotlib as mpl import sys sys.path.append('../src') from plots import * ###Output _____no_output_____ ###Markdown QQ-plot ###Code sns.set_context('paper',font_scale=1.3) mpl.rcParams['figure.dpi']=150 mpl.rcParams['savefig.dpi']=150 mpl.rcParams['figure.figsize']=7,7 mpl.rcParams['axes.spines.right'] = False mpl.rcParams['axes.spines.top'] = False fig, ax = plt.subplots(1,1) qqplot([eas_gwas['P'],eas_meta['P']], ['GWAS','LOG-TRAM'], color=['C2', 'C0', 'C2', 'C3', 'C4', 'C5'], shape=['.','.'], error_type='theoretical', distribution='beta', n_quantiles = 100, ms=5, title='BMI (EAS)',ax=ax) ###Output _____no_output_____ ###Markdown Manhatton plot ###Code # identify lead SNPs from GWAS summary statistics def add_locus(gwas): gwas_sig = gwas.loc[gwas['P']<=5e-8].reset_index(drop=True) loci = [] c = 0 bp = -1 l = -1 for row in gwas_sig.iterrows(): if c!=row[1]['CHR'] or row[1]['BP']-bp>1000000: c = row[1]['CHR'] bp = row[1]['BP'] l += 1 loci.append(l) gwas_sig['loci'] = loci gwas_sig = gwas_sig.loc[gwas_sig.groupby('loci')['P'].idxmin()].reset_index(drop=True) return gwas_sig # maximum p-vaule = 1e-30 threshold = 1e-30 eas_gwas.loc[eas_gwas['P']<threshold,'P'] = threshold eas_meta.loc[eas_meta['P']<threshold,'P'] = threshold eas_gwas_sig = add_locus(eas_gwas) eas_meta_sig = add_locus(eas_meta) sns.set_context('paper',font_scale=1.8) chrs = [str(i) for i in range(1,23)] chrs_names = np.array([str(i) for i in range(1,23)]) mpl.rcParams['figure.dpi']=100 mpl.rcParams['savefig.dpi']=100 mpl.rcParams['figure.figsize']=18, 10 mpl.rcParams['axes.spines.right'] = False mpl.rcParams['axes.spines.top'] = False colors = ['skyblue','#000080']*11 manhattan(eas_gwas['P'], eas_gwas['BP'], eas_gwas['CHR'].astype(str), '', 
p2=eas_gwas_sig['P'], pos2=eas_gwas_sig['BP'], chr2=eas_gwas_sig['CHR'].astype(str), label2='', plot_type='single', chrs_plot=[str(i) for i in range(1,23)], chrs_names=chrs_names, cut = 0, title='{}'.format('GWAS (EAS)'), xlabel='chromosome', ylabel='-log10(p-value)', lines= [7.3], lines_styles = ['--'], top1=31, top2=31, lines_colors=['grey'], colors = colors, scaling = '-log10',alpha=0.9) sns.set_context('paper',font_scale=1.8) chrs = [str(i) for i in range(1,23)] chrs_names = np.array([str(i) for i in range(1,23)]) mpl.rcParams['figure.dpi']=100 mpl.rcParams['savefig.dpi']=100 mpl.rcParams['figure.figsize']=18, 10 mpl.rcParams['axes.spines.right'] = False mpl.rcParams['axes.spines.top'] = False colors = ['skyblue','#000080']*11 manhattan(eas_meta['P'], eas_meta['BP'], eas_meta['CHR'].astype(str), '', p2=eas_meta_sig['P'], pos2=eas_meta_sig['BP'], chr2=eas_meta_sig['CHR'].astype(str), label2='', plot_type='single', chrs_plot=[str(i) for i in range(1,23)], chrs_names=chrs_names, cut = 0, title='{}'.format('LOG-TRAM (EAS)'), xlabel='chromosome', ylabel='-log10(p-value)', lines= [7.3], lines_styles = ['--'], top1=31, top2=31, lines_colors=['grey'], colors = colors, scaling = '-log10',alpha=0.9) plt.text(2278444814*1.094,29,'→',c='r',rotation=270) plt.text(2278444814*1.09,28,'rs7217403',c='r') plt.show() ###Output 2878444814.0 ###Markdown Effective sample size ###Code eas_gwas = pd.read_csv('BMI_harmonized_pop2_BBJ.txt',sep='\t') eas_meta = pd.read_csv('BMI_meta_TRAM_pop2_BMI_BBJ.txt',sep='\t') # intercept_gwas: LDSC regression intercept for Original EAS GWAS summary statistics # intercept_meta: LDSC regression intercept for LOG-TRAM EAS meta-analysis association statistics # Both of them are available in output log intercept_meta, intercept_gwas = 1.05501521, 1.04275966 Neff_f = ((eas_meta['Z']**2).mean() - intercept_meta)/((eas_gwas['Z']**2).mean() - intercept_gwas) print('Original EAS GWAS sample size: {},\nLOG-TRAM EAS effective sample size: {}'.format( eas_gwas['N'].values.mean(), Neff_f*eas_gwas['N'].values.mean())) ###Output Original EAS GWAS sample size: 85894.0, LOG-TRAM EAS effective sample size: 133947.87313487363
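###Markdown In formula form, the cell above computes (notation ours): $$ N_{\mathrm{eff}} \;=\; N_{\mathrm{GWAS}}\times\frac{\overline{Z^{2}_{\mathrm{meta}}} - c_{\mathrm{meta}}}{\overline{Z^{2}_{\mathrm{GWAS}}} - c_{\mathrm{GWAS}}}, $$ where $\overline{Z^{2}}$ is the mean squared Z-score of the EAS summary statistics before/after meta-analysis and $c$ is the corresponding LDSC regression intercept reported in the log. The ratio exceeds 1 because LOG-TRAM borrows information from the larger auxiliary GWAS, so the effective sample size (≈133,948) is larger than the original EAS sample size (85,894).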
Chapter01/Excercises/Exercise8.ipynb
###Markdown **One Hot Encoding** ###Code # OneHotEncoder comes from scikit-learn; df and data_column_category (the list of categorical columns) are defined in earlier cells of this exercise from sklearn.preprocessing import OneHotEncoder onehot_encoder = OneHotEncoder(sparse=False) onehot_encoded = onehot_encoder.fit_transform(df[data_column_category]) onehot_encoded_frame = pd.DataFrame(onehot_encoded, columns = onehot_encoder.get_feature_names(data_column_category)) onehot_encoded_frame.head() ###Output _____no_output_____
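###Markdown A small usage note (an addition, not from the chapter text): if the encoder may see category values at prediction time that were absent when it was fitted, `handle_unknown='ignore'` prevents `transform` from raising an error; and on newer scikit-learn releases (≥ 1.0) the column names come from `get_feature_names_out` instead of `get_feature_names`. ###Code
# Hypothetical variant that tolerates unseen categories at transform time
onehot_encoder_safe = OneHotEncoder(sparse=False, handle_unknown='ignore')
onehot_encoded_safe = onehot_encoder_safe.fit_transform(df[data_column_category])
###Output _____no_output_____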
Case_Churn.ipynb
###Markdown CASE CHURN Analise o dataset de churn de uma Telecom que está disponível nesse [link](https://www.kaggle.com/blastchar/telco-customer-churn)Para esse dataset verifique:* Existe NA nas variáveis? Quantos?* Como você faria para encontrar o perfil do cliente que faz churn?* Qual seria as características do cliente fidelizado? Caracterize os dois perfis usando variáveis sociais e os serviços contratados.* Leia a documentação do jointplot do seaborn e crie uma visualização usando as variáveis **tenure** e **MonthlyCharges** para os casos que Churn==Yes e Churn==No* Que sugestões voce poderia sugerir para diminuir o churn? Importando Dataset ###Code import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt import seaborn as sns import matplotlib as mpl df = pd.read_csv('/content/drive/MyDrive/Cursos /01. Digital House/0_BasicoPython/Case Churn - Data Viz/WA_Fn-UseC_-Telco-Customer-Churn.csv') df.head() ###Output _____no_output_____ ###Markdown Dimensões, NaN, dtype ###Code df.shape # Contagem de NULL/ missing df.isnull().sum() # Não temos nulos na nossa base. df.info() # Valores únicos para todas as colunas for column in df: print(column) print(df[column].unique()) # Acredito que o único que deva ser modificado é o TotalCharges para float df['TotalCharges'] = pd.to_numeric(df['TotalCharges'], errors='coerce') ###Output _____no_output_____ ###Markdown Verificando Duplicadas ###Code df['customerID'].duplicated().sum() ###Output _____no_output_____ ###Markdown Análises com PIVOT TABLE ###Code #Features numéricas df_num = pd.pivot_table(df,index=('Churn')) df_num #Features Categóricas df_cat = df.dtypes[df.dtypes=='object'].index #para cada coluna (c) em todas as colunas categoricas (cols_cat) for coluna in df_cat: print(pd.pivot_table(df,index=('Churn',coluna),aggfunc='count')) #print a contagem de todas elas entre elas, fixando o Churn. ###Output Contract Dependents ... gender tenure Churn customerID ... No 0002-ORFBO 1 1 ... 1 1 0003-MKNFE 1 1 ... 1 1 0013-MHZWF 1 1 ... 1 1 0013-SMEOE 1 1 ... 1 1 0014-BMAQU 1 1 ... 1 1 ... ... ... ... ... ... Yes 9961-JBNMK 1 1 ... 1 1 9965-YOKZB 1 1 ... 1 1 9985-MWVIX 1 1 ... 1 1 9986-BONCE 1 1 ... 1 1 9992-RRAMN 1 1 ... 1 1 [7043 rows x 19 columns] Contract Dependents ... customerID tenure Churn gender ... No Female 2549 2549 ... 2549 2549 Male 2625 2625 ... 2625 2625 Yes Female 939 939 ... 939 939 Male 930 930 ... 930 930 [4 rows x 19 columns] Contract Dependents ... gender tenure Churn Partner ... No No 2441 2441 ... 2441 2441 Yes 2733 2733 ... 2733 2733 Yes No 1200 1200 ... 1200 1200 Yes 669 669 ... 669 669 [4 rows x 19 columns] Contract DeviceProtection ... gender tenure Churn Dependents ... No No 3390 3390 ... 3390 3390 Yes 1784 1784 ... 1784 1784 Yes No 1543 1543 ... 1543 1543 Yes 326 326 ... 326 326 [4 rows x 19 columns] Contract Dependents ... gender tenure Churn PhoneService ... No No 512 512 ... 512 512 Yes 4662 4662 ... 4662 4662 Yes No 170 170 ... 170 170 Yes 1699 1699 ... 1699 1699 [4 rows x 19 columns] Contract Dependents ... gender tenure Churn MultipleLines ... No No 2541 2541 ... 2541 2541 No phone service 512 512 ... 512 512 Yes 2121 2121 ... 2121 2121 Yes No 849 849 ... 849 849 No phone service 170 170 ... 170 170 Yes 850 850 ... 850 850 [6 rows x 19 columns] Contract Dependents ... gender tenure Churn InternetService ... No DSL 1962 1962 ... 1962 1962 Fiber optic 1799 1799 ... 1799 1799 No 1413 1413 ... 1413 1413 Yes DSL 459 459 ... 459 459 Fiber optic 1297 1297 ... 1297 1297 No 113 113 ... 
113 113 [6 rows x 19 columns] Contract Dependents ... gender tenure Churn OnlineSecurity ... No No 2037 2037 ... 2037 2037 No internet service 1413 1413 ... 1413 1413 Yes 1724 1724 ... 1724 1724 Yes No 1461 1461 ... 1461 1461 No internet service 113 113 ... 113 113 Yes 295 295 ... 295 295 [6 rows x 19 columns] Contract Dependents ... gender tenure Churn OnlineBackup ... No No 1855 1855 ... 1855 1855 No internet service 1413 1413 ... 1413 1413 Yes 1906 1906 ... 1906 1906 Yes No 1233 1233 ... 1233 1233 No internet service 113 113 ... 113 113 Yes 523 523 ... 523 523 [6 rows x 19 columns] Contract Dependents ... gender tenure Churn DeviceProtection ... No No 1884 1884 ... 1884 1884 No internet service 1413 1413 ... 1413 1413 Yes 1877 1877 ... 1877 1877 Yes No 1211 1211 ... 1211 1211 No internet service 113 113 ... 113 113 Yes 545 545 ... 545 545 [6 rows x 19 columns] Contract Dependents ... gender tenure Churn TechSupport ... No No 2027 2027 ... 2027 2027 No internet service 1413 1413 ... 1413 1413 Yes 1734 1734 ... 1734 1734 Yes No 1446 1446 ... 1446 1446 No internet service 113 113 ... 113 113 Yes 310 310 ... 310 310 [6 rows x 19 columns] Contract Dependents ... gender tenure Churn StreamingTV ... No No 1868 1868 ... 1868 1868 No internet service 1413 1413 ... 1413 1413 Yes 1893 1893 ... 1893 1893 Yes No 942 942 ... 942 942 No internet service 113 113 ... 113 113 Yes 814 814 ... 814 814 [6 rows x 19 columns] Contract Dependents ... gender tenure Churn StreamingMovies ... No No 1847 1847 ... 1847 1847 No internet service 1413 1413 ... 1413 1413 Yes 1914 1914 ... 1914 1914 Yes No 938 938 ... 938 938 No internet service 113 113 ... 113 113 Yes 818 818 ... 818 818 [6 rows x 19 columns] Dependents DeviceProtection ... gender tenure Churn Contract ... No Month-to-month 2220 2220 ... 2220 2220 One year 1307 1307 ... 1307 1307 Two year 1647 1647 ... 1647 1647 Yes Month-to-month 1655 1655 ... 1655 1655 One year 166 166 ... 166 166 Two year 48 48 ... 48 48 [6 rows x 19 columns] Contract Dependents ... gender tenure Churn PaperlessBilling ... No No 2403 2403 ... 2403 2403 Yes 2771 2771 ... 2771 2771 Yes No 469 469 ... 469 469 Yes 1400 1400 ... 1400 1400 [4 rows x 19 columns] Contract Dependents ... gender tenure Churn PaymentMethod ... No Bank transfer (automatic) 1286 1286 ... 1286 1286 Credit card (automatic) 1290 1290 ... 1290 1290 Electronic check 1294 1294 ... 1294 1294 Mailed check 1304 1304 ... 1304 1304 Yes Bank transfer (automatic) 258 258 ... 258 258 Credit card (automatic) 232 232 ... 232 232 Electronic check 1071 1071 ... 1071 1071 Mailed check 308 308 ... 308 308 [8 rows x 19 columns] Contract Dependents DeviceProtection ... customerID gender tenure Churn Churn ... No No 5174 5174 5174 ... 5174 5174 5174 Yes Yes 1869 1869 1869 ... 1869 1869 1869 [2 rows x 20 columns] ###Markdown - Neste caso onde deseja-se fazer uma visualização geral, o entendimento do output do pivot_table é mais difícil que a opção de plots apresentada nas cálulas a seguir. 
FOR loop - SHAREY = False ( cada gráfico vai ter um y que melhor se adequa, caso contrario é o mesmo y pra todos) - O "Customer Id não é importante , então podemos dropar essa coluna para fazer uma outra análise ###Code df.drop('customerID',axis=1,inplace=True) columns = df.columns fig, axes = plt.subplots(4, 5, figsize=(20, 16),sharey=False) fig.suptitle('Churn Analysis') feature=0 for row in range(0,4): for col in range(0,5): sns.histplot(ax=axes[row, col], data=df, x=columns[feature], hue="Churn", multiple="stack") feature += 1 ###Output _____no_output_____ ###Markdown 1ª Análise CHURN == YES- GENDER: Feminino - PARTNER: Sem parceiros- DEPENDENTS: Sem dependentes- TENURE : Menor que 20 meses- PHONE SERVICE: Com serviço de telefone - INTERNET SERVICE: com serviço de internet- CONTRACT: COntrato Mensal - PAPERBIILING : Contas em papel - MONTHLY CHARGE: Superior a 70 Analisar o perfil do Churn == YES ###Code df_churn= df.loc[df['Churn'] == "Yes"] df_churn.head() df_churn.shape columns = df_churn.columns fig, axes = plt.subplots(4, 5, figsize=(20, 16),sharey=False) fig.suptitle('Churn Analysis') feature=0 for row in range(0,4): for col in range(0,5): sns.histplot(ax=axes[row, col], data=df_churn, x=columns[feature], hue="Churn", multiple="stack") feature += 1 ###Output _____no_output_____ ###Markdown 2ª Análise ###Code sns.jointplot(data=df, y="MonthlyCharges",x="tenure",hue='Churn') ###Output _____no_output_____ ###Markdown No gráfico acima é possível identificar claramente que: * clientes com tenure < 20 possuem maior chance de Churn* clientes com MonthlyCharges > 70 também possuem maior chance de Churn ###Code sns.pairplot(df, hue= 'Churn') plt.show() ###Output _____no_output_____ ###Markdown Recomendações para minimizar o Churn:* Personalizar os pacotes de acordo com o que os clientes realmente usam, pois clientes com muitos serviços contratados e mensalidades maiores demonstram maior possibilidade de Churn. Neste caso a operadora recebe mais por menos tempo. A longo prazo esta condição é desfavorável* Clientes com Partner ou Dependentes apresentam menor chance de dar Churn ###Code ###Output _____no_output_____
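###Markdown As a complement to the recommendations above (a sketch added by us, assuming the `df` DataFrame loaded earlier is still in memory), the churn rate per contract type can be quantified directly; it backs up the observation that month-to-month contracts concentrate most of the churn. ###Code
# Churn rate by contract type (sketch; assumes df is the churn DataFrame used above)
churn_rate_by_contract = (
    df.assign(churn_flag=(df['Churn'] == 'Yes').astype(int))
      .groupby('Contract')['churn_flag']
      .mean()
      .sort_values(ascending=False)
)
print(churn_rate_by_contract)
###Output _____no_output_____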