# 09 Strain Gage

This is one of the most commonly used sensors. It is used in many transducers. Its fundamental operating principle is fairly easy to understand, and understanding it is the purpose of this lecture.

A strain gage is essentially a thin wire that is wrapped on a film of plastic. <img src="img/StrainGage.png" width="200"> The strain gage is then mounted (glued) on the part for which the strain must be measured. <img src="img/Strain_gauge_2.jpg" width="200">

## Stress, Strain
When a beam is under axial load, the axial stress, $\sigma_a$, is defined as:
\begin{align*}
\sigma_a = \frac{F}{A}
\end{align*}
with $F$ the axial load and $A$ the cross-sectional area of the beam under axial load. <img src="img/BeamUnderStrain.png" width="200">

Under the load, the beam of length $L$ will extend by $dL$, giving rise to the definition of strain, $\epsilon_a$:
\begin{align*}
\epsilon_a = \frac{dL}{L}
\end{align*}
The beam will also contract laterally: the cross-sectional area is reduced by $dA$. This results in a transversal strain $\epsilon_t$. The transversal and axial strains are related by Poisson's ratio:
\begin{align*}
\nu = - \frac{\epsilon_t }{\epsilon_a}
\end{align*}
For a metal, Poisson's ratio is typically $\nu = 0.3$; for an incompressible material, such as rubber (or water), $\nu = 0.5$.

Within the elastic limit, the axial stress and axial strain are related through Hooke's law by the Young's modulus, $E$:
\begin{align*}
\sigma_a = E \epsilon_a
\end{align*}
<img src="img/ElasticRegime.png" width="200">

## Resistance of a wire

The electrical resistance of a wire $R$ is related to its physical properties (the electrical resistivity, $\rho$ in $\Omega \cdot \text{m}$) and its geometry: length $L$ and cross-sectional area $A$.
\begin{align*}
R = \frac{\rho L}{A}
\end{align*}
Mathematically, a change in the wire's dimensions will result in a change in its electrical resistance.
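As a quick numerical sanity check of the resistance formula (the wire dimensions and resistivity below are assumed values, typical of constantan gage wire, not numbers from these notes):

```python
import math

# Assumed values: constantan wire (rho ~ 4.9e-7 Ohm*m), 0.5 m long,
# 25 micrometer diameter -- representative of a wire strain gage grid.
rho = 4.9e-7            # electrical resistivity, Ohm*m
L = 0.5                 # wire length, m
d = 25e-6               # wire diameter, m
A = math.pi * d**2 / 4  # cross-sectional area, m^2

R = rho * L / A
print(f"R = {R:.1f} Ohm")  # a few hundred ohms: the right order for a gage
```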
This can be derived from first principles:
\begin{align}
\frac{dR}{R} = \frac{d\rho}{\rho} + \frac{dL}{L} - \frac{dA}{A}
\end{align}
If the wire has a square cross section, then:
\begin{align*}
A & = L'^2 \\
\frac{dA}{A} & = \frac{d(L'^2)}{L'^2} = \frac{2L'dL'}{L'^2} = 2 \frac{dL'}{L'}
\end{align*}
This relates the change in cross-sectional area to the transversal strain:
\begin{align*}
\epsilon_t = \frac{dL'}{L'}
\end{align*}
Using Poisson's ratio, we can then relate the change in cross-sectional area ($dA/A$) to the axial strain $\epsilon_a = dL/L$:
\begin{align*}
\epsilon_t &= - \nu \epsilon_a \\
\frac{dL'}{L'} &= - \nu \frac{dL}{L} \; \text{or}\\
\frac{dA}{A} & = 2\frac{dL'}{L'} = -2 \nu \frac{dL}{L}
\end{align*}
Finally, we can substitute $dA/A$ into the equation for $dR/R$ and relate the change in resistance to the change in wire geometry, remembering that for a metal $\nu = 0.3$:
\begin{align}
\frac{dR}{R} & = \frac{d\rho}{\rho} + \frac{dL}{L} - \frac{dA}{A} \\
& = \frac{d\rho}{\rho} + \frac{dL}{L} + 2\nu \frac{dL}{L} \\
& = \frac{d\rho}{\rho} + 1.6 \frac{dL}{L} = \frac{d\rho}{\rho} + 1.6 \epsilon_a
\end{align}
It also happens that for most metals, the resistivity increases with axial strain. In general, one can then relate the change in resistance to the axial strain by defining the strain gage factor:
\begin{align}
S = 1.6 + \frac{d\rho}{\rho}\cdot \frac{1}{\epsilon_a}
\end{align}
and finally, we have:
\begin{align*}
\frac{dR}{R} = S \epsilon_a
\end{align*}
$S$ is material dependent and is typically equal to 2.0 for most commercially available strain gages. It is dimensionless.

Strain gages are made of a thin wire that is wrapped in several loops, effectively increasing the length of the wire and therefore the sensitivity of the sensor.

_Question: Explain why a longer wire is necessary to increase the sensitivity of the sensor_.

Most commercially available strain gages have a nominal resistance (resistance under no load, $R_{ini}$) of 120 or 350 $\Omega$.
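To get a feel for the magnitudes involved (the numbers below are assumed: a 120 $\Omega$ gage with $S = 2.0$), the relation $dR/R = S\,\epsilon_a$ gives:

```python
# Assumed illustration: change in resistance of a 120 Ohm gage
# with gage factor S = 2.0 under 1000 microstrain.
S = 2.0          # strain gage factor (dimensionless)
R_ini = 120.0    # nominal resistance, Ohm
eps_a = 1000e-6  # axial strain (1000 microstrain)

dR = S * eps_a * R_ini
print(f"dR = {dR:.3f} Ohm")  # dR = 0.240 Ohm: tiny compared to 120 Ohm
```

This is why measuring $dR$ directly is impractical: the change is a fraction of a percent of the nominal resistance.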
Within the elastic regime, strain is typically within the range $10^{-6} - 10^{-3}$; in fact, strain is usually expressed in units of microstrain, with 1 microstrain = $10^{-6}$. Therefore, changes in resistance will be of the same order. If one were to measure resistance directly, one would need a dynamic range of 120 dB, which is typically very expensive. Instead, one uses a Wheatstone bridge to transform the change in resistance into a voltage, which is easier to measure and does not require such a large dynamic range.

## Wheatstone bridge

<img src="img/WheatstoneBridge.png" width="200">

The output voltage is related to the difference in resistances in the bridge:
\begin{align*}
\frac{V_o}{V_s} = \frac{R_1R_3-R_2R_4}{(R_1+R_4)(R_2+R_3)}
\end{align*}
If the bridge is balanced, then $V_o = 0$, which implies $R_1/R_2 = R_4/R_3$.

In practice, finding a set of resistors that balances the bridge is challenging, and a potentiometer is used as one of the resistances to make minor adjustments to balance the bridge. If one did not make this adjustment (i.e. did not zero the bridge), then all the measurements would have an offset or bias that could be removed in a post-processing phase, as long as the bias stayed constant.

If each resistance $R_i$ is made to vary slightly around its initial value, i.e. $R_i = R_{i,ini} + dR_i$, and, for simplicity, we assume that the initial values of the four resistances are equal, i.e. $R_{1,ini} = R_{2,ini} = R_{3,ini} = R_{4,ini} = R_{ini}$ (which implies that the bridge was initially balanced), then the output voltage is:
\begin{align*}
\frac{V_o}{V_s} = \frac{1}{4} \left( \frac{dR_1}{R_{ini}} - \frac{dR_2}{R_{ini}} + \frac{dR_3}{R_{ini}} - \frac{dR_4}{R_{ini}} \right)
\end{align*}
Note here that the changes in $R_1$ and $R_3$ have a positive effect on $V_o$, while the changes in $R_2$ and $R_4$ have a negative effect on $V_o$.
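The bridge equation is easy to check numerically. The sketch below (values assumed) verifies that a balanced bridge outputs zero, and that for a small $dR$ on branch 1 the exact output matches the linearized quarter-bridge formula $V_o = \frac{V_s}{4}\frac{dR}{R}$:

```python
# Sketch (assumed values): exact Wheatstone bridge output vs. the
# linearized single-active-gage approximation.
def bridge_output(R1, R2, R3, R4, Vs):
    """Exact bridge output: Vo = Vs*(R1*R3 - R2*R4)/((R1+R4)*(R2+R3))."""
    return (R1 * R3 - R2 * R4) / ((R1 + R4) * (R2 + R3)) * Vs

Vs, R = 5.0, 120.0
print(bridge_output(R, R, R, R, Vs))        # balanced bridge -> 0.0

dR = 2.0 * 1000e-6 * R                      # gage with S = 2 at 1000 microstrain
exact = bridge_output(R + dR, R, R, R, Vs)  # active gage on branch 1
approx = Vs / 4 * dR / R                    # linearized quarter-bridge formula
print(exact, approx)                        # nearly identical for small dR
```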
In practice, this means that if a beam is in tension, then a strain gage mounted on branch 1 or 3 of the Wheatstone bridge will produce a positive voltage, while a strain gage mounted on branch 2 or 4 will produce a negative voltage. One takes advantage of this to increase the sensitivity of strain measurements.

### Quarter bridge
One uses only one quarter of the bridge, i.e. strain gages are only mounted on one branch of the bridge.
\begin{align*}
\frac{V_o}{V_s} = \pm \frac{1}{4} \epsilon_a S
\end{align*}
Sensitivity, $G$:
\begin{align*}
G = \frac{V_o}{\epsilon_a} = \pm \frac{1}{4}S V_s
\end{align*}

### Half bridge
One uses half of the bridge, i.e. strain gages are mounted on two branches of the bridge.
\begin{align*}
\frac{V_o}{V_s} = \pm \frac{1}{2} \epsilon_a S
\end{align*}

### Full bridge
One uses all four branches of the bridge, i.e. strain gages are mounted on each branch.
\begin{align*}
\frac{V_o}{V_s} = \pm \epsilon_a S
\end{align*}

Therefore, as more branches of the bridge carry active gages, the sensitivity of the instrument increases. However, one should be careful how the strain gages are mounted, so that their measurements do not cancel out.

_Exercise_

1- Wheatstone bridge

<img src="img/WheatstoneBridge.png" width="200">

> How important is it to know \& match the resistances of the resistors you employ to create your bridge?
> How would you do that practically?
> Assume $R_1=120\,\Omega$, $R_2=120\,\Omega$, $R_3=120\,\Omega$, $R_4=110\,\Omega$, $V_s=5.00\,\text{V}$. What is $V_\circ$?

```
Vs = 5.00
Vo = (120**2 - 120*110) / (230*240) * Vs
print('Vo = ', Vo, ' V')

# typical range of strain a strain gage can measure:
# 1 - 1000 micro-strain
AxialStrain = 1000*10**(-6)  # axial strain
StrainGageFactor = 2
R_ini = 120  # Ohm
R_1 = R_ini + R_ini*StrainGageFactor*AxialStrain
print(R_1)

# the varying resistance appears in the R2*R4 term here, so Vo is negative
Vo = (120**2 - 120*(R_1)) / ((120+R_1)*240) * Vs
print('Vo = ', Vo, ' V')
```

> How important is it to know \& match the resistances of the resistors you employ to create your bridge?
> How would you do that practically?
> Assume $R_1= R_2 =R_3=120\,\Omega$, $R_4=120.01\,\Omega$, $V_s=5.00\,\text{V}$. What is $V_\circ$?

```
Vs = 5.00
Vo = (120**2 - 120*120.01) / (240.01*240) * Vs
print(Vo)
```

2- Strain gage 1:

One measures the strain on a bridge steel beam. The modulus of elasticity is $E=190$ GPa. Only one strain gage is mounted on the bottom of the beam; the strain gage factor is $S=2.02$.

> a) What kind of electronic circuit will you use? Draw a sketch of it.

> b) Assume all your resistors including the unloaded strain gage are balanced and measure $120\,\Omega$, and that the strain gage is at location $R_2$. The supply voltage is $5.00\,\text{VDC}$. Will $V_\circ$ be positive or negative when a downward load is added?

In practice, we cannot have all resistances exactly equal to $120\,\Omega$: at zero load, the bridge will be unbalanced (i.e. $V_o \neq 0$). How could we balance our bridge? Use a potentiometer to balance the bridge; for a load cell, we "zero" the instrument. Another option to zero out the instrument: take data at zero load, record the voltage $V_{o,noload}$, and subtract $V_{o,noload}$ from the data.

> c) For a loading in which $V_\circ = -1.25\,\text{mV}$, calculate the strain $\epsilon_a$ in units of microstrain.

\begin{align*}
\frac{V_o}{V_s} & = - \frac{1}{4} \epsilon_a S\\
\epsilon_a & = -\frac{4}{S} \frac{V_o}{V_s}
\end{align*}

```
S = 2.02
Vo = -0.00125
Vs = 5
eps_a = -1*(4/S)*(Vo/Vs)
print(eps_a)
```

> d) Calculate the axial stress (in MPa) in the beam under this load.

> e) You now want more sensitivity in your measurement, so you install a second strain gage on top of the beam. Which resistor should you use for this second active strain gage?

> f) With this new setup and the same applied load as previously, what should be the output voltage?

3- Strain Gage with Long Lead Wires

<img src="img/StrainGageLongWires.png" width="360">

A quarter bridge strain gage Wheatstone bridge circuit is constructed with $120\,\Omega$ resistors and a $120\,\Omega$ strain gage.
For this practical application, the strain gage is located very far away from the DAQ station: the lead wires to the strain gage are $10\,\text{m}$ long, and the lead wires have a resistance of $0.080\,\Omega/\text{m}$. The lead wire resistance can lead to problems since $R_{lead}$ changes with temperature.

> Design a modified circuit that will cancel out the effect of the lead wires.

## Homework
```
#export
from fastai.basics import *
from fastai.tabular.core import *
from fastai.tabular.model import *
from fastai.tabular.data import *
#hide
from nbdev.showdoc import *
#default_exp tabular.learner
```
# Tabular learner

> The function to immediately get a `Learner` ready to train for tabular data

The main function you probably want to use in this module is `tabular_learner`. It will automatically create a `TabularModel` suitable for your data and infer the right loss function. See the [tabular tutorial](http://docs.fast.ai/tutorial.tabular) for an example of use in context.

## Main functions

```
#export
@log_args(but_as=Learner.__init__)
class TabularLearner(Learner):
    "`Learner` for tabular data"
    def predict(self, row):
        tst_to = self.dls.valid_ds.new(pd.DataFrame(row).T)
        tst_to.process()
        tst_to.conts = tst_to.conts.astype(np.float32)
        dl = self.dls.valid.new(tst_to)
        inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True)
        i = getattr(self.dls, 'n_inp', -1)
        b = (*tuplify(inp),*tuplify(dec_preds))
        full_dec = self.dls.decode((*tuplify(inp),*tuplify(dec_preds)))
        return full_dec,dec_preds[0],preds[0]

show_doc(TabularLearner, title_level=3)
```

It works exactly like a normal `Learner`; the only difference is that it implements a `predict` method that works on a single row of data.

```
#export
@log_args(to_return=True, but_as=Learner.__init__)
@delegates(Learner.__init__)
def tabular_learner(dls, layers=None, emb_szs=None, config=None, n_out=None, y_range=None, **kwargs):
    "Get a `Learner` using `dls`, with `metrics`, including a `TabularModel` created using the remaining params."
    if config is None: config = tabular_config()
    if layers is None: layers = [200,100]
    to = dls.train_ds
    emb_szs = get_emb_sz(dls.train_ds, {} if emb_szs is None else emb_szs)
    if n_out is None: n_out = get_c(dls)
    assert n_out, "`n_out` is not defined, and could not be infered from data, set `dls.c` or pass `n_out`"
    if y_range is None and 'y_range' in config: y_range = config.pop('y_range')
    model = TabularModel(emb_szs, len(dls.cont_names), n_out, layers, y_range=y_range, **config)
    return TabularLearner(dls, model, **kwargs)
```

If your data was built with fastai, you probably won't need to pass anything to `emb_szs` unless you want to change the default of the library (produced by `get_emb_sz`); the same goes for `n_out`, which should be automatically inferred. `layers` will default to `[200,100]` and is passed to `TabularModel` along with the `config`. Use `tabular_config` to create a `config` and customize the model used. `y_range` gets direct access because this argument is often used. All the other arguments are passed to `Learner`.
``` path = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(path/'adult.csv') cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race'] cont_names = ['age', 'fnlwgt', 'education-num'] procs = [Categorify, FillMissing, Normalize] dls = TabularDataLoaders.from_df(df, path, procs=procs, cat_names=cat_names, cont_names=cont_names, y_names="salary", valid_idx=list(range(800,1000)), bs=64) learn = tabular_learner(dls) #hide tst = learn.predict(df.iloc[0]) #hide #test y_range is passed learn = tabular_learner(dls, y_range=(0,32)) assert isinstance(learn.model.layers[-1], SigmoidRange) test_eq(learn.model.layers[-1].low, 0) test_eq(learn.model.layers[-1].high, 32) learn = tabular_learner(dls, config = tabular_config(y_range=(0,32))) assert isinstance(learn.model.layers[-1], SigmoidRange) test_eq(learn.model.layers[-1].low, 0) test_eq(learn.model.layers[-1].high, 32) #export @typedispatch def show_results(x:Tabular, y:Tabular, samples, outs, ctxs=None, max_n=10, **kwargs): df = x.all_cols[:max_n] for n in x.y_names: df[n+'_pred'] = y[n][:max_n].values display_df(df) ``` ## Export - ``` #hide from nbdev.export import notebook2script notebook2script() ```
# Aerospike Connect for Spark - SparkML Prediction Model Tutorial
## Tested with Java 8, Spark 3.0.0, Python 3.7, and Aerospike Spark Connector 3.0.0

## Summary
Build a linear regression model to predict birth weight using Aerospike Database and Spark. Here are the features used:
- gestation weeks
- mother’s age
- father’s age
- mother’s weight gain during pregnancy
- [Apgar score](https://en.wikipedia.org/wiki/Apgar_score)

Aerospike is used to store the Natality dataset that is published by CDC. The table is accessed in Apache Spark using the Aerospike Spark Connector, and Spark ML is used to build and evaluate the model. The model can later be converted to PMML and deployed on your inference server for predictions.

### Prerequisites
1. Load Aerospike server if not already available - docker run -d --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike
2. Feature key needs to be located in AS_FEATURE_KEY_PATH
3. [Download the connector](https://www.aerospike.com/enterprise/download/connectors/aerospike-spark/3.0.0/)

```
#IP Address or DNS name for one host in your Aerospike cluster.
#A seed address for the Aerospike database cluster is required
AS_HOST ="127.0.0.1"
# Name of one of your namespaces. Type 'show namespaces' at the aql prompt if you are not sure
AS_NAMESPACE = "test"
AS_FEATURE_KEY_PATH = "/etc/aerospike/features.conf"
AEROSPIKE_SPARK_JAR_VERSION="3.0.0"
AS_PORT = 3000 # Usually 3000, but change here if not
AS_CONNECTION_STRING = AS_HOST + ":"+ str(AS_PORT)

#Locate the Spark installation - this'll use the SPARK_HOME environment variable
import findspark
findspark.init()

# Below will help you download the Spark Connector Jar if you haven't done so already.
import urllib.request
import os

def aerospike_spark_jar_download_url(version=AEROSPIKE_SPARK_JAR_VERSION):
    DOWNLOAD_PREFIX="https://www.aerospike.com/enterprise/download/connectors/aerospike-spark/"
    DOWNLOAD_SUFFIX="/artifact/jar"
    AEROSPIKE_SPARK_JAR_DOWNLOAD_URL = DOWNLOAD_PREFIX+AEROSPIKE_SPARK_JAR_VERSION+DOWNLOAD_SUFFIX
    return AEROSPIKE_SPARK_JAR_DOWNLOAD_URL

def download_aerospike_spark_jar(version=AEROSPIKE_SPARK_JAR_VERSION):
    JAR_NAME="aerospike-spark-assembly-"+AEROSPIKE_SPARK_JAR_VERSION+".jar"
    if(not(os.path.exists(JAR_NAME))) :
        urllib.request.urlretrieve(aerospike_spark_jar_download_url(),JAR_NAME)
    else :
        print(JAR_NAME+" already downloaded")
    return os.path.join(os.getcwd(),JAR_NAME)

AEROSPIKE_JAR_PATH=download_aerospike_spark_jar()
os.environ["PYSPARK_SUBMIT_ARGS"] = '--jars ' + AEROSPIKE_JAR_PATH + ' pyspark-shell'

import pyspark
from pyspark.context import SparkContext
from pyspark.sql.context import SQLContext
from pyspark.sql.session import SparkSession
from pyspark.ml.linalg import Vectors
from pyspark.ml.regression import LinearRegression
from pyspark.sql.types import StringType, StructField, StructType, ArrayType, IntegerType, MapType, LongType, DoubleType

#Get a spark session object and set required Aerospike configuration properties
sc = SparkContext.getOrCreate()
print("Spark Version:", sc.version)
spark = SparkSession(sc)
sqlContext = SQLContext(sc)
spark.conf.set("aerospike.namespace",AS_NAMESPACE)
spark.conf.set("aerospike.seedhost",AS_CONNECTION_STRING)
spark.conf.set("aerospike.keyPath",AS_FEATURE_KEY_PATH )
```

## Step 1: Load Data into a DataFrame

```
as_data=spark \
.read \
.format("aerospike") \
.option("aerospike.set", "natality").load()

as_data.show(5)

print("Inferred Schema along with Metadata.")
as_data.printSchema()
```

### To speed up the load process at scale, use the [knobs](https://www.aerospike.com/docs/connect/processing/spark/performance.html) available in the Aerospike Spark Connector.
For example, **spark.conf.set("aerospike.partition.factor", 15 )** will map 4096 Aerospike partitions to 32K Spark partitions. <font color=red> (Note: Please configure this carefully based on the available resources (CPU threads) in your system.)</font> ## Step 2 - Prep data ``` # This Spark3.0 setting, if true, will turn on Adaptive Query Execution (AQE), which will make use of the # runtime statistics to choose the most efficient query execution plan. It will speed up any joins that you # plan to use for data prep step. spark.conf.set("spark.sql.adaptive.enabled", 'true') # Run a query in Spark SQL to ensure no NULL values exist. as_data.createOrReplaceTempView("natality") sql_query = """ SELECT * from natality where weight_pnd is not null and mother_age is not null and father_age is not null and father_age < 80 and gstation_week is not null and weight_gain_pnd < 90 and apgar_5min != "99" and apgar_5min != "88" """ clean_data = spark.sql(sql_query) #Drop the Aerospike metadata from the dataset because its not required. 
#The metadata is added because we are inferring the schema as opposed to providing a strict schema columns_to_drop = ['__key','__digest','__expiry','__generation','__ttl' ] clean_data = clean_data.drop(*columns_to_drop) # dropping null values clean_data = clean_data.dropna() clean_data.cache() clean_data.show(5) #Descriptive Analysis of the data clean_data.describe().toPandas().transpose() ``` ## Step 3 Visualize Data ``` import numpy as np import matplotlib.pyplot as plt import pandas as pd import math pdf = clean_data.toPandas() #Histogram - Father Age pdf[['father_age']].plot(kind='hist',bins=10,rwidth=0.8) plt.xlabel('Fathers Age (years)',fontsize=12) plt.legend(loc=None) plt.style.use('seaborn-whitegrid') plt.show() ''' pdf[['mother_age']].plot(kind='hist',bins=10,rwidth=0.8) plt.xlabel('Mothers Age (years)',fontsize=12) plt.legend(loc=None) plt.style.use('seaborn-whitegrid') plt.show() ''' pdf[['weight_pnd']].plot(kind='hist',bins=10,rwidth=0.8) plt.xlabel('Babys Weight (Pounds)',fontsize=12) plt.legend(loc=None) plt.style.use('seaborn-whitegrid') plt.show() pdf[['gstation_week']].plot(kind='hist',bins=10,rwidth=0.8) plt.xlabel('Gestation (Weeks)',fontsize=12) plt.legend(loc=None) plt.style.use('seaborn-whitegrid') plt.show() pdf[['weight_gain_pnd']].plot(kind='hist',bins=10,rwidth=0.8) plt.xlabel('mother’s weight gain during pregnancy',fontsize=12) plt.legend(loc=None) plt.style.use('seaborn-whitegrid') plt.show() #Histogram - Apgar Score print("Apgar Score: Scores of 7 and above are generally normal; 4 to 6, fairly low; and 3 and below are generally \ regarded as critically low and cause for immediate resuscitative efforts.") pdf[['apgar_5min']].plot(kind='hist',bins=10,rwidth=0.8) plt.xlabel('Apgar score',fontsize=12) plt.legend(loc=None) plt.style.use('seaborn-whitegrid') plt.show() ``` ## Step 4 - Create Model **Steps used for model creation:** 1. Split cleaned data into Training and Test sets 2. Vectorize features on which the model will be trained 3. 
Create a linear regression model (Choose any ML algorithm that provides the best fit for the given dataset) 4. Train model (Although not shown here, you could use K-fold cross-validation and Grid Search to choose the best hyper-parameters for the model) 5. Evaluate model ``` # Define a function that collects the features of interest # (mother_age, father_age, and gestation_weeks) into a vector. # Package the vector in a tuple containing the label (`weight_pounds`) for that # row.## def vector_from_inputs(r): return (r["weight_pnd"], Vectors.dense(float(r["mother_age"]), float(r["father_age"]), float(r["gstation_week"]), float(r["weight_gain_pnd"]), float(r["apgar_5min"]))) #Split that data 70% training and 30% Evaluation data train, test = clean_data.randomSplit([0.7, 0.3]) #Check the shape of the data train.show() print((train.count(), len(train.columns))) test.show() print((test.count(), len(test.columns))) # Create an input DataFrame for Spark ML using the above function. training_data = train.rdd.map(vector_from_inputs).toDF(["label", "features"]) # Construct a new LinearRegression object and fit the training data. lr = LinearRegression(maxIter=5, regParam=0.2, solver="normal") #Voila! your first model using Spark ML is trained model = lr.fit(training_data) # Print the model summary. 
print("Coefficients:" + str(model.coefficients)) print("Intercept:" + str(model.intercept)) print("R^2:" + str(model.summary.r2)) model.summary.residuals.show() ``` ### Evaluate Model ``` eval_data = test.rdd.map(vector_from_inputs).toDF(["label", "features"]) eval_data.show() evaluation_summary = model.evaluate(eval_data) print("MAE:", evaluation_summary.meanAbsoluteError) print("RMSE:", evaluation_summary.rootMeanSquaredError) print("R-squared value:", evaluation_summary.r2) ``` ## Step 5 - Batch Prediction ``` #eval_data contains the records (ideally production) that you'd like to use for the prediction predictions = model.transform(eval_data) predictions.show() ``` #### Compare the labels and the predictions, they should ideally match up for an accurate model. Label is the actual weight of the baby and prediction is the predicated weight ### Saving the Predictions to Aerospike for ML Application's consumption ``` # Aerospike is a key/value database, hence a key is needed to store the predictions into the database. Hence we need # to add the _id column to the predictions using SparkSQL predictions.createOrReplaceTempView("predict_view") sql_query = """ SELECT *, monotonically_increasing_id() as _id from predict_view """ predict_df = spark.sql(sql_query) predict_df.show() print("#records:", predict_df.count()) # Now we are good to write the Predictions to Aerospike predict_df \ .write \ .mode('overwrite') \ .format("aerospike") \ .option("aerospike.writeset", "predictions")\ .option("aerospike.updateByKey", "_id") \ .save() ``` #### You can verify that data is written to Aerospike by using either [AQL](https://www.aerospike.com/docs/tools/aql/data_management.html) or the [Aerospike Data Browser](https://github.com/aerospike/aerospike-data-browser) ## Step 6 - Deploy ### Here are a few options: 1. 
Save the model to a PMML file by converting it using Jpmml/[pyspark2pmml](https://github.com/jpmml/pyspark2pmml) and load it into your production environment for inference. 2. Use Aerospike as an [edge database for high velocity ingestion](https://medium.com/aerospike-developer-blog/add-horsepower-to-ai-ml-pipeline-15ca42a10982) for your inference pipeline.
## Concurrency with asyncio ### Thread vs. coroutine ``` # spinner_thread.py import threading import itertools import time import sys class Signal: go = True def spin(msg, signal): write, flush = sys.stdout.write, sys.stdout.flush for char in itertools.cycle('|/-\\'): status = char + ' ' + msg write(status) flush() write('\x08' * len(status)) time.sleep(.1) if not signal.go: break write(' ' * len(status) + '\x08' * len(status)) def slow_function(): time.sleep(3) return 42 def supervisor(): signal = Signal() spinner = threading.Thread(target=spin, args=('thinking!', signal)) print('spinner object:', spinner) spinner.start() result = slow_function() signal.go = False spinner.join() return result def main(): result = supervisor() print('Answer:', result) if __name__ == '__main__': main() # spinner_asyncio.py import asyncio import itertools import sys @asyncio.coroutine def spin(msg): write, flush = sys.stdout.write, sys.stdout.flush for char in itertools.cycle('|/-\\'): status = char + ' ' + msg write(status) flush() write('\x08' * len(status)) try: yield from asyncio.sleep(.1) except asyncio.CancelledError: break write(' ' * len(status) + '\x08' * len(status)) @asyncio.coroutine def slow_function(): yield from asyncio.sleep(3) return 42 @asyncio.coroutine def supervisor(): #Schedule the execution of a coroutine object: #wrap it in a future. Return a Task object. 
    spinner = asyncio.ensure_future(spin('thinking!'))
    print('spinner object:', spinner)
    result = yield from slow_function()
    spinner.cancel()
    return result

def main():
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(supervisor())
    loop.close()
    print('Answer:', result)

if __name__ == '__main__':
    main()

# flags_asyncio.py
import asyncio

import aiohttp

from flags import BASE_URL, save_flag, show, main

@asyncio.coroutine
def get_flag(cc):
    url = '{}/{cc}/{cc}.gif'.format(BASE_URL, cc=cc.lower())
    resp = yield from aiohttp.request('GET', url)
    image = yield from resp.read()
    return image

@asyncio.coroutine
def download_one(cc):
    image = yield from get_flag(cc)
    show(cc)
    save_flag(image, cc.lower() + '.gif')
    return cc

def download_many(cc_list):
    loop = asyncio.get_event_loop()
    to_do = [download_one(cc) for cc in sorted(cc_list)]
    wait_coro = asyncio.wait(to_do)
    res, _ = loop.run_until_complete(wait_coro)
    loop.close()
    return len(res)

if __name__ == '__main__':
    main(download_many)

# flags2_asyncio.py
import asyncio
import collections

import aiohttp
from aiohttp import web
import tqdm

from flags2_common import HTTPStatus, save_flag, Result, main

DEFAULT_CONCUR_REQ = 5
MAX_CONCUR_REQ = 1000

class FetchError(Exception):
    def __init__(self, country_code):
        self.country_code = country_code

@asyncio.coroutine
def get_flag(base_url, cc):
    url = '{}/{cc}/{cc}.gif'.format(base_url, cc=cc.lower())
    resp = yield from aiohttp.ClientSession().get(url)
    if resp.status == 200:
        image = yield from resp.read()
        return image
    elif resp.status == 404:
        raise web.HTTPNotFound()
    else:
        raise aiohttp.HttpProcessingError(
            code=resp.status, message=resp.reason,
            headers=resp.headers)

@asyncio.coroutine
def download_one(cc, base_url, semaphore, verbose):
    try:
        with (yield from semaphore):
            image = yield from get_flag(base_url, cc)
    except web.HTTPNotFound:
        status = HTTPStatus.not_found
        msg = 'not found'
    except Exception as exc:
        raise FetchError(cc) from exc
    else:
        save_flag(image, cc.lower() + '.gif')
        status = HTTPStatus.ok
        msg = 'OK'
    if verbose and msg:
        print(cc, msg)
    return Result(status, cc)

@asyncio.coroutine
def downloader_coro(cc_list, base_url, verbose, concur_req):
    counter = collections.Counter()
    semaphore = asyncio.Semaphore(concur_req)
    to_do = [download_one(cc, base_url, semaphore, verbose)
             for cc in sorted(cc_list)]
    to_do_iter = asyncio.as_completed(to_do)
    if not verbose:
        to_do_iter = tqdm.tqdm(to_do_iter, total=len(cc_list))
    for future in to_do_iter:
        try:
            res = yield from future
        except FetchError as exc:
            country_code = exc.country_code
            try:
                error_msg = exc.__cause__.args[0]
            except IndexError:
                error_msg = exc.__cause__.__class__.__name__
            if verbose and error_msg:
                msg = '*** Error for {}: {}'
                print(msg.format(country_code, error_msg))
            status = HTTPStatus.error
        else:
            status = res.status
        counter[status] += 1
    return counter

def download_many(cc_list, base_url, verbose, concur_req):
    loop = asyncio.get_event_loop()
    coro = downloader_coro(cc_list, base_url, verbose, concur_req)
    counts = loop.run_until_complete(coro)
    loop.close()
    return counts

if __name__ == '__main__':
    main(download_many, DEFAULT_CONCUR_REQ, MAX_CONCUR_REQ)

# run_in_executor
@asyncio.coroutine
def download_one(cc, base_url, semaphore, verbose):
    try:
        with (yield from semaphore):
            image = yield from get_flag(base_url, cc)
    except web.HTTPNotFound:
        status = HTTPStatus.not_found
        msg = 'not found'
    except Exception as exc:
        raise FetchError(cc) from exc
    else:
        # save_flag is also a blocking operation, so use run_in_executor
        # to call save_flag asynchronously in a thread pool
        loop = asyncio.get_event_loop()
        loop.run_in_executor(None, save_flag,
                             image, cc.lower() + '.gif')
        status = HTTPStatus.ok
        msg = 'OK'
    if verbose and msg:
        print(cc, msg)
    return Result(status, cc)

## Doing multiple requests for each download
# flags3_asyncio.py
@asyncio.coroutine
def http_get(url):
    res = yield from aiohttp.request('GET', url)
    if res.status == 200:
        ctype = res.headers.get('Content-type', '').lower()
        if 'json' in ctype or url.endswith('json'):
            data = yield from res.json()
else: data = yield from res.read() elif res.status == 404: raise web.HTTPNotFound() else: raise aiohttp.errors.HttpProcessingError( code=res.status, message=res.reason, headers=res.headers) @asyncio.coroutine def get_country(base_url, cc): url = '{}/{cc}/metadata.json'.format(base_url, cc=cc.lower()) metadata = yield from http_get(url) return metadata['country'] @asyncio.coroutine def get_flag(base_url, cc): url = '{}/{cc}/{cc}.gif'.format(base_url, cc=cc.lower()) return (yield from http_get(url)) @asyncio.coroutine def download_one(cc, base_url, semaphore, verbose): try: with (yield from semaphore): image = yield from get_flag(base_url, cc) with (yield from semaphore): country = yield from get_country(base_url, cc) except web.HTTPNotFound: status = HTTPStatus.not_found msg = 'not found' except Exception as exc: raise FetchError(cc) from exc else: country = country.replace(' ', '_') filename = '{}-{}.gif'.format(country, cc) loop = asyncio.get_event_loop() loop.run_in_executor(None, save_flag, image, filename) status = HTTPStatus.ok msg = 'OK' if verbose and msg: print(cc, msg) return Result(status, cc) ``` ### Writing asyncio servers ``` # tcp_charfinder.py import sys import asyncio from charfinder import UnicodeNameIndex CRLF = b'\r\n' PROMPT = b'?>' index = UnicodeNameIndex() @asyncio.coroutine def handle_queries(reader, writer): while True: writer.write(PROMPT) yield from writer.drain() data = yield from reader.readline() try: query = data.decode().strip() except UnicodeDecodeError: query = '\x00' client = writer.get_extra_info('peername') print('Received from {}: {!r}'.format(client, query)) if query: if ord(query[:1]) < 32: break lines = list(index.find_description_strs(query)) if lines: writer.writelines(line.encode() + CRLF for line in lines) writer.write(index.status(query, len(lines)).encode() + CRLF) yield from writer.drain() print('Sent {} results'.format(len(lines))) print('Close the client socket') writer.close() def main(address='127.0.0.1', 
port=2323): port = int(port) loop = asyncio.get_event_loop() server_coro = asyncio.start_server(handle_queries, address, port, loop=loop) server = loop.run_until_complete(server_coro) host = server.sockets[0].getsockname() print('Serving on {}. Hit CTRL-C to stop.'.format(host)) try: loop.run_forever() except KeyboardInterrupt: pass print('Server shutting down.') server.close() loop.run_until_complete(server.wait_closed()) loop.close() if __name__ == '__main__': main() # http_charfinder.py @asyncio.coroutine def init(loop, address, port): app = web.Application(loop=loop) app.router.add_route('GET', '/', home) handler = app.make_handler() server = yield from loop.create_server(handler, address, port) return server.sockets[0].getsockname() def home(request): query = request.GET.get('query', '').strip() print('Query: {!r}'.format(query)) if query: descriptions = list(index.find_descriptions(query)) res = '\n'.join(ROW_TPL.format(**vars(descr)) for descr in descriptions) msg = index.status(query, len(descriptions)) else: descriptions = [] res = '' msg = 'Enter words describing characters.' html = template.format(query=query, result=res, message=msg) print('Sending {} results'.format(len(descriptions))) return web.Response(content_type=CONTENT_TYPE, text=html) def main(address='127.0.0.1', port=8888): port = int(port) loop = asyncio.get_event_loop() host = loop.run_until_complete(init(loop, address, port)) print('Serving on {}. Hit CTRL-C to stop.'.format(host)) try: loop.run_forever() except KeyboardInterrupt: # CTRL+C pressed pass print('Server shutting down.') loop.close() if __name__ == '__main__': main(*sys.argv[1:]) ```
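The examples above use the generator-based coroutine style (`@asyncio.coroutine` with `yield from`) that was current when this chapter was written; that style has since been deprecated and removed from Python. As a side sketch (not part of the original text), the spinner example translates to modern `async`/`await` like this:

```python
# spinner_async.py -- a modern async/await rewrite of the spinner example
import asyncio
import itertools

async def spin(msg):
    status = ''
    for char in itertools.cycle('|/-\\'):
        status = char + ' ' + msg
        # \x08 (backspace) moves the cursor back so the next frame overwrites this one
        print(status + '\x08' * len(status), end='', flush=True)
        try:
            await asyncio.sleep(.1)
        except asyncio.CancelledError:
            break
    print(' ' * len(status) + '\x08' * len(status), end='')

async def slow_function():
    # pretend to wait for I/O, yielding control to the event loop
    await asyncio.sleep(3)
    return 42

async def supervisor():
    # create_task replaces asyncio.ensure_future for scheduling a coroutine;
    # asyncio.run (below) replaces the explicit event-loop management
    spinner = asyncio.create_task(spin('thinking!'))
    print('spinner object:', spinner)
    result = await slow_function()
    spinner.cancel()
    return result

if __name__ == '__main__':
    print('Answer:', asyncio.run(supervisor()))
```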
github_jupyter
## Problem 1

---

#### The solution should try to use all the Python constructs

- Conditionals and Loops
- Functions
- Classes

#### and datastructures as possible

- List
- Tuple
- Dictionary
- Set

### Problem

---

Moist has a hobby -- collecting figure skating trading cards. His card collection has been growing, and it is now too large to keep in one disorganized pile. Moist needs to sort the cards in alphabetical order, so that he can find the cards that he wants on short notice whenever it is necessary.

The problem is -- Moist can't actually pick up the cards because they keep sliding out of his hands, and the sweat causes permanent damage. Some of the cards are rather expensive, mind you. To facilitate the sorting, Moist has convinced Dr. Horrible to build him a sorting robot. However, in his rather horrible style, Dr. Horrible has decided to make the sorting robot charge Moist a fee of $1 whenever it has to move a trading card during the sorting process.

Moist has figured out that the robot's sorting mechanism is very primitive. It scans the deck of cards from top to bottom. Whenever it finds a card that is lexicographically smaller than the previous card, it moves that card to its correct place in the stack above. This operation costs $1, and the robot resumes scanning down towards the bottom of the deck, moving cards one by one until the entire deck is sorted in lexicographical order from top to bottom.

As wet luck would have it, Moist is almost broke, but keeping his trading cards in order is the only remaining joy in his miserable life. He needs to know how much it would cost him to use the robot to sort his deck of cards.

**Input**

The first line of the input gives the number of test cases, **T**. **T** test cases follow. Each one starts with a line containing a single integer, **N**. The next **N** lines each contain the name of a figure skater, in order from the top of the deck to the bottom.
**Output**

For each test case, output one line containing "Case #x: y", where x is the case number (starting from 1) and y is the number of dollars it would cost Moist to use the robot to sort his deck of trading cards.

**Limits**

1 ≤ **T** ≤ 100. Each name will consist of only letters and the space character. Each name will contain at most 100 characters. No name will start or end with a space. No name will appear more than once in the same test case. Lexicographically, the space character comes first, then come the upper case letters, then the lower case letters.

Small dataset: 1 ≤ N ≤ 10.

Large dataset: 1 ≤ N ≤ 100.

**Sample**

| Input | Output |
|---------------------|-------------|
| 2 | Case \#1: 1 |
| 2 | Case \#2: 0 |
| Oksana Baiul | |
| Michelle Kwan | |
| 3 | |
| Elvis Stojko | |
| Evgeni Plushenko | |
| Kristi Yamaguchi | |

*Note: Solution is not important but procedure taken to solve the problem is important*
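As the note above says, the procedure matters more than the answer. The key observation: the robot's rule means a card costs $1 exactly when it is lexicographically smaller than the last card that stayed in place, so the whole cost can be computed in one pass without simulating the moves. A minimal sketch (the function name is ours, not part of the problem):

```python
def sorting_cost(names):
    """Dollars charged by the robot to sort the deck from top to bottom."""
    cost = 0
    last_in_place = None
    for name in names:
        if last_in_place is not None and name < last_in_place:
            cost += 1              # this card gets moved up into place ($1)
        else:
            last_in_place = name   # card is already in order; it becomes the anchor
    return cost

# Sample cases from the problem statement
print(sorting_cost(["Oksana Baiul", "Michelle Kwan"]))
print(sorting_cost(["Elvis Stojko", "Evgeni Plushenko", "Kristi Yamaguchi"]))
```

Python's default string comparison orders the space character before upper-case letters and upper-case before lower-case (ASCII order), which matches the ordering the problem specifies for names made of letters and spaces.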
<a href="http://cocl.us/pytorch_link_top"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/Pytochtop.png" width="750" alt="IBM Product " /> </a> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/cc-logo-square.png" width="200" alt="cognitiveclass.ai logo" /> <h1>Logistic Regression</h1> <h2>Table of Contents</h2> <p>In this lab, we will cover logistic regression using PyTorch.</p> <ul> <li><a href="#Log">Logistic Function</a></li> <li><a href="#Seq">Build a Logistic Regression Using nn.Sequential</a></li> <li><a href="#Model">Build Custom Modules</a></li> </ul> <p>Estimated Time Needed: <strong>15 min</strong></p> <hr> <h2>Preparation</h2> We'll need the following libraries: ``` # Import the libraries we need for this lab import torch.nn as nn import torch import matplotlib.pyplot as plt ``` Set the random seed: ``` # Set the random seed torch.manual_seed(2) ``` <!--Empty Space for separating topics--> <h2 id="Log">Logistic Function</h2> Create a tensor ranging from -100 to 100: ``` z = torch.arange(-100, 100, 0.1).view(-1, 1) print("The tensor: ", z) ``` Create a sigmoid object: ``` # Create sigmoid object sig = nn.Sigmoid() ``` Apply the element-wise function Sigmoid with the object: ``` # Use sigmoid object to calculate the yhat = sig(z) ``` Plot the results: ``` plt.plot(z.numpy(), yhat.numpy()) plt.xlabel('z') plt.ylabel('yhat') ``` Apply the element-wise Sigmoid from the function module and plot the results: ``` yhat = torch.sigmoid(z) plt.plot(z.numpy(), yhat.numpy()) ``` <!--Empty Space for separating topics--> <h2 id="Seq">Build a Logistic Regression with <code>nn.Sequential</code></h2> Create a 1x1 tensor where x represents one data sample with one dimension, and 2x1 tensor X represents two data samples of one dimension: ``` # Create x and X tensor x = torch.tensor([[1.0]]) X = torch.tensor([[1.0], [100]]) print('x = 
', x) print('X = ', X) ``` Create a logistic regression object with the <code>nn.Sequential</code> model with a one-dimensional input: ``` # Use sequential function to create model model = nn.Sequential(nn.Linear(1, 1), nn.Sigmoid()) ``` The object is represented in the following diagram: <img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter3/3.1.1_logistic_regression_block_diagram.png" width = 800, align = "center" alt="logistic regression block diagram" /> In this case, the parameters are randomly initialized. You can view them the following ways: ``` # Print the parameters print("list(model.parameters()):\n ", list(model.parameters())) print("\nmodel.state_dict():\n ", model.state_dict()) ``` Make a prediction with one sample: ``` # The prediction for x yhat = model(x) print("The prediction: ", yhat) ``` Calling the object with tensor <code>X</code> performed the following operation <b>(code values may not be the same as the diagrams value depending on the version of PyTorch) </b>: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter3/3.1.1_logistic_functio_example%20.png" width="400" alt="Logistic Example" /> Make a prediction with multiple samples: ``` # The prediction for X yhat = model(X) yhat ``` Calling the object performed the following operation: Create a 1x2 tensor where x represents one data sample with one dimension, and 2x3 tensor X represents one data sample of two dimensions: ``` # Create and print samples x = torch.tensor([[1.0, 1.0]]) X = torch.tensor([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]) print('x = ', x) print('X = ', X) ``` Create a logistic regression object with the <code>nn.Sequential</code> model with a two-dimensional input: ``` # Create new model using nn.sequential() model = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid()) ``` The object will apply the Sigmoid function to the output of the 
linear function as shown in the following diagram: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter3/3.1.1logistic_output.png" width="800" alt="The structure of nn.sequential"/> In this case, the parameters are randomly initialized. You can view them the following ways: ``` # Print the parameters print("list(model.parameters()):\n ", list(model.parameters())) print("\nmodel.state_dict():\n ", model.state_dict()) ``` Make a prediction with one sample: ``` # Make the prediction of x yhat = model(x) print("The prediction: ", yhat) ``` The operation is represented in the following diagram: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter3/3.3.1.logisticwithouptut.png" width="500" alt="Sequential Example" /> Make a prediction with multiple samples: ``` # The prediction of X yhat = model(X) print("The prediction: ", yhat) ``` The operation is represented in the following diagram: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter3/3.1.1_logistic_with_outputs2.png" width="800" alt="Sequential Example" /> <!--Empty Space for separating topics--> <h2 id="Model">Build Custom Modules</h2> In this section, you will build a custom Module or class. The model or object function is identical to using <code>nn.Sequential</code>. 
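Whether it is built with `nn.Sequential` or as a custom module, each prediction above is just the sigmoid applied to an affine map, yhat = σ(Wx + b). A PyTorch-free sketch of that computation, with arbitrary illustrative weights (PyTorch initializes them randomly):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logistic_forward(x, w, b):
    # one sample x and a weight vector w of the same dimension
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)

# 2-D input, mirroring nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())
w, b = [0.5, -0.25], 0.1   # arbitrary parameters for illustration
print(logistic_forward([1.0, 1.0], w, b))
print(sigmoid(0.0))        # midpoint of the logistic curve
```

The output is always strictly between 0 and 1, which is why the logistic function is used to turn the linear layer's unbounded score into a probability.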
Create a logistic regression custom module: ``` # Create logistic_regression custom class class logistic_regression(nn.Module): # Constructor def __init__(self, n_inputs): super(logistic_regression, self).__init__() self.linear = nn.Linear(n_inputs, 1) # Prediction def forward(self, x): yhat = torch.sigmoid(self.linear(x)) return yhat ``` Create a 1x1 tensor where x represents one data sample with one dimension, and 3x1 tensor where $X$ represents one data sample of one dimension: ``` # Create x and X tensor x = torch.tensor([[1.0]]) X = torch.tensor([[-100], [0], [100.0]]) print('x = ', x) print('X = ', X) ``` Create a model to predict one dimension: ``` # Create logistic regression model model = logistic_regression(1) ``` In this case, the parameters are randomly initialized. You can view them the following ways: ``` # Print parameters print("list(model.parameters()):\n ", list(model.parameters())) print("\nmodel.state_dict():\n ", model.state_dict()) ``` Make a prediction with one sample: ``` # Make the prediction of x yhat = model(x) print("The prediction result: \n", yhat) ``` Make a prediction with multiple samples: ``` # Make the prediction of X yhat = model(X) print("The prediction result: \n", yhat) ``` Create a logistic regression object with a function with two inputs: ``` # Create logistic regression model model = logistic_regression(2) ``` Create a 1x2 tensor where x represents one data sample with one dimension, and 3x2 tensor X represents one data sample of one dimension: ``` # Create x and X tensor x = torch.tensor([[1.0, 2.0]]) X = torch.tensor([[100, -100], [0.0, 0.0], [-100, 100]]) print('x = ', x) print('X = ', X) ``` Make a prediction with one sample: ``` # Make the prediction of x yhat = model(x) print("The prediction result: \n", yhat) ``` Make a prediction with multiple samples: ``` # Make the prediction of X yhat = model(X) print("The prediction result: \n", yhat) ``` <!--Empty Space for separating topics--> <h3>Practice</h3> Make your own 
model <code>my_model</code> that applies a linear layer first and then the logistic function, using <code>nn.Sequential()</code>. Print out your prediction.

```
# Practice: Make your model and make the prediction
X = torch.tensor([-10.0])
```

Double-click <b>here</b> for the solution.

<!-- 
my_model = nn.Sequential(nn.Linear(1, 1), nn.Sigmoid())
yhat = my_model(X)
print("The prediction: ", yhat)
-->

<!--Empty Space for separating topics-->

<a href="http://cocl.us/pytorch_link_bottom">
    <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/notebook_bottom%20.png" width="750" alt="PyTorch Bottom" />
</a>

<h2>About the Authors:</h2>

<a href="https://www.linkedin.com/in/joseph-s-50398b136/">Joseph Santarcangelo</a> has a PhD in Electrical Engineering. His research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.

Other contributors: <a href="https://www.linkedin.com/in/michelleccarey/">Michelle Carey</a>, <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>

<hr>

Copyright &copy; 2018 <a href="cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu">cognitiveclass.ai</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.
```
import nltk
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords
import re

paragraph = """I have three visions for India. In 3000 years of our history, people from all over the world have come and invaded us, captured our lands, conquered our minds. From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British, the French, the Dutch, all of them came and looted us, took over what was ours. Yet we have not done this to any other nation. We have not conquered anyone. We have not grabbed their land, their culture, their history and tried to enforce our way of life on them. Why? Because we respect the freedom of others.That is why my first vision is that of freedom. I believe that India got its first vision of this in 1857, when we started the War of Independence. It is this freedom that we must protect and nurture and build on. If we are not free, no one will respect us. My second vision for India’s development. For fifty years we have been a developing nation. It is time we see ourselves as a developed nation. We are among the top 5 nations of the world in terms of GDP. We have a 10 percent growth rate in most areas. Our poverty levels are falling. Our achievements are being globally recognised today. Yet we lack the self-confidence to see ourselves as a developed nation, self-reliant and self-assured. Isn’t this incorrect? I have a third vision. India must stand up to the world. Because I believe that unless India stands up to the world, no one will respect us. Only strength respects strength. We must be strong not only as a military power but also as an economic power. Both must go hand-in-hand. My good fortune was to have worked with three great minds. Dr. Vikram Sarabhai of the Dept. of space, Professor Satish Dhawan, who succeeded him and Dr. Brahm Prakash, father of nuclear material. I was lucky to have worked with all three of them closely and consider this the great opportunity of my life. I see four milestones in my career"""

sentences = nltk.sent_tokenize(paragraph)
ps = PorterStemmer()
for i in range(len(sentences)):
    # Tokenize each sentence, drop English stopwords, stem the remaining
    # words, and write the stemmed sentence back into the list.
    words = nltk.word_tokenize(sentences[i])
    words = [ps.stem(word) for word in words if word not in set(stopwords.words('english'))]
    sentences[i] = ' '.join(words)
sentences
```
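Stemming, as used above, reduces inflected words to a crude common root by stripping suffixes. A toy illustration of the idea, kept deliberately simple — this is *not* the Porter algorithm, which applies a sequence of ordered rewrite rules with measure conditions:

```python
def toy_stem(word, suffixes=('ing', 'ed', 'ly', 'es', 's')):
    """Strip the first matching suffix, if any (greatly simplified)."""
    for suf in suffixes:
        # only strip when a reasonable stem remains
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

# words drawn from the paragraph above
print([toy_stem(w) for w in ['conquered', 'visions', 'respects', 'freedom']])
```

Even this crude version shows why stemming helps text processing: "vision" and "visions" collapse to the same token, so frequency counts and matches treat them as one word.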
# Classification on Iris dataset with sklearn and DJL In this notebook, you will try to use a pre-trained sklearn model to run on DJL for a general classification task. The model was trained with [Iris flower dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set). ## Background ### Iris Dataset The dataset contains a set of 150 records under five attributes - sepal length, sepal width, petal length, petal width and species. Iris setosa | Iris versicolor | Iris virginica :-------------------------:|:-------------------------:|:-------------------------: ![](https://upload.wikimedia.org/wikipedia/commons/5/56/Kosaciec_szczecinkowaty_Iris_setosa.jpg) | ![](https://upload.wikimedia.org/wikipedia/commons/4/41/Iris_versicolor_3.jpg) | ![](https://upload.wikimedia.org/wikipedia/commons/9/9f/Iris_virginica.jpg) The chart above shows three different kinds of the Iris flowers. We will use sepal length, sepal width, petal length, petal width as the feature and species as the label to train the model. ### Sklearn Model You can find more information [here](http://onnx.ai/sklearn-onnx/). You can use the sklearn built-in iris dataset to load the data. Then we defined a [RandomForestClassifer](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) to train the model. After that, we convert the model to onnx format for DJL to run inference. The following code is a sample classification setup using sklearn: ```python # Train a model. from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier iris = load_iris() X, y = iris.data, iris.target X_train, X_test, y_train, y_test = train_test_split(X, y) clr = RandomForestClassifier() clr.fit(X_train, y_train) ``` ## Preparation This tutorial requires the installation of Java Kernel. To install the Java Kernel, see the [README](https://github.com/awslabs/djl/blob/master/jupyter/README.md). 
These are dependencies we will use. To enhance the NDArray operation capability, we are importing ONNX Runtime and PyTorch Engine at the same time. Please find more information [here](https://github.com/awslabs/djl/blob/master/docs/onnxruntime/hybrid_engine.md#hybrid-engine-for-onnx-runtime). ``` // %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/ %maven ai.djl:api:0.8.0 %maven ai.djl.onnxruntime:onnxruntime-engine:0.8.0 %maven ai.djl.pytorch:pytorch-engine:0.8.0 %maven org.slf4j:slf4j-api:1.7.26 %maven org.slf4j:slf4j-simple:1.7.26 %maven com.microsoft.onnxruntime:onnxruntime:1.4.0 %maven ai.djl.pytorch:pytorch-native-auto:1.6.0 import ai.djl.inference.*; import ai.djl.modality.*; import ai.djl.ndarray.*; import ai.djl.ndarray.types.*; import ai.djl.repository.zoo.*; import ai.djl.translate.*; import java.util.*; ``` ## Step 1 create a Translator Inference in machine learning is the process of predicting the output for a given input based on a pre-defined model. DJL abstracts away the whole process for ease of use. It can load the model, perform inference on the input, and provide output. DJL also allows you to provide user-defined inputs. The workflow looks like the following: ![https://github.com/awslabs/djl/blob/master/examples/docs/img/workFlow.png?raw=true](https://github.com/awslabs/djl/blob/master/examples/docs/img/workFlow.png?raw=true) The `Translator` interface encompasses the two white blocks: Pre-processing and Post-processing. The pre-processing component converts the user-defined input objects into an NDList, so that the `Predictor` in DJL can understand the input and make its prediction. Similarly, the post-processing block receives an NDList as the output from the `Predictor`. The post-processing block allows you to convert the output from the `Predictor` to the desired output format. In our use case, we use a class namely `IrisFlower` as our input class type. 
We will use [`Classifications`](https://javadoc.io/doc/ai.djl/api/latest/ai/djl/modality/Classifications.html) as our output class type.

```
public static class IrisFlower {

    public float sepalLength;
    public float sepalWidth;
    public float petalLength;
    public float petalWidth;

    public IrisFlower(float sepalLength, float sepalWidth, float petalLength, float petalWidth) {
        this.sepalLength = sepalLength;
        this.sepalWidth = sepalWidth;
        this.petalLength = petalLength;
        this.petalWidth = petalWidth;
    }
}
```

Let's create a translator:

```
public static class MyTranslator implements Translator<IrisFlower, Classifications> {

    private final List<String> synset;

    public MyTranslator() {
        // species name
        synset = Arrays.asList("setosa", "versicolor", "virginica");
    }

    @Override
    public NDList processInput(TranslatorContext ctx, IrisFlower input) {
        float[] data = {input.sepalLength, input.sepalWidth, input.petalLength, input.petalWidth};
        NDArray array = ctx.getNDManager().create(data, new Shape(1, 4));
        return new NDList(array);
    }

    @Override
    public Classifications processOutput(TranslatorContext ctx, NDList list) {
        return new Classifications(synset, list.get(1));
    }

    @Override
    public Batchifier getBatchifier() {
        return null;
    }
}
```

## Step 2 Prepare your model

We will load a pretrained sklearn model into DJL. DJL defines a [`ModelZoo`](https://javadoc.io/doc/ai.djl/api/latest/ai/djl/repository/zoo/ModelZoo.html) concept that allows users to load models from a variety of locations, such as a remote URL, local files, or the DJL pretrained model zoo. We need to define a `Criteria` class to help the model zoo locate the model and attach the translator. In this example, we download a compressed ONNX model from S3.
```
String modelUrl = "https://mlrepo.djl.ai/model/tabular/random_forest/ai/djl/onnxruntime/iris_flowers/0.0.1/iris_flowers.zip";
Criteria<IrisFlower, Classifications> criteria = Criteria.builder()
        .setTypes(IrisFlower.class, Classifications.class)
        .optModelUrls(modelUrl)
        .optTranslator(new MyTranslator())
        .optEngine("OnnxRuntime") // use OnnxRuntime engine by default
        .build();
ZooModel<IrisFlower, Classifications> model = ModelZoo.loadModel(criteria);
```

## Step 3 Run inference

Users just need to create a `Predictor` from the model to run the inference.

```
Predictor<IrisFlower, Classifications> predictor = model.newPredictor();
IrisFlower info = new IrisFlower(1.0f, 2.0f, 3.0f, 4.0f);
predictor.predict(info);
```
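The `Translator` used above is really a small contract: pre-processing packs the four measurements into a 1×4 float array, and post-processing pairs the model's class probabilities with the synset names and ranks them. That contract can be sketched in plain Python (the probability vector below is made up for illustration; the real one comes from the ONNX model):

```python
SYNSET = ["setosa", "versicolor", "virginica"]

def process_input(sepal_length, sepal_width, petal_length, petal_width):
    # mirrors MyTranslator.processInput: a feature array of shape (1, 4)
    return [[sepal_length, sepal_width, petal_length, petal_width]]

def process_output(probabilities):
    # mirrors MyTranslator.processOutput: attach class names, return the best
    ranked = sorted(zip(SYNSET, probabilities), key=lambda p: p[1], reverse=True)
    return ranked[0]

features = process_input(1.0, 2.0, 3.0, 4.0)
print(features)
print(process_output([0.05, 0.15, 0.80]))
```

Keeping this pre/post-processing logic in one place is exactly what lets DJL's `Predictor` accept an `IrisFlower` and return `Classifications` without the caller touching NDArrays.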
<table class="ee-notebook-buttons" align="left"> <td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Algorithms/landsat_radiance.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td> <td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Algorithms/landsat_radiance.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Algorithms/landsat_radiance.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td> </table>

## Install Earth Engine API and geemap

Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.

The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.

```
# Installs geemap package
import subprocess

try:
    import geemap
except ImportError:
    print('Installing geemap ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

import ee
import geemap
```

## Create an interactive map

The default basemap is `Google Maps`.
[Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.

```
Map = geemap.Map(center=[40, -100], zoom=4)
Map
```

## Add Earth Engine Python script

```
# Add Earth Engine dataset
# Load a raw Landsat scene and display it.
raw = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318')
Map.centerObject(raw, 10)
Map.addLayer(raw, {'bands': ['B4', 'B3', 'B2'], 'min': 6000, 'max': 12000}, 'raw')

# Convert the raw data to radiance.
radiance = ee.Algorithms.Landsat.calibratedRadiance(raw)
Map.addLayer(radiance, {'bands': ['B4', 'B3', 'B2'], 'max': 90}, 'radiance')

# Convert the raw data to top-of-atmosphere reflectance.
toa = ee.Algorithms.Landsat.TOA(raw)
Map.addLayer(toa, {'bands': ['B4', 'B3', 'B2'], 'max': 0.2}, 'toa reflectance')
```

## Display Earth Engine data layers

```
Map.addLayerControl()  # This line is not needed for ipyleaflet-based Map.
Map
```
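Behind `ee.Algorithms.Landsat.calibratedRadiance` is a per-band linear rescaling of the raw digital numbers using the gain and bias published in each scene's metadata, L = gain · DN + bias, and TOA reflectance then normalizes radiance by incoming solar irradiance. A sketch of both formulas — the coefficient values used here are placeholders for illustration, not real LC08 calibration constants:

```python
import math

def dn_to_radiance(dn, gain, bias):
    # Landsat DN-to-radiance calibration: L = M_L * Q_cal + A_L
    return gain * dn + bias

def radiance_to_toa_reflectance(radiance, esun, d, sun_elev_deg):
    # TOA reflectance: rho = pi * L * d^2 / (ESUN * cos(theta_sz)),
    # where theta_sz is the solar zenith angle (90 deg - sun elevation)
    theta_sz = math.radians(90.0 - sun_elev_deg)
    return math.pi * radiance * d ** 2 / (esun * math.cos(theta_sz))

# placeholder gain/bias for illustration only
print(dn_to_radiance(8000, 0.01, -0.1))
```

This is why the display ranges differ so much across the three layers above: raw DNs sit in the thousands, radiance in the tens of W/m²/sr/µm, and TOA reflectance between 0 and roughly 1.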
# Import Libraries ``` from __future__ import print_function import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torchvision from torchvision import datasets, transforms %matplotlib inline import matplotlib.pyplot as plt ``` ## Data Transformations We first start with defining our data transformations. We need to think what our data is and how can we augment it to correct represent images which it might not see otherwise. ``` # Train Phase transformations train_transforms = transforms.Compose([ # transforms.Resize((28, 28)), # transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1), transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values. # Note the difference between (0.1307) and (0.1307,) ]) # Test Phase transformations test_transforms = transforms.Compose([ # transforms.Resize((28, 28)), # transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1), transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ]) ``` # Dataset and Creating Train/Test Split ``` train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms) test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms) ``` # Dataloader Arguments & Test/Train Dataloaders ``` SEED = 1 # CUDA? 
cuda = torch.cuda.is_available() print("CUDA Available?", cuda) # For reproducibility torch.manual_seed(SEED) if cuda: torch.cuda.manual_seed(SEED) # dataloader arguments - something you'll fetch these from cmdprmt dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64) # train dataloader train_loader = torch.utils.data.DataLoader(train, **dataloader_args) # test dataloader test_loader = torch.utils.data.DataLoader(test, **dataloader_args) ``` # Data Statistics It is important to know your data very well. Let's check some of the statistics around our data and how it actually looks like ``` # We'd need to convert it into Numpy! Remember above we have converted it into tensors already train_data = train.train_data train_data = train.transform(train_data.numpy()) print('[Train]') print(' - Numpy Shape:', train.train_data.cpu().numpy().shape) print(' - Tensor Shape:', train.train_data.size()) print(' - min:', torch.min(train_data)) print(' - max:', torch.max(train_data)) print(' - mean:', torch.mean(train_data)) print(' - std:', torch.std(train_data)) print(' - var:', torch.var(train_data)) dataiter = iter(train_loader) images, labels = dataiter.next() print(images.shape) print(labels.shape) # Let's visualize some of the images plt.imshow(images[0].numpy().squeeze(), cmap='gray_r') ``` ## MORE It is important that we view as many images as possible. 
This is required to get some idea on image augmentation later on ``` figure = plt.figure() num_of_images = 60 for index in range(1, num_of_images + 1): plt.subplot(6, 10, index) plt.axis('off') plt.imshow(images[index].numpy().squeeze(), cmap='gray_r') ``` # The model Let's start with the model we first saw ``` class Net(nn.Module): def __init__(self): super(Net, self).__init__() # Input Block self.convblock1 = nn.Sequential( nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(3, 3), padding=0, bias=False), nn.ReLU(), ) # output_size = 26 # CONVOLUTION BLOCK 1 self.convblock2 = nn.Sequential( nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False), nn.ReLU(), ) # output_size = 24 # TRANSITION BLOCK 1 self.convblock3 = nn.Sequential( nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(1, 1), padding=0, bias=False), nn.ReLU(), ) # output_size = 24 self.pool1 = nn.MaxPool2d(2, 2) # output_size = 12 # CONVOLUTION BLOCK 2 self.convblock4 = nn.Sequential( nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False), nn.ReLU(), ) # output_size = 10 self.convblock5 = nn.Sequential( nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False), nn.ReLU(), ) # output_size = 8 self.convblock6 = nn.Sequential( nn.Conv2d(in_channels=16, out_channels=10, kernel_size=(3, 3), padding=0, bias=False), nn.ReLU(), ) # output_size = 6 # OUTPUT BLOCK self.convblock7 = nn.Sequential( nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(3, 3), padding=1, bias=False), nn.ReLU(), ) # output_size = 6 self.gap = nn.Sequential( nn.AvgPool2d(kernel_size=6) ) self.convblock8 = nn.Sequential( nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(1, 1), padding=0, bias=False), # nn.BatchNorm2d(10), NEVER # nn.ReLU() NEVER! 
) # output_size = 1 def forward(self, x): x = self.convblock1(x) x = self.convblock2(x) x = self.convblock3(x) x = self.pool1(x) x = self.convblock4(x) x = self.convblock5(x) x = self.convblock6(x) x = self.convblock7(x) x = self.gap(x) x = self.convblock8(x) x = x.view(-1, 10) return F.log_softmax(x, dim=-1) ``` # Model Params Can't emphasize on how important viewing Model Summary is. Unfortunately, there is no in-built model visualizer, so we have to take external help ``` !pip install torchsummary from torchsummary import summary use_cuda = torch.cuda.is_available() device = torch.device("cuda" if use_cuda else "cpu") print(device) model = Net().to(device) summary(model, input_size=(1, 28, 28)) ``` # Training and Testing Looking at logs can be boring, so we'll introduce **tqdm** progressbar to get cooler logs. Let's write train and test functions ``` from tqdm import tqdm train_losses = [] test_losses = [] train_acc = [] test_acc = [] def train(model, device, train_loader, optimizer, epoch): global train_max model.train() pbar = tqdm(train_loader) correct = 0 processed = 0 for batch_idx, (data, target) in enumerate(pbar): # get samples data, target = data.to(device), target.to(device) # Init optimizer.zero_grad() # In PyTorch, we need to set the gradients to zero before starting to do backpropragation because PyTorch accumulates the gradients on subsequent backward passes. # Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly. 
# Predict y_pred = model(data) # Calculate loss loss = F.nll_loss(y_pred, target) train_losses.append(loss) # Backpropagation loss.backward() optimizer.step() # Update pbar-tqdm pred = y_pred.argmax(dim=1, keepdim=True) # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() processed += len(data) pbar.set_description(desc= f'Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}') train_acc.append(100*correct/processed) if (train_max < 100*correct/processed): train_max = 100*correct/processed def test(model, device, test_loader): global test_max model.eval() test_loss = 0 correct = 0 with torch.no_grad(): for data, target in test_loader: data, target = data.to(device), target.to(device) output = model(data) test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() test_loss /= len(test_loader.dataset) test_losses.append(test_loss) print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset))) if (test_max < 100. * correct / len(test_loader.dataset)): test_max = 100. * correct / len(test_loader.dataset) test_acc.append(100. 
* correct / len(test_loader.dataset))
```

# Let's Train and test our model

```
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
EPOCHS = 15
train_max = 0
test_max = 0
for epoch in range(EPOCHS):
    print("EPOCH:", epoch)
    train(model, device, train_loader, optimizer, epoch)
    test(model, device, test_loader)

print(f"\nMaximum training accuracy: {train_max}\n")
print(f"\nMaximum test accuracy: {test_max}\n")

fig, axs = plt.subplots(2, 2, figsize=(15, 10))
axs[0, 0].plot(train_losses)
axs[0, 0].set_title("Training Loss")
axs[1, 0].plot(train_acc)
axs[1, 0].set_title("Training Accuracy")
axs[0, 1].plot(test_losses)
axs[0, 1].set_title("Test Loss")
axs[1, 1].plot(test_acc)
axs[1, 1].set_title("Test Accuracy")

fig, ((axs1, axs2), (axs3, axs4)) = plt.subplots(2, 2, figsize=(15, 10))
# Train plot
axs1.plot(train_losses)
axs1.set_title("Training Loss")
axs3.plot(train_acc)
axs3.set_title("Training Accuracy")
# axs1.set_xlim([0, 5])
axs1.set_ylim([0, 5])
axs3.set_ylim([0, 100])
# Test plot
axs2.plot(test_losses)
axs2.set_title("Test Loss")
axs4.plot(test_acc)
axs4.set_title("Test Accuracy")
axs2.set_ylim([0, 5])
axs4.set_ylim([0, 100])
```
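The `output_size` comments in the model definition above follow the standard convolution arithmetic, out = (in − kernel + 2·padding) / stride + 1, with `MaxPool2d(2, 2)` halving the spatial size. A quick sketch that walks the chain of sizes for this network (block names taken from the model above):

```python
def conv_out(size, kernel, padding=0, stride=1):
    """Spatial output size of a convolution: (in - k + 2p) // s + 1."""
    return (size - kernel + 2 * padding) // stride + 1

size = 28  # MNIST input
for name, (k, p) in [('convblock1', (3, 0)), ('convblock2', (3, 0)),
                     ('convblock3', (1, 0))]:
    size = conv_out(size, k, p)
    print(name, size)
size //= 2  # MaxPool2d(2, 2)
print('pool1', size)
for name, (k, p) in [('convblock4', (3, 0)), ('convblock5', (3, 0)),
                     ('convblock6', (3, 0)), ('convblock7', (3, 1))]:
    size = conv_out(size, k, p)
    print(name, size)
```

The final size of 6 is what makes `nn.AvgPool2d(kernel_size=6)` collapse each channel to a single value before the 1×1 classifier convolution.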
``` %cd /Users/Kunal/Projects/TCH_CardiacSignals_F20/ from numpy.random import seed seed(1) import numpy as np import os import matplotlib.pyplot as plt import tensorflow tensorflow.random.set_seed(2) from tensorflow import keras from tensorflow.keras.callbacks import EarlyStopping from tensorflow.keras.regularizers import l1, l2 from tensorflow.keras.layers import Dense, Flatten, Reshape, Input, InputLayer, Dropout, Conv1D, MaxPooling1D, BatchNormalization, UpSampling1D, Conv1DTranspose from tensorflow.keras.models import Sequential, Model from src.preprocess.dim_reduce.patient_split import * from src.preprocess.heartbeat_split import heartbeat_split from sklearn.model_selection import train_test_split data = np.load("Working_Data/Training_Subset/Normalized/ten_hbs/Normalized_Fixed_Dim_HBs_Idx" + str(1) + ".npy") data.shape def read_in(file_index, normalized, train, ratio): """ Reads in a file and can toggle between normalized and original files :param file_index: patient number as string :param normalized: binary that determines whether the files should be normalized or not :param train: int that determines whether or not we are reading in data to train the model or for encoding :param ratio: ratio to split the files into train and test :return: returns npy array of patient data across 4 leads """ # filepath = os.path.join("Working_Data", "Normalized_Fixed_Dim_HBs_Idx" + file_index + ".npy") # filepath = os.path.join("Working_Data", "1000d", "Normalized_Fixed_Dim_HBs_Idx35.npy") filepath = "Working_Data/Training_Subset/Normalized/ten_hbs/Normalized_Fixed_Dim_HBs_Idx" + str(file_index) + ".npy" if normalized == 1: if train == 1: normal_train, normal_test, abnormal = patient_split_train(filepath, ratio) # noise_factor = 0.5 # noise_train = normal_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=normal_train.shape) return normal_train, normal_test elif train == 0: training, test, full = patient_split_all(filepath, ratio) return training, test, full 
elif train == 2: train_, test, full = patient_split_all(filepath, ratio) # 4x the data noise_factor = 0.5 noise_train = train_ + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=train_.shape) noise_train2 = train_ + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=train_.shape) noise_train3 = train_ + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=train_.shape) train_ = np.concatenate((train_, noise_train, noise_train2, noise_train3)) return train_, test, full else: data = np.load(os.path.join("Working_Data", "Fixed_Dim_HBs_Idx" + file_index + ".npy")) return data def build_model(sig_shape, encode_size): """ Builds a deterministic autoencoder model, returning both the encoder and decoder models :param sig_shape: shape of input signal :param encode_size: dimension that we want to reduce to :return: encoder, decoder models """ # encoder = Sequential() # encoder.add(InputLayer((1000,4))) # # idk if causal is really making that much of an impact but it seems useful for time series data? 
# encoder.add(Conv1D(10, 11, activation="linear", padding="causal")) # encoder.add(Conv1D(10, 5, activation="relu", padding="causal")) # # encoder.add(Conv1D(10, 3, activation="relu", padding="same")) # encoder.add(Flatten()) # encoder.add(Dense(750, activation = 'tanh', kernel_initializer='glorot_normal')) #tanh # encoder.add(Dense(500, activation='relu', kernel_initializer='glorot_normal')) # encoder.add(Dense(400, activation = 'relu', kernel_initializer='glorot_normal')) # encoder.add(Dense(300, activation='relu', kernel_initializer='glorot_normal')) # encoder.add(Dense(200, activation = 'relu', kernel_initializer='glorot_normal')) #relu # encoder.add(Dense(encode_size)) encoder = Sequential() encoder.add(InputLayer((1000,4))) encoder.add(Conv1D(3, 11, activation="tanh", padding="same")) encoder.add(Conv1D(5, 7, activation="relu", padding="same")) encoder.add(MaxPooling1D(2)) encoder.add(Conv1D(5, 5, activation="tanh", padding="same")) encoder.add(Conv1D(7, 3, activation="tanh", padding="same")) encoder.add(MaxPooling1D(2)) encoder.add(Flatten()) encoder.add(Dense(750, activation = 'tanh', kernel_initializer='glorot_normal')) # encoder.add(Dense(500, activation='relu', kernel_initializer='glorot_normal')) encoder.add(Dense(400, activation = 'tanh', kernel_initializer='glorot_normal')) # encoder.add(Dense(300, activation='relu', kernel_initializer='glorot_normal')) encoder.add(Dense(200, activation = 'tanh', kernel_initializer='glorot_normal')) encoder.add(Dense(encode_size)) # encoder.summary() #################################################################################################################### # Build the decoder # decoder = Sequential() # decoder.add(InputLayer((latent_dim,))) # decoder.add(Dense(200, activation='tanh', kernel_initializer='glorot_normal')) # decoder.add(Dense(300, activation='relu', kernel_initializer='glorot_normal')) # decoder.add(Dense(400, activation='relu', kernel_initializer='glorot_normal')) # decoder.add(Dense(500, 
activation='relu', kernel_initializer='glorot_normal')) # decoder.add(Dense(750, activation='relu', kernel_initializer='glorot_normal')) # decoder.add(Dense(10000, activation='relu', kernel_initializer='glorot_normal')) # decoder.add(Reshape((1000, 10))) # decoder.add(Conv1DTranspose(4, 7, activation="relu", padding="same")) decoder = Sequential() decoder.add(InputLayer((encode_size,))) decoder.add(Dense(200, activation='tanh', kernel_initializer='glorot_normal')) # decoder.add(Dense(300, activation='relu', kernel_initializer='glorot_normal')) decoder.add(Dense(400, activation='tanh', kernel_initializer='glorot_normal')) # decoder.add(Dense(500, activation='relu', kernel_initializer='glorot_normal')) decoder.add(Dense(750, activation='tanh', kernel_initializer='glorot_normal')) decoder.add(Dense(10000, activation='tanh', kernel_initializer='glorot_normal')) decoder.add(Reshape((1000, 10))) # decoder.add(Conv1DTranspose(8, 3, activation="relu", padding="same")) decoder.add(Conv1DTranspose(8, 11, activation="relu", padding="same")) decoder.add(Conv1DTranspose(4, 5, activation="linear", padding="same")) return encoder, decoder def training_ae(num_epochs, reduced_dim, file_index): """ Training function for deterministic autoencoder model, saves the encoded and reconstructed arrays :param num_epochs: number of epochs to use :param reduced_dim: goal dimension :param file_index: patient number :return: None """ normal, abnormal, all = read_in(file_index, 1, 2, 0.3) normal_train, normal_valid = train_test_split(normal, train_size=0.85, random_state=1) # normal_train = normal[:round(len(normal)*.85),:] # normal_valid = normal[round(len(normal)*.85):,:] signal_shape = normal.shape[1:] batch_size = round(len(normal) * 0.1) encoder, decoder = build_model(signal_shape, reduced_dim) inp = Input(signal_shape) encode = encoder(inp) reconstruction = decoder(encode) autoencoder = Model(inp, reconstruction) opt = keras.optimizers.Adam(learning_rate=0.0001) #0.0008 
autoencoder.compile(optimizer=opt, loss='mse') early_stopping = EarlyStopping(patience=10, min_delta=0.001, mode='min') autoencoder = autoencoder.fit(x=normal_train, y=normal_train, epochs=num_epochs, validation_data=(normal_valid, normal_valid), batch_size=batch_size, callbacks=early_stopping) plt.plot(autoencoder.history['loss']) plt.plot(autoencoder.history['val_loss']) plt.title('model loss patient' + str(file_index)) plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'validation'], loc='upper left') plt.show() # using AE to encode other data encoded = encoder.predict(all) reconstruction = decoder.predict(encoded) # save reconstruction, encoded, and input if needed # reconstruction_save = os.path.join("Working_Data", "reconstructed_ae_" + str(reduced_dim) + "d_Idx" + str(file_index) + ".npy") # encoded_save = os.path.join("Working_Data", "reduced_ae_" + str(reduced_dim) + "d_Idx" + str(file_index) + ".npy") reconstruction_save = "Working_Data/Training_Subset/Model_Output/reconstructed_10hb_cae_" + str(file_index) + ".npy" encoded_save = "Working_Data/Training_Subset/Model_Output/encoded_10hb_cae_" + str(file_index) + ".npy" np.save(reconstruction_save, reconstruction) np.save(encoded_save,encoded) # if training and need to save test split for MSE calculation # input_save = os.path.join("Working_Data","1000d", "original_data_test_ae" + str(100) + "d_Idx" + str(35) + ".npy") # np.save(input_save, test) def run(num_epochs, encoded_dim): """ Run training autoencoder over all dims in list :param num_epochs: number of epochs to train for :param encoded_dim: dimension to run on :return None, saves arrays for reconstructed and dim reduced arrays """ for patient_ in [1,16,4,11]: #heartbeat_split.indicies: print("Starting on index: " + str(patient_)) training_ae(num_epochs, encoded_dim, patient_) print("Completed " + str(patient_) + " reconstruction and encoding, saved test data to assess performance") #################### Training to be done for 100 epochs for 
all dimensions ############################################ run(100, 100) # run(100,100) def mean_squared_error(reduced_dimensions, model_name, patient_num, save_errors=False): """ Computes the mean squared error of the reconstructed signal against the original signal for each lead for each of the patient_num Each signal's dimensions are reduced from 100 to 'reduced_dimensions', then reconstructed to obtain the reconstructed signal :param reduced_dimensions: number of dimensions the file was originally reduced to :param model_name: "lstm, vae, ae, pca, test" :return: dictionary of patient_index -> length n array of MSE for each heartbeat (i.e. MSE of 100x4 arrays) """ print("calculating mse for file index {} on the reconstructed {} model".format(patient_num, model_name)) original_signals = np.load( os.path.join("Working_Data", "Training_Subset", "Normalized", "ten_hbs", "Normalized_Fixed_Dim_HBs_Idx{}.npy".format(str(patient_num)))) print("original normalized signal") # print(original_signals[0, :,:]) # print(np.mean(original_signals[0,:,:])) # print(np.var(original_signals[0, :, :])) # print(np.linalg.norm(original_signals[0,:,:])) # print([np.linalg.norm(i) for i in original_signals[0,:,:].flatten()]) reconstructed_signals = np.load(os.path.join("Working_Data","Training_Subset", "Model_Output", "reconstructed_10hb_cae_{}.npy").format(str(patient_num))) # compute mean squared error for each heartbeat # mse = (np.square(original_signals - reconstructed_signals) / (np.linalg.norm(original_signals))).mean(axis=1).mean(axis=1) # mse = (np.square(original_signals - reconstructed_signals) / (np.square(original_signals) + np.square(reconstructed_signals))).mean(axis=1).mean(axis=1) mse = np.zeros(np.shape(original_signals)[0]) for i in range(np.shape(original_signals)[0]): mse[i] = (np.linalg.norm(original_signals[i,:,:] - reconstructed_signals[i,:,:]) ** 2) / (np.linalg.norm(original_signals[i,:,:]) ** 2) # orig_flat = original_signals[i,:,:].flatten() # recon_flat = 
reconstructed_signals[i,:,:].flatten() # mse[i] = sklearn_mse(orig_flat, recon_flat) # my_mse = mse[i] # plt.plot([i for i in range(np.shape(mse)[0])], mse) # plt.show() if save_errors: np.save( os.path.join("Working_Data", "{}_errors_{}d_Idx{}.npy".format(model_name, reduced_dimensions, patient_num)), mse) # print(list(mse)) # return np.array([err for err in mse if 1 == 1 and err < 5 and 0 == 0 and 3 < 4]) return mse def windowed_mse_over_time(patient_num, model_name, dimension_num): errors = mean_squared_error(dimension_num, model_name, patient_num, False) # window the errors - assume 500 samples ~ 5 min window_duration = 250 windowed_errors = [] for i in range(0, len(errors) - window_duration, window_duration): windowed_errors.append(np.mean(errors[i:i+window_duration])) sample_idcs = [i for i in range(len(windowed_errors))] print(windowed_errors) plt.plot(sample_idcs, windowed_errors) plt.title("5-min Windowed MSE" + str(patient_num)) plt.xlabel("Window Index") plt.ylabel("Relative MSE") plt.show() # np.save(f"Working_Data/windowed_mse_{dimension_num}d_Idx{patient_num}.npy", windowed_errors) windowed_mse_over_time(1,"abc",10) ```
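The per-heartbeat relative MSE and the non-overlapping window average used in `mean_squared_error` and `windowed_mse_over_time` boil down to a few NumPy lines. Here is a self-contained sketch on random data (the shapes are illustrative, not the real 1000×4 heartbeats):

```python
import numpy as np

def relative_mse(orig, recon):
    """Per-sample squared reconstruction error, normalized by signal energy."""
    num = np.linalg.norm(orig - recon, axis=(1, 2)) ** 2
    den = np.linalg.norm(orig, axis=(1, 2)) ** 2
    return num / den

def windowed_mean(errors, window):
    """Non-overlapping window means, mirroring windowed_mse_over_time."""
    return [np.mean(errors[i:i + window])
            for i in range(0, len(errors) - window, window)]

rng = np.random.default_rng(0)
orig = rng.normal(size=(10, 8, 4))            # 10 heartbeats, 8 samples, 4 leads
recon = orig + 0.1 * rng.normal(size=orig.shape)
errs = relative_mse(orig, recon)
print(windowed_mean(errs, 4))                 # two windows of 4 heartbeats each
```

Note that, as in the original loop, trailing heartbeats that do not fill a whole window are dropped.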
# basic operations on an image
```
import cv2
import numpy as np

impath = r"D:/Study/example_ml/computer_vision_example/cv_exercise/opencv-master/samples/data/messi5.jpg"
img = cv2.imread(impath)

print(img.shape)
print(img.size)
print(img.dtype)

b,g,r = cv2.split(img)
img = cv2.merge((b,g,r))

cv2.imshow("image",img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
# copy and paste
```
import cv2
import numpy as np

impath = r"D:/Study/example_ml/computer_vision_example/cv_exercise/opencv-master/samples/data/messi5.jpg"
img = cv2.imread(impath)

'''b,g,r = cv2.split(img)
img = cv2.merge((b,g,r))'''

ball = img[280:340,330:390]
img[273:333,100:160] = ball

cv2.imshow("image",img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
# merge two images
```
import cv2
import numpy as np

impath = r"D:/Study/example_ml/computer_vision_example/cv_exercise/opencv-master/samples/data/messi5.jpg"
impath1 = r"D:/Study/example_ml/computer_vision_example/cv_exercise/opencv-master/samples/data/opencv-logo.png"
img = cv2.imread(impath)
img1 = cv2.imread(impath1)
img = cv2.resize(img, (512,512))
img1 = cv2.resize(img1, (512,512))

#new_img = cv2.add(img,img1)
new_img = cv2.addWeighted(img,0.1,img1,0.8,1)

cv2.imshow("new_image",new_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
# bitwise operations
```
import cv2
import numpy as np

img1 = np.zeros([250,500,3],np.uint8)
img1 = cv2.rectangle(img1,(200,0),(300,100),(255,255,255),-1)
img2 = np.full((250, 500, 3), 255, dtype=np.uint8)
img2 = cv2.rectangle(img2, (0, 0), (250, 250), (0, 0, 0), -1)

#bit_and = cv2.bitwise_and(img2,img1)
#bit_or = cv2.bitwise_or(img2,img1)
#bit_xor = cv2.bitwise_xor(img2,img1)
bit_not = cv2.bitwise_not(img2)

#cv2.imshow("bit_and",bit_and)
#cv2.imshow("bit_or",bit_or)
#cv2.imshow("bit_xor",bit_xor)
cv2.imshow("bit_not",bit_not)
cv2.imshow("img1",img1)
cv2.imshow("img2",img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
# simple thresholding
#### THRESH_BINARY
```
import cv2
import numpy as np

img = cv2.imread('gradient.jpg',0)
_,th1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY) # compare every pixel against the threshold 127

cv2.imshow("img",img)
cv2.imshow("th1",th1)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
#### THRESH_BINARY_INV
```
import cv2
import numpy as np

img = cv2.imread('gradient.jpg',0)
_,th1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY)
_,th2 = cv2.threshold(img,127,255,cv2.THRESH_BINARY_INV) # compare every pixel against the threshold 127

cv2.imshow("img",img)
cv2.imshow("th1",th1)
cv2.imshow("th2",th2)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
#### THRESH_TRUNC
```
import cv2
import numpy as np

img = cv2.imread('gradient.jpg',0)
_,th1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY)
_,th2 = cv2.threshold(img,255,255,cv2.THRESH_TRUNC) # pixels above the threshold are truncated to it

cv2.imshow("img",img)
cv2.imshow("th1",th1)
cv2.imshow("th2",th2)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
#### THRESH_TOZERO
```
import cv2
import numpy as np

img = cv2.imread('gradient.jpg',0)
_,th1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY)
_,th2 = cv2.threshold(img,127,255,cv2.THRESH_TOZERO) # pixels below the threshold 127 are set to zero
_,th3 = cv2.threshold(img,127,255,cv2.THRESH_TOZERO_INV) # pixels above the threshold 127 are set to zero

cv2.imshow("img",img)
cv2.imshow("th1",th1)
cv2.imshow("th2",th2)
cv2.imshow("th3",th3)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
# Adaptive Thresholding
##### it calculates the threshold for smaller regions of the image, so we get different threshold values for different regions of the same image
```
import cv2
import numpy as np

img = cv2.imread('sudoku1.jpg')
img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
_,th1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY)
th2 = cv2.adaptiveThreshold(img,255,cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY,11,2)
th3 = cv2.adaptiveThreshold(img,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,11,2)

cv2.imshow("img",img)
cv2.imshow("THRESH_BINARY",th1)
cv2.imshow("ADAPTIVE_THRESH_MEAN_C",th2)
cv2.imshow("ADAPTIVE_THRESH_GAUSSIAN_C",th3)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
# Morphological Transformations
#### Morphological transformations are simple operations based on the image shape. They are normally performed on binary images.
#### A kernel tells you how to change the value of any given pixel by combining it with different amounts of its neighbouring pixels.
```
import cv2
%matplotlib notebook
%matplotlib inline
from matplotlib import pyplot as plt

img = cv2.imread("hsv_ball.jpg",cv2.IMREAD_GRAYSCALE)
_,mask = cv2.threshold(img, 220,255,cv2.THRESH_BINARY_INV)

titles = ['images',"mask"]
images = [img,mask]
for i in range(2):
    plt.subplot(1,2,i+1)
    plt.imshow(images[i],"gray")
    plt.title(titles[i])
plt.show()
```
### Morphological Transformations using erosion
```
import cv2
import numpy as np
%matplotlib notebook
%matplotlib inline
from matplotlib import pyplot as plt

img = cv2.imread("hsv_ball.jpg",cv2.IMREAD_GRAYSCALE)
_,mask = cv2.threshold(img, 220,255,cv2.THRESH_BINARY_INV)

kernal = np.ones((2,2),np.uint8)
dilation = cv2.dilate(mask,kernal,iterations = 3)
erosion = cv2.erode(mask,kernal,iterations=1)

titles = ['images',"mask","dilation","erosion"]
images = [img,mask,dilation,erosion]
for i in range(len(titles)):
    plt.subplot(2,2,i+1)
    plt.imshow(images[i],"gray")
    plt.title(titles[i])
plt.show()
```
#### Morphological Transformations using the opening morphological operation
##### morphologyEx with MORPH_OPEN applies erosion first, then dilation, on the image
```
import cv2
import numpy as np
%matplotlib notebook
%matplotlib inline
from matplotlib import pyplot as plt

img = cv2.imread("hsv_ball.jpg",cv2.IMREAD_GRAYSCALE)
_,mask = cv2.threshold(img, 220,255,cv2.THRESH_BINARY_INV)

kernal = np.ones((5,5),np.uint8)
dilation = cv2.dilate(mask,kernal,iterations = 3)
erosion = cv2.erode(mask,kernal,iterations=1)
opening = cv2.morphologyEx(mask,cv2.MORPH_OPEN,kernal)

titles = ['images',"mask","dilation","erosion","opening"]
images = [img,mask,dilation,erosion,opening]
for i in range(len(titles)):
    plt.subplot(2,3,i+1)
    plt.imshow(images[i],"gray")
    plt.title(titles[i])
plt.show()
```
#### Morphological Transformations using the closing morphological operation
##### morphologyEx with MORPH_CLOSE applies dilation first, then erosion, on the image
```
import cv2
import numpy as np
%matplotlib notebook
%matplotlib inline
from matplotlib import pyplot as plt

img = cv2.imread("hsv_ball.jpg",cv2.IMREAD_GRAYSCALE)
_,mask = cv2.threshold(img, 220,255,cv2.THRESH_BINARY_INV)

kernal = np.ones((5,5),np.uint8)
dilation = cv2.dilate(mask,kernal,iterations = 3)
erosion = cv2.erode(mask,kernal,iterations=1)
opening = cv2.morphologyEx(mask,cv2.MORPH_OPEN,kernal)
closing = cv2.morphologyEx(mask,cv2.MORPH_CLOSE,kernal)

titles = ['images',"mask","dilation","erosion","opening","closing"]
images = [img,mask,dilation,erosion,opening,closing]
for i in range(len(titles)):
    plt.subplot(2,3,i+1)
    plt.imshow(images[i],"gray")
    plt.title(titles[i])
    plt.xticks([])
    plt.yticks([])
plt.show()
```
#### Morphological Transformations other than the opening and closing morphological operations
#### MORPH_GRADIENT gives the difference between the dilation and the erosion
#### top_hat gives the difference between the input image and its opening
```
import cv2
import numpy as np
%matplotlib notebook
%matplotlib inline
from matplotlib import pyplot as plt

img = cv2.imread("hsv_ball.jpg",cv2.IMREAD_GRAYSCALE)
_,mask = cv2.threshold(img, 220,255,cv2.THRESH_BINARY_INV)

kernal = np.ones((5,5),np.uint8)
dilation = cv2.dilate(mask,kernal,iterations = 3)
erosion = cv2.erode(mask,kernal,iterations=1)
opening = cv2.morphologyEx(mask,cv2.MORPH_OPEN,kernal)
closing = cv2.morphologyEx(mask,cv2.MORPH_CLOSE,kernal)
morphlogical_gradient = cv2.morphologyEx(mask,cv2.MORPH_GRADIENT,kernal)
top_hat = cv2.morphologyEx(mask,cv2.MORPH_TOPHAT,kernal)

titles = ['images',"mask","dilation","erosion","opening",
          "closing","morphlogical_gradient","top_hat"]
images = [img,mask,dilation,erosion,opening,
          closing,morphlogical_gradient,top_hat]
for i in range(len(titles)):
    plt.subplot(2,4,i+1)
    plt.imshow(images[i],"gray")
    plt.title(titles[i])
    plt.xticks([])
    plt.yticks([])
plt.show()

import cv2
import numpy as np
%matplotlib notebook
%matplotlib inline
from matplotlib import pyplot as plt

img = cv2.imread("HappyFish.jpg",cv2.IMREAD_GRAYSCALE)
_,mask = cv2.threshold(img, 220,255,cv2.THRESH_BINARY_INV)

kernal = np.ones((5,5),np.uint8)
dilation = cv2.dilate(mask,kernal,iterations = 3)
erosion = cv2.erode(mask,kernal,iterations=1)
opening = cv2.morphologyEx(mask,cv2.MORPH_OPEN,kernal)
closing = cv2.morphologyEx(mask,cv2.MORPH_CLOSE,kernal)
MORPH_GRADIENT = cv2.morphologyEx(mask,cv2.MORPH_GRADIENT,kernal)
top_hat = cv2.morphologyEx(mask,cv2.MORPH_TOPHAT,kernal)

titles = ['images',"mask","dilation","erosion","opening",
          "closing","MORPH_GRADIENT","top_hat"]
images = [img,mask,dilation,erosion,opening,
          closing,MORPH_GRADIENT,top_hat]
for i in range(len(titles)):
    plt.subplot(2,4,i+1)
    plt.imshow(images[i],"gray")
    plt.title(titles[i])
    plt.xticks([])
    plt.yticks([])
plt.show()
```
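The MORPH_GRADIENT note above (dilation minus erosion) can be sanity-checked without OpenCV. Below is a toy 3×3-kernel binary dilation/erosion in plain NumPy — a sketch of the idea, not cv2's actual implementation — applied to a small white square; the gradient that falls out is the square's outline:

```python
import numpy as np

def dilate(img):
    """Binary dilation with a 3x3 kernel: max over each pixel's neighborhood."""
    p = np.pad(img, 1)
    return np.max([p[i:i+img.shape[0], j:j+img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def erode(img):
    """Binary erosion with a 3x3 kernel: min over each pixel's neighborhood."""
    p = np.pad(img, 1, constant_values=1)   # treat the border as foreground
    return np.min([p[i:i+img.shape[0], j:j+img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 1                       # a 3x3 white square
gradient = dilate(img) - erode(img)     # outline of the square
print(gradient)
```

The 3×3 square dilates to a 5×5 square and erodes to its single center pixel, so the gradient is a ring of 24 pixels — the shape's edge, which is exactly what MORPH_GRADIENT highlights.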
Create a list of valid Hindi literals ``` a = list(set(list("ऀँंःऄअआइईउऊऋऌऍऎएऐऑऒओऔकखगघङचछजझञटठडढणतथदधनऩपफबभमयरऱलळऴवशषसहऺऻ़ऽािीुूृॄॅॆेैॉॊोौ्ॎॏॐ॒॑॓॔ॕॖॗक़ख़ग़ज़ड़ढ़फ़य़ॠॡॢॣ।॥॰ॱॲॳॴॵॶॷॸॹॺॻॼॽॾॿ-"))) len(genderListCleared),len(set(genderListCleared)) genderListCleared = list(set(genderListCleared)) mCount = 0 fCount = 0 nCount = 0 for item in genderListCleared: if item[1] == 'm': mCount+=1 elif item[1] == 'f': fCount+=1 elif item[1] == 'none': nCount+=1 mCount,fCount,nCount,len(genderListCleared)-mCount-fCount-nCount with open('genderListCleared', 'wb') as fp: pickle.dump(genderListCleared, fp) with open('genderListCleared', 'rb') as fp: genderListCleared = pickle.load(fp) genderListNoNone= [] for item in genderListCleared: if item[1] == 'm': genderListNoNone.append(item) elif item[1] == 'f': genderListNoNone.append(item) elif item[1] == 'any': genderListNoNone.append(item) with open('genderListNoNone', 'wb') as fp: pickle.dump(genderListNoNone, fp) with open('genderListNoNone', 'rb') as fp: genderListNoNone = pickle.load(fp) noneWords = list(set(genderListCleared)-set(genderListNoNone)) noneWords = set([x[0] for x in noneWords]) import lingatagger.genderlist as gndrlist import lingatagger.tokenizer as tok from lingatagger.tagger import * genders2 = gndrlist.drawlist() genderList2 = [] for i in genders2: x = i.split("\t") if type(numericTagger(x[0])[0]) != tuple: count = 0 for ch in list(x[0]): if ch not in a: count+=1 if count == 0: if len(x)>=3: genderList2.append((x[0],'any')) else: genderList2.append((x[0],x[1])) genderList2.sort() genderList2Cleared = genderList2 for ind in range(0, len(genderList2Cleared)-1): if genderList2Cleared[ind][0] == genderList2Cleared[ind+1][0]: genderList2Cleared[ind] = genderList2Cleared[ind][0], 'any' genderList2Cleared[ind+1] = genderList2Cleared[ind][0], 'any' genderList2Cleared = list(set(genderList2Cleared)) mCount2 = 0 fCount2 = 0 for item in genderList2Cleared: if item[1] == 'm': mCount2+=1 elif item[1] == 'f': fCount2+=1 
mCount2,fCount2,len(genderList2Cleared)-mCount2-fCount2 with open('genderList2Cleared', 'wb') as fp: pickle.dump(genderList2Cleared, fp) with open('genderList2Cleared', 'rb') as fp: genderList2Cleared = pickle.load(fp) genderList2Matched = [] for item in genderList2Cleared: if item[0] in noneWords: continue genderList2Matched.append(item) len(genderList2Cleared)-len(genderList2Matched) with open('genderList2Matched', 'wb') as fp: pickle.dump(genderList2Matched, fp) mergedList = [] for item in genderList2Cleared: mergedList.append((item[0], item[1])) for item in genderListNoNone: mergedList.append((item[0], item[1])) mergedList.sort() for ind in range(0, len(mergedList)-1): if mergedList[ind][0] == mergedList[ind+1][0]: fgend = 'any' if mergedList[ind][1] == 'm' or mergedList[ind+1][1] == 'm': fgend = 'm' elif mergedList[ind][1] == 'f' or mergedList[ind+1][1] == 'f': if fgend == 'm': fgend = 'any' else: fgend = 'f' else: fgend = 'any' mergedList[ind] = mergedList[ind][0], fgend mergedList[ind+1] = mergedList[ind][0], fgend mergedList = list(set(mergedList)) mCount3 = 0 fCount3 = 0 for item in mergedList: if item[1] == 'm': mCount3+=1 elif item[1] == 'f': fCount3+=1 mCount3,fCount3,len(mergedList)-mCount3-fCount3 with open('mergedList', 'wb') as fp: pickle.dump(mergedList, fp) with open('mergedList', 'rb') as fp: mergedList = pickle.load(fp) np.zeros(18, dtype="int") from keras.models import Sequential from keras.layers import Dense, Dropout from keras.layers import Embedding from keras.layers import Conv1D, GlobalAveragePooling1D, MaxPooling1D from keras.layers import Dense, Conv2D, Flatten from sklearn.feature_extraction.text import CountVectorizer import numpy as np import lingatagger.genderlist as gndrlist import lingatagger.tokenizer as tok from lingatagger.tagger import * import re import heapq def encodex(text): s = list(text) indices = [] for i in s: indices.append(a.index(i)) encoded = np.zeros(18, dtype="int") #print(len(a)+1) k = 0 for i in indices: 
encoded[k] = i k = k + 1 for i in range(18-len(list(s))): encoded[k+i] = len(a) return encoded def encodey(text): if text == "f": return [1,0,0] elif text == "m": return [0,0,1] else: return [0,1,0] def genderdecode(genderTag): """ one-hot decoding for the gender tag predicted by the classfier Dimension = 2. """ genderTag = list(genderTag[0]) index = genderTag.index(heapq.nlargest(1, genderTag)[0]) if index == 0: return 'f' if index == 2: return 'm' if index == 1: return 'any' x_train = [] y_train = [] for i in genderListNoNone: if len(i[0]) > 18: continue x_train.append(encodex(i[0])) y_train.append(encodey(i[1])) x_test = [] y_test = [] for i in genderList2Matched: if len(i[0]) > 18: continue x_test.append(encodex(i[0])) y_test.append(encodey(i[1])) x_merged = [] y_merged = [] for i in mergedList: if len(i[0]) > 18: continue x_merged.append(encodex(i[0])) y_merged.append(encodey(i[1])) X_train = np.array(x_train) Y_train = np.array(y_train) X_test = np.array(x_test) Y_test = np.array(y_test) X_merged = np.array(x_merged) Y_merged = np.array(y_merged) with open('X_train', 'wb') as fp: pickle.dump(X_train, fp) with open('Y_train', 'wb') as fp: pickle.dump(Y_train, fp) with open('X_test', 'wb') as fp: pickle.dump(X_test, fp) with open('Y_test', 'wb') as fp: pickle.dump(Y_test, fp) from keras.models import Sequential from keras.layers import Dense, Dropout from keras.layers import Embedding from keras.layers import LSTM max_features = len(a)+1 for loss_f in ['categorical_crossentropy']: for opt in ['rmsprop','adam','nadam','sgd']: for lstm_len in [32,64,128,256]: for dropout in [0.4,0.45,0.5,0.55,0.6]: model = Sequential() model.add(Embedding(max_features, output_dim=18)) model.add(LSTM(lstm_len)) model.add(Dropout(dropout)) model.add(Dense(3, activation='softmax')) model.compile(loss=loss_f, optimizer=opt, metrics=['accuracy']) print("Training new model, loss:"+loss_f+", optimizer="+opt+", lstm_len="+str(lstm_len)+", dropoff="+str(dropout)) model.fit(X_train, 
Y_train, batch_size=16, validation_split = 0.2, epochs=10) score = model.evaluate(X_test, Y_test, batch_size=16) print("") print("test score: " + str(score)) print("") print("") ```
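The nested loops above train every combination of loss, optimizer, LSTM width, and dropout, but never record which configuration scored best. A small generic pattern for that (the `evaluate` callable is a hypothetical stand-in for the compile/fit/evaluate sequence):

```python
from itertools import product

def grid_search(evaluate, grid):
    """Try every combination in `grid`, return the best (score, params)."""
    best_score, best_params = float("-inf"), None
    for params in product(*grid.values()):
        cfg = dict(zip(grid.keys(), params))
        score = evaluate(cfg)
        if score > best_score:
            best_score, best_params = score, cfg
    return best_score, best_params

grid = {"opt": ["rmsprop", "adam"], "lstm_len": [32, 64], "dropout": [0.4, 0.5]}
# stand-in scorer: pretend wider LSTMs with adam do best
score, cfg = grid_search(lambda c: c["lstm_len"] / (1 if c["opt"] == "adam" else 2), grid)
print(score, cfg)   # 64.0 {'opt': 'adam', 'lstm_len': 64, 'dropout': 0.4}
```

In the real loop, `evaluate` would build the model, fit it, and return `model.evaluate(X_test, Y_test, batch_size=16)[1]` (the accuracy), so the winning configuration survives the search.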
``` import pandas as pd import numpy as np import matplotlib import seaborn as sns import matplotlib.pyplot as plt pd.set_option('display.max_colwidth', -1) default = pd.read_csv('./results/results_default.csv') new = pd.read_csv('./results/results_new.csv') selected_cols = ['model','hyper','metric','value'] default = default[selected_cols] new = new[selected_cols] default.model.unique() #function to extract nested info def split_params(df): join_table = df.copy() join_table["list_hyper"] = join_table["hyper"].apply(eval) join_table = join_table.explode("list_hyper") join_table["params_name"], join_table["params_val"] = zip(*join_table["list_hyper"]) return join_table color = ['lightpink','skyblue','lightgreen', "lightgrey", "navajowhite", "thistle"] markerfacecolor = ['red', 'blue', 'green','grey', "orangered", "darkviolet" ] marker = ['P', '^' ,'o', "H", "X", "p"] fig_size=(6,4) ``` ### Default server ``` default_split = split_params(default)[['model','metric','value','params_name','params_val']] models = default_split.model.unique().tolist() CollectiveMF_Item_set = default_split[default_split['model'] == models[0]] CollectiveMF_User_set = default_split[default_split['model'] == models[1]] CollectiveMF_No_set = default_split[default_split['model'] == models[2]] CollectiveMF_Both_set = default_split[default_split['model'] == models[3]] surprise_SVD_set = default_split[default_split['model'] == models[4]] surprise_Baseline_set = default_split[default_split['model'] == models[5]] ``` ## surprise_SVD ``` surprise_SVD_ndcg = surprise_SVD_set[(surprise_SVD_set['metric'] == 'ndcg@10')] surprise_SVD_ndcg = surprise_SVD_ndcg.pivot(index= 'value', columns='params_name', values='params_val').reset_index(inplace = False) surprise_SVD_ndcg = surprise_SVD_ndcg[surprise_SVD_ndcg.n_factors > 4] n_factors = [10,50,100,150] reg_all = [0.01,0.05,0.1,0.5] lr_all = [0.002,0.005,0.01] surprise_SVD_ndcg = surprise_SVD_ndcg.sort_values('reg_all') fig, ax = plt.subplots(1,1, figsize = 
fig_size) for i in range(4): labelstring = 'n_factors = '+ str(n_factors[i]) ax.semilogx('reg_all', 'value', data = surprise_SVD_ndcg.loc[(surprise_SVD_ndcg['lr_all'] == 0.002)&(surprise_SVD_ndcg['n_factors']== n_factors[i])], marker= marker[i], markerfacecolor=markerfacecolor[i], markersize=9, color= color[i], linewidth=3, label = labelstring) ax.legend() ax.set_ylabel('ndcg@10',fontsize = 18) ax.set_xlabel('regParam',fontsize = 18) ax.set_title('surprise_SVD \n ndcg@10 vs regParam with lr = 0.002',fontsize = 18) ax.set_xticks(reg_all) ax.xaxis.set_tick_params(labelsize=14) ax.yaxis.set_tick_params(labelsize=13) pic = fig plt.tight_layout() pic.savefig('figs/hyper/SVD_ndcg_vs_reg_factor.eps', format='eps') surprise_SVD_ndcg = surprise_SVD_ndcg.sort_values('n_factors') fig, ax = plt.subplots(1,1, figsize = fig_size) for i in range(4): labelstring = 'regParam = '+ str(reg_all[i]) ax.plot('n_factors', 'value', data = surprise_SVD_ndcg.loc[(surprise_SVD_ndcg['lr_all'] == 0.002)&(surprise_SVD_ndcg['reg_all']== reg_all[i])], marker= marker[i], markerfacecolor=markerfacecolor[i], markersize=9, color= color[i], linewidth=3, label = labelstring) ax.legend() ax.set_ylabel('ndcg@10',fontsize = 18) ax.set_xlabel('n_factors',fontsize = 18) ax.set_title('surprise_SVD \n ndcg@10 vs n_factors with lr = 0.002',fontsize = 18) ax.set_xticks(n_factors) ax.xaxis.set_tick_params(labelsize=14) ax.yaxis.set_tick_params(labelsize=13) pic = fig plt.tight_layout() pic.savefig('figs/hyper/SVD_ndcg_vs_factor_reg.eps', format='eps') ``` ## CollectiveMF_Both ``` reg_param = [0.0001, 0.001, 0.01] w_main = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0] k = [4.,8.,16.] 
CollectiveMF_Both_ndcg = CollectiveMF_Both_set[CollectiveMF_Both_set['metric'] == 'ndcg@10'] CollectiveMF_Both_ndcg = CollectiveMF_Both_ndcg.pivot(index= 'value', columns='params_name', values='params_val').reset_index(inplace = False) ### Visualization of hyperparameters tuning fig, ax = plt.subplots(1,1, figsize = fig_size) CollectiveMF_Both_ndcg.sort_values("reg_param", inplace=True) for i in range(len(w_main)): labelstring = 'w_main = '+ str(w_main[i]) ax.semilogx('reg_param', 'value', data = CollectiveMF_Both_ndcg.loc[(CollectiveMF_Both_ndcg['k'] == 4.0)&(CollectiveMF_Both_ndcg['w_main']== w_main[i])], marker= marker[i], markerfacecolor= markerfacecolor[i], markersize=9, color= color[i], linewidth=3, label = labelstring) ax.legend() ax.set_ylabel('ndcg@10',fontsize = 18) ax.set_xlabel('regParam',fontsize = 18) ax.set_title('CollectiveMF_Both \n ndcg@10 vs regParam with k = 4.0',fontsize = 18) ax.set_xticks(reg_param) ax.xaxis.set_tick_params(labelsize=10) ax.yaxis.set_tick_params(labelsize=13) pic = fig plt.tight_layout() pic.savefig('figs/hyper/CMF_ndcg_vs_reg_w_main.eps', format='eps') fig, ax = plt.subplots(1,1, figsize = fig_size) CollectiveMF_Both_ndcg = CollectiveMF_Both_ndcg.sort_values('w_main') for i in range(len(reg_param)): labelstring = 'regParam = '+ str(reg_param[i]) ax.plot('w_main', 'value', data = CollectiveMF_Both_ndcg.loc[(CollectiveMF_Both_ndcg['k'] == 4.0)&(CollectiveMF_Both_ndcg['reg_param']== reg_param[i])], marker= marker[i], markerfacecolor= markerfacecolor[i], markersize=9, color= color[i], linewidth=3, label = labelstring) ax.legend() ax.set_ylabel('ndcg@10',fontsize = 18) ax.set_xlabel('w_main',fontsize = 18) ax.set_title('CollectiveMF_Both \n ndcg@10 vs w_main with k = 4.0',fontsize = 18) ax.set_xticks(w_main) ax.xaxis.set_tick_params(labelsize=14) ax.yaxis.set_tick_params(labelsize=13) pic = fig plt.tight_layout() pic.savefig('figs/hyper/CMF_ndcg_vs_w_main_reg.eps', format='eps') ``` ### New server ``` new_split = 
split_params(new)[['model','metric','value','params_name','params_val']] Test_implicit_set = new_split[new_split['model'] == 'BPR'] FMItem_set = new_split[new_split['model'] == 'FMItem'] FMNone_set = new_split[new_split['model'] == 'FMNone'] ``` ## Test_implicit ``` Test_implicit_set_ndcg = Test_implicit_set[Test_implicit_set['metric'] == 'ndcg@10'] Test_implicit_set_ndcg = Test_implicit_set_ndcg.pivot(index="value", columns='params_name', values='params_val').reset_index(inplace = False) Test_implicit_set_ndcg = Test_implicit_set_ndcg[Test_implicit_set_ndcg.iteration > 20].copy() regularization = [0.001,0.005, 0.01 ] learning_rate = [0.0001, 0.001, 0.005] factors = [4,8,16] Test_implicit_set_ndcg.sort_values('regularization', inplace=True) fig, ax = plt.subplots(1,1, figsize = fig_size) for i in range(len(factors)): labelstring = 'n_factors = '+ str(factors[i]) ax.plot('regularization', 'value', data = Test_implicit_set_ndcg.loc[(Test_implicit_set_ndcg['learning_rate'] == 0.005)&(Test_implicit_set_ndcg['factors']== factors[i])], marker= marker[i], markerfacecolor=markerfacecolor[i], markersize=9, color= color[i], linewidth=3, label = labelstring) ax.legend() ax.set_ylabel('ndcg@10',fontsize = 18) ax.set_xlabel('regParam',fontsize = 18) ax.set_title('BPR \n ndcg@10 vs regParam with lr = 0.005',fontsize = 18) ax.set_xticks([1e-3, 5e-3, 1e-2]) ax.xaxis.set_tick_params(labelsize=14) ax.yaxis.set_tick_params(labelsize=13) pic = fig plt.tight_layout() pic.savefig('figs/hyper/BPR_ndcg_vs_reg_factors.eps', format='eps') Test_implicit_set_ndcg.sort_values('factors', inplace=True) fig, ax = plt.subplots(1,1, figsize = fig_size) for i in range(len(regularization)): labelstring = 'regParam = '+ str(regularization[i]) ax.plot('factors', 'value', data = Test_implicit_set_ndcg.loc[(Test_implicit_set_ndcg['learning_rate'] == 0.005)& (Test_implicit_set_ndcg.regularization== regularization[i])], marker= marker[i], markerfacecolor=markerfacecolor[i], markersize=9, color= color[i], 
linewidth=3, label = labelstring) ax.legend() ax.set_ylabel('ndcg@10',fontsize = 18) ax.set_xlabel('n_factors',fontsize = 18) ax.set_title('BPR \n ndcg@10 vs n_factors with lr = 0.005',fontsize = 18) ax.set_xticks(factors) ax.xaxis.set_tick_params(labelsize=14) ax.yaxis.set_tick_params(labelsize=13) pic = fig plt.tight_layout() pic.savefig('figs/hyper/BPR_ndcg_vs_factors_reg.eps', format='eps') ``` ## FMItem ``` FMItem_set_ndcg = FMItem_set[FMItem_set['metric'] == 'ndcg@10'] FMItem_set_ndcg = FMItem_set_ndcg.pivot(index="value", columns='params_name', values='params_val').reset_index(inplace = False) FMItem_set_ndcg = FMItem_set_ndcg[(FMItem_set_ndcg.n_iter == 100) & (FMItem_set_ndcg["rank"] <= 4)].copy() FMItem_set_ndcg color = ['lightpink','skyblue','lightgreen', "lightgrey", "navajowhite", "thistle"] markerfacecolor = ['red', 'blue', 'green','grey', "orangered", "darkviolet" ] marker = ['P', '^' ,'o', "H", "X", "p"] reg = [0.2, 0.3, 0.5, 0.8, 0.9, 1] fct = [2,4] FMItem_set_ndcg.sort_values('l2_reg_V', inplace=True) fig, ax = plt.subplots(1,1, figsize = fig_size) for i in range(len(reg)): labelstring = 'regParam = '+ str(reg[i]) ax.plot('rank', 'value', data = FMItem_set_ndcg.loc[(FMItem_set_ndcg.l2_reg_V == reg[i])& (FMItem_set_ndcg.l2_reg_w == reg[i])], marker= marker[i], markerfacecolor=markerfacecolor[i], markersize=9, color= color[i], linewidth=3, label = labelstring) ax.legend() ax.set_ylabel('ndcg@10',fontsize = 18) ax.set_xlabel('n_factors',fontsize = 18) ax.set_title('FM_Item \n ndcg@10 vs n_factors with lr = 0.005',fontsize = 18) ax.set_xticks(fct) ax.xaxis.set_tick_params(labelsize=14) ax.yaxis.set_tick_params(labelsize=13) pic = fig plt.tight_layout() pic.savefig('figs/hyper/FM_ndcg_vs_factors_reg.eps', format='eps') FMItem_set_ndcg.sort_values('rank', inplace=True) fig, ax = plt.subplots(1,1, figsize = fig_size) for i in range(len(fct)): labelstring = 'n_factors = '+ str(fct[i]) ax.plot('l2_reg_V', 'value', data = 
FMItem_set_ndcg.loc[(FMItem_set_ndcg["rank"] == fct[i])], marker= marker[i], markerfacecolor=markerfacecolor[i], markersize=9, color= color[i], linewidth=3, label = labelstring) ax.legend() ax.set_ylabel('ndcg@10',fontsize = 18) ax.set_xlabel('regParam',fontsize = 18) ax.set_title('FM_Item \n ndcg@10 vs regParam with lr = 0.005',fontsize = 18) ax.set_xticks(np.arange(0.1, 1.1, 0.1)) ax.xaxis.set_tick_params(labelsize=14) ax.yaxis.set_tick_params(labelsize=13) pic = fig plt.tight_layout() pic.savefig('figs/hyper/FM_ndcg_vs_reg_factors.eps', format='eps') ```
github_jupyter
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib import style import matplotlib.ticker as ticker import seaborn as sns from sklearn.datasets import load_boston from sklearn.ensemble import RandomForestClassifier, VotingClassifier, GradientBoostingClassifier from sklearn.metrics import accuracy_score from sklearn.metrics import confusion_matrix from sklearn.metrics import plot_confusion_matrix from sklearn.metrics import classification_report from sklearn.metrics import f1_score, make_scorer from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder from sklearn.model_selection import cross_val_score from sklearn.model_selection import train_test_split from sklearn.model_selection import RepeatedKFold from sklearn.model_selection import GridSearchCV from sklearn.model_selection import ParameterGrid from sklearn.inspection import permutation_importance import multiprocessing from xgboost import XGBClassifier labels = pd.read_csv('../../csv/train_labels.csv') labels.head() values = pd.read_csv('../../csv/train_values.csv') values.T to_be_categorized = ["land_surface_condition", "foundation_type", "roof_type",\ "position", "ground_floor_type", "other_floor_type",\ "plan_configuration", "legal_ownership_status"] for row in to_be_categorized: values[row] = values[row].astype("category") values.info() datatypes = dict(values.dtypes) for row in values.columns: if datatypes[row] != "int64" and datatypes[row] != "int32" and \ datatypes[row] != "int16" and datatypes[row] != "int8": continue if values[row].nlargest(1).item() > 32767 and values[row].nlargest(1).item() < 2**31: values[row] = values[row].astype(np.int32) elif values[row].nlargest(1).item() > 127: values[row] = values[row].astype(np.int16) else: values[row] = values[row].astype(np.int8) labels["building_id"] = labels["building_id"].astype(np.int32) labels["damage_grade"] = labels["damage_grade"].astype(np.int8) labels.info() ``` # Feature 
Engineering for XGBoost ``` important_values = values\ .merge(labels, on="building_id") important_values.drop(columns=["building_id"], inplace = True) important_values["geo_level_1_id"] = important_values["geo_level_1_id"].astype("category") important_values X_train, X_test, y_train, y_test = train_test_split(important_values.drop(columns = 'damage_grade'), important_values['damage_grade'], test_size = 0.2, random_state = 123) #OneHotEncoding def encode_and_bind(original_dataframe, feature_to_encode): dummies = pd.get_dummies(original_dataframe[[feature_to_encode]]) res = pd.concat([original_dataframe, dummies], axis=1) res = res.drop([feature_to_encode], axis=1) return(res) features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\ "position", "ground_floor_type", "other_floor_type",\ "plan_configuration", "legal_ownership_status"] for feature in features_to_encode: X_train = encode_and_bind(X_train, feature) X_test = encode_and_bind(X_test, feature) X_train import time # min_child_weight = [0, 1, 2] # max_delta_step = [0, 5, 10] def my_grid_search(): print(time.gmtime()) i = 1 df = pd.DataFrame({'subsample': [], 'gamma': [], 'learning_rate': [], 'max_depth': [], 'score': []}) for subsample in [0.75, 0.885, 0.95]: for gamma in [0.75, 1, 1.25]: for learning_rate in [0.4375, 0.45, 0.4625]: for max_depth in [5, 6, 7]: model = XGBClassifier(n_estimators = 350, booster = 'gbtree', subsample = subsample, gamma = gamma, max_depth = max_depth, learning_rate = learning_rate, label_encoder = False, verbosity = 0) model.fit(X_train, y_train) y_preds = model.predict(X_test) score = f1_score(y_test, y_preds, average = 'micro') df = df.append(pd.Series( data={'subsample': subsample, 'gamma': gamma, 'learning_rate': learning_rate, 'max_depth': max_depth, 'score': score}, name = i)) print(i, time.gmtime()) i += 1 return df.sort_values('score', ascending = False) current_df = my_grid_search() df = 
pd.read_csv('grid-search/res-feature-engineering.csv') df = df.append(current_df) df.to_csv('grid-search/res-feature-engineering.csv') current_df import time def my_grid_search(): print(time.gmtime()) i = 1 df = pd.DataFrame({'subsample': [], 'gamma': [], 'learning_rate': [], 'max_depth': [], 'score': []}) for subsample in [0.885]: for gamma in [1]: for learning_rate in [0.45]: for max_depth in [5,6,7,8]: model = XGBClassifier(n_estimators = 350, booster = 'gbtree', subsample = subsample, gamma = gamma, max_depth = max_depth, learning_rate = learning_rate, label_encoder = False, verbosity = 0) model.fit(X_train, y_train) y_preds = model.predict(X_test) score = f1_score(y_test, y_preds, average = 'micro') df = df.append(pd.Series( data={'subsample': subsample, 'gamma': gamma, 'learning_rate': learning_rate, 'max_depth': max_depth, 'score': score}, name = i)) print(i, time.gmtime()) i += 1 return df.sort_values('score', ascending = False) df = my_grid_search() # df = pd.read_csv('grid-search/res-feature-engineering.csv') # df.append(current_df) df.to_csv('grid-search/res-feature-engineering.csv') df pd.read_csv('grid-search/res-no-feature-engineering.csv')\ .nlargest(20, 'score') ``` # Training three of the best models with Voting. 
``` xgb_model_1 = XGBClassifier(n_estimators = 350, subsample = 0.885, booster = 'gbtree', gamma = 1, learning_rate = 0.45, label_encoder = False, verbosity = 2) xgb_model_2 = XGBClassifier(n_estimators = 350, subsample = 0.950, booster = 'gbtree', gamma = 0.5, learning_rate = 0.45, label_encoder = False, verbosity = 2) xgb_model_3 = XGBClassifier(n_estimators = 350, subsample = 0.750, booster = 'gbtree', gamma = 1, learning_rate = 0.45, label_encoder = False, verbosity = 2) xgb_model_4 = XGBClassifier(n_estimators = 350, subsample = 0.80, booster = 'gbtree', gamma = 1, learning_rate = 0.55, label_encoder = False, verbosity = 2) rf_model_1 = RandomForestClassifier(n_estimators = 150, max_depth = None, max_features = 45, min_samples_split = 15, min_samples_leaf = 1, criterion = "gini", verbose=True) rf_model_2 = RandomForestClassifier(n_estimators = 250, max_depth = None, max_features = 45, min_samples_split = 15, min_samples_leaf = 1, criterion = "gini", verbose=True, n_jobs =-1) import lightgbm as lgb lgbm_model_1 = lgb.LGBMClassifier(boosting_type='gbdt', colsample_bytree=1.0, importance_type='split', learning_rate=0.15, max_depth=None, n_estimators=1600, n_jobs=-1, objective=None, subsample=1.0, subsample_for_bin=200000, subsample_freq=0) lgbm_model_2 = lgb.LGBMClassifier(boosting_type='gbdt', colsample_bytree=1.0, importance_type='split', learning_rate=0.15, max_depth=25, n_estimators=1750, n_jobs=-1, objective=None, subsample=0.7, subsample_for_bin=240000, subsample_freq=0) lgbm_model_3 = lgb.LGBMClassifier(boosting_type='gbdt', colsample_bytree=1.0, importance_type='split', learning_rate=0.20, max_depth=40, n_estimators=1450, n_jobs=-1, objective=None, subsample=0.7, subsample_for_bin=160000, subsample_freq=0) import sklearn as sk import sklearn.neural_network neuronal_1 = sk.neural_network.MLPClassifier(solver='adam', activation = 'relu', learning_rate_init=0.001, learning_rate = 'adaptive', verbose=True, batch_size = 'auto') gb_model_1 = 
GradientBoostingClassifier(n_estimators = 305, max_depth = 9, min_samples_split = 2, min_samples_leaf = 3, subsample=0.6, verbose=True, learning_rate=0.15) vc_model = VotingClassifier(estimators = [('xgb1', xgb_model_1), ('xgb2', xgb_model_2), ('rfm1', rf_model_1), ('lgbm1', lgbm_model_1), ('lgbm2', lgbm_model_2), ('gb_model_1', gb_model_1)], weights = [1.0, 0.95, 0.85, 1.0, 0.9, 0.7], voting = 'soft', verbose = True) vc_model.fit(X_train, y_train) y_preds = vc_model.predict(X_test) f1_score(y_test, y_preds, average='micro') test_values = pd.read_csv('../../csv/test_values.csv', index_col = "building_id") test_values test_values_subset = test_values test_values_subset["geo_level_1_id"] = test_values_subset["geo_level_1_id"].astype("category") test_values_subset def encode_and_bind(original_dataframe, feature_to_encode): dummies = pd.get_dummies(original_dataframe[[feature_to_encode]]) res = pd.concat([original_dataframe, dummies], axis=1) res = res.drop([feature_to_encode], axis=1) return(res) features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\ "position", "ground_floor_type", "other_floor_type",\ "plan_configuration", "legal_ownership_status"] for feature in features_to_encode: test_values_subset = encode_and_bind(test_values_subset, feature) test_values_subset test_values_subset.shape # Generate the predictions for the test set. preds = vc_model.predict(test_values_subset) submission_format = pd.read_csv('../../csv/submission_format.csv', index_col = "building_id") my_submission = pd.DataFrame(data=preds, columns=submission_format.columns, index=submission_format.index) my_submission.head() my_submission.to_csv('../../csv/predictions/jf/vote/jf-model-3-submission.csv') !head ../../csv/predictions/jf/vote/jf-model-3-submission.csv ```
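The hand-rolled nested loops in `my_grid_search` above can be expressed with `ParameterGrid`, which this notebook imports but never uses. A sketch — `fit_score` here is a hypothetical stand-in for the XGBoost fit/predict/F1 body:

```python
import pandas as pd
from sklearn.model_selection import ParameterGrid

param_grid = {
    'subsample': [0.75, 0.885, 0.95],
    'gamma': [0.75, 1, 1.25],
    'learning_rate': [0.4375, 0.45, 0.4625],
    'max_depth': [5, 6, 7],
}

def run_grid_search(fit_score, param_grid):
    """fit_score(params) -> score; returns all results sorted by score."""
    rows = []
    for params in ParameterGrid(param_grid):
        rows.append({**params, 'score': fit_score(params)})
    return pd.DataFrame(rows).sort_values('score', ascending=False)
```

This replaces four levels of indentation with one loop and keeps the result-collection logic in one place.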
# Delfin ### Installation Run the following cell to install osiris-sdk. ``` !pip install osiris-sdk --upgrade ``` ### Access to dataset There are two ways to get access to a dataset: 1. Service Principal 2. Access Token #### Config file with Service Principal If done with a **Service Principal**, it is advised to add the following file with **tenant_id**, **client_id**, and **client_secret**: The structure of **conf.ini**: ``` [Authorization] tenant_id = <tenant_id> client_id = <client_id> client_secret = <client_secret> [Egress] url = <egress-url> ``` #### Config file if using Access Token If done with an **Access Token**, then assign it to a variable (see example below). The structure of **conf.ini**: ``` [Egress] url = <egress-url> ``` The egress-url can be [found here](https://github.com/Open-Dataplatform/examples/blob/main/README.md). ### Imports Execute the following cell to import the necessary libraries ``` from osiris.apis.egress import Egress from osiris.core.azure_client_authorization import ClientAuthorization from osiris.core.enums import Horizon from configparser import ConfigParser ``` ### Initialize the Egress class with Service Principal ``` config = ConfigParser() config.read('conf.ini') client_auth = ClientAuthorization(tenant_id=config['Authorization']['tenant_id'], client_id=config['Authorization']['client_id'], client_secret=config['Authorization']['client_secret']) egress = Egress(client_auth=client_auth, egress_url=config['Egress']['url']) ``` ### Initialize the Egress class with Access Token ``` config = ConfigParser() config.read('conf.ini') access_token = 'REPLACE WITH ACCESS TOKEN HERE' client_auth = ClientAuthorization(access_token=access_token) egress = Egress(client_auth=client_auth, egress_url=config['Egress']['url']) ``` ### Delfin Daily The data retrieved will be **from_date <= data < to_date**. The **from_date** and **to_date** syntax is [described here](https://github.com/Open-Dataplatform/examples/blob/main/README.md). 
``` json_content = egress.download_delfin_file(horizon=Horizon.MINUTELY, from_date="2021-07-15T20:00", to_date="2021-07-16T00:00") json_content = egress.download_delfin_file(horizon=Horizon.DAILY, from_date="2020-01", to_date="2020-02") # We only show the first entry here json_content[0] ``` ### Delfin Hourly The **from_date** and **to_date** syntax is [described here](https://github.com/Open-Dataplatform/examples/blob/main/README.md). ``` json_content = egress.download_delfin_file(horizon=Horizon.HOURLY, from_date="2020-01-01T00", to_date="2020-01-01T06") # We only show the first entry here json_content[0] ``` ### Delfin Minutely The **from_date** and **to_date** syntax is [described here](https://github.com/Open-Dataplatform/examples/blob/main/README.md). ``` json_content = egress.download_delfin_file(horizon=Horizon.MINUTELY, from_date="2021-07-15T00:00", to_date="2021-07-15T00:59") # We only show the first entry here json_content[0] ``` ### Delfin Daily with Indices The **from_date** and **to_date** syntax is [described here](https://github.com/Open-Dataplatform/examples/blob/main/README.md). ``` json_content = egress.download_delfin_file(horizon=Horizon.DAILY, from_date="2020-01-15T03:00", to_date="2020-01-16T03:01", table_indices=[1, 2]) # We only show the first entry here json_content[0] ```
<a href="https://colab.research.google.com/github/PradyumnaKrishna/Colab-Hacks/blob/RDP-v2/Colab%20RDP/Colab%20RDP.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # **Colab RDP** : Remote Desktop to Colab Instance Used Google Remote Desktop & Ngrok Tunnel > **Warning : Not for Cryptocurrency Mining<br></br>** >**Why are hardware resources such as T4 GPUs not available to me?** The best available hardware is prioritized for users who use Colaboratory interactively rather than for long-running computations. Users who use Colaboratory for long-running computations may be temporarily restricted in the type of hardware made available to them, and/or the duration that the hardware can be used for. We encourage users with high computational needs to use Colaboratory’s UI with a local runtime. Please note that using Colaboratory for cryptocurrency mining is disallowed entirely, and may result in being banned from using Colab altogether. Google Colab can give you Instance with 12GB of RAM and GPU for 12 hours (Max.) for Free users. Anyone can use it to perform Heavy Tasks. To use other similiar Notebooks use my Repository **[Colab Hacks](https://github.com/PradyumnaKrishna/Colab-Hacks)** ``` #@title **Create User** #@markdown Enter Username and Password username = "user" #@param {type:"string"} password = "root" #@param {type:"string"} print("Creating User and Setting it up") # Creation of user ! sudo useradd -m $username &> /dev/null # Add user to sudo group ! sudo adduser $username sudo &> /dev/null # Set password of user to 'root' ! echo '$username:$password' | sudo chpasswd # Change default shell from sh to bash ! 
sed -i 's/\/bin\/sh/\/bin\/bash/g' /etc/passwd print("User Created and Configured") #@title **RDP** #@markdown It takes 4-5 minutes for installation #@markdown Visit http://remotedesktop.google.com/headless and Copy the command after authentication CRP = "" #@param {type:"string"} def CRD(): with open('install.sh', 'w') as script: script.write("""#! /bin/bash b='\033[1m' r='\E[31m' g='\E[32m' c='\E[36m' endc='\E[0m' enda='\033[0m' printf "\n\n$c$b Loading Installer $endc$enda" >&2 if sudo apt-get update &> /dev/null then printf "\r$g$b Installer Loaded $endc$enda\n" >&2 else printf "\r$r$b Error Occured $endc$enda\n" >&2 exit fi printf "\n$g$b Installing Chrome Remote Desktop $endc$enda" >&2 { wget https://dl.google.com/linux/direct/chrome-remote-desktop_current_amd64.deb sudo dpkg --install chrome-remote-desktop_current_amd64.deb sudo apt install --assume-yes --fix-broken } &> /dev/null && printf "\r$c$b Chrome Remote Desktop Installed $endc$enda\n" >&2 || { printf "\r$r$b Error Occured $endc$enda\n" >&2; exit; } sleep 3 printf "$g$b Installing Desktop Environment $endc$enda" >&2 { sudo DEBIAN_FRONTEND=noninteractive \ apt install --assume-yes xfce4 desktop-base sudo bash -c 'echo "exec /etc/X11/Xsession /usr/bin/xfce4-session" > /etc/chrome-remote-desktop-session' sudo apt install --assume-yes xscreensaver sudo systemctl disable lightdm.service } &> /dev/null && printf "\r$c$b Desktop Environment Installed $endc$enda\n" >&2 || { printf "\r$r$b Error Occured $endc$enda\n" >&2; exit; } sleep 3 printf "$g$b Installing Google Chrome $endc$enda" >&2 { wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb sudo dpkg --install google-chrome-stable_current_amd64.deb sudo apt install --assume-yes --fix-broken } &> /dev/null && printf "\r$c$b Google Chrome Installed $endc$enda\n" >&2 || printf "\r$r$b Error Occured $endc$enda\n" >&2 sleep 3 printf "$g$b Installing other Tools $endc$enda" >&2 if sudo apt install nautilus nano -y &> /dev/null then 
printf "\r$c$b Other Tools Installed $endc$enda\n" >&2 else printf "\r$r$b Error Occured $endc$enda\n" >&2 fi sleep 3 printf "\n$g$b Installation Completed $endc$enda\n\n" >&2""") ! chmod +x install.sh ! ./install.sh # Adding user to CRP group ! sudo adduser $username chrome-remote-desktop &> /dev/null # Finishing Work ! su - $username -c """$CRP""" print("Finished Succesfully") try: if username: if CRP == "" : print("Please enter authcode from the given link") else: CRD() except NameError: print("username variable not found") print("Create a User First") #@title **Google Drive Mount** #@markdown Google Drive used as Persistance HDD for files.<br> #@markdown Mounted at `user` Home directory inside drive folder #@markdown (If `username` variable not defined then use root as default). def MountGDrive(): from google.colab import drive ! runuser -l $user -c "yes | python3 -m pip install --user google-colab" > /dev/null 2>&1 mount = """from os import environ as env from google.colab import drive env['CLOUDSDK_CONFIG'] = '/content/.config' drive.mount('{}')""".format(mountpoint) with open('/content/mount.py', 'w') as script: script.write(mount) ! runuser -l $user -c "python3 /content/mount.py" try: if username: mountpoint = "/home/"+username+"/drive" user = username except NameError: print("username variable not found, mounting at `/content/drive' using `root'") mountpoint = '/content/drive' user = 'root' MountGDrive() #@title **SSH** (Using NGROK) ! pip install colab_ssh --upgrade &> /dev/null from colab_ssh import launch_ssh, init_git from IPython.display import clear_output #@markdown Copy authtoken from https://dashboard.ngrok.com/auth ngrokToken = "" #@param {type:'string'} def runNGROK(): launch_ssh(ngrokToken, password) clear_output() print("ssh", username, end='@') ! 
curl -s http://localhost:4040/api/tunnels | python3 -c \ "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'][6:].replace(':', ' -p '))" try: if username: pass elif password: pass except NameError: print("No user found using username and password as 'root'") username='root' password='root' if ngrokToken == "": print("No ngrokToken Found, Please enter it") else: runNGROK() #@title Package Installer { vertical-output: true } run = False #@param {type:"boolean"} #@markdown *Package management actions (gasp)* action = "Install" #@param ["Install", "Check Installed", "Remove"] {allow-input: true} package = "wget" #@param {type:"string"} system = "apt" #@param ["apt", ""] def install(package=package, system=system): if system == "apt": !apt --fix-broken install > /dev/null 2>&1 !killall apt > /dev/null 2>&1 !rm /var/lib/dpkg/lock-frontend !dpkg --configure -a > /dev/null 2>&1 !apt-get install -o Dpkg::Options::="--force-confold" --no-install-recommends -y $package !dpkg --configure -a > /dev/null 2>&1 !apt update > /dev/null 2>&1 !apt install $package > /dev/null 2>&1 def check_installed(package=package, system=system): if system == "apt": !apt list --installed | grep $package def remove(package=package, system=system): if system == "apt": !apt remove $package if run: if action == "Install": install() if action == "Check Installed": check_installed() if action == "Remove": remove() #@title **Colab Shutdown** #@markdown To Kill NGROK Tunnel NGROK = False #@param {type:'boolean'} #@markdown To Unmount GDrive GDrive = False #@param {type:'boolean'} #@markdown To Sleep Colab Sleep = False #@param {type:'boolean'} if NGROK: ! killall ngrok if GDrive: with open('/content/unmount.py', 'w') as unmount: unmount.write("""from google.colab import drive drive.flush_and_unmount()""") try: if user: ! runuser $user -c 'python3 /content/unmount.py' except NameError: print("Google Drive not Mounted") if Sleep: ! sleep 43200 ```
``` from xml.dom import expatbuilder import numpy as np import matplotlib.pyplot as plt import struct import os # should be in the same directory as corresponding xml and csv eis_filename = '/example/path/to/eis_image_file.dat' image_fn, image_ext = os.path.splitext(eis_filename) eis_xml_filename = image_fn + ".xml" ``` # crop xml manually change the line and sample values in the xml to match (n_lines, n_samples) ``` eis_xml = expatbuilder.parse(eis_xml_filename, False) eis_dom = eis_xml.getElementsByTagName("File_Area_Observational").item(0) dom_lines = eis_dom.getElementsByTagName("Axis_Array").item(0) dom_samples = eis_dom.getElementsByTagName("Axis_Array").item(1) dom_lines = dom_lines.getElementsByTagName("elements")[0] dom_samples = dom_samples.getElementsByTagName("elements")[0] total_lines = int( dom_lines.childNodes[0].data ) total_samples = int( dom_samples.childNodes[0].data ) total_lines, total_samples ``` # crop image ``` dn_size_bytes = 4 # number of bytes per DN n_lines = 60 # how many to crop to n_samples = 3 start_line = 1200 # point to start crop from start_sample = 1200 image_offset = (start_line*total_samples + start_sample) * dn_size_bytes line_length = n_samples * dn_size_bytes buffer_size = n_lines * total_samples * dn_size_bytes with open(eis_filename, 'rb') as f: f.seek(image_offset) b_image_data = f.read() b_image_data = np.frombuffer(b_image_data[:buffer_size], dtype=np.uint8) b_image_data.shape b_image_data = np.reshape(b_image_data, (n_lines, total_samples, dn_size_bytes) ) b_image_data.shape b_image_data = b_image_data[:,:n_samples,:] b_image_data.shape image_data = [] for i in range(n_lines): image_sample = [] for j in range(n_samples): dn_bytes = bytearray(b_image_data[i,j,:]) dn = struct.unpack( "<f", dn_bytes ) image_sample.append(dn) image_data.append(image_sample) image_data = np.array(image_data) image_data.shape plt.figure(figsize=(10,10)) plt.imshow(image_data, vmin=0, vmax=1) crop = "_cropped" image_fn, image_ext = 
os.path.splitext(eis_filename) mini_image_fn = image_fn + crop + image_ext mini_image_bn = os.path.basename(mini_image_fn) if os.path.exists(mini_image_fn): os.remove(mini_image_fn) with open(mini_image_fn, 'ab+') as f: b_reduced_image_data = image_data.tobytes() f.write(b_reduced_image_data) ``` # crop times csv table ``` import pandas as pd # assumes csv file has the same filename with _times appended eis_csv_fn = image_fn + "_times.csv" df1 = pd.read_csv(eis_csv_fn) df1 x = np.array(df1) y = x[:n_lines, :] df = pd.DataFrame(y) df crop = "_cropped" csv_fn, csv_ext = os.path.splitext(eis_csv_fn) crop_csv_fn = csv_fn + crop + csv_ext crop_csv_bn = os.path.basename(crop_csv_fn) crop_csv_bn # write to file if os.path.exists(crop_csv_fn): os.remove(crop_csv_fn) df.to_csv( crop_csv_fn, header=False, index=False ) ```
# Cryptocurrency Clusters ``` %matplotlib inline #import dependencies from pathlib import Path import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.preprocessing import StandardScaler from sklearn.manifold import TSNE from sklearn.decomposition import PCA from sklearn.cluster import KMeans ``` # Data Preparation ``` #read data in using pandas df = pd.read_csv('Resources/crypto_data.csv') df.head(10) df.dtypes #Discard all cryptocurrencies that are not being traded.In other words, filter for currencies that are currently being traded. myfilter = (df['IsTrading'] == True) trading_df = df.loc[myfilter] trading_df = trading_df.drop('IsTrading', axis = 1) trading_df #Once you have done this, drop the IsTrading column from the dataframe. #Remove all rows that have at least one null value. trading_df.dropna(how = 'any', inplace = True) trading_df #Filter for cryptocurrencies that have been mined. That is, the total coins mined should be greater than zero. myfilter2 = (trading_df['TotalCoinsMined'] >0) final_df = trading_df.loc[myfilter2] final_df #In order for your dataset to be comprehensible to a machine learning algorithm, its data should be numeric. #Since the coin names do not contribute to the analysis of the data, delete the CoinName from the original dataframe. CoinName = final_df['CoinName'] Ticker = final_df['Unnamed: 0'] final_df = final_df.drop(['Unnamed: 0','CoinName'], axis = 1) final_df # convert the remaining features with text values, Algorithm and ProofType, into numerical data. #To accomplish this task, use Pandas to create dummy variables. final_df['TotalCoinSupply'] = final_df['TotalCoinSupply'].astype(float) final_df = pd.get_dummies(final_df) final_df ``` Examine the number of rows and columns of your dataset now. How did they change? There were 71 unique algorithms and 25 unique prooftypes so now we have 98 features in the dataset which is quite large. 
``` #Standardize your dataset so that columns that contain larger values do not unduly influence the outcome. scaled_data = StandardScaler().fit_transform(final_df) scaled_data ``` # Dimensionality Reduction Creating dummy variables above dramatically increased the number of features in your dataset. Perform dimensionality reduction with PCA. Rather than specify the number of principal components when you instantiate the PCA model, it is possible to state the desired explained variance. For this project, preserve 90% of the explained variance in dimensionality reduction. #How did the number of the features change? ``` # Applying PCA to reduce dimensions # Initialize PCA model pca = PCA(.90) # Get two principal components for the iris data. data_pca = pca.fit_transform(scaled_data) pca.explained_variance_ratio_ #df with the principal components (columns) pd.DataFrame(data_pca) ``` Next, further reduce the dataset dimensions with t-SNE and visually inspect the results. In order to accomplish this task, run t-SNE on the principal components: the output of the PCA transformation. Then create a scatter plot of the t-SNE output. Observe whether there are distinct clusters or not. ``` # Initialize t-SNE model tsne = TSNE(learning_rate=35) # Reduce dimensions tsne_features = tsne.fit_transform(data_pca) # The dataset has 2 columns tsne_features.shape # Prepare to plot the dataset # Visualize the clusters plt.scatter(tsne_features[:,0], tsne_features[:,1]) plt.show() ``` # Cluster Analysis with k-Means Create an elbow plot to identify the best number of clusters. ``` #Use a for-loop to determine the inertia for each k between 1 through 10. #Determine, if possible, where the elbow of the plot is, and at which value of k it appears. 
inertia = [] k = list(range(1, 11)) for i in k: km = KMeans(n_clusters=i) km.fit(data_pca) inertia.append(km.inertia_) # Define a DataFrame to plot the Elbow Curve elbow_data = {"k": k, "inertia": inertia} df_elbow = pd.DataFrame(elbow_data) plt.plot(df_elbow['k'], df_elbow['inertia']) plt.xticks(range(1,11)) plt.xlabel('Number of clusters') plt.ylabel('Inertia') plt.show() # Initialize the K-Means model model = KMeans(n_clusters=10, random_state=0) # Train the model model.fit(scaled_data) # Predict clusters predictions = model.predict(scaled_data) # Assign the label array directly so the clusters align positionally with the filtered rows final_df["cluster"] = model.labels_ plt.figure(figsize = (18,12)) plt.scatter(final_df['TotalCoinsMined'], final_df['TotalCoinSupply'], c=final_df['cluster']) plt.xlabel('TotalCoinsMined') plt.ylabel('TotalCoinSupply') plt.show() plt.figure(figsize = (18,12)) plt.scatter(final_df['TotalCoinsMined'], final_df['TotalCoinSupply'], c=final_df['cluster']) plt.xlabel('TotalCoinsMined') plt.ylabel('TotalCoinSupply') plt.xlim([0, 250000000]) plt.ylim([0, 250000000]) plt.show() ``` # Recommendation Based on your findings, make a brief (1-2 sentences) recommendation to your clients. Can the cryptocurrencies be clustered together? If so, into how many clusters? Even after running PCA to reduce dimensionality there are still a large number of features in the dataset. This means that there likely was not much correlation amongst the features allowing them to be reduced together. The k-means algorithm had a very large inertia and never really leveled off, even at larger numbers of clusters, making it difficult to determine where an ideal number of clusters might be. In most trials, the k-means algorithm clustered most of the cryptocurrencies together in one big cluster. I would not recommend clustering the cryptocurrencies together in practice, at least not based on these data features.
Our best model was CatBoost with a learning rate of 0.7 and 180 iterations, trained on 10 files of the data with a similar distribution of the feature user_target_recs (among the number of rows of each feature value). We received an AUC of 0.845 on the Kaggle leaderboard.

#Mount Drive
```
from google.colab import drive
drive.mount("/content/drive")
```
#Installations and Imports
```
# !pip install scikit-surprise
!pip install catboost
# !pip install xgboost

import os
import pandas as pd
# import xgboost
# from xgboost import XGBClassifier
# import pickle
import catboost
from catboost import CatBoostClassifier
```
#Global Parameters and Methods
```
home_path = "/content/drive/MyDrive/RS_Kaggle_Competition"

def get_train_files_paths(path):
    dir_paths = [os.path.join(path, dir_name) for dir_name in os.listdir(path) if dir_name.startswith("train")]
    file_paths = []
    for dir_path in dir_paths:
        curr_dir_file_paths = [os.path.join(dir_path, file_name) for file_name in os.listdir(dir_path)]
        file_paths.extend(curr_dir_file_paths)
    return file_paths

train_file_paths = get_train_files_paths(home_path)
```
#Get Data
```
def get_df_of_many_files(paths_list, number_of_files):
    for i in range(number_of_files):
        path = paths_list[i]
        curr_df = pd.read_csv(path)
        if i == 0:
            df = curr_df
        else:
            df = pd.concat([df, curr_df])
    return df

sample_train_data = get_df_of_many_files(train_file_paths[-21:], 10)
# sample_train_data = pd.read_csv(home_path + "/10_files_train_data")

sample_val_data = get_df_of_many_files(train_file_paths[-10:], 3)
# sample_val_data = pd.read_csv(home_path+"/3_files_val_data")
# sample_val_data.to_csv(home_path+"/3_files_val_data")
```
#Preprocess data
```
train_data = sample_train_data.fillna("Unknown")
val_data = sample_val_data.fillna("Unknown")
train_data

import gc
# Note: the Explore Data cells below still reference sample_train_data;
# skip this cell before running them.
del sample_val_data
del sample_train_data
gc.collect()
```
## Scale columns
```
# scale columns
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler

scaling_cols = ["empiric_calibrated_recs", "empiric_clicks", "user_recs", "user_clicks", "user_target_recs"]

scaler = StandardScaler()
train_data[scaling_cols] = scaler.fit_transform(train_data[scaling_cols])
val_data[scaling_cols] = scaler.transform(val_data[scaling_cols])
train_data

# Drop a leftover index column from a previous to_csv round-trip
val_data = val_data.drop(columns=["Unnamed: 0.1"])
val_data
```
#Explore Data
```
# test_data is loaded in the Submission File section below; run that cell first
sample_train_data
test_data

from collections import Counter

user_recs_dist = test_data["user_recs"].value_counts(normalize=True)
top_user_recs_count = user_recs_dist.nlargest(200)
print(top_user_recs_count)

percent = sum(top_user_recs_count.values)
percent

print(sample_train_data["user_recs"].value_counts(normalize=False))
print(test_data["user_recs"].value_counts())

positions = top_user_recs_count

def sample(obj, replace=False, total=1500000):
    return obj.sample(n=int(positions[obj.name] * total), replace=replace)

sample_train_data_filtered = sample_train_data[sample_train_data["user_recs"].isin(positions.keys())]
result = sample_train_data_filtered.groupby("user_recs").apply(sample).reset_index(drop=True)
result["user_recs"].value_counts(normalize=True)

top_user_recs_train_data = result
top_user_recs_train_data

not_top_user_recs_train_data = sample_train_data[~sample_train_data["user_recs"].isin(top_user_recs_train_data["user_recs"].unique())]
not_top_user_recs_train_data["user_recs"].value_counts()

train_data = pd.concat([top_user_recs_train_data, not_top_user_recs_train_data])
train_data["user_recs"].value_counts(normalize=False)

train_data = train_data.drop(columns = ["user_id_hash"])
train_data = train_data.fillna("Unknown")
train_data
```
#Train the model
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn import metrics

X_train = train_data.drop(columns=["is_click"], inplace=False)
y_train = train_data["is_click"]

X_val = val_data.drop(columns=["is_click"], inplace=False)
y_val = val_data["is_click"] from catboost import CatBoostClassifier # cat_features_inds = [1,2,3,4,7,8,12,13,14,15,17,18] encode_cols = [ "user_id_hash", "target_id_hash", "syndicator_id_hash", "campaign_id_hash", "target_item_taxonomy", "placement_id_hash", "publisher_id_hash", "source_id_hash", "source_item_type", "browser_platform", "country_code", "region", "gmt_offset"] # model = CatBoostClassifier(iterations = 50, learning_rate=0.5, task_type='CPU', loss_function='Logloss', cat_features = encode_cols) model = CatBoostClassifier(iterations = 180, learning_rate=0.7, task_type='CPU', loss_function='Logloss', cat_features = encode_cols, eval_metric='AUC')#, depth=6, l2_leaf_reg= 10) """ All of our tries with catboost (only the best of them were uploaded to kaggle): results: all features, all rows of train fillna = Unknown logloss 100 iterations learning rate 0.5 10 files: 0.857136889762303 | bestTest = 0.4671640673 0.857136889762303 logloss 100 iterations learning rate 0.4 10 files: bestTest = 0.4676805926 0.856750110976787 logloss 100 iterations learning rate 0.55 10 files: bestTest = 0.4669830858 0.8572464626142212 logloss 120 iterations learning rate 0.6 10 files: bestTest = 0.4662084678 0.8577564702279399 logloss 150 iterations learning rate 0.7 10 files: bestTest = 0.4655981391 0.8581645278496352 logloss 180 iterations learning rate 0.7 10 files: bestTest = 0.4653168207 0.8583423138228865 !!!!!!!!!! 
logloss 180 iterations learning rate 0.7 10 files day extracted from date (not as categorical): 0.8583034988 logloss 180 iterations learning rate 0.7 10 files day extracted from date (as categorical): 0.8583014151 logloss 180 iterations learning rate 0.75 10 files day extracted from date (as categorical): 0.8582889749 logloss 180 iterations learning rate 0.65 10 files day extracted from date (as categorical): 0.8582334254 logloss 180 iterations learning rate 0.65 10 files day extracted from date (as categorical) StandardScaler: 0.8582101013 logloss 180 iterations learning rate 0.7 10 files day extracted from date (as categorical) MinMaxScaler dropna: ~0.8582 logloss 180 iterations learning rate 0.7 distributed data train and val, day extracted as categorical MinMaxScaler: 0.8561707 logloss 180 iterations learning rate 0.7 distributed data train and val, day extracted as not categorical no scale: 0.8561707195 logloss 180 iterations learning rate 0.7 distributed data train and val, no scale no date: 0.8559952294 logloss 180 iterations learning rate 0.7 distributed data train and val, day extracted as not categorical no scale with date: 0.8560461554 logloss 180 iterations learning rate 0.7, 9 times distributed data train and val, no user no date: 0.8545560094 logloss 180 iterations learning rate 0.7, 9 times distributed data train and val, with user and numeric day: 0.8561601034 logloss 180 iterations learning rate 0.7, 9 times distributed data train and val, with user with numeric date: 0.8568834122 logloss 180 iterations learning rate 0.7, 10 different files, scaled, all features: 0.8584829166 !!!!!!! 
logloss 180 iterations learning rate 0.7, new data, scaled, all features: 0.8915972905 test: 0.84108 logloss 180 iterations learning rate 0.9 10 files: bestTest = 0.4656462845 logloss 100 iterations learning rate 0.5 8 files: 0.8568031111965864 logloss 300 iterations learning rate 0.5: crossentropy 50 iterations learning rate 0.5: 0.8556282878645277 """ model.fit(X_train, y_train, eval_set=(X_val, y_val), verbose=10) ``` # Submission File ``` test_data = pd.read_csv("/content/drive/MyDrive/RS_Kaggle_Competition/test/test_file.csv") test_data = test_data.iloc[:,:-1] test_data[scaling_cols] = scaler.transform(test_data[scaling_cols]) X_test = test_data.fillna("Unknown") X_test pred_proba = model.predict_proba(X_test) submission_dir_path = "/content/drive/MyDrive/RS_Kaggle_Competition/submissions" pred = pred_proba[:,1] pred_df = pd.DataFrame(pred) pred_df.reset_index(inplace=True) pred_df.columns = ['Id', 'Predicted'] pred_df.to_csv(submission_dir_path + '/catboost_submission_datafrom1704_data_lr_0.7_with_scale_with_num_startdate_with_user_iters_159.csv', index=False) ```
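The leaderboard numbers above are AUC values. For a local check, the same metric can be computed on the validation split with `sklearn.metrics.roc_auc_score`; a toy sketch with made-up labels and probabilities:

```python
from sklearn.metrics import roc_auc_score

# Toy example: 4 validation labels and predicted click probabilities
y_val_toy = [0, 0, 1, 1]
proba_toy = [0.1, 0.4, 0.35, 0.8]

# AUC is the probability that a random positive is ranked above a random negative;
# here 3 of the 4 positive/negative pairs are ordered correctly
auc = roc_auc_score(y_val_toy, proba_toy)
print(auc)  # 0.75
```

On the real data this would be `roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])`.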
# Random Search Algorithms

### Importing Necessary Libraries
```
import six
import sys
sys.modules['sklearn.externals.six'] = six
import mlrose
import numpy as np
import pandas as pd
import seaborn as sns
import mlrose_hiive
import matplotlib.pyplot as plt

np.random.seed(44)
sns.set_style("darkgrid")
```
### Defining a Fitness Function Object
```
# Define alternative N-Queens fitness function for maximization problem
def queens_max(state):
    # Initialize counter
    fitness = 0
    # For all pairs of queens
    for i in range(len(state) - 1):
        for j in range(i + 1, len(state)):
            # Check for horizontal, diagonal-up and diagonal-down attacks
            if (state[j] != state[i]) \
                    and (state[j] != state[i] + (j - i)) \
                    and (state[j] != state[i] - (j - i)):
                # If no attacks, then increment counter
                fitness += 1
    return fitness

# Initialize custom fitness function object
fitness_cust = mlrose.CustomFitness(queens_max)
```
### Defining an Optimization Problem Object
```
%%time
# DiscreteOpt() takes integers in range 0 to max_val - 1, defined at initialization
number_of_queens = 16
problem = mlrose_hiive.DiscreteOpt(length = number_of_queens, fitness_fn = fitness_cust, maximize = True, max_val = number_of_queens)
```
### Optimization #1 Simulated Annealing
```
%%time
sa = mlrose_hiive.SARunner(problem, experiment_name="SA_Exp", iteration_list=[10000],
                           temperature_list=[10, 50, 100, 250, 500],
                           decay_list=[mlrose_hiive.ExpDecay, mlrose_hiive.GeomDecay],
                           seed=44, max_attempts=100)
sa_run_stats, sa_run_curves = sa.run()

last_iters = sa_run_stats[sa_run_stats.Iteration != 0].reset_index()
print('Mean:', last_iters.Fitness.mean(), '\nMin:', last_iters.Fitness.min(), '\nMax:', last_iters.Fitness.max())
print('Mean Time:', last_iters.Time.mean())

best_index_in_curve = sa_run_curves.Fitness.idxmax()
best_decay = sa_run_curves.iloc[best_index_in_curve].Temperature
best_curve = sa_run_curves.loc[sa_run_curves.Temperature == best_decay, :]
best_curve.reset_index(inplace=True)
best_decay

best_index_in_curve =
sa_run_curves.Fitness.idxmax() best_decay = sa_run_curves.iloc[best_index_in_curve].Temperature best_sa_curve = sa_run_curves.loc[sa_run_curves.Temperature == best_decay, :] best_sa_curve.reset_index(inplace=True) # draw lineplot sa_run_curves['Temperature'] = sa_run_curves['Temperature'].astype(str).astype(float) sa_run_curves_t1 = sa_run_curves[sa_run_curves['Temperature'] == 10] sa_run_curves_t2 = sa_run_curves[sa_run_curves['Temperature'] == 50] sa_run_curves_t3 = sa_run_curves[sa_run_curves['Temperature'] == 100] sa_run_curves_t4 = sa_run_curves[sa_run_curves['Temperature'] == 250] sa_run_curves_t5 = sa_run_curves[sa_run_curves['Temperature'] == 500] sns.lineplot(x="Iteration", y="Fitness", data=sa_run_curves_t1, label = "t = 10") sns.lineplot(x="Iteration", y="Fitness", data=sa_run_curves_t2, label = "t = 50") sns.lineplot(x="Iteration", y="Fitness", data=sa_run_curves_t3, label = "t = 100") sns.lineplot(x="Iteration", y="Fitness", data=sa_run_curves_t4, label = "t = 250") sns.lineplot(x="Iteration", y="Fitness", data=sa_run_curves_t5, label = "t = 500") plt.title('16-Queens SA Fitness Vs Iterations') plt.show() sa_run_curves ``` ### Optimization #2 Genetic Algorithm ``` %%time ga = mlrose_hiive.GARunner(problem=problem, experiment_name="GA_Exp", seed=44, iteration_list = [10000], max_attempts = 100, population_sizes = [100, 500], mutation_rates = [0.1, 0.25, 0.5]) ga_run_stats, ga_run_curves = ga.run() last_iters = ga_run_stats[ga_run_stats.Iteration != 0].reset_index() print("Max and mean") print(last_iters.Fitness.max(), last_iters.Fitness.mean(), last_iters.Time.mean()) print(last_iters.groupby("Mutation Rate").Fitness.mean()) print(last_iters.groupby("Population Size").Fitness.mean()) print(last_iters.groupby("Population Size").Time.mean()) # draw lineplot ga_run_curves_mu1 = ga_run_curves[ga_run_curves['Mutation Rate'] == 0.1] ga_run_curves_mu2 = ga_run_curves[ga_run_curves['Mutation Rate'] == 0.25] ga_run_curves_mu3 = 
ga_run_curves[ga_run_curves['Mutation Rate'] == 0.5] sns.lineplot(x="Iteration", y="Fitness", data=ga_run_curves_mu1, label = "mr = 0.1") sns.lineplot(x="Iteration", y="Fitness", data=ga_run_curves_mu2, label = "mr = 0.25") sns.lineplot(x="Iteration", y="Fitness", data=ga_run_curves_mu3, label = "mr = 0.5") plt.title('16-Queens GA Fitness Vs Iterations') plt.show() ``` ### Optimization #3 MIMIC ``` %%time mmc = mlrose_hiive.MIMICRunner(problem=problem, experiment_name="MMC_Exp", seed=44, iteration_list=[10000], max_attempts=100, population_sizes=[100,500], keep_percent_list=[0.1, 0.25, 0.5], use_fast_mimic=True) # the two data frames will contain the results mmc_run_stats, mmc_run_curves = mmc.run() last_iters = mmc_run_stats[mmc_run_stats.Iteration != 0].reset_index() print("Max and mean") print(last_iters.Fitness.max(), last_iters.Fitness.mean(), last_iters.Time.mean()) print(last_iters.groupby("Keep Percent").Fitness.mean()) print(last_iters.groupby("Population Size").Fitness.mean()) print(last_iters.groupby("Population Size").Time.mean()) mmc_run_curves # draw lineplot mmc_run_curves_kp1 = mmc_run_curves[mmc_run_curves['Keep Percent'] == 0.1] mmc_run_curves_kp2 = mmc_run_curves[mmc_run_curves['Keep Percent'] == 0.25] mmc_run_curves_kp3 = mmc_run_curves[mmc_run_curves['Keep Percent'] == 0.5] sns.lineplot(x="Iteration", y="Fitness", data=mmc_run_curves_kp1, label = "kp = 0.1") sns.lineplot(x="Iteration", y="Fitness", data=mmc_run_curves_kp2, label = "kp = 0.25") sns.lineplot(x="Iteration", y="Fitness", data=mmc_run_curves_kp3, label = "kp = 0.5") plt.title('16-Queens MIMIC Fitness Vs Iterations') plt.show() ``` ### Optimization #4 Randomized Hill Climbing ``` %%time runner_return = mlrose_hiive.RHCRunner(problem, experiment_name="first_try", iteration_list=[10000], seed=44, max_attempts=100, restart_list=[100]) rhc_run_stats, rhc_run_curves = runner_return.run() last_iters = rhc_run_stats[rhc_run_stats.Iteration != 0].reset_index() 
print(last_iters.Fitness.mean(), last_iters.Fitness.max()) print(last_iters.Time.max()) best_index_in_curve = rhc_run_curves.Fitness.idxmax() best_decay = rhc_run_curves.iloc[best_index_in_curve].current_restart best_RHC_curve = rhc_run_curves.loc[rhc_run_curves.current_restart == best_decay, :] best_RHC_curve.reset_index(inplace=True) best_RHC_curve # draw lineplot sns.lineplot(x="Iteration", y="Fitness", data=best_RHC_curve) plt.title('16-Queens RHC Fitness Vs Iterations') plt.show() sns.lineplot(x="Iteration", y="Fitness", data=ga_run_curves_mu3, label = "GA") sns.lineplot(x="Iteration", y="Fitness", data=best_sa_curve, label = "SA") sns.lineplot(x="Iteration", y="Fitness", data=mmc_run_curves, label = "MIMIC") sns.lineplot(x="Iteration", y="Fitness", data=best_RHC_curve, label = "RHC") plt.show() ```
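As a sanity check on the `queens_max` fitness used above: a state with no attacking pairs scores `C(N, 2)` pairs, so the 16-queens optimum the runners are chasing is 120. A self-contained re-implementation (duplicated here so it runs outside the notebook, without mlrose):

```python
from math import comb

def queens_max(state):
    # Count pairs of queens that do not attack each other
    fitness = 0
    for i in range(len(state) - 1):
        for j in range(i + 1, len(state)):
            if (state[j] != state[i]) \
                    and (state[j] != state[i] + (j - i)) \
                    and (state[j] != state[i] - (j - i)):
                fitness += 1
    return fitness

# A known valid 8-queens solution: no two queens attack each other
solution = [0, 4, 7, 5, 2, 6, 1, 3]
print(queens_max(solution))  # 28, i.e. comb(8, 2)

# All queens on one row: every pair attacks, fitness 0
print(queens_max([0] * 8))  # 0

# Target fitness for the 16-queens runs above
print(comb(16, 2))  # 120
```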
``` %matplotlib inline ``` Performance Tuning Guide ************************* **Author**: `Szymon Migacz <https://github.com/szmigacz>`_ Performance Tuning Guide is a set of optimizations and best practices which can accelerate training and inference of deep learning models in PyTorch. Presented techniques often can be implemented by changing only a few lines of code and can be applied to a wide range of deep learning models across all domains. General optimizations --------------------- Enable async data loading and augmentation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ `torch.utils.data.DataLoader <https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader>`_ supports asynchronous data loading and data augmentation in separate worker subprocesses. The default setting for ``DataLoader`` is ``num_workers=0``, which means that the data loading is synchronous and done in the main process. As a result the main training process has to wait for the data to be available to continue the execution. Setting ``num_workers > 0`` enables asynchronous data loading and overlap between the training and data loading. ``num_workers`` should be tuned depending on the workload, CPU, GPU, and location of training data. ``DataLoader`` accepts ``pin_memory`` argument, which defaults to ``False``. When using a GPU it's better to set ``pin_memory=True``, this instructs ``DataLoader`` to use pinned memory and enables faster and asynchronous memory copy from the host to the GPU. Disable gradient calculation for validation or inference ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ PyTorch saves intermediate buffers from all operations which involve tensors that require gradients. Typically gradients aren't needed for validation or inference. 
`torch.no_grad() <https://pytorch.org/docs/stable/generated/torch.no_grad.html#torch.no_grad>`_ context manager can be applied to disable gradient calculation within a specified block of code, this accelerates execution and reduces the amount of required memory. `torch.no_grad() <https://pytorch.org/docs/stable/generated/torch.no_grad.html#torch.no_grad>`_ can also be used as a function decorator. Disable bias for convolutions directly followed by a batch norm ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ `torch.nn.Conv2d() <https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html#torch.nn.Conv2d>`_ has ``bias`` parameter which defaults to ``True`` (the same is true for `Conv1d <https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html#torch.nn.Conv1d>`_ and `Conv3d <https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html#torch.nn.Conv3d>`_ ). If a ``nn.Conv2d`` layer is directly followed by a ``nn.BatchNorm2d`` layer, then the bias in the convolution is not needed, instead use ``nn.Conv2d(..., bias=False, ....)``. Bias is not needed because in the first step ``BatchNorm`` subtracts the mean, which effectively cancels out the effect of bias. This is also applicable to 1d and 3d convolutions as long as ``BatchNorm`` (or other normalization layer) normalizes on the same dimension as convolution's bias. Models available from `torchvision <https://github.com/pytorch/vision>`_ already implement this optimization. 
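A minimal sketch of the convolution-plus-batch-norm pattern described above (the layer sizes and input shape are arbitrary, for illustration only):

```python
import torch
import torch.nn as nn

# Conv2d directly followed by BatchNorm2d: the convolution bias would be
# cancelled by BatchNorm's mean subtraction, so it can be disabled.
block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)

x = torch.randn(8, 3, 32, 32)
out = block(x)
print(out.shape)      # torch.Size([8, 64, 32, 32])
print(block[0].bias)  # None
```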
Use parameter.grad = None instead of model.zero_grad() or optimizer.zero_grad()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Instead of calling:

```
model.zero_grad()
# or
optimizer.zero_grad()
```

to zero out gradients, use the following method instead:

```
for param in model.parameters():
    param.grad = None
```

The second code snippet does not zero the memory of each individual parameter, and the subsequent backward pass uses assignment instead of addition to store gradients; this reduces the number of memory operations.

Setting gradient to ``None`` has a slightly different numerical behavior than setting it to zero, for more details refer to the `documentation <https://pytorch.org/docs/master/optim.html#torch.optim.Optimizer.zero_grad>`_.

Alternatively, starting from PyTorch 1.7, call ``model.zero_grad(set_to_none=True)`` or ``optimizer.zero_grad(set_to_none=True)``.

Fuse pointwise operations
~~~~~~~~~~~~~~~~~~~~~~~~~
Pointwise operations (elementwise addition, multiplication, math functions - ``sin()``, ``cos()``, ``sigmoid()`` etc.) can be fused into a single kernel to amortize memory access time and kernel launch time.

`PyTorch JIT <https://pytorch.org/docs/stable/jit.html>`_ can fuse kernels automatically, although there could be additional fusion opportunities not yet implemented in the compiler, and not all device types are supported equally.

Pointwise operations are memory-bound; for each operation PyTorch launches a separate kernel. Each kernel loads data from the memory, performs computation (this step is usually inexpensive) and stores results back into the memory.

Fused operator launches only one kernel for multiple fused pointwise ops and loads/stores data only once to the memory. This makes JIT very useful for activation functions, optimizers, custom RNN cells etc.
In the simplest case fusion can be enabled by applying `torch.jit.script <https://pytorch.org/docs/stable/generated/torch.jit.script.html#torch.jit.script>`_ decorator to the function definition, for example: ``` @torch.jit.script def fused_gelu(x): return x * 0.5 * (1.0 + torch.erf(x / 1.41421)) ``` Refer to `TorchScript documentation <https://pytorch.org/docs/stable/jit.html>`_ for more advanced use cases. Enable channels_last memory format for computer vision models ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ PyTorch 1.5 introduced support for ``channels_last`` memory format for convolutional networks. This format is meant to be used in conjunction with `AMP <https://pytorch.org/docs/stable/amp.html>`_ to further accelerate convolutional neural networks with `Tensor Cores <https://www.nvidia.com/en-us/data-center/tensor-cores/>`_. Support for ``channels_last`` is experimental, but it's expected to work for standard computer vision models (e.g. ResNet-50, SSD). To convert models to ``channels_last`` format follow `Channels Last Memory Format Tutorial <https://tutorials.pytorch.kr/intermediate/memory_format_tutorial.html>`_. The tutorial includes a section on `converting existing models <https://tutorials.pytorch.kr/intermediate/memory_format_tutorial.html#converting-existing-models>`_. Checkpoint intermediate buffers ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Buffer checkpointing is a technique to mitigate the memory capacity burden of model training. Instead of storing inputs of all layers to compute upstream gradients in backward propagation, it stores the inputs of a few layers and the others are recomputed during backward pass. The reduced memory requirements enables increasing the batch size that can improve utilization. Checkpointing targets should be selected carefully. The best is not to store large layer outputs that have small re-computation cost. The example target layers are activation functions (e.g. 
``ReLU``, ``Sigmoid``, ``Tanh``), up/down sampling and matrix-vector operations with small accumulation depth. PyTorch supports a native `torch.utils.checkpoint <https://pytorch.org/docs/stable/checkpoint.html>`_ API to automatically perform checkpointing and recomputation. Disable debugging APIs ~~~~~~~~~~~~~~~~~~~~~~ Many PyTorch APIs are intended for debugging and should be disabled for regular training runs: * anomaly detection: `torch.autograd.detect_anomaly <https://pytorch.org/docs/stable/autograd.html#torch.autograd.detect_anomaly>`_ or `torch.autograd.set_detect_anomaly(True) <https://pytorch.org/docs/stable/autograd.html#torch.autograd.set_detect_anomaly>`_ * profiler related: `torch.autograd.profiler.emit_nvtx <https://pytorch.org/docs/stable/autograd.html#torch.autograd.profiler.emit_nvtx>`_, `torch.autograd.profiler.profile <https://pytorch.org/docs/stable/autograd.html#torch.autograd.profiler.profile>`_ * autograd gradcheck: `torch.autograd.gradcheck <https://pytorch.org/docs/stable/autograd.html#torch.autograd.gradcheck>`_ or `torch.autograd.gradgradcheck <https://pytorch.org/docs/stable/autograd.html#torch.autograd.gradgradcheck>`_ GPU specific optimizations -------------------------- Enable cuDNN auto-tuner ~~~~~~~~~~~~~~~~~~~~~~~ `NVIDIA cuDNN <https://developer.nvidia.com/cudnn>`_ supports many algorithms to compute a convolution. Autotuner runs a short benchmark and selects the kernel with the best performance on a given hardware for a given input size. For convolutional networks (other types currently not supported), enable cuDNN autotuner before launching the training loop by setting: ``` torch.backends.cudnn.benchmark = True ``` * the auto-tuner decisions may be non-deterministic; different algorithm may be selected for different runs. 
For more details see `PyTorch: Reproducibility <https://pytorch.org/docs/stable/notes/randomness.html?highlight=determinism>`_ * in some rare cases, such as with highly variable input sizes, it's better to run convolutional networks with autotuner disabled to avoid the overhead associated with algorithm selection for each input size. Avoid unnecessary CPU-GPU synchronization ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Avoid unnecessary synchronizations, to let the CPU run ahead of the accelerator as much as possible to make sure that the accelerator work queue contains many operations. When possible, avoid operations which require synchronizations, for example: * ``print(cuda_tensor)`` * ``cuda_tensor.item()`` * memory copies: ``tensor.cuda()``, ``cuda_tensor.cpu()`` and equivalent ``tensor.to(device)`` calls * ``cuda_tensor.nonzero()`` * python control flow which depends on results of operations performed on cuda tensors e.g. ``if (cuda_tensor != 0).all()`` Create tensors directly on the target device ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Instead of calling ``torch.rand(size).cuda()`` to generate a random tensor, produce the output directly on the target device: ``torch.rand(size, device=torch.device('cuda'))``. This is applicable to all functions which create new tensors and accept ``device`` argument: `torch.rand() <https://pytorch.org/docs/stable/generated/torch.rand.html#torch.rand>`_, `torch.zeros() <https://pytorch.org/docs/stable/generated/torch.zeros.html#torch.zeros>`_, `torch.full() <https://pytorch.org/docs/stable/generated/torch.full.html#torch.full>`_ and similar. Use mixed precision and AMP ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Mixed precision leverages `Tensor Cores <https://www.nvidia.com/en-us/data-center/tensor-cores/>`_ and offers up to 3x overall speedup on Volta and newer GPU architectures. To use Tensor Cores AMP should be enabled and matrix/tensor dimensions should satisfy requirements for calling kernels that use Tensor Cores. 
To use Tensor Cores: * set sizes to multiples of 8 (to map onto dimensions of Tensor Cores) * see `Deep Learning Performance Documentation <https://docs.nvidia.com/deeplearning/performance/index.html#optimizing-performance>`_ for more details and guidelines specific to layer type * if layer size is derived from other parameters rather than fixed, it can still be explicitly padded e.g. vocabulary size in NLP models * enable AMP * Introduction to Mixed Precision Training and AMP: `video <https://www.youtube.com/watch?v=jF4-_ZK_tyc&feature=youtu.be>`_, `slides <https://nvlabs.github.io/eccv2020-mixed-precision-tutorial/files/dusan_stosic-training-neural-networks-with-tensor-cores.pdf>`_ * native PyTorch AMP is available starting from PyTorch 1.6: `documentation <https://pytorch.org/docs/stable/amp.html>`_, `examples <https://pytorch.org/docs/stable/notes/amp_examples.html#amp-examples>`_, `tutorial <https://tutorials.pytorch.kr/recipes/recipes/amp_recipe.html>`_ Pre-allocate memory in case of variable input length ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Models for speech recognition or for NLP are often trained on input tensors with variable sequence length. Variable length can be problematic for PyTorch caching allocator and can lead to reduced performance or to unexpected out-of-memory errors. If a batch with a short sequence length is followed by an another batch with longer sequence length, then PyTorch is forced to release intermediate buffers from previous iteration and to re-allocate new buffers. This process is time consuming and causes fragmentation in the caching allocator which may result in out-of-memory errors. A typical solution is to implement pre-allocation. It consists of the following steps: #. generate a (usually random) batch of inputs with maximum sequence length (either corresponding to max length in the training dataset or to some predefined threshold) #. 
execute a forward and a backward pass with the generated batch, do not execute an optimizer or a learning rate scheduler, this step pre-allocates buffers of maximum size, which can be reused in subsequent training iterations #. zero out gradients #. proceed to regular training Distributed optimizations ------------------------- Use efficient data-parallel backend ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ PyTorch has two ways to implement data-parallel training: * `torch.nn.DataParallel <https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html#torch.nn.DataParallel>`_ * `torch.nn.parallel.DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel>`_ ``DistributedDataParallel`` offers much better performance and scaling to multiple-GPUs. For more information refer to the `relevant section of CUDA Best Practices <https://pytorch.org/docs/stable/notes/cuda.html#use-nn-parallel-distributeddataparallel-instead-of-multiprocessing-or-nn-dataparallel>`_ from PyTorch documentation. Skip unnecessary all-reduce if training with DistributedDataParallel and gradient accumulation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ By default `torch.nn.parallel.DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel>`_ executes gradient all-reduce after every backward pass to compute the average gradient over all workers participating in the training. If training uses gradient accumulation over N steps, then all-reduce is not necessary after every training step, it's only required to perform all-reduce after the last call to backward, just before the execution of the optimizer. 
``DistributedDataParallel`` provides `no_sync() <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel.no_sync>`_ context manager which disables gradient all-reduce for particular iteration. ``no_sync()`` should be applied to first ``N-1`` iterations of gradient accumulation, the last iteration should follow the default execution and perform the required gradient all-reduce. Match the order of layers in constructors and during the execution if using DistributedDataParallel(find_unused_parameters=True) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ `torch.nn.parallel.DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel>`_ with ``find_unused_parameters=True`` uses the order of layers and parameters from model constructors to build buckets for ``DistributedDataParallel`` gradient all-reduce. ``DistributedDataParallel`` overlaps all-reduce with the backward pass. All-reduce for a particular bucket is asynchronously triggered only when all gradients for parameters in a given bucket are available. To maximize the amount of overlap, the order in model constructors should roughly match the order during the execution. If the order doesn't match, then all-reduce for the entire bucket waits for the gradient which is the last to arrive, this may reduce the overlap between backward pass and all-reduce, all-reduce may end up being exposed, which slows down the training. ``DistributedDataParallel`` with ``find_unused_parameters=False`` (which is the default setting) relies on automatic bucket formation based on order of operations encountered during the backward pass. With ``find_unused_parameters=False`` it's not necessary to reorder layers or parameters to achieve optimal performance. 
Load-balance workload in a distributed setting ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Load imbalance typically may happen for models processing sequential data (speech recognition, translation, language models etc.). If one device receives a batch of data with sequence length longer than sequence lengths for the remaining devices, then all devices wait for the worker which finishes last. Backward pass functions as an implicit synchronization point in a distributed setting with `DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel>`_ backend. There are multiple ways to solve the load balancing problem. The core idea is to distribute workload over all workers as uniformly as possible within each global batch. For example Transformer solves imbalance by forming batches with approximately constant number of tokens (and variable number of sequences in a batch), other models solve imbalance by bucketing samples with similar sequence length or even by sorting dataset by sequence length.
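The constant-token batching idea used by Transformer can be sketched framework-agnostically. The helper below is hypothetical (not a PyTorch API): it groups sequences so that the padded size of each batch, batch size times the longest sequence in the batch, stays under a token budget.

```python
def batch_by_tokens(lengths, max_tokens):
    """Group sequence indices into batches whose padded token count
    (batch_size * max_length_in_batch) stays within max_tokens."""
    batches, current = [], []
    # Sorting by length keeps sequences of similar length together,
    # which minimizes padding inside each batch
    for idx in sorted(range(len(lengths)), key=lambda i: lengths[i]):
        candidate = current + [idx]
        # Padded cost: every sequence is padded to the longest in the batch
        cost = len(candidate) * max(lengths[i] for i in candidate)
        if current and cost > max_tokens:
            batches.append(current)
            candidate = [idx]
        current = candidate
    if current:
        batches.append(current)
    return batches

lengths = [5, 12, 7, 3, 9, 11, 2, 8]
batches = batch_by_tokens(lengths, max_tokens=20)
print(batches)
```

Each resulting batch has a roughly constant number of (padded) tokens, so workers receive comparable amounts of work even though batch sizes vary.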
# 78. Subsets

__Difficulty__: Medium [Link](https://leetcode.com/problems/subsets/)

Given an integer array `nums` of unique elements, return all possible subsets (the power set). The solution set must not contain duplicate subsets. Return the solution in any order.

__Example 1__:

Input: `nums = [1,2,3]`

Output: `[[],[1],[2],[1,2],[3],[1,3],[2,3],[1,2,3]]`

```
from typing import List
```

## DFS Approach

```
class SolutionDFS:
    def dfs(self, res, nums):
        if len(nums) == 0:
            return [res]
        ans = []
        for i, num in enumerate(nums):
            # Include num, then recurse on the elements after it
            ans.extend(self.dfs(res + [num], nums[i+1:]))
        ans.append(res)
        return ans

    def subsets(self, nums: List[int]) -> List[List[int]]:
        return self.dfs([], nums)
```

## Using a bit-mask to indicate selected items from the list of numbers

```
class SolutionMask:
    def subsets(self, nums: List[int]) -> List[List[int]]:
        combs = []
        n = len(nums)
        for mask in range(0, 2**n):
            i = 0
            rem = mask
            current_set = []
            while rem:
                if rem % 2:
                    current_set.append(nums[i])
                rem = rem // 2
                i += 1
            combs.append(current_set)
        return combs
```

A cleaner and more efficient implementation using a bit-mask:
```
class SolutionMask2:
    def subsets(self, nums: List[int]) -> List[List[int]]:
        res = []
        n = len(nums)
        nth_bit = 1 << n
        for i in range(2**n):
            # To create a bit-mask with length n
            bit_mask = bin(i | nth_bit)[3:]
            res.append([nums[j] for j in range(n) if bit_mask[j] == '1'])
        return res
```

## Test cases

```
# Example 1
nums1 = [1,2,3]
res1 = [[],[1],[2],[1,2],[3],[1,3],[2,3],[1,2,3]]

# Example 2
nums2 = [0]
res2 = [[],[0]]

# Example 3
nums3 = [0, -2, 5, -7, 9]
res3 = [[0,-2,5,-7,9],[0,-2,5,-7],[0,-2,5,9],[0,-2,5],[0,-2,-7,9],[0,-2,-7],[0,-2,9],[0,-2],[0,5,-7,9],[0,5,-7],[0,5,9],[0,5],[0,-7,9],[0,-7],[0,9],[0],[-2,5,-7,9],[-2,5,-7],[-2,5,9],[-2,5],[-2,-7,9],[-2,-7],[-2,9],[-2],[5,-7,9],[5,-7],[5,9],[5],[-7,9],[-7],[9],[]]

def test_function(inp, result):
    assert len(inp) == len(result)
    inp_set = [set(x) for x in inp]
    res_set = [set(x) for x in result]
    for i in inp_set:
        assert i in res_set

# Test DFS method
dfs_sol = SolutionDFS()
test_function(dfs_sol.subsets(nums1), res1)
test_function(dfs_sol.subsets(nums2), res2)
test_function(dfs_sol.subsets(nums3), res3)

# Test bit-mask method
mask_sol = SolutionMask()
test_function(mask_sol.subsets(nums1), res1)
test_function(mask_sol.subsets(nums2), res2)
test_function(mask_sol.subsets(nums3), res3)

# Test bit-mask method with bin()
mask_sol = SolutionMask2()
test_function(mask_sol.subsets(nums1), res1)
test_function(mask_sol.subsets(nums2), res2)
test_function(mask_sol.subsets(nums3), res3)
```
```
#r "nuget:Microsoft.ML,1.4.0"
#r "nuget:Microsoft.ML.AutoML,0.16.0"
#r "nuget:Microsoft.Data.Analysis,0.1.0"

using Microsoft.Data.Analysis;
using XPlot.Plotly;
using Microsoft.AspNetCore.Html;

// Register an HTML formatter that renders the first 20 rows of a DataFrame
Formatter<DataFrame>.Register((df, writer) =>
{
    var headers = new List<IHtmlContent>();
    headers.Add(th(i("index")));
    headers.AddRange(df.Columns.Select(c => (IHtmlContent)th(c.Name)));
    var rows = new List<List<IHtmlContent>>();
    var take = 20;
    for (var i = 0; i < Math.Min(take, df.RowCount); i++)
    {
        var cells = new List<IHtmlContent>();
        cells.Add(td(i));
        foreach (var obj in df[i])
        {
            cells.Add(td(obj));
        }
        rows.Add(cells);
    }
    var t = table(
        thead(headers),
        tbody(rows.Select(r => tr(r))));
    writer.Write(t);
}, "text/html");

using System.IO;
using System.Net.Http;

// Download the California housing dataset if it isn't cached locally
string housingPath = "housing.csv";
if (!File.Exists(housingPath))
{
    var contents = new HttpClient()
        .GetStringAsync("https://raw.githubusercontent.com/ageron/handson-ml2/master/datasets/housing/housing.csv").Result;
    File.WriteAllText("housing.csv", contents);
}

var housingData = DataFrame.LoadCsv(housingPath);
housingData

housingData.Description()

Chart.Plot(
    new Graph.Histogram()
    {
        x = housingData["median_house_value"],
        nbinsx = 20
    }
)

var chart = Chart.Plot(
    new Graph.Scattergl()
    {
        x = housingData["longitude"],
        y = housingData["latitude"],
        mode = "markers",
        marker = new Graph.Marker()
        {
            color = housingData["median_house_value"],
            colorscale = "Jet"
        }
    }
);
chart.Width = 600;
chart.Height = 600;
display(chart);

// Fisher-Yates shuffle used to build a random train/test split
static T[] Shuffle<T>(T[] array)
{
    Random rand = new Random();
    for (int i = 0; i < array.Length; i++)
    {
        int r = i + rand.Next(array.Length - i);
        T temp = array[r];
        array[r] = array[i];
        array[i] = temp;
    }
    return array;
}

int[] randomIndices = Shuffle(Enumerable.Range(0, (int)housingData.RowCount).ToArray());
int testSize = (int)(housingData.RowCount * .1);
int[] trainRows = randomIndices[testSize..];
int[] testRows = randomIndices[..testSize];

DataFrame housing_train = housingData[trainRows];
DataFrame housing_test = housingData[testRows];
display(housing_train.RowCount);
display(housing_test.RowCount);

using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.AutoML;

%%time
// Run an AutoML regression experiment for 15 seconds
var mlContext = new MLContext();
var experiment = mlContext.Auto().CreateRegressionExperiment(maxExperimentTimeInSeconds: 15);
var result = experiment.Execute(housing_train, labelColumnName: "median_house_value");

// Plot validation error against training time for each trainer
var scatters = result.RunDetails.Where(d => d.ValidationMetrics != null).GroupBy(
    r => r.TrainerName,
    (name, details) => new Graph.Scattergl()
    {
        name = name,
        x = details.Select(r => r.RuntimeInSeconds),
        y = details.Select(r => r.ValidationMetrics.MeanAbsoluteError),
        mode = "markers",
        marker = new Graph.Marker() { size = 12 }
    });
var chart = Chart.Plot(scatters);
chart.WithXTitle("Training Time");
chart.WithYTitle("Error");
display(chart);

Console.WriteLine($"Best Trainer:{result.BestRun.TrainerName}");

// Evaluate the best model on the held-out test set
var testResults = result.BestRun.Model.Transform(housing_test);
var trueValues = testResults.GetColumn<float>("median_house_value");
var predictedValues = testResults.GetColumn<float>("Score");
var predictedVsTrue = new Graph.Scattergl()
{
    x = trueValues,
    y = predictedValues,
    mode = "markers",
};
var maximumValue = Math.Max(trueValues.Max(), predictedValues.Max());
var perfectLine = new Graph.Scattergl()
{
    x = new[] { 0, maximumValue },
    y = new[] { 0, maximumValue },
    mode = "lines",
};
var chart = Chart.Plot(new[] { predictedVsTrue, perfectLine });
chart.WithXTitle("True Values");
chart.WithYTitle("Predicted Values");
chart.WithLegend(false);
chart.Width = 600;
chart.Height = 600;
display(chart);
```

# Dataset Card for "clean_notebooks"
