# Automate loan approvals with business rules in Apache Spark and Scala

### Automating your business decisions at scale in Apache Spark with IBM ODM 8.9.2

This Scala notebook shows you how to execute business rules locally in DSX and Apache Spark. You'll learn how to call a rule-based decision service from Apache Spark. This decision service has been programmed with IBM Operational Decision Manager.

This notebook puts into action a decision service named Miniloan that is part of the ODM tutorials. It uses business rules to determine whether a customer is eligible for a loan according to specific criteria. The criteria include the amount of the loan, the annual income of the borrower, and the duration of the loan.

First we load an application data set that was captured as a CSV file. In Scala, we apply a map to this data set to run rule-based reasoning and produce a decision for each application. The rule execution is performed locally in the Spark service.

This notebook shows complete Scala code that can execute any ruleset through the public APIs. To get the most out of this notebook, you should have some familiarity with the Scala programming language.

## Contents

This notebook contains the following main sections:

1. [Load the loan validation request dataset.](#loaddataset)
2. [Load the business rule execution and the simple loan application object model libraries.](#loadjars)
3. [Import Scala packages.](#importpackages)
4. [Implement a decision-making function.](#implementDecisionServiceMap)
5. [Execute the business rules to approve or reject the loan applications.](#executedecisions)
6. [View the automated decisions.](#viewdecisions)
7. [Summary and next steps.](#summary)

<a id="loaddataset"></a>
## 1. Load a loan application dataset file

A data set of simple loan applications is already available. You load it into the notebook through its URL.

```
// @hidden_cell
import scala.sys.process._
"wget https://raw.githubusercontent.com/ODMDev/decisions-on-spark/master/data/miniloan/miniloan-requests-10K.csv".!
val filename = "miniloan-requests-10K.csv"
```

The following code loads the dataset of 10,000 simple loan applications written in CSV format.

```
val requestData = sc.textFile(filename)
val requestDataCount = requestData.count
println(s"$requestDataCount loan requests read in CSV format")
println("The first 5 requests:")
requestData.take(5).foreach(println)
```

<a id="loadjars"></a>
## 2. Add libraries for business rule execution and a loan application object model

XXX refers to your object storage or another location where you make these JARs available.

Add the following JARs to execute the deployed decision service:
<ul>
<li>%AddJar https://XXX/j2ee_connector-1_5-fr.jar</li>
<li>%AddJar https://XXX/jrules-engine.jar</li>
<li>%AddJar https://XXX/jrules-res-execution.jar</li>
</ul>

In addition, you need the Jackson annotations library:
<ul>
<li>%AddJar https://XXX/jackson-annotations-2.6.5.jar</li>
</ul>

Business rules apply to a Java executable object model packaged as a JAR. We need these classes to create the decision requests and to retrieve the responses from the rule engine.
<il> <li>%AddJar https://XXX/miniloan-xom.jar</li> </il> ``` // @hidden_cell // The urls below are accessible for an IBM internal usage only %AddJar https://XXX/j2ee_connector-1_5-fr.jar %AddJar https://XXX/jrules-engine.jar %AddJar https://XXX/jrules-res-execution.jar %AddJar https://XXX/jackson-annotations-2.6.5.jar -f //Loan Application eXecutable Object Model %AddJar https://XXX/miniloan-xom.jar -f print("Your notebook is now ready to execute business rules to approve or reject loan applications") ``` <a id="importpackages"></a> ## 3. Import packages Import ODM and Apache Spark packages. ``` import java.util.Map import java.util.HashMap import com.fasterxml.jackson.core.JsonGenerationException import com.fasterxml.jackson.core.JsonProcessingException import com.fasterxml.jackson.databind.JsonMappingException import com.fasterxml.jackson.databind.ObjectMapper import com.fasterxml.jackson.databind.SerializationFeature import org.apache.spark.SparkConf import org.apache.spark.api.java.JavaDoubleRDD import org.apache.spark.api.java.JavaRDD import org.apache.spark.api.java.JavaSparkContext import org.apache.spark.api.java.function.Function import org.apache.hadoop.fs.FileSystem import org.apache.hadoop.fs.Path import scala.collection.JavaConverters._ import ilog.rules.res.model._ import com.ibm.res.InMemoryJ2SEFactory import com.ibm.res.InMemoryRepositoryDAO import ilog.rules.res.session._ import miniloan.Borrower import miniloan.Loan import scala.io.Source import java.net.URL import java.io.InputStream ``` <a id="implementDecisionServiceMap"></a> ## 4. Implement a Map function that executes a rule-based decision service ``` case class MiniLoanRequest(borrower: miniloan.Borrower, loan: miniloan.Loan) case class RESRunner(sessionFactory: com.ibm.res.InMemoryJ2SEFactory) { def executeAsString(s: String): String = { println("executeAsString") val request = makeRequest(s) val response = executeRequest(request) response } private def makeRequest(s: String): MiniLoanRequest = { val tokens = s.split(",") // Borrower deserialization from CSV val borrowerName = tokens(0) val borrowerCreditScore = java.lang.Integer.parseInt(tokens(1).trim()) val borrowerYearlyIncome = java.lang.Integer.parseInt(tokens(2).trim()) val loanAmount = java.lang.Integer.parseInt(tokens(3).trim()) val loanDuration = java.lang.Integer.parseInt(tokens(4).trim()) val yearlyInterestRate = java.lang.Double.parseDouble(tokens(5).trim()) val borrower = new miniloan.Borrower(borrowerName, borrowerCreditScore, borrowerYearlyIncome) // Loan request deserialization from CSV val loan = new miniloan.Loan() loan.setAmount(loanAmount) loan.setDuration(loanDuration) loan.setYearlyInterestRate(yearlyInterestRate) val request = new MiniLoanRequest(borrower, loan) request } def executeRequest(request: MiniLoanRequest): String = { try { val sessionRequest = sessionFactory.createRequest() val rulesetPath = "/Miniloan/Miniloan" sessionRequest.setRulesetPath(ilog.rules.res.model.IlrPath.parsePath(rulesetPath)) //sessionRequest.getTraceFilter.setInfoAllFilters(false) val inputParameters = sessionRequest.getInputParameters inputParameters.put("loan", request.loan) inputParameters.put("borrower", request.borrower) val session = sessionFactory.createStatelessSession() val response = session.execute(sessionRequest) var loan = response.getOutputParameters().get("loan").asInstanceOf[miniloan.Loan] val mapper = new com.fasterxml.jackson.databind.ObjectMapper() mapper.configure(com.fasterxml.jackson.databind.SerializationFeature.FAIL_ON_EMPTY_BEANS, false) 
val results = new java.util.HashMap[String,Object]() results.put("input", inputParameters) results.put("output", response.getOutputParameters()) try { //return mapper.writeValueAsString(results) return mapper.writerWithDefaultPrettyPrinter().writeValueAsString(results); } catch { case e: Exception => return e.toString() } "Error" } catch { case exception: Exception => { return exception.toString() } } "Error" } } val decisionService = new Function[String, String]() { @transient private var ruleSessionFactory: InMemoryJ2SEFactory = null private val rulesetURL = "https://odmlibserver.mybluemix.net/8901/decisionservices/miniloan-8901.dsar" @transient private var rulesetStream: InputStream = null def GetRuleSessionFactory(): InMemoryJ2SEFactory = { if (ruleSessionFactory == null) { ruleSessionFactory = new InMemoryJ2SEFactory() // Create the Management Session var repositoryFactory = ruleSessionFactory.createManagementSession().getRepositoryFactory() var repository = repositoryFactory.createRepository() // Deploy the Ruleapp with the Regular Management Session API. var rapp = repositoryFactory.createRuleApp("Miniloan", IlrVersion.parseVersion("1.0")); var rs = repositoryFactory.createRuleset("Miniloan",IlrVersion.parseVersion("1.1")); rapp.addRuleset(rs); //var fileStream = Source.fromResourceAsStream(RulesetFileName) rulesetStream = new java.net.URL(rulesetURL).openStream() rs.setRESRulesetArchive(IlrEngineType.DE,rulesetStream) repository.addRuleApp(rapp) } ruleSessionFactory } def call(s: String): String = { var runner = new RESRunner(GetRuleSessionFactory()) return runner.executeAsString(s) } def execute(s: String): String = { try { var runner = new RESRunner(GetRuleSessionFactory()) return runner.executeAsString(s) } catch { case exception: Exception => { exception.printStackTrace(System.err) } } "Execution error" } } ``` <a id="executedecisions"></a> ## 5. Automate the decision making on the loan application dataset You invoke a map on the decision function. While the map occurs rule engines are processing in parallel the loan applications to produce a data set of answers. ``` println("Start of Execution") val answers = requestData.map(decisionService.execute) printf("Number of rule based decisions: %s \n" , answers.count) // Cleanup output file //val fs = FileSystem.get(new URI(outputPath), sc.hadoopConfiguration); //if (fs.exists(new Path(outputPath))) // fs.delete(new Path(outputPath), true) // Save RDD in a HDFS file println("End of Execution ") //answers.saveAsTextFile("swift://DecisionBatchExecution." + securedAccessName + "/miniloan-decisions-10.csv") println("Decision automation job done") ``` <a id="viewdecisions"></a> ## 6. View your automated decisions Each decision is composed of output parameters and of a decision trace. The loan data contains the approval flag and the computed yearly repayment. The decision trace lists the business rules that have been executed in sequence to come to the conclusion. Each decision has been serialized in JSON. ``` //answers.toDF().show(false) answers.take(1).foreach(println) ``` <a id="summary"></a> ## 7. Summary and next steps Congratulations! You have applied business rules to automatically determine loan approval eligibility. You loaded a loan application data set, ran a rule engine inside an Apache Spark cluster to make an eligibility decision for each applicant. Each decision is a Scala object that is part of a Spark Resilient Data Set. Each decision is structured with input parameters (the context of the decision) and output parameters. 
For audit purposes, the rule engine can emit a decision trace.

You have successfully run a rule engine to automate decisions at scale in a Spark cluster. You can now write your own business rules and run them with the same integration pattern.

<a id="authors"></a>
## Authors

Pierre Feillet and Laurent Grateau are business rule engineers at IBM working in the Decision lab located in France.

Copyright © 2018 IBM. This notebook and its source code are released under the terms of the MIT License.
``` ##World Map Plotly #Import Plotly Lib and Set up Credentials with personal account !pip install plotly import plotly plotly.tools.set_credentials_file(username='igleonaitis', api_key='If6Wh3xWNmdNioPzOZZo') plotly.tools.set_config_file(world_readable=True, sharing='public') import plotly.plotly as py from plotly.graph_objs import * import plotly.graph_objs as go import pandas as pd #Import WHR 2017 data set df = pd.read_excel('whr.xlsx', sheetname='Figure2.2 WHR 2017') #Set Up World Map Plot scl = [[0,'rgb(140,101,211)'],[0.25,'rgb(154,147,236)'], [0.50,'rgb(0,82,165)'],[0.75,'rgb(129,203,248)'], [1,'rgb(65,179,247)']] data = [ dict( type = 'choropleth', locationmode = 'country names', locations = df['Country'], z = df['Happiness score'], text = df['Country'], colorscale = scl, autocolorscale = False, reversescale = False, marker = dict( line = dict ( color = 'rgb(180,180,180)', width = 0.5 ) ), colorbar = dict( autotick = False, tickprefix = False, title = 'World Happiness Score'), ) ] layout = dict( title = '2017 National Happiness Scores GDP<br>Source:\ <a href="http://worldhappiness.report/ed/2017/">\ World Happiness Report</a>', geo = dict( showframe = False, showcoastlines = False, projection = dict( type = 'Mercator' ) ) ) #Create World Map Plot fig = dict(data = data, layout = layout) py.iplot(fig, validate=False, filename='d3-world-map') df1 = pd.read_excel('whr.xlsx', sheetname='Figure2.2 WHR 2017') #Stacked Bar Plot trace1 = go.Bar( y = df1['Country'], x = df1['Explained by: GDP per capita'], orientation = 'h', width = .5, name = 'GDP per Capita', marker=dict( color='rgb(140,101,211)' ) ) trace2 = go.Bar( y = df1['Country'], x = df1['Explained by: Social support'], orientation = 'h', width = .5, name = 'Social Support', marker=dict( color='rgb(154,147,236)' ) ) trace3 = go.Bar( y = df1['Country'], x = df1['Explained by: Healthy life expectancy'], orientation = 'h', width = .5, name = 'Healthy Life Expectancy', marker=dict( color='rgb(0,82,165)' ) ) trace4 = go.Bar( y = df1['Country'], x = df1['Explained by: Freedom to make life choices'], orientation = 'h', width = .5, name = 'Freedom to Make Life Choices', marker=dict( color='rgb(129,203,248)' ) ) trace5 = go.Bar( y = df1['Country'], x = df1['Explained by: Generosity'], orientation = 'h', width = .5, name = 'Generosity', marker=dict( color='rgb(65,179,247)' ) ) trace6 = go.Bar( y = df1['Country'], x = df1['Explained by: Perceptions of corruption'], orientation = 'h', width = .5, name = 'Perceptions on Corruption', marker=dict( color='rgb(115, 235, 174)' ) ) data = [trace1, trace2, trace3, trace4, trace5, trace6] layout = go.Layout( title = 'Factor Makeup of Happiness Scores', barmode ='stack', autosize = False, width = 800, height = 1500, yaxis = dict( tickfont = dict( size = 6, color = 'black')), xaxis = dict( tickfont = dict( size = 10, color = 'black')) ) fig = go.Figure(data=data, layout=layout) py.iplot(fig, filename='stacked-horizontal-bar') import plotly.plotly as py from plotly.grid_objs import Grid, Column from plotly.figure_factory import * import pandas as pd import time xls_file = pd.ExcelFile('Internet_Usage.xls') xls_file dataset = xls_file.parse('Sheet1') dataset.head() years_from_col = set(dataset['year']) years_ints = sorted(list(years_from_col)) years = [str(year) for year in years_ints] # make list of continents continents = [] for continent in dataset['continent']: if continent not in continents: continents.append(continent) columns = [] # make grid for year in years: for continent in continents: 
dataset_by_year = dataset[dataset['year'] == int(year)] dataset_by_year_and_cont = dataset_by_year[dataset_by_year['continent'] == continent] for col_name in dataset_by_year_and_cont: # each column name is unique column_name = '{year}_{continent}_{header}_gapminder_grid'.format( year=year, continent=continent, header=col_name ) a_column = Column(list(dataset_by_year_and_cont[col_name]), column_name) columns.append(a_column) # upload grid grid = Grid(columns) url = py.grid_ops.upload(grid, 'gapminder_grid'+str(time.time()), auto_open=False) url figure = { 'data': [], 'layout': {}, 'frames': [], 'config': {'scrollzoom': True} } # fill in most of layout figure['layout']['xaxis'] = {'range': [2, 8], 'title': 'World Happiness Score', 'gridcolor': '#FFFFFF'} figure['layout']['yaxis'] = {'range': [0,100],'title': 'Internet Usage % of Pop.', 'gridcolor': '#FFFFFF'} figure['layout']['hovermode'] = 'closest' figure['layout']['plot_bgcolor'] = 'rgb(223, 232, 243)' sliders_dict = { 'active': 0, 'yanchor': 'top', 'xanchor': 'left', 'currentvalue': { 'font': {'size': 20}, 'prefix': 'Year:', 'visible': True, 'xanchor': 'right' }, 'transition': {'duration': 300, 'easing': 'cubic-in-out'}, 'pad': {'b': 10, 't': 50}, 'len': 0.9, 'x': 0.1, 'y': 0, 'steps': [] } figure['layout']['updatemenus'] = [ { 'buttons': [ { 'args': [None, {'frame': {'duration': 500, 'redraw': False}, 'fromcurrent': True, 'transition': {'duration': 300, 'easing': 'quadratic-in-out'}}], 'label': 'Play', 'method': 'animate' }, { 'args': [[None], {'frame': {'duration': 0, 'redraw': False}, 'mode': 'immediate', 'transition': {'duration': 0}}], 'label': 'Pause', 'method': 'animate' } ], 'direction': 'left', 'pad': {'r': 10, 't': 87}, 'showactive': False, 'type': 'buttons', 'x': 0.1, 'xanchor': 'right', 'y': 0, 'yanchor': 'top' } ] custom_colors = { 'Asia': 'rgb(171, 99, 250)', 'Europe': 'rgb(230, 99, 250)', 'Africa': 'rgb(99, 110, 250)', 'Americas': 'rgb(25, 211, 243)', 'Oceania': 'rgb(50, 170, 255)' } col_name_template = '{year}_{continent}_{header}_gapminder_grid' year = 2007 for continent in continents: data_dict = { 'xsrc': grid.get_column_reference(col_name_template.format( year=year, continent=continent, header='lifeExp' )), 'ysrc': grid.get_column_reference(col_name_template.format( year=year, continent=continent, header='gdpPercap' )), 'mode': 'markers', 'textsrc': grid.get_column_reference(col_name_template.format( year=year, continent=continent, header='country' )), 'marker': { 'sizemode': 'area', 'sizeref': 2000, 'sizesrc': grid.get_column_reference(col_name_template.format( year=year, continent=continent, header='pop' )), 'color': custom_colors[continent] }, 'name': continent } figure['data'].append(data_dict) for year in years: frame = {'data': [], 'name': str(year)} for continent in continents: data_dict = { 'xsrc': grid.get_column_reference(col_name_template.format( year=year, continent=continent, header='lifeExp' )), 'ysrc': grid.get_column_reference(col_name_template.format( year=year, continent=continent, header='gdpPercap' )), 'mode': 'markers', 'textsrc': grid.get_column_reference(col_name_template.format( year=year, continent=continent, header='country' )), 'marker': { 'sizemode': 'area', 'sizeref': 2000, 'sizesrc': grid.get_column_reference(col_name_template.format( year=year, continent=continent, header='pop' )), 'color': custom_colors[continent] }, 'name': continent } frame['data'].append(data_dict) figure['frames'].append(frame) slider_step = {'args': [ [year], {'frame': {'duration': 300, 'redraw': False}, 'mode': 
'immediate', 'transition': {'duration': 300}} ], 'label': year, 'method': 'animate'} sliders_dict['steps'].append(slider_step) figure['layout']['sliders'] = [sliders_dict] py.icreate_animations(figure, 'gapminder_example'+str(time.time())) ```
[this doc on github](https://github.com/dotnet/interactive/tree/master/samples/notebooks/fsharp/Samples) # Machine Learning over House Prices with ML.NET ### Reference the packages ``` #r "nuget:Microsoft.ML,1.4.0" #r "nuget:Microsoft.ML.AutoML,0.16.0" #r "nuget:Microsoft.Data.Analysis,0.2.0" #r "nuget: XPlot.Plotly.Interactive, 4.0.6" open Microsoft.Data.Analysis open XPlot.Plotly ``` ### Adding better default formatting for data frames Register a formatter for data frames and data frame rows. ``` module DateFrameFormatter = // Locally open the F# HTML DSL. open Html let maxRows = 20 Formatter.Register<DataFrame>((fun (df: DataFrame) (writer: TextWriter) -> let take = 20 table [] [ thead [] [ th [] [ str "Index" ] for c in df.Columns do th [] [ str c.Name] ] tbody [] [ for i in 0 .. min maxRows (int df.Rows.Count - 1) do tr [] [ td [] [ i ] for o in df.Rows.[int64 i] do td [] [ o ] ] ] ] |> writer.Write ), mimeType = "text/html") Formatter.Register<DataFrameRow>((fun (row: DataFrameRow) (writer: TextWriter) -> table [] [ tbody [] [ tr [] [ for o in row do td [] [ o ] ] ] ] |> writer.Write ), mimeType = "text/html") ``` ### Download the data ``` open System.Net.Http let housingPath = "housing.csv" if not(File.Exists(housingPath)) then let contents = HttpClient().GetStringAsync("https://raw.githubusercontent.com/ageron/handson-ml2/master/datasets/housing/housing.csv").Result File.WriteAllText("housing.csv", contents) ``` ### Add the data to the data frame ``` let housingData = DataFrame.LoadCsv(housingPath) housingData housingData.Description() ``` ### Display the data ``` let graph = Histogram(x = housingData.["median_house_value"], nbinsx = 20) graph |> Chart.Plot let graph = Scattergl( x = housingData.["longitude"], y = housingData.["latitude"], mode = "markers", marker = Marker( color = housingData.["median_house_value"], colorscale = "Jet")) let plot = Chart.Plot(graph) plot.Width <- 600 plot.Height <- 600 display(plot) ``` ### Prepare the training and validation sets ``` module Array = let shuffle (arr: 'T[]) = let rnd = Random() let arr = Array.copy arr for i in 0 .. arr.Length - 1 do let r = i + rnd.Next(arr.Length - i) let temp = arr.[r] arr.[r] <- arr.[i] arr.[i] <- temp arr let randomIndices = [| 0 .. int housingData.Rows.Count - 1 |] |> Array.shuffle let testSize = int (float (housingData.Rows.Count) * 0.1) let trainRows = randomIndices.[testSize..] 
let testRows = randomIndices.[..testSize - 1] let housing_train = housingData.[trainRows] let housing_test = housingData.[testRows] display(housing_train.Rows.Count) display(housing_test.Rows.Count) ``` ### Create the regression model and train it ``` #!time open Microsoft.ML open Microsoft.ML.Data open Microsoft.ML.AutoML let mlContext = MLContext() let experiment = mlContext.Auto().CreateRegressionExperiment(maxExperimentTimeInSeconds = 15u) let result = experiment.Execute(housing_train, labelColumnName = "median_house_value") ``` ### Display the training results ``` let scatters = result.RunDetails |> Seq.filter (fun d -> not (isNull d.ValidationMetrics)) |> Seq.groupBy (fun r -> r.TrainerName) |> Seq.map (fun (name, details) -> Scattergl( name = name, x = (details |> Seq.map (fun r -> r.RuntimeInSeconds)), y = (details |> Seq.map (fun r -> r.ValidationMetrics.MeanAbsoluteError)), mode = "markers", marker = Marker(size = 12))) let chart = Chart.Plot(scatters) chart.WithXTitle("Training Time") chart.WithYTitle("Error") display(chart) Console.WriteLine("Best Trainer:{0}", result.BestRun.TrainerName); ``` ### Validate and display the results ``` let testResults = result.BestRun.Model.Transform(housing_test) let trueValues = testResults.GetColumn<float32>("median_house_value") let predictedValues = testResults.GetColumn<float32>("Score") let predictedVsTrue = Scattergl( x = trueValues, y = predictedValues, mode = "markers") let maximumValue = Math.Max(Seq.max trueValues, Seq.max predictedValues) let perfectLine = Scattergl( x = [| 0.0f; maximumValue |], y = [| 0.0f; maximumValue |], mode = "lines") let chart = Chart.Plot([| predictedVsTrue; perfectLine |]) chart.WithXTitle("True Values") chart.WithYTitle("Predicted Values") chart.WithLegend(false) chart.Width = 600 chart.Height = 600 display(chart) ```
ML Course, Bogotá, Colombia (&copy; Josh Bloom; June 2019) ``` %run ../talktools.py ``` # Featurization and Dirty Data (and NLP) <img src="imgs/workflow.png"> Source: [V. Singh](https://www.slideshare.net/hortonworks/data-science-workshop) ### Notes: Biased workflow :) * Labels → Answer * Feature extraction. Taking images/data and manipulate it, and do a feature matrix, each feature is a number (measurable). Domain-specific. * Run a ML model on the featurized data. Split in validation and test sets. * After predicting and validating results, the new results can become new training data. But you have to beware of skewing the results or the training data → introducing bias. One way to fight this is randomize sampling/results? (I didn't understand how to do this tbh). <img src="imgs/feature.png"> Source: Lightsidelabs <img src="imgs/feature2.png"> # Featurization Examples In the real world, we are very rarely presented with a clean feature matrix. Raw data are missing, noisy, ugly and unfiltered. And sometimes we dont even have the data we need to make models and predictions. Indeed the conversion of raw data to data that's suitable for learning on is time consuming, difficult, and where a lot of the domain understanding is required. When we extract features from raw data (say PDF documents) we often are presented with a variety of data types: <img src="imgs/feat.png"> # Categorical & Missing Features Often times, we might be presented with raw data (say from an Excel spreadsheet) that looks like: | eye color | height | country of origin | gender | | ------------| ---------| ---------------------| ------- | | brown | 1.85 | Colombia | M | | brown | 1.25 | USA | | | blonde | 1.45 | Mexico | F | | red | 2.01 | Mexico | F | | | | Chile | F | | Brown | 1.02 | Colombia | | What do you notice in this dataset? Since many ML learn algorithms require, as we'll see, a full matrix of numerical input features, there's often times a lot of preprocessing work that is needed before we can learn. ``` import numpy as np import pandas as pd df = pd.DataFrame({"eye color": ["brown", "brown", "blonde", "red", None, "Brown"], "height": [1.85, 1.25, 1.45, 2.01, None, 1.02], "country of origin": ["Colombia", "USA", "Mexico", "Mexico", "Chile", "Colombia"], "gender": ["M", None, "F", "F","F", None]}) df ``` Let's first normalize the data so it's all lower case. This will handle the "Brown" and "brown" issue. ``` df_new = df.copy() df_new["eye color"] = df_new["eye color"].str.lower() df_new ``` Let's next handle the NaN in the height. What should we use here? ``` # mean of everyone? np.nanmean(df_new["height"].values) # mean of just females? np.nanmean(df_new[df_new["gender"] == 'F']["height"]) df_new1 = df_new.copy() df_new1.at[4, "height"] = np.nanmean(df_new[df_new["gender"] == 'F']["height"]) df_new1 ``` Let's next handle the eye color. What should we use? ``` df_new1["eye color"].mode() df_new2 = df_new1.copy() df_new2.at[4, "eye color"] = df_new1["eye color"].mode().values[0] df_new2 ``` How should we handle the missing gender entries? ``` df_new3 = df_new2.fillna("N/A") df_new3 ``` We're done, right? No. We fixed the dirty, missing data problem but we still dont have a numerical feature matrix. We could do a mapping such that "Colombia" -> 1, "USA" -> 2, ... etc. but then that would imply an ordering between what is fundamentally categories (without ordering). Instead we want to do `one-hot encoding`, where every unique value gets its own column. 
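To make the idea concrete before using the built-in helper below, here is a minimal hand-rolled sketch of one-hot encoding a single categorical column. It assumes the cleaned `df_new3` DataFrame from the cells above and is only meant to illustrate the mechanism; in practice you would not write this by hand.

```
# One-hot encode a single categorical column by hand
# (assumes df_new3 from the cells above).
import numpy as np
import pandas as pd

countries = sorted(df_new3["country of origin"].unique())
# Each unique country becomes its own 0/1 indicator column.
one_hot = pd.DataFrame(
    {"country of origin_" + c: (df_new3["country of origin"] == c).astype(int)
     for c in countries},
    index=df_new3.index)
encoded = pd.concat([df_new3.drop(columns=["country of origin"]), one_hot], axis=1)
encoded
```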
`pandas` as a method on DataFrames called `get_dummies` which does this for us. ### Notes: Many algorithms/methods cannot handle cathegorical data, that's why you have to do the `one-hot encoding` "trick". An example of one that can handle it is Random forest. ``` pd.get_dummies(df_new3, prefix=['eye color', 'country of origin', 'gender']) ``` Note: depending on the learning algorithm you use, you may want to do `drop_first=True` in `get_dummies`. Of course there are helpful tools that exist for us to deal with dirty, missing data. ``` %run transform bt = BasicTransformer(return_df=True) bt.fit_transform(df_new) ``` ## Time series The [wafer dataset](http://www.timeseriesclassification.com/description.php?Dataset=Wafer) is a set of timeseries capturing sensor measurements (1000 training examples, 6164 test examples) of one silicon wafer during the manufacture of semiconductors. Each wafer has a classification of normal or abnormal. The abnormal wafers are representative of a range of problems commonly encountered during semiconductor manufacturing. ``` import requests from io import StringIO dat_file = requests.get("https://github.com/zygmuntz/time-series-classification/blob/master/data/wafer/Wafer.csv?raw=true") data = StringIO(dat_file.text) %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd data.seek(0) df = pd.read_csv(data, header=None) df.head() ``` 0 to 151 time measurements, the latest colum is the label we are trying to predict. ``` df[152].value_counts() ## save the data as numpy arrays target = df.values[:,152].astype(int) time_series = df.values[:,0:152] normal_inds = np.argwhere(target == 1) ; np.random.shuffle(normal_inds) abnormal_inds = np.argwhere(target == -1); np.random.shuffle(abnormal_inds) num_to_plot = 3 fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(12,6)) for i in range(num_to_plot): ax1.plot(time_series[normal_inds[i][0],:], label=f"#{normal_inds[i][0]}: {target[normal_inds[i][0]]}") ax2.plot(time_series[abnormal_inds[i][0],:], label=f"#{abnormal_inds[i][0]}: {target[abnormal_inds[i][0]]}") ax1.legend() ax2.legend() ax1.set_title("Normal") ; ax2.set_title("Abnormal") ax1.set_xlabel("time") ; ax2.set_xlabel("time") ax1.set_ylabel("Value") ``` What would be good features here? ``` f1 = np.mean(time_series, axis=1) # how about the mean? f1.shape import seaborn as sns, numpy as np import warnings warnings.filterwarnings("ignore") ax = sns.distplot(f1) # Plotting differences of means between normal and abnormal ax = sns.distplot(f1[normal_inds], kde_kws={"label": "normal"}) sns.distplot(f1[abnormal_inds], ax=ax, kde_kws={"label": "abnormal"}) f2 = np.min(time_series, axis=1) # how about the mean? f2.shape # Differences in the minimum - Not much difference, just a small subset at the "end that can tell something interesting ax = sns.distplot(f2[normal_inds], kde_kws={"label": "normal"}) sns.distplot(f2[abnormal_inds], ax=ax, kde_kws={"label": "abnormal"}) ``` Often there are entire python packages devoted to help us build features from certain types of datasets (timeseries, text, images, movies, etc.). In the case of timeseries, a popular package is `tsfresh` (*"It automatically calculates a large number of time series characteristics, the so called features. Further the package contains methods to evaluate the explaining power and importance of such characteristics for regression or classification tasks."*). 
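Before handing the job to a library, here is a small hand-rolled sketch of the same idea: compute a handful of summary statistics per wafer and stack them into a feature matrix. It reuses the `time_series` array defined above; the particular statistics chosen are illustrative only.

```
# Build a simple (n_samples, n_features) matrix of summary statistics
# from the raw wafer time series (assumes time_series from above).
import numpy as np

def summarize(ts):
    # One row of hand-crafted features per time series.
    return np.array([ts.mean(), ts.std(), ts.min(), ts.max(),
                     np.median(ts), np.abs(np.diff(ts)).mean()])

hand_features = np.vstack([summarize(row) for row in time_series])
print(hand_features.shape)  # one row per wafer, one column per statistic
```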
See the [tsfresh docs](https://tsfresh.readthedocs.io/en/latest/) and the [list of features generated](https://tsfresh.readthedocs.io/en/latest/text/list_of_features.html). ``` # !pip install tsfresh dfc = df.copy() del dfc[152] d = dfc.stack() d = d.reset_index() d = d.rename(columns={"level_0": "id", "level_1": "time", 0: "value"}) y = df[152] from tsfresh import extract_features max_num=300 from tsfresh import extract_relevant_features features_filtered_direct = extract_relevant_features(d[d["id"] < max_num], y.iloc[0:max_num], column_id='id', column_sort='time', n_jobs=4) #extracted_features = extract_features(, column_id="id", # column_sort="time", disable_progressbar=False, n_jobs=3) feats = features_filtered_direct[features_filtered_direct.columns[0:4]].rename(lambda x: x[0:14], axis='columns') feats["target"] = y.iloc[0:max_num] sns.pairplot(feats, hue="target") ``` # Text Data Many applications involve parsing and understanding something about natural language, ie. speech or text data. Categorization is a classic usage of Natural Language Processing (NLP): what bucket does this text belong to? Question: **What are some examples where learning on text has commerical or industrial applications?** A classic dataset in text processing is the [20,000+ newsgroup documents corpus](http://qwone.com/~jason/20Newsgroups/). These texts taken from old discussion threads in 20 different [newgroups](https://en.wikipedia.org/wiki/Usenet_newsgroup): <pre> comp.graphics comp.os.ms-windows.misc comp.sys.ibm.pc.hardware comp.sys.mac.hardware comp.windows.x rec.autos rec.motorcycles rec.sport.baseball rec.sport.hockey sci.crypt sci.electronics sci.med sci.space misc.forsale talk.politics.misc talk.politics.guns talk.politics.mideast talk.religion.misc alt.atheism soc.religion.christian </pre> One of the tasks is to assign a document to the correct group, ie. classify which group this belongs to. `sklearn` has a download facility for this dataset: ``` from sklearn.datasets import fetch_20newsgroups news_train = fetch_20newsgroups(subset='train', categories=['sci.space','rec.autos'], data_home='datatmp/') news_train.target_names print(news_train.data[1]) news_train.target_names[news_train.target[1]] autos = np.argwhere(news_train.target == 1) sci = np.argwhere(news_train.target == 0) ``` **How do you (as a human) classify text? What do you look for? How might we make these features?** ``` # total character count? f1 = np.array([len(x) for x in news_train.data]) f1 ax = sns.distplot(f1[autos], kde_kws={"label": "autos"}) sns.distplot(f1[sci], ax=ax, kde_kws={"label": "sci"}) ax.set_xscale("log") ax.set_xlabel("number of charaters") # total character words? f2 = np.array([len(x.split(" ")) for x in news_train.data]) f2 ax = sns.distplot(f2[autos], kde_kws={"label": "autos"}) sns.distplot(f2[sci], ax=ax, kde_kws={"label": "sci"}) ax.set_xscale("log") ax.set_xlabel("number of words") # number of questions asked or exclaimations? f3 = np.array([x.count("?") + x.count("!") for x in news_train.data]) f3 ax = sns.distplot(f3[autos], kde_kws={"label": "autos"}) sns.distplot(f3[sci], ax=ax, kde_kws={"label": "sci"}) ax.set_xlabel("number of questions asked") ``` We've got three fairly uninformative features now. We should be able to do better. Unsurprisingly, what matters most in NLP is the content: the words used, the tone, the meaning from the ordering of those words. 
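As a crude first pass at content-based features, before introducing proper NLP tooling, one can simply count a few hand-picked keywords per post. This sketch reuses the `news_train`, `autos`, and `sci` objects from above; the keyword lists are arbitrary examples chosen for illustration.

```
# Naive content features: counts of a few hand-picked keywords per post
# (assumes news_train, autos, sci from the cells above).
import numpy as np

auto_words = ("car", "engine", "dealer")
space_words = ("orbit", "launch", "nasa")

def keyword_counts(text, words):
    lowered = text.lower()
    return sum(lowered.count(w) for w in words)

f4 = np.array([keyword_counts(x, auto_words) for x in news_train.data])
f5 = np.array([keyword_counts(x, space_words) for x in news_train.data])
print("mean auto-word count:  autos=%.2f  sci=%.2f" % (f4[autos].mean(), f4[sci].mean()))
print("mean space-word count: autos=%.2f  sci=%.2f" % (f5[autos].mean(), f5[sci].mean()))
```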
The basic components of NLP are: * Tokenization - intelligently splitting up words in sentences, paying attention to conjunctions, punctuation, etc. * Lemmization - reducing a word to its base form * Entity recognition - finding proper names, places, etc. in documents There a many Python packages that help with NLP, including `nltk`, `textblob`, `gensim`, etc. Here we'll use the fairly modern and battletested [`spaCy`](https://spacy.io/). ### Notes: * For tokenization you need to know the language you are dealing with. * Lemmization idea is to kind of normalize the whole text content, and maybe reduce words, such as adverbs to adjectives or plurals to singular, etc. * For lemmization not only needs the language but also the kind of content you are dealing with. ``` #!pip install spacy #!python -m spacy download en #!python -m spacy download es import spacy # Load English tokenizer, tagger, parser, NER and word vectors nlp = spacy.load("en") # the spanish model is # nlp = spacy.load("es") doc = nlp(u"Guido said that 'Python is one of the best languages for doing Data Science.' " "Why he said that should be clear to anyone who knows Python.") en_doc = doc ``` `doc` is now an `iterable ` with each word/item properly tokenized and tagged. This is done by applying rules specific to each language. Linguistic annotations are available as Token attributes. ``` for token in doc: print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_, token.shape_, token.is_alpha, token.is_stop) from spacy import displacy displacy.serve(doc, style="dep") displacy.render(doc, style = "ent", jupyter = True) nlp = spacy.load("es") # https://www.elespectador.com/noticias/ciencia/decenas-de-nuevas-supernovas-ayudaran-medir-la-expansion-del-universo-articulo-863683 doc = nlp(u'En los últimos años, los investigadores comenzaron a' 'informar un nuevo tipo de supernovas de cinco a diez veces' 'más brillantes que las supernovas de Tipo "IA". ') for token in doc: print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_, token.shape_, token.is_alpha, token.is_stop) from spacy import displacy displacy.serve(doc, style="dep") [i for i in doc.sents] ``` One very powerful way to featurize text/documents is to count the frequency of words---this is called **bag of words**. Each individual token occurrence frequency is used to generate a feature. So the two sentences become: ```json {"Guido": 1, "said": 2, "that": 2, "Python": 2, "is": 1, "one": 1, "of": 1, "best": 1, "languages": 1, "for": 1, "Data": 1, "Science": 1, "Why", 1, "he": 1, "should": 1, "be": 1, "anyone": 1, "who": 1 } ``` A corpus of documents can be represented as a matrix with one row per document and one column per token. Question: **What are some challenges you see with brute force BoW?** ``` from spacy.lang.en.stop_words import STOP_WORDS STOP_WORDS ``` `sklearn` has a number of helper functions, include the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html): > Convert a collection of text documents to a matrix of token counts. This implementation produces a sparse representation of the counts using `scipy.sparse.csr_matrix`. 
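As a minimal illustration of the resulting matrix before wiring in the spaCy tokenizer below, here is `CountVectorizer` with its default tokenizer on the two example sentences from earlier. This is just a sketch of the bag-of-words idea, not part of the original pipeline.

```
# Bag-of-words on two toy sentences with the default sklearn tokenizer.
from sklearn.feature_extraction.text import CountVectorizer

toy_docs = ["Python is one of the best languages for doing Data Science.",
            "Why he said that should be clear to anyone who knows Python."]
vec = CountVectorizer()
X_toy = vec.fit_transform(toy_docs)  # sparse (2 docs x vocabulary) matrix
print(vec.get_feature_names())       # one column per token
print(X_toy.toarray())               # per-document token counts
```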
``` # the following is from https://www.dataquest.io/blog/tutorial-text-classification-in-python-using-spacy/ import pandas as pd from sklearn.feature_extraction.text import CountVectorizer,TfidfVectorizer from sklearn.base import TransformerMixin from sklearn.pipeline import Pipeline import string from spacy.lang.en.stop_words import STOP_WORDS from spacy.lang.en import English # Create our list of punctuation marks punctuations = string.punctuation # Create our list of stopwords nlp = spacy.load('en') stop_words = spacy.lang.en.stop_words.STOP_WORDS # Load English tokenizer, tagger, parser, NER and word vectors parser = English() # Creating our tokenizer function def spacy_tokenizer(sentence): # Creating our token object, which is used to create documents with linguistic annotations. mytokens = parser(sentence) # Lemmatizing each token and converting each token into lowercase mytokens = [ word.lemma_.lower().strip() if word.lemma_ != "-PRON-" else word.lower_ for word in mytokens ] # Removing stop words mytokens = [ word for word in mytokens if word not in stop_words and word not in punctuations ] # return preprocessed list of tokens return mytokens # Custom transformer using spaCy class predictors(TransformerMixin): def transform(self, X, **transform_params): # Cleaning Text return [clean_text(text) for text in X] def fit(self, X, y=None, **fit_params): return self def get_params(self, deep=True): return {} # Basic function to clean the text def clean_text(text): # Removing spaces and converting text into lowercase return text.strip().lower() bow_vector = CountVectorizer(tokenizer = spacy_tokenizer, ngram_range=(1,1)) X = bow_vector.fit_transform([x.text for x in en_doc.sents]) X bow_vector.get_feature_names() ``` Why did we get `datum` as one of our feature names? ``` X.toarray() doc.text ``` Let's try a bigger corpus (the newsgroups): ``` news_train = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'), categories=['sci.space','rec.autos'], data_home='datatmp/') %time X = bow_vector.fit_transform(news_train.data) X bow_vector.get_feature_names() ``` Most of those features will only appear once and we might not want to include them (as they add noise). In order to reweight the count features into floating point values suitable for usage by a classifier it is very common to use the *tf–idf* transform. From [`sklearn`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html#sklearn.feature_extraction.text.TfidfTransformer): > Tf means term-frequency while tf-idf means term-frequency times inverse document-frequency. This is a common term weighting scheme in information retrieval, that has also found good use in document classification. The goal of using tf-idf instead of the raw frequencies of occurrence of a token in a given document is to scale down the impact of tokens that occur very frequently in a given corpus and that are hence empirically less informative than features that occur in a small fraction of the training corpus. Let's keep only those terms that show up in at least 3% of the docs, but not those that show up in more than 90%. ``` tfidf_vector = TfidfVectorizer(tokenizer = spacy_tokenizer, min_df=0.03, max_df=0.9, max_features=1000) %time X = tfidf_vector.fit_transform(news_train.data) tfidf_vector.get_feature_names() X print(X[1,:]) y = news_train.target np.savez("tfidf.npz", X=X.todense(), y=y) ``` One of the challenges with BoW and TF-IDF is that we lose context. 
"Me gusta esta clase, no" is the same as "No me gusta esta clase". One way to handle this is with N-grams -- not just frequencies of individual words but of groupings of n-words. Eg. "Me gusta", "gusta esta", "esta clase", "clase no", "no me" (bigrams). ``` bow_vector = CountVectorizer(tokenizer = spacy_tokenizer, ngram_range=(1,2)) X = bow_vector.fit_transform([x.text for x in en_doc.sents]) bow_vector.get_feature_names() ``` As we'll see later in the week, while bigram TF-IDF certainly works to capture some small scale meaning, `word embeddings` tend to do very well.
github_jupyter

# Calibration of non-isoplanatic low frequency data This uses an implementation of the SageCAL algorithm to calibrate a simulated SKA1LOW observation in which sources inside the primary beam have one set of calibration errors and sources outside have different errors. In this example, the peeler sources are held fixed in strength and location and only the gains solved. The other sources, inside the primary beam, are partitioned into weak (<5Jy) and strong (>5Jy). The weak sources are processed collectively as an image. The bright sources are processed individually. ``` % matplotlib inline import os import sys sys.path.append(os.path.join('..', '..')) from data_models.parameters import arl_path results_dir = arl_path('test_results') import numpy from astropy.coordinates import SkyCoord from astropy import units as u from astropy.wcs.utils import pixel_to_skycoord from matplotlib import pyplot as plt from data_models.memory_data_models import SkyModel from data_models.polarisation import PolarisationFrame from workflows.arlexecute.execution_support.arlexecute import arlexecute from processing_components.skycomponent.operations import find_skycomponents from processing_components.calibration.calibration import solve_gaintable from processing_components.calibration.operations import apply_gaintable, create_gaintable_from_blockvisibility from processing_components.visibility.base import create_blockvisibility, copy_visibility from processing_components.image.deconvolution import restore_cube from processing_components.skycomponent.operations import select_components_by_separation, insert_skycomponent, \ select_components_by_flux from processing_components.image.operations import show_image, qa_image, copy_image, create_empty_image_like from processing_components.simulation.testing_support import create_named_configuration, create_low_test_beam, \ simulate_gaintable, create_low_test_skycomponents_from_gleam from processing_components.skycomponent.operations import apply_beam_to_skycomponent, find_skycomponent_matches from processing_components.imaging.base import create_image_from_visibility, advise_wide_field, \ predict_skycomponent_visibility from processing_components.imaging.imaging_functions import invert_function from workflows.arlexecute.calibration.calskymodel_workflows import calskymodel_solve_workflow from processing_components.image.operations import export_image_to_fits import logging def init_logging(): log = logging.getLogger() logging.basicConfig(filename='%s/skymodel_cal.log' % results_dir, filemode='a', format='%(thread)s %(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s', datefmt='%H:%M:%S', level=logging.INFO) log = logging.getLogger() logging.info("Starting skymodel_cal") ``` Use Dask throughout ``` arlexecute.set_client(use_dask=True) arlexecute.run(init_logging) ``` We make the visibility. The parameter rmax determines the distance of the furthest antenna/stations used. All over parameters are determined from this number. We set the w coordinate to be zero for all visibilities so as not to have to do full w-term processing. This speeds up the imaging steps. 
``` nfreqwin = 1 ntimes = 1 rmax = 750 frequency = numpy.linspace(0.8e8, 1.2e8, nfreqwin) if nfreqwin > 1: channel_bandwidth = numpy.array(nfreqwin * [frequency[1] - frequency[0]]) else: channel_bandwidth = [0.4e8] times = numpy.linspace(-numpy.pi / 3.0, numpy.pi / 3.0, ntimes) phasecentre=SkyCoord(ra=+30.0 * u.deg, dec=-45.0 * u.deg, frame='icrs', equinox='J2000') lowcore = create_named_configuration('LOWBD2', rmax=rmax) block_vis = create_blockvisibility(lowcore, times, frequency=frequency, channel_bandwidth=channel_bandwidth, weight=1.0, phasecentre=phasecentre, polarisation_frame=PolarisationFrame("stokesI"), zerow=True) wprojection_planes=1 advice=advise_wide_field(block_vis, guard_band_image=5.0, delA=0.02, wprojection_planes=wprojection_planes) vis_slices = advice['vis_slices'] npixel=advice['npixels2'] cellsize=advice['cellsize'] ``` Generate the model from the GLEAM catalog, including application of the primary beam. ``` beam = create_image_from_visibility(block_vis, npixel=npixel, frequency=frequency, nchan=nfreqwin, cellsize=cellsize, phasecentre=phasecentre) original_gleam_components = create_low_test_skycomponents_from_gleam(flux_limit=1.0, phasecentre=phasecentre, frequency=frequency, polarisation_frame=PolarisationFrame('stokesI'), radius=npixel * cellsize/2.0) beam = create_low_test_beam(beam) pb_gleam_components = apply_beam_to_skycomponent(original_gleam_components, beam, flux_limit=0.5) from matplotlib import pylab pylab.rcParams['figure.figsize'] = (12.0, 12.0) pylab.rcParams['image.cmap'] = 'rainbow' show_image(beam, components=pb_gleam_components, cm='Greys', title='Primary beam plus GLEAM components') print("Number of components %d" % len(pb_gleam_components)) ``` Generate the template image ``` model = create_image_from_visibility(block_vis, npixel=npixel, frequency=[numpy.average(frequency)], nchan=1, channel_bandwidth=[numpy.sum(channel_bandwidth)], cellsize=cellsize, phasecentre=phasecentre) ``` Create sources to be peeled ``` peel_distance = 0.16 peelers = select_components_by_separation(phasecentre, pb_gleam_components, min=peel_distance) gleam_components = select_components_by_separation(phasecentre, pb_gleam_components, max=peel_distance) print("There are %d sources inside the primary beam and %d sources outside" % (len(gleam_components), len(peelers))) ``` Create the model visibilities, applying a different gain table for peeled sources and other components ``` corrupted_vis = copy_visibility(block_vis, zero=True) gt = create_gaintable_from_blockvisibility(block_vis, timeslice='auto') components_errors = [(p, 1.0) for p in peelers] components_errors.append((pb_gleam_components, 0.1)) for sc, phase_error in components_errors: component_vis = copy_visibility(block_vis, zero=True) gt = simulate_gaintable(gt, amplitude_error=0.0, phase_error=phase_error) component_vis = predict_skycomponent_visibility(component_vis, sc) component_vis = apply_gaintable(component_vis, gt) corrupted_vis.data['vis'][...]+=component_vis.data['vis'][...] 
dirty, sumwt = invert_function(corrupted_vis, model, context='2d') qa=qa_image(dirty) vmax=qa.data['medianabs']*20.0 vmin=-qa.data['medianabs'] print(qa) export_image_to_fits(dirty, '%s/calskymodel_before_dirty.fits' % results_dir) show_image(dirty, cm='Greys', components=peelers, vmax=vmax, vmin=vmin, title='Peelers') show_image(dirty, cm='Greys', components=gleam_components, vmax=vmax, vmin=vmin, title='Targets') plt.show() ``` Find the components above the threshold ``` qa = qa_image(dirty) vmax=qa.data['medianabs']*20.0 vmin=-qa.data['medianabs']*2.0 print(qa) threshold = 10.0*qa.data['medianabs'] print("Selecting sources brighter than %f" % threshold) initial_found_components= find_skycomponents(dirty, threshold=threshold) show_image(dirty, components=initial_found_components, cm='Greys', vmax=vmax, vmin=vmin, title='Dirty image plus found components') plt.show() peel_distance = 0.16 flux_threshold=5.0 peelers = select_components_by_separation(phasecentre, initial_found_components, min=peel_distance) inbeam_components = select_components_by_separation(phasecentre, initial_found_components, max=peel_distance) bright_components = select_components_by_flux(inbeam_components, fmin=flux_threshold) faint_components = select_components_by_flux(inbeam_components, fmax=flux_threshold) print("%d sources will be peeled (i.e. held fixed but gain solved)" % len(peelers)) print("%d bright sources will be processed as components (solved both as component and for gain)" % len(bright_components)) print("%d faint sources will be processed collectively as a fixed image and gain solved" % len(faint_components)) faint_model = create_empty_image_like(model) faint_model = insert_skycomponent(faint_model, faint_components, insert_method='Lanczos') show_image(faint_model, cm='Greys', title='Model for faint sources', vmax=0.3, vmin=-0.03) plt.show() calskymodel_graph = [arlexecute.execute(SkyModel, nout=1)(components=[p], fixed=True) for p in peelers] \ + [arlexecute.execute(SkyModel, nout=1)(components=[b], fixed=False) for b in bright_components] \ + [arlexecute.execute(SkyModel, nout=1)(images=[faint_model], fixed=True)] ``` Run skymodel_cal using dask ``` corrupted_vis = arlexecute.scatter(corrupted_vis) graph = calskymodel_solve_workflow(corrupted_vis, calskymodel_graph, niter=30, gain=0.25, tol=1e-8) calskymodel, residual_vis = arlexecute.compute(graph, sync=True) ``` Combine all components for display ``` skymodel_components = list() for csm in calskymodel: skymodel_components += csm[0].components ``` Check that the peeled sources are not altered ``` recovered_peelers = find_skycomponent_matches(peelers, skymodel_components, 1e-5) ok = True for p in recovered_peelers: ok = ok and numpy.abs(peelers[p[0]].flux[0,0] - skymodel_components[p[1]].flux[0,0]) < 1e-7 print("Peeler sources flux unchanged: %s" % ok) ok = True for p in recovered_peelers: ok = ok and peelers[p[0]].direction.separation(skymodel_components[p[1]].direction).rad < 1e-15 print("Peeler sources directions unchanged: %s" % ok) ``` Now we find the components in the residual image and add those to the existing model ``` residual, sumwt = invert_function(residual_vis, model, context='2d') qa = qa_image(residual) vmax=qa.data['medianabs']*30.0 vmin=-qa.data['medianabs']*3.0 print(qa) threshold = 20.0*qa.data['medianabs'] print("Selecting sources brighter than %f" % threshold) final_found_components = find_skycomponents(residual, threshold=threshold) show_image(residual, components=final_found_components, cm='Greys', title='Residual image after 
Sagecal with newly identified components', vmax=vmax, vmin=vmin) plt.show() final_components= skymodel_components + final_found_components ``` Make a restored image ``` psf, _ = invert_function(residual_vis, model, dopsf=True, context='2d') component_image = copy_image(faint_model) component_image = insert_skycomponent(component_image, final_components) restored = restore_cube(component_image, psf, residual) export_image_to_fits(restored, '%s/calskymodel_restored.fits' % results_dir) qa=qa_image(restored, context='Restored image after SageCal') print(qa) show_image(restored, components=final_components, cm='Greys', title='Restored image after SageCal', vmax=vmax, vmin=vmin) plt.show() ``` Now match the recovered components to the originals ``` original_bright_components = peelers + bright_components matches = find_skycomponent_matches(final_components, original_bright_components, 3*cellsize) ``` Look at the range of separations found ``` separations = [match[2] for match in matches] plt.clf() plt.hist(separations/cellsize, bins=50) plt.title('Separation between input and recovered source in pixels') plt.xlabel('Separation in cells (cellsize = %g radians)' % cellsize) plt.ylabel('Number') plt.show() ``` Now look at the matches between the original components and those recovered. ``` totalfluxin = numpy.sum([c.flux[0,0] for c in pb_gleam_components]) totalfluxout = numpy.sum([c.flux[0,0] for c in final_components]) + numpy.sum(faint_model.data) print("Recovered %.3f (Jy) of original %.3f (Jy)" % (totalfluxout, totalfluxin)) found = [match[1] for match in matches] notfound = list() for c in range(len(original_bright_components)): if c not in found: notfound.append(c) print("The following original components were not found", notfound) ``` Look at the recovered flux and the location of the unmatched components. From the image display these seem to be blends of close components. ``` fluxin = [original_bright_components[match[1]].flux[0,0] for match in matches] fluxout = [final_components[match[0]].flux[0,0] for match in matches] missed_components = [original_bright_components[c] for c in notfound] missed_flux = [match.flux[0,0] for match in missed_components] plt.clf() plt.plot(fluxin, fluxout, '.', color='blue') plt.plot(missed_flux, len(missed_flux)*[0.0], '.', color='red') plt.title('Recovered flux') plt.xlabel('Component input flux') plt.ylabel('Component recovered flux') plt.show() show_image(restored, components=missed_components, cm='Greys', title='Restored original model with missing components', vmax=vmax, vmin=vmin) plt.show() arlexecute.close() ```
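As a small numerical complement to the scatter plot above, here is a sketch of summary statistics for the flux recovery. It reuses the `fluxin` and `fluxout` lists computed in the matching step and adds nothing beyond plain numpy; the 10% tolerance is an arbitrary illustrative choice.

```
# Simple summary of how well component fluxes were recovered
# (uses fluxin/fluxout computed in the matching step above).
import numpy as np

fluxin_arr, fluxout_arr = np.array(fluxin), np.array(fluxout)
ratio = fluxout_arr / fluxin_arr
print("Median recovered/input flux ratio: %.3f" % np.median(ratio))
print("Fraction recovered within 10%%: %.2f" % np.mean(np.abs(ratio - 1.0) < 0.1))
```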
github_jupyter

## Multiple linear regression **For Table 3 of the paper** Cell-based QUBICC R2B5 model ``` from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error from tensorflow.keras import backend as K from tensorflow.keras.regularizers import l1_l2 import tensorflow as tf import tensorflow.nn as nn import gc import numpy as np import pandas as pd import importlib import os import sys #Import sklearn before tensorflow (static Thread-local storage) from sklearn.preprocessing import StandardScaler from tensorflow.keras.optimizers import Nadam from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, BatchNormalization path = '/pf/b/b309170' path_data = path + '/my_work/icon-ml_data/cloud_cover_parameterization/grid_cell_based_QUBICC_R02B05/based_on_var_interpolated_data' # Add path with my_classes to sys.path sys.path.insert(0, path + '/workspace_icon-ml/cloud_cover_parameterization/') import my_classes importlib.reload(my_classes) from my_classes import simple_sundqvist_scheme from my_classes import write_infofile from my_classes import load_data import matplotlib.pyplot as plt import time NUM = 1 # Prevents crashes of the code gpus = tf.config.list_physical_devices('GPU') tf.config.set_visible_devices(gpus[0], 'GPU') # Allow the growth of memory Tensorflow allocates (limits memory usage overall) for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) scaler = StandardScaler() # Data is not yet normalized input_data = np.load(path_data + '/cloud_cover_input_qubicc.npy', mmap_mode='r') output_data = np.load(path_data + '/cloud_cover_output_qubicc.npy', mmap_mode='r') (samples_total, no_of_features) = input_data.shape assert no_of_features < samples_total # Making sure there's no mixup # Scale the input data = (samples_total, no_of_features) scaler.fit(input_data) assert len(scaler.mean_) == no_of_features # Every feature has its own mean and std and we scale accordingly input_data_scaled = scaler.transform(input_data) ``` ### Training the multiple linear model on the entire data set ``` t0 = time.time() # The optimal multiple linear regression model lin_reg = LinearRegression() lin_reg.fit(input_data_scaled, output_data) print(time.time() - t0) # Loss of this optimal multiple linear regression model clc_predictions = lin_reg.predict(input_data_scaled) lin_mse = mean_squared_error(output_data, clc_predictions) print('The mean squared error of the linear model is %.2f.'%lin_mse) ``` ### Zero Output Model ``` np.mean(output_data**2, dtype=np.float64) ``` ### Constant Output Model ``` mean = np.mean(output_data, dtype=np.float64) np.mean((output_data-mean)**2, dtype=np.float64) ``` ### Randomly initialized neural network ``` # Create the model model = Sequential() # First hidden layer model.add(Dense(units=64, activation='tanh', input_dim=no_of_features, kernel_regularizer=l1_l2(l1=0.004749, l2=0.008732))) # Second hidden layer model.add(Dense(units=64, activation=nn.leaky_relu, kernel_regularizer=l1_l2(l1=0.004749, l2=0.008732))) # model.add(Dropout(0.221)) # We drop 18% of the hidden nodes model.add(BatchNormalization()) # Third hidden layer model.add(Dense(units=64, activation='tanh', kernel_regularizer=l1_l2(l1=0.004749, l2=0.008732))) # model.add(Dropout(0.221)) # We drop 18% of the hidden nodes # Output layer model.add(Dense(1, activation='linear', kernel_regularizer=l1_l2(l1=0.004749, l2=0.008732))) model.compile(loss='mse', optimizer=Nadam()) # model_fold_3 is implemented in ICON-A batch_size = 2**20 for i in 
range(1 + input_data_scaled.shape[0]//batch_size): if i == 0: clc_predictions = model.predict_on_batch(input_data_scaled[i*batch_size:(i+1)*batch_size]) else: clc_predictions = np.concatenate((clc_predictions, model.predict_on_batch(input_data_scaled[i*batch_size:(i+1)*batch_size])), axis=0) K.clear_session() gc.collect() lin_mse = mean_squared_error(output_data, clc_predictions[:, 0]) print('The mean squared error of the randomly initialized neural network is %.2f.'%lin_mse) ``` ### Simplified Sundqvist function input_data is unscaled ``` qv = input_data[:, 0] temp = input_data[:, 3] pres = input_data[:, 4] t0 = time.time() # 0.001% of the data ind = np.random.randint(0, samples_total, samples_total//10**5) # Entries will be in [0, 1] sundqvist = [] for i in ind: sundqvist.append(simple_sundqvist_scheme(qv[i], temp[i], pres[i], ps=101325)) time.time() - t0 np.mean((output_data[ind] - 100*np.array(sundqvist))**2, dtype=np.float64) ```
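To turn the MSE values above into a single comparable score for the table, one option is R² (variance explained) relative to the constant-output baseline, whose MSE is simply the variance of the targets. This is only a sketch using the linear model as an example; the same helper can be applied to any of the MSEs printed above.

```
# R^2 relative to the constant-output baseline:
# R^2 = 1 - MSE_model / MSE_constant, where MSE_constant is the target variance.
import numpy as np
from sklearn.metrics import mean_squared_error

def r2_from_mse(mse_model, mse_constant):
    # Variance explained relative to predicting the mean everywhere.
    return 1.0 - mse_model / mse_constant

mse_constant = np.mean((output_data - np.mean(output_data, dtype=np.float64))**2,
                       dtype=np.float64)
lin_mse_recomputed = mean_squared_error(output_data, lin_reg.predict(input_data_scaled))
print("Linear model R^2: %.3f" % r2_from_mse(lin_mse_recomputed, mse_constant))
```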
<a href="https://colab.research.google.com/github/jantic/DeOldify/blob/master/ImageColorizerColab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ### **<font color='blue'> Artistic Colorizer </font>** #◢ DeOldify - Colorize your own photos! ####**Credits:** Special thanks to: Matt Robinson and María Benavente for pioneering the DeOldify image colab notebook. Dana Kelley for doing things, breaking stuff & having an opinion on everything. --- #◢ Verify Correct Runtime Settings **<font color='#FF000'> IMPORTANT </font>** In the "Runtime" menu for the notebook window, select "Change runtime type." Ensure that the following are selected: * Runtime Type = Python 3 * Hardware Accelerator = GPU #◢ Git clone and install DeOldify ``` !git clone https://github.com/jantic/DeOldify.git DeOldify cd DeOldify ``` #◢ Setup ``` #NOTE: This must be the first call in order to work properly! from deoldify import device from deoldify.device_id import DeviceId #choices: CPU, GPU0...GPU7 device.set(device=DeviceId.GPU0) import torch if not torch.cuda.is_available(): print('GPU not available.') !pip install -r colab_requirements.txt import fastai from deoldify.visualize import * !mkdir 'models' !wget https://www.dropbox.com/s/zkehq1uwahhbc2o/ColorizeArtistic_gen.pth?dl=0 -O ./models/ColorizeArtistic_gen.pth !wget https://media.githubusercontent.com/media/jantic/DeOldify/master/resource_images/watermark.png -O ./resource_images/watermark.png colorizer = get_image_colorizer(artistic=True) ``` #◢ Instructions ### source_url Type in a url to a direct link of an image. Usually that means they'll end in .png, .jpg, etc. NOTE: If you want to use your own image, upload it first to a site like Imgur. ### render_factor The default value of 35 has been carefully chosen and should work -ok- for most scenarios (but probably won't be the -best-). This determines resolution at which the color portion of the image is rendered. Lower resolution will render faster, and colors also tend to look more vibrant. Older and lower quality images in particular will generally benefit by lowering the render factor. Higher render factors are often better for higher quality images, but the colors may get slightly washed out. ### watermarked Selected by default, this places a watermark icon of a palette at the bottom left corner of the image. This is intended to be a standard way to convey to others viewing the image that it is colorized by AI. We want to help promote this as a standard, especially as the technology continues to improve and the distinction between real and fake becomes harder to discern. This palette watermark practice was initiated and lead by the company MyHeritage in the MyHeritage In Color feature (which uses a newer version of DeOldify than what you're using here). #### How to Download a Copy Simply right click on the displayed image and click "Save image as..."! ## Pro Tips You can evaluate how well the image is rendered at each render_factor by using the code at the bottom (that cell under "See how well render_factor values perform on a frame here"). #◢ Colorize!! 
``` source_url = '' #@param {type:"string"} render_factor = 35 #@param {type: "slider", min: 7, max: 40} watermarked = True #@param {type:"boolean"} if source_url is not None and source_url !='': image_path = colorizer.plot_transformed_image_from_url(url=source_url, render_factor=render_factor, compare=True, watermarked=watermarked) show_image_in_notebook(image_path) else: print('Provide an image url and try again.') ``` ## See how well render_factor values perform on the image here ``` for i in range(10,40,2): colorizer.plot_transformed_image('test_images/image.png', render_factor=i, display_render_factor=True, figsize=(8,8)) ``` --- #⚙ Recommended image sources * [/r/TheWayWeWere](https://www.reddit.com/r/TheWayWeWere/)
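If you want to colorize several photos in one run, the call used above can simply be wrapped in a loop. A small sketch using the `colorizer` object created in the setup cell; the URLs in the list are placeholders for your own direct image links:

```
# Sketch: colorize a batch of images with the colorizer created earlier.
# Replace the placeholder URLs with direct links to your own images.
source_urls = [
    'https://example.com/old_photo_1.jpg',
    'https://example.com/old_photo_2.jpg',
]
render_factor = 35
watermarked = True

for url in source_urls:
    image_path = colorizer.plot_transformed_image_from_url(
        url=url, render_factor=render_factor, compare=True, watermarked=watermarked)
    show_image_in_notebook(image_path)
```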
<div style="width: 100%; clear: both;"> <div style="float: left; width: 50%;"> <img src="http://www.uoc.edu/portal/_resources/common/imatges/marca_UOC/UOC_Masterbrand.jpg" align="left"> </div> <div style="float: right; width: 50%;"> <p style="margin: 0; padding-top: 22px; text-align:right;">M2.859 · Visualización de datos · Práctica, Parte 2</p> <p style="margin: 0; text-align:right;">2021-1 · Máster universitario en Ciencia de datos (Data science)</p> <p style="margin: 0; text-align:right; padding-button: 100px;">Estudios de Informática, Multimedia y Telecomunicación</p> </div> </div> <div style="width:100%;">&nbsp;</div> # A9: Práctica Final (parte 2) - Wrangling data El [**wrangling data**](https://en.wikipedia.org/wiki/Data_wrangling) es el proceso de transformar y mapear datos de un formulario de datos "sin procesar" a otro formato con la intención de hacerlo más apropiado y valioso para una variedad de propósitos posteriores, como el análisis. El objetivo del wrangling data es garantizar la calidad y la utilidad de los datos. Los analistas de datos suelen pasar la mayor parte de su tiempo en el proceso de disputa de datos en comparación con el análisis real de los datos. El proceso de wrangling data puede incluir más manipulación, visualización de datos, agregación de datos, entrenamiento de un modelo estadístico, así como muchos otros usos potenciales. El wrangling data normalmente sigue un conjunto de pasos generales que comienzan con la extracción de los datos sin procesar de la fuente de datos, "removiendo" los datos sin procesar (por ejemplo, clasificación) o analizando los datos en estructuras de datos predefinidas y, finalmente, depositando el contenido resultante en un sumidero de datos para almacenamiento y uso futuro. Para ello vamos a necesitar las siguientes librerías: ``` from six import StringIO from IPython.display import Image from sklearn import datasets from sklearn.decomposition import PCA from sklearn.model_selection import train_test_split, cross_val_score from sklearn.metrics import accuracy_score, confusion_matrix, classification_report from sklearn.tree import DecisionTreeClassifier, export_graphviz import pydotplus import numpy as np import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline import pandas as pd pd.set_option('display.max_columns', None) ``` # 1. Carga del conjunto de datos (1 punto) Se ha seleccionado un conjunto de datos desde el portal Stack Overflow Annual Developer Survey, que examina todos los aspectos de la experiencia de los programadores de la comunidad (Stack Overflow), desde la satisfacción profesional y la búsqueda de empleo hasta la educación y las opiniones sobre el software de código abierto; y los resultados se publican en la siguiente URL: https://insights.stackoverflow.com/survey. En este portal se encuentran publicados los resultados de los últimos 11 años. Para los fines de la práctica final de esta asignatura se usará el dataset del año 2021, cuyo link de descarga es: https://info.stackoverflowsolutions.com/rs/719-EMH-566/images/stack-overflow-developer-survey-2021.zip. 
``` so2021_df = pd.read_csv('survey_results_public.csv', header=0) so2021_df.sample(5) ``` <div style="background-color: #EDF7FF; border-color: #7C9DBF; border-left: 5px solid #7C9DBF; padding: 0.5em;"> <strong>Selección de variables:</strong> se realiza la selección de todas las variables del dataset que servirán para responder a todas las cuestiones planteadas en la primera parte de la práctica: </div> ``` so2021_data = so2021_df[['MainBranch', 'Employment', 'Country', 'EdLevel', 'Age1stCode', 'YearsCode', 'YearsCodePro', 'DevType', 'CompTotal', 'LanguageHaveWorkedWith', 'DatabaseHaveWorkedWith', 'PlatformHaveWorkedWith', 'WebframeHaveWorkedWith', 'MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith', 'NEWCollabToolsHaveWorkedWith', 'OpSys', 'Age', 'Gender', 'Trans', 'Ethnicity', 'MentalHealth', 'ConvertedCompYearly']] so2021_data.head(5) so2021_data.shape so2021_data.info() so2021_data.isnull().values.any() #valores perdidos en dataset so2021_data.isnull().any() # valores perdidos por columnas en el dataset data = so2021_data.dropna() data.isnull().any() # valores perdidos por columnas en el dataset data.info() data.head() data.to_csv('data.csv', index=False) data_test = data.copy() data_test.to_csv('data_test.csv', index=False) ``` <div style="background-color: #EDF7FF; border-color: #7C9DBF; border-left: 5px solid #7C9DBF; padding: 0.5em;"> <strong>Variable Ethnicity:</strong>. </div> ``` from re import search def choose_ethnia(cell_ethnia): val_ethnia_exceptions = ["I don't know", "Or, in your own words:"] if cell_ethnia == "I don't know;Or, in your own words:": return val_ethnia_exceptions[0] if search(";", cell_ethnia): row_ethnia_values = cell_ethnia.split(';', 5) first_val = row_ethnia_values[0] if first_val not in val_ethnia_exceptions: return first_val if len(row_ethnia_values) > 1: if row_ethnia_values[1] not in val_ethnia_exceptions: return row_ethnia_values[1] if len(row_ethnia_values) > 2: if row_ethnia_values[2] not in val_ethnia_exceptions: return row_ethnia_values[2] else: return cell_ethnia data_test['Ethnicity'] = data_test['Ethnicity'].apply(choose_ethnia) data_test.drop(index=data_test[data_test['Ethnicity'] == 'Or, in your own words:'].index, inplace=True) data_test.drop(index=data_test[data_test['Ethnicity'] == 'Prefer not to say'].index, inplace=True) data_test['Ethnicity'].drop_duplicates().sort_values() data_test['Ethnicity'] = data_test['Ethnicity'].replace(['Black or of African descent'], 'Negro') data_test['Ethnicity'] = data_test['Ethnicity'].replace(['East Asian'], 'Asiatico del este') data_test['Ethnicity'] = data_test['Ethnicity'].replace(['Hispanic or Latino/a/x'], 'Latino') data_test['Ethnicity'] = data_test['Ethnicity'].replace(["I don't know"], 'No Definido') data_test['Ethnicity'] = data_test['Ethnicity'].replace(['Indigenous (such as Native American, Pacific Islander, or Indigenous Australian)'], 'Indigena') data_test['Ethnicity'] = data_test['Ethnicity'].replace(['Middle Eastern'], 'Medio Oriente') data_test['Ethnicity'] = data_test['Ethnicity'].replace(['South Asian'], 'Asiatico del Sur') data_test['Ethnicity'] = data_test['Ethnicity'].replace(['Southeast Asian'], 'Asiatico del Sudeste') data_test['Ethnicity'] = data_test['Ethnicity'].replace(['White or of European descent'], 'Blanco o Europeo') data_test['Ethnicity'].drop_duplicates().sort_values() data_test.to_csv('data_test.csv', index=False) ``` <div style="background-color: #EDF7FF; border-color: #7C9DBF; border-left: 5px solid #7C9DBF; padding: 0.5em;"> <strong>Variable Employment:</strong>. 
</div> ``` data_test['Employment'].drop_duplicates().sort_values() data_test['Employment'] = data_test['Employment'].replace(['Employed full-time'], 'Tiempo completo') data_test['Employment'] = data_test['Employment'].replace(['Employed part-time'], 'Tiempo parcial') data_test['Employment'] = data_test['Employment'].replace(['Independent contractor, freelancer, or self-employed'], 'Independiete') data_test['Employment'].drop_duplicates().sort_values() ``` <div style="background-color: #EDF7FF; border-color: #7C9DBF; border-left: 5px solid #7C9DBF; padding: 0.5em;"> <strong>Variable EdLevel:</strong>. </div> ``` data_test['EdLevel'].drop_duplicates().sort_values() data_test['EdLevel'] = data_test['EdLevel'].replace(['Associate degree (A.A., A.S., etc.)'], 'Grado Asociado') data_test['EdLevel'] = data_test['EdLevel'].replace(['Bachelor’s degree (B.A., B.S., B.Eng., etc.)'], 'Licenciatura') data_test['EdLevel'] = data_test['EdLevel'].replace(['Master’s degree (M.A., M.S., M.Eng., MBA, etc.)'], 'Master') data_test['EdLevel'] = data_test['EdLevel'].replace(['Other doctoral degree (Ph.D., Ed.D., etc.)'], 'Doctorado') data_test['EdLevel'] = data_test['EdLevel'].replace(['Primary/elementary school'], 'Primaria') data_test['EdLevel'] = data_test['EdLevel'].replace(['Professional degree (JD, MD, etc.)'], 'Grado Profesional') data_test['EdLevel'] = data_test['EdLevel'].replace(['Secondary school (e.g. American high school, German Realschule or Gymnasium, etc.)'], 'Secundaria') data_test['EdLevel'] = data_test['EdLevel'].replace(['Some college/university study without earning a degree'], 'Estudios sin grado') data_test['EdLevel'] = data_test['EdLevel'].replace(['Something else'], 'Otro') data_test['EdLevel'].drop_duplicates().sort_values() data_test.to_csv('data_test.csv', index=False) ``` <div style="background-color: #EDF7FF; border-color: #7C9DBF; border-left: 5px solid #7C9DBF; padding: 0.5em;"> <strong>Variable DevType:</strong>. 
</div> ``` data_test['DevType'].drop_duplicates().sort_values() from re import search def choose_devtype(cell_devtype): val_devtype_exceptions = ["Other (please specify):"] if cell_devtype == "Other (please specify):": return val_devtype_exceptions[0] if search(";", cell_devtype): row_devtype_values = cell_devtype.split(';', 10) first_val = row_devtype_values[0] if first_val not in val_devtype_exceptions: return first_val if len(row_devtype_values) > 1: if row_devtype_values[1] not in val_devtype_exceptions: return row_devtype_values[1] else: return cell_devtype data_test['DevType'] = data_test['DevType'].apply(choose_devtype) data_test['DevType'].head() data_test['DevType'].drop_duplicates().sort_values() data_test['DevType'].value_counts() data_test['DevType'] = data_test['DevType'].replace(['Developer, full-stack'], 'Desarrollador full-stack') data_test['DevType'] = data_test['DevType'].replace(['Developer, front-end'], 'Desarrollador front-end') data_test['DevType'] = data_test['DevType'].replace(['Developer, mobile'], 'Desarrollador móvil') data_test['DevType'] = data_test['DevType'].replace(['Developer, back-end'], 'Desarrollador back-end') data_test['DevType'] = data_test['DevType'].replace(['Developer, desktop or enterprise applications'], 'Desarrollador Escritorio') data_test['DevType'] = data_test['DevType'].replace(['Engineer, data'], 'Ingeniero de datos') data_test['DevType'] = data_test['DevType'].replace(['Data scientist or machine learning specialist'], 'Cientifico de datos') data_test['DevType'] = data_test['DevType'].replace(['Other (please specify):'], 'Otro') data_test['DevType'] = data_test['DevType'].replace(['Engineering manager'], 'Manager de Ingeniería') data_test['DevType'] = data_test['DevType'].replace(['DevOps specialist'], 'Especialista en DevOps') data_test['DevType'] = data_test['DevType'].replace(['Senior Executive (C-Suite, VP, etc.)'], 'Ejecutivo Senior') data_test['DevType'] = data_test['DevType'].replace(['Academic researcher'], 'Investigador Académico') data_test['DevType'] = data_test['DevType'].replace(['Developer, QA or test'], 'Desarrollador de QA o Test') data_test['DevType'] = data_test['DevType'].replace(['Data or business analyst'], 'Analista de datos o negocio') data_test['DevType'] = data_test['DevType'].replace(['Developer, embedded applications or devices'], 'Desarrollador de aplicaciones embebidas') data_test['DevType'] = data_test['DevType'].replace(['System administrator'], 'Administrador de sistemas') data_test['DevType'] = data_test['DevType'].replace(['Engineer, site reliability'], 'Ingeniero de confiabilidad del sitio') data_test['DevType'] = data_test['DevType'].replace(['Product manager'], 'Gerente de producto') data_test['DevType'] = data_test['DevType'].replace(['Database administrator'], 'Administrador de base de datos') data_test['DevType'] = data_test['DevType'].replace(['Student'], 'Estudiante') data_test['DevType'] = data_test['DevType'].replace(['Developer, game or graphics'], 'Desarrollador de juegos o gráfico') data_test['DevType'] = data_test['DevType'].replace(['Scientist'], 'Científico') data_test['DevType'] = data_test['DevType'].replace(['Designer'], 'Diseñador') data_test['DevType'] = data_test['DevType'].replace(['Educator'], 'Educador') data_test['DevType'] = data_test['DevType'].replace(['Marketing or sales professional'], 'Profesional en Marketing o ventas') data_test['DevType'].drop_duplicates().sort_values() data_test['DevType'].value_counts() ``` <div style="background-color: #EDF7FF; border-color: #7C9DBF; 
border-left: 5px solid #7C9DBF; padding: 0.5em;"> <strong>Variable MainBranch:</strong> </div> ``` data_test['MainBranch'].drop_duplicates().sort_values() data_test['MainBranch'] = data_test['MainBranch'].replace(['I am a developer by profession'], 'Desarrollador Profesional') data_test['MainBranch'] = data_test['MainBranch'].replace(['I am not primarily a developer, but I write code sometimes as part of my work'], 'Desarrollador ocasional') data_test['MainBranch'].drop_duplicates().sort_values() data_test.to_csv('data_test.csv', index=False) ``` <div style="background-color: #EDF7FF; border-color: #7C9DBF; border-left: 5px solid #7C9DBF; padding: 0.5em;"> <strong>Variable Age1stCode:</strong> </div> ``` data_test['Age1stCode'].drop_duplicates().sort_values() data_test['Age1stCode'].value_counts() data_test['Age1stCode'] = data_test['Age1stCode'].replace(['11 - 17 years'], '11-17') data_test['Age1stCode'] = data_test['Age1stCode'].replace(['18 - 24 years'], '18-24') data_test['Age1stCode'] = data_test['Age1stCode'].replace(['25 - 34 years'], '25-34') data_test['Age1stCode'] = data_test['Age1stCode'].replace(['35 - 44 years'], '35-44') data_test['Age1stCode'] = data_test['Age1stCode'].replace(['45 - 54 years'], '45-54') data_test['Age1stCode'] = data_test['Age1stCode'].replace(['5 - 10 years'], '5-10') data_test['Age1stCode'] = data_test['Age1stCode'].replace(['55 - 64 years'], '55-64') data_test['Age1stCode'] = data_test['Age1stCode'].replace(['Older than 64 years'], '> 64') data_test['Age1stCode'] = data_test['Age1stCode'].replace(['Younger than 5 years'], '< 5') data_test['Age1stCode'].value_counts() ``` <div style="background-color: #EDF7FF; border-color: #7C9DBF; border-left: 5px solid #7C9DBF; padding: 0.5em;"> <strong>Variable YearsCode:</strong> </div> ``` data_test['YearsCode'] = data_test['YearsCode'].replace(['More than 50 years'], 50) data_test['YearsCode'] = data_test['YearsCode'].replace(['Less than 1 year'], 1) ``` <div style="background-color: #EDF7FF; border-color: #7C9DBF; border-left: 5px solid #7C9DBF; padding: 0.5em;"> <strong>Variable YearsCodePro:</strong> </div> ``` data_test['YearsCodePro'] = data_test['YearsCodePro'].replace(['More than 50 years'], 50) data_test['YearsCodePro'] = data_test['YearsCodePro'].replace(['Less than 1 year'], 1) ``` <div style="background-color: #EDF7FF; border-color: #7C9DBF; border-left: 5px solid #7C9DBF; padding: 0.5em;"> <strong>Variable OpSys:</strong> </div> ``` data_test['OpSys'].value_counts() data_test['OpSys'] = data_test['OpSys'].replace(['Windows Subsystem for Linux (WSL)'], 'Windows') data_test['OpSys'] = data_test['OpSys'].replace(['Linux-based'], 'Linux') data_test['OpSys'] = data_test['OpSys'].replace(['Other (please specify)'], 'Otro') data_test['OpSys'].value_counts() ``` <div style="background-color: #EDF7FF; border-color: #7C9DBF; border-left: 5px solid #7C9DBF; padding: 0.5em;"> <strong>Variable Age:</strong> </div> ``` data_test['Age'].value_counts() data_test['Age'] = data_test['Age'].replace(['25-34 years old'], '25-34') data_test['Age'] = data_test['Age'].replace(['35-44 years old'], '35-44') data_test['Age'] = data_test['Age'].replace(['18-24 years old'], '18-24') data_test['Age'] = data_test['Age'].replace(['45-54 years old'], '45-54') data_test['Age'] = data_test['Age'].replace(['55-64 years old'], '55-64') data_test['Age'] = data_test['Age'].replace(['Under 18 years old'], '< 18') data_test['Age'] = data_test['Age'].replace(['65 years or older'], '>= 65') data_test['Age'] = data_test['Age'].replace(['Prefer 
not to say'], 'No definido') data_test['Age'] = data_test['Age'].replace(['25-34 years old'], '25-34') data_test['Age'].value_counts() ``` <div style="background-color: #EDF7FF; border-color: #7C9DBF; border-left: 5px solid #7C9DBF; padding: 0.5em;"> <strong>Variable Gender:</strong> </div> ``` data_test['Gender'].value_counts() data_test['Gender'] = data_test['Gender'].replace(['Man'], 'Hombre') data_test['Gender'] = data_test['Gender'].replace(['Woman'], 'Mujer') data_test['Gender'] = data_test['Gender'].replace(['Non-binary, genderqueer, or gender non-conforming'], 'No binario u otro') data_test['Gender'] = data_test['Gender'].replace(['Man;Non-binary, genderqueer, or gender non-conforming'], 'No binario u otro') data_test['Gender'] = data_test['Gender'].replace(['Man;Or, in your own words:'], 'Hombre') data_test['Gender'] = data_test['Gender'].replace(['Or, in your own words:'], 'No definido') data_test['Gender'] = data_test['Gender'].replace(['Woman;Non-binary, genderqueer, or gender non-conforming'], 'No binario u otro') data_test['Gender'] = data_test['Gender'].replace(['Man;Woman'], 'No definido') data_test['Gender'] = data_test['Gender'].replace(['Man;Woman;Non-binary, genderqueer, or gender non-conforming;Or, in your own words:'], 'No binario u otro') data_test['Gender'] = data_test['Gender'].replace(['Non-binary, genderqueer, or gender non-conforming;Or, in your own words:'], 'No binario u otro') data_test['Gender'] = data_test['Gender'].replace(['Man;Woman;Non-binary, genderqueer, or gender non-conforming'], 'No binario u otro') data_test['Gender'] = data_test['Gender'].replace(['Prefer not to say'], 'No definido') data_test['Gender'].value_counts() ``` <div style="background-color: #EDF7FF; border-color: #7C9DBF; border-left: 5px solid #7C9DBF; padding: 0.5em;"> <strong>Variable Trans:</strong> </div> ``` data_test['Trans'].value_counts() data_test['Trans'] = data_test['Trans'].replace(['Yes'], 'Si') data_test['Trans'] = data_test['Trans'].replace(['Prefer not to say'], 'No definido') data_test['Trans'] = data_test['Trans'].replace(['Or, in your own words:'], 'No definido') data_test['Trans'].value_counts() ``` <div style="background-color: #EDF7FF; border-color: #7C9DBF; border-left: 5px solid #7C9DBF; padding: 0.5em;"> <strong>Variable MentalHealth:</strong> </div> ``` data_test['MentalHealth'].value_counts() from re import search def choose_mental_health(cell_mental_health): val_mental_health_exceptions = ["Or, in your own words:"] if cell_mental_health == "Or, in your own words:": return val_mental_health_exceptions[0] if search(";", cell_mental_health): row_mental_health_values = cell_mental_health.split(';', 10) first_val = row_mental_health_values[0] return first_val else: return cell_mental_health data_test['MentalHealth'] = data_test['MentalHealth'].apply(choose_mental_health) data_test['MentalHealth'].value_counts() data_test['MentalHealth'] = data_test['MentalHealth'].replace(['None of the above'], 'Ninguna de las mencionadas') data_test['MentalHealth'] = data_test['MentalHealth'].replace(['I have a concentration and/or memory disorder (e.g. ADHD)'], 'Desorden de concentración o memoria') data_test['MentalHealth'] = data_test['MentalHealth'].replace(['I have a mood or emotional disorder (e.g. 
depression, bipolar disorder)'], 'Desorden emocional') data_test['MentalHealth'] = data_test['MentalHealth'].replace(['I have an anxiety disorder'], 'Desorden de ansiedad') data_test['MentalHealth'] = data_test['MentalHealth'].replace(['Prefer not to say'], 'No definido') data_test['MentalHealth'] = data_test['MentalHealth'].replace(["I have autism / an autism spectrum disorder (e.g. Asperger's)"], 'Tipo de autismo') data_test['MentalHealth'] = data_test['MentalHealth'].replace(['Or, in your own words:'], 'No definido') data_test['MentalHealth'].value_counts() ``` # 2. Selección de campos para subdatasets Se seleccionarán los campos adecuados para responder a cada una de las cuestiones que se plantearon en la primera parte de la práctica. ### 2.1. Según la autodeterminación de la etnia, ¿Qué etnia tiene un mayor sueldo anual? Se seleccionarán los campos adecuados para responder a esta pregunta ``` data_etnia = data_test[['Country', 'Ethnicity', 'ConvertedCompYearly']] data_etnia.head() df_data_etnia = data_etnia.copy() def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lower) return mask mask = remove_outliers(df_data_etnia['ConvertedCompYearly'], 0.1) print(df_data_etnia[mask]) df_data_etnia_no_outliers = df_data_etnia[mask] df_data_etnia_no_outliers = df_data_etnia_no_outliers.copy() df_data_etnia_no_outliers['ConvertedCompYearlyCategorical'] = 'ALTO' df_data_etnia_no_outliers.loc[(df_data_etnia_no_outliers['ConvertedCompYearly'] >= 0) & (df_data_etnia_no_outliers['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO' df_data_etnia_no_outliers.loc[(df_data_etnia_no_outliers['ConvertedCompYearly'] > 32747) & (df_data_etnia_no_outliers['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO' print(df_data_etnia_no_outliers) df_data_etnia_alto = df_data_etnia_no_outliers[df_data_etnia_no_outliers['ConvertedCompYearlyCategorical'] == 'ALTO'] df_data_etnia_alto = df_data_etnia_alto[['Ethnicity', 'ConvertedCompYearlyCategorical']] df_flourish = df_data_etnia_alto['Ethnicity'].value_counts().to_frame('counts').reset_index() df_flourish df_flourish.to_csv('001_df_flourish.csv', index=False) df_data_etnia_alto.to_csv('001_df_data_etnia_alto.csv', index=False) df_data_etnia.to_csv('001_data_etnia_categorical.csv', index=False) data_etnia.to_csv('001_data_etnia.csv', index=False) ``` ### 2.2. ¿Cuáles son los porcentajes de programadores que trabajan a tiempo completo, medio tiempo o freelance? Se seleccionarán los campos adecuados para responder a esta pregunta ``` data_time_work_dev = data_test[['Country', 'Employment', 'ConvertedCompYearly', 'EdLevel', 'Age']] data_time_work_dev.head() df_flourish_002 = data_time_work_dev['Employment'].value_counts().to_frame('counts').reset_index() df_flourish_002 df_flourish_002['counts'] = (df_flourish_002['counts'] * 100 ) / data_time_work_dev.shape[0] df_flourish_002 df_flourish_002['counts'] = df_flourish_002['counts'].round(2) df_flourish_002 df_flourish_002.to_csv('002_df_flourish.csv', index=False) ``` ### 2.3. ¿Cuáles son los países con mayor número de programadores profesionales que son activos en la comunidad Stack Overflow? 
Se seleccionarán los campos adecuados para responder a esta pregunta ``` data_pro_dev_active_so = data_test[['Country', 'Employment', 'MainBranch', 'EdLevel', 'DevType', 'Age']] data_pro_dev_active_so.head() df_flourish_003 = data_pro_dev_active_so['Country'].value_counts().sort_values(ascending=False).head(10) df_flourish_003 = df_flourish_003.to_frame() df_flourish_003 = df_flourish_003.reset_index() df_flourish_003.columns = ["País", "# Programadores Profesionales"] df_flourish_003.to_csv('003_df_flourish_003.csv', index=False) ``` ### 2.4. ¿Cuál es el nivel educativo que mayores ingresos registra entre los encuestados? Se seleccionarán los campos adecuados para responder a esta pregunta ``` data_edlevel_income = data_test[['ConvertedCompYearly', 'EdLevel']] data_edlevel_income.head() df_data_edlevel_income = data_edlevel_income.copy() def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lower) return mask mask = remove_outliers(df_data_edlevel_income['ConvertedCompYearly'], 0.1) print(df_data_edlevel_income[mask]) df_data_edlevel_income = df_data_edlevel_income[mask] df_data_edlevel_income['ConvertedCompYearlyCategorical'] = 'ALTO' df_data_edlevel_income.loc[(df_data_edlevel_income['ConvertedCompYearly'] >= 0) & (df_data_edlevel_income['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO' df_data_edlevel_income.loc[(df_data_edlevel_income['ConvertedCompYearly'] > 32747) & (df_data_edlevel_income['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO' print(df_data_edlevel_income) df_data_edlevel_income = df_data_edlevel_income[df_data_edlevel_income['ConvertedCompYearlyCategorical'] == 'ALTO'] df_data_edlevel_income = df_data_edlevel_income[['EdLevel', 'ConvertedCompYearlyCategorical']] df_flourish_004 = df_data_edlevel_income['EdLevel'].value_counts().to_frame('counts').reset_index() df_flourish_004 df_flourish_004.to_csv('004_df_flourish.csv', index=False) ``` ### 2.5. ¿Existe brecha salarial entre hombres y mujeres u otros géneros?, y de ¿Cuánto es la diferencia? ¿Cuáles son los peores países en cuanto a brecha salarial? ¿Cuáles son los países que han reducido esta brecha salarial entre programadores? 
Se seleccionarán los campos adecuados para responder a esta pregunta ``` data_wage_gap = data_test[['Country', 'ConvertedCompYearly', 'Gender']] data_wage_gap.head() df_data_wage_gap = data_wage_gap.copy() def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lower) return mask mask = remove_outliers(df_data_wage_gap['ConvertedCompYearly'], 0.1) print(df_data_wage_gap[mask]) df_data_wage_gap = df_data_wage_gap[mask] df_data_wage_gap['ConvertedCompYearlyCategorical'] = 'ALTO' df_data_wage_gap.loc[(df_data_wage_gap['ConvertedCompYearly'] >= 0) & (df_data_wage_gap['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO' df_data_wage_gap.loc[(df_data_wage_gap['ConvertedCompYearly'] > 32747) & (df_data_wage_gap['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO' print(df_data_wage_gap) df_data_wage_gap = df_data_wage_gap[df_data_wage_gap['ConvertedCompYearlyCategorical'].isin(['ALTO', 'MEDIO'])] df_data_wage_gap = df_data_wage_gap[['Country', 'Gender', 'ConvertedCompYearlyCategorical']] df_data_wage_gap.to_csv('005_df_data_wage_gap.csv', index=False) df_data_wage_gap['ConvertedCompYearlyCategorical'].drop_duplicates().sort_values() df_data_wage_gap['Gender'].drop_duplicates().sort_values() df_data_wage_gap['Country'].drop_duplicates().sort_values() df_data_wage_gap1 = df_data_wage_gap.copy() df_flourish_005 = df_data_wage_gap1.groupby(['Country', 'Gender']).size().unstack(fill_value=0).sort_values('Hombre') df_flourish_005 = df_flourish_005.apply(lambda x: pd.concat([x.head(40), x.tail(5)])) df_flourish_005.to_csv('005_flourish_data.csv', index=True) ``` ### 2.6. ¿Cuáles son los ingresos promedios según los rangos de edad? ¿Cuál es el rango de edad con el mejor y peor ingreso? Se seleccionarán los campos adecuados para responder a esta pregunta ``` data_age_income = data_test[['ConvertedCompYearly', 'Age']] data_age_income.head() df_data_age_income = data_age_income.copy() def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lower) return mask mask = remove_outliers(df_data_age_income['ConvertedCompYearly'], 0.1) print(df_data_age_income[mask]) df_data_age_income = df_data_age_income[mask] df_data_age_income1 = df_data_age_income.copy() df_data_age_income1.to_csv('006_df_data_age_income1.csv', index=False) grouped_df = df_data_age_income1.groupby("Age") average_df = grouped_df.mean() average_df df_flourish_006 = average_df.copy() df_flourish_006.to_csv('006_df_flourish_006.csv', index=True) ``` ### 2.7. ¿Cuáles son las tecnologías que permiten tener un mejor ingreso salarial anual? 
Se seleccionarán los campos adecuados para responder a esta pregunta ``` data_techs_best_income1 = data_test[['ConvertedCompYearly', 'LanguageHaveWorkedWith', 'DatabaseHaveWorkedWith', 'PlatformHaveWorkedWith', 'WebframeHaveWorkedWith', 'MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith', 'NEWCollabToolsHaveWorkedWith']] data_techs_best_income1.head() data_techs_best_income1['AllTechs'] = data_techs_best_income1['LanguageHaveWorkedWith'].map(str) + ';' + data_techs_best_income1['DatabaseHaveWorkedWith'].map(str) + ';' + data_techs_best_income1['PlatformHaveWorkedWith'].map(str) + ';' + data_techs_best_income1['WebframeHaveWorkedWith'].map(str) + ';' + data_techs_best_income1['MiscTechHaveWorkedWith'].map(str) + ';' + data_techs_best_income1['ToolsTechHaveWorkedWith'].map(str) + ';' + data_techs_best_income1['NEWCollabToolsHaveWorkedWith'].map(str) print (data_techs_best_income1) df_data_techs_best_income = data_techs_best_income1[['ConvertedCompYearly', 'AllTechs']].copy() df_data_techs_best_income1 = df_data_techs_best_income.copy() def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lower) return mask mask = remove_outliers(df_data_techs_best_income1['ConvertedCompYearly'], 0.1) print(df_data_techs_best_income1[mask]) df_data_techs_best_income1 = df_data_techs_best_income1[mask] df_data_techs_best_income1['ConvertedCompYearlyCategorical'] = 'ALTO' df_data_techs_best_income1.loc[(df_data_techs_best_income1['ConvertedCompYearly'] >= 0) & (df_data_techs_best_income1['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO' df_data_techs_best_income1.loc[(df_data_techs_best_income1['ConvertedCompYearly'] > 32747) & (df_data_techs_best_income1['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO' print(df_data_techs_best_income1) df_data_techs_best_income1 = df_data_techs_best_income1[df_data_techs_best_income1['ConvertedCompYearlyCategorical'].isin(['ALTO', 'MEDIO'])] df_data_techs_best_income1['AllTechs'] = df_data_techs_best_income1['AllTechs'].str.replace(' ', '') df_data_techs_best_income1['AllTechs'] = df_data_techs_best_income1['AllTechs'].str.replace(';', ' ') df_counts = df_data_techs_best_income1['AllTechs'].str.split(expand=True).stack().value_counts().rename_axis('Tech').reset_index(name='Count') df_counts.head(10) df_data_techs_best_income_007 = df_counts.head(10) df_data_techs_best_income_007.to_csv('007_df_data_techs_best_income.csv', index=False) ``` ### 2.8. ¿Cuántas tecnologías en promedio domina un programador profesional? 
Se seleccionarán los campos adecuados para responder a esta pregunta ``` data_techs_dev_pro1 = data_test[['DevType', 'LanguageHaveWorkedWith', 'DatabaseHaveWorkedWith', 'PlatformHaveWorkedWith', 'WebframeHaveWorkedWith', 'MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith', 'NEWCollabToolsHaveWorkedWith']] data_techs_dev_pro1.head() data_techs_dev_pro1['AllTechs'] = data_techs_dev_pro1['LanguageHaveWorkedWith'].map(str) + ';' + data_techs_dev_pro1['DatabaseHaveWorkedWith'].map(str) + ';' + data_techs_dev_pro1['PlatformHaveWorkedWith'].map(str) + ';' + data_techs_dev_pro1['WebframeHaveWorkedWith'].map(str) + ';' + data_techs_dev_pro1['MiscTechHaveWorkedWith'].map(str) + ';' + data_techs_best_income1['ToolsTechHaveWorkedWith'].map(str) + ';' + data_techs_dev_pro1['NEWCollabToolsHaveWorkedWith'].map(str) print (data_techs_dev_pro1) df_data_techs_dev_pro = data_techs_dev_pro1[['DevType', 'AllTechs']].copy() df_data_techs_dev_pro = df_data_techs_dev_pro[df_data_techs_dev_pro['DevType'].isin(['Desarrollador full-stack', 'Desarrollador front-end', 'Desarrollador móvil', 'Desarrollador back-end', 'Desarrollador Escritorio', 'Desarrollador de QA o Test', 'Desarrollador de aplicaciones embebidas', 'Administrador de base de datos', 'Desarrollador de juegos o gráfico'])] df_data_techs_dev_pro.info() df_data_techs_dev_pro1 = df_data_techs_dev_pro.copy() df_data_techs_dev_pro1.to_csv('008_df_data_techs_dev_pro1.csv', index=True) def convert_row_to_list(lst): return lst.split(';') df_data_techs_dev_pro1['ListTechs'] = df_data_techs_dev_pro1['AllTechs'].apply(convert_row_to_list) df_data_techs_dev_pro1['LenListTechs'] = df_data_techs_dev_pro1['ListTechs'].map(len) df_flourish_008 = df_data_techs_dev_pro1[['DevType', 'LenListTechs']].copy() df_flourish_008 grouped_df = df_flourish_008.groupby("DevType") average_df_008 = round(grouped_df.mean()) df_flourish_008 = average_df_008.copy() df_flourish_008.to_csv('008_df_flourish_008.csv', index=True) ``` ### 2.9. ¿En qué rango de edad se inició la mayoría de los programadores en la programación? Se seleccionarán los campos adecuados para responder a esta pregunta ``` data_age1stcode_dev_pro1 = data_test[['Age1stCode']] data_age1stcode_dev_pro1.head() data_age1stcode_dev_pro1 = data_age1stcode_dev_pro1['Age1stCode'].value_counts().to_frame('counts').reset_index() data_age1stcode_dev_pro1.to_csv('009_flourish_data.csv', index=False) ``` ### 2.10. ¿Cuántos años como programadores se requiere para obtener un ingreso salarial alto? 
Se seleccionarán los campos adecuados para responder a esta pregunta ``` data_yearscode_high_income1 = data_test[['ConvertedCompYearly', 'YearsCode']] data_yearscode_high_income1.head() df_data_yearscode_high_income = data_yearscode_high_income1.copy() def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lower) return mask mask = remove_outliers(df_data_yearscode_high_income['ConvertedCompYearly'], 0.1) print(df_data_yearscode_high_income[mask]) df_data_yearscode_high_income = df_data_yearscode_high_income[mask] df_data_yearscode_high_income['ConvertedCompYearlyCategorical'] = 'ALTO' df_data_yearscode_high_income.loc[(df_data_yearscode_high_income['ConvertedCompYearly'] >= 0) & (df_data_yearscode_high_income['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO' df_data_yearscode_high_income.loc[(df_data_yearscode_high_income['ConvertedCompYearly'] > 32747) & (df_data_yearscode_high_income['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO' print(df_data_yearscode_high_income) df_data_yearscode_high_income.to_csv('010_df_flourish.csv', index=False) df_data_yearscode_high_income['ConvertedCompYearlyCategorical'].value_counts() df_flourish_010 = df_data_yearscode_high_income[['YearsCode', 'ConvertedCompYearlyCategorical']].copy() df_flourish_010.head() df_flourish_010['YearsCode'] = pd.to_numeric(df_flourish_010['YearsCode']) df_flourish_010.info() grouped_df_010 = df_flourish_010.groupby("ConvertedCompYearlyCategorical") average_df_010 = round(grouped_df_010.mean()) average_df_010 average_df_010.to_csv('010_flourish_data.csv', index=True) ``` ### 2.11. ¿Cuáles son los perfiles que registran los mejores ingresos? Se seleccionarán los campos adecuados para responder a esta pregunta ``` data_profiles_dev_high_income1 = data_test[['ConvertedCompYearly', 'DevType']].copy() data_profiles_dev_high_income1.head() df_data_profiles_dev_high_income = data_profiles_dev_high_income1.copy() def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lower) return mask mask = remove_outliers(df_data_profiles_dev_high_income['ConvertedCompYearly'], 0.1) print(df_data_profiles_dev_high_income[mask]) df_data_profiles_dev_high_income = df_data_profiles_dev_high_income[mask] df_data_profiles_dev_high_income['ConvertedCompYearlyCategorical'] = 'ALTO' df_data_profiles_dev_high_income.loc[(df_data_profiles_dev_high_income['ConvertedCompYearly'] >= 0) & (df_data_profiles_dev_high_income['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO' df_data_profiles_dev_high_income.loc[(df_data_profiles_dev_high_income['ConvertedCompYearly'] > 32747) & (df_data_profiles_dev_high_income['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO' print(df_data_profiles_dev_high_income) df_data_profiles_dev_high_income['ConvertedCompYearlyCategorical'].value_counts() df_flourish_011 = df_data_profiles_dev_high_income[['DevType', 'ConvertedCompYearlyCategorical']].copy() df_flourish_011 = df_flourish_011[df_flourish_011['ConvertedCompYearlyCategorical'].isin(['ALTO'])] df_flourish_011.info() df_data_flourish_011 = df_flourish_011['DevType'].value_counts().to_frame('counts').reset_index() df_data_flourish_011 = df_data_flourish_011.head(10) df_data_flourish_011 df_data_flourish_011.to_csv('011_flourish_data.csv', index=False) ``` ### 2.12. ¿Cuáles son las 10 tecnologías más usadas entre los programadores por países? 
Se seleccionarán los campos adecuados para responder a esta pregunta ``` data_10_techs_popular_dev_countries = data_test[['Country', 'LanguageHaveWorkedWith', 'DatabaseHaveWorkedWith', 'PlatformHaveWorkedWith', 'WebframeHaveWorkedWith', 'MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith', 'NEWCollabToolsHaveWorkedWith']] data_10_techs_popular_dev_countries.head() data_10_techs_popular_dev_countries['AllTechs'] = data_10_techs_popular_dev_countries['LanguageHaveWorkedWith'].map(str) + ';' + data_10_techs_popular_dev_countries['DatabaseHaveWorkedWith'].map(str) + ';' + data_10_techs_popular_dev_countries['PlatformHaveWorkedWith'].map(str) + ';' + data_10_techs_popular_dev_countries['WebframeHaveWorkedWith'].map(str) + ';' + data_10_techs_popular_dev_countries['MiscTechHaveWorkedWith'].map(str) + ';' + data_10_techs_popular_dev_countries['ToolsTechHaveWorkedWith'].map(str) + ';' + data_10_techs_popular_dev_countries['NEWCollabToolsHaveWorkedWith'].map(str) print (data_10_techs_popular_dev_countries) df_data_10_techs_popular_dev_countries = data_10_techs_popular_dev_countries[['Country', 'AllTechs']].copy() df_data_10_techs_popular_dev_countries.head() df_data_10_techs_popular_dev_countries['AllTechs'] = df_data_10_techs_popular_dev_countries['AllTechs'].str.replace(' ', '') df_data_10_techs_popular_dev_countries['AllTechs'] = df_data_10_techs_popular_dev_countries['AllTechs'].str.replace(';', ' ') df_counts = df_data_10_techs_popular_dev_countries['AllTechs'].str.split(expand=True).stack().value_counts().rename_axis('Tech').reset_index(name='Count') df_counts data_10_techs_popular_dev_countries.to_csv('012_data_10_techs_popular_dev_countries.csv', index=False) ``` ### 2.13. ¿Cuáles el sistema operativo más usado entre los encuestados? Se seleccionarán los campos adecuados para responder a esta pregunta ``` df_data_so_devs = data_test[['OpSys']].copy() df_data_so_devs.tail() df_data_so_devs['OpSys'].drop_duplicates().sort_values() df_data_so_devs['OpSys'] = df_data_so_devs['OpSys'].replace(['Other (please specify):'], 'Otro') df_data_so_devs['OpSys'].value_counts() df_counts = df_data_so_devs['OpSys'].str.split(expand=True).stack().value_counts().rename_axis('OS').reset_index(name='Count') df_counts df_counts.to_csv('013_flourish_data.csv', index=False) ``` ### 2.14. ¿Qué proporción de programadores tiene algún desorden mental por país? 
Se seleccionarán los campos adecuados para responder a esta pregunta ``` data_devs_mental_health_countries = data_test[['Country', 'MentalHealth']] data_devs_mental_health_countries.head() data_devs_mental_health_countries['MentalHealth'].value_counts() df_data_devs_mental_health_countries = data_devs_mental_health_countries.copy() df_data_devs_mental_health_countries = df_data_devs_mental_health_countries[df_data_devs_mental_health_countries['MentalHealth'].isin(['Desorden de concentración o memoria', 'Desorden emocional', 'Desorden de ansiedad', 'Tipo de autismo'])] df_data_devs_mental_health_countries.head() df_data_flourish_014 = df_data_devs_mental_health_countries['Country'].value_counts().to_frame('counts').reset_index() df_data_flourish_014 = df_data_flourish_014.head(10) df_data_flourish_014 df_data_flourish_014_best_ten = df_data_devs_mental_health_countries[df_data_devs_mental_health_countries['Country'].isin(['United States of America', 'United Kingdom of Great Britain and Northern Ireland', 'Brazil', 'Canada', 'India', 'Germany', 'Australia', 'Netherlands', 'Poland', 'Turkey'])] df = df_data_flourish_014_best_ten.copy() df df1 = pd.crosstab(df['Country'], df['MentalHealth']) df1 (df_data_devs_mental_health_countries.groupby(['Country', 'MentalHealth']).size() .sort_values(ascending=False) .reset_index(name='count') .drop_duplicates(subset='Country')) df_flourish_data_014 = (df_data_devs_mental_health_countries.groupby(['Country', 'MentalHealth']).size() .sort_values(ascending=False) .reset_index(name='count')) df_flourish_data_014 = df_flourish_data_014.sort_values('Country') df_data_flourish_014.head(10).to_csv('014_flourish_data_014.csv', index=False) df1.to_csv('014_flourish_data_014.csv', index=True) ``` ### 2.15. ¿Cuáles son los países que tienen los mejores sueldos entre los programadores? Se seleccionarán los campos adecuados para responder a esta pregunta ``` df_best_incomes_countries = data_test[['Country', 'ConvertedCompYearly']].copy() df_best_incomes_countries def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lower) return mask mask = remove_outliers(df_best_incomes_countries['ConvertedCompYearly'], 0.1) print(df_best_incomes_countries[mask]) df_best_incomes_countries_no_outliers = df_best_incomes_countries[mask] df_best_incomes_countries_no_outliers1 = df_best_incomes_countries_no_outliers.copy() df_best_incomes_countries_no_outliers1['ConvertedCompYearlyCategorical'] = 'ALTO' df_best_incomes_countries_no_outliers1.loc[(df_best_incomes_countries_no_outliers1['ConvertedCompYearly'] >= 0) & (df_best_incomes_countries_no_outliers1['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO' df_best_incomes_countries_no_outliers1.loc[(df_best_incomes_countries_no_outliers1['ConvertedCompYearly'] > 32747) & (df_best_incomes_countries_no_outliers1['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO' print(df_best_incomes_countries_no_outliers1) df_best_incomes_countries_no_outliers1['ConvertedCompYearlyCategorical'].value_counts() df_best_incomes_countries_alto = df_best_incomes_countries_no_outliers1[df_best_incomes_countries_no_outliers1['ConvertedCompYearlyCategorical'] == 'ALTO'] df_alto = df_best_incomes_countries_alto[['Country', 'ConvertedCompYearlyCategorical']].copy() df_flourish_015 = df_alto['Country'].value_counts().to_frame('counts').reset_index() df_flourish_015.head(10) df_flourish_015.head(10).to_csv('015_flourish_data.csv', index=False) ``` ### 2.16. 
¿Cuáles son los 10 lenguajes de programación más usados entre los programadores? Se seleccionarán los campos adecuados para responder a esta pregunta ``` df_10_prog_languages_devs = data_test[['LanguageHaveWorkedWith']].copy() df_10_prog_languages_devs.head() df_10_prog_languages_devs['LanguageHaveWorkedWith'] = df_10_prog_languages_devs['LanguageHaveWorkedWith'].str.replace(';', ' ') df_counts_016 = df_10_prog_languages_devs['LanguageHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Languages').reset_index(name='Count') df_counts_016.head(10) df_counts_016.head(10).to_csv('016_flourish_data.csv', index=False) ``` ### 2.17. ¿Cuáles son las bases de datos más usadas entre los programadores? Se seleccionarán los campos adecuados para responder a esta pregunta ``` df_10_databases = data_test[['DatabaseHaveWorkedWith']].copy() df_10_databases.head() df_10_databases['DatabaseHaveWorkedWith'] = df_10_databases['DatabaseHaveWorkedWith'].str.replace(' ', '') df_10_databases['DatabaseHaveWorkedWith'] = df_10_databases['DatabaseHaveWorkedWith'].str.replace(';', ' ') df_counts_017 = df_10_databases['DatabaseHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Databases').reset_index(name='Count') df_counts_017.head(10) df_counts_017.head(10).to_csv('017_flourish_data.csv', index=False) ``` ### 2.18. ¿Cuáles son las plataformas más usadas entre los programadores? Se seleccionarán los campos adecuados para responder a esta pregunta ``` df_10_platforms = data_test[['PlatformHaveWorkedWith']].copy() df_10_platforms.head() df_10_platforms['PlatformHaveWorkedWith'] = df_10_platforms['PlatformHaveWorkedWith'].str.replace(' ', '') df_10_platforms['PlatformHaveWorkedWith'] = df_10_platforms['PlatformHaveWorkedWith'].str.replace(';', ' ') df_counts_018 = df_10_platforms['PlatformHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Platform').reset_index(name='Count') df_counts_018.head(10) df_counts_018.to_csv('018_flourish_data.csv', index=False) ``` ### 2.19. ¿Cuáles son los frameworks web más usados entre los programadores? Se seleccionarán los campos adecuados para responder a esta pregunta ``` df_10_web_frameworks = data_test[['WebframeHaveWorkedWith']].copy() df_10_web_frameworks.head() df_10_web_frameworks['WebframeHaveWorkedWith'] = df_10_web_frameworks['WebframeHaveWorkedWith'].str.replace(' ', '') df_10_web_frameworks['WebframeHaveWorkedWith'] = df_10_web_frameworks['WebframeHaveWorkedWith'].str.replace(';', ' ') df_counts_019 = df_10_web_frameworks['WebframeHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Web framework').reset_index(name='Count') df_counts_019.head(10) df_counts_019.to_csv('019_flourish_data.csv', index=False) ``` ### 2.20. ¿Cuáles son las herramientas tecnológicas más usadas entre los programadores? 
Se seleccionarán los campos adecuados para responder a esta pregunta ``` df_10_data_misc_techs = data_test[['MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith']].copy() df_10_data_misc_techs.head() df_10_data_misc_techs['AllMiscTechs'] = df_10_data_misc_techs['MiscTechHaveWorkedWith'].map(str) + ';' + df_10_data_misc_techs['ToolsTechHaveWorkedWith'].map(str) df_10_data_misc_techs.head() df_10_data_misc_techs['AllMiscTechs'] = df_10_data_misc_techs['AllMiscTechs'].str.replace(' ', '') df_10_data_misc_techs['AllMiscTechs'] = df_10_data_misc_techs['AllMiscTechs'].str.replace(';', ' ') df_counts_020 = df_10_data_misc_techs['AllMiscTechs'].str.split(expand=True).stack().value_counts().rename_axis('Tecnología').reset_index(name='# Programadores') df_counts_020.head(10) df_counts_020.head(10).to_csv('020_flourish_data.csv', index=False) ``` ### 2.21. ¿Cuáles son las herramientas colaborativas más usadas entre programadores? Se seleccionarán los campos adecuados para responder a esta pregunta ``` df_10_colab = data_test[['NEWCollabToolsHaveWorkedWith']].copy() df_10_colab.head() df_10_colab['NEWCollabToolsHaveWorkedWith'] = df_10_colab['NEWCollabToolsHaveWorkedWith'].str.replace(' ', '') df_10_colab['NEWCollabToolsHaveWorkedWith'] = df_10_colab['NEWCollabToolsHaveWorkedWith'].str.replace(';', ' ') df_counts_021 = df_10_colab['NEWCollabToolsHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Herramienta Colaborativa').reset_index(name='# Programadores') df_counts_021.head(10) df_counts_021.head(10).to_csv('021_flourish_data.csv', index=False) ``` ### 2.22. ¿Cuáles son los países con mayor número de programadores trabajando a tiempo completo? Se seleccionarán los campos adecuados para responder a esta pregunta ``` df_fulltime_employment = data_test[['Country', 'Employment']].copy() df_fulltime_employment.head() df_fulltime_employment.info() df_fulltime_only = df_fulltime_employment[df_fulltime_employment['Employment'] == 'Tiempo completo'] df_fulltime_only.head() df_flourish_022 = df_fulltime_only['Country'].value_counts().to_frame('# Programadores').reset_index() df_flourish_022.head(10) df_flourish_022.head(10).to_csv('022_flourish_data.csv', index=False) ```
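Most of the questions above repeat the same preprocessing: remove `ConvertedCompYearly` outliers with a 0.1 quantile cut and then bin the remaining values into BAJO / MEDIO / ALTO using the 32747 and 90000 thresholds. A reusable helper could remove that duplication; the sketch below mirrors those cells, and the function name `categorize_income` is ours.

```
def categorize_income(df, q=0.1, low=32747, high=90000):
    """Sketch: drop ConvertedCompYearly outliers (quantile cut) and add the
    BAJO / MEDIO / ALTO category column used throughout the questions above."""
    upper = df['ConvertedCompYearly'].quantile(1 - q)
    lower = df['ConvertedCompYearly'].quantile(q)
    out = df[(df['ConvertedCompYearly'] < upper) & (df['ConvertedCompYearly'] > lower)].copy()
    out['ConvertedCompYearlyCategorical'] = 'ALTO'
    out.loc[out['ConvertedCompYearly'] <= low, 'ConvertedCompYearlyCategorical'] = 'BAJO'
    out.loc[(out['ConvertedCompYearly'] > low) &
            (out['ConvertedCompYearly'] <= high), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
    return out

# Example: the country/income subset used in question 2.15.
df_example = categorize_income(data_test[['Country', 'ConvertedCompYearly']])
df_example['ConvertedCompYearlyCategorical'].value_counts()
```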
``` # This Python file uses the following encoding: utf-8 # Analise.ipynb # Github:@WeDias # MIT License # # Copyright (c) 2020 Wesley R. Dias # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from time import time from random import randint class Analisador: """Classe criada para obter os tempos de execução dos seguintes algoritmos de ordenação: 1.Built-in Python 2.Quicksort 3.Mergesort 4.Seleção """ _tempo_total = 0 _resultados = { 'nativo': {}, 'quicksort': {}, 'mergesort': {}, 'selecao': {} } # noinspection PyUnusedLocal def __init__(self, inicio: int = 2000, fim: int = 22000, passo: int = 2000, intervalo: tuple = (0, 20000)): """Serve para iniciar o processo de obtenção dos tempos de execução de cada algoritmo :param inicio: int, número inicial de elementos aleatórios da lista, padrão 2000 :param fim: int, número final de elementos aleatórios da lista, padrão 22000 :param passo: int, número de elementos incrementados a cada teste, padrão 2000 :param intervalo: tuple, intervalo de números aleatórios, padrão (0, 20000) """ inicio_exec = time() algoritmos = { 'nativo': self._nativo, 'quicksort': self._quicksort, 'mergesort': self._mergesort, 'selecao': self._selecao } for amostra in range(inicio, fim + 1, passo): array = [randint(intervalo[0], intervalo[1]) for a in range(amostra)] for nome_algo, algoritmo in algoritmos.items(): self._temporizador(nome_algo, algoritmo, array, amostra) self._tempo_total = time() - inicio_exec print(f'Tempo total de execução: {self._tempo_total:.2f}s\n') def resultado(self) -> dict: """Serve para retornar um dicionário com os dados de tempo de execução de cada algoritmo em cada caso de teste :return: dict, dados do tempo de execução de cada algoritmo """ return self._resultados @staticmethod def _temporizador(nome_algo: str, algoritmo, array: list, amostra: int) -> None: """Serve para registrar o tempo gasto para um algoritmo de ordenação ser executado e ordenar uma determinada lista :param nome_algo: str, nome do algoritmo :param algoritmo: function, algoritmo a ser executado :param array: list, lista de elementos a ser ordenada :param amostra: int, número de elementos da lista :return: None """ inicio_exec_algo = time() algoritmo(array) Analisador._resultados[nome_algo][amostra] = round(time() - inicio_exec_algo, 3) @staticmethod def _nativo(array: list) -> list: """Algoritmo de ordenação Built-in do Python, serve para ordenar listas :param array: list, lista de elementos a ser ordenada :return: list, lista de elementos ordenados """ return sorted(array) 
@staticmethod def _quicksort(array: list) -> list: """Algoritmo de ordenação Quicksort, serve para ordenar listas :param array: list, lista de elementos a ser ordenada :return: list, lista de elementos ordenados """ if len(array) <= 1: return array m = array[0] return Analisador._quicksort( [x for x in array if x < m]) + \ [x for x in array if x == m] + \ Analisador._quicksort([x for x in array if x > m]) @staticmethod def _selecao(array: list) -> list: """Algoritmo de ordenação Seleção, serve para ordenar listas :param array: list, lista de elementos a ser ordenada :return: list, lista de elementos ordenados """ r = [] while array: m = min(array) r.append(m) array.remove(m) return r @staticmethod def _mergesort(array: list) -> list: """Algoritmo de ordenação Mergesort, serve para ordenar listas :param array: list, lista de elementos a ser ordenada :return: list, lista de elementos ordenados """ if len(array) <= 1: return array else: m = len(array) // 2 e = Analisador._mergesort(array[:m]) d = Analisador._mergesort(array[m:]) return Analisador._merge(e, d) @staticmethod def _merge(e:list , d: list) -> list: """Serve para auxiliar no Mergesort :param e: list, Lista da esquerda :param d: list, Lista da direita :return: list, lista de elementos ordenados """ r = [] i, j = 0, 0 while i < len(e) and j < len(d): if e[i] <= d[j]: r.append(e[i]) i += 1 else: r.append(d[j]) j += 1 r += e[i:] r += d[j:] return r # Obtenção dos dados de execução dos algoritmos from pandas import DataFrame analise = Analisador() df = DataFrame.from_dict(analise.resultado()) print(df) # Geração do gráfico comparativo import matplotlib.pyplot as plt plt.figure(figsize=(16, 9)) plt.plot(df, marker='.') plt.title('Comparação entre algoritmos de ordenação', size=16) plt.legend(('Nativo (Python)', 'Quicksort', 'Mergesort', 'Seleção'), fontsize=14) plt.xlabel('Número de elementos', size=14) plt.ylabel('Tempo em segundos', size=14) plt.grid() plt.show() ```
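The chart compares absolute running times; the growth rate of each curve can also be estimated numerically. The sketch below fits a straight line to log10(time) versus log10(n) for each algorithm in the `df` DataFrame built above. Over this range of sizes a slope a little above 1 is roughly what one would expect from the O(n log n) algorithms, while a slope close to 2 points to the quadratic selection sort.

```
# Sketch: estimate each algorithm's empirical growth exponent by fitting
# log10(time) ~ slope * log10(n) + c over the sample sizes measured above.
import numpy as np

log_n = np.log10(df.index.to_numpy(dtype=float))
for column in df.columns:
    times = df[column].to_numpy(dtype=float)
    mask = times > 0  # ignore timings rounded down to 0.000 s
    slope = np.polyfit(log_n[mask], np.log10(times[mask]), 1)[0]
    print(f'{column}: empirical exponent ~ {slope:.2f}')
```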
# Pyber Analysis ### 4.3 Loading and Reading CSV files ``` # Add Matplotlib inline magic command %matplotlib inline # Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import matplotlib.dates as mdates # File to Load (Remember to change these) city_data_to_load = "Resources/city_data.csv" ride_data_to_load = "Resources/ride_data.csv" # Read the City and Ride Data city_data_df = pd.read_csv(city_data_to_load) ride_data_df = pd.read_csv(ride_data_to_load) ``` ### Merge the DataFrames ``` # Combine the data into a single dataset pyber_data_df = pd.merge(ride_data_df, city_data_df, how="left", on=["city", "city"]) # Display the data table for preview pyber_data_df ``` ## Deliverable 1: Get a Summary DataFrame ``` # 1. Get the total rides for each city type tot_rides_by_type = pyber_data_df.groupby(["type"]).count()["ride_id"] tot_rides_by_type # 2. Get the total drivers for each city type tot_drivers_by_type = city_data_df.groupby(["type"]).sum()["driver_count"] tot_drivers_by_type # 3. Get the total amount of fares for each city type tot_fares_by_type = pyber_data_df.groupby(["type"]).sum()["fare"] tot_fares_by_type # 4. Get the average fare per ride for each city type. avg_fare_by_type = round((tot_fares_by_type / tot_rides_by_type), 2) avg_fare_by_type # 5. Get the average fare per driver for each city type. avg_fare_per_driver_by_type = round((tot_fares_by_type / tot_drivers_by_type), 2) avg_fare_per_driver_by_type # 6. Create a PyBer summary DataFrame. pyber_summary_df = pd.DataFrame({ "Total Rides": tot_rides_by_type, "Total Drivers": tot_drivers_by_type, "Total Fares": tot_fares_by_type, "Average Fare per Ride": avg_fare_by_type, "Average Fare per Driver": avg_fare_per_driver_by_type }) pyber_summary_df.dtypes # 7. Cleaning up the DataFrame. Delete the index name pyber_summary_df.index.name = None pyber_summary_df # 8. Format the columns. pyber_summary_df['Total Rides'] = pyber_summary_df['Total Rides'].map('{:,}'.format) pyber_summary_df['Total Drivers'] = pyber_summary_df['Total Drivers'].map('{:,}'.format) pyber_summary_df['Total Fares'] = pyber_summary_df['Total Fares'].map('${:,}'.format) pyber_summary_df['Average Fare per Ride'] = pyber_summary_df['Average Fare per Ride'].map('${:,}'.format) pyber_summary_df['Average Fare per Driver'] = pyber_summary_df['Average Fare per Driver'].map('${:,}'.format) pyber_summary_df ``` ## Deliverable 2. Create a multiple line plot that shows the total weekly of the fares for each type of city. ``` # 1. Read the merged DataFrame pyber_data_df # 2. Using groupby() to create a new DataFrame showing the sum of the fares # for each date where the indices are the city type and date. tot_fares_by_date_df = pd.DataFrame(pyber_data_df.groupby(["type", "date"]).sum()["fare"]) tot_fares_by_date_df # 3. Reset the index on the DataFrame you created in #1. This is needed to use the 'pivot()' function. # df = df.reset_index() tot_fares_by_date_df = tot_fares_by_date_df.reset_index() tot_fares_by_date_df # 4. Create a pivot table with the 'date' as the index, the columns ='type', and values='fare' # to get the total fares for each type of city by the date. pyber_pivot = tot_fares_by_date_df.pivot(index="date", columns="type", values="fare") pyber_pivot # 5. Create a new DataFrame from the pivot table DataFrame using loc on the given dates, '2019-01-01':'2019-04-29'. pyber_pivot_df = pyber_pivot.loc['2019-01-01':'2019-04-29'] pyber_pivot_df # 6. Set the "date" index to datetime datatype. 
# This is necessary to use the resample() method in Step 8.
pyber_pivot_df.index = pd.to_datetime(pyber_pivot_df.index)

# 7. Check that the datatype for the index is datetime using df.info()
pyber_pivot_df.info()

# 8. Create a new DataFrame using the "resample()" function by week 'W' and get the sum of the fares for each week.
tot_fares_by_week_df = pyber_pivot_df.resample('W').sum()
tot_fares_by_week_df

# 9. Using the object-oriented interface method, plot the resampled DataFrame using the df.plot() function.

# Import the style from Matplotlib.
from matplotlib import style
# Use the graph style fivethirtyeight.
style.use('fivethirtyeight')

fig, ax = plt.subplots()
tot_fares_by_week_df.plot(figsize=(20, 7), ax=ax)
ax.set_title("Total Fares by City Type")
ax.set_ylabel("Fares ($USD)")
ax.set_xlabel("Month (Weekly Fare Totals)")
ax.legend(labels=["Rural", "Suburban", "Urban"], loc="center")
plt.savefig("analysis/PyBer_fare_summary.png")
plt.show()
```
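As a side note, the same weekly totals can be reached in one chained expression, without building the intermediate pivot table. The sketch below is only an alternative illustration: it assumes the merged `pyber_data_df` from above and may differ slightly at the boundary weeks because the date filter is applied before resampling.

```
# Alternative route to the weekly fare totals using pd.Grouper (illustrative only)
alt = pyber_data_df.copy()
alt["date"] = pd.to_datetime(alt["date"])
mask = (alt["date"] >= "2019-01-01") & (alt["date"] <= "2019-04-29")
weekly_alt = (alt.loc[mask]
                 .groupby(["type", pd.Grouper(key="date", freq="W")])["fare"]
                 .sum()
                 .unstack("type"))
weekly_alt.head()
```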
# Descriptive Statistics and Data Visualization

This notebook presents the descriptive statistics of the dataset, with visualizations. We analyse the behaviour of a few characteristics that are crucial when buying or selling used vehicles.

```
from Utils import *
from tqdm import tqdm
from matplotlib import pyplot as plt
import seaborn as sns

pd.set_option('display.max_colwidth', 100)

DATASET = "../datasets/clean_vehicles_2.csv"
df = pd.read_csv(DATASET)
df.describe()
```

## Univariate statistics

Here we look at how some of the variables are distributed.

### Year of manufacture

```
## Mean, standard deviation, median and mode of the year of manufacture
print(
    "Vehicle year:\n"
    "Mean: "+floatStr(df['year'].mean())+"\n"+
    "Standard deviation: "+floatStr(df['year'].std())+"\n"+
    "Median: "+floatStr(df['year'].median())+"\n"+
    "IQR: "+floatStr(df['year'].describe()[6] - df['year'].describe()[4])+"\n"+
    "Mode: "+floatStr(df['year'].mode().loc[0])
)
```

Here the median is larger than the mean, which suggests that this variable does not follow a normal distribution. It indicates that some very old cars are being sold, which skews the curve. To verify this, let us plot the histogram.

```
## Plot the histogram of the distribution of the vehicles' year of manufacture
bars = df[df['year']> 0].year.max() - df[df['year']> 0].year.min()
df[df['year']> 0].year.hist(bins = int(bars))
```

However, this plot does not give us a good visualization. The listing includes some cars aimed at collectors, which is not the profile we want to study. So, taking the year 1985 as a threshold, we look at the histogram of the distribution of cars sold "for regular use". Now we can see that most of the cars sold were manufactured after 2000.

```
## Plot the histogram of the cars' year of manufacture, limited to 1985 onwards
bars = df['year'].max() - 1985
df[df['year']> 1985].year.hist(bins = int(bars))
```

### Vehicle resale price

```
## Univariate statistics of the vehicle price values
print(
    "Vehicle price:\n"
    "Mean: "+floatStr(df[df['price'] > 0].price.mean())+"\n"+
    "Standard deviation: "+floatStr(df[df['price'] > 0].price.std())+"\n"+
    "Median: "+floatStr(df[df['price'] > 0].price.median())+"\n"+
    "IQR: "+floatStr(df['price'].describe()[6] - df['price'].describe()[4])+"\n"+
    "Mode: "+floatStr(df[df['price'] > 0].price.mode().loc[0])
)
```

Here we find a very large spread in the data, which suggests a highly varied and asymmetric price distribution. Because of this, a histogram of all the data would not be readable. We can work around it in two ways:

* We could use log10 to get a sense of the order of magnitude, but we would not extract much information, since most values would fall into log10(x) = 4.
* Another alternative is to plot a subset of the prices. So, after some analysis, we plot only values from 0 to $100,000.

```
sns.distplot(df[(df['price'] > 0) & (df['price'] < 100000)].price, bins = 100,norm_hist = False, hist=True, kde=False)
```

### Current odometer reading (miles driven by the vehicle)

```
## Univariate statistics of the odometer readings.
## Note that we discard null values for this analysis
print(
    "Vehicle odometer:\n"
    "Mean: "+floatStr(df[df['odometer'] > 0].odometer.mean())+"\n"+
    "Standard deviation: "+floatStr(df[df['odometer'] > 0].odometer.std())+"\n"+
    "Median: "+floatStr(df[df['odometer'] > 0].odometer.median())+"\n"+
    "IQR: "+floatStr(df['odometer'].describe()[6] - df['odometer'].describe()[4])+"\n"+
    "Mode: "+floatStr(df[df['odometer'] > 0].odometer.mode().loc[0])
)
```

Here, too, we have a wide range of values. Only 492 of them are above 800,000 recorded miles. For the purposes of this analysis, we will work within this range.

```
sns.distplot(df[(df['odometer'] > 0) & (df['odometer'] < 400000)].odometer, bins = 100,norm_hist = False, hist=True, kde=False)
```

### Visualizing the number of listings per vehicle manufacturer

We do a visual analysis to find out which brands are the most popular in the used-car market.

```
## Plot the split of the most advertised brands
manufacturers = df['manufacturer'].value_counts().drop(df['manufacturer'].value_counts().index[8]).drop(df['manufacturer'].value_counts().index[13:])
sns.set()
plt.figure(figsize=(10,5))
sns.barplot(x=manufacturers.index, y=manufacturers)
print("The 3 most advertised brands (Ford, Chevrolet, Toyota) account for "
      +str(round(sum(df['manufacturer'].value_counts().values[0:3])/df['manufacturer'].count()*100,2))
      +"% of this market.")

filter_list = ['ford', 'chevrolet', 'toyota', 'nissan', 'honda']
filtereddf = df[df.manufacturer.isin(filter_list)]
ax = sns.boxplot(x="manufacturer", y="price", data= filtereddf[filtereddf['price']< 40000])
```

### Visualizing the relationship between price and drive type

Here we compare how prices vary according to the vehicle's drive type:

* 4wd: four-wheel drive
* rwd: rear-wheel drive
* fwd: front-wheel drive

We compare the mean, the median and the count. However, as seen in the previous analyses, the median gives a more reasonable value, so we sort by it.

```
df[df['drive'] != 'undefined'].groupby(['drive']).agg(['mean','median','count'])['price'].sort_values(by='median', ascending=False)
```

## Bivariate statistics

Here we try to find out whether the numeric variables are correlated in any way. First we use Spearman's method, then Pearson's, and then we look at scatter plots of the relationships.

```
## Apply some limits before analysing the correlations between the variables
car = df[(df['odometer']> 0) & (df['odometer']<400000)]
car = car[(car['price']>0) & (car['price']<100000)]
car = car[car['year']>=1985]
car = car.drop(['lat','long'], axis=1)

car.cov()
car.corr(method='spearman')
car.corr(method='pearson')

## Price vs. miles driven for the 3 most popular brands
filter_list = ['ford', 'chevrolet', 'toyota']
car[car['manufacturer'].isin(filter_list)].plot.scatter(x='odometer',y='price')

g = sns.FacetGrid(car[car['manufacturer']!='undefined'], col="manufacturer", hue='drive')
g.map(sns.scatterplot, "year", "price")
g.add_legend()
# Click the small image to expand it
```
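To complement the correlation tables above, a heatmap gives a quicker read of the same numbers. This is only an optional sketch; on recent pandas versions you may need to select the numeric columns (or pass `numeric_only=True`) before calling `corr`.

```
# Optional: visualize the Spearman correlation matrix computed above as a heatmap
from matplotlib import pyplot as plt
import seaborn as sns

corr = car.corr(method='spearman')   # on newer pandas: car.corr(method='spearman', numeric_only=True)
plt.figure(figsize=(8, 6))
sns.heatmap(corr, annot=True, fmt='.2f', cmap='coolwarm', vmin=-1, vmax=1)
plt.title('Spearman correlation between the numeric variables')
plt.show()
```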
# Airbnb - Rio de Janeiro * Download [data](http://insideairbnb.com/get-the-data.html) * We downloaded `listings.csv` from all monthly dates available ## Questions 1. What was the price and supply behavior before and during the pandemic? 2. Does a title in English or Portuguese impact the price? 3. What features correlate with the price? Can we predict a price? Which features matters? ``` import numpy as np import pandas as pd import seaborn as sns import glob import re import pendulum import tqdm import matplotlib.pyplot as plt import langid langid.set_languages(['en','pt']) ``` ### Read files Read all 30 files and get their date ``` files = sorted(glob.glob('data/listings*.csv')) df = [] for f in files: date = pendulum.from_format(re.findall(r"\d{4}_\d{2}_\d{2}", f)[0], fmt="YYYY_MM_DD").naive() csv = pd.read_csv(f) csv["date"] = date df.append(csv) df = pd.concat(df) df ``` ### Deal with NaNs * Drop `neighbourhood_group` as it is all NaNs; * Fill `reviews_per_month` with zeros (if there is no review, then reviews per month are zero) * Keep `name` for now * Drop `host_name` rows, as there is not any null host_id * Keep `last_review` too, as there are rooms with no review ``` df.isna().any() df = df.drop(["host_name", "neighbourhood_group"], axis=1) df["reviews_per_month"] = df["reviews_per_month"].fillna(0.) df.head() ``` ### Detect `name` language * Clean strings for evaluation * Remove common neighbourhoods name in Portuguese from `name` column to diminish misprediction * Remove several non-alphanumeric characters * Detect language using [langid](https://github.com/saffsd/langid.py) * I restricted between pt, en. There are very few rooms listed in other languages. * Drop `name` column ``` import unicodedata stopwords = pd.unique(df["neighbourhood"]) stopwords = [re.sub(r"[\(\)]", "", x.lower().strip()).split() for x in stopwords] stopwords = [x for item in stopwords for x in item] stopwords += [unicodedata.normalize("NFKD", x).encode('ASCII', 'ignore').decode() for x in stopwords] stopwords += ["rio", "janeiro", "copa", "arpoador", "pepê", "pepe", "lapa", "morro", "corcovado"] stopwords = set(stopwords) docs = [re.sub(r"[\-\_\\\/\,\;\:\!\+\’\%\&\d\*\#\"\´\`\.\|\(\)\[\]\@\'\»\«\>\<\❤️\…]", " ", str(x)) for x in df["name"].tolist()] docs = [" ".join(x.lower().strip().split()) for x in docs] docs = ["".join(e for e in x if (e.isalnum() or " ")) for x in docs] ndocs = [] for doc in tqdm.tqdm(docs): ndocs.append(" ".join([x for x in doc.split() if x not in stopwords])) docs = ndocs results = [] for d in tqdm.tqdm(docs): results.append(langid.classify(d)[0]) df["language"] = results # Because we transformed NaNs into string, fill those detection with nans too df.loc[df["name"].isna(), "language"] = pd.NA ``` * Test accuracy, manually label 383 out of 88191 (95% conf. 
interval, 5% margin of error) ``` df.loc[~df["name"].isna()].drop_duplicates("name").shape df.loc[~df["name"].isna()].drop_duplicates("name")[["name", "language"]].sample(n=383, random_state=42).to_csv("lang_pred_1.csv") lang_pred = pd.read_csv("lang_pred.csv", index_col=0) lang_pred.head() overall_accuracy = (lang_pred["pred"] == lang_pred["true"]).sum() / lang_pred.shape[0] pt_accuracy = (lang_pred[lang_pred["true"] == "pt"]["true"] == lang_pred[lang_pred["true"] == "pt"]["pred"]).sum() / lang_pred[lang_pred["true"] == "pt"].shape[0] en_accuracy = (lang_pred[lang_pred["true"] == "en"]["true"] == lang_pred[lang_pred["true"] == "en"]["pred"]).sum() / lang_pred[lang_pred["true"] == "en"].shape[0] print(f"Overall accuracy: {overall_accuracy*100}%") print(f"Portuguese accuracy: {pt_accuracy*100}%") print(f"English accuracy: {en_accuracy*100}%") df = df.drop("name", axis=1) df.head() df["language"].value_counts() ``` ### Calculate how many times a room appeared * There are 30 months of data, and rooms appear multiple times * Calculate for a specific date, how many times the same room appeared up to that date ``` df = df.set_index(["id", "date"]) df["appearances"] = df.groupby(["id", "date"])["host_id"].count().unstack().cumsum(axis=1).stack() df = df.reset_index() df.head() ``` ### Days since last review * Calculate days since last review * Then categorize them by the length of the days ``` df.loc[:, "last_review"] = pd.to_datetime(df["last_review"], format="%Y/%m/%d") # For each scraping date, consider the last date to serve as comparision as the maximum date last_date = df.groupby("date")["last_review"].max() df["last_date"] = df.apply(lambda row: last_date.loc[row["date"]], axis=1) df["days_last_review"] = (df["last_date"] - df["last_review"]).dt.days df = df.drop("last_date", axis=1) df.head() df["days_last_review"].describe() def categorize_last_review(days_last_review): """Transform days since last review into categories Transform days since last review into one of those categories: last_week, last_month, last_half_year, last_year, last_two_years, long_time_ago, or never Args: days_last_review (int): Days since the last review Returns: str: A string with the category name. 
""" if days_last_review <= 7: return "last_week" elif days_last_review <= 30: return "last_month" elif days_last_review <= 182: return "last_half_year" elif days_last_review <= 365: return "last_year" elif days_last_review <= 730: return "last_two_years" elif days_last_review > 730: return "long_time_ago" else: return "never" df.loc[:, "last_review"] = df.apply(lambda row: categorize_last_review(row["days_last_review"]), axis=1) df = df.drop(["days_last_review"], axis=1) df.head() df = df.set_index(["id", "date"]) df.loc[:, "appearances"] = df["appearances"].astype(int) df.loc[:, "host_id"] = df["host_id"].astype("category") df.loc[:, "neighbourhood"] = df["neighbourhood"].astype("category") df.loc[:, "room_type"] = df["room_type"].astype("category") df.loc[:, "last_review"] = df["last_review"].astype("category") df.loc[:, "language"] = df["language"].astype("category") df df.to_pickle("data.pkl") ``` ### Distributions * Check the distribution of features ``` df = pd.read_pickle("data.pkl") df.head() df["latitude"].hist(bins=250) df["longitude"].hist(bins=250) df["price"].hist(bins=250) df["minimum_nights"].hist(bins=250) df["number_of_reviews"].hist() df["reviews_per_month"].hist(bins=250) df["calculated_host_listings_count"].hist(bins=250) df["availability_365"].hist() df["appearances"].hist(bins=29) df.describe() ``` ### Limits * We are analising mostly for touristic purpose, so get the short-term rentals only * Prices between 10 and 10000 (The luxury Copacabana Palace Penthouse at 8000 for example) * Short-term rentals (minimum_nights < 31) * Impossibility of more than 31 reviews per month ``` df = pd.read_pickle("data.pkl") total_records = len(df) outbound_values = (df["price"] < 10) | (df["price"] > 10000) df = df[~outbound_values] print(f"Removed values {outbound_values.sum()}, {outbound_values.sum()*100/total_records}%") long_term = df["minimum_nights"] >= 31 df = df[~long_term] print(f"Removed values {long_term.sum()}, {long_term.sum()*100/total_records}%") reviews_limit = df["reviews_per_month"] > 31 df = df[~reviews_limit] print(f"Removed values {reviews_limit.sum()}, {reviews_limit.sum()*100/total_records}%") ``` ### Log skewed variables * Most numerical values are skewed, so log them ``` df.describe() # number_of_reviews, reviews_per_month, availability_365 have zeros, thus sum one to all df["number_of_reviews"] = np.log(df["number_of_reviews"] + 1) df["reviews_per_month"] = np.log(df["reviews_per_month"] + 1) df["availability_365"] = np.log(df["availability_365"] + 1) df["price"] = np.log(df["price"]) df["minimum_nights"] = np.log(df["minimum_nights"]) df["calculated_host_listings_count"] = np.log(df["calculated_host_listings_count"]) df["appearances"] = np.log(df["appearances"]) df.describe() ``` ### Extreme outliers * Most outliers are clearly mistyped values (one can check these rooms ids in their website) * Remove extreme outliers first from large deviations within the same `id` (eliminate rate jumps of same room) * Then remove those from same scraping `date`, `neighbourhood` and `room_type` ``` df = df.reset_index() q25 = df.groupby(["id"])["price"].quantile(0.25) q75 = df.groupby(["id"])["price"].quantile(0.75) ext = q75 + 3 * (q75 - q25) ext = ext[(q75 - q25) > 0.] 
affected_rows = []
multiple_id = df[df["id"].isin(ext.index)]
for row in tqdm.tqdm(multiple_id.itertuples(), total=len(multiple_id)):
    if row.price >= ext.loc[row.id]:
        affected_rows.append(row.Index)

df = df.drop(affected_rows)
print(f"Removed values {len(affected_rows)}, {len(affected_rows)*100/total_records}%")

# Remove extreme outliers per neighbourhood, room_type and scraping date
q25 = df.groupby(["date", "neighbourhood", "room_type"])["price"].quantile(0.25)
q75 = df.groupby(["date", "neighbourhood", "room_type"])["price"].quantile(0.75)
ext = q75 + 3 * (q75 - q25)
ext

affected_rows = []
for row in tqdm.tqdm(df.itertuples(), total=len(df)):
    if row.price >= ext.loc[(row.date, row.neighbourhood, row.room_type)]:
        affected_rows.append(row.Index)

df = df.drop(affected_rows)
print(f"Removed values {len(affected_rows)}, {len(affected_rows)*100/total_records}%")

df.describe()
df["price"].hist()
df.to_pickle("treated_data.pkl")
```
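For reference, the second of the two loops above can also be written without iterating row by row. The sketch below is just an alternative formulation of the same Q3 + 3·IQR rule per (date, neighbourhood, room_type) group; the loop version is what the analysis actually used, and `df_vectorized` is a hypothetical name.

```
# Vectorized version of the per-group extreme-outlier filter (illustrative only)
grp = df.groupby(["date", "neighbourhood", "room_type"])["price"]
q25 = grp.transform(lambda s: s.quantile(0.25))
q75 = grp.transform(lambda s: s.quantile(0.75))
upper = q75 + 3 * (q75 - q25)
df_vectorized = df[df["price"] < upper]   # keep rows strictly below the threshold, like the loop
```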
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/real-itu/modern-ai-course/blob/master/lecture-04/lab.ipynb) # Lab 4 - Math ### Stats You're given the following dataset: ``` import random random.seed(0) x = [random.gauss(0, 1)**2 for _ in range(20)] print(x) ``` > Compute the min, max, mean, median, standard deviation and variance of x ``` # Your code here import math # min mi = min(x) print("min: " + str(mi)) #max ma = max(x) print("max: " + str(ma)) #mean mean = sum(x)/len(x) print("mean: " + str(mean)) #median median = sorted(x)[int(len(x)/2)] print("median: " + str(median)) #stddv #variance lars = 0 for v in x: lars += math.pow(v - mean, 2) variance = lars / len(x) stddv = math.sqrt(variance) print("standard deviation: " + str(stddv)) print("variance: " + str(variance)) ``` ### Vectors You're given the two 3 dimensional vectors a and b below. ``` a = [1, 3, 5] b = [2, 9, 13] ``` > Compute 1. $a + b$ 2. $2a-3b$ 3. $ab$ - the inner product ``` # Your code here first = list(map(lambda t: t[0]+t[1], list(zip(a,b)))) print(first) second = list(map(lambda t: t[0] - t[1], list(zip(list(map(lambda x: x*2, a)), list(map(lambda x: x*3, b)))))) print(second) third = sum(list(map(lambda t: t[0] * t[1], list(zip(a,b))))) print(third) ``` ### Gradients Given the function $f(x,y) = 3x^2 + 6y$ > Compute the partial gradients $\frac{df}{dx}$ and $\frac{df}{dy}$ Your answer here $\frac{df}{dx} = 6x$ $\frac{df}{dy} = 6$ The function above corresponds to the following computational graph ![sol (1).png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAqkAAAG5CAYAAACtGK3BAAAAAXNSR0IArs4c6QAABPB0RVh0bXhmaWxlACUzQ214ZmlsZSUyMGhvc3QlM0QlMjJhcHAuZGlhZ3JhbXMubmV0JTIyJTIwbW9kaWZpZWQlM0QlMjIyMDIxLTA5LTA2VDA2JTNBNTMlM0EzNC4zNjlaJTIyJTIwYWdlbnQlM0QlMjI1LjAlMjAoTWFjaW50b3NoJTNCJTIwSW50ZWwlMjBNYWMlMjBPUyUyMFglMjAxMF8xNV83KSUyMEFwcGxlV2ViS2l0JTJGNTM3LjM2JTIwKEtIVE1MJTJDJTIwbGlrZSUyMEdlY2tvKSUyMENocm9tZSUyRjkyLjAuNDUxNS4xNTklMjBTYWZhcmklMkY1MzcuMzYlMjIlMjB2ZXJzaW9uJTNEJTIyMTUuMC4zJTIyJTIwZXRhZyUzRCUyMnF3ZXRIbU15OTBRbE84bFNpQlg0JTIyJTIwdHlwZSUzRCUyMmRldmljZSUyMiUzRSUzQ2RpYWdyYW0lMjBpZCUzRCUyMjZCdm8xaC1aLVVDY3c1QlY4ZHNHJTIyJTNFN1poTGo1c3dFTWMlMkZEZGNLYkI3SmRiTnBlMmhQT1hUMzZBVUhxQnlNSEJPZ243NEdqd1BFaTFUMUVhaVVFOHpmRHp3enZ6RVlCJTJCOU96U2RCeXV3clR5aHprSnMwRG41MkVQSjhGS3BMcDdSYWlYeGZDNm5JRSUyQmcwQ0lmOEJ3WFJCYlhLRTNxZWRKU2NNNW1YVXpIbVJVRmpPZEdJRUx5ZWRqdHlObjFxU1ZKcUNZZVlNRnY5bGljeTAlMkJvR1JZUCUyQm1lWnBacDdzaFZ2ZGNpS21NMHh4emtqQ2F5MzF6dUc5ZzNlQ2M2bnZUczJPc2k1NEppNDZBaDluV3E4TEU3U1F2eklBd1RKa2EzeWppWElWVEM1a3hsTmVFTFlmMUNmQnF5S2gzUVN1c29ZJTJCWHpndmxlZ3A4VHVWc29XOGtVcHlKV1h5eEtDVk5ybDhHZDIlMkZkbE45Q01CNmJtRG0zbWlOVVVqUnZveU4wYWpPSEliMWxobW4lMkZldWNtc2FkVnlJR0NRTkJSS1FVZ2hiWmNmU3UyVkZZVTM2aTZpbXFpNkNNeVB3eW5aMEFYJTJCbTEzNUFDZFFOWmVEOGpzSm9MWVJWTTJ0ZzVZa3loM3VXaXpuSkpEeVhwZmFsVnRVMGpUYzZsNXYlMkJZTjEzR0lCd1hLaVJ0WnBHWmNkVU1BS1JjS0Y3a2dsMlBTZ0drYkZRRlJ2dVQ0UGdQWEZGZzQlMkJxRnklMkZFYVdMdzZYWnI4VmwlMkZXelM1RzkyTTNmTEJyOXRVeHU5dmwwSTFtMENYS2FKeGdqOWFJTUY1dyUyQjkwOEVEYThUclpmZHptR3R6TU12eWtEa3pVQ0hDd0lzTW1VRmE2akVudmJmWFBRVTd6R3VFVkx4czJ6WTdIMnloJTJCSyUyRlhWUzY3OWQlMkJSNWExZHZMTE1kaUdhJTJCUjN0dlgxdWFPOE9MJTJGSFY3dkw4RHJ2JTJGUGVXdkRZWU5aajBSdXVrZDdiYzRQdjN4SGY0SUh2RmRXMWZIYVo5Vmo0eHNvSVYzbjJ2ZDJCJTJGJTJCSFpWNW5ETDh5JTJCYmZRakdPOSUyRkFnJTNEJTNEJTNDJTJGZGlhZ3JhbSUzRSUzQyUyRm14ZmlsZSUzRVBNg+QAACAASURBVHhe7Z0JfFXVtf9XjENLSQELAVuDfQgKaQEZ0iizQgsGHHgVY0FUJkGkRaKgWCxQkVmwPBA0DJWhGmlLS0tMW5Q+Q2QIU7AyCfYZsE8CKuD/UbUM/8/a9mKIIffcm3PvPfuc7/58/KTac/ZZ+7v2uvt39tl77aSzZ8+eFQoEIOBJAlu2bJGS
khLZvXu3HDhwQA4dOiRlZWVy7NgxOXnypJw+fVqSk5OlRo0aUrt2bUlNTZUrr7xSrr76amnWrJm0bNlS2rZt68m2YRQEIAABCECgKgJJiFQ6CAS8Q2D79u1SUFAgr776qqxfv16aNm0qrVq1MoKzcePGkpaWJvXr1zeCVIWpClQVqipYVbgePnxYDh48KPv375ddu3bJjh07ZM+ePdKxY0e56aabpEePHqY+CgQgAAEIQMDrBBCpXvcQ9vmewNtvvy0rVqyQlStXyqlTpyQrK0u6desmnTp1kpSUlGq3/8SJE1JYWChr166V/Px8ufjii6VPnz7Sr18/adKkSbXrpwIIQAACEIBALAggUmNBlToh4IDAn/70J1mwYIERkP3795fs7Gy5/vrrHdxZvUs2btwoeXl5smzZMjPDOmzYMOnevXv1KuVuCEAAAhCAgMsEEKkuA6U6CIQjoLOZM2bMMJ/nhw8fLkOGDAl3S8z+/9zcXJk3b57UqVNHRo8ebWZxKRCAAAQgAAEvEECkesEL2BAIAjt37pTx48ebDVCPPvqo+dzulbJ8+XKZNm2aWfc6ceJEadGihVdMww4IQAACEAgoAURqQB1Ps+NLYMKECTJ9+nSZNGmS5OTkxPfhETxt1qxZMm7cOCOiVVBTIAABCEAAAokigEhNFHmeGwgCult/xIgRJi2UfuJv2LCh59tdWlpqPv1ruqu5c+eSDcDzHsNACEAAAv4kgEj1p19plQcILFq0SAYPHizz5883m5NsK2r3gw8+KLpuddCgQbaZj70QgAAEIGA5AUSq5Q7EfG8S0M/lukFqyZIlVifTLy4uloEDB5oNVbpmlQIBCEAAAhCIFwFEarxI85zAELjnnnvMzn3NfepGntNEg9M8q7rJSzMALF26NNHm8HwIQAACEAgIAURqQBxNM+NDoHfv3uY0KJ1B9Vu577775Pjx47Jq1Sq/NY32QAACEICABwkgUj3oFEyyk4AK1NTUVHnuuefsbIADq4cOHSplZWUIVQesuAQCEIAABKpHAJFaPX7cDQFDQE+M0uNG/TiDWtHFOqN65swZPv3T9yEAAQhAIKYEEKkxxUvlQSCgm6R2794tq1evDkJzTRtvueUWSU9PZzNVYDxOQyEAAQjEnwAiNf7MeaKPCCxcuFDmzJkjRUVFvtgk5dQ1upmqffv28tBDD5Geyik0roMABCAAgYgIIFIjwsXFEPiCwLZt26RNmzaiaZratm0bODTa7szMTNm6dSsJ/wPnfRoMAQhAIPYEEKmxZ8wTfEqgXbt2oummbEzU75ZLNOH/8uXLzUwyBQIQgAAEIOAmAUSqmzSpKzAEJkyYYNah5uXlBabNF2podna2WZ86fvz4wLMAAAQgAAEIuEcAkeoeS2oKCIGSkhLRWdQ9e/ZIWlpaQFp94WaWlpZK06ZNZePGjdKiRYvA8wAABCAAAQi4QwCR6g5HagkQgdtvv106deokOTk5AWp11U2dNWuWFBYWkj+VHgEBCEAAAq4RQKS6hpKKgkBgzZo1MnbsWNm5c2cQmhtRG5s3b25SUmVlZUV0HxdDAAIQgAAEKiOASKVfQCACAjfeeKMMHjzYnGVPOZ+AbqBatGiRrFu3DjQQgAAEIACBahNApFYbIRUEhUBBQYGZRd2+fXtQmhxxO6+77jozm9q9e/eI7+UGCEAAAhCAQHkCiFT6AwQcEtC1qL169TIzqZTKCeTm5kp+fj5rU+kgEIAABCBQbQKI1GojpIIgENi3b585YenIkSNBaG612li3bl3ZsGGDNGnSpFr1cDMEIAABCASbACI12P6n9Q4JaF5UPQpUd7FTqiYwatQoqVWrligzCgQgAAEIQCBaAojUaMlxX6AIaLL6JUuWmGNAKVUT0HypgwYNkrfeegtUEIAABCAAgagJIFKjRseNQSGwbds26du3r0neT3FG4Nprr5WXXnpJWrVq5ewGroIABCAAAQhUIIBIpUtAIAyByZMny9GjR/nUH0FP0U/+qampJhsCBQIQgAAEIBANAURqNNS4J1AEunbtKg8//DBJ6iPwuh56MHv2bFm7dm0Ed3EpBCAAAQhA4AsCiFR6AwTCELjsssvMTGpKSgqsHBLQTWY6k/rJJ584vIPLIAABCEAAAucTQKTSIyBQBYHi4mK5//77SeAfRS/RxP4LFy6Utm3bRnE3t0AAAhCAQNAJIFKD3gNof5UEVGQVFRWZnf2UyAgMGDBAOnToYHb6UyAAAQhAAAKREkCkRkqM6wNFQNeiNmjQQEaPHh2odrvR2OnTp0tZWZnMnDnTjeqoAwIQgAAEAkYAkRowh9PcyAjoUaj33nuv9O7dO7IbY3j1mTNn5Prrr5e///3vZt3nE088IXfeeadcdNFFMXxq5FX/9re/lWXLlnFEauTouAMCEIAABEQEkUo3gEAVBHQ95YIFCzyzrvLs2bMm00BSUpLoTOX69eulS5cu8uyzz8qwYcPMf/dK0fW8w4cPF/1LgQAEIAABCERKAJEaKTGuDxSBhg0bmjWpaWlpnmi3ilQ99UqFn/5vnVVt3ry5mVF97bXXPCVSS0tLpWPHjvLuu+96gh1GQAACEICAXQQQqXb5C2vjTODrX/+6vPfee55KP7V7927RtFiNGjWSnTt3SsuWLeWFF16Qu+++21Of/DUNlYr748ePx9lrPA4CEIAABPxAAJHqBy/ShpgRuPjii+XTTz+V5OTkmD0jmop11nTr1q2yaNEimTdvnvnk7zUbT506JV/96lflX//6VzRN5B4IQAACEAg4AURqwDsAza+agK7x1M/qXivr1q2Tzz77TA4cOCDz58+X3NxcswzAS2tSlZvXhLPX/Ig9sSXQuXNn0VihQAACdhJApNrpN6yOEwGvzqSGmq9C8Ec/+pG8/PLLJt1T3bp140Qm/GNCM6kqpikQiDeB0EuSF18y482C50HAVgKIVFs9h91xIeC1NamnT5+WO+64Q/Q0p/HjxxsG+vfnP/+52Th14403xoWLk4ewJtUJJa6JFQEVp5qWDZEaK8LUC4HYE0Ckxp4xT7CYgNd29//f//2f1KxZ0wjTCRMmmN39miNV16eWlJSIimqvFHb3e8UTwbQDkRpMv9NqfxFApPrLn7TGZQJey5OqolSPGf3lL38p9913n7z55ptmY9fEiRNFDx7wUkJ/8qS63BmpLiICiNSIcHExBDxJAJHqSbdglFcIePHEKR18db3n888/L1dddZVkZWV5SpyGfMeJU17pxcG0A5EaTL/Tan8RQKT6y5+0xmUCerrTFVdcIY888ojLNfu/uhkzZsjhw4dl5syZ/m8sLfQcAUSq51yCQRCImAAiNWJk3BAkAgsXLpQ33nhDFi9eHKRmu9LWAQMGSIcOHczyBAoE4k0AkRpv4jwPAu4TQKS6z5QafURA11Xef//9sn37dh+1Kj5N0QwEKvJ1XS8FAvEmgEiNN3GeBwH3CSBS3WdKjT4joEeQfvDBB2ZXPcUZgY8//ljq1asnn3zyibMbuAoCLhNApLoMlOogkAACiNQEQOeRdhHo2rWr6NpU3aB
EcUZgzZo1Mnv2bFm7dq2zG7gKAi4TQKS6DJTqIJAAAojUBEDnkXYRmDx5shw9elRmzZpll+EJtHbUqFGSmpoqY8eOTaAVPDrIBBCpQfY+bfcLAUSqXzxJO2JGYNu2bdK3b1/Zs2dPzJ7ht4qvvfZaeemll6RVq1Z+axrtsYQAItUSR2EmBKoggEile0DAAYH09HRZsmSJZGZmOrg62Jds3LjR7Oh/6623gg2C1ieUACI1ofh5OARcIYBIdQUjlfidgB5DqpuB+OQf3tP6qb9WrVrm2FYKBBJFAJGaKPI8FwLuEUCkuseSmnxMYN++fdK+fXs5cuSIj1vpTtPq1q0rGzZskCZNmrhTIbVAIAoCiNQooHELBDxGAJHqMYdgjncJ6BGpvXr1ksGDB3vXyARblpubK/n5+bJq1aoEW8Ljg04AkRr0HkD7/UAAkeoHL9KGuBAoKCiQxx9/XHQjFaVyArpRaurUqdK9e3cQQSChBBCpCcXPwyHgCgFEqisYqSQoBLp06WJOoNLd/pTzCaxYscKcMLVu3TrQQCDhBBCpCXcBBkCg2gQQqdVGSAVBIqBJ6jX3586dO4PUbEdtbdGihZlF5dADR7i4KMYEEKkxBkz1EIgDAURqHCDzCH8R0LWpnTt3Ft3FTvmcgGY9KCwsZC0qHcIzBBCpnnEFhkAgagKI1KjRcWNQCZSUlEi7du1Mcv+0tLSgYjjX7tLSUmnatKloflSdTaVAwAsEEKle8AI2QKB6BBCp1ePH3QEloHlT9+7da05VCnrJzs6WZs2akRc16B3BY+1HpHrMIZgDgSgIIFKjgMYtEFACOpt67733ytChQwMLZMGCBbJs2TIpKioKLAMa7k0CiFRv+gWrIBAJAURqJLS4FgLlCGgqqjZt2khxcbG0bds2cGy2bNkiGRkZJiWXpp6iQMBLBBCpXvIGtkAgOgKI1Oi4cRcEDAFNuTRnzhx54403pGbNmoGhokfE6kzyyJEjOdwgMF63q6GIVLv8hbUQqIwAIpV+AYFqEhgzZozZRLV69epq1mTP7bfeeqtZhzpt2jR7jMbSQBFApAbK3TTWpwQQqT51LM2KL4H+/fvLJZdcIosXL47vgxPwtAEDBsjp06dl6dKlCXg6j4SAMwKIVGecuAoCXiaASPWyd7DNKgKaP7VBgwaim4n8WnSTWFlZGflQ/epgH7ULkeojZ9KUwBJApAbW9TQ8FgRUqF5++eW+nFHVGdRjx44hUGPRcajTdQKIVNeRUiEE4k4AkRp35DzQ7wT00//x48dFz7JPSUmxvrm6Sapfv35Sq1Ytk26KAgEbCCBSbfASNkKgagKIVHoIBGJAQDdTFRQUmBlVm9NTaZqpgQMHys0338wmqRj0E6qMHQFEauzYUjME4kUAkRov0jwncAQ0PdWQIUNk/vz5MmzYMOvar2trH3jgAcnNzSXNlHXew2BEKn0AAvYTQKTa70Na4GECmuh+xIgR0rBhQ5kxY4akpaV52NrPTTt48KCMHj1aSktLZe7cudK6dWvP24yBEKhIAJFKn4CA/QQQqfb7kBZYQGD8+PEyc+ZMmTRpkowaNcqzFs+aNUueeOIJI1InTJjgWTsxDALhCCBSwxHi/4eA9wkgUr3vIyz0CYGSkhJRsfrOO+/IY489Jn379vVMy3STlybmb9SokUycOFFatmzpGdswBALREECkRkONeyDgLQKIVG/5A2sCQGDNmjXm0/+JEydk+PDhCV3vqetm582bZ3bu6+xpz549A+ABmhgEAojUIHiZNvqdACLV7x6mfZ4loLv/dXNSUVGRaNqq7OxsyczMjLm9mzZtkry8PJNOqn379mZTV48ePWL+XB4AgXgSQKTGkzbPgkBsCCBSY8OVWiHgmMC+fftMTtWVK1fKmTNnJCsrS7p16yadOnWSmjVrOq7nQhdqntPCwkJZu3at5Ofny0UXXSR9+vQxuU+vueaaatdPBRDwIgFEqhe9gk0QiIwAIjUyXlwNgZgS0GwAOsP66quvyvr16yU9PV1atWolzZo1k8aNG5vsAPXr15fatWtLjRo1JDk5WU6fPi0nT540p0EdPnzY7M7fv3+/7Nq1S3bs2GH+dujQQbp27WpmTNmtH1MXUrlHCCBSPeIIzIBANQggUqsBj1shEGsCxcXFohuudu/eLQcOHJBDhw5JWVmZEaQqTFWgqlBVwarCNTU1Va688kq5+uqrjbDVDVAZGRmxNpP6IeA5AohUz7kEgyAQMQFEasTIuAECEIAABLxOAJHqdQ9hHwTCE0CkhmfEFRCAAAQgYBkBRKplDsNcCFRCAJFKt4AABCAAAd8RQKT6zqU0KIAEEKkBdDpNhgAEIOB3AohUv3uY9gWBACI1CF6mjRCAAAQCRgCRGjCH01xfEkCk+tKtNAoCEIBAsAkgUoPtf1rvDwKIVH/4kVZAAAIQgEA5AohUugME7CeASLXfh7QAAhCAAAQqEECk0iUgYD8BRKr9PqQFEIAABCCASKUPQMB3BBCpvnMpDYIABCAAAWZS6QMQsJ8AItV+H9ICCEAAAhBgJpU+AAHfEUCk+s6lNAgCEIAABJhJpQ9AwH4CiFT7fUgLIAABCECAmVT6AAR8RwCR6juX0iAIQAACEGAmlT4AAfsJIFLt9yEtgAAEIAABZlLpAxDwHQFEqu9cSoMgAAEIQICZVPoABOwngEi134e0AAIQgAAEmEmlD0DAdwQQqb5zKQ2CAAQgAAFmUukDELCfACLVfh/SAghAAAIQYCaVPgAB3xFApPrOpTQIAhCAAASYSaUPQMB+AohU+31ICyAAAQhAgJlU+gAEfEcAkeo7l9IgCEAAAhBgJpU+AAH7CSBS7fchLYAABCAAAWZS6QMQ8B0BRKrvXEqDIAABCECAmVT6AATsJ4BItd+HtAACEIAABJhJpQ9AwHcEEKm+cykNggAEIAABZlLpAxCwnwAi1X4f0gIIQAACEGAmlT4AAd8RQKT6zqU0CAIQgAAEmEmlD0DAfgKIVPt9SAsgAAEIQICZVPoABHxHAJHqO5fSIAhAAAIQYCaVPgAB+wkgUu33IS2AAAQgAAFmUukDEPAdAUSq71xKgyAAAQgEk8DTTz8t48aNk2nTpsmPf/xjueiii0RnVH/xi1/IY489JpMnT5ZRo0YFEw6thoCFBBCpFjoNkyEAAQhA4MsETpw4IXXr1pWLL75YatSoIR988IF84xvfkJMnT8rp06fl6NGjkpKSAjoIQMASAohUSxyFmRCAAAQgEJ6AzpjOnj1bPvvss3MXX3rppZKTkyNTpkwJXwFXQAACniGASPWMKzAEAhCAAASqS0BnU1NTU+XTTz89V9Vll10mR44cYRa1unC5HwJxJoBIjTNwHgcBCEAAArElUH42lVnU2LKmdgjEkgAiNZZ0qRsCEIAABOJOoPxsKrOoccfPAyHgGgFEqmsoqQgCEIAABLxCQGdTZ82aJQ8//DBrUb3iFOyAQIQEEKkRAuNyCEAAAhDwPgGdTR0wYID88pe/ZC2q992FhRColEDCRWpxcbGUlJTI7t275cCBA3Lo0C
EpKyuTY8eOnUsbkpycbNKJ1K5d2yyIT0tLk0aNGkmzZs2kZcuWkpGRgXsh4EsCxIcv3UqjXCJAfLgEkmp8ScAP8RF3kbpt2zYpKCiQ1157TQoLCyU9PV2uu+4687dx48ZGgNavX98IUhWmKlA1v53muVPhevjwYTl48KDs379fdu3aJTt27DACt0OHDnLTTTdJjx49pHXr1r7scDTK/wSID//7mBZGT4D4iJ4dd/qfgB/jIy4i9e2335YVK1bIyy+/bARnVlaWdOvWTTp16uTKZ5iPP/5YXn/9dVm7dq3k5+ebRM59+vSRfv36SZMmTfzfM2mh1QSID6vdh/ExJkB8xBgw1VtNwO/xEVORqjOmCxYskPXr10v//v3lrrvukszMzJh3iI0bN0peXp4sW7ZMOnbsKMOGDZPu3bvH/Lk8AAKRECA+IqHFtUEjQHwEzeO0NxICQYmPmIjUNWvWyMyZM83n+eHDh8uQIUMiYe/qtbm5ufLss8+a5QOPPPKI9OzZ09X6qQwCkRIgPiIlxvVBIkB8BMnbtDVSAkGLD1dFqm6AGj9+vLzzzjvy6KOPms/tXim63GDq1Klm3evEiROlRYsWXjENOwJCgPgIiKNpZlQEiI+osHFTQAgENT5cE6kTJkyQGTNmyJNPPmnOSPZq0bx548aNkzFjxojaTIFAPAgQH/GgzDNsJUB82Oo57I4HgSDHR7VFqu4mGzFihDRs2NCIVN2d7/VSWloqo0ePNumu5s6dK61atfK6ydhnKQHiw1LHYXZcCBAfccHMQywlQHyIVEukLly40Kw3nT9/vtmcZFvRTV0PPPCA6LrVwYMH22Y+9nqcgF/iQ9sxaNAgj9PGPNsI+CU+GD9s63l22OuX+Kju+BG1SNXP5bq7bPHixdK2bVs7vF6JlZrsduDAgSYt1rRp06xtB4Z7iwDx4S1/YI23CBAf3vIH1niLAPHxhT+iEqmaTur48eMm92lKSoq3vBuFNXp8nm7yqlOnjixdujSKGrgFAl8QID7oDRC4MAHig94BAeLDaR+IWKTefvvtRswtWbLE6TOsuU7Peda0WatWrbLGZgz1FgHiw1v+wBpvESA+vOUPrPEWAeLjy/6ISKQqwAYNGpgE/X4tQ4cOlbKyMoSqXx0cw3b17t1bUlNT5bnnnovhUxJbNfGRWP42P534sNl72B5rAuirygk7Fqn6iUaPG/XjDGpFNDqjqse38uk/1mHpn/qJD//4kpa4T4D4cJ8pNfqHAPFxYV86Eqm6iHfPnj2yevVq//SKMC255ZZbJD09nc1UgfF49A0lPqJnx53+J0B8+N/HtDB6AsRH1ezCilRNHzBnzhwpKiryxSYpp11JN1O1b99eRo4cSXoqp9ACeF3Q4+Ohhx4iPVUA+73TJgc9Phg/nPaUYF4X9PhwMn5UKVI1kWybNm1E0zTZnGYq2u6v7f7e974nyoGE/9FS9O99xEexZGZmytatW4kP/3bzqFtGfDB+RN15AnAj8eFs/KhSpLZr107uueceKxP1u9XHdZPYsmXLzEwyBQLlCRAfYg7yWL58OfFBaHyJAPEhZpMx4wfBURkB4sPZ+HFBkapnxe7evVvy8vIC38Oys7OlWbNmokwoEFACxMcX/UDjQ9dvjx8/ns4BAUOA+Dg/Phg/CIzyBPS3cu/evfLSSy8FHky48aNSkVpSUiI33HCDgZiWlhZ4iKWlpdK0aVPZuHGjtGjRIvA8gg5A40PfgnUzIfEhQnwEPSLObz/xcT4P4oP4KE+A+IgsPioVqZrPrmPHjpKTk0Pv+jeBWbNmSWFhIflT6RGi+ew6depEfJTrC8QHgREiQHx8uS8QH8RH+fjo3LmzjBo1CigO9NWXROqaNWtk7NixsnPnTgBWINC8eXOZOnWq9OzZEzYBJUB8XNjxGh/Tpk2TrKysgPYOmk18VB0fjB/BjhHiI/Lx40si9cYbbzQpl/Qse8r5BFasWCGaMmLdunWgCSgB4uPCjtcNVIsWLSI+Ahob2mzi48LOZ/wIcGD8u+ldunSRIUOGoK8q6QoXGj/OE6kFBQVmFnX79u30pgsQ0FRU+jbcvXt3GAWMAPER3uHXXXedmU0lPsKz8tsVxEd4jzJ+hGfk1ys0Ph5//HGT0pJSOYHKxo/zRKquJerVqxfJ66voQbm5uZKfn8/a1ABGGfER3unER3hGfr2C+AjvWeIjPCO/XkF8hPdsZfFxTqTu27fPnLB05MiR8DUF/Iq6devKhg0bpEmTJgEnEZzmEx/OfU18OGfllyuJD+eeJD6cs/LLlcSHc09WjI9zIlXz2ulRoLoLkVI1Ad2VV6tWLfKmBqijEB/OnU18OGfllyuJD+eeJD6cs/LLlZoX9eOPP0ZfOXBoxfg4J1I1GfeSJUvMMYeUqglovtRBgwbJW2+9BaqAECA+nDua+HDOyi9XEh/OPUl8OGfllyuJD+eerBgfRqTqQt6+ffua5OQUZwSuvfZaefHFF6V169bObuAqawkQH5G7TuNDT1PRjSIUfxMgPiL3L+NH5MxsvWPr1q1mNz/6yrkHy48fRqROnjxZjh49ylS0c4YmEW+9evXMbj2KvwkQH5H7V+MjNTXVZAuh+JsA8RG5fxk/Imdm6x1PPfWUfPDBB+irCBxYfvwwIrVbt25GdJGk3jlFTco7e/ZsWbt2rfObuNJKAl27dpWHH36YJPUReI/4iACW5ZcSH5E7kPiInJmtdxAfkXtO4+OZZ56Rv/zlL2JE6mWXXWZmUlNSUiKvLaB36CJonUn95JNPAkogOM0mPiL3tW7C1JlU4iNydrbdQXxE7jHGj8iZ2XjH2bNn5Stf+YqZSa1Zs6aNTUiIzeXHj6TNmzefvf/++0ngH4UrdL3d888/LxkZGVHczS02ECguLhY/x8eZM2eMGy666CLX3aGJmfWEtrZt27peNxV6g4Df4yOWlBk/YknXG3Vv3rxZhg0bRgL/KNwRGj+ScnNzzxYVFZmd/ZTICAwYMMDkltVjZCn+JKAiy6/xoUdY/vWvfzUCdefOnfKd73zHVSdqfHTo0MFkwqD4k4Cf4yPWHmP8iDXhxNevyek1p/rixYujMkYnEfS3+etf/7o0atQoqjoudJPO8mr9SUlJMZmkqK6x9913n3Ts2FGScnJyzjZo0EBGjx5d3ToDd/+MGTPk/fffl6effjpwbQ9Kg3Utqh/j4/Tp00ZAvvHGG+YFddmyZfLqq6+6+mM1ffp0KSsrk5kzZwaluwSunbGIDx04dQDVlycdQGNd9FnxeE7FdjB+xNqzia8/JydHvvnNb8ojjzwSsTHaL5OTk2X48OGiM7IqdvXf3Sqa2/i//uu/5KOPPpJTp065+tvvho2h8SPptttuO3vvvfdK79693ag3UHWsWrVKli5dyhGpPva6HmXn1/jQHyn9Rz/Zfu973zv3Vu2WO3/7298a8atxQvEngVjER0gwPvHEE/Lzn/88JuBUCF966aVSv359uf766+WXv/xl3PdkMH7ExLWeqlTjQ2cE9W+kRU//1P75u9/9zkwk/OY3v3FdSP7sZz+TN998U
/S32umLmqYWzM7Odnx9pO0OXR8aP5LatGlzdsGCBawbi4Lkli1bzHoT/Uuxk4DOguua0wttGtT1lH6P+uAgswAAIABJREFUD/3sr4v6//CHP7jqRBW/Ogugfyl2EkhEfKiA1Bkjzb8aqzy7OtD++te/NrmuNU2atvNf//qXXHzxxXFzFONH3FDH7EHh4qNNmzZm34r+jbRo/xwzZozs37/fCEKnItLpc3Sm9s477xT9/dffaadFha1ObsRiH0N5G0LjR1JaWppZk5qWlubURq77N4GDBw+aNamlpaUwsZTAV7/6VTODqJ9lNOdtRbHasGFDsybVr/Ghb8Qvv/yy6Od/t390NC50TdG7775rae/A7ETEh87s/OhHP5KSkhJXP2+W96Z+HdETA1Uo6svZrbfeagZePb4yXoXxI16kY/eccPGh44Z+pr/yyisdG7Fu3TrRWVSdSdTMGaoxrrnmGunSpYvjOpxcqL/5V199tUk/quPf3//+d9HxLlwZMWKEzJkzx/F4oePrQw89JNquvLw80dO3nJTQ+JGUkpJy9r333ov7p46qjNS3Bw3gFStWmLVJ+oOlM1r6371UNI3It771LdF0CRQ7CfziF7+Qxx577Nynbg1WnVkJiVVdsO61+HCLtK5D1/Wi+mOlP4S6PtXNt3WNC/2RPn78uFsmU0+cCSQiPnTmadOmTaL5JXU9uA7Obr9AaZ//n//5HzNI62yYrhnUE4H0pBstOu7ov7/wwgtmAP/nP/8pV111lav0GT9cxZmQysLFh44j//u//xtR+intl4sWLZLCwkLp1KmT/N///Z/ccsstXxKp2kcre6m65JJLpFatWmazlf6j/1tjqWLR/t2sWTNjm+4duPvuu2XSpEnmv1VVtC7NXxouJtU+fRHU00znzZtnxlH9q+1yUkLjR1JycvLZTz/9NGZvrE6MqXjN3r17pWnTpmYdhtqmg6n+tyuuuCKa6mJ2j/7Q6ZuOLjqm2EvgG9/4hnz44YemAbpOTYWavl3+9Kc/ldq1a5s+6OaCdX2z1GTFNWrUMJ9awgV7tGT1Oc8995z893//t0yZMkXq1Klj3oB1FkkHXP33Y8eOnas+tNMz2udVvE/jQmca9DMqxV4C8YwPHdh017u+MKkA0Fmof/zjH6K7pCu+QOlGv/Klsg1Qmqu3efPmlcLXGVv9pKovak8++aQRo1o0DvTZ+pvQq1cvs6RLfwdee+01V1/iGD/sjYnyllcVHyoQ9fcv0t947YM6+6pLzXSW/0JF1zXrmKXCVMeoymJA4+amm276UhX6cqaCUevQa/RFTcekiRMnntfPQ2kKQxXoxIN+HSvfpsqWI2jc6iSevujdc889Zm3uf/zHfzj+WhEaP3TrpObz91xvmTp1quhxe1pUfffv399zNqpBbs48ebKBATVKg15/HHSxupvx8c4775iUZTrw6eyMBrC+Leunxorlb3/7mxw+fDjsQKw/QBX7odqclZVl3pB1bY+mgdIfjSZNmsjJkydNSpSK7XK7L4d2p7rJL6Dd0XPNjlV86ID47W9/W371q1+Z7BOrV682L4v66b/iQF9dkarxpdlZ9IVRZ5x0rZ0O9MuXLzdHWOpSAI2JBx980PB/9tlnXfeD2zHnuoFUGBWBUHzoJ/uKIs9JhfqpW1MC6iSCmxMk5Z+tX6h1GYGKUrVRX8hUVOs67VDRJQAqLnXWX7MA6IubznDqMq7yRV8s9Z9Q0d98/fKt68qdzLpWxiQ0fnhyJlUN1oH0a1/7mnmbUId5MZh5E3YSbt6/pm7duuZEEC3lZ1J1jarONro1k6o/BD/4wQ/Mj4Gu0dH+oxs1NENEZS9hOoiqyCxfKntTruwtWQdY/cyyb98+k1pKjz7evn27eW7nzp3ND1OsCzOpsSYcn/rjFR/aGk21k5mZab5O6eCskxWaJkeXf0U6G+WUjg6kurFFv9jp83Tpi35e1fjXmNX264xrZXHm9BmVXcf4UR163rm3qvjQGfhoZlJ1n4DO5q9fv/6C2kfHgsomNypOCmjcVLxOr1G79ehRHXtUDOvSmkcfffRLY0PF+nQJgy7hqhiP5TWaxs1tt91mTuWMNkfsuZlUL65J1e6n8PTNVhW8/kD88Ic/9E6v/LclrCnynEsiNkh/CHQNqg4YGmQq4nRw0pkVLW6uSdV+3KdPH/OpRBeoq5D8/ve/b4SkBrPbJSRoVaTqj4/OqMbzZY81qW57NP71xTM+tHX6PP1ypsu7tOh6VB3o9ZNkxUFRx4dwRWejdHNg+aKxfscdd4ieaKNr+kIvi7r0Rmd9dDZJxbJu8Ni9e7f5qxMl+hVEX/DcKowfbpFMXD3h4iOaNanaGn1h+n//7/+Z2fsL/WaHkvE7aX3F2VgVkRpb+pVA40M//esGQl3SEu5lUD/hh3tp1Po1tvRYbM0HrCX0RU83T4Vb96rXn1uT6sXd/QcOHJDGjRubTz66uFfXZeiPxeWXX+7EH3G7ht2ZcUMdswfpmkkdpEK7+0PiNPRAN3f3T5s2zbxVal/WHwLdIKKfGkNrgir+GOnbtM6mhiv6I1PZaVH6I6b/qEDVt3l98dN/14FYZ3RjXdjdH2vCsa8/nvGhrVm7dq1Jh6MvbqG9CbpuTj/9Vywat+FKZafp6ACqQlTX4OlAGpq91c/8+gm0Xbt2JqOHDuz6gqdfJPSTZ0FBgVl24FZh/HCLZOLqCRcf0ezu1/6pL0xDhw41fS9WRZe06GEcGiMqpvVzfjiBqrZoTu2NGzeGvVbHNhW/+nXyz3/+sxl79IQ6pycQntvd78U8qTqA6qCqql6V+He/+13jNG2glwp57rzkjehs0SAaMmTIuZnTirW4mSdVczPqRqbQWjpN8KyzObqBQxOXt27d+rzHh07eCdeyytYs6Yud7hDVgfc///M/zcYpXWOrs1JHjx6NSz5I8qSG85z3//94xofSCOVI1ZlO/fKg+xI0+4abXwB0sNT40CUvKlI1i4xu6HjqqadMNgqdodW40aU4OqOl68f1nkOHDpmNIG4Vxg+3SCaunnDxEWme1NAXPU3ir/3D7YwSFUlpvOnLkk7GOI0xzSuu8eNE0Gr9OkOr46ievOX0GWrnuTypnDgVfQfXGTDd+KInQlD8ScDNE3X0M78uJdAfHk1LokGrA59uoFIB6yTonVLW2SFdyK7LCHSRu77sqUAeOXKkGZzdfNaFbOLEKafesvc6N+MjREEHNt2wqEu8YtVPQy+AW7duNWmndCYp9Cz9/1Qs6MufDqo6q6pZAnSTiZuF8cNNmt6sS9dl6u+wkxOnNF2TZqL44x//aP6ZO3duzPp/dWipnZV9uatOnZXde+7EqZycnLN+PJvcbWCV1cfZy/GgnNhn6OcQTX0WzdnLlVkeSvMUOps8tPMzFoNxqG4daN9++22TE7Kyz5+xIqzxodkJNMUPxZ8E3I4Pf1KqvFWMH/73ti4j08kIJ+OH/l7rpkFNGaibdXXTbpDL9OnTzcbhpNzcXHPilOYkpURGQN+QdCeophSi+JOALjHRnI3R7lD0JxVnrdL40LWETtcg
OauVq7xEgPiI3huMH9Gzs+VOze+ruX4jGT8qy+BiS3vdtDM0fiRt3rz5rK4x0PQ0lMgI6LnSuvklIyMjshu52hoCui6G+IjOXbreVkWMrkei+JMA8RG9Xxk/omdny5267ErXNGuaM0pkBELjR5Jm8tdTkzRPpB6PRXFGQNOHaJ4xnZan+JsA8RG5fzU+dD2sroWl+JsA8RG5fxk/Imdm4x06K6rxoUnw0VfOPVh+/DAiVc9i1bVFekINxRkBTa8we/ZskzKF4m8CxEfk/iU+Imdm6x3ER+SeIz4iZ2brHcRH5J4rHx9GpGqaD01Lo3mzKM4I6NnuOlOku7Up/iZAfETuX40P3RGtByVQ/E2A+Ijcv4wfkTOz9Q5NbaZfqtFXzj1YfvwwIlXXS2jSWE2cT3FGQNOW6Bm3FXNbOrubq2wiQHxE7i2ND02rpevuKP4mQHxE7l/Gj8iZ2XqHpjnr168f+ioCB5YfP4xI1Xv16Dfd4a8pEChVE9i0aZPJfbZr1y5QBYQA8eHc0Xoaie7o13x6lGAQID6c+5nxwzkrv1ypx4BqTnU9rYlSNYGK48c5kaonb+hiVaakw3chnYrWY/UmTJgQ/mKu8AUB4sO5G4kP56z8ciXx4dyTxIdzVn65kvhw7smK8XFOpOpZyZrz88iRI85rC+iVuqtfc581adIkoASC12ziw7nPiQ/nrPxyJfHh3JPEh3NWfrmS+HDuyYrxcU6kahV6dFevXr1ITl8FT03OqzvPOArVeafzy5XER3hPanzk5+eLHvlICRYB4iO8vxk/wjPy6xV6ROott9yCvgqjryqOH+eJ1IKCArNbncSzF6aoG0GmTJkiPXr08Gss0a4LECA+wncNjY+pU6dK9+7dw1/MFb4iQHyEdyfjR3hGfr3ilVdekZ/+9KfoqyocXFl8nCdS9d4uXbqYE3Z0tz/lfAIrVqwwJ+isW7cONAElQHxc2PHER0CDolyziQ/igyi4MAHiI/L4+JJI1U/Zmttw586d9LUKBFq0aGFmUXv27AmbgBIgPi7seI0PnUXlUJCABoeIWQrF+FG5/xk/ghsXoZb/8Y9/NLOpJSUlwHCor74kUvU+XVvUuXNn0V1WlM8JaNaD119/nbWodAjio5I+oPFRWFjIWlTig/i4QHwwfhAc6KvK+0BV40elIlVVfrt27Uzy2bS0tMD3rIMHD0rTpk3ljTfekJYtWwaeR9ABEB/n94DS0lITH5rfTmeLKMEmQHyc73/Gj2DHQ8XW79ixw2RSQl99TkbjQ5P3X2j8qFSk6o2a12vv3r3m1Jigl7vuussMwuRFDXpP+KL9xMcXLLKzs0WTVRMfxEeIAPHxRV9g/CAuKhIgPpyPHxcUqVqFzqbee++9MnTo0MD2sgULFsjSpUvNLCoFAuUJEB8iGh/Lli2ToqIiOgcEziNAfHweH4wfBEZlBG644Qa57777Aq+vwo0fVYpUTUXVpk0bKS4ulrZt2waup23ZskUyMjJEz95t3bp14NpPg6smQHx8Hh/KQVOHUCBQngDxwfhBRFyYgOoK/f1UfaU6K2glpK/CjR9VilSFpimX5syZY2YSa9asGRiOekSsrhv5yU9+QvLdwHg98oYGOT50pmzkyJHER+TdJjB3BDk+GD8C082jbqge7jB37lzzJSpo+srp+BFWpCr9MWPGmEW+q1evjtoZtt146623mnWo06dPt8107I0zgaDGh65DnTZtWpxp8zjbCAQ1Phg/bOupibFX40P3//z+979PjAEJeKrqK6fjhyORqm3o37+/XHLJJbJ48eIENCm+jxwwYICcOnXKrLWjQMAJgaDFx+nTp81aOwoEnBAIWnwwfjjpFVwTInD33XfLpZdeGhh9Fcn44VikKkzNn9qgQQOzGNyvZdiwYfL++++TD9WvDo5hu4IQH7qJsqysjHyoMexHfq06CPHB+OHX3hv7dgUhPqIZPyISqSGhevnll/tS8esM6kcffYRAjX08+vYJ+kPj5/g4duwYAtW3vTf2DfN7fDB+xL4P+fkJfo+PaMaPiEVq6NP/8ePHRc/qTklJsb7P6Capfv36Sa1atfjEb703E98A/bRJfCTeD1jgTQLEhzf9glXeIEB8nO+HqESqVqGLfQsKCsyMqs3pqTQNwsCBA6VHjx5skvJGjPrCCr/Fx80338wmKV/0TG80wm/xwfjhjX7lFyv8Fh/VGT+iFqnaGTS9yJAhQ2T+/Pmia3FsK7q29oEHHhBNAzF48GDbzMdejxMgPjzuIMxLKAHiI6H4ebjHCRAfnzuoWiJVK9BErCNGjJCGDRvKjBkzJC0tzeOu//ys2NGjR4ueOa45ykjU73mXWWsg8WGt6zA8DgSIjzhA5hHWEiA+XBCpIe/rWbQzZ86USZMmyahRozzbKWbPni3jxo0zIpWzxj3rJt8ZZkt8zJo1S5544gniw3c90NsNsiU+GD+83Y/8ap0t8RGL8aPaM6nlO0VJSYkozHfeeUcee+wx6du3r2f6jG7y0sTjjRo1kokTJ0rLli09YxuGBIMA8REMP9PK6AgQH9Fx465gEAhqfLgqUkNdZc2aNebT/4kTJ2T48OEJXe+p6zrmzZtndu7r7GnPnj2D0aNppWcJEB+edQ2GeYAA8eEBJ2CCZwkELT5iIlJD3tXd/7o5Sc+l1bQK2dnZkpmZGXPnb9q0SfLy8kw6KT0/WTd16e5LCgS8RID48JI3sMVrBIgPr3kEe7xEICjxEVORGnLovn37TE7VlStXypkzZyQrK0u6desmnTp1kpo1a1bb75rntLCwUNauXSv5+fly0UUXSZ8+fUzu02uuuaba9VMBBGJJgPiIJV3qtp0A8WG7B7E/lgT8Hh9xEanlHaS71fQN4NVXX5X169dLenq6tGrVSpo1ayaNGzc22QHq168vtWvXlho1akhycrLoOa8nT54UPa3g8OHDZnf+/v37ZdeuXbJjxw7zt0OHDtK1a1czY8pu/ViGBHXHkgDxEUu61G07AeLDdg9ifywJ+DE+4i5SKzqouLhYdEHw7t275cCBA3Lo0CFzNrgKUhWmKlBVqKpgVeGampoqV155pVx99dVG2OoGqIyMjFj6nbohkDACxEfC0PNgCwgQHxY4CRMTRsAP8ZFwkZow7/FgCEAAAhCAAAQgEEACukxywIABsmTJEk8fb49IDWDnpMkQgAAEIAABCASXwNixY+Xpp5+Whx9+WKZMmeJZEIhUz7oGwyAAAQhAAAIQgIC7BHQWtV69evLpp5/KZZddJkeOHPHsbCoi1V3fUxsEIAABCEAAAhDwLAGdRdXToT777DO59NJLJScnx7OzqYhUz3YjDIMABCAAAQhAAALuEdBDlnQDus6ihoqXZ1MRqe75npogAAEIQAACEICAZwmUn0UNGenl2VREqme7EoZBAAIQgAAEIAABdwjoLGrdunVNWs+vfe1r8uGHH0qdOnXkn//8p5w6dUo++OADz61NRaS643tqgQAEIAABCEAAAp4loLv5x40bJ1OnTpWRI0ea0zn1FNBnnnlGdIb1qaeeMutTvVQQqV7yBrZ
AAAIQgAAEIACBOBAIidQ4PCrqRyBSo0bHjRCAAAQgAAEIQMBOAohUO/2G1RCAAAQgAAEIQMDXBBCpvnYvjYMABCAAAQhAAAJ2EkCk2uk3rIYABCAAAQhAAAK+JoBI9bV7aRwEIAABCEAAAhCwkwAi1U6/YTUEIAABCEAAAhDwNQFEqq/dS+MgAAEIQAACEICAnQQQqXb6DashAAEIQAACEICArwkgUn3tXhoHAQhAAAIQgAAE7CSASLXTb1gNAQhAAAIQgAAEfE0Akepr99I4CEAAAhCAAAQgYCcBRKqdfsNqCEAAAhCAAAQg4GsCiFRfu5fGQQACEIAABCAAATsJIFLt9BtWQwACEIAABCAAAV8TQKT62r00DgIQgAAEIAABCNhJAJFqp9+wGgIQgAAEIAABCPiaACLV1+6lcRCAAAQgAAEIQMBOAohUO/2G1RCAAAQgAAEIQMDXBBCpvnYvjYMABCAAAQhAAAJ2EkCk2uk3rIYABCAAAQhAAAK+JoBI9bV7aRwEIAABCEAAAhCwkwAi1U6/YTUEIAABCEAAAhDwNQFEqq/dS+MgAAEIQAACEICAnQQQqXb6DashAAEIQAACEICArwkgUn3tXhoHAQhAAAIQgAAE7CSASLXTb1gNAQhAAAIQgAAEfE0Akepr99I4CEAAAhCAAAQgYCcBRKqdfsNqCEAAAhCAAAQg4GsCiFRfu5fGQQACEIAABCAAATsJIFLt9BtWQwACEIAABCAAAV8TQKQ6cO+WLVukpKREdu/eLQcOHJBDhw5JWVmZHDt2TE6ePCmnT5+W5ORkqVGjhtSuXVtSU1PlyiuvlKuvvlqaNWsmLVu2lLZt2zp4EpdAwD4CxId9PsNiCEAAAjYQQKRW4qXt27fLK6+8Iq+99pqsX79emjZtKq1atZL09HQjPNPS0qR+/fpGkKowVYGqQlUFqwrXw4cPy8GDB2X//v1G2Gp9e/bskQ4dOkjXrl2lR48epj4KBGwkQHzY6DVshgAEIGAfAUTqv3329ttvy4oVK2TlypVy6tQpycrKkm7dukmnTp0kJSWl2p49ceKEFBYWytq1ayU/P18uvvhi6dOnj/Tr10+aNGlS7fqpAAKxJEB8xJIudUMAAhCAQGUEAi9SCwoK5LnnnjMCsn///pKdnS3XX399zHvLxo0bJS8vT5YtWyYdO3aUYcOGSffu3WP+XB4AgUgI/OlPf5IFCxYQH5FA41oIQAACEHCFQGBF6po1a2TmzJnm8/zw4cNlyJAhrgCNppLc3FyZN2+e1KlTR0aPHm1mcSkQSCQB4iOR9Hk2BCAAAQgogcCJ1J07d8r48ePNBqhHH33UfG73Slm+fLlMmzZNGjduLBMnTpQWLVp4xTTsCAgB4iMgjqaZEIAABCwgECiROmHCBJk+fbpMmjRJcnJyPOueWbNmybhx44yIVkFNgUA8CBAf8aDMMyAAAQhAwCmBQIjUbdu2yY9//GOTFko/8evufK+X0tJS8+lf013NnTuXbABed5jF9ulu/REjRpj4mDFjhjRs2NDzrSE+PO8iDIQABCBQbQK+F6kLFy40603nz59vNifZVtTuBx98UHTd6qBBg2wzH3s9ToD48LiDMA8CEIBAgAn4WqTq53LNd7p48WKrk+kXFxfLwIEDzYYqXbNKgYAbBDQ+NB3akiVLiA83gFIHBCAAAQi4SsC3IlXTSR0/ftzkPnUjz6mr1KOoTPOs6iYvzQCwdOnSKGrgFgh8QYD4oDdAAAIQgIDXCfhSpPbu3ducBqUzRH4r9913nxHfq1at8lvTaE+cCBAfcQLNYyAAAQhAoFoEfCdSdQBOTU01Cfr9WoYOHSplZWUIVb86OIbtIj5iCJeqIQABCEDAVQK+Eqn6CVOPG/XjDGpFr+uM6pkzZ/j072o4+Lsy4sPf/qV1EIAABPxGwDcidcyYMbJnzx5ZvXq133x0wfbccsstkp6ezmaqwHg8+obqJqndu3cTH9Ej5E4IQAACEIgzAV+IVE2jM2fOHCkqKvLFJimnfUA3U7Vv314eeugh0lM5hRbA64gP4iOA3Z4mQwACPiBgvUjVRP1t2rQRTdPUtm1bH7gksiZouzMzM2Xr1q0k/I8MXSCuJj6Ij0B0dBoJAQj4koD1IrVdu3Zyzz33WJmo360epQn/ly9fbmaSKRAoT4D4EHOQB/FBXEAAAhCwj4DVIlXPGtd1dnl5efaRd9ni7OxsadasmSgTCgSUAPHxRT/Q+ND12+PHj6dzQAACEICAJQSsFaklJSVyww03yN69eyUtLc0S3LEzU88yb9q0qWzcuFFatGgRuwdRsxUEND50FlU3ExIfIsSHFd0WIyEAAQicR8Bakar5Hjt27Cg5OTm49N8EZs2aJYWFheRPpUfI7bffLp06dSI+yvUF4oPAgAAEIGAXAStF6po1a2Ts2LGyc+dOu2jHwdrmzZublFRZWVlxeBqP8CIB4uPCXiE+vNhjsQkCEIBA5QSsFKk33nijDB482JxlTzmfwIoVK0RTDq1btw40ASVAfFzY8bqBatGiRcRHQGODZkMAAnYRsE6kFhQUmFnU7du320U6jta2atVKpk6dKt27d4/jU3mUFwgQH+G9cN1115mvDcRHeFZcAQEIQCCRBKwTqbrWrlevXmYmlVI5gdzcXMnPz2dtagA7CPER3unER3hGXAEBCEDACwSsEqn79u0zJywdOXLEC+w8bUPdunVlw4YN0qRJE0/biXHuESA+nLMkPpyz4koIQAACiSJglUjVvI96FKju0qVUTWDUqFFSq1Yt8qYGqKMQH86dTXw4Z8WVEIAABBJFwCqRqsm4lyxZYo4BpVRNQPOlDho0SN566y1QBYQA8eHc0cSHc1ZcCQEIQCBRBKwRqXoGed++fU1ycoozAtdee628+OKL0rp1a2c3cJW1BIiPyF2n8fHSSy+JbjSkQAACEICA9whYI1InT54sR48e5VN/BH1IP2nWq1dPHn/88Qju4lIbCRAfkXtN4yM1NdVkC6FAAAIQgID3CFgjUrt16yY6qPTs2dN7FD1qkSZ1nz17tqxdu9ajFmKWWwS6du0qDz/8MIc4RACU+IgAFpdCAAIQSAABa0TqZZddZmZSU1JSEoDJzkd+/PHHZib1k08+sbMBWO2YAPHhGNW5C3UTps6kEh+Rs+MOCEAAAvEgYIVI3bx589n777+fBP5R9Ahdb/f8889LRkZGFHdziw0EiouLxQ/xcfbsWUlKSoorck3srye0tW3bNq7P5WEQgAAEIBCegBUiNTc392xRUZHZ2U+JjMCAAQNMblkOP4iMm01Xq8iyNT5UmP7+97+XmTNnmpRpzzzzTFxz+2p8dOjQwWTCoEAAAhCAgLcIWCFSc3JyzjZo0EBGjx7tLXoWWDNjxgx5//335emnn7bAWkyMhoCuRbU1PqZPn25m+lWcvvDCC3LNNdfIU089FQ2GqO7R55eVlRmRTIEABCAAAW8RsEKk3nbbbWfvvfde6d27t7foWWDNqlWrZOnSpRyRaoGvojVRj0K1MT40nZx+bte/3/72t81xx/q5/w
9/+EO0KCK+77e//a0sW7aM+IiYHDdAAAIQiD0BK0RqmzZtzi5YsMBT68bOnDkjf/3rX41NX//61895auXKldKnT5/Ye87hE7Zs2SLDhg0T/Uuxk4DOguua0wttGtQ+GIv40E/x2s9/85vfyB133CH6Y+Fm+dnPfiavv/66vPbaa6ba0HrU0F99vv5T/rlbt26VNm3auGaGrucdPny46F8KBCAAAQh4i4AVIjUtLc2sSU1LS/MEPf08qKmwNDel/vPqq6+agVRnLO+77z4zsHulHDx40KxJLS0t9YpJ2BEhga9+9aumT+Xk5JictxXFasOGDc2aVDfjQ8VhXl6eLFq0SL7xjW+Y2U6ox+O/AAAgAElEQVQ9MKCiUHXa1yvep/Xry9PevXuN6NSjjn/wgx/IK6+8Yp6h9Wp7/vGPf8iHH34oderUMWJWU9E5faYTzBoXHTt2lHfffdfJ5VwDAQhAAAJxJGCFSE1JSTn73nvveSb9VLt27WTgwIGSnJxs/p4+fdrMAunmJE2TpRtBvFI0DdW3vvUt0XQ7FDsJ/OIXv5DHHnvMiDPtZypWNQF9SKzqTL6b8aECcsKECfLnP/9Z3njjDVm8eLHp2zt37pTmzZufg6jXjR8/3sx2VlX0R2bixInnXaJtufnmm+Wzzz4zL3cqSIcOHSpf+cpXjGDVJQy6VCWUH/m2224za1fz8/Pld7/7nWuO1LjQZx8/fty1OqkIAhCAAATcIWCFSE1OTj776aefGlHohaKDtw6eOpt61VVXmU+tKlS/+c1vmoTqY8aM8YKZxga1S3Nonjp1yjM2YUjkBHQ2U2cUtVx66aVGrOrhFj/96U+ldu3a4mZ8/OUvfzGzmnpkaHZ2thGhmzdvlszMTEeGO0klpdd06dLFzJquW7fO1KtxpEsAdPZUN099//vfN18Bdu3aJU2bNpUf/vCH5q9+vXCraFzoTPW//vUvt6qkHghAAAIQcImAFSJVRMfJqmdrXOLhuBoVf3Xr1pVf//rXoqf96Fo5XRuoMz06Q+SlooIm3vknvdR+v9hSMQYuueQSufXWW82aUTfjQ1NajRs3zojFcOtQnXx6V9sqvmCGRKr2SxWp+ldPRlNhqstnbrrpJpk3b57Z1LRhwwbTPp3xnDt3rqsbKCuzzS/9hXZAAAIQsJ1A586dz01keLUtSV6bSVVQIVGq6Z3q168vubm58vOf/1zefvtt88nSK4WZVK94onp26AvRBx988KWZVF2jqus13ZxJ1c/vKhD1U39IXFY2O1qdz/3akEcffVQ2bdp0TqTOmTPHJNbfvn27Eceaw1TFq+ZH3r9/v8mfGoq36tH84m5mUt0iST0QgAAEgkkgyWtrUtUNoc+fb775phEJuk5QP6trrkcvzVqyJtX+oNE1qboGNbT2+aGHHjIbqEJZJdxek6piWJezqGBs3bq1+dyua1L137/zne+4BlTboymoXnzxRfnud79rsmJontRJkyaZGNJ1qNo2nU3VNbLLly83L4FuxhdrUl1zJxVBAAIQCCSBJK/t7lcvhDaX6EzQn/70J+OY5557zqQK8lJhd7+XvBGdLbpmUgVdaHd/+ZRnWmMsdverMNQE97pZ6p577jGbmXRdrJtFlwqo+HzyySfNOltd/6qzpzfeeKN5jO64V8GqywEOHTpkMmfMnz/fVZHK7n43PUpdEIAABIJHIMlreVLfeustGTFihEnRU69ePZN8fMqUKWY39IVyWSbKbeRJTRR5956reVKHDBlyXj7e8rXHMk9qxTyl7rXqi5rKr2sNrYFVYapfKx555JFz61E1B7EeYepmIU+qmzSpCwIQgEDwCCR57cQpzdeoO5F1F7/uCtaUVBs3bnS8+zmeLtQ0ProEwc20PfG0n2eFJ2DriVNVtUzzsmqWgQcffFDuvPNOs6FKhXq4jVzhaZ1/BSdORUqM6yEAAQhAoDyBpJycnLNeOptcZ370GEpdI6cllKrHi26bMWOG2Wyis3EUfxLQtGdXXHGFmXX0S9ENTRpjurRBU09pBg0316KGOGl8HD582CxtoEAAAhCAAAQiJZCUm5trTpzSXb5eKqG0P7EYPN1qp67x01yTuvGF4k8CuqEplHTfTy0MHV4Qy/jS+NAlBIMGDfITOtoCAQhAAAJxIpC0efPms7ohSVPTUCIj0KpVK3NST0ZGRmQ3crU1BHRdJfERnbs0u4CKfF3XS4EABCAAAQhESiBJM/lreidNjVOzZs1I7w/s9Zp+SvNrag5Nir8JEB+R+1fjQzc+fvLJJ5HfzB0QgAAEIAABETEiVdek6dq7rKwsoDgksGbNGpk9e7Y5yYfibwLER+T+JT4iZ8YdEIAABCBwPgEjUvW87qNHj5p8jRRnBPRsd50p0sTvFH8TID4i96/GR2pqqjkogQIBCEAAAhCIhoARqdu2bZO+ffuKpqahOCNw7bXXmtN89NQgir8JEB+R+1fjQzNz6LptCgQgAAEIQCAaAkak6o3p6elmh39mZmY09QTqHj0JS3cu65GWlGAQID6c+1nzGuuOfj2YgwIBCEAAAhCIlsA5kTp+/HjRzQ588g+PUj9l1qpVyxw7SQkGAeLDuZ+JD+esuBICEIAABC5M4JxI3bdvn8n5eeTIEXiFIaC7+jds2CBNmjSBVUAIEB/OHU18OGfFlRCAAAQg4ECk6iV6BGSvXr1ITl9Fj8nNzRXducxRqMELK+IjvM81PvLz80WPDKZAAAIQgAAEqkPg3EyqVlJQUGB2q+tGEUrlBHQjyJQpU6RHjx4gChgB4iO8wzU+pk6dKt27dw9/MVdAAAIQgAAEqiBwnkjV67p06WJO2NHd/pTzCaxYscKcoLNu3TrQBJQA8XFhxxMfAQ0Kmg0BCEAgRgS+JFL1U7bmNty5c2eMHmlvtS1atDCzqD179rS3EVheLQLEx4XxaXzoLCqHglSri3EzBCAAAQj8m8CXRKr+d11717lzZ9FdupTPCWjWg9dff521qHQI4qOSPqDxUVhYyFpU4gMCEIAABFwjUKlILSkpkXbt2pnk/mlpaa49zNaKDh48KE2bNpU33nhDWrZsaWszsNslAsTH+SBLS0tNfGh+VJ1NpUAAAhCAAATcIFCpSNWKNS/k3r17zakxQS933XWXGYTJixr0nvBF+4mPL1hkZ2dLs2bNiA/CAwIQgAAEXCVwQZGqT9HZ1HvvvVeGDh3q6kNtqmzBggWydOlSM4tKgUB5AsSHiMbHsmXLpKioiM4BAQhAAAIQcJVAlSJVU1G1adNGiouLpW3btq4+2IbKtmzZIhkZGbJ161Zp3bq1DSZjYxwJEB+fx4dy0NRTFAhAAAIQgICbBKoUqfogTbk0Z84cM5NYs2ZNN5/t6br0iFg9gesnP/kJhxt42lOJNS7I8aEzySNHjiQ+EtsFeToEIAAB3xIIK1K15WPGjDGbqFavXu1bEBUbduutt5p1qNOnTw9Mm2lodASCGh+6DnXatGnRQeMuCEAAAhCAQBgCjkSq1tG/f3+55JJLZPHixb6HOmDAADl16pRZa0eBgBMCQYuP06dPm7XaF
AhAAAIQgECsCDgWqWqA5k9t0KCB2Szh1zJs2DB5//33yYfqVwfHsF1BiA/dRFlWVkY+1Bj2I6qGAAQgAIHPCUQkUkNC9fLLL/fljKrOoH700UcIVKIjagIqVP0cH8eOHUOgRt07uBECEIAABCIhELFI1cr10+bx48dFz+pOSUmJ5HmevFY3SfXr109q1arFJ35Pesguo4gPu/yFtRCAAAQg4E0CUYlUbYpuFikoKDAzqjanp9I0UwMHDpQePXqwScqbfdRKq/wWHzfffDObpKzsiRgNAQhAwF4CUYtUbbKm3xkyZIjMnz9fdC2nbUXX1j7wwAOSm5tLGh3bnGeBvcSHBU7CRAhAAAIQ8CyBaolUbZUm8h4xYoQ0bNhQZsyYIWlpaZ5tbMiwgwcPyujRo0XPHJ87dy6J+j3vMXsNJD7s9R2WQwACEIBAYglUW6SGzNezzGfOnCmTJk2SUaNGJbZVVTx99uzZMm7cOCNSJ0yY4Fk7McxfBGyJj1mzZskTTzxBfPir+9EaCEAAAlYScE2kautLSkpEB+N33nlHHnvsMenbt69noOgmL0083qhRI5k4caK0bNnSM7ZhSDAIEB/B8DOthAAEIAABdwi4KlJDJq1Zs8Z8+j9x4oQMHz48oes9dV3gvHnzzM59nT3t2bOnO+SoBQJREiA+ogTHbRCAAAQgECgCMRGpIYK6+183JxUVFZm0VdnZ2ZKZmRlzwJs2bZK8vDyTTqp9+/ZmU5fu3qdAwEsEiA8veQNbIAABCEDAawRiKlJDjd23b5/Jqbpy5Uo5c+aMZGVlSbdu3aRTp05Ss2bNajPRPKeFhYWydu1ayc/Pl4suukj69Oljcp9ec8011a6fCiAQSwLERyzpUjcEIAABCNhKIC4itTwc3e2sM0ivvvqqrF+/XtLT06VVq1bSrFkzady4sckOUL9+faldu7bUqFFDkpOTRc8JP3nypOhpN4cPHxbdnb9//37ZtWuX7Nixw/zt0KGDdO3a1cyYtm7d2lZ/YHfACRAfAe8ANB8CEIAABM4RiLtIrci+uLjYbLjavXu3HDhwQA4dOmTOBldBqsJUBaoKVRWsKlxTU1PlyiuvlKuvvtoIW90AlZGRgUsh4EsCxIcv3UqjIAABCEDAAYGEi1QHNnIJBCBQCQFd5jJgwABZsmSJL44nxskQgAAEIACB8gQQqfQHCFhKYOzYsfL000/Lww8/LFOmTLG0FZgNAQhAAAIQqJwAIpWeAQELCegsar169eTTTz+Vyy67TI4cOcJsqoV+xGQIQAACELgwAUQqvQMCFhLQWVQ9Heqzzz6TSy+9VHJycphNtdCPmAwBCEAAAohU+gAEfENAD8nQDYQ6ixoqzKb6xr00BAIQgAAE/k2AmVS6AgQsI1B+FjVkOrOpljkRcyEAAQhAICwBRGpYRFwAAe8Q0FnUunXrmrRsX/va1+TDDz+UOnXqyD//+U85deqUfPDBB6xN9Y67sAQCEIAABKpBAJFaDXjcCoF4E9Dd/OPGjZOpU6fKyJEjzelqeorbM888IzrD+tRTT5n1qRQIQAACEICA7QQQqbZ7EPsDTSAkUgMNgcZDAAIQgIAvCSBSfelWGhUUAojUoHiadkIAAhAIHgFEavB8Tot9RACR6iNn0hQIQAACEDiPACKVDgEBiwkgUi12HqZDAAIQgECVBBCpdBAIWEwAkWqx8zAdAhCAAAQQqfQBCPiVACLVr56lXRCAAAQgwEwqfQACFhNApFrsPEyHAAQgAAFmUukDEPArAUSqXz1LuyAAAQhAgJlU+gAELCaASLXYeZgOAQhAAALMpNIHIOBXAohUv3qWdkEAAhCAADOp9AEIWEwAkWqx8zAdAhCAAASYSaUPQMCvBBCpfvUs7YIABCAAAWZS6QMQsJgAItVi52E6BCAAAQgwk0ofgIBfCSBS/epZ2gUBCEAAAsyk0gcgYDEBRKrFzsN0CEAAAhBgJpU+AAG/EkCk+tWztAsCEIAABJhJpQ9AwGICiFSLnYfpEIAABCDATCp9AAJ+JYBI9atnaRcEIAABCDCTSh+AgMUEEKkWOw/TIQABCECAmVT6AAT8SgCR6lfP0i4IQAACEGAmlT4AAYsJIFItdh6mQwACEIAAM6n0AQj4lQAi1a+epV0QgAAEIMBMKn0AAhYTQKRa7DxMhwAEIAABZlLpAxDwKwFEql89S7sgAAEIQICZVPoABCwmgEi12HmYDgEIQAACzKTSByDgVwKIVL96lnZBAAIQgAAzqfQBCFhMAJFqsfMwHQIQgAAEmEmlD0DArwQQqX71LO2CAAQgAAFmUukDELCYACLVYudhOgQgAAEIMJNKH4CAXwkgUv3qWdoFAQhAAALMpNIHIGAxAUSqxc7DdAhAAAIQ8PZManFxsZSUlMju3bvlwIEDcujQISkrK5Njx47JyZMn5fTp05KcnCw1atSQ2rVrS2pqqqSlpUmjRo2kWbNm0rJlS8nIyMDNEAgkAURqIN1OoyEAAQgEgkDcZ1K3bdsmBQUF8tprr0lhYaGkp6fLddddZ/42btzYCND69esbQarCVAWqClUVrCpcDx8+LAcPHpT9+/fLrl27ZMeOHUbgdujQQW666Sbp0aOHtG7dOhDOo5EQQKTSByAAAQhAwK8E4iJS3377bVmxYoW8/PLLRnBmZWVJt27dpFOnTpKSklJtth9//LG8/vrrsnbtWsnPz5eLL75Y+vTpI/369ZMmTZpUu34qgIBXCSBSveoZ7IIABCAAgeoSiKlI1RnTBQsWyPr166V///5y1113SWZmZnVtDnv/xo0bJS8vT5YtWyYdO3aUYcOGSffu3cPexwUQsI0AItU2j2EvBCAAAQg4JRATkbpmzRqZOXOm+Tw/fPhwGTJkiFN7XL8uNzdXnn32WbN84JFHHpGePXu6/gwqhECiCCBSE0We50IAAhCAQKwJuCpSdQPU+PHj5Z133pFHH33UfG73StHlBlOnTjXrXidOnCgtWrTwimnYAYGoCSBSo0bHjRCAAAQg4HECronUCRMmyIwZM+TJJ5+UnJwczzZ71qxZMm7cOBkzZoyozRQI2EwAkWqz97AdAhCAAASqIlBtkaq79UeMGCENGzY0IlV353u9lJaWyujRo026q7lz50qrVq28bjL2QaBSAohUOgYEIAABCPiVQLVE6sKFC8160/nz55vNSbYV3dT1wAMPiK5bHTx4sG3mYy8EBJFKJ4AABCAAAb8SiFqk6udy3b2/ePFiadu2rbV89DCBgQMHmrRY06ZNs7YdGB5MAojUYPqdVkMAAhAIAoGoRKqmkzp+/LjJfepGntNEgz5x4oTZ5FWnTh1ZunRpos3h+RBwTACR6hgVF0IAAhCAgGUEIhapt99+uxFzS5Yssayp4c0dMGCASZu1atWq8BdzBQQ8QACR6gEnYAIEIAABCMSEQEQiVQVqgwYNTIJ+v5ahQ4dKWVkZQtWvDvZZuxCpPnMozYEABCAAgXMEHItU/cSvx436cQa1Yn/QGVU9vpVP/0SK1wkgUr3uIeyDAAQgAIFoCTgSqbpJas+ePbJ6
9epon2Pdfbfccoukp6ezmco6zwXLYERqsPxNayEAAQgEiUBYkapppubMmSNFRUW+2CTl1Lm6map9+/YycuRI0lM5hcZ1cSeASI07ch4IAQhAAAJxIlClSNVE/W3atBFN02RzmqloWWq7v/e974lyIOF/tBS5L5YEEKmxpEvdEIAABCCQSAJVitR27drJPffcY2Wifreg6iaxZcuWmZlkCgS8RgCR6jWPYA8EIAABCLhF4IIiVc+13717t+Tl5bn1LGvryc7OlmbNmokyoUDASwQQqV7yBrZAAAIQgICbBCoVqSUlJXLDDTfI3r17JS0tzc3nWVlXaWmpNG3aVDZu3CgtWrSwsg0Y7U8CiFR/+pVWQQACEICASKUitXfv3tKxY0fJycmB0b8JzJo1SwoLC8mfSo/wFAFEqqfcgTEQgAAEIOAigS+J1DVr1sjYsWNl586dLj7GH1U1b95cpk6dKj179vRHg2iF9QQQqda7kAZAAAIQgMAFCHxJpN54440m5ZKeZU85n8CKFStEU3KtW7cONBDwBAFEqifcgBEQgAAEIBADAueJ1IKCAjOLun379hg8yh9VaioqnU3t3r27PxpEK6wmgEi12n0YDwEIQAACVRA4T6Tefvvt0qtXL5LXVwEsNzdX8vPzWZtKWHmCACLVE27ACAhAAAIQiAGBcyJ137595oSlI0eOxOAx/qqybt26smHDBmnSpIm/GkZrrCOASLXOZRgMAQhAAAIOCZwTqZoDVI8C1V3slKoJjBo1SmrVqkXeVDpKwgkgUhPuAgyAAAQgAIEYETgnUtPT02XJkiWSmZkZo0f5p1rNlzpo0CB56623/NMoWmIlAUSqlW7DaAhAAAIQcEDAiFQ9m75v376yZ88eB7dwiRK49tpr5cUXX5TWrVsDBAIJI4BITRh6HgwBCEAAAjEmYETq5MmT5ejRo3zqjwC2fvKvV6+ePP744xHcxaUQcJcAItVdntQGAQhAAALeIWBEardu3URFF0nqnTtGDz2YPXu2rF271vlNXAkBlwkgUl0GSnUQgAAEIOAZAkakXnbZZWYmNSUlxTOGed2Qjz/+2MykfvLJJ143Fft8TACR6mPn0jQIQAACASeQtHnz5rP3338/Cfyj6Aia2P/555+XjIyMKO7mFghUnwAitfoMqQECEIAABLxJICk3N/dsUVGR2dlPiYzAgAEDTG5ZPUaWAoFEEECkJoI6z4QABCAAgXgQSMrJyTnboEEDGT16dDyeF9Uzzp49K0lJSVHdG8ubZsyYIe+//748/fTTsXwMdUPgggQQqXQOCEAAAhDwK4Gk22677ey9994rvXv39lQbz5w5I3PnzpXly5eLDsQjRoyQu+++21M2rlq1SpYuXcoRqZ7ySrCMQaQGy9+0FgIQgECQCCS1adPm7IIFC6Rt27aeabcK1DvuuEOaN29u/uoRpHPmzJE333zTUzOqW7ZskWHDhon+pUAgEQQQqYmgzjMhAAEIQCAeBJLS0tLMmtS0tLR4PC/sM/TT/tChQ+Xdd9+VV155xcyiTpw4Uf7xj3/Ic889F/b+eF5w8OBBsya1tLQ0no/lWRA4RwCRSmeAAAQgAAG/EkhKSUk5+95773km/dSJEyekVq1a8rOf/cz8s2nTJjOj6sX0WJqG6lvf+paozRQIJIIAIjUR1HkmBCAAAQjEg0BScnLy2U8//VSSk5Pj8bywz9i7d680bdpUnnjiCbniiitE/33Hjh1GsN50001h74/nBadPnxbNMXvq1Kl4PpZnQYCZVPoABCAAAQj4noBumdd8/p5p6GuvvSZdu3YVTe+0ePFiY9f48eNl4cKF8vbbb0uNGjU8Y6sa4sWsA54ChDExJdClSxdZt25dTJ9B5RCAAAQgAIFEEPDcTOqBAwekcePGsmjRIhk4cKBhkpeXJ3fddZfZOPXd7343EZwqfSYzqZ5xBYZAAAIQgAAEIOAzAp5bk6qfzi+55JLzROqKFStM+qm1a9eaWVavFNakesUT2AEBCEAAAhCAgN8IeG53v6af0hnUq666yuzq16Kf+9esWSObN282u/29Utjd7xVPYAcEIAABCEAAAn4j4Mk8qbpR6gc/+IHceeedZlb1b3/7m4waNUqysrI8xZ88qZ5yB8ZAAAIQgAAEIOAjAp4+cWrlypXSrl07k+bJSzOoIf/riVMvvPCC/O53v/NRl6ApEIAABCAAAQhAIPEEknJycs42aNBARo8enXhrLLNgxowZ8v7778vTTz9tmeWYCwEIQAACEIAABLxNICk3N9ecOLVkyRJvW+pB6zRNlp44NXjwYA9ah0kQgAAEIAABCEDAXgJJmzdvPnv//ffL9u3b7W1Fgixv1aqVPP/885KRkZEgC3gsBCAAAQhAAAIQ8CeBJM3kr6cmffDBB1KzZk1/tjIGrdL0U3Xr1hU9rYsCAQhAAAIQgAAEIOAuASNSNffoww8/7Lnd8+421d3aNCXW7NmzTe5WCgQgAAEIQAACEICAuwSMSJ08ebIcPXpUZs2a5W7tPq5NU2LVq1dPHn/8cR+3kqZBAAIQgAAEIACBxBAwInXbtm3St29f2bNnT2KssPCp1157rbz44ovSunVrC63HZAhAAAIQgAAEIOBtAkakqonp6elmh39mZqa3LfaAdZs2bRLd2b9r1y4PWIMJEIAABCAAAQhAwH8EzolUPXpUNwPxyT+8k/VTf61atWTChAnhL+YKCEAAAhCAAAQgAIGICZwTqfv27TM5P48cORJxJUG7QXf1b9iwQZo0aRK0ptNeCEAAAhCAAAQgEBcC50SqPu3222+XXr16kZy+CvS5ubmiO/s5CjUu/ZOHQAACEIAABCAQUALnidSCggKzW103UlEqJ6AJ/KdMmSI9evQAEQQgAAEIQAACEIBAjAicJ1L1GV26dBE9gUp3+1POJ7BixQpZuHChrFu3DjQQgAAEIAABCEAAAjEk8CWRqp+yx44dKzt37ozhY+2sukWLFmYWtWfPnnY2AKshAAEIQAACEICAJQS+JFLVbl2b2rlzZ9Fd7JTPCWjWg9dff521qHQICEAAAhCAAAQgEAcClYrUkpISadeunUnun5aWFgczvP2IgwcPStOmTeWNN96Qli1bettYrIMABCAAAQhAAAI+IFCpSNV2ad7UvXv3yksvveSDZlavCXfddZcRqeRFrR5H7oYABCAAAQhAAAJOCVxQpGoFOpt6zz33yLBhw5zW57vrFixYIEuXLjWzqBQIQAACEIAABCAAgfgQqFKkaiqqNm3aSHFxsbRt2zY+FnnoKVu2bJGMjAzZunWrtG7d2kOWYQoEIAABCEAAAhDwN4EqRao2XVMuzZkzR4qKiiQlJcXfNMq1To+I1RO4fvKTn3C4QWC8TkMhAAEIQAACEPAKgbAiVQ0dM2aM2US1evVqr9gdcztuvfVWsw51+vTpMX8WD4AABCAAAQhAAAIQOJ+AI5Gqt/Tv318uueQSWbx4se8ZDhgwQE6dOiXLli3zfVtpIAQgAAEIQAACEPAiAcciVY3X/KkNGjQQ3Uzk16KbxN5//33yofr
VwbQLAhCAAAQgAAErCEQkUkNC9fLLL/fljKrOoH700UcIVCu6LkZCAAIQgAAEIOBnAhGLVIWhn/6PHz8uv/rVr6RmzZrW89FNUv369ZNatWrxid96b9IACEAAAhCAAAT8QCAqkaoN181UBQUFsmTJEpOmytaiaaYGDhwoPXr0YJOUrU7EbghAAAIQgAAEfEcgapGqJDQ91ZAhQ8wa1aFDh1oHR+1+4IEHJDc3lzRT1nkPgyEAAQhAAAIQ8DOBaolUBaMJ/0eMGCENGzaUGTNmSFpamud5HTx4UEaPHi2lpaUyd+5cEvV73mMYCAEIQAACEIBA0AhUW6SGgI0fP15mzpwpkyZNklGjRnmW4+zZs2XcuHFGpE6YMMGzdmIYBCAAAQhAAAIQCDIB10SqQiwpKREVq++884489thj0rdvX8+wXbFihUybNk0aNWokEydOlJYtW3rGNgyBAAQgAAEIQAACEDifgKsiNVT1H//4RzOreuLECRk+fHhC13vqutl58+aZnfs6e9qzZ0/6AAQgAAEIQAACEICAxwnERKSG2qy7/3VzUlFRkUlblZ2dLZmZmTFHsmnTJsnLyzPppNq3by+aoF9371MgAAEIQAACEIAABOwgECnyJc0AAAENSURBVFORGkKwb98+0c/tK1eulDNnzkhWVpZ069ZNOnXq5EqeVc1z+vrrr8urr74q+fn5ctFFF8mdd95plhtcc801dngCKyEAAQhAAAIQgAAEzhGIi0gtz1uzAegMqwrK9evXS3p6ulx33XXmb+PGjU12gPr160vt2rWlRo0akpycLKdPn5aTJ0/KsWPH5PDhw6K78/fv3y+7du2SHTt2mL8dO3aUm266ycyYtm7dGhdDAAIQgAAEIAABCFhMIO4itSKr4uJis+Fq9+7dcuDAATl06JCUlZUZQarCVAWqClUVrCpcU1NT5corr5Srr75amjVrZjZAZWRkWOwCTIcABCAAAQhAAAIQqEjg/wOzX8Uhbh50FQAAAABJRU5ErkJggg==) > Denote each arrow with the corresponding partial gradient, e.g. $\frac{df}{dc} = 1$ between $f$ and $c$, and use the generalized chain rule on graphs to compute the gradients $\frac{df}{dc}$, $\frac{df}{db}$, $\frac{df}{da}$, $\frac{df}{dx}$, $\frac{df}{dy}$. Your answer here $\frac{df}{dc} = 1$ $\frac{dc}{d6} = y$ $\frac{dc}{dy} = 6$ $\frac{df}{db} = 1$ $\frac{db}{d3} = a$ $\frac{db}{da} = 3$ $\frac{da}{dx} = 2x$ ---------------------------------- $\frac{df}{da} = \frac{df}{db} * \frac{db}{da} = 1 * 3 = 3$ $\frac{df}{dx} = \frac{df}{db} * \frac{db}{da} * \frac{da}{dx} = 1 * 3 * 2x = 6x$ $\frac{df}{dy} = \frac{df}{dc} * \frac{dc}{dy} = 1 * 6 = 6$ ### Autodiff This exercise is quite hard. It's OK if you don't finish it, but you should try your best! You are given the following function (pseudo-code): ``` def parents_grads(node): """ returns parents of node and the gradients of node w.r.t each parent e.g. in the example graph above parents_grads(f) would return: [(b, df/db), (c, df/dc)] """ ``` > Complete the `backprop` method below to create a recursive algorithm such that calling `backward(node)` computes the gradient of `node` w.r.t. every (upstream - to the left) node in the computational graph. Every node has a `node.grad` attribute that is initialized to `0.0`, it's numerical gradient. The algorithm should modify this property directly, it should not return anything. Assume the gradients from `parents_grads` can be treated like real numbers, so you can e.g. multiply and add them. ``` def backprop(node, df_dnode): node.grad += df_dnode # Your code here parents = parents_grads(node) for parent, grad in parents: backprop(parent, grad + df_dnode) def backward(node): """ Computes the gradient of every (upstream) node in the computational graph w.r.t. node. """ backprop(node, 1.0) # The gradient of a node w.r.t. itself is 1 by definition. ``` Ok, now let's try to actually make it work! We'll define a class `Node` which contains the node value, gradient and parents and their gradients ``` from typing import Sequence, Tuple class Node: def __init__(self, value: float, parents_grads: Sequence[Tuple['Node', float]]): self.value = value self.grad = 0.0 self.parents_grads = parents_grads def __repr__(self): return "Node(value=%.4f, grad=%.4f)"%(self.value, self.grad) ``` So far no magic. We still havn't defined how we get the `parents_grads`, but we'll get there. Now move the `backprop` and `grad` function into the class, and modify it so it works with the class. 
``` # Your code here from typing import Sequence, Tuple class Node: def __init__(self, value: float, parents_grads: Sequence[Tuple['Node', float]]): self.value = value self.grad = 0.0 self.parents_grads = parents_grads def __repr__(self): return "Node(value=%.4f, grad=%.4f)"%(self.value, self.grad) def backprop(self, df_dnode): self.grad += df_dnode # Your code here for parent, grad in self.parents_grads: parent.backprop(grad * df_dnode) def backward(self): """ Computes the gradient of every (upstream) node in the computational graph w.r.t. node. """ self.backprop(1.0) # The gradient of a node w.r.t. itself is 1 by definition. ``` Now let's create a simple graph: $y = x^2$, and compute it for $x=2$. We'll set the parent_grads directly based on our knowledge that $\frac{dx^2}{dx}=2x$ ``` x = Node(2.0, []) y = Node(x.value**2, parents_grads=[(x, 2*x.value)]) ``` And print the two nodes ``` print("x", x, "y", y) ``` > Verify that the `y.backward()` call below computes the correct gradients ``` y.backward() print("x", x, "y", y) ``` $\frac{dy}{dx}$ should be 4 and $\frac{dy}{dy}$ should be 1 Ok, so it seems to work, but it's not very easy to use, since you have to define all the `parents_grads` whenever you're creating new nodes. **Here's the trick.** We can make a function `square(node:Node)->Node` which can square any Node. See below ``` def square(node: Node) -> Node: return Node(node.value**2, [(node, 2*node.value)]) ``` Let's verify that it works ``` x = Node(3.0, []) y = square(x) print("x", x, "y", y) y.backward() print("x", x, "y", y) ``` Now we're getting somewhere. These calls to square can of course be chained ``` x = Node(3.0, []) y = square(x) z = square(y) print("x", x, "y", y, "z", z) z.backward() print("x", x, "y", y,"z", z) ``` > Compute the $\frac{dz}{dx}$ gradient by hand and verify that it's correct Your answer here $\frac{dz}{dx} = \frac{dz}{dy} * \frac{dy}{dx} = 2y * 2x = 2 (x^2) * 2x = 2 * 3^2 * 2 * 3 = 108$ Similarly we can create functions like this for all the common operators, plus, minus, multiplication, etc. With enough base operators like this we can create any computation we want, and compute the gradients automatically with `.backward()` > Finish the plus function below and verify that it works ``` def plus(a: Node, b:Node)->Node: """ Computes a+b """ # Your code here return Node(a.value + b.value, [(a, 1), (b, 1)]) x = Node(4.0, []) y = Node(5.0, []) z = plus(x, y) print("x", x, "y", y, "z", z) z.backward() print("x", x, "y", y,"z", z) ``` > Finish the multiply function below and verify that it works: ``` def multiply(a: Node, b:Node)->Node: """ Computes a*b """ # Your code hre return Node(a.value*b.value, [(a,b.value),(b,a.value)]) x = Node(4.0, []) y = Node(5.0, []) z = multiply(x, y) print("x", x, "y", y, "z", z) z.backward() print("x", x, "y", y,"z", z) ``` We'll stop here, but with just a few more functions we could compute a lot of common computations, and get their gradients automatically! This is super nice, but it's kind of annoying having to write `plus(a,b)`. Wouldn't it be nice if we could just write `a+b`? With python operator overloading we can! If we define the `__add__` method on `Node`, this will be executed instead of the regular plus operation when we add something to a `Node`. > Modify the `Node` class so that it overload the plus, `__add__(self, other)`, and multiplication, `__mul__(self, other)`, operators and run the code below to verify that it works. 
```
# Your code here
class Node:
    def __init__(self, value: float, parents_grads: Sequence[Tuple['Node', float]]):
        self.value = value
        self.grad = 0.0
        self.parents_grads = parents_grads

    def __repr__(self):
        return "Node(value=%.4f, grad=%.4f)"%(self.value, self.grad)

    def backprop(self, df_dnode):
        self.grad += df_dnode
        # Apply the chain rule: multiply each parent's local gradient by the upstream gradient.
        for parent, grad in self.parents_grads:
            parent.backprop(grad * df_dnode)

    def backward(self):
        """ Computes the gradient of every (upstream) node in the computational graph w.r.t. node. """
        self.backprop(1.0) # The gradient of a node w.r.t. itself is 1 by definition.

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1), (other, 1)])

    def __mul__(self, other):
        return Node(self.value * other.value, [(self, other.value), (other, self.value)])

a = Node(2.0, [])
b = Node(3.0, [])
c = Node(4.0, [])
d = a*b + c # Behold the magic of operator overloading!
print("a", a, "b", b, "c", c, "d", d)
d.backward()
print("a", a, "b", b, "c", c, "d", d)
```

Congratulations, you've made your own tiny library for autodiff! That's really an awesome achievement! Now, I wouldn't recommend using your library for anything other than a tool to understand the inner workings of autodiff. Extremely good libraries already exist, which have a lot of functions defined, are super-duper optimized, and can even run on specialized hardware like GPUs. Some of those libraries are [PyTorch](https://pytorch.org/), [Tensorflow](https://www.tensorflow.org/) and [Jax](https://github.com/google/jax).
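As a small follow-up, here is a hedged sketch of how one more primitive could be added in exactly the same style. It assumes the final `Node` class from the cell above is in scope; the names `exp` and `minus` are illustrative choices and are not part of the original exercise.

```
import math

def exp(node: Node) -> Node:
    """Computes e**x; the local gradient d(e**x)/dx is e**x itself."""
    value = math.exp(node.value)
    return Node(value, [(node, value)])

def minus(a: Node, b: Node) -> Node:
    """Computes a - b, with local gradients +1 w.r.t. a and -1 w.r.t. b."""
    return Node(a.value - b.value, [(a, 1.0), (b, -1.0)])

x = Node(1.0, [])
y = exp(x) * exp(x)    # y = e^(2x), so dy/dx = 2*e^(2x)
y.backward()
print("x", x, "y", y)  # x.grad should be about 2*e**2 ≈ 14.78
```

Because `backprop` accumulates with `+=`, both `exp(x)` branches contribute to `x.grad`, which is exactly what the multivariable chain rule requires.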
---
``` %pylab inline from ipyparallel import Client, error cluster=Client(profile="mpi") view=cluster[:] view.block=True try: from openmdao.utils.notebook_utils import notebook_mode except ImportError: !python -m pip install openmdao[notebooks] ``` ```{note} This feature requires MPI, and may not be able to be run on Colab. ``` # Distributed Variables At times when you need to perform a computation using large input arrays, you may want to perform that computation in multiple processes, where each process operates on some subset of the input values. This may be done purely for performance reasons, or it may be necessary because the entire input will not fit in the memory of a single machine. In any case, this can be accomplished in OpenMDAO by declaring those inputs and outputs as distributed. By definition, a variable is distributed if each process contains only a part of the whole variable. Conversely, when a variable is not distributed (i.e., serial), each process contains a copy of the entire variable. A component that has at least one distributed variable can also be called a distributed component. We’ve already seen that by using [src_indices](connect-with-src-indices), we can connect an input to only a subset of an output variable. By giving different values for src_indices in each MPI process, we can distribute computations on a distributed output across the processes. All of the scenarios that involve connecting distributed and serial variables are detailed in [Connections involving distributed variables](../working_with_groups/dist_serial.ipynb). ## Example: Simple Component with Distributed Input and Output The following example shows how to create a simple component, *SimpleDistrib*, that takes a distributed variable as an input and computes a distributed output. The calculation is divided across the available processes, but the details of that division are not contained in the component. In fact, the input is sized based on it's connected source using the "shape_by_conn" argument. ``` %%px import numpy as np import openmdao.api as om class SimpleDistrib(om.ExplicitComponent): def setup(self): # Distributed Input self.add_input('in_dist', shape_by_conn=True, distributed=True) # Distributed Output self.add_output('out_dist', copy_shape='in_dist', distributed=True) def compute(self, inputs, outputs): x = inputs['in_dist'] # "Computationally Intensive" operation that we wish to parallelize. f_x = x**2 - 2.0*x + 4.0 outputs['out_dist'] = f_x ``` In the next part of the example, we take the `SimpleDistrib` component, place it into a model, and run it. Suppose the vector of data we want to process has 7 elements. We have 4 processors available for computation, so if we distribute them as evenly as we can, 3 procs can handle 2 elements each, and the 4th processor can pick up the last one. OpenMDAO's utilities includes the `evenly_distrib_idxs` function which computes the sizes and offsets for all ranks. The sizes are used to determine how much of the array to allocate on any specific rank. The offsets are used to figure out where the local portion of the array starts, and in this example, is used to set the initial value properly. In this case, the initial value for the full distributed input "in_dist" is a vector of 7 values between 3.0 and 9.0, and each processor has a 1 or 2 element piece of it. 
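Before the full example, a quick illustrative check of the split described above — this is a plain-Python sketch (no MPI required) added for clarity; it only relies on the `evenly_distrib_idxs` call that the example cell below also uses, and the printed values reflect the 3-ranks-get-2, 1-rank-gets-1 split described in the text.

```
# Sanity check of how 7 elements are expected to be split across 4 processes.
from openmdao.utils.array_utils import evenly_distrib_idxs

sizes, offsets = evenly_distrib_idxs(4, 7)
print(sizes)    # expected: sizes of each rank's local slice, e.g. [2 2 2 1]
print(offsets)  # expected: where each rank's slice starts, e.g. [0 2 4 6]
```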
``` %%px from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI size = 7 if MPI: comm = MPI.COMM_WORLD rank = comm.rank sizes, offsets = evenly_distrib_idxs(comm.size, size) else: # When running in serial, the entire variable is on rank 0. rank = 0 sizes = {rank : size} offsets = {rank : 0} prob = om.Problem() model = prob.model # Create a distributed source for the distributed input. ivc = om.IndepVarComp() ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True) model.add_subsystem("indep", ivc) model.add_subsystem("D1", SimpleDistrib()) model.connect('indep.x_dist', 'D1.in_dist') prob.setup() # Set initial values of distributed variable. x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]] prob.set_val('indep.x_dist', x_dist_init) prob.run_model() # Values on each rank. for var in ['indep.x_dist', 'D1.out_dist']: print(var, prob.get_val(var)) # Full gathered values. for var in ['indep.x_dist', 'D1.out_dist']: print(var, prob.get_val(var, get_remote=True)) print('') %%px from openmdao.utils.assert_utils import assert_near_equal assert_near_equal(prob.get_val(var, get_remote=True), np.array([7., 12., 19., 28., 39., 52., 67.])) ``` Note that we created a connection source 'x_dist' that passes its value to 'D1.in_dist'. OpenMDAO requires a source for non-constant inputs, and usually creates one automatically as an output of a component referred to as an 'Auto-IVC'. However, the automatic creation is not supported for distributed variables. We must manually create an `IndepVarComp` and connect it to our input. When using distributed variables, OpenMDAO can't always size the component inputs based on the shape of the connected source. In this example, the component determines its own split using `evenly_distrib_idxs`. This requires that the component know the full vector size, which is passed in via the option 'vec_size'. ``` %%px import numpy as np import openmdao.api as om from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI class SimpleDistrib(om.ExplicitComponent): def initialize(self): self.options.declare('vec_size', types=int, default=1, desc="Total size of vector.") def setup(self): comm = self.comm rank = comm.rank size = self.options['vec_size'] sizes, _ = evenly_distrib_idxs(comm.size, size) mysize = sizes[rank] # Distributed Input self.add_input('in_dist', np.ones(mysize, float), distributed=True) # Distributed Output self.add_output('out_dist', np.ones(mysize, float), distributed=True) def compute(self, inputs, outputs): x = inputs['in_dist'] # "Computationally Intensive" operation that we wish to parallelize. f_x = x**2 - 2.0*x + 4.0 outputs['out_dist'] = f_x size = 7 if MPI: comm = MPI.COMM_WORLD rank = comm.rank sizes, offsets = evenly_distrib_idxs(comm.size, size) else: # When running in serial, the entire variable is on rank 0. rank = 0 sizes = {rank : size} offsets = {rank : 0} prob = om.Problem() model = prob.model # Create a distributed source for the distributed input. ivc = om.IndepVarComp() ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True) model.add_subsystem("indep", ivc) model.add_subsystem("D1", SimpleDistrib(vec_size=size)) model.connect('indep.x_dist', 'D1.in_dist') prob.setup() # Set initial values of distributed variable. x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]] prob.set_val('indep.x_dist', x_dist_init) prob.run_model() # Values on each rank. 
for var in ['indep.x_dist', 'D1.out_dist']: print(var, prob.get_val(var)) # Full gathered values. for var in ['indep.x_dist', 'D1.out_dist']: print(var, prob.get_val(var, get_remote=True)) print('') %%px from openmdao.utils.assert_utils import assert_near_equal assert_near_equal(prob.get_val(var, get_remote=True), np.array([7., 12., 19., 28., 39., 52., 67.])) ``` ## Example: Distributed I/O and a Serial Input OpenMDAO supports both serial and distributed I/O on the same component, so in this example, we expand the problem to include a serial input. In this case, the serial input also has a vector width of 7, but those values will be the same on each processor. This serial input is included in the computation by taking the vector sum and adding it to the distributed output. ``` %%px import numpy as np import openmdao.api as om from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI class MixedDistrib1(om.ExplicitComponent): def setup(self): # Distributed Input self.add_input('in_dist', shape_by_conn=True, distributed=True) # Serial Input self.add_input('in_serial', shape_by_conn=True) # Distributed Output self.add_output('out_dist', copy_shape='in_dist', distributed=True) def compute(self, inputs, outputs): x = inputs['in_dist'] y = inputs['in_serial'] # "Computationally Intensive" operation that we wish to parallelize. f_x = x**2 - 2.0*x + 4.0 # This operation is repeated on all procs. f_y = y ** 0.5 outputs['out_dist'] = f_x + np.sum(f_y) size = 7 if MPI: comm = MPI.COMM_WORLD rank = comm.rank sizes, offsets = evenly_distrib_idxs(comm.size, size) else: # When running in serial, the entire variable is on rank 0. rank = 0 sizes = {rank : size} offsets = {rank : 0} prob = om.Problem() model = prob.model # Create a distributed source for the distributed input. ivc = om.IndepVarComp() ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True) ivc.add_output('x_serial', np.zeros(size)) model.add_subsystem("indep", ivc) model.add_subsystem("D1", MixedDistrib1()) model.connect('indep.x_dist', 'D1.in_dist') model.connect('indep.x_serial', 'D1.in_serial') prob.setup() # Set initial values of distributed variable. x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]] prob.set_val('indep.x_dist', x_dist_init) # Set initial values of serial variable. x_serial_init = 1.0 + 2.0*np.arange(size) prob.set_val('indep.x_serial', x_serial_init) prob.run_model() # Values on each rank. for var in ['indep.x_dist', 'indep.x_serial', 'D1.out_dist']: print(var, prob.get_val(var)) # Full gathered values. for var in ['indep.x_dist', 'indep.x_serial', 'D1.out_dist']: print(var, prob.get_val(var, get_remote=True)) print('') %%px assert_near_equal(prob.get_val(var, get_remote=True), np.array([24.53604616, 29.53604616, 36.53604616, 45.53604616, 56.53604616, 69.53604616, 84.53604616]), 1e-6) ``` ## Example: Distributed I/O and a Serial Ouput You can also create a component with a serial output and distributed outputs and inputs. This situation tends to be more tricky and usually requires you to performe some MPI operations in your component's `run` method. If the serial output is only a function of the serial inputs, then you can handle that variable just like you do on any other component. However, this example extends the previous component to include a serial output that is a function of both the serial and distributed inputs. In this case, it's a function of the sum of the square root of each element in the full distributed vector. 
Since the data is not all on any local processor, we use an MPI operation, in this case `Allreduce`, to make a summation across the distributed vector, and gather the answer back to each processor. The MPI operation and your implementation will vary, but consider this to be a general example. ``` %%px import numpy as np import openmdao.api as om from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI class MixedDistrib2(om.ExplicitComponent): def setup(self): # Distributed Input self.add_input('in_dist', shape_by_conn=True, distributed=True) # Serial Input self.add_input('in_serial', shape_by_conn=True) # Distributed Output self.add_output('out_dist', copy_shape='in_dist', distributed=True) # Serial Output self.add_output('out_serial', copy_shape='in_serial') def compute(self, inputs, outputs): x = inputs['in_dist'] y = inputs['in_serial'] # "Computationally Intensive" operation that we wish to parallelize. f_x = x**2 - 2.0*x + 4.0 # These operations are repeated on all procs. f_y = y ** 0.5 g_y = y**2 + 3.0*y - 5.0 # Compute square root of our portion of the distributed input. g_x = x ** 0.5 # Distributed output outputs['out_dist'] = f_x + np.sum(f_y) # Serial output if MPI and comm.size > 1: # We need to gather the summed values to compute the total sum over all procs. local_sum = np.array(np.sum(g_x)) total_sum = local_sum.copy() self.comm.Allreduce(local_sum, total_sum, op=MPI.SUM) outputs['out_serial'] = g_y + total_sum else: # Recommended to make sure your code can run in serial too, for testing. outputs['out_serial'] = g_y + np.sum(g_x) size = 7 if MPI: comm = MPI.COMM_WORLD rank = comm.rank sizes, offsets = evenly_distrib_idxs(comm.size, size) else: # When running in serial, the entire variable is on rank 0. rank = 0 sizes = {rank : size} offsets = {rank : 0} prob = om.Problem() model = prob.model # Create a distributed source for the distributed input. ivc = om.IndepVarComp() ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True) ivc.add_output('x_serial', np.zeros(size)) model.add_subsystem("indep", ivc) model.add_subsystem("D1", MixedDistrib2()) model.connect('indep.x_dist', 'D1.in_dist') model.connect('indep.x_serial', 'D1.in_serial') prob.setup() # Set initial values of distributed variable. x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]] prob.set_val('indep.x_dist', x_dist_init) # Set initial values of serial variable. x_serial_init = 1.0 + 2.0*np.arange(size) prob.set_val('indep.x_serial', x_serial_init) prob.run_model() # Values on each rank. for var in ['indep.x_dist', 'indep.x_serial', 'D1.out_dist', 'D1.out_serial']: print(var, prob.get_val(var)) # Full gathered values. for var in ['indep.x_dist', 'indep.x_serial', 'D1.out_dist', 'D1.out_serial']: print(var, prob.get_val(var, get_remote=True)) print('') %%px assert_near_equal(prob.get_val(var, get_remote=True), np.array([15.89178696, 29.89178696, 51.89178696, 81.89178696, 119.89178696, 165.89178696, 219.89178696]), 1e-6) ``` ```{note} In this example, we introduce a new component called an [IndepVarComp](indepvarcomp.ipynb). If you used OpenMDAO prior to version 3.2, then you are familiar with this component. It is used to define an independent variable. You usually do not have to define these because OpenMDAO defines and uses them automatically for all unconnected inputs in your model. This automatically-created `IndepVarComp` is called an Auto-IVC. 
However, when we define a distributed input, we often use the “src_indices” attribute to determine the allocation of that input to the processors that the component sees. For some sets of these indices, it isn’t possible to easily determine the full size of the corresponding independent variable, and the *IndepVarComp* cannot be created automatically. So, for unconnected inputs on a distributed component, you must manually create one, as we did in this example. ``` # Derivatives with Distributed Variables In the following examples, we show how to add analytic derivatives to the distributed examples given above. In most cases it is straighforward, but when you have a serial output and a distributed input, the [matrix-free](matrix-free-api) format is required. ## Derivatives: Distributed I/O and a Serial Input In this example, we have a distributed input, a distributed output, and a serial input. The derivative of 'out_dist' with respect to 'in_dict' has a diagonal Jacobian, so we use sparse declaration and each processor gives `declare_partials` the local number of rows and columns. The derivatives are verified against complex step using `check_totals` since our component is complex-safe. ``` %%px import numpy as np import openmdao.api as om from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI class MixedDistrib1(om.ExplicitComponent): def setup(self): # Distributed Input self.add_input('in_dist', shape_by_conn=True, distributed=True) # Serial Input self.add_input('in_serial', shape_by_conn=True) # Distributed Output self.add_output('out_dist', copy_shape='in_dist', distributed=True) def setup_partials(self): meta = self.get_io_metadata(metadata_keys=['shape']) local_size = meta['in_dist']['shape'][0] row_col_d = np.arange(local_size) self.declare_partials('out_dist', 'in_dist', rows=row_col_d, cols=row_col_d) self.declare_partials('out_dist', 'in_serial') def compute(self, inputs, outputs): x = inputs['in_dist'] y = inputs['in_serial'] # "Computationally Intensive" operation that we wish to parallelize. f_x = x**2 - 2.0*x + 4.0 # This operation is repeated on all procs. f_y = y ** 0.5 outputs['out_dist'] = f_x + np.sum(f_y) def compute_partials(self, inputs, partials): x = inputs['in_dist'] y = inputs['in_serial'] size = len(y) local_size = len(x) partials['out_dist', 'in_dist'] = 2.0 * x - 2.0 df_dy = 0.5 / y ** 0.5 partials['out_dist', 'in_serial'] = np.tile(df_dy, local_size).reshape((local_size, size)) size = 7 if MPI: comm = MPI.COMM_WORLD rank = comm.rank sizes, offsets = evenly_distrib_idxs(comm.size, size) else: # When running in serial, the entire variable is on rank 0. rank = 0 sizes = {rank : size} offsets = {rank : 0} prob = om.Problem() model = prob.model # Create a distributed source for the distributed input. ivc = om.IndepVarComp() ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True) ivc.add_output('x_serial', np.zeros(size)) model.add_subsystem("indep", ivc) model.add_subsystem("D1", MixedDistrib1()) model.connect('indep.x_dist', 'D1.in_dist') model.connect('indep.x_serial', 'D1.in_serial') model.add_design_var('indep.x_serial') model.add_design_var('indep.x_dist') model.add_objective('D1.out_dist') prob.setup(force_alloc_complex=True) # Set initial values of distributed variable. x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]] prob.set_val('indep.x_dist', x_dist_init) # Set initial values of serial variable. 
x_serial_init = 1.0 + 2.0*np.arange(size) prob.set_val('indep.x_serial', x_serial_init) prob.run_model() if rank > 0: prob.check_totals(method='cs', out_stream=None) else: prob.check_totals(method='cs') %%px totals = prob.check_totals(method='cs', out_stream=None) for key, val in totals.items(): assert_near_equal(val['rel error'][0], 0.0, 1e-6) ``` ## Derivatives: Distributed I/O and a Serial Output If you have a component with distributed inputs and a serial output, then the standard `compute_partials` API will not work for specifying the derivatives. You will need to use the matrix-free API with `compute_jacvec_product`, which is described in the feature document for [ExplicitComponent](explicit_component.ipynb) Computing the matrix-vector product for the derivative of the serial output with respect to a distributed input will require you to use MPI operations to gather the required parts of the Jacobian to all processors. The following example shows how to implement derivatives on the earlier `MixedDistrib2` component. ``` %%px import numpy as np import openmdao.api as om from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI class MixedDistrib2(om.ExplicitComponent): def setup(self): # Distributed Input self.add_input('in_dist', shape_by_conn=True, distributed=True) # Serial Input self.add_input('in_serial', shape_by_conn=True) # Distributed Output self.add_output('out_dist', copy_shape='in_dist', distributed=True) # Serial Output self.add_output('out_serial', copy_shape='in_serial') def compute(self, inputs, outputs): x = inputs['in_dist'] y = inputs['in_serial'] # "Computationally Intensive" operation that we wish to parallelize. f_x = x**2 - 2.0*x + 4.0 # These operations are repeated on all procs. f_y = y ** 0.5 g_y = y**2 + 3.0*y - 5.0 # Compute square root of our portion of the distributed input. g_x = x ** 0.5 # Distributed output outputs['out_dist'] = f_x + np.sum(f_y) # Serial output if MPI and comm.size > 1: # We need to gather the summed values to compute the total sum over all procs. local_sum = np.array(np.sum(g_x)) total_sum = local_sum.copy() self.comm.Allreduce(local_sum, total_sum, op=MPI.SUM) outputs['out_serial'] = g_y + total_sum else: # Recommended to make sure your code can run in serial too, for testing. outputs['out_serial'] = g_y + np.sum(g_x) def compute_jacvec_product(self, inputs, d_inputs, d_outputs, mode): x = inputs['in_dist'] y = inputs['in_serial'] df_dx = 2.0 * x - 2.0 df_dy = 0.5 / y ** 0.5 dg_dx = 0.5 / x ** 0.5 dg_dy = 2.0 * y + 3.0 local_size = len(x) size = len(y) if mode == 'fwd': if 'out_dist' in d_outputs: if 'in_dist' in d_inputs: d_outputs['out_dist'] += df_dx * d_inputs['in_dist'] if 'in_serial' in d_inputs: d_outputs['out_dist'] += np.tile(df_dy, local_size).reshape((local_size, size)).dot(d_inputs['in_serial']) if 'out_serial' in d_outputs: if 'in_dist' in d_inputs: if MPI and comm.size > 1: deriv = np.tile(dg_dx, size).reshape((size, local_size)).dot(d_inputs['in_dist']) deriv_sum = np.zeros(deriv.size) self.comm.Allreduce(deriv, deriv_sum, op=MPI.SUM) d_outputs['out_serial'] += deriv_sum else: # Recommended to make sure your code can run in serial too, for testing. 
d_outputs['out_serial'] += np.tile(dg_dx, local_size).reshape((local_size, size)).dot(d_inputs['in_dist']) if 'in_serial' in d_inputs: d_outputs['out_serial'] += dg_dy * d_inputs['in_serial'] else: if 'out_dist' in d_outputs: if 'in_dist' in d_inputs: d_inputs['in_dist'] += df_dx * d_outputs['out_dist'] if 'in_serial' in d_inputs: d_inputs['in_serial'] += np.tile(df_dy, local_size).reshape((local_size, size)).dot(d_outputs['out_dist']) if 'out_serial' in d_outputs: if 'out_serial' in d_outputs: if 'in_dist' in d_inputs: if MPI and comm.size > 1: deriv = np.tile(dg_dx, size).reshape((size, local_size)).dot(d_outputs['out_serial']) deriv_sum = np.zeros(deriv.size) self.comm.Allreduce(deriv, deriv_sum, op=MPI.SUM) d_inputs['in_dist'] += deriv_sum else: # Recommended to make sure your code can run in serial too, for testing. d_inputs['in_dist'] += np.tile(dg_dx, local_size).reshape((local_size, size)).dot(d_outputs['out_serial']) if 'in_serial' in d_inputs: d_inputs['in_serial'] += dg_dy * d_outputs['out_serial'] size = 7 if MPI: comm = MPI.COMM_WORLD rank = comm.rank sizes, offsets = evenly_distrib_idxs(comm.size, size) else: # When running in serial, the entire variable is on rank 0. rank = 0 sizes = {rank : size} offsets = {rank : 0} prob = om.Problem() model = prob.model # Create a distributed source for the distributed input. ivc = om.IndepVarComp() ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True) ivc.add_output('x_serial', np.zeros(size)) model.add_subsystem("indep", ivc) model.add_subsystem("D1", MixedDistrib2()) model.connect('indep.x_dist', 'D1.in_dist') model.connect('indep.x_serial', 'D1.in_serial') model.add_design_var('indep.x_serial') model.add_design_var('indep.x_dist') model.add_constraint('D1.out_dist', lower=0.0) model.add_constraint('D1.out_serial', lower=0.0) prob.setup(force_alloc_complex=True) # Set initial values of distributed variable. x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]] prob.set_val('indep.x_dist', x_dist_init) # Set initial values of serial variable. x_serial_init = 1.0 + 2.0*np.arange(size) prob.set_val('indep.x_serial', x_serial_init) prob.run_model() if rank > 0: prob.check_totals(method='cs', out_stream=None) else: prob.check_totals(method='cs') %%px totals = prob.check_totals(method='cs', out_stream=None) for key, val in totals.items(): assert_near_equal(val['rel error'][0], 0.0, 1e-6) ```
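In addition to `check_totals`, the component-level partials can be spot-checked in the same run. This is an optional, hedged sketch added here for completeness; it reuses `prob` and `rank` from the cell above and assumes the same MPI setup (complex step is available because `setup` was called with `force_alloc_complex=True`).

```
%%px
# Optional: spot-check component partials with complex step.
if rank > 0:
    prob.check_partials(method='cs', compact_print=True, out_stream=None)
else:
    prob.check_partials(method='cs', compact_print=True)
```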
---
Lambda School Data Science *Unit 2, Sprint 1, Module 4* --- # Logistic Regression - do train/validate/test split - begin with baselines for classification - express and explain the intuition and interpretation of Logistic Regression - use sklearn.linear_model.LogisticRegression to fit and interpret Logistic Regression models Logistic regression is the baseline for classification models, as well as a handy way to predict probabilities (since those too live in the unit interval). While relatively simple, it is also the foundation for more sophisticated classification techniques such as neural networks (many of which can effectively be thought of as networks of logistic models). ### Setup Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab. Libraries: - category_encoders - numpy - pandas - scikit-learn ``` %%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/' !pip install category_encoders==2.* # If you're working locally: else: DATA_PATH = '../data/' ``` # Do train/validate/test split ## Overview ### Predict Titanic survival 🚢 Kaggle is a platform for machine learning competitions. [Kaggle has used the Titanic dataset](https://www.kaggle.com/c/titanic/data) for their most popular "getting started" competition. Kaggle splits the data into train and test sets for participants. Let's load both: ``` import pandas as pd train = pd.read_csv(DATA_PATH+'titanic/train.csv') test = pd.read_csv(DATA_PATH+'titanic/test.csv') ``` Notice that the train set has one more column than the test set: ``` train.shape, test.shape ``` Which column is in train but not test? The target! ``` set(train.columns) - set(test.columns) ``` ### Why doesn't Kaggle give you the target for the test set? #### Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/) > One great thing about Kaggle competitions is that they force you to think about validation sets more rigorously (in order to do well). For those who are new to Kaggle, it is a platform that hosts machine learning competitions. Kaggle typically breaks the data into two sets you can download: > > 1. a **training set**, which includes the _independent variables,_ as well as the _dependent variable_ (what you are trying to predict). > > 2. a **test set**, which just has the _independent variables._ You will make predictions for the test set, which you can submit to Kaggle and get back a score of how well you did. > > This is the basic idea needed to get started with machine learning, but to do well, there is a bit more complexity to understand. **You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle.** > > The most important reason for this is that Kaggle has split the test data into two sets: for the public and private leaderboards. The score you see on the public leaderboard is just for a subset of your predictions (and you don’t know which subset!). How your predictions fare on the private leaderboard won’t be revealed until the end of the competition. 
The reason this is important is that you could end up overfitting to the public leaderboard and you wouldn’t realize it until the very end when you did poorly on the private leaderboard. Using a good validation set can prevent this. You can check if your validation set is any good by seeing if your model has similar scores on it to compared with on the Kaggle test set. ... > > Understanding these distinctions is not just useful for Kaggle. In any predictive machine learning project, you want your model to be able to perform well on new data. ### 2-way train/test split is not enough #### Hastie, Tibshirani, and Friedman, [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/), Chapter 7: Model Assessment and Selection > If we are in a data-rich situation, the best approach is to randomly divide the dataset into three parts: a training set, a validation set, and a test set. The training set is used to fit the models; the validation set is used to estimate prediction error for model selection; the test set is used for assessment of the generalization error of the final chosen model. Ideally, the test set should be kept in a "vault," and be brought out only at the end of the data analysis. Suppose instead that we use the test-set repeatedly, choosing the model with the smallest test-set error. Then the test set error of the final chosen model will underestimate the true test error, sometimes substantially. #### Andreas Mueller and Sarah Guido, [Introduction to Machine Learning with Python](https://books.google.com/books?id=1-4lDQAAQBAJ&pg=PA270) > The distinction between the training set, validation set, and test set is fundamentally important to applying machine learning methods in practice. Any choices made based on the test set accuracy "leak" information from the test set into the model. Therefore, it is important to keep a separate test set, which is only used for the final evaluation. It is good practice to do all exploratory analysis and model selection using the combination of a training and a validation set, and reserve the test set for a final evaluation - this is even true for exploratory visualization. Strictly speaking, evaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is. #### Hadley Wickham, [R for Data Science](https://r4ds.had.co.nz/model-intro.html#hypothesis-generation-vs.hypothesis-confirmation) > There is a pair of ideas that you must understand in order to do inference correctly: > > 1. Each observation can either be used for exploration or confirmation, not both. > > 2. You can use an observation as many times as you like for exploration, but you can only use it once for confirmation. As soon as you use an observation twice, you’ve switched from confirmation to exploration. > > This is necessary because to confirm a hypothesis you must use data independent of the data that you used to generate the hypothesis. Otherwise you will be over optimistic. There is absolutely nothing wrong with exploration, but you should never sell an exploratory analysis as a confirmatory analysis because it is fundamentally misleading. > > If you are serious about doing an confirmatory analysis, one approach is to split your data into three pieces before you begin the analysis. 
#### Sebastian Raschka, [Model Evaluation](https://sebastianraschka.com/blog/2018/model-evaluation-selection-part4.html) > Since “a picture is worth a thousand words,” I want to conclude with a figure (shown below) that summarizes my personal recommendations ... <img src="https://sebastianraschka.com/images/blog/2018/model-evaluation-selection-part4/model-eval-conclusions.jpg" width="600"> Usually, we want to do **"Model selection (hyperparameter optimization) _and_ performance estimation."** (The green box in the diagram.) Therefore, we usually do **"3-way holdout method (train/validation/test split)"** or **"cross-validation with independent test set."** ### What's the difference between Training, Validation, and Testing sets? #### Brandon Rohrer, [Training, Validation, and Testing Data Sets](https://end-to-end-machine-learning.teachable.com/blog/146320/training-validation-testing-data-sets) > The validation set is for adjusting a model's hyperparameters. The testing data set is the ultimate judge of model performance. > > Testing data is what you hold out until very last. You only run your model on it once. You don’t make any changes or adjustments to your model after that. ... ## Follow Along > You will want to create your own training and validation sets (by splitting the Kaggle “training” data). Do this, using the [sklearn.model_selection.train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function: ``` from sklearn.model_selection import train_test_split train.shape, test.shape train, val = train_test_split(train, random_state=28) train.shape, val.shape, test.shape ``` ## Challenge For your assignment, you'll do a 3-way train/validate/test split. Then next sprint, you'll begin to participate in a private Kaggle challenge, just for your cohort! You will be provided with data split into 2 sets: training and test. You will create your own training and validation sets, by splitting the Kaggle "training" data, so you'll end up with 3 sets total. # Begin with baselines for classification ## Overview We'll begin with the **majority class baseline.** [Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488) > A baseline for classification can be the most common class in the training dataset. [*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data > For classification tasks, one good baseline is the _majority classifier,_ a naive classifier that always chooses the majority class of the training dataset (see Note: Base rate in Holdout Data and Fitting Graphs). This may seem like advice so obvious it can be passed over quickly, but it is worth spending an extra moment here. There are many cases where smart, analytical people have been tripped up in skipping over this basic comparison. For example, an analyst may see a classification accuracy of 94% from her classifier and conclude that it is doing fairly well—when in fact only 6% of the instances are positive. So, the simple majority prediction classifier also would have an accuracy of 94%. ## Follow Along Determine majority class ``` target = 'Survived' y_train = train[target] y_train.value_counts() ``` What if we guessed the majority class for every prediction? 
``` y_pred = y_train.apply(lambda x : 0) ``` #### Use a classification metric: accuracy [Classification metrics are different from regression metrics!](https://scikit-learn.org/stable/modules/model_evaluation.html) - Don't use _regression_ metrics to evaluate _classification_ tasks. - Don't use _classification_ metrics to evaluate _regression_ tasks. [Accuracy](https://scikit-learn.org/stable/modules/model_evaluation.html#accuracy-score) is a common metric for classification. Accuracy is the ["proportion of correct classifications"](https://en.wikipedia.org/wiki/Confusion_matrix): the number of correct predictions divided by the total number of predictions. What is the baseline accuracy if we guessed the majority class for every prediction? ``` from sklearn.metrics import accuracy_score accuracy_score(y_train, y_pred) train[target].value_counts(normalize=True) y_val = val[target] y_val y_pred = [0] * len(y_val) accuracy_score(y_val, y_pred) ``` ## Challenge In your assignment, your Sprint Challenge, and your upcoming Kaggle challenge, you'll begin with the majority class baseline. How quickly can you beat this baseline? # Express and explain the intuition and interpretation of Logistic Regression ## Overview To help us get an intuition for *Logistic* Regression, let's start by trying *Linear* Regression instead, and see what happens... ## Follow Along ### Linear Regression? ``` train.describe() # 1. Import estimator class from sklearn.linear_model import LinearRegression # 2. Instantiate this class linear_reg = LinearRegression() # 3. Arrange X feature matrices (already did y target vectors) features = ['Pclass', 'Age', 'Fare'] X_train = train[features] X_val = val[features] # Impute missing values from sklearn.impute import SimpleImputer imputer = SimpleImputer() X_train_imputed = imputer.fit_transform(X_train) X_val_imputed = imputer.transform(X_val) # 4. Fit the model linear_reg.fit(X_train_imputed, y_train) # 5. Apply the model to new data. # The predictions look like this ... linear_reg.predict(X_val_imputed) # Get coefficients pd.Series(linear_reg.coef_, features) test_case = [[1, 5, 500]] # 1st class, 5-year old, Rich linear_reg.predict(test_case) ``` ### Logistic Regression! ``` from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver='lbfgs') log_reg.fit(X_train_imputed, y_train) print('Validation Accuracy', log_reg.score(X_val_imputed, y_val)) # The predictions look like this log_reg.predict(X_val_imputed) log_reg.predict(test_case) log_reg.predict_proba(test_case) # What's the math? log_reg.coef_ log_reg.intercept_ # The logistic sigmoid "squishing" function, implemented to accept numpy arrays import numpy as np def sigmoid(x): return 1 / (1 + np.e**(-x)) sigmoid(log_reg.intercept_ + np.dot(log_reg.coef_, np.transpose(test_case))) log_reg.coef_ test_case sigmoid() ``` So, clearly a more appropriate model in this situation! For more on the math, [see this Wikipedia example](https://en.wikipedia.org/wiki/Logistic_regression#Probability_of_passing_an_exam_versus_hours_of_study). # Use sklearn.linear_model.LogisticRegression to fit and interpret Logistic Regression models ## Overview Now that we have more intuition and interpretation of Logistic Regression, let's use it within a realistic, complete scikit-learn workflow, with more features and transformations. ## Follow Along Select these features: `['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']` (Why shouldn't we include the `Name` or `Ticket` features? What would happen here?) 
Fit this sequence of transformers & estimator: - [category_encoders.one_hot.OneHotEncoder](https://contrib.scikit-learn.org/categorical-encoding/onehot.html) - [sklearn.impute.SimpleImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) - [sklearn.preprocessing.StandardScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) - [sklearn.linear_model.LogisticRegressionCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html) Get validation accuracy. ``` features = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked'] target = 'Survived' X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] X_train.shape, y_train.shape, X_val.shape, y_val.shape import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LogisticRegressionCV encoder = ce.OneHotEncoder(use_cat_names=True) X_train_encoded = encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) imputer = SimpleImputer(strategy='mean') X_train_imputed = imputer.fit_transform(X_train_encoded) X_val_imputed = imputer.transform(X_val_encoded) scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train_imputed) X_val_scaled = scaler.transform(X_val_imputed) model = LogisticRegressionCV() model.fit(X_train_scaled, y_train) y_pred = model.predict(X_val_scaled) accuracy_score(y_val, y_pred) print('Validation Accuracy:', model.score(X_val_scaled, y_val)) ``` Plot coefficients: ``` %matplotlib inline coefficients = pd.Series(model.coef_[0], X_train_encoded.columns) coefficients.sort_values().plot.barh(); ``` Generate [Kaggle](https://www.kaggle.com/c/titanic) submission: ## Challenge You'll use Logistic Regression for your assignment, your Sprint Challenge, and optionally for your first model in our Kaggle challenge! # Review For your assignment, you'll use a [**dataset of 400+ burrito reviews**](https://srcole.github.io/100burritos/). How accurately can you predict whether a burrito is rated 'Great'? > We have developed a 10-dimensional system for rating the burritos in San Diego. ... Generate models for what makes a burrito great and investigate correlations in its dimensions. - Do train/validate/test split. Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later. - Begin with baselines for classification. - Use scikit-learn for logistic regression. - Get your model's validation accuracy. (Multiple times if you try multiple iterations.) - Get your model's test accuracy. (One time, at the end.) - Commit your notebook to your fork of the GitHub repo. - Watch Aaron's [video #1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video #2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression. # Sources - Brandon Rohrer, [Training, Validation, and Testing Data Sets](https://end-to-end-machine-learning.teachable.com/blog/146320/training-validation-testing-data-sets) - Hadley Wickham, [R for Data Science](https://r4ds.had.co.nz/model-intro.html#hypothesis-generation-vs.hypothesis-confirmation), Hypothesis generation vs. 
hypothesis confirmation - Hastie, Tibshirani, and Friedman, [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/), Chapter 7: Model Assessment and Selection - Mueller and Guido, [Introduction to Machine Learning with Python](https://books.google.com/books?id=1-4lDQAAQBAJ&pg=PA270), Chapter 5.2.2: The Danger of Overfitting the Parameters and the Validation Set - Provost and Fawcett, [Data Science for Business](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data - Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/) - Sebastian Raschka, [Model Evaluation](https://sebastianraschka.com/blog/2018/model-evaluation-selection-part4.html) - Will Koehrsen, ["A baseline for classification can be the most common class in the training dataset."](https://twitter.com/koehrsen_will/status/1088863527778111488)
---
<a href="https://colab.research.google.com/github/harvardnlp/pytorch-struct/blob/master/notebooks/Unsupervised_CFG.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` !pip install -qqq torchtext -qqq pytorch-transformers dgl !pip install -qqqU git+https://github.com/harvardnlp/pytorch-struct import torchtext import torch from torch_struct import SentCFG from torch_struct.networks import NeuralCFG import torch_struct.data # Download and the load default data. WORD = torchtext.data.Field(include_lengths=True) UD_TAG = torchtext.data.Field(init_token="<bos>", eos_token="<eos>", include_lengths=True) # Download and the load default data. train, val, test = torchtext.datasets.UDPOS.splits( fields=(('word', WORD), ('udtag', UD_TAG), (None, None)), filter_pred=lambda ex: 5 < len(ex.word) < 30 ) WORD.build_vocab(train.word, min_freq=3) UD_TAG.build_vocab(train.udtag) train_iter = torch_struct.data.TokenBucket(train, batch_size=200, device="cuda:0") H = 256 T = 30 NT = 30 model = NeuralCFG(len(WORD.vocab), T, NT, H) model.cuda() opt = torch.optim.Adam(model.parameters(), lr=0.001, betas=[0.75, 0.999]) def train(): #model.train() losses = [] for epoch in range(10): for i, ex in enumerate(train_iter): opt.zero_grad() words, lengths = ex.word N, batch = words.shape words = words.long() params = model(words.cuda().transpose(0, 1)) dist = SentCFG(params, lengths=lengths) loss = dist.partition.mean() (-loss).backward() losses.append(loss.detach()) torch.nn.utils.clip_grad_norm_(model.parameters(), 3.0) opt.step() if i % 100 == 1: print(-torch.tensor(losses).mean(), words.shape) losses = [] train() for i, ex in enumerate(train_iter): opt.zero_grad() words, lengths = ex.word N, batch = words.shape words = words.long() params = terms(words.transpose(0, 1)), rules(batch), roots(batch) tree = CKY(MaxSemiring).marginals(params, lengths=lengths, _autograd=True) print(tree) break def split(spans): batch, N = spans.shape[:2] splits = [] for b in range(batch): cover = spans[b].nonzero() left = {i: [] for i in range(N)} right = {i: [] for i in range(N)} batch_split = {} for i in range(cover.shape[0]): i, j, A = cover[i].tolist() left[i].append((A, j, j - i + 1)) right[j].append((A, i, j - i + 1)) for i in range(cover.shape[0]): i, j, A = cover[i].tolist() B = None for B_p, k, a_span in left[i]: for C_p, k_2, b_span in right[j]: if k_2 == k + 1 and a_span + b_span == j - i + 1: B, C = B_p, C_p k_final = k break if j > i: batch_split[(i, j)] =k splits.append(batch_split) return splits splits = split(spans) ```
---
# Sklearn

## sklearn.linear_model

```
from matplotlib.colors import ListedColormap
from sklearn import cross_validation, datasets, linear_model, metrics

import numpy as np
%pylab inline
```

### Linear regression

#### Data generation

```
data, target, coef = datasets.make_regression(n_features = 2, n_informative = 1, n_targets = 1,
                                              noise = 5., coef = True, random_state = 2)
pylab.scatter(list(map(lambda x:x[0], data)), target, color='r')
pylab.scatter(list(map(lambda x:x[1], data)), target, color='b')
train_data, test_data, train_labels, test_labels = cross_validation.train_test_split(data, target, test_size = 0.3)
```

#### LinearRegression

```
linear_regressor = linear_model.LinearRegression()
linear_regressor.fit(train_data, train_labels)
predictions = linear_regressor.predict(test_data)
print(test_labels)
print(predictions)
metrics.mean_absolute_error(test_labels, predictions)
linear_scoring = cross_validation.cross_val_score(linear_regressor, data, target, scoring = 'mean_absolute_error', cv=10)
print('mean: {}, std: {}'.format(linear_scoring.mean(), linear_scoring.std()))
scorer = metrics.make_scorer(metrics.mean_absolute_error, greater_is_better=True)
linear_scoring = cross_validation.cross_val_score(linear_regressor, data, target, scoring=scorer, cv = 10)
print('mean: {}, std: {}'.format(linear_scoring.mean(), linear_scoring.std()))
coef
linear_regressor.coef_
# not mentioned in the lecture: the equation of the fitted model also includes an intercept (free term)
linear_regressor.intercept_
print("y = {:.2f}*x1 + {:.2f}*x2".format(coef[0], coef[1]))
print("y = {:.2f}*x1 + {:.2f}*x2 + {:.2f}".format(linear_regressor.coef_[0],
                                                  linear_regressor.coef_[1],
                                                  linear_regressor.intercept_))
```

#### Lasso

```
lasso_regressor = linear_model.Lasso(random_state = 3)
lasso_regressor.fit(train_data, train_labels)
lasso_predictions = lasso_regressor.predict(test_data)
lasso_scoring = cross_validation.cross_val_score(lasso_regressor, data, target, scoring = scorer, cv = 10)
print('mean: {}, std: {}'.format(lasso_scoring.mean(), lasso_scoring.std()))
print(lasso_regressor.coef_)
print("y = {:.2f}*x1 + {:.2f}*x2".format(coef[0], coef[1]))
print("y = {:.2f}*x1 + {:.2f}*x2".format(lasso_regressor.coef_[0], lasso_regressor.coef_[1]))
```
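A note added for context: the `sklearn.cross_validation` module used above was removed in scikit-learn 0.20, and the `'mean_absolute_error'` scoring string was replaced by a negated scorer. Below is a minimal, hedged sketch of the same splitting and cross-validation calls with the current `sklearn.model_selection` API; it reuses the `data`, `target`, and `linear_regressor` objects defined above.

```
# Equivalent calls with modern scikit-learn (illustrative sketch).
from sklearn.model_selection import train_test_split, cross_val_score

train_data, test_data, train_labels, test_labels = train_test_split(data, target, test_size=0.3)

# The built-in MAE scorer is now 'neg_mean_absolute_error' and returns negated values.
linear_scoring = cross_val_score(linear_regressor, data, target,
                                 scoring='neg_mean_absolute_error', cv=10)
print('mean: {}, std: {}'.format(-linear_scoring.mean(), linear_scoring.std()))
```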
---
```
import matplotlib.pyplot as plt
import numpy as np
import functools
import time
```

# Question 1

Solve the linear system $Ax = b$ where

$ A = \begin{bmatrix} 9. & -4. & 1. & 0. & 0. & 0. & 0. \\ -4. & 6. & -4. & 1. & 0. & 0. & 0. \\ 1. & -4. & 6. & -4. & 1. & 0. & 0. \\ 0. & 1. & -4. & 6. & -4. & 1. & 0. \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0. & 0. & 1. & -4. & 6. & -4. & 1. \\ 0. & 0. & 0. & 1. & -4. & 5. & -2. \\ 0. & 0. & 0. & 0. & 1. & -2. & 1. \end{bmatrix} \in R^{\; 200 \times 200} $

and

$ b = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ \vdots \\ 1 \\ 1 \\ 1 \end{bmatrix} \in R^{\; 200} $

using Gaussian Elimination (with partial pivoting) and, if possible, the iterative Gauss-Jacobi and Gauss-Seidel methods. Compare the performance of the methods for solving the linear system in terms of execution time.

```
def timer(f):
    @functools.wraps(f)
    def wrapper_timer(*args, **kwargs):
        tempo_inicio = time.perf_counter()
        retorno = f(*args, **kwargs)
        tempo_fim = time.perf_counter()
        tempo_exec = tempo_fim - tempo_inicio
        print(f"Tempo de Execução: {tempo_exec:0.4f} segundos")
        return retorno
    return wrapper_timer

def FatoracaoLUPivot(A):
    U = np.copy(A)
    n = A.shape[0]
    L = np.zeros_like(A)
    P = np.eye(n)
    m = 0
    for j in range(n):
        # Partial pivoting
        k = np.argmax(np.abs(U[j:,j])) + j
        U[j], U[k] = np.copy(U[k]), np.copy(U[j])
        P[j], P[k] = np.copy(P[k]), np.copy(P[j])
        L[j], L[k] = np.copy(L[k]), np.copy(L[j])
        m += 1
        for i in range(j + 1, n):
            L[i][j] = U[i][j]/U[j][j]
            for k in range(j + 1, n):
                U[i][k] -= L[i][j] * U[j][k]
            U[i][j] = 0
            m += 1
    L += np.eye(n)
    return L, U, P

def SubstituicaoRegressiva(U, c): # U upper triangular
    x = np.copy(c)
    n = U.shape[0]
    for i in range(n-1, -1, -1):
        for j in range(i + 1, n):
            x[i] -= (U[i,j] * x[j])
        x[i] /= U[i,i]
    return x

def SubstituicaoDireta(U, c): # U lower triangular
    x = np.copy(c)
    n = U.shape[0]
    for i in range(n):
        for j in range(i):
            x[i] -= (U[i,j] * x[j])
        x[i] /= U[i,i]
    return x

@timer
def EliminacaoGaussLUPivot(A, b):
    L, U, P = FatoracaoLUPivot(A)
    # Solve Ly = b and Ux = y
    y = SubstituicaoDireta(L, b)
    x = SubstituicaoRegressiva(U, y)
    return P, x

def buildA():
    A = np.zeros((200, 200))
    A[0,0:3] = np.array([9, -4, 1])
    A[1,0:4] = np.array([-4, 6, -4, 1])
    A[198,196:200] = np.array([1, -4, 5, -2])
    A[199,197:200] = np.array([1, -2, 1])
    for i in range(2, 198):
        A[i, i-2:i+3] = np.array([1, -4, 6, -4, 1])
    return A

A = buildA()
A
b = np.ones((200,1))
b
```

## Solution by Gaussian Elimination with Partial Pivoting

```
P, x = EliminacaoGaussLUPivot(A, b)
x
print(np.max(np.abs(P @ A @ x - b)))
```

## Gauss-Jacobi Method

```
@timer
def GaussJacobi(A, b):
    n = A.shape[0]
    x_history = list()
    x_old = np.zeros(n)
    x_new = np.zeros(n)

    k_limite = 200
    k = 0
    tau = 1E-4
    Dr = 1

    while (k < k_limite and Dr > tau):
        for i in range(n):
            soma = 0
            for j in range(n):
                if (i == j):
                    continue
                soma += A[i,j]*x_old[j]
            x_new[i] = (b[i] - soma) / A[i,i]
        k += 1
        Dr = np.max(np.abs(x_new - x_old)) / np.max(np.abs(x_new))
        x_history.append(x_old)
        x_old = np.copy(x_new)

    return x_history, x_new

history, x = GaussJacobi(A, b)
erros = []
for i in range(len(history)):
    erro = np.max(np.abs(A @ history[i] - b))
    if (erro != np.inf):
        erros.append(erro)
plt.semilogy(erros)
```

## Gauss-Seidel Method

```
@timer
def GaussSeidel(A, b, k_limite=200):
    n = A.shape[0]
    x_history = list()
    x_old = np.zeros(n)
    x_new = np.zeros(n)

    k = 0
    tau = 1E-4
    Dr = 1

    while (k < k_limite and Dr > tau):
        for i in range(n):
            soma = 0
            for j in range(n):
                if (i == j):
                    continue
                soma += A[i,j]*x_new[j]
            x_new[i] = (b[i] - soma) / A[i,i]
        Dr = np.max(np.abs(x_new - x_old)) / np.max(np.abs(x_new))
        x_history.append(x_old)
        x_old = np.copy(x_new)
        k += 1

    if (Dr > tau):
        print("NÃO CONVERGIU!")

    return x_history, x_new

history, x = GaussSeidel(A, b)
print(np.max(np.abs(A @ x - b)))
erros = []
for i in range(len(history)):
    erro = np.max(np.abs(A @ history[i] - b))
    if (erro != np.inf):
        erros.append(erro)
plt.semilogy(erros)
```

## Analysis of Results

Besides reusing the functions written for Gaussian Elimination with Partial Pivoting, I wrote routines implementing the Gauss-Jacobi and Gauss-Seidel methods. I also created a `timer` decorator to wrap the routine of each linear-system solver and to measure and print its execution time. Throughout the analysis, I took the maximum absolute error to be the largest component-wise difference, in absolute value, between $Ax$ and $b$.

Gaussian Elimination took $4.0178$ seconds and produced the correct answer, with a maximum absolute error of approximately $2.4 \times 10^{-7}$. The Gauss-Jacobi method ran in $4.8295$ seconds and diverged, with the error growing at every iteration. The Gauss-Seidel method ran in $4.7215$ seconds and showed a peculiar behavior: the result did not diverge, but it converges slowly, so that $200$ iterations were not enough to reach the correct answer, and the maximum absolute error at the last iteration remained close to $1.5$.

Therefore, the best-performing method in the analyzed case was Gaussian Elimination with Partial Pivoting, since it obtained the best answer and ran in the least time.

# Question 2

Determine values of $\beta$ that guarantee the convergence of the Gauss-Jacobi and Gauss-Seidel methods when applied to solve the linear system $Ax = b$ where $A = \begin{bmatrix} -10 & 2 \\ \beta & 5 \end{bmatrix}$ and $b = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$

## Gauss-Jacobi

#### $i=1$: $\sum_{j \ne i} |a_{ij}| = |a_{12}| = 2 < 10 = |a_{11}|$

#### $i=2$: $\sum_{j \ne i} |a_{ij}| = |a_{21}| = |\beta| < 5 = |a_{22}|$

Hence the matrix is strictly diagonally dominant for $|\beta| < 5$, and this condition guarantees the convergence of the Gauss-Jacobi method.

## Gauss-Seidel

### Row Criterion

As seen in the convergence criterion for Gauss-Jacobi, the matrix is strictly diagonally dominant, and consequently the Gauss-Seidel method converges, for $|\beta| < 5$.

### Sassenfeld Criterion

#### $i=1$: $\beta_1 = \frac{1}{|a_{11}|} \left( \sum_{j = 2}^2 |a_{1j}|\right) = \frac{|a_{12}|}{|a_{11}|} = \frac{2}{10} = \frac{1}{5} < 1$

#### $i=2$: $\beta_2 = \frac{1}{|a_{22}|} \left( \sum_{j = 1}^1 |a_{2j}| \beta_j\right) = \frac{|a_{21}| \beta_1}{|a_{22}|} = \frac{|\beta| \times (1/5)}{5} = \frac{|\beta|}{25} < 1 \iff |\beta| < 25$

Since each of the two criteria is independently sufficient to establish convergence, the union of the two $\beta$ intervals applies. Therefore, values of $\beta$ in the interval $|\beta| < 25$ guarantee the convergence of Gauss-Seidel.
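To make the Question 2 conclusion concrete, here is a small numerical check added for illustration (it is not part of the original notebook): it evaluates the Sassenfeld constants for a few sample values of $\beta$ and reports whether the criterion $\max(\beta_1, \beta_2) < 1$ holds, which should flip between $|\beta| = 24$ and $|\beta| = 26$.

```
# Illustrative check of the Sassenfeld constants for the 2x2 system of Question 2.
import numpy as np

def sassenfeld_betas(beta):
    A = np.array([[-10.0, 2.0],
                  [beta, 5.0]])
    b1 = abs(A[0, 1]) / abs(A[0, 0])          # beta_1 = |a12| / |a11|
    b2 = (abs(A[1, 0]) * b1) / abs(A[1, 1])   # beta_2 = |a21| * beta_1 / |a22|
    return b1, b2

for beta in [4.0, 10.0, 24.0, 26.0]:
    b1, b2 = sassenfeld_betas(beta)
    print(f"beta = {beta:5.1f} -> beta_1 = {b1:.2f}, beta_2 = {b2:.2f}, "
          f"criterion satisfied: {max(b1, b2) < 1}")
```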
---
# Assignment 9: Implement Dynamic Programming In this exercise, we will begin to explore the concept of dynamic programming and how it related to various object containers with respect to computational complexity. ## Deliverables: 1) Choose and implement a Dynamic Programming algorithm in Python, make sure you are using a Dynamic Programming solution (not another one). 2) Use the algorithm to solve a range of scenarios. 3) Explain what is being done in the implementation. That is, write up a walk through of the algorithm and explain how it is a Dynamic Programming solution. ### Prepare an executive summary of your results, referring to the table and figures you have generated. Explain how your results relate to big O notation. Describe your results in language that management can understand. This summary should be included as text paragraphs in the Jupyter notebook. Explain how the algorithm works and why it is a useful to data engineers. # A. The Dynamic programming problem: Longest Increasing Sequence ### The Longest Increasing Subsequence (LIS) problem is to find the length of the longest subsequence of a given sequence such that all elements of the subsequence are sorted in increasing order. For example, the length of LIS for {10, 22, 9, 33, 21, 50, 41, 60, 80} is 6 and LIS is {10, 22, 33, 50, 60, 80}. # A. Setup: Library imports and Algorithm ``` import numpy as np import pandas as pd import seaborn as sns import time #import itertools import random import matplotlib.pyplot as plt #import networkx as nx #import pydot #from networkx.drawing.nx_pydot import graphviz_layout #from collections import deque # Dynamic Programming Approach of Finding LIS by reducing the problem to longest common Subsequence def lis(a): n=len(a) #get the length of the list b=sorted(list(set(a))) #removes duplicates, and sorts list m=len(b) #gets the length of the truncated and sorted list dp=[[-1 for i in range(m+1)] for j in range(n+1)] #instantiates a list of lists filled with -1 columns are indicies of the sorted array; rows the original array for i in range(n+1): # for every column in the table at each row: for j in range(m+1): if i==0 or j==0: #if at first element in either a row or column set the table row,index to zero dp[i][j]=0 elif a[i-1]==b[j-1]: #else if the sorted array value matches the original array: dp[i][j]=1+dp[i-1][j-1]#sets dp[i][j] to 1+prveious cell of the dyanmic table else: dp[i][j]=max(dp[i-1][j],dp[i][j-1]) #else record the max of the row or column for that cell in the cell return dp[-1][-1] # This will return the max running sequence. # Driver program to test above function arr1 = [10, 22, 9, 33, 21, 50, 41, 60] len_arr1 = len(arr1) print("Longest increaseing sequence has a length of:", lis(arr1)) # addtional comments included from the original code contributed by Dheeraj Khatri (https://www.geeksforgeeks.org/longest-increasing-subsequence-dp-3/) def Container(arr, fun): ### I'm glad I was able to reuse this from assignment 3 and 4. Useful function. objects = [] #instantiates an empty list to collect the returns times = [] #instantiates an empty list to collect times for each computation for t in arr: start= time.perf_counter() #collects the start time obj = fun(t) # applies the function to the arr object end = time.perf_counter() # collects end time duration = (end-start)* 1E3 #converts to milliseconds objects.append(obj)# adds the returns of the functions to the objects list times.append(duration) # adds the duration for computation to list return objects, times ``` # B. 
```
RANDOM_SEED = 300

np.random.seed(RANDOM_SEED)
arr100 = list(np.random.randint(low=1, high=5000, size=100))

np.random.seed(RANDOM_SEED)
arr200 = list(np.random.randint(low=1, high=5000, size=200))

np.random.seed(RANDOM_SEED)
arr400 = list(np.random.randint(low=1, high=5000, size=400))

np.random.seed(RANDOM_SEED)
arr600 = list(np.random.randint(low=1, high=5000, size=600))

np.random.seed(RANDOM_SEED)
arr800 = list(np.random.randint(low=1, high=5000, size=800))

print(len(arr100), len(arr200), len(arr400), len(arr600), len(arr800))

arr_list = [arr100, arr200, arr400, arr600, arr800]
metrics = Container(arr_list, lis)
```

### Table 1. Performance Summary

```
summary = {
    'ArraySize': [len(arr100), len(arr200), len(arr400), len(arr600), len(arr800)],
    'SequenceLength': [metrics[0][0], metrics[0][1], metrics[0][2], metrics[0][3], metrics[0][4]],
    'Time(ms)': [metrics[1][0], metrics[1][1], metrics[1][2], metrics[1][3], metrics[1][4]]
}
df = pd.DataFrame(summary)
df
```

### Figure 1. Performance

```
sns.scatterplot(data=df, x='Time(ms)', y='ArraySize')
```

# Discussion

Explain what is being done in the implementation. That is, write up a walk-through of the algorithm and explain how it is a Dynamic Programming solution.

The dynamic programming function above finds the length of the longest increasing subsequence of values in a list. The function makes a sorted copy of the list containing only unique values, and also creates a dynamic table (in the form of a list of lists) using a nested list comprehension. The table's columns correspond to the indices of the sorted array and its rows to the indices of the original array. To begin, the table is instantiated with values of -1. The cells in the first row and first column are then set to zero, and whenever a value of the original array matches a value of the sorted array, the corresponding cell of the dynamic table is set to one more than the previous diagonal cell, until all positions have been assessed. The function then returns the final cell of the table, which holds the maximum tally and therefore the length of the longest increasing subsequence. This is a dynamic programming solution because the answer is built up from the solutions to smaller subproblems.

Dynamic programming is an important concept for developers and engineers. Functions and programs that use dynamic programming help solve problems that would otherwise require exponential time in a much more efficient way. At face value, it appears that finding the longest increasing subsequence requires comparing every value in the array against all previous values. Dynamic programming provides a shortcut of sorts: we compare the given array with a sorted version of itself, and at each intersection of the sorted and unsorted arrays we can determine whether to add to our running tally.

Shown in Table 1 and Figure 1 is the time required for the algorithm to compute the longest increasing subsequence for various array sizes. Because the algorithm uses a nested loop over the original array (length n) and the de-duplicated sorted array (length m), the expectation is that the time will grow as O(n*m), which is at most O(n^2). This is confirmed by inspecting the scatterplot in Figure 1. Thus, the developed algorithm is roughly O(n^2) in big O notation, which is far more efficient than a brute-force enumeration of subsequences.
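As an aside (not part of the required deliverable above), the same length can be computed with the classic O(n log n) "tails" method, which makes a useful cross-check on the table-based solution. The sketch below is illustrative only; the function name `lis_nlogn` is an assumption of this example.

```
from bisect import bisect_left

def lis_nlogn(a):
    """Length of the longest strictly increasing subsequence in O(n log n)."""
    tails = []  # tails[k] = smallest possible tail value of an increasing subsequence of length k+1
    for x in a:
        pos = bisect_left(tails, x)  # first tail >= x
        if pos == len(tails):
            tails.append(x)          # x extends the longest subsequence found so far
        else:
            tails[pos] = x           # x becomes a smaller tail for subsequences of length pos+1
    return len(tails)

# Should agree with the dynamic-programming table version above
print(lis_nlogn(arr1))  # expected: 5, matching lis(arr1)
```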
# Chapter 3 Questions #### 3.1 Form dollar bars for E-mini S&P 500 futures: 1. Apply a symmetric CUSUM filter (Chapter 2, Section 2.5.2.1) where the threshold is the standard deviation of daily returns (Snippet 3.1). 2. Use Snippet 3.4 on a pandas series t1, where numDays=1. 3. On those sampled features, apply the triple-barrier method, where ptSl=[1,1] and t1 is the series you created in point 1.b. 4. Apply getBins to generate the labels. ``` import numpy as np import pandas as pd import timeit from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from sklearn.metrics import roc_curve, classification_report, confusion_matrix from mlfinlab.corefns.core_functions import CoreFunctions from mlfinlab.fracdiff.fracdiff import frac_diff_ffd import matplotlib.pyplot as plt %matplotlib inline # Read in data data = pd.read_csv('official_data/dollar_bars.csv', nrows=40000) data.index = pd.to_datetime(data['date_time']) data = data.drop('date_time', axis=1) data.head() ``` **Apply a symmetric CUSUM filter (Chapter 2, Section 2.5.2.1) where the threshold is the standard deviation of daily returns (Snippet 3.1).** ``` # Compute daily volatility vol = CoreFunctions.get_daily_vol(close=data['close'], lookback=50) vol.plot(figsize=(14, 7), title='Volatility as caclulated by de Prado') plt.show() # Apply Symmetric CUSUM Filter and get timestamps for events # Note: Only the CUSUM filter needs a point estimate for volatility cusum_events = CoreFunctions.get_t_events(data['close'], threshold=vol.mean()) ``` **Use Snippet 3.4 on a pandas series t1, where numDays=1.** ``` # Compute vertical barrier vertical_barriers = CoreFunctions.add_vertical_barrier(cusum_events, data['close']) vertical_barriers.head() ``` **On those sampled features, apply the triple-barrier method, where ptSl=[1,1] and t1 is the series you created in point 1.b.** ``` triple_barrier_events = CoreFunctions.get_events(close=data['close'], t_events=cusum_events, pt_sl=[1, 1], target=vol, min_ret=0.01, num_threads=1, vertical_barrier_times=vertical_barriers, side=None) triple_barrier_events.head() labels = CoreFunctions.get_bins(triple_barrier_events, data['close']) labels.head() labels['bin'].value_counts() ``` --- #### 3.2 From exercise 1, use Snippet 3.8 to drop rare labels. ``` clean_labels = CoreFunctions.drop_labels(labels) print(labels.shape) print(clean_labels.shape) ``` --- #### 3.3 Adjust the getBins function (Snippet 3.5) to return a 0 whenever the vertical barrier is the one touched first. This change was made inside the module CoreFunctions. --- #### 3.4 Develop a trend-following strategy based on a popular technical analysis statistic (e.g., crossing moving averages). For each observation, themodel suggests a side, but not a size of the bet. 1. Derive meta-labels for pt_sl = [1,2] and t1 where num_days=1. Use as trgt the daily standard deviation as computed by Snippet 3.1. 2. Train a random forest to decide whether to trade or not. Note: The decision is whether to trade or not, {0,1}, since the underllying model (the crossing moveing average has decided the side{-1, 1}) ``` # This question is answered in the notebook: 2019-03-06_JJ_Trend-Following-Question ``` ---- #### 3.5 Develop a mean-reverting strategy based on Bollinger bands. For each observation, the model suggests a side, but not a size of the bet. * (a) Derive meta-labels for ptSl = [0, 2] and t1 where numDays = 1. Use as trgt the daily standard deviation as computed by Snippet 3.1. 
* (b) Train a random forest to decide whether to trade or not. Use as features: volatility, serial correlation, and the crossing moving averages.
* (c) What is the accuracy of prediction from the primary model (i.e. if the secondary model does not filter the bets)? What are the precision, recall and F1-scores?
* (d) What is the accuracy of prediction from the secondary model? What are the precision, recall and F1-scores?

```
# This question is answered in the notebook: 2019-03-07_BBand-Question
```
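The full answer lives in the separate notebook referenced above. For orientation, the following is a minimal plain-pandas sketch of how a primary model might suggest a mean-reverting side from Bollinger bands; the window length, band width, and function name are assumptions of this example and it is not the implementation used in that notebook. The resulting series would then be passed to `get_events` via its `side` argument, as in exercise 3.1 above.

```
import pandas as pd

def bollinger_side(close: pd.Series, window: int = 20, num_std: float = 2.0) -> pd.Series:
    """Suggest a mean-reverting side: -1 above the upper band, +1 below the lower band, 0 otherwise."""
    mavg = close.rolling(window).mean()
    std = close.rolling(window).std()
    upper = mavg + num_std * std
    lower = mavg - num_std * std

    side = pd.Series(0, index=close.index)
    side[close > upper] = -1  # price stretched above the band -> bet on reversion down
    side[close < lower] = 1   # price stretched below the band -> bet on reversion up
    return side

# e.g. side = bollinger_side(data['close'])
```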
![Python Logo](img/Python_logo.png)

# If I have seen further it is by standing on the shoulders of Giants (Newton??)

![Python Logo](img/python-loc.png)
(https://www.openhub.net/)

![Python Logo](img/numpy-loc.png)
(https://www.openhub.net/)

![Python Logo](img/scipy-loc.png)
(https://www.openhub.net/)

![Python Logo](img/pandas-loc.png)
(https://www.openhub.net/)

![Python Logo](img/resumen-loc.png)
(https://www.openhub.net/)

### But what is it that makes these projects strong?

![Like a programmer](img/like-a-programmer.jpeg)
(https://medium.com/@sailorhg/coding-like-a-girl-595b90791cce)

![Guido](img/pycon-guido.jpg)

## Code of conduct

### PyCon 2016 Code Of Conduct

Harassment includes offensive communication related to gender, sexual orientation, disability, physical appearance, body size, race, religion, sexual images in public spaces, deliberate intimidation, stalking, following, harassing photography or recording, sustained disruption of talks or other events, inappropriate physical contact, and unwelcome sexual attention. Participants asked to stop any harassing behavior are expected to comply immediately.

Exhibitors in the expo hall, sponsor or vendor booths, or similar activities are also subject to the anti-harassment policy. In particular, exhibitors should not use sexualized images, activities, or other material. Booth staff (including volunteers) should not use sexualized clothing/uniforms/costumes, or otherwise create a sexualized environment.

Be careful in the words that you choose. Remember that sexist, racist, and other exclusionary jokes can be offensive to those around you. Excessive swearing and offensive jokes are not appropriate for PyCon.

If a participant engages in behavior that violates this code of conduct, the conference organizers may take any action they deem appropriate, including warning the offender or expulsion from the conference with no refund.

## Inclusion policies

![python software foundation](img/psf-logo.png)

__The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers.__

The Python Software Foundation (PSF) is a 501(c)(3) non-profit corporation that holds the intellectual property rights behind the Python programming language. We manage the open source licensing for Python version 2.1 and later and own and protect the trademarks associated with Python. We also run the North American PyCon conference annually, support other Python conferences around the world, and fund Python related development with our grants program and by funding special projects. (https://www.python.org/psf/)

![django girls logo](img/django-girls-logo.png)

__We inspire women to fall in love with programming.__

_Django Girls organize free Python and Django workshops, create open sourced online tutorials and curate amazing first experiences with technology._ (https://djangogirls.org/)

![tshirt](img/django-girls-tshirt.png)

![pyladies logo](img/pyladies-logo.png)

We are an international mentorship group with a focus on helping more women become active participants and leaders in the Python open-source community. Our mission is to promote, educate and advance a diverse Python community through outreach, education, conferences, events and social gatherings. PyLadies also aims to provide a friendly support network for women and a bridge to the larger Python world.
Anyone with an interest in Python is encouraged to participate! (http://www.pyladies.com/)

![Like a programmer](img/inclusion-in-numbers.png)
(https://www.slideshare.net/fmasanori/import-community-62142823)

# Thank you!

<center>David Manuel Ochoa González<br>
emails: ochoadavid at gmail.com - dochoa at iteso.mx<br>
github: https://github.com/ochoadavid<br>
supporting material at: https://github.com/ochoadavid/TallerDePython</center>
# T1566 - Phishing Adversaries may send phishing messages to elicit sensitive information and/or gain access to victim systems. All forms of phishing are electronically delivered social engineering. Phishing can be targeted, known as spearphishing. In spearphishing, a specific individual, company, or industry will be targeted by the adversary. More generally, adversaries can conduct non-targeted phishing, such as in mass malware spam campaigns. Adversaries may send victim’s emails containing malicious attachments or links, typically to execute malicious code on victim systems or to gather credentials for use of [Valid Accounts](https://attack.mitre.org/techniques/T1078). Phishing may also be conducted via third-party services, like social media platforms. ## Atomic Tests: Currently, no tests are available for this technique. ## Detection Network intrusion detection systems and email gateways can be used to detect phishing with malicious attachments in transit. Detonation chambers may also be used to identify malicious attachments. Solutions can be signature and behavior based, but adversaries may construct attachments in a way to avoid these systems. URL inspection within email (including expanding shortened links) can help detect links leading to known malicious sites. Detonation chambers can be used to detect these links and either automatically go to these sites to determine if they're potentially malicious, or wait and capture the content if a user visits the link. Because most common third-party services used for phishing via service leverage TLS encryption, SSL/TLS inspection is generally required to detect the initial communication/delivery. With SSL/TLS inspection intrusion detection signatures or other security gateway appliances may be able to detect malware. Anti-virus can potentially detect malicious documents and files that are downloaded on the user's computer. Many possible detections of follow-on behavior may take place once [User Execution](https://attack.mitre.org/techniques/T1204) occurs. ## Shield Active Defense ### Email Manipulation Modify the flow or contents of email. Email flow manipulation includes changing which mail appliances process mail flows, to which systems they forward mail, or moving mail after it arrives in an inbox. Email content manipulation includes altering the contents of an email message. #### Opportunity A phishing email can be detected and blocked from arriving at the intended recipient. #### Use Case A defender can intercept emails that are detected as suspicious or malicious by email detection tools and prevent deliver to the intended target. #### Procedures Modify the destination of inbound email to facilitate the collection of inbound spearphishing messages. Modify the contents of an email message to maintain continuity when it is used for adversary engagement purposes.
# Import Libraries ``` from __future__ import print_function import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms ``` ## Data Transformations We first start with defining our data transformations. We need to think what our data is and how can we augment it to correct represent images which it might not see otherwise. ``` # Train Phase transformations train_transforms = transforms.Compose([ # transforms.Resize((28, 28)), # transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1), transforms.RandomRotation((-7.0, 7.0), fill=(1,)), transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values. # Note the difference between (0.1307) and (0.1307,) ]) # Test Phase transformations test_transforms = transforms.Compose([ # transforms.Resize((28, 28)), # transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1), transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ]) ``` # Dataset and Creating Train/Test Split ``` train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms) test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms) ``` # Dataloader Arguments & Test/Train Dataloaders ``` SEED = 1 # CUDA? cuda = torch.cuda.is_available() print("CUDA Available?", cuda) # For reproducibility torch.manual_seed(SEED) if cuda: torch.cuda.manual_seed(SEED) # dataloader arguments - something you'll fetch these from cmdprmt dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64) # train dataloader train_loader = torch.utils.data.DataLoader(train, **dataloader_args) # test dataloader test_loader = torch.utils.data.DataLoader(test, **dataloader_args) ``` # The model Let's start with the model we first saw ``` import torch.nn.functional as F dropout_value = 0.1 class Net(nn.Module): def __init__(self): super(Net, self).__init__() # Input Block self.convblock1 = nn.Sequential( nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(3, 3), padding=0, bias=False), nn.ReLU(), nn.BatchNorm2d(16), nn.Dropout(dropout_value) ) # output_size = 26 # CONVOLUTION BLOCK 1 self.convblock2 = nn.Sequential( nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3), padding=0, bias=False), nn.ReLU(), nn.BatchNorm2d(32), nn.Dropout(dropout_value) ) # output_size = 24 # TRANSITION BLOCK 1 self.convblock3 = nn.Sequential( nn.Conv2d(in_channels=32, out_channels=10, kernel_size=(1, 1), padding=0, bias=False), ) # output_size = 24 self.pool1 = nn.MaxPool2d(2, 2) # output_size = 12 # CONVOLUTION BLOCK 2 self.convblock4 = nn.Sequential( nn.Conv2d(in_channels=10, out_channels=16, kernel_size=(3, 3), padding=0, bias=False), nn.ReLU(), nn.BatchNorm2d(16), nn.Dropout(dropout_value) ) # output_size = 10 self.convblock5 = nn.Sequential( nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False), nn.ReLU(), nn.BatchNorm2d(16), nn.Dropout(dropout_value) ) # output_size = 8 self.convblock6 = nn.Sequential( nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False), nn.ReLU(), nn.BatchNorm2d(16), nn.Dropout(dropout_value) ) # output_size = 6 self.convblock7 = nn.Sequential( nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=1, bias=False), nn.ReLU(), nn.BatchNorm2d(16), 
nn.Dropout(dropout_value) ) # output_size = 6 # OUTPUT BLOCK self.gap = nn.Sequential( nn.AvgPool2d(kernel_size=6) ) # output_size = 1 self.convblock8 = nn.Sequential( nn.Conv2d(in_channels=16, out_channels=10, kernel_size=(1, 1), padding=0, bias=False), # nn.BatchNorm2d(10), # nn.ReLU(), # nn.Dropout(dropout_value) ) self.dropout = nn.Dropout(dropout_value) def forward(self, x): x = self.convblock1(x) x = self.convblock2(x) x = self.convblock3(x) x = self.pool1(x) x = self.convblock4(x) x = self.convblock5(x) x = self.convblock6(x) x = self.convblock7(x) x = self.gap(x) x = self.convblock8(x) x = x.view(-1, 10) return F.log_softmax(x, dim=-1) ``` # Model Params Can't emphasize on how important viewing Model Summary is. Unfortunately, there is no in-built model visualizer, so we have to take external help ``` !pip install torchsummary from torchsummary import summary use_cuda = torch.cuda.is_available() device = torch.device("cuda" if use_cuda else "cpu") print(device) model = Net().to(device) summary(model, input_size=(1, 28, 28)) ``` # Training and Testing All right, so we have 24M params, and that's too many, we know that. But the purpose of this notebook is to set things right for our future experiments. Looking at logs can be boring, so we'll introduce **tqdm** progressbar to get cooler logs. Let's write train and test functions ``` from tqdm import tqdm train_losses = [] test_losses = [] train_acc = [] test_acc = [] def train(model, device, train_loader, optimizer, epoch): model.train() pbar = tqdm(train_loader) correct = 0 processed = 0 for batch_idx, (data, target) in enumerate(pbar): # get samples data, target = data.to(device), target.to(device) # Init optimizer.zero_grad() # In PyTorch, we need to set the gradients to zero before starting to do backpropragation because PyTorch accumulates the gradients on subsequent backward passes. # Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly. # Predict y_pred = model(data) # Calculate loss loss = F.nll_loss(y_pred, target) train_losses.append(loss) # Backpropagation loss.backward() optimizer.step() # Update pbar-tqdm pred = y_pred.argmax(dim=1, keepdim=True) # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() processed += len(data) pbar.set_description(desc= f'Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}') train_acc.append(100*correct/processed) def test(model, device, test_loader): model.eval() test_loss = 0 correct = 0 with torch.no_grad(): for data, target in test_loader: data, target = data.to(device), target.to(device) output = model(data) test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() test_loss /= len(test_loader.dataset) test_losses.append(test_loss) print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset))) test_acc.append(100. 
* correct / len(test_loader.dataset)) from torch.optim.lr_scheduler import StepLR model = Net().to(device) optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9) # scheduler = StepLR(optimizer, step_size=6, gamma=0.1) EPOCHS = 20 for epoch in range(EPOCHS): print("EPOCH:", epoch) train(model, device, train_loader, optimizer, epoch) # scheduler.step() test(model, device, test_loader) ```
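Once training completes, the four lists collected by the train and test functions above can be visualized. The snippet below is a small illustrative sketch, not part of the original notebook; note the conversion of the stored training losses to floats, since `loss` was appended as a tensor rather than with `.item()`.

```
import matplotlib.pyplot as plt

# train_losses holds raw loss tensors, so convert them to Python floats before plotting
train_loss_values = [t.item() for t in train_losses]

fig, axs = plt.subplots(2, 2, figsize=(15, 10))
axs[0, 0].plot(train_loss_values)
axs[0, 0].set_title("Training Loss")
axs[1, 0].plot(train_acc)
axs[1, 0].set_title("Training Accuracy")
axs[0, 1].plot(test_losses)
axs[0, 1].set_title("Test Loss")
axs[1, 1].plot(test_acc)
axs[1, 1].set_title("Test Accuracy")
plt.show()
```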
# Document AI Specialized Parser with HITL This notebook shows you how to use Document AI's specialized parsers ex. Invoice, Receipt, W2, W9, etc. and also shows Human in the Loop (HITL) output for supported parsers. ``` # Install necessary Python libraries and restart your kernel after. !python -m pip install -r ../requirements.txt from google.cloud import documentai_v1beta3 as documentai from PIL import Image, ImageDraw import os import pandas as pd ``` ## Set your processor variables ``` # TODO(developer): Fill these variables with your values before running the sample PROJECT_ID = "YOUR_PROJECT_ID_HERE" LOCATION = "us" # Format is 'us' or 'eu' PROCESSOR_ID = "PROCESSOR_ID" # Create processor in Cloud Console PDF_PATH = "../resources/procurement/invoices/invoice.pdf" # Update to path of target document ``` The following code calls the synchronous API and parses the form fields and values. ``` def process_document_sample(): # Instantiates a client client_options = {"api_endpoint": "{}-documentai.googleapis.com".format(LOCATION)} client = documentai.DocumentProcessorServiceClient(client_options=client_options) # The full resource name of the processor, e.g.: # projects/project-id/locations/location/processor/processor-id # You must create new processors in the Cloud Console first name = f"projects/{PROJECT_ID}/locations/{LOCATION}/processors/{PROCESSOR_ID}" with open(PDF_PATH, "rb") as image: image_content = image.read() # Read the file into memory document = {"content": image_content, "mime_type": "application/pdf"} # Configure the process request request = {"name": name, "document": document} # Recognizes text entities in the PDF document result = client.process_document(request=request) document = result.document entities = document.entities print("Document processing complete.\n\n") # For a full list of Document object attributes, please reference this page: https://googleapis.dev/python/documentai/latest/_modules/google/cloud/documentai_v1beta3/types/document.html#Document types = [] values = [] confidence = [] # Grab each key/value pair and their corresponding confidence scores. for entity in entities: types.append(entity.type_) values.append(entity.mention_text) confidence.append(round(entity.confidence,4)) # Create a Pandas Dataframe to print the values in tabular format. df = pd.DataFrame({'Type': types, 'Value': values, 'Confidence': confidence}) display(df) if result.human_review_operation: print ("Triggered HITL long running operation: {}".format(result.human_review_operation)) return document def get_text(doc_element: dict, document: dict): """ Document AI identifies form fields by their offsets in document text. This function converts offsets to text snippets. """ response = "" # If a text segment spans several lines, it will # be stored in different text segments. for segment in doc_element.text_anchor.text_segments: start_index = ( int(segment.start_index) if segment in doc_element.text_anchor.text_segments else 0 ) end_index = int(segment.end_index) response += document.text[start_index:end_index] return response doc = process_document_sample() ``` ## Draw the bounding boxes We will now use the spatial data returned by the processor to mark our values on the invoice pdf file that we first converted into a jpg. ``` JPG_PATH = "../resources/procurement/invoices/invoice.jpg" # Update to path of a jpg of your sample document. 
document_image = Image.open(JPG_PATH) draw = ImageDraw.Draw(document_image) for entity in doc.entities: # Draw the bounding box around the entities vertices = [] for vertex in entity.page_anchor.page_refs[0].bounding_poly.normalized_vertices: vertices.append({'x': vertex.x * document_image.size[0], 'y': vertex.y * document_image.size[1]}) draw.polygon([ vertices[0]['x'], vertices[0]['y'], vertices[1]['x'], vertices[1]['y'], vertices[2]['x'], vertices[2]['y'], vertices[3]['x'], vertices[3]['y']], outline='blue') document_image ``` # Human in the loop (HITL) Operation **Only complete this section if a HITL Operation is triggered.** </br> ``` lro = "LONG_RUNNING_OPERATION" # LRO printed in the previous cell ex. projects/660199673046/locations/us/operations/174674963333130330 client = documentai.DocumentProcessorServiceClient() operation = client._transport.operations_client.get_operation(lro) if operation.done: print("HITL location: {} ".format(str(operation.response.value)[5:-1])) else: print('Waiting on human review.') !gsutil cp "HITL_LOCATION" response.json # Location printed above ex. gs://gcs_bucket/receipt-output/174674963333130330/data-00001-of-00001.json with open("response.json", "r") as file: import json entities = {} data = json.load(file) for entity in data['entities']: if 'mentionText' in entity: entities[entity['type']] = entity['mentionText'] else: entities[entity['type']] = "" for t in entities: print("{} : {}\n ".format(t, entities[t])) ```
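If the human review has not finished, the check above can simply be re-run later. A minimal polling sketch, built only from the calls already used in this notebook, might look like the following; the sleep interval and timeout are arbitrary assumptions of this example.

```
import time

# Re-check the HITL long-running operation until the human review completes
# (polling every 10 minutes, for at most 24 hours).
operation = client._transport.operations_client.get_operation(lro)
waited = 0
while not operation.done and waited < 24 * 60 * 60:
    time.sleep(600)
    waited += 600
    operation = client._transport.operations_client.get_operation(lro)

if operation.done:
    print("HITL location: {} ".format(str(operation.response.value)[5:-1]))
else:
    print("Still waiting on human review after 24 hours.")
```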
Notebook which focuses on the randomly generated data sets and the performance comparison of algorithms on it ``` from IPython.core.display import display, HTML display(HTML('<style>.container {width:100% !important;}</style>')) %matplotlib notebook import matplotlib.pyplot as plt import numpy as np import torch from itertools import product, chain import nmf.mult import nmf.pgrad import nmf.nesterov import nmf_torch.mult import nmf_torch.pgrad import nmf_torch.nesterov import nmf_torch.norms import matplotlib import pickle from performance.performance_eval_func import get_random_lowrank_matrix, get_time_ratio,\ compare_performance, plot_errors_dict,\ torch_algo_wrapper,\ plot_ratios_gpu_algo, plot_ratios_cpu_gpu, plot_ratios_cpu_algo,\ plot_errors_dict, errors_at_time_t_over_inner_dim algo_dict_to_test = { "mult": nmf.mult.factorize_Fnorm, "pgrad": nmf.pgrad.factorize_Fnorm_subproblems, "nesterov": nmf.nesterov.factorize_Fnorm, "mult_torch": torch_algo_wrapper(nmf_torch.mult.factorize_Fnorm, device="cuda"), "pgrad_torch": torch_algo_wrapper(nmf_torch.pgrad.factorize_Fnorm_subproblems, device="cuda"), "nesterov_torch": torch_algo_wrapper(nmf_torch.nesterov.factorize_Fnorm, device="cuda") } f, ax = plt.subplots() plot_errors_dict(errors_over_r_random, ax, log=True, x_lbl="Inner dim", title="site3") f, ax = plt.subplots() plot_errors_dict(errors_over_r_random, ax, log=False, x_lbl="Inner dim", title="site3") shapes = [(5 * a, a) for a in [30, 100, 300, 1000, 3000]] shapes inner_dims_small = [sh[1] // 10 for sh in shapes] inner_dims_small inner_dims_big = [8 * sh[1] // 10 for sh in shapes] inner_dims_big shapes_all = shapes + shapes inner_dims = inner_dims_small + inner_dims_big times = [5, 25, 200, 1200, 8000] times = times + [t * 2 for t in times] print(len(shapes_all)) errors_dict = pickle.load(open("random_data_errors_dict.pkl","rb")) del errors_dict[(3, (150, 30))] errors_dict = {} for inner_dim, shape, t in zip(inner_dims, shapes_all, times): print((inner_dim, shape)) if (inner_dim, shape) in errors_dict.keys(): continue V = get_random_lowrank_matrix(shape[0], inner_dim, shape[1]) + np.random.rand(*shape) * 0.1 W_init = np.random.rand(shape[0], inner_dim) H_init = np.random.rand(inner_dim, shape[1]) errors = compare_performance(V=V, inner_dim=inner_dim, time_limit=t, W_init=W_init, H_init=H_init, algo_dict_to_test=algo_dict_to_test) errors_dict[(inner_dim, shape)] = errors pickle.dump(errors_dict, open("random_data_errors_dict.pkl","wb")) pickle.dump(errors_dict, open("random_data_errors_dict.pkl","wb")) keys = zip(inner_dims, shapes_all) keys = sorted(keys, key=lambda k: k[0]) keys = sorted(keys, key=lambda k: k[1][0]) keys for k in keys: r, shape = k M = np.random.rand(shape[0], r) @ np.random.rand(r, shape[1]) errros_dict_particular_data = errors_dict[k] f, axes = plt.subplots(3, 2, figsize=(8, 7), dpi=100, gridspec_kw=dict(hspace=0.45, top=0.92, bottom=0.08, left=0.08, right=0.99)) f.suptitle("Comparison, time ratio for {}, {:.2f}KB, {:.2f}MB".format(k, M.nbytes / 2**10, M.nbytes / 2**20)) plot_errors_dict(errros_dict_particular_data, axes[0, 0], log=True, title="Objective function", x_lbl="time [s]") plot_ratios_cpu_gpu(errros_dict_particular_data, axes[0, 1]) plot_ratios_cpu_algo(errros_dict_particular_data, axes[1:, 0], selected_algs=["mult", "pgrad", "nesterov"]) plot_ratios_gpu_algo(errros_dict_particular_data, axes[1:, 1], selected_algs=["mult_torch", "pgrad_torch", "nesterov_torch"]) font = {'family' : 'normal', 'weight' : 'normal', 'size' : 14} matplotlib.rc('font', **font) 
figsize = (9, 10) gridspec_kw = dict(wspace=0.4, hspace=0.9, top=0.85, bottom=0.1, left=0.1, right=0.95) plt.close("all") f, axes1 = plt.subplots(3, 2, figsize=figsize, dpi=100, gridspec_kw=gridspec_kw) f.suptitle("Ratio between time required\nto reach particular cost function value on CPU and on GPU") f, axes2 = plt.subplots(3, 2, figsize=figsize, dpi=100, gridspec_kw=gridspec_kw) f.suptitle("Ratio between time required\nto reach particular cost function value on CPU and on GPU") axes1[0,0].get_shared_y_axes().join(*axes1[0, :], *axes1[1, :], *axes1[2, :]) axes2[0,0].get_shared_y_axes().join(*axes2[0, :], *axes2[1, :], *axes2[2, :]) axes1[2,1].set_axis_off() axes2[2,1].set_axis_off() axes1 = list(axes1.ravel()) axes2 = list(axes2.ravel()) legend_is = False for k, a in zip(keys, chain.from_iterable(zip(axes1, axes2))): print(a) r, shape = k plot_ratios_cpu_gpu(errors_dict[k], a) M = np.random.rand(shape[0], r) @ np.random.rand(r, shape[1]) kb = M.nbytes / 2**10 mb = M.nbytes / 2**20 if mb < 1: size = "{:.1f}KB".format(kb) else: size = "{:.1f}MB".format(mb) a.set_title("Factorization of size {}\nmat. dim. {}, {}".format(k[0], k[1], size)) if legend_is: a.get_legend().remove() else: legend_is = True plt.close("all") f, axes1 = plt.subplots(3, 2, figsize=figsize, dpi=100, gridspec_kw=gridspec_kw) f.suptitle("Ratio between time required to reach a particular"+ "cost \n function value for multiplicative algorithms and gradient algorithms") f, axes2 = plt.subplots(3, 2, figsize=figsize, dpi=100, gridspec_kw=gridspec_kw) f.suptitle("Ratio between time required to reach a particular cost \n function value for projecitve and Nesterov gradient algorithms") axes1 = list(axes1.ravel()) axes2 = list(axes2.ravel()) axes1[-1].set_axis_off() axes2[-1].set_axis_off() # axes1[0].get_shared_y_axes().join(*axes1) axes2[0].get_shared_y_axes().join(*axes2) print(keys) print(len(axes1)) print(len(axes2)) legend_is = False for k, a1, a2 in zip(keys[::2], axes1, axes2): r, shape = k if r != 0.1 * shape[1]: print(k) continue plot_ratios_gpu_algo(errors_dict[k], [a1, a2], selected_algs=["mult_torch", "pgrad_torch", "nesterov_torch"]) M = np.random.rand(shape[0], r) @ np.random.rand(r, shape[1]) kb = M.nbytes / 2**10 mb = M.nbytes / 2**20 if mb < 1: size = "{:.2f}KB".format(kb) else: size = "{:.2f}MB".format(mb) a1.set_title("factorization of size {}\nmat. shape {} {}".format(k[0], k[1], size)) a2.set_title("factorization of size {}\nmat. shape {} {}".format(k[0], k[1], size)) if legend_is: a1.get_legend().remove() a2.get_legend().remove() else: legend_is = True ```
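For readers without access to the project's `nmf`/`nmf_torch` packages, the multiplicative-update baseline benchmarked above is essentially the classic Lee-Seung algorithm for the Frobenius norm. The sketch below is a generic NumPy reference implementation under that assumption, not the package's `factorize_Fnorm` function; the function name and defaults are illustrative only.

```
import numpy as np

def nmf_mult(V, r, n_iter=200, eps=1e-10, seed=0):
    """Multiplicative-update NMF minimizing ||V - W H||_F (Lee-Seung updates)."""
    rng = np.random.RandomState(seed)
    n, m = V.shape
    W = rng.rand(n, r)
    H = rng.rand(r, m)
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # elementwise update of H
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # elementwise update of W
    return W, H

# Small sanity check on a low-rank test matrix of the same flavor as those above
V = np.random.rand(150, 3) @ np.random.rand(3, 30)
W, H = nmf_mult(V, r=3)
print(np.linalg.norm(V - W @ H))  # should be small after the updates converge
```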
``` %matplotlib inline ``` 02: Fitting Power Spectrum Models ================================= Introduction to the module, beginning with the FOOOF object. ``` # Import the FOOOF object from fooof import FOOOF # Import utility to download and load example data from fooof.utils.download import load_fooof_data # Download examples data files needed for this example freqs = load_fooof_data('freqs.npy', folder='data') spectrum = load_fooof_data('spectrum.npy', folder='data') ``` FOOOF Object ------------ At the core of the module, which is object oriented, is the :class:`~fooof.FOOOF` object, which holds relevant data and settings as attributes, and contains methods to run the algorithm to parameterize neural power spectra. The organization is similar to sklearn: - A model object is initialized, with relevant settings - The model is used to fit the data - Results can be extracted from the object Calculating Power Spectra ~~~~~~~~~~~~~~~~~~~~~~~~~ The :class:`~fooof.FOOOF` object fits models to power spectra. The module itself does not compute power spectra, and so computing power spectra needs to be done prior to using the FOOOF module. The model is broadly agnostic to exactly how power spectra are computed. Common methods, such as Welch's method, can be used to compute the spectrum. If you need a module in Python that has functionality for computing power spectra, try `NeuroDSP <https://neurodsp-tools.github.io/neurodsp/>`_. Note that FOOOF objects require frequency and power values passed in as inputs to be in linear spacing. Passing in non-linear spaced data (such logged values) may produce erroneous results. Fitting an Example Power Spectrum --------------------------------- The following example demonstrates fitting a power spectrum model to a single power spectrum. ``` # Initialize a FOOOF object fm = FOOOF() # Set the frequency range to fit the model freq_range = [2, 40] # Report: fit the model, print the resulting parameters, and plot the reconstruction fm.report(freqs, spectrum, freq_range) ``` Fitting Models with 'Report' ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The above method 'report', is a convenience method that calls a series of methods: - :meth:`~fooof.FOOOF.fit`: fits the power spectrum model - :meth:`~fooof.FOOOF.print_results`: prints out the results - :meth:`~fooof.FOOOF.plot`: plots to data and model fit Each of these methods can also be called individually. ``` # Alternatively, just fit the model with FOOOF.fit() (without printing anything) fm.fit(freqs, spectrum, freq_range) # After fitting, plotting and parameter fitting can be called independently: # fm.print_results() # fm.plot() ``` Model Parameters ~~~~~~~~~~~~~~~~ Once the power spectrum model has been calculated, the model fit parameters are stored as object attributes that can be accessed after fitting. 
Following the sklearn convention, attributes that are fit as a result of the model have a trailing underscore, for example: - ``aperiodic_params_`` - ``peak_params_`` - ``error_`` - ``r2_`` - ``n_peaks_`` Access model fit parameters from FOOOF object, after fitting: ``` # Aperiodic parameters print('Aperiodic parameters: \n', fm.aperiodic_params_, '\n') # Peak parameters print('Peak parameters: \n', fm.peak_params_, '\n') # Goodness of fit measures print('Goodness of fit:') print(' Error - ', fm.error_) print(' R^2 - ', fm.r_squared_, '\n') # Check how many peaks were fit print('Number of fit peaks: \n', fm.n_peaks_) ``` Selecting Parameters ~~~~~~~~~~~~~~~~~~~~ You can also select parameters using the :meth:`~fooof.FOOOF.get_params` method, which can be used to specify which parameters you want to extract. ``` # Extract a model parameter with `get_params` err = fm.get_params('error') # Extract parameters, indicating sub-selections of parameter exp = fm.get_params('aperiodic_params', 'exponent') cfs = fm.get_params('peak_params', 'CF') # Print out a custom parameter report template = ("With an error level of {error:1.2f}, FOOOF fit an exponent " "of {exponent:1.2f} and peaks of {cfs:s} Hz.") print(template.format(error=err, exponent=exp, cfs=' & '.join(map(str, [round(cf, 2) for cf in cfs])))) ``` For a full description of how you can access data with :meth:`~fooof.FOOOF.get_params`, check the method's documentation. As a reminder, you can access the documentation for a function using '?' in a Jupyter notebook (ex: `fm.get_params?`), or more generally with the `help` function in general Python (ex: `help(get_params)`). Notes on Interpreting Peak Parameters ------------------------------------- Peak parameters are labeled as: - CF: center frequency of the extracted peak - PW: power of the peak, over and above the aperiodic component - BW: bandwidth of the extracted peak Note that the peak parameters that are returned are not exactly the same as the parameters of the Gaussians used internally to fit the peaks. Specifically: - CF is the exact same as mean parameter of the Gaussian - PW is the height of the model fit above the aperiodic component [1], which is not necessarily the same as the Gaussian height - BW is 2 * the standard deviation of the Gaussian [2] [1] Since the Gaussians are fit together, if any Gaussians overlap, than the actual height of the fit at a given point can only be assessed when considering all Gaussians. To be better able to interpret heights for single peak fits, we re-define the peak height as above, and label it as 'power', as the units of the input data are expected to be units of power. [2] Gaussian standard deviation is '1 sided', where as the returned BW is '2 sided'. The underlying gaussian parameters are also available from the FOOOF object, in the ``gaussian_params_`` attribute. ``` # Compare the 'peak_params_' to the underlying gaussian parameters print(' Peak Parameters \t Gaussian Parameters') for peak, gauss in zip(fm.peak_params_, fm.gaussian_params_): print('{:5.2f} {:5.2f} {:5.2f} \t {:5.2f} {:5.2f} {:5.2f}'.format(*peak, *gauss)) ``` FOOOFResults ~~~~~~~~~~~~ There is also a convenience method to return all model fit results: :func:`~fooof.FOOOF.get_results`. This method returns all the model fit parameters, including the underlying Gaussian parameters, collected together into a FOOOFResults object. The FOOOFResults object, which in Python terms is a named tuple, is a standard data object used with FOOOF to organize and collect parameter data. 
``` # Grab each model fit result with `get_results` to gather all results together # Note that this returns a FOOOFResult object fres = fm.get_results() # You can also unpack all fit parameters when using `get_results` ap_params, peak_params, r_squared, fit_error, gauss_params = fm.get_results() # Print out the FOOOFResults print(fres, '\n') # From FOOOFResults, you can access the different results print('Aperiodic Parameters: \n', fres.aperiodic_params) # Check the r^2 and error of the model fit print('R-squared: \n {:5.4f}'.format(fm.r_squared_)) print('Fit error: \n {:5.4f}'.format(fm.error_)) ``` Conclusion ---------- In this tutorial, we have explored the basics of the :class:`~fooof.FOOOF` object, fitting power spectrum models, and extracting parameters. Before we move on to controlling the fit procedure, and interpreting the results, in the next tutorial, we will first explore how this model is actually fit.
## Discretisation Discretisation is the process of transforming continuous variables into discrete variables by creating a set of contiguous intervals that span the range of the variable's values. Discretisation is also called **binning**, where bin is an alternative name for interval. ### Discretisation helps handle outliers and may improve value spread in skewed variables Discretisation helps handle outliers by placing these values into the lower or higher intervals, together with the remaining inlier values of the distribution. Thus, these outlier observations no longer differ from the rest of the values at the tails of the distribution, as they are now all together in the same interval / bucket. In addition, by creating appropriate bins or intervals, discretisation can help spread the values of a skewed variable across a set of bins with equal number of observations. ### Discretisation approaches There are several approaches to transform continuous variables into discrete ones. Discretisation methods fall into 2 categories: **supervised and unsupervised**. Unsupervised methods do not use any information, other than the variable distribution, to create the contiguous bins in which the values will be placed. Supervised methods typically use target information in order to create the bins or intervals. #### Unsupervised discretisation methods - Equal width discretisation - Equal frequency discretisation - K-means discretisation #### Supervised discretisation methods - Discretisation using decision trees In this lecture, I will describe **equal width discretisation**. ## Equal width discretisation Equal width discretisation divides the scope of possible values into N bins of the same width.The width is determined by the range of values in the variable and the number of bins we wish to use to divide the variable: width = (max value - min value) / N where N is the number of bins or intervals. For example if the values of the variable vary between 0 and 100, we create 5 bins like this: width = (100-0) / 5 = 20. The bins thus are 0-20, 20-40, 40-60, 80-100. The first and final bins (0-20 and 80-100) can be expanded to accommodate outliers (that is, values under 0 or greater than 100 would be placed in those bins as well). There is no rule of thumb to define N, that is something to determine experimentally. ## In this demo We will learn how to perform equal width binning using the Titanic dataset with - pandas and NumPy - Feature-engine - Scikit-learn ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.preprocessing import KBinsDiscretizer from feature_engine.discretisers import EqualWidthDiscretiser # load the numerical variables of the Titanic Dataset data = pd.read_csv('../titanic.csv', usecols=['age', 'fare', 'survived']) data.head() # Let's separate into train and test set X_train, X_test, y_train, y_test = train_test_split( data[['age', 'fare']], data['survived'], test_size=0.3, random_state=0) X_train.shape, X_test.shape ``` The variables Age and fare contain missing data, that I will fill by extracting a random sample of the variable. 
``` def impute_na(data, variable): df = data.copy() # random sampling df[variable + '_random'] = df[variable] # extract the random sample to fill the na random_sample = X_train[variable].dropna().sample( df[variable].isnull().sum(), random_state=0) # pandas needs to have the same index in order to merge datasets random_sample.index = df[df[variable].isnull()].index df.loc[df[variable].isnull(), variable + '_random'] = random_sample return df[variable + '_random'] # replace NA in both train and test sets X_train['age'] = impute_na(data, 'age') X_test['age'] = impute_na(data, 'age') X_train['fare'] = impute_na(data, 'fare') X_test['fare'] = impute_na(data, 'fare') # let's explore the distribution of age data[['age', 'fare']].hist(bins=30, figsize=(8,4)) plt.show() ``` ## Equal width discretisation with pandas and NumPy First we need to determine the intervals' edges or limits. ``` # let's capture the range of the variable age age_range = X_train['age'].max() - X_train['age'].min() age_range # let's divide the range into 10 equal width bins age_range / 10 ``` The range or width of our intervals will be 7 years. ``` # now let's capture the lower and upper boundaries min_value = int(np.floor( X_train['age'].min())) max_value = int(np.ceil( X_train['age'].max())) # let's round the bin width inter_value = int(np.round(age_range / 10)) min_value, max_value, inter_value # let's capture the interval limits, so we can pass them to the pandas cut # function to generate the bins intervals = [i for i in range(min_value, max_value+inter_value, inter_value)] intervals # let's make labels to label the different bins labels = ['Bin_' + str(i) for i in range(1, len(intervals))] labels # create binned age / discretise age # create one column with labels X_train['Age_disc_labels'] = pd.cut(x=X_train['age'], bins=intervals, labels=labels, include_lowest=True) # and one with bin boundaries X_train['Age_disc'] = pd.cut(x=X_train['age'], bins=intervals, include_lowest=True) X_train.head(10) ``` We can see in the above output how by discretising using equal width, we placed each Age observation within one interval / bin. For example, age=13 was placed in the 7-14 interval, whereas age 30 was placed into the 28-35 interval. When performing equal width discretisation, we guarantee that the intervals are all of the same lenght, however there won't necessarily be the same number of observations in each of the intervals. See below: ``` X_train.groupby('Age_disc')['age'].count() X_train.groupby('Age_disc')['age'].count().plot.bar() plt.xticks(rotation=45) plt.ylabel('Number of observations per bin') ``` The majority of people on the Titanic were between 14-42 years of age. 
Now, we can discretise Age in the test set, using the same interval boundaries that we calculated for the train set: ``` X_test['Age_disc_labels'] = pd.cut(x=X_test['age'], bins=intervals, labels=labels, include_lowest=True) X_test['Age_disc'] = pd.cut(x=X_test['age'], bins=intervals, include_lowest=True) X_test.head() # if the distributions in train and test set are similar, we should expect similar propotion of # observations in the different intervals in the train and test set # let's see that below t1 = X_train.groupby(['Age_disc'])['age'].count() / len(X_train) t2 = X_test.groupby(['Age_disc'])['age'].count() / len(X_test) tmp = pd.concat([t1, t2], axis=1) tmp.columns = ['train', 'test'] tmp.plot.bar() plt.xticks(rotation=45) plt.ylabel('Number of observations per bin') ``` ## Equal width discretisation with Feature-Engine ``` # Let's separate into train and test set X_train, X_test, y_train, y_test = train_test_split( data[['age', 'fare']], data['survived'], test_size=0.3, random_state=0) X_train.shape, X_test.shape # replace NA in both train and test sets X_train['age'] = impute_na(data, 'age') X_test['age'] = impute_na(data, 'age') X_train['fare'] = impute_na(data, 'fare') X_test['fare'] = impute_na(data, 'fare') # with feature engine we can automate the process for many variables # in one line of code disc = EqualWidthDiscretiser(bins=10, variables = ['age', 'fare']) disc.fit(X_train) # in the binner dict, we can see the limits of the intervals. For age # the value increases aproximately 7 years from one bin to the next. # for fare it increases in around 50 dollars from one interval to the # next, but it increases always the same value, aka, same width. disc.binner_dict_ # transform train and text train_t = disc.transform(X_train) test_t = disc.transform(X_test) train_t.head() t1 = train_t.groupby(['age'])['age'].count() / len(train_t) t2 = test_t.groupby(['age'])['age'].count() / len(test_t) tmp = pd.concat([t1, t2], axis=1) tmp.columns = ['train', 'test'] tmp.plot.bar() plt.xticks(rotation=0) plt.ylabel('Number of observations per bin') t1 = train_t.groupby(['fare'])['fare'].count() / len(train_t) t2 = test_t.groupby(['fare'])['fare'].count() / len(test_t) tmp = pd.concat([t1, t2], axis=1) tmp.columns = ['train', 'test'] tmp.plot.bar() plt.xticks(rotation=0) plt.ylabel('Number of observations per bin') ``` We can see quite clearly, that equal width discretisation does not improve the value spread. The original variable Fare was skewed, and the discrete variable is also skewed. 
## Equal width discretisation with Scikit-learn ``` # Let's separate into train and test set X_train, X_test, y_train, y_test = train_test_split( data[['age', 'fare']], data['survived'], test_size=0.3, random_state=0) X_train.shape, X_test.shape # replace NA in both train and test sets X_train['age'] = impute_na(data, 'age') X_test['age'] = impute_na(data, 'age') X_train['fare'] = impute_na(data, 'fare') X_test['fare'] = impute_na(data, 'fare') disc = KBinsDiscretizer(n_bins=10, encode='ordinal', strategy='uniform') disc.fit(X_train[['age', 'fare']]) disc.bin_edges_ train_t = disc.transform(X_train[['age', 'fare']]) train_t = pd.DataFrame(train_t, columns = ['age', 'fare']) train_t.head() test_t = disc.transform(X_test[['age', 'fare']]) test_t = pd.DataFrame(test_t, columns = ['age', 'fare']) t1 = train_t.groupby(['age'])['age'].count() / len(train_t) t2 = test_t.groupby(['age'])['age'].count() / len(test_t) tmp = pd.concat([t1, t2], axis=1) tmp.columns = ['train', 'test'] tmp.plot.bar() plt.xticks(rotation=0) plt.ylabel('Number of observations per bin') t1 = train_t.groupby(['fare'])['fare'].count() / len(train_t) t2 = test_t.groupby(['fare'])['fare'].count() / len(test_t) tmp = pd.concat([t1, t2], axis=1) tmp.columns = ['train', 'test'] tmp.plot.bar() plt.xticks(rotation=0) plt.ylabel('Number of observations per bin') ```
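As a brief contrast to the three equal-width implementations above, equal-frequency discretisation (mentioned in the introduction) places roughly the same number of observations in each bin and therefore flattens the skew in fare. The sketch below is illustrative only; it uses pandas' `qcut`, and the column name `fare_eqfreq` is an assumption of this example.

```
# Equal-frequency (quantile) discretisation for comparison: bin edges are learned
# on the train set and then applied to the test set.
X_train['fare_eqfreq'], fare_bins = pd.qcut(
    X_train['fare'], q=10, retbins=True, duplicates='drop')
X_test['fare_eqfreq'] = pd.cut(X_test['fare'], bins=fare_bins, include_lowest=True)

X_train['fare_eqfreq'].value_counts().sort_index().plot.bar()
plt.xticks(rotation=45)
plt.ylabel('Number of observations per bin')
```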
# MCIS6273 Data Mining (Prof. Maull) / Fall 2021 / HW0 **This assignment is worth up to 20 POINTS to your grade total if you complete it on time.** | Points <br/>Possible | Due Date | Time Commitment <br/>(estimated) | |:---------------:|:--------:|:---------------:| | 20 | Wednesday, Sep 1 @ Midnight | _up to_ 20 hours | * **GRADING:** Grading will be aligned with the completeness of the objectives. * **INDEPENDENT WORK:** Copying, cheating, plagiarism and academic dishonesty _are not tolerated_ by University or course policy. Please see the syllabus for the full departmental and University statement on the academic code of honor. ## OBJECTIVES * Familiarize yourself with the JupyterLab environment, Markdown and Python * Familiarize yourself with Github and basic git * Explore JupyterHub Linux console integrating what you learned in the prior parts of this homework * Listen to the Talk Python['Podcast'] from June 25, 2021: A Path to Data Science Interview with Sanyam Bhutani * Explore Python for data munging and analysis, with an introduction to CSV and Pandas ## WHAT TO TURN IN You are being encouraged to turn the assignment in using the provided Jupyter Notebook. To do so, make a directory in your Lab environment called `homework/hw0`. Put all of your files in that directory. Then zip that directory, rename it with your name as the first part of the filename (e.g. `maull_hw0_files.zip`), then download it to your local machine, then upload the `.zip` to Blackboard. If you do not know how to do this, please ask, or visit one of the many tutorials out there on the basics of using zip in Linux. If you choose not to use the provided notebook, you will still need to turn in a `.ipynb` Jupyter Notebook and corresponding files according to the instructions in this homework. ## ASSIGNMENT TASKS ### (0%) Familiarize yourself with the JupyterLab environment, Markdown and Python As stated in the course announcement [Jupyter (https://jupyter.org)](https://jupyter.org) is the core platform we will be using in this course and is a popular platform for data scientists around the world. We have a JupyterLab setup for this course so that we can operate in a cloud-hosted environment, free from some of the resource constraints of running Jupyter on your local machine (though you are free to set it up on your own and seek my advice if you desire). You have been given the information about the Jupyter environment we have setup for our course, and the underlying Python environment will be using is the [Anaconda (https://anaconda.com)](https://anaconda.com) distribution. It is not necessary for this assignment, but you are free to look at the multitude of packages installed with Anaconda, though we will not use the majority of them explicitly. As you will soon find out, Notebooks are an incredibly effective way to mix code with narrative and you can create cells that are entirely code or entirely Markdown. Markdown (MD or `md`) is a highly readable text format that allows for easy documentation of text files, while allowing for HTML-based rendering of the text in a way that is style-independent. We will be using Markdown frequently in this course, and you will learn that there are many different "flavors" or Markdown. We will only be using the basic flavor, but you will benefit from exploring the "Github flavored" Markdown, though you will not be responsible for using it in this course -- only the "basic" flavor. Please refer to the original course announcement about Markdown. 
&#167; **THERE IS NOTHING TO TURN IN FOR THIS PART.** Play with and become familiar with the basic functions of the Lab environment given to you online in the course Blackboard. &#167; **PLEASE _CREATE A MARKDOWN DOCUMENT_ CALLED `semester_goals.md` WITH 3 SENTENCES/FRAGMENTS THAT ANSWER THE FOLLOWING QUESTION:** * **What do you wish to accomplish this semester in Data Mining?** Read the documentation for basic Markdown [here](https://www.markdownguide.org/basic-syntax). Turn in the text `.md` file *not* the processed `.html`. In whatever you turn in, you must show the use of *ALL* the following: * headings (one level is fine), * bullets, * bold and italics Again, the content of your document needs to address the question above and it should live in the top level directory of your assignment submission. This part will be graded but no points are awarded for your answer. ### (0%) Familiarize yourself with Github and basic git [Github (https://github.com)](https://github.com) is the _de facto_ platform for open source software in the world based on the very popular [git (https://git-scm.org)](https://git-scm.org) version control system. Git has a sophisticated set of tools for version control based on the concept of local repositories for fast commits and remote repositories only when collaboration and remote synchronization is necessary. Github enhances git by providing tools and online hosting of public and private repositories to encourage and promote sharing and collaboration. Github hosts some of the world's most widely used open source software. **If you are already familiar with git and Github, then this part will be very easy!** &#167; **CREATE A PUBLIC GITHUB REPO NAMED `"mcis6273-F21-datamining"` AND PLACE A README.MD FILE IN IT.** Create your first file called `README.md` at the top level of the repository. You can put whatever text you like in the file (If you like, use something like [lorem ipsum](https://lipsum.com/) to generate random sentences to place in the file.). Please include the link to **your** Github repository that now includes the minimal `README.md`. You don't have to have anything elaborate in that file or the repo. ### (0%) Explore JupyterHub Linux console integrating what you learned in the prior parts of this homework The Linux console in JupyterLab is a great way to perform command-line tasks and is an essential tool for basic scripting that is part of a data scientist's toolkit. Open a console in the lab environment and familiarize yourself with your files and basic commands using git as indicated below. 1. In a new JupyterLab command line console, run the `git clone` command to clone the new repository you created in the prior part. You will want to read the documentation on this command (try here [https://www.git-scm.com/docs/git-clone](https://www.git-scm.com/docs/git-clone) to get a good start). 2. Within the same console, modify your `README.md` file, check it in and push it back to your repository, using `git push`. Read the [documentation about `git push`](https://git-scm.com/docs/git-push). 3. The commands `wget` and `curl` are useful for grabbing data and files from remote resources off the web. Read the documentation on each of these commands by typing `man wget` or `man curl` in the terminal. Make sure you pipe the output to a file or use the proper flags to do so. 
&#167; **THERE IS NOTHING TO TURN IN FOR THIS PART.** ### (30%) Listen to the Talk Python['Podcast'] from June 25, 2021: A Path to Data Science Interview with Sanyam Bhutani Data science is one of the most important and "hot" disciplines today and there is a lot going on from data engineering to modeling and analysis. Bhutani is one of the top [Kaggle]() leaders and in this interview shares his experience from computer science to data science, documenting some of the lessons he learned along the way. Please listen to this one hour podcast and answer some of the questions below. You can listen to it from one of the two links below: * [Talk Python['Podcast'] landing page](https://talkpython.fm/episodes/transcript/322/a-path-into-data-science) * [direct link to mp3 file](https://downloads.talkpython.fm/podcasts/talkpython/322-starting-in-data-sci.mp3) &#167; **PLEASE ANSWER THE FOLLOWING QUESTIONS AFTER LISTENING TO THE PODCAST:** 1. List 3 things that you learned from this podcast? 2. What is your reaction to the podcast? Pick at least one point Sanyam brought up in the interview that you agree with and list your reason why. 3. After listening to the podcast, do you think you are more interested or less interested in a career in Data Science? ### (70%) Explore Python for data munging and analysis, with an introduction to CSV and Pandas Python's strengths shine when tasked with data munging and analysis. As we will learn throughout the course, there are a number of excellent data sources for open data of all kinds now available for the public. These open data sources are heralding the new era of transparency from all levels from small municipal data to big government data, from transportation, to science, to education. To warm up to such datasets, we will be working with an interesting dataset from the US Fish and Wildlife Service (FWS). This is a water quality data set taken from a managed national refuge in Virginia called Back Bay National Wildlife Refuge, which was established in 1938. As a function of being managed by the FWS, water quality samples are taken regularly from the marshes within the refuge. You can (and should) learn a little more about Back Bay from this link, since it has an interesting history, features and wildlife. * [https://www.fws.gov/refuge/Back_Bay/about.html](https://www.fws.gov/refuge/Back_Bay/about.html) The data we will be looking at can be found as a direct download from data.gov, the US data repository where many datasets from a variety of sources can be found -- mostly related to the multitude of US government agencies. The dataset is a small water quality dataset with several decades of water quality data from Back Bay. We will be warming up to this dataset with a basic investigation into the shape, content and context of the data contained therein. In this part of the assignment, we will make use of Python libraries to pull the data from the endpoint and use [Pandas](https://pandas.pydata.org) to plot the data. 
The raw CSV data is readily imported into Pandas from the following URL: * [FWS Water Quality Data 12/20/2020](https://catalog.data.gov/dataset/water-quality-data/resource/f4d736fd-ade9-4e3f-b8e0-ae7fd98b2f87) Please take a look at the page; on it you will notice a link to the raw CSV file: * [https://ecos.fws.gov/ServCat/DownloadFile/173741?Reference=117348](https://ecos.fws.gov/ServCat/DownloadFile/173741?Reference=117348) We are going to explore this dataset to learn a bit more about the water quality characteristics of Back Bay over the past couple decades or so. &#167; **WRITE THE CODE IN YOUR NOTEBOOK TO LOAD AND RESHAPE THE COMPLETE CSV WATER QUALITY DATASET**: You will need to perform the following steps: 1. **use the [`pandas.read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) method to load the dataset** into a Pandas DataFrame; 2. **clean the data so that the range of years is restricted to the 20-year period from 1999 to 2018**; 3. **store the entire dataset back into a new CSV** file called `back_bay_1998-2018_clean.csv`. **HINTS:** _Here are some code hints you might like to study and use to craft a solution:_ * study [`pandas.DataFrame.query()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.query.html?highlight=query#pandas.DataFrame.query) to learn how to filter and query year ranges * study [`pandas.DataFrame.groupby()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html?highlight=groupby#pandas.DataFrame.groupby) to understand how to group data &#167; **USE PANDAS TO LOAD THE CSV DATA TO A DATAFRAME AND ANSWER THE FOLLOWING QUESTIONS:** 1. How many columns are in this dataset, and what are their names? 2. What is the mean `Dissolved Oxygen (mg/L)` over the entire dataset? 3. In which year were the most `AirTemp (C)` data points collected? 4. In which year were the fewest `AirTemp (C)` data points collected? To answer these questions, you'll need to dive further into Pandas, which is the standard tool in the Python data science stack for loading, manipulating, transforming, analyzing and preparing data as input to other tools such as [Numpy (http://www.numpy.org/)](http://www.numpy.org/), [SciKitLearn (http://scikit-learn.org/stable/index.html)](http://scikit-learn.org/stable/index.html), [NLTK (http://www.nltk.org/)](http://www.nltk.org/) and others. For this assignment, you will only need to learn how to load and select data using Pandas. * **LOADING DATA** The core data structure in Pandas is the `DataFrame`. You will need to visit the Pandas documentation [(https://pandas.pydata.org/pandas-docs/stable/reference/)](https://pandas.pydata.org/pandas-docs/stable/reference/) to learn more about the library, but to help you along with a hint, read the documentation on the [`pandas.read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) method. * **SELECTING DATA** The [tutorial here on indexing and selecting](http://pandas.pydata.org/pandas-docs/stable/indexing.html) should be of great use in understanding how to index and select subsets of the data to answer the questions. * **GROUPING DATA** You may use [`DataFrame.value_counts()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.value_counts.html?highlight=value_counts#pandas.DataFrame.value_counts) or [`DataFrame.groupby()`](https://pandas.pydata.org/pandas-docs/stable/reference/groupby.html) to group the data you need for these questions.
You will also find [`DataFrame.groupby()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html?highlight=groupby#pandas.DataFrame.groupby) and [`DataFrame.describe()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.describe.html?highlight=describe#pandas.DataFrame.describe) very useful. **CODE HINTS** Here is example code that should give you clues about the structure of your code for this part. ```python import pandas as pd df = pd.read_csv('your_csv_file.csv') # code for question 1 ... and so on ``` &#167; **EXPLORING WATER SALINITY IN THE DATA** The Back Bay refuge is on the eastern coast of Virginia and to the east is the Atlantic Ocean. Salinity is a measure of the salt concentration of water, and you can learn a little more about salinity in water [here](https://www.usgs.gov/special-topic/water-science-school/science/saline-water-and-salinity?qt-science_center_objects=0#qt-science_center_objects). You will notice that there is a `Site_Id` variable in the data, which we will find refers to the five sampling locations (see the [documentation here](https://ecos.fws.gov/ServCat/Reference/Profile/117348)) of (1) the Bay, (2) D-Pool (fishing pond), (3) C-Pool, (4) B-Pool and (5) A-Pool. The ppt in `Salinity (ppt)` is parts per thousand, so 1 ppt is equivalent to 1000 ppm salinity. Use this information to answer the following questions. 1. Which sampling location has the highest mean ppt? What is the equivalent ppm? 2. When looking at the mean ppt, which location would you infer is furthest from the influence of ocean water inflows? (Assume that higher salinity correlates to closer proximity to the ocean.) 3. Dig a little deeper into #2 and explain why there may be some uncertainty in your answer (hint: certainty is improved by consistency in data). 4. Use the data to determine the correlation between `Salinity (ppt)` and `pH (standard units)`. Use [`DataFrame.corr()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.corr.html?highlight=correlate). You just need to report the correlation value.
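To tie the hints above together, here is a minimal sketch of the whole workflow. The local filename and the `Read_Date` column used to derive the year are assumptions (inspect `df.columns` in your own notebook and adjust); everything else uses only the column names quoted in the questions.

```python
import pandas as pd

# load the raw CSV downloaded from the FWS link above (placeholder filename)
df = pd.read_csv("back_bay_water_quality.csv")

# ASSUMPTION: the sampling date is in a column named "Read_Date"; derive the year from it
df["Year"] = pd.to_datetime(df["Read_Date"]).dt.year

# restrict to the 20-year period 1999-2018 and write the cleaned file back out
clean = df.query("Year >= 1999 and Year <= 2018")
clean.to_csv("back_bay_1998-2018_clean.csv", index=False)

# question-style summaries
print(len(df.columns), list(df.columns))                    # how many columns, and their names
print(df["Dissolved Oxygen (mg/L)"].mean())                 # mean dissolved oxygen
print(df.groupby("Year")["AirTemp (C)"].count().idxmax())   # year with the most AirTemp readings
print(df.groupby("Year")["AirTemp (C)"].count().idxmin())   # year with the fewest AirTemp readings

# salinity: mean ppt per sampling site, and the salinity/pH correlation
print(clean.groupby("Site_Id")["Salinity (ppt)"].mean().sort_values(ascending=False))
print(clean["Salinity (ppt)"].corr(clean["pH (standard units)"]))
```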
## Obligatory imports ``` import numpy as np import matplotlib.pyplot as plt import seaborn as sns import sklearn import matplotlib %matplotlib inline matplotlib.rcParams['figure.figsize'] = (12,8) matplotlib.rcParams['font.size']=20 matplotlib.rcParams['lines.linewidth']=4 matplotlib.rcParams['xtick.major.size'] = 10 matplotlib.rcParams['ytick.major.size'] = 10 matplotlib.rcParams['xtick.major.width'] = 2 matplotlib.rcParams['ytick.major.width'] = 2 ``` # We use the MNIST Dataset again ``` import IPython url = 'http://yann.lecun.com/exdb/mnist/' iframe = '<iframe src=' + url + ' width=80% height=400px></iframe>' IPython.display.HTML(iframe) ``` ## Fetch the data ``` from sklearn.datasets import fetch_openml # fetch_mldata was removed from scikit-learn, so we use fetch_openml instead mnist = fetch_openml('mnist_784', version=1, as_frame=False, data_home='../day4/data/') mnist.data = mnist.data / 255.0 # scale pixel values to [0, 1] so the vmin/vmax used below make sense allimages = mnist.data allimages.shape all_image_labels = mnist.target.astype(int) # fetch_openml returns string labels, convert them to integers set(all_image_labels) ``` ## Check out the data ``` digit1 = mnist.data[0,:].reshape(28,-1) # arr.reshape(4, -1) is equivalent to arr.reshape(4, 7), if arr has size 28 fig, ax = plt.subplots(figsize=(1.5, 1.5)) ax.imshow(digit1, vmin=0, vmax=1) ``` # Theoretical background **Warning: math ahead** <img src="images/logreg_schematics.svg" alt="logreg-schematics" style="width: 50%;"/> ## Taking logistic regression a step further: neural networks <img src="images/mlp_schematics.svg" alt="nn-schematics" style="width: 50%;"/> ### How do (artificial) neural networks predict a label from features? * The *input layer* has **dimension = number of features.** * For each training example, each feature value is "fed" into the input layer. * Each "neuron" in the hidden layer receives a weighted sum of the features: the weights are initialized to random values in the beginning, and the network "learns" from the dataset and tunes these weights. Each hidden neuron then passes this weighted sum through an "activation function", e.g. the logistic (sigmoid) or tanh function: ![actfunc](https://upload.wikimedia.org/wikipedia/commons/thumb/c/cb/Activation_tanh.svg/320px-Activation_tanh.svg.png) * The output is, again, a weighted sum of the values at each hidden neuron. * There can be *more than one hidden layer*, in which case the output of the first hidden layer becomes the input of the second hidden layer. ### Regularization Like logistic regression and SVM, neural networks can also be improved with regularization. For scikit-learn, the relevant tunable parameter is `alpha` (as opposed to `gamma` for LR and SVM). Furthermore, it has default value 0.0001, unlike gamma, for which it is 1. ### Separate the data into training data and test data ``` len(allimages) ``` ### Sample the data, 70000 is too many images to handle on a single PC ``` len(allimages) size_desired_dataset = 2000 sample_idx = np.random.choice(len(allimages), size_desired_dataset, replace=False) # sample without replacement so no image is drawn twice images = allimages[sample_idx, :] image_labels = all_image_labels[sample_idx] set(image_labels) image_labels.shape ``` ### Partition into training and test set *randomly* **As a rule of thumb, an 80/20 split between training/test dataset is often recommended.** See below for cross validation and how that changes this rule of thumb. ``` from sklearn.model_selection import train_test_split training_data, test_data, training_labels, test_labels = train_test_split(images, image_labels, train_size=0.8) ``` **Importance of normalization** If Feature A is in the range [0,1] and Feature B is in [10000,50000], SVM (in fact, most of the classifiers) will suffer inaccuracy.
The solution is to *normalize* (AKA "feature scaling") each feature to the same interval e.g. [0,1] or [-1, 1]. **scipy provides a standard function for this:** ``` from sklearn.preprocessing import StandardScaler scaler = StandardScaler() # Fit only to the training data: IMPORTANT scaler.fit(training_data) from sklearn.neural_network import MLPClassifier clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter = 5000) clf.fit(scaler.transform(training_data), training_labels) clf.score(scaler.transform(training_data), training_labels), clf.score(scaler.transform(test_data), test_labels) ``` ### Visualize the hidden layer: ``` # source: # #http://scikit-learn.org/stable/auto_examples/neural_networks/plot_mnist_filters.html fig, axes = plt.subplots(4, 4, figsize=(15,15)) # use global min / max to ensure all weights are shown on the same scale vmin, vmax = clf.coefs_[0].min(), clf.coefs_[0].max() for coef, ax in zip(clf.coefs_[0].T, axes.ravel()): ax.matshow(coef.reshape(28, 28), cmap=plt.cm.gray, vmin=.5 * vmin, vmax=.5 * vmax) ax.set_xticks(()) ax.set_yticks(()) plt.show() ``` Not bad, but is it better than Logistic regression? Check out with Learning curves: ``` from sklearn.model_selection import learning_curve import pandas as pd curve = learning_curve(clf, scaler.transform(images), image_labels) train_sizes, train_scores, test_scores = curve train_scores = pd.DataFrame(train_scores) train_scores.loc[:,'train_size'] = train_sizes test_scores = pd.DataFrame(test_scores) test_scores.loc[:,'train_size'] = train_sizes train_scores = pd.melt(train_scores, id_vars=['train_size'], value_name = 'CrossVal score') test_scores = pd.melt(test_scores, id_vars=['train_size'], value_name = 'CrossVal score') matplotlib.rcParams['figure.figsize'] = (12,8) sns.tsplot(train_scores, time = 'train_size', unit='variable', value = 'CrossVal score') sns.tsplot(test_scores, time = 'train_size', unit='variable', value = 'CrossVal score', color='g') plt.ylim(0,1.1) ``` Not really, we can try to improve it with parameter space search. ## Parameter space search with `GridSearchCV` ``` from sklearn.model_selection import GridSearchCV clr = MLPClassifier() clf = GridSearchCV(clr, {'alpha':np.logspace(-8, -1, 2)}) clf.fit(scaler.transform(images), image_labels) clf.best_params_ clf.best_score_ nn_tuned = clf.best_estimator_ nn_tuned.fit(scaler.transform(training_data), training_labels) curve = learning_curve(nn_tuned, scaler.transform(images), image_labels) train_sizes, train_scores, test_scores = curve train_scores = pd.DataFrame(train_scores) train_scores.loc[:,'train_size'] = train_sizes test_scores = pd.DataFrame(test_scores) test_scores.loc[:,'train_size'] = train_sizes train_scores = pd.melt(train_scores, id_vars=['train_size'], value_name = 'CrossVal score') test_scores = pd.melt(test_scores, id_vars=['train_size'], value_name = 'CrossVal score') matplotlib.rcParams['figure.figsize'] = (12,8) sns.tsplot(train_scores, time = 'train_size', unit='variable', value = 'CrossVal score') sns.tsplot(test_scores, time = 'train_size', unit='variable', value = 'CrossVal score', color='g') plt.ylim(0,1.1) plt.legend() ``` The increase in accuracy is miniscule. 
## Multi layered NN's ``` from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(images) images_normed = scaler.transform(images) clr = MLPClassifier(hidden_layer_sizes=(25,25)) clf = GridSearchCV(clr, {'alpha':np.logspace(-80, -1, 3)}) clf.fit(images_normed, image_labels) clf.best_score_ clf.best_params_ nn_tuned = clf.best_estimator_ nn_tuned.fit(scaler.transform(training_data), training_labels) curve = learning_curve(nn_tuned, images_normed, image_labels) train_sizes, train_scores, test_scores = curve train_scores = pd.DataFrame(train_scores) train_scores.loc[:,'train_size'] = train_sizes test_scores = pd.DataFrame(test_scores) test_scores.loc[:,'train_size'] = train_sizes train_scores = pd.melt(train_scores, id_vars=['train_size'], value_name = 'CrossVal score') test_scores = pd.melt(test_scores, id_vars=['train_size'], value_name = 'CrossVal score') matplotlib.rcParams['figure.figsize'] = (12, 8) sns.tsplot(train_scores, time = 'train_size', unit='variable', value = 'CrossVal score') sns.tsplot(test_scores, time = 'train_size', unit='variable', value = 'CrossVal score', color='g') plt.ylim(0,1.1) plt.legend() ``` Hmm... multi-hidden layer NN's seem to be much harder to tune. Maybe we need to try with wider range of parameters for Gridsearch? Finding optimum parameters for advanced classifiers is not always so straightforward, and quite often the most time consuming part. This so-called **Hyperparameter optimization** is a topic in itself, and has numerous approaches and libraries. * [http://neupy.com/2016/12/17/hyperparameter_optimization_for_neural_networks.html](http://neupy.com/2016/12/17/hyperparameter_optimization_for_neural_networks.html) * [Practical Bayesian Optimization of Machine Learning Algorithms](https://dash.harvard.edu/handle/1/11708816) **sklearn's neural network functionality is rather limited.** More advanced toolboxes for neural networks: * [keras](https://keras.io/) * [tensorflow](https://www.tensorflow.org/) * [Theano](http://deeplearning.net/software/theano/) # Exercise ## iris dataset Train a neural network on the `iris` dataset and run cross validation. Do not forget to normalize the featurs. Compare the results against LogisticRegression. Use Grid search to tune the NN further. ## Further reading * http://www.ritchieng.com/applying-machine-learning/
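As a starting point for the iris exercise above, here is a minimal, untuned sketch; the hidden layer size and the `alpha` grid are placeholders, not recommendations. Wrapping the scaler and the classifier in a pipeline keeps the normalization honest during cross validation, because the scaler is re-fit on each training fold.

```
from sklearn.datasets import load_iris
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, GridSearchCV
import numpy as np

X, y = load_iris(return_X_y=True)

# scaler + classifier in one estimator, so cross validation normalizes each fold correctly
nn = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(10,), max_iter=5000))
lr = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))

print("NN cross-val accuracy:", cross_val_score(nn, X, y, cv=5).mean())
print("LR cross-val accuracy:", cross_val_score(lr, X, y, cv=5).mean())

# tune the network's regularization strength with a grid search
grid = GridSearchCV(nn, {'mlpclassifier__alpha': np.logspace(-5, 1, 7)}, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```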
<center> <h1>Numerical Methods -- Assignment 5</h1> </center> ## Problem1 -- Energy density The matter and radiation density of the universe at redshift $z$ is $$\Omega_m(z) = \Omega_{m,0}(1+z)^3$$ $$\Omega_r(z) = \Omega_{r,0}(1+z)^4$$ where $\Omega_{m,0}=0.315$ and $\Omega_r = 9.28656 \times 10^{-5}$ ### (a) Plot ``` %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import numpy as np z = np.linspace(-1000,4000,10000) O_m0 = 0.315 O_r0 = 9.28656e-5 O_m = O_m0*np.power(z+1,3) O_r = O_r0*np.power(z+1,4) #define where the roots are x1 = -1; x2 = O_m0/O_r0 y1 = O_m0*np.power(x1+1,3) y2 = O_m0*np.power(x2+1,3) x = np.array([x1,x2]) y = np.array([y1,y2]) #plot the results plt.figure(figsize=(8,8)) plt.plot(z,O_m,'-',label="matter density") plt.plot(z,O_r,'-',label="radiation density") plt.plot(x,y,'h',label=r"$z_{eq}$") plt.xlabel("redshift(z)") plt.ylabel("energy density") plt.legend() plt.show() ``` ### (b) Analytical solution An analytical solution can be found by equating the two equations. Since $z$ denotes for the redshift and it has a physical meaning, so it must take a real value for it to have a meaning. Thus \begin{align*} \Omega_m(z) &= \Omega_r(z)\\ \Omega_{m,0}(1+z)^3 &= \Omega_{r,0}(1+z)^4\\ (1+z)^3(0.315-9.28656 \times 10^{-5} z)&=0\\ (1+z)^3 &= 0\\ or \ (0.315-9.28656 \times 10^{-5} (z+1))&=0\\ \end{align*} $z_1 = -1$ or $z_2 = 3391.0$ ### (c) Bisection method The bisection method in mathematics is a root-finding method that repeatedly bisects an interval and then selects a subinterval in which a root must lie for further processing. It is a very simple and robust method, but it is also relatively slow. scipy.optimize.bisect calculates the roots for a given function, but for it to work $f(a)$ and $f(b)$ must take different signs (so that there exists a root $\in [a,b]$). ``` from scipy.optimize import bisect def f(z): O_m0 = 0.315 O_r0 = 9.28656e-5 O_m = O_m0*np.power(z+1,3) O_r = O_r0*np.power(z+1,4) return O_m -O_r z1 = bisect(f,-1000,0,xtol=1e-10) z2 = bisect(f,0,4000,xtol=1e-10) print "The roots are found to be:",z1,z2 ``` ### (d) Secant method The $\textit{secant method}$ uses secant lines to find the root. A secant line is a straight line that intersects two points of a curve. In the secant method, a line is drawn between two points on the continuous function such that it extends and intersects the $x$ axis. A secant line $y$ is drawn from $f(b)$ to $f(a)$ and intersects at point $c$ on the $x$ axis such that $$y = \frac{f(b)-f(a)}{b-a}(c-b)+f(b)$$ The solution is therefore $$c = b-f(b)\frac{b-a}{f(b)-f(a)}$$ ``` def secant(f, x0, x1, eps): f_x0 = f(x0) f_x1 = f(x1) iteration_counter = 0 while abs(f_x1) > eps and iteration_counter < 100: try: denominator = float(f_x1 - f_x0)/(x1 - x0) x = x1 - float(f_x1)/denominator except ZeroDivisionError: print "Error! - denominator zero for x = ", x sys.exit(1) # Abort with error x0 = x1 x1 = x f_x0 = f_x1 f_x1 = f(x1) iteration_counter += 1 # Here, either a solution is found, or too many iterations if abs(f_x1) > eps: iteration_counter = -1 return x, iteration_counter #find the roots in the nearby region, with an accuracy of 1e-10 z1 = secant(f,-10,-0.5,1e-10)[0] z2 = secant(f,3000,4000,1e-10)[0] print "The roots are found to be:",z1,z2 ``` ### (e) Newton-Raphson method In numerical methods, $\textit{Newton-Raphson method}$ is a method for finding successively better approximations to the roots of a real-valued function. 
The algorithm is as follows: * Starting with a function $f$ defined over the real number $x$, the function's derivative $f'$, and an initial guess $x_0$ for a root of the fucntion $f$, then a better approximation $x_1$ is: $$x_1 = x_0 -\frac{f(x_0)}{f'(x_0)}$$ * The process is then repeated as $$x_{n+1} = x_n-\frac{f(x_n)}{f'(x_n)}$$ until a sufficiently satisfactory value is reached. ``` def fprime(z): O_m0 = 0.315 O_r0 = 9.28656e-5 O_m = O_m0*np.power(z+1,2) O_r = O_r0*np.power(z+1,3) return 3*O_m -4*O_r def Newton(f, dfdx, x, eps): f_value = f(x) iteration_counter = 0 while abs(f_value) > eps and iteration_counter < 100: try: x = x - float(f_value)/dfdx(x) except ZeroDivisionError: print "Error! - derivative zero for x = ", x sys.exit(1) # Abort with error f_value = f(x) iteration_counter += 1 # Here, either a solution is found, or too many iterations if abs(f_value) > eps: iteration_counter = -1 return x, iteration_counter z1 = Newton(f,fprime,0,1e-10)[0] z2 = Newton(f,fprime,3000,1e-10)[0] print "The roots are found to be:",z1,z2 ``` Now, change the initial guess far from the values obtained from (b). And test how the three algorithms perform respectively. ``` #test how the bisection method perform import time start1 = time.time() z1 = bisect(f,-1000,1000,xtol=1e-10) end1 = time.time() start2 = time.time() z2 = bisect(f,3000,10000,xtol=1e-10) end2 = time.time() err1 = abs((z1-(-1))/(-1)) err2 = abs((z2-(O_m0/O_r0-1))/(O_m0/O_r0-1)) print "The roots are found to be:",z1,z2 print "With a deviation of:",err1,err2 print "Time used are:",end1-start1,end2-start2 #test how the secant method perform start1 = time.time() z1 = secant(f,-1000,1000,1e-10)[0] end1 = time.time() start2 = time.time() z2 = secant(f,3000,10000,1e-10)[0] end2 = time.time() err1 = abs((z1-(-1))/(-1)) err2 = abs((z2-(O_m0/O_r0-1))/(O_m0/O_r0-1)) print "The roots are found to be:",z1,z2 print "With a deviation of:",err1,err2 print "Time used are:",end1-start1,end2-start2 print "Roots found after",secant(f,-10,-0.5,1e-10)[1],"and",secant(f,3000,4000,1e-10)[1],"loops" #test how the newton-Raphson method perform start1 = time.time() z1 = Newton(f,fprime,-1000,1e-10)[0] end1 = time.time() start2 = time.time() z2 = Newton(f,fprime,10000,1e-10)[0] end2 = time.time() err1 = abs((z1-(-1))/(-1)) err2 = abs((z2-(O_m0/O_r0-1))/(O_m0/O_r0-1)) print "The roots are found to be:",z1,z2 print "With a deviation of:",err1,err2 print "Time used are:",end1-start1,end2-start2 print "Roots found after",Newton(f,fprime,0,1e-10)[1],"and",Newton(f,fprime,3000,1e-10)[1],"loops" ``` It is not difficult to find out that tested with the function given, bisection method is the fastest and the most reliable method in finding the first root; however, in determining the second root, both the secant method and Newton's method showed better performance, with zero deviation from the actual value, and a much faster run time. But in general, when dealing with more complicated calculations, bisection method is relatively slow. But within a given tolerance Newton's method and secant method may probably show better performance. ## Problem 2 -- Potential $\textit{Navarro-Frenk-White}$ and $\textit{Hernquist}$ potential can be expressed as the following equations: $$\Phi_{NFW}(r) = \Phi_0\frac{r_s}{r}\,ln(1+r/r_s)$$ $$\Phi_{Hernquist}(r) = -\Phi_0\,\frac{1}{2(1+r/r_s)}$$ with $\Phi_0 = 1.659 \times 10^4 \ km^2/s^2$ and $r_s = 15.61 \ kpc$. 
The apocentre and pericentre can be found by solving the following equation: \begin{align*} E_{tot} &= \frac{1}{2}\left(v_t^2+v_r^2\right)+\Phi\\ \end{align*} where $L = J_r=rv_r$ is the angular momentum in the radial direction, and $E_{tot}$ is the total energy of the elliptical orbit and can be found by $(r,v_t,v_r)$ of a given star. Define the residue function $$R\equiv E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$$ so that the percenter and apocenter can be found when $R=0$. Then, the radial action $J_r$ is defined as $\textit{(Jason L. Sanders,2015)}$ $$J_r = \frac{1}{\pi}\int_{r_p}^{r_a}dr\sqrt{2E-2\Phi-\frac{L^2}{r^2}}$$ where $r_p$ is the pericentric radius and $r_a$ is the apocentric radius. ``` import matplotlib.pyplot as plt import numpy as np from scipy.optimize import newton from scipy.integrate import quad from math import * r = np.array([7.80500, 15.6100,31.2200,78.0500,156.100]) #r in kpc vt = np.array([139.234,125.304,94.6439,84.5818,62.8640]) # vt in km/s vr = np.array([-15.4704,53.7018,-283.932,-44.5818,157.160]) # vr in km/s #NFW profile potential def NFW(r): phi0 = 1.659e4 rs = 15.61 ratio = rs/r return -phi0*ratio*np.log(1+1/ratio) #Hernquist profile potential def H(r): phi0 = 1.659e4 rs = 15.61 ratio = r/rs return -phi0/(2*(1+ratio)) #1st derivative of Hernquist profile potential def H_d(r): phi0 = 1.659e4 rs = 15.61 ratio = r/rs return phi0*0.5/rs*((1+ratio)**(-2)) #1st derivative of NFW profile potential def NFW_d(r): phi0 = 1.659e4 rs = 15.61 ratio = rs/r return -phi0*rs*((-1/r**2)*np.log(1+1/ratio)+1/(r*rs)*(1+1/ratio)**(-1)) #total energy, NFW profile def E_NFW(r,vt,vr): E = 0.5*(vt**2+vr**2)+NFW(r) return E #total energy, Hernquist profile def E_H(r,vt,vr): E = 0.5*(vt**2+vr**2)+H(r) return E #Residue function def Re(r,Energy,momentum,p): return Energy - 0.5*(momentum/r)**2-p #Residue function for NFW profile def R_NFW(r,Energy,momentum): return Energy - 0.5*(momentum/r)**2-NFW(r) #Residue function for Hernquist profile def R_H(r,Energy,momentum): return Energy - 0.5*(momentum/r)**2-H(r) #derivative of residue of NFW profile def R_dNFW(r,Energy,momentum): return Energy*0+momentum**2*r**(-3)-NFW_d(r) #derivative of residue of Hernquist profile def R_dH(r,Energy,momentum): return Energy*0+momentum**2*r**(-3)-H_d(r) #second derivative of residue of Hernquist profile, come handy if the #calculated value for pericentre for Hernquist profile is too far off #from the value calculated for NFW profile def R_ddH(r,Energy,momentum): phi0 = 1.659e4 rs = 15.61 ratio = r/rs return Energy*0-3*momentum**2*r**(-4)+phi0*0.5/rs**2*((1+ratio)**(-3)) #function that defines the radial action def r_actionNFW(r,Energy,momentum): return np.sqrt(2*(Energy-NFW(r))-(momentum/r)**2)/pi def r_actionH(r,Energy,momentum): return np.sqrt(2*(Energy-H(r))-(momentum/r)**2)/pi R1 = np.linspace(7,400,1000) R2 = np.linspace(10,500,1000) R3 = np.linspace(7,600,1000) R4 = np.linspace(50,800,1000) R5 = np.linspace(50,1500,1000) Momentum = r*vt Energy_nfw = E_NFW(r,vt,vr) Energy_h = E_H(r,vt,vr) #plot results for 5 stars #1st star i = 0 R_nfw = Re(R1,Energy_nfw[i],Momentum[i],NFW(R1)) R_h = Re(R1,Energy_h[i],Momentum[i],H(R1)) plt.figure(figsize=(15,10)) plt.plot(R1,R_nfw,ls='-',label="NFW",color='#9370db',lw=2) plt.plot(R1,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2) plt.rc('xtick', labelsize=15) # fontsize of the tick labels plt.rc('ytick', labelsize=15) plt.axhline(y=0,color='#b22222',lw=3) plt.title(r"1st star, $R= E_{tot} - 
\frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20) plt.xlabel("r(kpc)",fontsize=15) plt.ylabel("Residue",fontsize=15) z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW) z2 = newton(R_NFW,100,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW) e1 = R_NFW(z1,Energy_nfw[i],Momentum[i]) e2 = R_NFW(z2,Energy_nfw[i],Momentum[i]) print "The pericentre and apocentre are found to be:",z1,"kpc","and",z2,"kpc","for the NFW profile" plt.plot(z1,e1,marker='d',label='pericentre-NFW') plt.plot(z2,e2,marker='d',label='apocentre-NFW') z3 = newton(R_H,10,args=(Energy_h[i],Momentum[i]),fprime=R_dH) e3 = Re(z1,Energy_h[i],Momentum[i],H(z1)) print "The pericentre is found to be:",z3,"kpc","for the Hernquist profile" plt.plot(z1,e1,marker='o',label='pericentre-H') plt.legend(fontsize=15) plt.show() J_NFW = quad(r_actionNFW,z1,z2,args=(Energy_nfw[i],Momentum[i])) print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc" J_H = quad(r_actionH,z3,np.inf,args=(Energy_h[i],Momentum[i])) print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc" #2nd star i = 1 R_nfw = Re(R2,Energy_nfw[i],Momentum[i],NFW(R2)) R_h = Re(R2,Energy_h[i],Momentum[i],H(R2)) plt.figure(figsize=(15,10)) plt.plot(R2,R_nfw,ls='-',label="NFW",color='#9370db',lw=2) plt.plot(R2,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2) plt.rc('xtick', labelsize=15) # fontsize of the tick labels plt.rc('ytick', labelsize=15) plt.axhline(y=0,color='#b22222',lw=3) plt.title(r"2nd star, $R= E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20) plt.xlabel("r(kpc)",fontsize=15) plt.ylabel("Residue",fontsize=15) z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW) z2 = newton(R_NFW,400,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW) e1 = R_NFW(z1,Energy_nfw[i],Momentum[i]) e2 = R_NFW(z2,Energy_nfw[i],Momentum[i]) print "The pericentre and apocentre are found to be:",z1,"kpc","and",z2,"kpc","for the NFW profile" plt.plot(z1,e1,marker='d',label='pericentre-NFW') plt.plot(z2,e2,marker='d',label='apocentre-NFW') z3 = newton(R_H,10,args=(Energy_h[i],Momentum[i]),fprime=R_dH) e3 = Re(z1,Energy_h[i],Momentum[i],H(z1)) print "The pericentre is found to be:",z3,"kpc","for the Hernquist profile" plt.plot(z3,e3,marker='o',label='pericentre-H') plt.legend(fontsize=15) plt.show() J_NFW = quad(r_actionNFW,z1,z2,args=(Energy_nfw[i],Momentum[i])) print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc" J_H = quad(r_actionH,z3,np.inf,args=(Energy_h[i],Momentum[i])) print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc" #3rd star i = 2 R_nfw = Re(R3,Energy_nfw[i],Momentum[i],NFW(R3)) R_h = Re(R3,Energy_h[i],Momentum[i],H(R3)) plt.figure(figsize=(15,10)) plt.plot(R3,R_nfw,ls='-',label="NFW",color='#9370db',lw=2) plt.plot(R3,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2) plt.rc('xtick', labelsize=15) # fontsize of the tick labels plt.rc('ytick', labelsize=15) plt.axhline(y=0,color='#b22222',lw=3) plt.title(r"3rd star, $R= E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20) plt.xlabel("r(kpc)",fontsize=15) plt.ylabel("Residue",fontsize=15) z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW) e1 = R_NFW(z1,Energy_nfw[i],Momentum[i]) print "The pericentre is found to be:",z1,"kpc","for the NFW profile" plt.plot(z1,e1,marker='d',label='pericentre-NFW') z2 = newton(R_H,10,args=(Energy_h[i],Momentum[i]),fprime=R_dH,fprime2=R_ddH) e2 = R_H(z2,Energy_h[i],Momentum[i]) print "The pericentre is found to be:",z2,"kpc","for the Hernquist 
profile" plt.plot(z2,e2,marker='o',label='pericentre-H') plt.legend(fontsize=15) plt.show() J_NFW = quad(r_actionNFW,z1,np.inf,args=(Energy_nfw[i],Momentum[i])) print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc" J_H = quad(r_actionH,z2,np.inf,args=(Energy_h[i],Momentum[i])) print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc" #4th star i = 3 R_nfw = Re(R4,Energy_nfw[i],Momentum[i],NFW(R4)) R_h = Re(R4,Energy_h[i],Momentum[i],H(R4)) plt.figure(figsize=(15,10)) plt.plot(R4,R_nfw,ls='-',label="NFW",color='#9370db',lw=2) plt.plot(R4,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2) plt.rc('xtick', labelsize=15) # fontsize of the tick labels plt.rc('ytick', labelsize=15) plt.axhline(y=0,color='#b22222',lw=3) plt.title(r"4th star, $R= E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20) plt.xlabel("r(kpc)",fontsize=15) plt.ylabel("Residue",fontsize=15) z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW) z2 = newton(R_NFW,400,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW) e1 = R_NFW(z1,Energy_nfw[i],Momentum[i]) e2 = R_NFW(z2,Energy_nfw[i],Momentum[i]) print "The pericentre and apocentre are found to be:",z1,"kpc","and",z2,"kpc","for the NFW profile" plt.plot(z1,e1,marker='d',label='pericentre-NFW') plt.plot(z2,e2,marker='d',label='apocentre-NFW') z3 = newton(R_H,50,args=(Energy_h[i],Momentum[i]),fprime=R_dH,fprime2=R_ddH) e3 = R_H(z3,Energy_h[i],Momentum[i]) print "The pericentre is found to be:",z3,"kpc","for the Hernquist profile" plt.plot(z1,e1,marker='o',label='pericentre-H') plt.legend(fontsize=15) plt.show() J_NFW = quad(r_actionNFW,z1,z2,args=(Energy_nfw[i],Momentum[i])) print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc" J_H = quad(r_actionH,z3,np.inf,args=(Energy_h[i],Momentum[i])) print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc" #5th star i = 4 R_nfw = Re(R5,Energy_nfw[i],Momentum[i],NFW(R5)) R_h = Re(R5,Energy_h[i],Momentum[i],H(R5)) plt.figure(figsize=(15,10)) plt.plot(R5,R_nfw,ls='-',label="NFW",color='#9370db',lw=2) plt.plot(R5,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2) plt.rc('xtick', labelsize=15) # fontsize of the tick labels plt.rc('ytick', labelsize=15) plt.axhline(y=0,color='#b22222',lw=3) plt.title(r"5th star, $R= E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20) plt.xlabel("r(kpc)",fontsize=15) plt.ylabel("Residue",fontsize=15) z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW) e1 = R_NFW(z1,Energy_nfw[i],Momentum[i]) print "The pericentre is found to be:",z1,"kpc","for the NFW profile" plt.plot(z1,e1,marker='d',label='pericentre-NFW') z2 = newton(R_H,50,args=(Energy_h[i],Momentum[i]),fprime=R_dH,fprime2=R_ddH) e2 = R_H(z2,Energy_h[i],Momentum[i]) print "The pericentre is found to be:",z2,"kpc","for the Hernquist profile" plt.plot(z1,e1,marker='o',label='pericentre-H') plt.legend(fontsize=15) plt.show() J_NFW = quad(r_actionNFW,z1,np.inf,args=(Energy_nfw[i],Momentum[i])) print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc" J_H = quad(r_actionH,z2,np.inf,args=(Energy_h[i],Momentum[i])) print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc" ``` The table below lists all the parameters of the five stars. 
<img src="Desktop/table.png"> ## Problem 3 -- System of equations $$f(x,y) = x^2+y^2-50=0$$ $$g(x,y) = x \times y -25 = 0$$ ### (a) Analytical solution First $f(x,y)-2g(x,y)$,we find: \begin{align*} x^2+y^2-2xy &=0\\ (x-y)^2 &= 0\\ x&=y \end{align*} Then $f(x,y)+2g(x,y)$,we find: \begin{align*} x^2+y^2+2xy &=100\\ (x+y)^2 &= 100\\ x,y = 5,5 \ &or -5,-5 \end{align*} ### (b) Newton's method Newton-Raphson method can also be applied to solve multivariate systems. The algorithm is simply as follows: * Suppose we have an N-D multivariate system of the form: \begin{cases} f_1(x_1,...,x_N)=f_1(\mathbf{x})=0\\ f_2(x_1,...,x_N)=f_2(\mathbf{x})=0\\ ...... \\ f_N(x_1,...,x_N)=f_N(\mathbf{x})=0\\ \end{cases} where we have defined $$\mathbf{x}=[x_1,...,x_N]^T$$ Define a vector function $$\mathbf{f}(\mathbf{x})=[f_1(\mathbf{x}),...,f_N(\mathbf{x})]^T$$ So that the equation system above can be written as $$\mathbf{f}(\mathbf{x})=\mathbf{0}$$ * $\mathbf{J}_{\mathbf{f}}(\mathbf{x})$ is the $\textit{Jacobian matrix}$ over the function vector $\mathbf{f}(\mathbf{x})$ $$\mathbf{J}_{\mathbf{f}}(\mathbf{x})=\begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \dots & \frac{\partial f_1}{\partial x_N} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_N}{\partial x_1} & \dots & \frac{\partial f_N}{\partial x_N} \end{bmatrix}$$ * If all equations are linear we have $$\mathbf{f}(\mathbf{x}+\delta \mathbf{x})=\mathbf{f}(\mathbf{x})+\mathbf{J}(\mathbf{x})\delta\mathbf{x}$$ * by assuming $\mathbf{f}(\mathbf{x}+\delta \mathbf{x})=0$, we can find the roots as $\mathbf{x}+\delta \mathbf{x}$, where $$\delta \mathbf{x} = -\mathbf{J}(\mathbf{x})^{-1}\mathbf{f}(\mathbf{x})$$ * The approximation can be improved iteratively $$\mathbf{x}_{n+1} = \mathbf{x}_n +\delta \mathbf{x}_n = \mathbf{x}_n-\mathbf{J}(\mathbf{x}_n)^{-1}\mathbf{f}(\mathbf{x}_n)$$ ``` from scipy.optimize import fsolve import numpy as np f1 = lambda x: [x[0]**2+x[1]**2-50,x[0]*x[1]-25] #the Jacobian needed to implement Newton's method fd = lambda x: np.array([[2*x[0],2*x[1]],[x[1],x[0]]]).reshape(2,2) #define the domain where we want to find the solution (x,y) a = np.linspace(-10,10,100) b = a #for every point (a,b), pass on to fsolve and append the result #then round the result and see how many pairs of solutions there are i = 0 result = np.array([[5,5]]) #print result for a,b in zip(a,b): x = fsolve(f1,[a,b],fprime=fd) x = np.round(x) result = np.append(result,[x],axis=0) print "The sets of solutions are found to be:",np.unique(result,axis=0) ``` From above we learn that the solutions are indeed left with $(x,y) = (5,5)$ or $(x,y) = (-5,-5)$ ### (c) Convergence ``` %config InlineBackend.figure_format = 'retina' import numpy as np from scipy.optimize import fsolve import matplotlib.pyplot as plt def f(x, y): return x**2+y**2-50; def g(x, y): return x*y-25 x = np.linspace(-6, 6, 500) @np.vectorize def fy(x): x0 = 0.0 def tmp(y): return f(x, y) y1, = fsolve(tmp, x0) return y1 @np.vectorize def gy(x): x0 = 0.0 def tmp(y): return g(x, y) y1, = fsolve(tmp, x0) return y1 plt.plot(x, fy(x), x, gy(x)) plt.xlabel('x') plt.ylabel('y') plt.rc('xtick', labelsize=10) # fontsize of the tick labels plt.rc('ytick', labelsize=10) plt.legend(['fy', 'gy']) plt.show() #print fy(x) i =1 I = np.array([]) F = np.array([]) G = np.array([]) X_std = np.array([]) Y_std = np.array([]) while i<50: x_result = fsolve(f1,[-100,-100],maxfev=i) f_result = f(x_result[0],x_result[1]) g_result = g(x_result[0],x_result[1]) x1_std = abs(x_result[0]+5.0) x2_std = abs(x_result[1]+5.0) F = 
np.append(F,f_result) G = np.append(G,g_result) I = np.append(I,i) X_std = np.append(X_std,x1_std) Y_std = np.append(Y_std,x2_std) i+=1 xtol = 1.49012e-08 plt.loglog(I,np.abs(F),I,np.abs(G)) plt.title("converge of f and g") plt.xlabel("iterations") plt.ylabel("function values") plt.legend(['f','g']) plt.show() plt.loglog(I,X_std,I,Y_std) plt.axhline(y=xtol,color='#b22222',lw=3) plt.title(r"$converge \ of \ \Delta_x \ and \ \Delta_y$") plt.xlabel("iterations") plt.ylabel("Deviation values") plt.legend([r'$\Delta x$',r'$\Delta y$','tolerance']) plt.show() ``` ### (d) Maximum iterations Now also apply the Jacobian. The Jacobian of the system of equations is simply as follows $$\mathbf{J} = \begin{bmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\ \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{bmatrix}$$ $$=\begin{bmatrix} 2x & 2y \\ y & x \end{bmatrix}$$ ``` fd = lambda x: np.array([[2*x[0],2*x[1]],[x[1],x[0]]]).reshape(2,2) i =1 I = np.array([]) F = np.array([]) G = np.array([]) X_std = np.array([]) Y_std = np.array([]) while i<50: x_result = fsolve(f1,[-100,-100],fprime=fd,maxfev=i) f_result = f(x_result[0],x_result[1]) g_result = g(x_result[0],x_result[1]) x1_std = abs(x_result[0]+5.0) x2_std = abs(x_result[1]+5.0) F = np.append(F,f_result) G = np.append(G,g_result) I = np.append(I,i) X_std = np.append(X_std,x1_std) Y_std = np.append(Y_std,x2_std) i+=1 xtol = 1.49012e-08 plt.loglog(I,np.abs(F),I,np.abs(G)) plt.title("converge of f and g") plt.xlabel("iterations") plt.ylabel("function values") plt.legend(['f','g']) plt.show() plt.loglog(I,X_std,I,Y_std) plt.axhline(y=xtol,color='#b22222',lw=3) plt.title(r"$converge \ of \ \Delta_x \ and \ \Delta_y$") plt.xlabel("iterations") plt.ylabel("Deviation values") plt.legend([r'$\Delta x$',r'$\Delta y$','tolerance']) plt.show() ``` With the Jacobian applied, it can be seen directly from the two plots above that f and g (and likewise $\Delta x$ and $\Delta y$) drop to zero and converge more quickly, reaching the tolerance much faster. This happens because, without an analytic Jacobian, the solver has to build one up numerically: it needs several function evaluations to estimate the partial derivatives from finite differences of the form $\frac{f(x+\Delta x)-f(x)}{\Delta x}$, and the accuracy of that estimate only improves as the evaluations accumulate. When the Jacobian is supplied, the first derivatives are already known exactly, so even if we start very far off the accuracy still increases exponentially (the increase in accuracy without the Jacobian is also exponential, but roughly ten times slower).
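One way to make this comparison concrete is to ask `fsolve` how much work it actually did: with `full_output=True` it also returns an info dictionary whose `nfev` entry counts function evaluations. A small sketch reusing the `f1` and `fd` defined above:

```
from scipy.optimize import fsolve

# compare the work needed with and without the analytic Jacobian,
# starting from the same far-away initial guess
for use_jac in (False, True):
    x, info, ier, msg = fsolve(f1, [-100, -100],
                               fprime=fd if use_jac else None,
                               full_output=True)
    print("Jacobian supplied: %s, root: %s, function evaluations: %d"
          % (use_jac, x, info["nfev"]))
```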
``` import matplotlib matplotlib.use('nbagg') import matplotlib.animation as anm import matplotlib.pyplot as plt import math import matplotlib.patches as patches import numpy as np class World: ### fig:world_init_add_timespan (lines 1-5) def __init__(self, time_span, time_interval, debug=False): self.objects = [] self.debug = debug self.time_span = time_span # added self.time_interval = time_interval # added def append(self,obj): # function for registering objects self.objects.append(obj) def draw(self): ### fig:world_draw_with_timesapn (lines 1, 10, 21-26, 28-34) fig = plt.figure(figsize=(4,4)) # prepare a 4x4 inch figure ax = fig.add_subplot(111) # prepare a subplot ax.set_aspect('equal') # make the aspect ratio match the coordinate values ax.set_xlim(-5,5) # draw the X axis over the range -5 m to 5 m ax.set_ylim(-5,5) # same for the Y axis ax.set_xlabel("X",fontsize=10) # label the X axis ax.set_ylabel("Y",fontsize=10) # likewise for the Y axis elems = [] if self.debug: for i in range(int(self.time_span/self.time_interval)): self.one_step(i, elems, ax) else: self.ani = anm.FuncAnimation(fig, self.one_step, fargs=(elems, ax), frames=int(self.time_span/self.time_interval)+1, interval=int(self.time_interval*1000), repeat=False) plt.show() def one_step(self, i, elems, ax): while elems: elems.pop().remove() time_str = "t = %.2f[s]" % (self.time_interval*i) # the string displayed as the current time elems.append(ax.text(-4.4, 4.5, time_str, fontsize=10)) for obj in self.objects: obj.draw(ax, elems) if hasattr(obj, "one_step"): obj.one_step(self.time_interval) # changed class IdealRobot: ### fig:robot_camera (lines 1,2,8,28-29,49-53) def __init__(self, pose, agent=None, sensor=None, color="black"): # arguments added self.pose = pose self.r = 0.2 self.color = color self.agent = agent self.poses = [pose] self.sensor = sensor # added def draw(self, ax, elems): x, y, theta = self.pose xn = x + self.r * math.cos(theta) yn = y + self.r * math.sin(theta) elems += ax.plot([x,xn], [y,yn], color=self.color) c = patches.Circle(xy=(x, y), radius=self.r, fill=False, color=self.color) elems.append(ax.add_patch(c)) self.poses.append(self.pose) elems += ax.plot([e[0] for e in self.poses], [e[1] for e in self.poses], linewidth=0.5, color="black") if self.sensor and len(self.poses) > 1: # added self.sensor.draw(ax, elems, self.poses[-2]) # added @classmethod def state_transition(self, nu, omega, time, pose): t0 = pose[2] if math.fabs(omega) < 1e-10: return pose + np.array( [nu*math.cos(t0), nu*math.sin(t0), omega ] ) * time else: return pose + np.array( [nu/omega*(math.sin(t0 + omega*time) - math.sin(t0)), nu/omega*(-math.cos(t0 + omega*time) + math.cos(t0)), omega*time ] ) def one_step(self, time_interval): if not self.agent: return obs = self.sensor.data(self.pose) if self.sensor else None # added nu, omega = self.agent.decision(obs) # argument added self.pose = self.state_transition(nu, omega, time_interval, self.pose) class Agent: def __init__(self, nu, omega): self.nu = nu self.omega = omega def decision(self, observation=None): return self.nu, self.omega class Landmark: def __init__(self, x, y): self.pos = np.array([x, y]).T self.id = None def draw(self, ax, elems): c = ax.scatter(self.pos[0], self.pos[1], s=100, marker="*", label="landmarks", color="orange") elems.append(c) elems.append(ax.text(self.pos[0], self.pos[1], "id:" + str(self.id), fontsize=10)) class Map: def __init__(self): # prepare an empty list of landmarks self.landmarks = [] def append_landmark(self, landmark): # add a landmark landmark.id = len(self.landmarks) # give the added landmark an ID self.landmarks.append(landmark) def draw(self, ax, elems): # drawing (call each Landmark's draw in turn) for lm in self.landmarks: lm.draw(ax, elems) class IdealCamera: ### fig:camera2 (lines 1-4, 6, 12-13, 26-32) def 
__init__(self, env_map): self.map = env_map self.lastdata = [] # added def data(self, cam_pose): observed = [] for lm in self.map.landmarks: z = self.observation_function(cam_pose, lm.pos) observed.append((z, lm.id)) self.lastdata = observed # added return observed @classmethod def observation_function(cls, cam_pose, obj_pos): diff = obj_pos - cam_pose[0:2] phi = math.atan2(diff[1], diff[0]) - cam_pose[2] while phi >= np.pi: phi -= 2*np.pi while phi < -np.pi: phi += 2*np.pi return np.array( [np.hypot(*diff), phi ] ).T def draw(self, ax, elems, cam_pose): # added ###camera2_1 for lm in self.lastdata: x, y, theta = cam_pose distance, direction = lm[0][0], lm[0][1] lx = x + distance * math.cos(direction + theta) ly = y + distance * math.sin(direction + theta) elems += ax.plot([x,lx], [y,ly], color="pink") world = World(10, 0.1, debug=False) ### fig:sensor_drawing (lines 10-19) ### Create a map and add three landmarks ### m = Map() m.append_landmark(Landmark(2,-2)) m.append_landmark(Landmark(-1,-3)) m.append_landmark(Landmark(3,3)) world.append(m) ### Create the robots ### straight = Agent(0.2, 0.0) circling = Agent(0.2, 10.0/180*math.pi) robot1 = IdealRobot( np.array([ 2, 3, math.pi/6]).T, sensor=IdealCamera(m), agent=straight ) # camera added to the arguments, tidied up robot2 = IdealRobot( np.array([-2, -1, math.pi/5*6]).T, sensor=IdealCamera(m), agent=circling, color="red") # robot3 has been removed world.append(robot1) world.append(robot2) ### Run the animation ### world.draw() cam = IdealCamera(m) p = cam.data(robot2.pose) print(p) ```
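Since `state_transition` is a classmethod, it can also be checked on its own before running the animation; a small sketch of the two motion cases it distinguishes (straight-line motion and a constant-curvature arc):

```
import math
import numpy as np

pose0 = np.array([0.0, 0.0, 0.0]).T

# straight motion: 0.1 m/s for 10 s with zero angular velocity
print(IdealRobot.state_transition(0.1, 0.0, 10.0, pose0))

# arc motion: 0.1 m/s while turning 10 deg/s for 10 s
print(IdealRobot.state_transition(0.1, 10.0/180*math.pi, 10.0, pose0))
```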
# Exploratory Data Analysis ``` from pyspark import SparkContext, SparkConf from pyspark.sql import SparkSession from pyspark.sql.types import * from pyspark.sql import functions as F spark = SparkSession.builder.master('local[1]').appName("Jupyter").getOrCreate() sc = spark.sparkContext #test if this works import pandas as pd import numpy as np import matplotlib.pyplot as plt import scipy import datetime # the more advanced python visualization library import seaborn as sns # apply style to all the charts sns.set_style('whitegrid') pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) # create sparksession spark = SparkSession \ .builder \ .appName("Pysparkexample") \ .config("spark.some.config.option", "some-value") \ .getOrCreate() ``` # Load Data ``` #collisions = spark.read.csv('data/accidents.csv', header='true', inferSchema = True) #collisions.show(2) df_new = spark.read.csv('data/accidents_new.csv', header='true', inferSchema = True) ``` # Data Perspective _____ * One variable * Numeric variables: * continuous * discrete * Categorical variables: * ordinal * nominal * Multiple variables: * Numeric x Numeric * Categorical x Numeric * Categorical x Categorical ____________________ # Overview ``` print('The total number of rows : ', df_new.count(), '\nThe total number of columns :', len(df_new.columns)) ``` # Data Schema Print the data schema for our dataset - SAAQ Accident Information ``` df_new.printSchema() # Create temporary table query with SQL df_new.createOrReplaceTempView('AccidentData') accidents_limit_10 = spark.sql( ''' SELECT * FROM AccidentData LIMIT 10 ''' ).toPandas() accidents_limit_10 ``` # One Variable __________ ## a. Numeric - Data Totals Totals for various accident records ``` from pyspark.sql import functions as func #df_new.agg(func.sum("NB_BLESSES_VELO").alias('Velo'),func.sum("NB_VICTIMES_MOTO"),func.sum("NB_VEH_IMPLIQUES_ACCDN")).show() df_new.agg(func.sum("NB_VEH_IMPLIQUES_ACCDN").alias('Ttl Cars In Accidents')).show() df_new.agg(func.sum("NB_VICTIMES_TOTAL").alias('Ttl Victims')).show() df_new.agg(func.sum("NB_MORTS").alias('Ttl Deaths')).show() df_new.agg(func.sum("NB_BLESSES_GRAVES").alias('Ttl Severe Injuries')).show() df_new.agg(func.sum("NB_BLESS_LEGERS").alias('Ttl Light Injuries')).show() df_new.agg(func.sum("NB_DECES_PIETON").alias('Ttl Pedestrian Deaths')).show() df_new.agg(func.sum("NB_BLESSES_PIETON").alias('Ttl Pedestrian Injuries')).show() df_new.agg(func.sum("NB_VICTIMES_PIETON").alias('Ttl Pedestrian Victims')).show() df_new.agg(func.sum("NB_DECES_MOTO").alias('Ttl Moto Deaths')).show() df_new.agg(func.sum("NB_BLESSES_MOTO").alias('Ttl Moto Injuries')).show() df_new.agg(func.sum("NB_VICTIMES_MOTO").alias('Ttl Moto Victims')).show() df_new.agg(func.sum("NB_DECES_VELO").alias('Ttl Bike Deaths')).show() df_new.agg(func.sum("NB_BLESSES_VELO").alias('Ttl Bike Injuries')).show() df_new.agg(func.sum("NB_VICTIMES_VELO").alias('Ttl Bike Victims')).show() df_new.agg(func.sum("nb_automobile_camion_leger").alias('Ttl Car - Light Trucks')).show() df_new.agg(func.sum("nb_camionLourd_tractRoutier").alias('Ttl Heavy Truck - Tractor')).show() df_new.agg(func.sum("nb_outil_equipement").alias('Ttl Equipment - Tools')).show() df_new.agg(func.sum("nb_tous_autobus_minibus").alias('Ttl Bus')).show() df_new.agg(func.sum("nb_bicyclette").alias('Ttl Bikes')).show() df_new.agg(func.sum("nb_cyclomoteur").alias('Ttl Motorized Bike')).show() df_new.agg(func.sum("nb_motocyclette").alias('Ttl Motorcycle')).show() 
df_new.agg(func.sum("nb_taxi").alias('Ttl Taxi')).show() df_new.agg(func.sum("nb_urgence").alias('Ttl Emergency')).show() df_new.agg(func.sum("nb_motoneige").alias('Ttl Snowmobile')).show() df_new.agg(func.sum("nb_VHR").alias('Ttl Motorhome')).show() df_new.agg(func.sum("nb_autres_types").alias('Ttl Other Types')).show() df_new.agg(func.sum("nb_veh_non_precise").alias('Ttl Non Specified Vehicles')).show() df_totals = pd.DataFrame(columns=['Attr','Total']) #df_totals.append({'Attr':'NB_VEH_IMPLIQUES_ACCDN','Total':df_new.agg(func.sum("NB_VEH_IMPLIQUES_ACCDN"))},ignore_index=True) #df_totals ``` ## b. Categorical ### GRAVITE - severity of the accident ``` gravite_levels = spark.sql( ''' SELECT GRAVITE, COUNT(*) as Total FROM AccidentData GROUP BY GRAVITE ORDER BY Total DESC ''' ).toPandas() gravite_levels # Pie Chart fig,ax = plt.subplots(1,1,figsize=(12,6)) wedges, texts, autotexts = ax.pie(gravite_levels['Total'], radius=2, #labeldistance=2, pctdistance=1.1, autopct='%1.2f%%', startangle=90) ax.legend(wedges, gravite_levels['GRAVITE'], title="GRAVITE", loc="center left", bbox_to_anchor=(1, 0, 0.5, 1)) plt.setp(autotexts, size=12, weight="bold") ax.axis('equal') plt.tight_layout() plt.savefig('figures/gravite_levels.png') plt.show() ``` ### METEO - Weather Conditions ``` meteo_conditions = spark.sql( ''' SELECT METEO, COUNT(*) as Total FROM AccidentData GROUP BY METEO ORDER BY Total DESC ''' ).toPandas() meteo_conditions['METEO'] = meteo_conditions['METEO'].replace( {11:'Clear',12:'Overcast: cloudy/dark',13:'Fog/mist', 14:'Rain/bruine',15:'Heavy rain',16:'Strong wind', 17:'Snow/storm',18:'Blowing snow/blizzard', 19:'Ice',99:'Other..'}) meteo_conditions fig,ax = plt.subplots(1,1,figsize=(10,6)) plt.bar(meteo_conditions['METEO'], meteo_conditions['Total'], align='center', alpha=0.7, width=0.7, color='purple') plt.setp(ax.get_xticklabels(), rotation=30, horizontalalignment='right') fig.tight_layout() plt.savefig('figures/meteo_conditions.png') plt.show() ``` # Multiple Variables ____________ ## Numeric X Categorical ### 1. Accident Victims by Municipality ``` victims_by_municipality = spark.sql( ''' SELECT MUNCP, SUM(NB_VICTIMES_TOTAL) as Total FROM AccidentData GROUP BY MUNCP ORDER BY Total DESC ''' ).toPandas() victims_by_municipality fig,ax = plt.subplots(1,1,figsize=(10,6)) victims_by_municipality.plot(x = 'MUNCP', y = 'Total', kind = 'barh', color = 'C0', ax = ax, legend = False) ax.set_xlabel('Total Victims', size = 16) ax.set_ylabel('Municipality', size = 16) plt.savefig('figures/victims_by_municipality.png') plt.show() victims_by_region = spark.sql( ''' SELECT MUNCP, SUM(NB_VICTIMES_TOTAL) as Total FROM AccidentData GROUP BY MUNCP ''' ).toPandas() plt.figure(figsize = (10,6)) sns.distplot(np.log(victims_by_region['Total'])) plt.title('Total Victims Histogram by Region', size = 16) plt.ylabel('Density', size = 16) plt.xlabel('Log Total', size = 16) plt.savefig('figures/distplot.png') plt.show() ``` ### 2. 
Total Collisions by Day of Week ``` collisions_by_day = spark.sql( ''' SELECT WEEK_DAY, COUNT(WEEK_DAY) as Number_of_Collisions FROM AccidentData GROUP BY WEEK_DAY ORDER BY Number_of_Collisions DESC ''' ).toPandas() collisions_by_day fig,ax = plt.subplots(1,1,figsize=(10,6)) collisions_by_day.plot(x = 'WEEK_DAY', y = 'Number_of_Collisions', kind = 'barh', color = 'C0', ax = ax, legend = False) ax.set_xlabel('Number_of_Collisions', size = 16) ax.set_ylabel('WEEK_DAY', size = 16) plt.savefig('figures/collisions_by_day.png') plt.show() ``` #### "VE", Friday has the highest number of collisions. ### 3. Top 10 Accidents by street ``` accidents_by_street = spark.sql( ''' SELECT STREET, COUNT(STREET) as Number_of_Accidents FROM AccidentData GROUP BY STREET ORDER BY Number_of_Accidents DESC LIMIT 10 ''' ).toPandas() fig,ax = plt.subplots(1,1,figsize=(10,6)) #accidents_by_street.plot(x = 'STREET', y = 'Number_of_Accidents', kind = 'barh', color = 'C0', ax = ax, legend = False) sns.barplot(x=accidents_by_street['Number_of_Accidents'], y=accidents_by_street['STREET'], orient='h') ax.set_xlabel('Number of Accidents', size = 16) ax.set_ylabel('Street', size = 16) plt.savefig('figures/accidents_by_street.png') plt.show() ``` ## Numeric X Numeric ### Correlation Heatmap Illustrates the corellation between numeric variables of the dataset. ``` plot_df = spark.sql( ''' SELECT METEO, SURFACE, LIGHT, TYPE_ACCDN, NB_MORTS, NB_BLESSES_GRAVES, NB_VEH_IMPLIQUES_ACCDN, NB_VICTIMES_TOTAL FROM AccidentData ''' ).toPandas() corrmat = plot_df.corr() f, ax = plt.subplots(figsize=(10, 7)) sns.heatmap(corrmat, vmax=.8, square=True) plt.savefig('figures/heatmap.png') plt.show() ``` ## Categorical X Categorical
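This closing section is only sketched here: one natural categorical-by-categorical view is a contingency table of accident severity (`GRAVITE`) against weather conditions (`METEO`), shown as a heatmap. The figure filename is an assumption, chosen to match the naming pattern used above.

```
gravite_meteo = spark.sql(
    '''
    SELECT GRAVITE, METEO, COUNT(*) as Total
    FROM AccidentData
    GROUP BY GRAVITE, METEO
    '''
).toPandas()

# pivot into a GRAVITE x METEO contingency table and plot it
pivot = gravite_meteo.pivot(index='GRAVITE', columns='METEO', values='Total').fillna(0)

fig, ax = plt.subplots(figsize=(10, 7))
sns.heatmap(pivot, annot=True, fmt='.0f', ax=ax)
plt.savefig('figures/gravite_by_meteo.png')
plt.show()
```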
``` import pandas as pd import os import csv import numpy as np import seaborn as sns import matplotlib.pyplot as plt data_dir = '/home/steffi/dev/data/ExpW/ExpwCleaned' labels_csv = '/home/steffi/dev/data/ExpW/labels_clean.csv' expw = pd.read_csv(labels_csv, delimiter=',') expw.head() expressions = expw.iloc[:, 2:] expression_mapping = { 0: "angry", 1: "disgust", 2: "fear", 3: "happy", 4: "sad", 5: "surprise", 6: "neutral" } ``` ## Hist plot for all values (even though 0 is actually useless) ``` fig, ax = plt.subplots(figsize=(10,10)) ax.hist(expressions, alpha=0.5, label=expressions.columns) ax.legend() ``` #### Filter out all values that are equal to 0 ``` expressions.value_counts(sort=True) ``` #### ===> columns contempt, unkown and NF can be dropped ``` expressions_drop = expressions.drop(columns=["unknown", "contempt", "NF"]) exp_nan = expressions_drop.replace(0, np.NaN) exp_stacked = exp_nan.stack(dropna=True) exp_unstacked = exp_stacked.reset_index(level=1) expressions_single = exp_unstacked.rename(columns={"level_1": "expression"}).drop(columns=[0]) expressions_single.head() ``` ### Append expressions to expw ``` expw["expression"] = expressions_single["expression"] expw.head() ``` ### Remove unnecessary columns ``` expw_minimal = expw.drop(expw.columns[1:-1], axis=1) expw_minimal.loc[:, "Image name"] = data_dir + "/" + expw_minimal["Image name"].astype(str) expw_minimal.shape ``` ### Histogram of expression distribution ``` x_ticks = [f"{idx} = {expr}, count: {count}" for idx, (expr, count) in enumerate(zip(list(expressions_single.value_counts().index.get_level_values(0)), expressions_single.value_counts().values))] x_ticks ax = expressions_single.value_counts().plot(kind='barh') ax.set_yticklabels(x_ticks) ``` ### Create a csv file with all absolute image paths for annotating with FairFace ``` col_name = "img_path" image_names = expw[["Image name"]] image_names.head() image_names.rename(columns={"Image name": "img_path"}, inplace=True) image_names.loc[:, "img_path"] = data_dir + "/" + image_names["img_path"].astype(str) save_path = "/home/steffi/dev/independent_study/FairFace/expw_image_paths.csv" image_names.to_csv(save_path, index=False) ``` ### Filter only img_paths which contain "black", "African", "chinese", "asian" ``` black = image_names.loc[image_names.img_path.str.contains('(black)'), :] african = image_names.loc[image_names.img_path.str.contains('(African)'), :] asian = image_names.loc[image_names.img_path.str.contains('(asian)'), :] chinese = image_names.loc[image_names.img_path.str.contains('(chinese)'), :] filtered = pd.concat([black, african, asian, chinese]) ``` ### Filter and save subgroups ##### Anger ``` black_angry_annoyed = black.loc[image_names.img_path.str.contains('(angry)|(annoyed)'), :] black_angry_annoyed.to_csv("/home/steffi/dev/independent_study/FairFace/black_angry_annoyed.csv", index=False) black_angry_annoyed.head() african_angry_annoyed = african.loc[image_names.img_path.str.contains('(angry)|(annoyed)'), :] african_angry_annoyed.to_csv("/home/steffi/dev/independent_study/FairFace/african_angry_annoyed.csv", index=False) african_angry_annoyed.shape asian_angry_annoyed = asian.loc[image_names.img_path.str.contains('(angry)|(annoyed)'), :] asian_angry_annoyed asian_angry_annoyed.to_csv("/home/steffi/dev/independent_study/FairFace/asian_angry_annoyed.csv", index=False) chinese_angry_annoyed = chinese.loc[image_names.img_path.str.contains('(angry)|(annoyed)'), :] chinese_angry_annoyed.head() 
chinese_angry_annoyed.to_csv("/home/steffi/dev/independent_study/FairFace/chinese_angry_annoyed.csv", index=False) ``` ##### Surprise ``` black_awe_astound_amazed = black.loc[image_names.img_path.str.contains('(awe)|(astound)|(amazed)'), :] black_awe_astound_amazed black_awe_astound_amazed.to_csv("/home/steffi/dev/independent_study/FairFace/black_awe_astound_amazed.csv", index=False) african_awe = african.loc[image_names.img_path.str.contains('(awe)'), :] african_awe african_awe.to_csv("/home/steffi/dev/independent_study/FairFace/african_awe.csv", index=False) asian_astound = asian.loc[image_names.img_path.str.contains('(astound)'), :] asian_astound.to_csv("/home/steffi/dev/independent_study/FairFace/asian_astound.csv", index=False) asian_astound chinese_astound = chinese.loc[image_names.img_path.str.contains('(astound)'), :] chinese_astound.to_csv("/home/steffi/dev/independent_study/FairFace/chinese_astound.csv", index=False) chinese_astound ``` #### Fear ``` black_fear = black.loc[image_names.img_path.str.contains('(fear)|(frightened)|(anxious)|(shocked)'), :] black_fear.shape african_fear = african.loc[image_names.img_path.str.contains('(fear)|(frightened)|(anxious)|(shocked)'), :] black_african_fear = pd.concat([african_fear, black_fear]) black_african_fear.shape black_african_fear.to_csv("/home/steffi/dev/independent_study/FairFace/black_african_fear.csv", index=False) ``` #### Disgust ``` black_disgust = black.loc[image_names.img_path.str.contains('(distaste)|(disgust)'), :] african_digsust = african.loc[image_names.img_path.str.contains('(distaste)|(disgust)'), :] african_digsust.shape black_african_disgust = pd.concat([black_disgust, african_digsust]) pd.set_option('display.max_colwidth', -1) black_african_disgust black_african_disgust.to_csv("/home/steffi/dev/independent_study/FairFace/black_african_disgust.csv", index=False) disgust_all = image_names.loc[image_names.img_path.str.contains('(distaste)|(disgust)'), :] disgust_all.shape disgust_all ``` #### Saving all filtered to csv ``` filtered_save_path = "/home/steffi/dev/independent_study/FairFace/filtered_expw_image_paths.csv" filtered.to_csv(filtered_save_path, index=False) ```
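As a quick sanity check before handing these lists to FairFace, the size of each filtered subset built above can be tallied in one place:

```
# number of candidate image paths per keyword group and per saved subset
subset_sizes = {
    "black": len(black),
    "african": len(african),
    "asian": len(asian),
    "chinese": len(chinese),
    "black_angry_annoyed": len(black_angry_annoyed),
    "black_african_fear": len(black_african_fear),
    "black_african_disgust": len(black_african_disgust),
    "all filtered groups": len(filtered),
}
pd.Series(subset_sizes).sort_values(ascending=False)
```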
# T M V A_Tutorial_Classification_Tmva_App TMVA example, for classification with following objectives: * Apply a BDT with TMVA **Author:** Lailin XU <i><small>This notebook tutorial was automatically generated with <a href= "https://github.com/root-project/root/blob/master/documentation/doxygen/converttonotebook.py">ROOTBOOK-izer</a> from the macro found in the ROOT repository on Tuesday, April 27, 2021 at 01:21 AM.</small></i> ``` from ROOT import TMVA, TFile, TTree, TCut, TH1F, TCanvas, gROOT, TLegend from subprocess import call from os.path import isfile from array import array gROOT.SetStyle("ATLAS") ``` Setup TMVA ``` TMVA.Tools.Instance() ``` Reader. One reader for each application. ``` reader = TMVA.Reader("Color:!Silent") reader_S = TMVA.Reader("Color:!Silent") reader_B = TMVA.Reader("Color:!Silent") ``` Inputs ============= Load data An unknown sample ``` trfile = "Zp2TeV_ttbar.root" data = TFile.Open(trfile) tree = data.Get('tree') ``` Known signal ``` trfile_S = "Zp1TeV_ttbar.root" data_S = TFile.Open(trfile_S) tree_S = data_S.Get('tree') ``` Known background ``` trfile_B = "SM_ttbar.root" data_B = TFile.Open(trfile_B) tree_B = data_B.Get('tree') ``` Set input variables. Do this for each reader ``` branches = {} for branch in tree.GetListOfBranches(): branchName = branch.GetName() branches[branchName] = array('f', [-999]) tree.SetBranchAddress(branchName, branches[branchName]) if branchName not in ["mtt_truth", "weight", "nlep", "njets"]: reader.AddVariable(branchName, branches[branchName]) branches_S = {} for branch in tree_S.GetListOfBranches(): branchName = branch.GetName() branches_S[branchName] = array('f', [-999]) tree_S.SetBranchAddress(branchName, branches_S[branchName]) if branchName not in ["mtt_truth", "weight", "nlep", "njets"]: reader_S.AddVariable(branchName, branches_S[branchName]) branches_B = {} for branch in tree_B.GetListOfBranches(): branchName = branch.GetName() branches_B[branchName] = array('f', [-999]) tree_B.SetBranchAddress(branchName, branches_B[branchName]) if branchName not in ["mtt_truth", "weight", "nlep", "njets"]: reader_B.AddVariable(branchName, branches_B[branchName]) ``` Book method(s) ============= BDT ``` methodName1 = "BDT" weightfile = 'dataset/weights/TMVAClassification_{0}.weights.xml'.format(methodName1) reader.BookMVA( methodName1, weightfile ) reader_S.BookMVA( methodName1, weightfile ) reader_B.BookMVA( methodName1, weightfile ) ``` BDTG ``` methodName2 = "BDTG" weightfile = 'dataset/weights/TMVAClassification_{0}.weights.xml'.format(methodName2) reader.BookMVA( methodName2, weightfile ) reader_S.BookMVA( methodName2, weightfile ) reader_B.BookMVA( methodName2, weightfile ) ``` Loop events for evaluation ================ Book histograms ``` nbins, xmin, xmax=20, -1, 1 ``` Signal ``` tag = "S" hname="BDT_{0}".format(tag) h1 = TH1F(hname, hname, nbins, xmin, xmax) h1.Sumw2() hname="BDTG_{0}".format(tag) h2 = TH1F(hname, hname, nbins, xmin, xmax) h2.Sumw2() nevents = tree_S.GetEntries() for i in range(nevents): tree_S.GetEntry(i) BDT = reader_S.EvaluateMVA(methodName1) BDTG = reader_S.EvaluateMVA(methodName2) h1.Fill(BDT) h2.Fill(BDTG) ``` Background ``` tag = "B" hname="BDT_{0}".format(tag) h3 = TH1F(hname, hname, nbins, xmin, xmax) h3.Sumw2() hname="BDTG_{0}".format(tag) h4 = TH1F(hname, hname, nbins, xmin, xmax) h4.Sumw2() nevents = tree_B.GetEntries() for i in range(nevents): tree_B.GetEntry(i) BDT = reader_B.EvaluateMVA(methodName1) BDTG = reader_B.EvaluateMVA(methodName2) h3.Fill(BDT) h4.Fill(BDTG) ``` New sample ``` tag = "N" 
hname="BDT_{0}".format(tag) h5 = TH1F(hname, hname, nbins, xmin, xmax) h5.Sumw2() hname="BDTG_{0}".format(tag) h6 = TH1F(hname, hname, nbins, xmin, xmax) h6.Sumw2() nevents = tree.GetEntries() for i in range(nevents): tree.GetEntry(i) BDT = reader.EvaluateMVA(methodName1) BDTG = reader.EvaluateMVA(methodName2) h5.Fill(BDT) h6.Fill(BDTG) ``` Helper function to normalize hists ``` def norm_hists(h): h_new = h.Clone() hname = h.GetName() + "_normalized" h_new.SetName(hname) h_new.SetTitle(hname) ntot = h.Integral() if ntot!=0: h_new.Scale(1./ntot) return h_new ``` Plotting ``` myc = TCanvas("c", "c", 800, 600) myc.SetFillColor(0) myc.cd() ``` Compare the performance for BDT ``` nh1 = norm_hists(h1) nh1.GetXaxis().SetTitle("BDT") nh1.GetYaxis().SetTitle("A.U.") nh1.Draw("hist") nh3 = norm_hists(h3) nh3.SetLineColor(2) nh3.SetMarkerColor(2) nh3.Draw("same hist") nh5 = norm_hists(h5) nh5.SetLineColor(4) nh5.SetMarkerColor(4) nh5.Draw("same") ymin = 0 ymax = max(nh1.GetMaximum(), nh3.GetMaximum(), nh5.GetMaximum()) nh1.GetYaxis().SetRangeUser(ymin, ymax*1.5) ``` Draw legends ``` lIy = 0.92 lg = TLegend(0.60, lIy-0.25, 0.85, lIy) lg.SetBorderSize(0) lg.SetFillStyle(0) lg.SetTextFont(42) lg.SetTextSize(0.04) lg.AddEntry(nh1, "Signal 1 TeV", "l") lg.AddEntry(nh3, "Background", "l") lg.AddEntry(nh5, "Signal 2 TeV", "l") lg.Draw() myc.Draw() myc.SaveAs("TMVA_tutorial_cla_app_1.png") ``` Compare the performance for BDTG ``` nh1 = norm_hists(h2) nh1.GetXaxis().SetTitle("BDTG") nh1.GetYaxis().SetTitle("A.U.") nh1.Draw("hist") nh3 = norm_hists(h4) nh3.SetLineColor(2) nh3.SetMarkerColor(2) nh3.Draw("same hist") nh5 = norm_hists(h6) nh5.SetLineColor(4) nh5.SetMarkerColor(4) nh5.Draw("same") ymin = 0 ymax = max(nh1.GetMaximum(), nh3.GetMaximum(), nh5.GetMaximum()) nh1.GetYaxis().SetRangeUser(ymin, ymax*1.5) ``` Draw legends ``` lIy = 0.92 lg = TLegend(0.60, lIy-0.25, 0.85, lIy) lg.SetBorderSize(0) lg.SetFillStyle(0) lg.SetTextFont(42) lg.SetTextSize(0.04) lg.AddEntry(nh1, "Signal 1 TeV", "l") lg.AddEntry(nh3, "Background", "l") lg.AddEntry(nh5, "Signal 2 TeV", "l") lg.Draw() myc.Draw() myc.SaveAs("TMVA_tutorial_cla_app_2.png") ``` Draw all canvases ``` from ROOT import gROOT gROOT.GetListOfCanvases().Draw() ```
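The histograms filled above already contain everything needed for a quick working-point scan. The helper below is only a sketch (not part of the generated macro) showing how signal efficiency and background rejection could be read off the `TH1F` objects for a given cut on the MVA output:

```
def cut_scan(h_sig, h_bkg):
    # Scan every bin lower edge as a candidate cut on the MVA output.
    nbins = h_sig.GetNbinsX()
    n_sig = h_sig.Integral(1, nbins)
    n_bkg = h_bkg.Integral(1, nbins)
    for i in range(1, nbins + 1):
        cut = h_sig.GetXaxis().GetBinLowEdge(i)
        eff_sig = h_sig.Integral(i, nbins) / n_sig if n_sig else 0.0          # fraction of signal kept
        rej_bkg = 1.0 - (h_bkg.Integral(i, nbins) / n_bkg if n_bkg else 0.0)  # fraction of background removed
        print("cut > %+.2f : signal eff = %.2f, background rejection = %.2f" % (cut, eff_sig, rej_bkg))

# cut_scan(h1, h3)  # BDT: known 1 TeV signal vs. SM ttbar background
```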
# Data Processing ``` %pylab inline matplotlib.rcParams['figure.figsize'] = [20, 10] import pandas as pd import numpy as np import warnings warnings.filterwarnings("ignore") # All variables we concern about columnNames1 = ["releaseNum", "1968ID", "personNumber", "gender", "marriage", "familyNumber", "sequenceNum", "relationToHead", "age", 'employmentStatus', "education", "nonHeadlaborIncome"] columnNames2 = ["releaseNum", "1968ID", "personNumber", "gender", "marriage", "familyNumber", "sequenceNum", "relationToHead", "age", 'employmentStatus', "education"] FcolumnNames1999_2001 = ['releaseNum', 'familyID', 'composition', 'headCount', 'ageHead', 'maritalStatus', 'employmentStatus', 'liquidWealth', 'race', 'industry' ,'geoCode','incomeHead', "incomeWife", 'foodCost', 'houseCost', 'transCost', 'educationCost', 'childCost', 'healthCost', 'education', 'participation', 'investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', "wealthWithHomeEquity"] FcolumnNames2003_2007 = ['releaseNum', 'familyID', 'composition', 'headCount', 'ageHead', 'maritalStatus', 'employmentStatus', 'liquidWealth', 'race', 'industry', 'incomeHead', "incomeWife", 'foodCost', 'houseCost', 'transCost', 'educationCost', 'childCost', 'healthCost', 'geoCode', 'education', 'participation', 'investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', "wealthWithHomeEquity"] FcolumnNames2019 = ['releaseNum', 'familyID', 'composition', 'headCount', 'ageHead', 'maritalStatus', 'employmentStatus', 'liquidWealth', 'race', 'industry' ,'incomeHead', 'incomeWife', 'participation', 'investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', 'wealthWithHomeEquity', 'foodCost', 'houseCost', 'transCost', 'educationCost', 'childCost', 'healthCost', 'geoCode', 'education'] # The timeline we care about years = [1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, 2015, 2017] # The function used to complile all years data into one dataFrame, # the input "features" is a list of features. 
def compile_data_with_features(features, years): df = pd.DataFrame() # Loading the data through years for year in years: df_sub = pd.read_excel("individual/" + str(year) + ".xlsx") if year >= 2005: df_sub.columns = columnNames1 df_sub['year'] = year df = pd.concat([df, df_sub[['year'] + features + ["nonHeadlaborIncome"]]]) else: df_sub.columns = columnNames2 df_sub['year'] = year df = pd.concat([df, df_sub[['year'] + features]]) df = df.reset_index(drop = True) return df def Fcompile_data_with_features(features, years): df = pd.DataFrame() # Loading the data through years for year in years: df_sub = pd.read_excel("family/" + str(year) + ".xlsx") if year >= 1999 and year <= 2001: df_sub.columns = FcolumnNames1999_2001 elif year >= 2003 and year <= 2007: df_sub.columns = FcolumnNames2003_2007 else: df_sub.columns = FcolumnNames2019 df_sub['year'] = year df = pd.concat([df, df_sub[['familyID','year'] + features]]) df = df.reset_index(drop = True) return df # The function is used to drop the values we do not like in the dataFrame, # the input "features" and "values" are both list def drop_values(features, values, df): for feature in features: for value in values: df = df[df[feature] != value] df = df.reset_index(drop = True) return df ``` ### Individual Data ``` Idf = compile_data_with_features(["1968ID", "personNumber", "familyNumber","gender", "marriage", "age", 'employmentStatus', "education", "relationToHead"], years) Idf["ID"] = Idf["1968ID"]* 1000 + Idf["personNumber"] # pick out the head in the individual df_head = Idf[Idf["relationToHead"] == 10] df_head = df_head.reset_index(drop = True) # compile individuals with all 10 years data. completeIndividualData = [] for ID, value in df_head.groupby("ID"): if len(value) == len(years): completeIndividualData.append(value) print("Number of heads with complete data: ", len(completeIndividualData)) # prepare the combined dataset and set up dummy variables for qualitative data df = Fcompile_data_with_features(['composition', 'headCount', 'ageHead', 'maritalStatus', 'employmentStatus', 'liquidWealth', 'race', 'industry' ,'geoCode','incomeHead', "incomeWife", 'foodCost', 'houseCost', 'transCost', 'educationCost', 'childCost', 'healthCost', 'education', 'participation', 'investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', "wealthWithHomeEquity"], years) ``` ### Family Data ``` # prepare the combined dataset and set up dummy variables for qualitative data df = Fcompile_data_with_features(['composition', 'headCount', 'ageHead', 'maritalStatus', 'employmentStatus', 'liquidWealth', 'race', 'industry' ,'geoCode','incomeHead', "incomeWife", 'foodCost', 'houseCost', 'transCost', 'educationCost', 'childCost', 'healthCost', 'education', 'participation', 'investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', "wealthWithHomeEquity"], years) df = drop_values(["ageHead"],[999], df) df = drop_values(["maritalStatus"],[8,9], df) df = drop_values(["employmentStatus"],[0, 22, 98, 99], df) df = drop_values(["liquidWealth"],[999999998,999999999], df) df = drop_values(["race"],[0,8,9], df) df = drop_values(["industry"],[999,0], df) df = drop_values(["education"],[99,0], df) df["totalExpense"] = df[['foodCost', 'houseCost', 'transCost', 'educationCost', 'childCost', 'healthCost']].sum(axis = 1) df["laborIncome"] = df["incomeHead"] + df["incomeWife"] df["costPerPerson"] = df["totalExpense"]/df["headCount"] maritalStatus = ["Married", "neverMarried", "Widowed", "Divorced", "Separated"] employmentStatus = ["Working", "temporalLeave", "unemployed", "retired", 
"disabled", "keepHouse", "student", "other"] race = ["White", "Black","AmericanIndian","Asian","Latino","otherBW","otherRace"] # Education # < 8th grade: middle school # >= 8 and < 12: high scho0l # >=12 and < 15: college # >= 15 post graduate education = ["middleSchool", "highSchool", "college", "postGraduate"] # Industry # < 400 manufacturing # >= 400 and < 500 publicUtility # >= 500 and < 680 retail # >= 680 and < 720 finance # >= 720 and < 900 service # >= 900 otherIndustry industry = ["manufacturing", "publicUtility", "retail", "finance", "service", "otherIndustry"] data = [] for i in range(len(df)): dataCollect = [] # marital status dataCollect.append(maritalStatus[int(df.iloc[i]["maritalStatus"]-1)]) # employment dataCollect.append(employmentStatus[int(df.iloc[i]["employmentStatus"]-1)]) # race dataCollect.append(race[int(df.iloc[i]["race"] - 1)]) # Education variable if df.iloc[i]["education"] < 8: dataCollect.append(education[0]) elif df.iloc[i]["education"] >= 8 and df.iloc[i]["education"] < 12: dataCollect.append(education[1]) elif df.iloc[i]["education"] >= 12 and df.iloc[i]["education"] < 15: dataCollect.append(education[2]) else: dataCollect.append(education[3]) # industry variable if df.iloc[i]["industry"] < 400: dataCollect.append(industry[0]) elif df.iloc[i]["industry"] >= 400 and df.iloc[i]["industry"] < 500: dataCollect.append(industry[1]) elif df.iloc[i]["industry"] >= 500 and df.iloc[i]["industry"] < 680: dataCollect.append(industry[2]) elif df.iloc[i]["industry"] >= 680 and df.iloc[i]["industry"] < 720: dataCollect.append(industry[3]) elif df.iloc[i]["industry"] >= 720 and df.iloc[i]["industry"] < 900: dataCollect.append(industry[4]) else: dataCollect.append(industry[5]) data.append(dataCollect) # Categorical dataFrame df_cat = pd.DataFrame(data, columns = ["maritalStatus", "employmentStatus", "race", "education", "industry"]) Fdf = pd.concat([df[["familyID", "year",'composition', 'headCount', 'ageHead', 'liquidWealth', 'laborIncome', "costPerPerson","totalExpense", 'participation', 'investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', "wealthWithHomeEquity"]], df_cat[["maritalStatus", "employmentStatus", "education","race", "industry"]]], axis=1) # Adjust for inflation. 
years = [1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, 2015, 2017] values_at2020 = np.array([1.55, 1.46, 1.40, 1.32, 1.24, 1.20, 1.15, 1.11, 1.09, 1.05]) values_at2005 = values_at2020/1.32 values_at2005 quantVariables = ['annuityIRA', 'investmentAmount', 'liquidWealth', 'laborIncome', 'costPerPerson','costPerPerson', 'totalExpense', 'wealthWithoutHomeEquity', 'wealthWithHomeEquity'] for i in range(len(Fdf)): for variable in quantVariables: Fdf.at[i, variable] = round(Fdf.at[i, variable] * values_at2005[years.index(Fdf.at[i,"year"])], 2) ``` ### Link Family Data with Individual Head Data ``` completeFamilyData = [] for individual in completeIndividualData: idf = pd.DataFrame() for i in range(len(individual)): idf = pd.concat([idf, Fdf[(Fdf.year == individual.iloc[i].year)& (Fdf.familyID == individual.iloc[i].familyNumber)]]) completeFamilyData.append(idf.set_index("year", drop = True)) FamilyData = [f for f in completeFamilyData if len(f) == len(years)] len(FamilyData) # skilled definition with college and postGraduate skilled_index = [] for i in range(1973): if "postGraduate" in FamilyData[i].education.values or "college" in FamilyData[i].education.values: skilled_index.append(i) len(skilled_index) # skilled definition with postGraduate skilled_index = [] for i in range(1973): if "postGraduate" in FamilyData[i].education.values: skilled_index.append(i) len(skilled_index) # working in the finance industry finance_index = [] for i in range(1973): if "finance" in FamilyData[i].industry.values: finance_index.append(i) len(finance_index) a = FamilyData[randint(0, 1973)] a ``` # Individual plot ``` def inFeaturePlot(FamilyData, feature, n): plt.figure() for i in range(n[0],n[1]): FamilyData[i][feature].plot(marker='o') plt.show() def plotFeatureVsAge(FamilyData, feature, n): plt.figure() for i in range(n[0],n[1]): plt.plot(FamilyData[i].ageHead, FamilyData[i][feature], marker = 'o') plt.show() inFeaturePlot(FamilyData,"laborIncome" , [1,100]) ``` # Average variable plot ``` def plotFeature(FamilyData, feature): df = FamilyData[0][feature] * 0 for i in range(len(FamilyData)): df = df + FamilyData[i][feature] df = df/len(FamilyData) df.plot(marker='o') print(df) # laborIncome plotFeature(FamilyData, "laborIncome") # laborIncome plotFeature(FamilyData, "investmentAmount") # Expenditure plotFeature(FamilyData, "totalExpense") # wealthWithoutHomeEquity plotFeature(FamilyData, "wealthWithoutHomeEquity") # wealthWithHomeEquity plotFeature(FamilyData, "wealthWithHomeEquity") plotFeature(FamilyData, "annuityIRA") ``` ## Compare The Distribution Over Age ``` df = Fdf[(Fdf["ageHead"]>=20) & (Fdf["ageHead"]<=80)] df[['liquidWealth', 'laborIncome', 'costPerPerson', 'totalExpense','investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', 'wealthWithHomeEquity']] = df[['liquidWealth', 'laborIncome', 'costPerPerson', 'totalExpense','investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', 'wealthWithHomeEquity']]/1000 df.shape df.columns ww = df.groupby("ageHead")["liquidWealth"].mean() nn = df.groupby("ageHead")["annuityIRA"].mean() cc = df.groupby("ageHead")["totalExpense"].mean() kk = df.groupby("ageHead")["investmentAmount"].mean() ytyt = df.groupby("ageHead")["laborIncome"].mean() plt.figure(figsize = [14,8]) plt.plot(ww, label = "wealth") plt.plot(cc, label = "Consumption") plt.plot(kk, label = "Stock") plt.legend() plt.plot(nn, label = "IRA") np.save('nn',nn) ```
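The last few cells repeat the same group-by-age-and-average pattern for each variable. A small helper along these lines (a sketch only; the function name and figure size are not from the original notebook) keeps the age-profile plots consistent:

```
import matplotlib.pyplot as plt

def plot_mean_by_age(data, variables, age_col="ageHead"):
    # Plot the mean of each variable by age of the household head.
    plt.figure(figsize=(14, 8))
    for var in variables:
        plt.plot(data.groupby(age_col)[var].mean(), label=var)
    plt.xlabel("Age of head")
    plt.ylabel("Mean value (thousands of 2005 dollars)")
    plt.legend()
    plt.show()

# Same comparison as above, in one call:
# plot_mean_by_age(df, ["liquidWealth", "totalExpense", "investmentAmount", "laborIncome"])
```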
# RadarCOVID-Report ## Data Extraction ``` import datetime import json import logging import os import shutil import tempfile import textwrap import uuid import matplotlib.pyplot as plt import matplotlib.ticker import numpy as np import pandas as pd import retry import seaborn as sns %matplotlib inline current_working_directory = os.environ.get("PWD") if current_working_directory: os.chdir(current_working_directory) sns.set() matplotlib.rcParams["figure.figsize"] = (15, 6) extraction_datetime = datetime.datetime.utcnow() extraction_date = extraction_datetime.strftime("%Y-%m-%d") extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1) extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d") extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H") current_hour = datetime.datetime.utcnow().hour are_today_results_partial = current_hour != 23 ``` ### Constants ``` from Modules.ExposureNotification import exposure_notification_io spain_region_country_code = "ES" germany_region_country_code = "DE" default_backend_identifier = spain_region_country_code backend_generation_days = 7 * 2 daily_summary_days = 7 * 4 * 3 daily_plot_days = 7 * 4 tek_dumps_load_limit = daily_summary_days + 1 ``` ### Parameters ``` environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER") if environment_backend_identifier: report_backend_identifier = environment_backend_identifier else: report_backend_identifier = default_backend_identifier report_backend_identifier environment_enable_multi_backend_download = \ os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD") if environment_enable_multi_backend_download: report_backend_identifiers = None else: report_backend_identifiers = [report_backend_identifier] report_backend_identifiers environment_invalid_shared_diagnoses_dates = \ os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES") if environment_invalid_shared_diagnoses_dates: invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",") else: invalid_shared_diagnoses_dates = [] invalid_shared_diagnoses_dates ``` ### COVID-19 Cases ``` report_backend_client = \ exposure_notification_io.get_backend_client_with_identifier( backend_identifier=report_backend_identifier) @retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10)) def download_cases_dataframe_from_ecdc(): return pd.read_csv( "https://opendata.ecdc.europa.eu/covid19/casedistribution/csv/data.csv") confirmed_df_ = download_cases_dataframe_from_ecdc() confirmed_df = confirmed_df_.copy() confirmed_df = confirmed_df[["dateRep", "cases", "geoId"]] confirmed_df.rename( columns={ "dateRep":"sample_date", "cases": "new_cases", "geoId": "country_code", }, inplace=True) confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True) confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d") confirmed_df.sort_values("sample_date", inplace=True) confirmed_df.tail() def sort_source_regions_for_display(source_regions: list) -> list: if report_backend_identifier in source_regions: source_regions = [report_backend_identifier] + \ list(sorted(set(source_regions).difference([report_backend_identifier]))) else: source_regions = list(sorted(source_regions)) return source_regions confirmed_days = pd.date_range( start=confirmed_df.iloc[0].sample_date, end=extraction_datetime) source_regions_at_date_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"]) 
source_regions_at_date_df["source_regions_at_date"] = \ source_regions_at_date_df.sample_date.apply( lambda x: report_backend_client.source_regions_for_date(date=x)) source_regions_at_date_df.sort_values("sample_date", inplace=True) source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \ source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x))) source_regions_at_date_df["sample_date_string"] = \ source_regions_at_date_df.sample_date.dt.strftime("%Y-%m-%d") source_regions_at_date_df.tail() source_regions_for_summary_df = \ source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy() source_regions_for_summary_df.rename(columns={"_source_regions_group": "source_regions"}, inplace=True) source_regions_for_summary_df.head() confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"] confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns) for source_regions_group, source_regions_group_series in \ source_regions_at_date_df.groupby("_source_regions_group"): source_regions_set = set(source_regions_group.split(",")) confirmed_source_regions_set_df = \ confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy() confirmed_source_regions_group_df = \ confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \ .reset_index().sort_values("sample_date") confirmed_source_regions_group_df["covid_cases"] = \ confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round() confirmed_source_regions_group_df = \ confirmed_source_regions_group_df[confirmed_output_columns] confirmed_source_regions_group_df.fillna(method="ffill", inplace=True) confirmed_source_regions_group_df = \ confirmed_source_regions_group_df[ confirmed_source_regions_group_df.sample_date.isin( source_regions_group_series.sample_date_string)] confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df) confirmed_df = confirmed_output_df.copy() confirmed_df.tail() confirmed_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True) confirmed_df.sort_values("sample_date_string", inplace=True) confirmed_df.tail() confirmed_df[["new_cases", "covid_cases"]].plot() ``` ### Extract API TEKs ``` raw_zip_path_prefix = "Data/TEKs/Raw/" fail_on_error_backend_identifiers = [report_backend_identifier] multi_backend_exposure_keys_df = \ exposure_notification_io.download_exposure_keys_from_backends( backend_identifiers=report_backend_identifiers, generation_days=backend_generation_days, fail_on_error_backend_identifiers=fail_on_error_backend_identifiers, save_raw_zip_path_prefix=raw_zip_path_prefix) multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"] multi_backend_exposure_keys_df.rename( columns={ "generation_datetime": "sample_datetime", "generation_date_string": "sample_date_string", }, inplace=True) multi_backend_exposure_keys_df.head() early_teks_df = multi_backend_exposure_keys_df[ multi_backend_exposure_keys_df.rolling_period < 144].copy() early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6 early_teks_df[early_teks_df.sample_date_string != extraction_date] \ .rolling_period_in_hours.hist(bins=list(range(24))) early_teks_df[early_teks_df.sample_date_string == extraction_date] \ .rolling_period_in_hours.hist(bins=list(range(24))) multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[ "sample_date_string", "region", "key_data"]] multi_backend_exposure_keys_df.head() active_regions = \ 
multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist() active_regions multi_backend_summary_df = multi_backend_exposure_keys_df.groupby( ["sample_date_string", "region"]).key_data.nunique().reset_index() \ .pivot(index="sample_date_string", columns="region") \ .sort_index(ascending=False) multi_backend_summary_df.rename( columns={"key_data": "shared_teks_by_generation_date"}, inplace=True) multi_backend_summary_df.rename_axis("sample_date", inplace=True) multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int) multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days) multi_backend_summary_df.head() def compute_keys_cross_sharing(x): teks_x = x.key_data_x.item() common_teks = set(teks_x).intersection(x.key_data_y.item()) common_teks_fraction = len(common_teks) / len(teks_x) return pd.Series(dict( common_teks=common_teks, common_teks_fraction=common_teks_fraction, )) multi_backend_exposure_keys_by_region_df = \ multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index() multi_backend_exposure_keys_by_region_df["_merge"] = True multi_backend_exposure_keys_by_region_combination_df = \ multi_backend_exposure_keys_by_region_df.merge( multi_backend_exposure_keys_by_region_df, on="_merge") multi_backend_exposure_keys_by_region_combination_df.drop( columns=["_merge"], inplace=True) if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1: multi_backend_exposure_keys_by_region_combination_df = \ multi_backend_exposure_keys_by_region_combination_df[ multi_backend_exposure_keys_by_region_combination_df.region_x != multi_backend_exposure_keys_by_region_combination_df.region_y] multi_backend_exposure_keys_cross_sharing_df = \ multi_backend_exposure_keys_by_region_combination_df \ .groupby(["region_x", "region_y"]) \ .apply(compute_keys_cross_sharing) \ .reset_index() multi_backend_cross_sharing_summary_df = \ multi_backend_exposure_keys_cross_sharing_df.pivot_table( values=["common_teks_fraction"], columns="region_x", index="region_y", aggfunc=lambda x: x.item()) multi_backend_cross_sharing_summary_df multi_backend_without_active_region_exposure_keys_df = \ multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier] multi_backend_without_active_region = \ multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist() multi_backend_without_active_region exposure_keys_summary_df = multi_backend_exposure_keys_df[ multi_backend_exposure_keys_df.region == report_backend_identifier] exposure_keys_summary_df.drop(columns=["region"], inplace=True) exposure_keys_summary_df = \ exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame() exposure_keys_summary_df = \ exposure_keys_summary_df.reset_index().set_index("sample_date_string") exposure_keys_summary_df.sort_index(ascending=False, inplace=True) exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True) exposure_keys_summary_df.head() ``` ### Dump API TEKs ``` tek_list_df = multi_backend_exposure_keys_df[ ["sample_date_string", "region", "key_data"]].copy() tek_list_df["key_data"] = tek_list_df["key_data"].apply(str) tek_list_df.rename(columns={ "sample_date_string": "sample_date", "key_data": "tek_list"}, inplace=True) tek_list_df = tek_list_df.groupby( ["sample_date", "region"]).tek_list.unique().reset_index() tek_list_df["extraction_date"] = 
extraction_date tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour tek_list_path_prefix = "Data/TEKs/" tek_list_current_path = tek_list_path_prefix + f"/Current/RadarCOVID-TEKs.json" tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json" tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json" for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]: os.makedirs(os.path.dirname(path), exist_ok=True) tek_list_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json( tek_list_current_path, lines=True, orient="records") tek_list_df.drop(columns=["extraction_date_with_hour"]).to_json( tek_list_daily_path, lines=True, orient="records") tek_list_df.to_json( tek_list_hourly_path, lines=True, orient="records") tek_list_df.head() ``` ### Load TEK Dumps ``` import glob def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame: extracted_teks_df = pd.DataFrame(columns=["region"]) file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json")))) if limit: file_paths = file_paths[:limit] for file_path in file_paths: logging.info(f"Loading TEKs from '{file_path}'...") iteration_extracted_teks_df = pd.read_json(file_path, lines=True) extracted_teks_df = extracted_teks_df.append( iteration_extracted_teks_df, sort=False) extracted_teks_df["region"] = \ extracted_teks_df.region.fillna(spain_region_country_code).copy() if region: extracted_teks_df = \ extracted_teks_df[extracted_teks_df.region == region] return extracted_teks_df daily_extracted_teks_df = load_extracted_teks( mode="Daily", region=report_backend_identifier, limit=tek_dumps_load_limit) daily_extracted_teks_df.head() exposure_keys_summary_df_ = daily_extracted_teks_df \ .sort_values("extraction_date", ascending=False) \ .groupby("sample_date").tek_list.first() \ .to_frame() exposure_keys_summary_df_.index.name = "sample_date_string" exposure_keys_summary_df_["tek_list"] = \ exposure_keys_summary_df_.tek_list.apply(len) exposure_keys_summary_df_ = exposure_keys_summary_df_ \ .rename(columns={"tek_list": "shared_teks_by_generation_date"}) \ .sort_index(ascending=False) exposure_keys_summary_df = exposure_keys_summary_df_ exposure_keys_summary_df.head() ``` ### Daily New TEKs ``` tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply( lambda x: set(sum(x, []))).reset_index() tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True) tek_list_df.head() def compute_teks_by_generation_and_upload_date(date): day_new_teks_set_df = tek_list_df.copy().diff() try: day_new_teks_set = day_new_teks_set_df[ day_new_teks_set_df.index == date].tek_list.item() except ValueError: day_new_teks_set = None if pd.isna(day_new_teks_set): day_new_teks_set = set() day_new_teks_df = daily_extracted_teks_df[ daily_extracted_teks_df.extraction_date == date].copy() day_new_teks_df["shared_teks"] = \ day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set)) day_new_teks_df["shared_teks"] = \ day_new_teks_df.shared_teks.apply(len) day_new_teks_df["upload_date"] = date day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True) day_new_teks_df = day_new_teks_df[ ["upload_date", "generation_date", "shared_teks"]] day_new_teks_df["generation_to_upload_days"] = \ (pd.to_datetime(day_new_teks_df.upload_date) - pd.to_datetime(day_new_teks_df.generation_date)).dt.days day_new_teks_df = 
day_new_teks_df[day_new_teks_df.shared_teks > 0] return day_new_teks_df shared_teks_generation_to_upload_df = pd.DataFrame() for upload_date in daily_extracted_teks_df.extraction_date.unique(): shared_teks_generation_to_upload_df = \ shared_teks_generation_to_upload_df.append( compute_teks_by_generation_and_upload_date(date=upload_date)) shared_teks_generation_to_upload_df \ .sort_values(["upload_date", "generation_date"], ascending=False, inplace=True) shared_teks_generation_to_upload_df.tail() today_new_teks_df = \ shared_teks_generation_to_upload_df[ shared_teks_generation_to_upload_df.upload_date == extraction_date].copy() today_new_teks_df.tail() if not today_new_teks_df.empty: today_new_teks_df.set_index("generation_to_upload_days") \ .sort_index().shared_teks.plot.bar() generation_to_upload_period_pivot_df = \ shared_teks_generation_to_upload_df[ ["upload_date", "generation_to_upload_days", "shared_teks"]] \ .pivot(index="upload_date", columns="generation_to_upload_days") \ .sort_index(ascending=False).fillna(0).astype(int) \ .droplevel(level=0, axis=1) generation_to_upload_period_pivot_df.head() new_tek_df = tek_list_df.diff().tek_list.apply( lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index() new_tek_df.rename(columns={ "tek_list": "shared_teks_by_upload_date", "extraction_date": "sample_date_string",}, inplace=True) new_tek_df.tail() shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[ shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \ [["upload_date", "shared_teks"]].rename( columns={ "upload_date": "sample_date_string", "shared_teks": "shared_teks_uploaded_on_generation_date", }) shared_teks_uploaded_on_generation_date_df.head() estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \ .groupby(["upload_date"]).shared_teks.max().reset_index() \ .sort_values(["upload_date"], ascending=False) \ .rename(columns={ "upload_date": "sample_date_string", "shared_teks": "shared_diagnoses", }) invalid_shared_diagnoses_dates_mask = \ estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates) estimated_shared_diagnoses_df[invalid_shared_diagnoses_dates_mask] = 0 estimated_shared_diagnoses_df.head() ``` ### Hourly New TEKs ``` hourly_extracted_teks_df = load_extracted_teks( mode="Hourly", region=report_backend_identifier, limit=25) hourly_extracted_teks_df.head() hourly_new_tek_count_df = hourly_extracted_teks_df \ .groupby("extraction_date_with_hour").tek_list. 
\ apply(lambda x: set(sum(x, []))).reset_index().copy() hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \ .sort_index(ascending=True) hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff() hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply( lambda x: len(x) if not pd.isna(x) else 0) hourly_new_tek_count_df.rename(columns={ "new_tek_count": "shared_teks_by_upload_date"}, inplace=True) hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[ "extraction_date_with_hour", "shared_teks_by_upload_date"]] hourly_new_tek_count_df.head() hourly_summary_df = hourly_new_tek_count_df.copy() hourly_summary_df.set_index("extraction_date_with_hour", inplace=True) hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index() hourly_summary_df["datetime_utc"] = pd.to_datetime( hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H") hourly_summary_df.set_index("datetime_utc", inplace=True) hourly_summary_df = hourly_summary_df.tail(-1) hourly_summary_df.head() ``` ### Data Merge ``` result_summary_df = exposure_keys_summary_df.merge( new_tek_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = result_summary_df.merge( shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = result_summary_df.merge( estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = confirmed_df.tail(daily_summary_days).merge( result_summary_df, on=["sample_date_string"], how="left") result_summary_df.head() result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string) result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left") result_summary_df.set_index(["sample_date", "source_regions"], inplace=True) result_summary_df.drop(columns=["sample_date_string"], inplace=True) result_summary_df.sort_index(ascending=False, inplace=True) result_summary_df.head() with pd.option_context("mode.use_inf_as_na", True): result_summary_df = result_summary_df.fillna(0).astype(int) result_summary_df["teks_per_shared_diagnosis"] = \ (result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0) result_summary_df["shared_diagnoses_per_covid_case"] = \ (result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0) result_summary_df.head(daily_plot_days) weekly_result_summary_df = result_summary_df \ .sort_index(ascending=True).fillna(0).rolling(7).agg({ "covid_cases": "sum", "shared_teks_by_generation_date": "sum", "shared_teks_by_upload_date": "sum", "shared_diagnoses": "sum" }).sort_index(ascending=False) with pd.option_context("mode.use_inf_as_na", True): weekly_result_summary_df = weekly_result_summary_df.fillna(0).astype(int) weekly_result_summary_df["teks_per_shared_diagnosis"] = \ (weekly_result_summary_df.shared_teks_by_upload_date / weekly_result_summary_df.shared_diagnoses).fillna(0) weekly_result_summary_df["shared_diagnoses_per_covid_case"] = \ (weekly_result_summary_df.shared_diagnoses / weekly_result_summary_df.covid_cases).fillna(0) weekly_result_summary_df.head() last_7_days_summary = weekly_result_summary_df.to_dict(orient="records")[1] last_7_days_summary ``` ## Report Results ``` display_column_name_mapping = { "sample_date": "Sample\u00A0Date\u00A0(UTC)", "source_regions": "Source Countries", "datetime_utc": "Timestamp (UTC)", "upload_date": 
"Upload Date (UTC)", "generation_to_upload_days": "Generation to Upload Period in Days", "region": "Backend", "region_x": "Backend\u00A0(A)", "region_y": "Backend\u00A0(B)", "common_teks": "Common TEKs Shared Between Backends", "common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)", "covid_cases": "COVID-19 Cases in Source Countries (7-day Rolling Average)", "shared_teks_by_generation_date": "Shared TEKs by Generation Date", "shared_teks_by_upload_date": "Shared TEKs by Upload Date", "shared_diagnoses": "Shared Diagnoses (Estimation)", "teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis", "shared_diagnoses_per_covid_case": "Usage Ratio (Fraction of Cases in Source Countries Which Shared Diagnosis)", "shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date", } summary_columns = [ "covid_cases", "shared_teks_by_generation_date", "shared_teks_by_upload_date", "shared_teks_uploaded_on_generation_date", "shared_diagnoses", "teks_per_shared_diagnosis", "shared_diagnoses_per_covid_case", ] ``` ### Daily Summary Table ``` result_summary_df_ = result_summary_df.copy() result_summary_df = result_summary_df[summary_columns] result_summary_with_display_names_df = result_summary_df \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) result_summary_with_display_names_df ``` ### Daily Summary Plots ``` result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \ .droplevel(level=["source_regions"]) \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar( title=f"Daily Summary", rot=45, subplots=True, figsize=(15, 22), legend=False) ax_ = summary_ax_list[-1] ax_.get_figure().tight_layout() ax_.get_figure().subplots_adjust(top=0.95) ax_.yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0)) _ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist())) ``` ### Daily Generation to Upload Period Table ``` display_generation_to_upload_period_pivot_df = \ generation_to_upload_period_pivot_df \ .head(backend_generation_days) display_generation_to_upload_period_pivot_df \ .head(backend_generation_days) \ .rename_axis(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) fig, generation_to_upload_period_pivot_table_ax = plt.subplots( figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df))) generation_to_upload_period_pivot_table_ax.set_title( "Shared TEKs Generation to Upload Period Table") sns.heatmap( data=display_generation_to_upload_period_pivot_df .rename_axis(columns=display_column_name_mapping) .rename_axis(index=display_column_name_mapping), fmt=".0f", annot=True, ax=generation_to_upload_period_pivot_table_ax) generation_to_upload_period_pivot_table_ax.get_figure().tight_layout() ``` ### Hourly Summary Plots ``` hourly_summary_ax_list = hourly_summary_df \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .plot.bar( title=f"Last 24h Summary", rot=45, subplots=True, legend=False) ax_ = hourly_summary_ax_list[-1] ax_.get_figure().tight_layout() ax_.get_figure().subplots_adjust(top=0.9) _ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist())) ``` ### Publish Results ``` def get_temporary_image_path() -> str: return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png") def 
save_temporary_plot_image(ax): if isinstance(ax, np.ndarray): ax = ax[0] media_path = get_temporary_image_path() ax.get_figure().savefig(media_path) return media_path def save_temporary_dataframe_image(df): import dataframe_image as dfi media_path = get_temporary_image_path() dfi.export(df, media_path) return media_path github_repository = os.environ.get("GITHUB_REPOSITORY") if github_repository is None: github_repository = "pvieito/Radar-STATS" github_project_base_url = "https://github.com/" + github_repository display_formatters = { display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}", display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}", } daily_summary_table_html = result_summary_with_display_names_df \ .head(daily_plot_days) \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .to_html(formatters=display_formatters) multi_backend_summary_table_html = multi_backend_summary_df \ .head(daily_plot_days) \ .rename_axis(columns=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) \ .to_html(formatters=display_formatters) def format_multi_backend_cross_sharing_fraction(x): if pd.isna(x): return "-" elif round(x * 100, 1) == 0: return "" else: return f"{x:.1%}" multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \ .rename_axis(columns=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) \ .to_html( classes="table-center", formatters=display_formatters, float_format=format_multi_backend_cross_sharing_fraction) multi_backend_cross_sharing_summary_table_html = \ multi_backend_cross_sharing_summary_table_html \ .replace("<tr>","<tr style=\"text-align: center;\">") extraction_date_result_summary_df = \ result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date] extraction_date_result_hourly_summary_df = \ hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour] covid_cases = \ extraction_date_result_summary_df.covid_cases.sum() shared_teks_by_generation_date = \ extraction_date_result_summary_df.shared_teks_by_generation_date.sum() shared_teks_by_upload_date = \ extraction_date_result_summary_df.shared_teks_by_upload_date.sum() shared_diagnoses = \ extraction_date_result_summary_df.shared_diagnoses.sum() teks_per_shared_diagnosis = \ extraction_date_result_summary_df.teks_per_shared_diagnosis.sum() shared_diagnoses_per_covid_case = \ extraction_date_result_summary_df.shared_diagnoses_per_covid_case.sum() shared_teks_by_upload_date_last_hour = \ extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int) report_source_regions = extraction_date_result_summary_df.index \ .get_level_values("source_regions").item().split(",") display_source_regions = ", ".join(report_source_regions) if len(report_source_regions) == 1: display_brief_source_regions = report_source_regions[0] else: display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺" summary_plots_image_path = save_temporary_plot_image( ax=summary_ax_list) summary_table_image_path = save_temporary_dataframe_image( df=result_summary_with_display_names_df) hourly_summary_plots_image_path = save_temporary_plot_image( ax=hourly_summary_ax_list) multi_backend_summary_table_image_path = save_temporary_dataframe_image( df=multi_backend_summary_df) 
generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image( ax=generation_to_upload_period_pivot_table_ax) ``` ### Save Results ``` report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-" result_summary_df.to_csv( report_resources_path_prefix + "Summary-Table.csv") result_summary_df.to_html( report_resources_path_prefix + "Summary-Table.html") hourly_summary_df.to_csv( report_resources_path_prefix + "Hourly-Summary-Table.csv") multi_backend_summary_df.to_csv( report_resources_path_prefix + "Multi-Backend-Summary-Table.csv") multi_backend_cross_sharing_summary_df.to_csv( report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv") generation_to_upload_period_pivot_df.to_csv( report_resources_path_prefix + "Generation-Upload-Period-Table.csv") _ = shutil.copyfile( summary_plots_image_path, report_resources_path_prefix + "Summary-Plots.png") _ = shutil.copyfile( summary_table_image_path, report_resources_path_prefix + "Summary-Table.png") _ = shutil.copyfile( hourly_summary_plots_image_path, report_resources_path_prefix + "Hourly-Summary-Plots.png") _ = shutil.copyfile( multi_backend_summary_table_image_path, report_resources_path_prefix + "Multi-Backend-Summary-Table.png") _ = shutil.copyfile( generation_to_upload_period_pivot_table_image_path, report_resources_path_prefix + "Generation-Upload-Period-Table.png") ``` ### Publish Results as JSON ``` summary_results_api_df = result_summary_df.reset_index() summary_results_api_df["sample_date_string"] = \ summary_results_api_df["sample_date"].dt.strftime("%Y-%m-%d") summary_results_api_df["source_regions"] = \ summary_results_api_df["source_regions"].apply(lambda x: x.split(",")) today_summary_results_api_df = \ summary_results_api_df.to_dict(orient="records")[0] summary_results = dict( backend_identifier=report_backend_identifier, source_regions=report_source_regions, extraction_datetime=extraction_datetime, extraction_date=extraction_date, extraction_date_with_hour=extraction_date_with_hour, last_hour=dict( shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour, shared_diagnoses=0, ), today=today_summary_results_api_df, last_7_days=last_7_days_summary, daily_results=summary_results_api_df.to_dict(orient="records")) summary_results = \ json.loads(pd.Series([summary_results]).to_json(orient="records"))[0] with open(report_resources_path_prefix + "Summary-Results.json", "w") as f: json.dump(summary_results, f, indent=4) ``` ### Publish on README ``` with open("Data/Templates/README.md", "r") as f: readme_contents = f.read() readme_contents = readme_contents.format( extraction_date_with_hour=extraction_date_with_hour, github_project_base_url=github_project_base_url, daily_summary_table_html=daily_summary_table_html, multi_backend_summary_table_html=multi_backend_summary_table_html, multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html, display_source_regions=display_source_regions) with open("README.md", "w") as f: f.write(readme_contents) ``` ### Publish on Twitter ``` enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER") github_event_name = os.environ.get("GITHUB_EVENT_NAME") if enable_share_to_twitter and github_event_name == "schedule" and \ (shared_teks_by_upload_date_last_hour or not are_today_results_partial): import tweepy twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"] twitter_api_auth_keys = twitter_api_auth_keys.split(":") auth = 
tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1]) auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3]) api = tweepy.API(auth) summary_plots_media = api.media_upload(summary_plots_image_path) summary_table_media = api.media_upload(summary_table_image_path) generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path) media_ids = [ summary_plots_media.media_id, summary_table_media.media_id, generation_to_upload_period_pivot_table_image_media.media_id, ] if are_today_results_partial: today_addendum = " (Partial)" else: today_addendum = "" status = textwrap.dedent(f""" #RadarCOVID – {extraction_date_with_hour} Source Countries: {display_brief_source_regions} Today{today_addendum}: - Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour) - Shared Diagnoses: ≤{shared_diagnoses:.0f} - Usage Ratio: ≤{shared_diagnoses_per_covid_case:.2%} Last 7 Days: - Shared Diagnoses: ≤{last_7_days_summary["shared_diagnoses"]:.0f} - Usage Ratio: ≤{last_7_days_summary["shared_diagnoses_per_covid_case"]:.2%} Info: {github_project_base_url}#documentation """) status = status.encode(encoding="utf-8") api.update_status(status=status, media_ids=media_ids) ```
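The publishing cell above only runs when several environment variables line up. A small guard like the following (a sketch, reusing the variable names already read in this notebook) makes that precondition explicit and easier to test:

```
import os

def twitter_publishing_enabled() -> bool:
    # Publishing must be explicitly enabled and triggered from the scheduled workflow.
    if not os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER"):
        return False
    if os.environ.get("GITHUB_EVENT_NAME") != "schedule":
        return False
    # Consumer key/secret and access token/secret are expected, colon-separated.
    auth_keys = os.environ.get("RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS", "")
    return len(auth_keys.split(":")) == 4
```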
``` import numpy as np import matplotlib.pyplot as plt from matplotlib import rcParams import seaborn as sns %matplotlib inline rcParams['figure.figsize'] = 10, 8 sns.set_style('whitegrid') num = 50 xv = np.linspace(-500,400,num) yv = np.linspace(-500,400,num) X,Y = np.meshgrid(xv,yv) # frist X,Y a = 8.2 intervalo = 10 valor = 100 ke = 1/(4*np.pi*8.85418e-12) V = np.zeros((num,num)) print(V) # v = np.zeros((2,10)) kl = 1e-12 # x -> i # y -> j # x = a + intervalo/2 i = j = 0 for xi in xv: for yj in yv: x = a + intervalo/2 for k in range(100): #print(k,x) # calcula o valor da carga do intevalor de linha Q = ( (kl*x) / ((x**2) + (a**2)) ) * intervalo # pL * dx d = np.sqrt((x-xi)**2 + yj**2) if d<0.01: d == 0.01 V[j][i] += ke*(Q/d) x = x + intervalo # print(i,j) j += 1 i += 1 j = 0 fig = plt.figure() ax = plt.axes(projection='3d') ax.plot_wireframe(X, Y, V, color='black') plt.show() print(V[0][0]) print(V[0][1]) num = 50 xv = np.linspace(-500,400,num) yv = np.linspace(-500,400,num) i = j = 0 for xi in xv: i += 1 print(i) print("50 ou", (900/50)) %% Descobrir o E1 - Campo eletrico dentro do cilindro %tem que ser por gauss pq é um fio infinito %% Variaveis Dadas clc clear all close all %% variaveis do problema e0 = 8.854*10^-12; kc = 8.8*10^-12; a = 10; const=1/(4*pi*e0); %Constante %% Variaveis Criadas passe = 1; limites = 20; %Onde o campo sera medido: x= -limites:passe:limites; %vetor na coordenada x onde será calculado E y= -limites:passe:limites; %vetor na coordenada z onde será calculado E z= -limites:passe:limites; %vetor na coordenada z onde será calculado E %Gerador do campo: xl= a:passe:(5*limites); % variação da coordenada x onde está a carga % yl= -passe:passe:passe; % variação da coordenada y onde está a carga dL = passe; %tamanho de cada segmento %inicializa o campo elétrico: V(:,:,:) = zeros (length(x),length(y),length(z)); %% Desenvolvimento for i = 1:length(x)% varre a coordenada x onde E será calculado disp(i) for j = 1: length(y) % varre a coordenada y onde E será calculado for k = 1: length(z) % varre a coordenada z onde E será calculado for m = 1:length(xl) % varre a coordenada x da carga % #for n = 1:length(yl) % varre a coordenada y da carga r = [x(i),y(j),z(k)]; %vetor posição apontando para onde estamos calculando E rl= [xl(m),0,0];% vetor posição apontando para a carga if ((r-rl)*(r-rl)'>0.00000001) V(i,j,k) = V(i,j,k) + const*((((kc.*xl(m))/(xl(m).^2 + a^2)).*dL)/(sqrt((r-rl)*(r-rl)'))'); Q = ke/sqrt((x-xi)**2 + yj**2)*( (kl*x) / ((x**2) + (a**2)) ) * intervalo # pL * dx end % # end end end end end %% Grafico % xd = linspace(-limites,limites); % yd = linspace(-limites,limites); % zd = linspace(-limites,limites); % [X,Y] = meshgrid(xd,yd); % % figure(1) % surf(x,y,V(:,:,0)); % xlabel('x') % ylabel('y') % zlabel('z') % axis([-5 20 -20 20 -inf inf]) % grid on % colormap(jet(20)) % colorbar %% prof [X,Z] = meshgrid(x,z); figure [C,h] = contour(x,z,squeeze(V(:,3,:)),20);%faz o gráfico das curvas de nível para o potencial set(h,'ShowText','on','TextStep',get(h,'LevelStep')) xlabel('eixo x (m)') ylabel('eixo z (m)') %% Print resultado max = int64(length(x)); V0 = V(max/2,max/2); Vinf = 0; disp(0-V0) ```
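The nested Python loops above (and the MATLAB version pasted into the same cell) can be replaced by a single broadcast NumPy computation. The sketch below reuses the same parameters (`a`, `kl`, the segment length of 10, and the 100 segments). Note that the loop's `d == 0.01` is a comparison rather than an assignment, so the intended clamp of near-zero distances never takes effect there; the vectorized version applies it with `np.maximum`:

```
import numpy as np

ke = 1 / (4 * np.pi * 8.85418e-12)      # Coulomb constant 1/(4*pi*eps0)
a, kl, dx = 8.2, 1e-12, 10              # same line-charge parameters as above

# Centres of the 100 line segments carrying the charge, starting at x = a + dx/2.
xs = a + dx / 2 + dx * np.arange(100)
dq = (kl * xs / (xs**2 + a**2)) * dx    # charge carried by each segment (pL * dx)

xv = np.linspace(-500, 400, 50)
yv = np.linspace(-500, 400, 50)
X, Y = np.meshgrid(xv, yv)

# Distance from every grid point to every segment: shape (50, 50, 100).
d = np.sqrt((xs[None, None, :] - X[..., None])**2 + Y[..., None]**2)
d = np.maximum(d, 0.01)                 # clamp near-zero distances, as the loop intended
V = ke * np.sum(dq / d, axis=-1)        # potential on the grid, same layout as V[j][i] above
```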
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Tower-of-Hanoi" data-toc-modified-id="Tower-of-Hanoi-1">Tower of Hanoi</a></span></li><li><span><a href="#Learning-Outcomes" data-toc-modified-id="Learning-Outcomes-2">Learning Outcomes</a></span></li><li><span><a href="#Demo:-How-to-Play-Tower-of-Hanoi" data-toc-modified-id="Demo:-How-to-Play-Tower-of-Hanoi-3">Demo: How to Play Tower of Hanoi</a></span></li><li><span><a href="#Definitions" data-toc-modified-id="Definitions-4">Definitions</a></span></li><li><span><a href="#Demo:-How-to-Play-Tower-of-Hanoi" data-toc-modified-id="Demo:-How-to-Play-Tower-of-Hanoi-5">Demo: How to Play Tower of Hanoi</a></span></li><li><span><a href="#Student-Activity" data-toc-modified-id="Student-Activity-6">Student Activity</a></span></li><li><span><a href="#Reflection" data-toc-modified-id="Reflection-7">Reflection</a></span></li><li><span><a href="#Did-any-group-derive-formula-for-minimum-number-of-moves?" data-toc-modified-id="Did-any-group-derive-formula-for-minimum-number-of-moves?-8">Did any group derive formula for minimum number of moves?</a></span></li><li><span><a href="#Questions?" data-toc-modified-id="Questions?-9">Questions?</a></span></li><li><span><a href="#-Tower-of-Hanoi-as-RL-problem" data-toc-modified-id="-Tower-of-Hanoi-as-RL-problem-10"> Tower of Hanoi as RL problem</a></span></li><li><span><a href="#-Tower-of-Hanoi-Solutions" data-toc-modified-id="-Tower-of-Hanoi-Solutions-11"> Tower of Hanoi Solutions</a></span></li><li><span><a href="#Greedy-Tower-of-Hanoi" data-toc-modified-id="Greedy-Tower-of-Hanoi-12">Greedy Tower of Hanoi</a></span></li><li><span><a href="#-Tower-of-Hanoi-Solutions" data-toc-modified-id="-Tower-of-Hanoi-Solutions-13"> Tower of Hanoi Solutions</a></span></li><li><span><a href="#THERE-MUST-BE-A-BETTER-WAY!" 
data-toc-modified-id="THERE-MUST-BE-A-BETTER-WAY!-14">THERE MUST BE A BETTER WAY!</a></span></li><li><span><a href="#RECURSION" data-toc-modified-id="RECURSION-15">RECURSION</a></span></li><li><span><a href="#2-Requirements-for-Recursion" data-toc-modified-id="2-Requirements-for-Recursion-16">2 Requirements for Recursion</a></span></li><li><span><a href="#Think,-Pair,-&amp;-Share" data-toc-modified-id="Think,-Pair,-&amp;-Share-17">Think, Pair, &amp; Share</a></span></li><li><span><a href="#Recursion-Steps-to-Solve-Tower-of-Hanoi" data-toc-modified-id="Recursion-Steps-to-Solve-Tower-of-Hanoi-18">Recursion Steps to Solve Tower of Hanoi</a></span></li><li><span><a href="#Illustrated-Recursive-Steps-to-Solve-Tower-of-Hanoi" data-toc-modified-id="Illustrated-Recursive-Steps-to-Solve-Tower-of-Hanoi-19">Illustrated Recursive Steps to Solve Tower of Hanoi</a></span></li><li><span><a href="#Check-for-understanding" data-toc-modified-id="Check-for-understanding-20">Check for understanding</a></span></li><li><span><a href="#Check-for-understanding" data-toc-modified-id="Check-for-understanding-21">Check for understanding</a></span></li><li><span><a href="#Check-for-understanding" data-toc-modified-id="Check-for-understanding-22">Check for understanding</a></span></li><li><span><a href="#Takeaways" data-toc-modified-id="Takeaways-23">Takeaways</a></span></li><li><span><a href="#Bonus-Material" data-toc-modified-id="Bonus-Material-24">Bonus Material</a></span></li><li><span><a href="#Dynamic-Programming" data-toc-modified-id="Dynamic-Programming-25">Dynamic Programming</a></span></li><li><span><a href="#What-would-Dynamic-Programming-look-like-for-Tower-of-Hanoi?" data-toc-modified-id="What-would-Dynamic-Programming-look-like-for-Tower-of-Hanoi?-26">What would Dynamic Programming look like for Tower of Hanoi?</a></span></li><li><span><a href="#Tower-of-Hanoi-for-Final-Project" data-toc-modified-id="Tower-of-Hanoi-for-Final-Project-27">Tower of Hanoi for Final Project</a></span></li><li><span><a href="#Further-Study" data-toc-modified-id="Further-Study-28">Further Study</a></span></li></ul></div> <center><h2>Tower of Hanoi</h2></center> <center><img src="images/tower_of_hanoi.jpg" width="75%"/></center> <center><h2>Learning Outcomes</h2></center> __By the end of this session, you should be able to__: - Solve Tower of Hanoi by hand. - Explain how to Tower of Hanoi with recursion in your words. <center><h2>Demo: How to Play Tower of Hanoi</h2></center> <center><img src="images/tower_of_hanoi.jpg" width="35%"/></center> The Goal: Move all disks from start to finish. Rules: 1. Only one disk may be moved at a time. 2. Each move consists of taking the upper disk from one of the rods and sliding it onto another rod, on top of the other disks that may already be present on that rod. 3. No disk may be placed on top of a smaller disk. __My nephew enjoys the Tower of Hanoi.__ Definitions ----- - Rod: The vertical shaft - Disks: The items on the rod <center><h2>Demo: How to Play Tower of Hanoi</h2></center> 1) Solve with 1 Disc 2) Solve with 2 Discs <center><img src="http://hangaroundtheweb.com/wp-content/uploads/2018/07/write-down-english.jpg" width="20%"/></center> > If you can’t write it down in English, you can’t code it. > — Peter Halpern <center><h2>Student Activity</h2></center> In small groups, solve Tower of Hanoi: https://www.mathsisfun.com/games/towerofhanoi-flash.html Record the minimal number of steps for each number of discs: 1. 3 discs 1. 4 discs 1. 5 discs 1. 
If someone has never solved the puzzle, they should lead. If you have solved it, only give hints when the team is stuck.

<center><h2>Reflection</h2></center>

How difficult was each version?

How many more steps were needed for each additional disk?

Could you write a formula for the minimum number of moves as the number of disks increases?

```
reset -fs

from IPython.display import YouTubeVideo

# 3 rings
YouTubeVideo('S4HOSbrS4bY')

# 6 rings
YouTubeVideo('iFV821yY7Ns')
```

<center><h2>Did any group derive a formula for the minimum number of moves?</h2></center>

```
# Calculate the optimal number of moves
print(f"{'# disks':>7} | {'# moves':>10}")
for n_disks in range(1, 21):
    n_moves = (2 ** n_disks) - 1
    print(f"{n_disks:>7} {n_moves:>10,}")
```

<center><h2>Questions?</h2></center>

<center><h2>Tower of Hanoi as RL problem</h2></center>

Elements of a Reinforcement Learning problem:

1. Is it sequential?
1. What is the environment? What are the parameters?
1. Who is the agent?
1. What are the states?
1. What are the rewards?

1. Sequential - Yes! By definition moves are one-at-a-time.
1. Environment - The number of rods and disks.
1. Agent - The mover of disks. You right now. The function you are about to write.
1. State - Location of disks on rods at a given step.
1. Rewards:
    1. End - All disks on last rod.
    1. Intermediate - Disks closer to solution state.

<center><h2>Tower of Hanoi Solutions</h2></center>

What would a greedy approach look like? Would it solve the problem?

By greedy, I mean an agent can only move directly towards the goal (while still following the rules of the game).

<center><h2>Greedy Tower of Hanoi</h2></center>

1. Move the smallest disk to the end.
2. Move the next disk to the middle.
3. Then get stuck!

<center><h2>Tower of Hanoi Solutions</h2></center>

What would a random approach look like? Would it solve the problem?

Find all valid moves, select one. Yes, it would solve the puzzle. It would take a looooong time.

<center><h2>THERE MUST BE A BETTER WAY!</h2></center>

<center><h2>RECURSION</h2></center>

<center><h2>2 Requirements for Recursion</h2></center>

1. Must have a base case
1. Move towards the base case by calling the function itself

<center><h2>Think, Pair, & Share</h2></center>

How do the 2 requirements of recursion apply to Tower of Hanoi?

What is the base case? How can we move towards the base case? How can we generalize to any number of discs?

<center><h2>Recursion Steps to Solve Tower of Hanoi</h2></center>

1. Base case: Move the largest disk to the rightmost tower.
2. Move towards the base case by defining:
    - `from_rod`
    - `to_rod`
    - `aux_rod`
3. Call the function: solving the same problem with one less disk.

(A minimal code sketch of this recursion appears at the end of this notebook.)

<center><h2>Illustrated Recursive Steps to Solve Tower of Hanoi</h2></center>

<center><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/2/20/Tower_of_Hanoi_recursion_SMIL.svg/512px-Tower_of_Hanoi_recursion_SMIL.svg.png" width="75%"/></center>

<center><h2>Check for understanding</h2></center>

Is the recursion solution deterministic or stochastic?

Deterministic - Every time it plays, it will always play the same sequence of moves.

<center><h2>Check for understanding</h2></center>

Does the recursion solution learn? Will it get better the more it plays?

No - It does not learn.

<center><h2>Check for understanding</h2></center>

Is the recursive solution optimal?

Yes - It will always play the minimum number of moves.

<center><h2>Takeaways</h2></center>

- Tower of Hanoi is a game to test cognitive abilities, most often of small children.
- Tower of Hanoi has the elements of a Reinforcement Learning problem:
    1. Sequential
    1. Environment
    1. Agent
    1. State
    1. Rewards
- Recursion is a deterministic, non-learning, and optimal solution for Tower of Hanoi.

-----
Bonus Material
-----

<center><h2>Dynamic Programming</h2></center>

What is it? Look up previous solutions (rather than re-compute them) to overlapping subproblems. Requires a cache to store solutions.

<center><h2>What would Dynamic Programming look like for Tower of Hanoi?</h2></center>

Track optimal solutions to subproblems. Use those solutions instead of recomputing them.

However, the recursive solution is already optimal for Tower of Hanoi, so there is no reason to add dynamic programming.

Tower of Hanoi for Final Project
------

If you are interested in extending Tower of Hanoi for your final project, here are some ideas to consider:

- Frame it as a Markov decision process (MDP).
- Can you make it stochastic?
- How can you reformulate the problem so that a learned solution would perform better than a deterministic solution?

Further Study
------

- [Q-learning for Tower of Hanoi](https://github.com/khpeek/Q-learning-Hanoi)
- [An RL take on Tower of Hanoi](https://kenandeen.wordpress.com/2015/08/22/tower-of-hanoi-reinforcement-learning/)
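To close out this notebook, here is the minimal recursive sketch referenced in the Recursion Steps section above. The rod names (`from_rod`, `to_rod`, `aux_rod`) follow the notebook; the function name `solve_hanoi` and the move list are illustrative additions, not part of the original material.

```
def solve_hanoi(n_disks, from_rod="A", to_rod="C", aux_rod="B", moves=None):
    """Return the optimal sequence of moves as (source, target) pairs."""
    if moves is None:
        moves = []
    if n_disks == 1:                      # base case: a single disk moves directly
        moves.append((from_rod, to_rod))
        return moves
    solve_hanoi(n_disks - 1, from_rod, aux_rod, to_rod, moves)  # clear the way
    moves.append((from_rod, to_rod))                            # move the largest disk
    solve_hanoi(n_disks - 1, aux_rod, to_rod, from_rod, moves)  # rebuild on top of it
    return moves

# The number of moves matches the 2**n - 1 formula derived above.
for n in range(1, 7):
    assert len(solve_hanoi(n)) == 2 ** n - 1

print(solve_hanoi(3))
```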
# Auto detection to main + 4 cropped images **Pipeline:** 1. Load cropped image csv file 2. Apply prediction 3. Save prediction result back to csv file * pred_value * pred_cat * pred_bbox ``` # Import libraries %matplotlib inline from pycocotools.coco import COCO from keras.models import load_model # from utils.utils import * # from utils.bbox import * # from utils.image import load_image_pixels from keras.preprocessing.image import load_img from keras.preprocessing.image import img_to_array import numpy as np import pandas as pd import skimage.io as io import matplotlib.pyplot as plt import pylab import torchvision.transforms.functional as TF import PIL import os import json from urllib.request import urlretrieve pylab.rcParams['figure.figsize'] = (8.0, 10.0) # Define image directory projectDir=os.getcwd() dataDir='.' dataType='val2017' imageDir='{}/images/'.format(dataDir) annFile='{}/images/{}_selected/annotations/instances_{}.json'.format(dataDir,dataType,dataType) ``` ## Utilities ``` class BoundBox: def __init__(self, xmin, ymin, xmax, ymax, objness = None, classes = None): self.xmin = xmin self.ymin = ymin self.xmax = xmax self.ymax = ymax self.objness = objness self.classes = classes self.label = -1 self.score = -1 def get_label(self): if self.label == -1: self.label = np.argmax(self.classes) return self.label def get_score(self): if self.score == -1: self.score = self.classes[self.get_label()] return self.score def _sigmoid(x): return 1. / (1. + np.exp(-x)) def decode_netout(netout, anchors, obj_thresh, net_h, net_w): grid_h, grid_w = netout.shape[:2] nb_box = 3 netout = netout.reshape((grid_h, grid_w, nb_box, -1)) nb_class = netout.shape[-1] - 5 boxes = [] netout[..., :2] = _sigmoid(netout[..., :2]) netout[..., 4:] = _sigmoid(netout[..., 4:]) netout[..., 5:] = netout[..., 4][..., np.newaxis] * netout[..., 5:] netout[..., 5:] *= netout[..., 5:] > obj_thresh for i in range(grid_h*grid_w): row = i // grid_w col = i % grid_w for b in range(nb_box): # 4th element is objectness score objectness = netout[int(row)][int(col)][b][4] if(objectness.all() <= obj_thresh): continue # first 4 elements are x, y, w, and h x, y, w, h = netout[int(row)][int(col)][b][:4] x = (col + x) / grid_w # center position, unit: image width y = (row + y) / grid_h # center position, unit: image height w = anchors[2 * b + 0] * np.exp(w) / net_w # unit: image width h = anchors[2 * b + 1] * np.exp(h) / net_h # unit: image height # last elements are class probabilities classes = netout[int(row)][col][b][5:] box = BoundBox(x-w/2, y-h/2, x+w/2, y+h/2, objectness, classes) boxes.append(box) return boxes def correct_yolo_boxes(boxes, image_h, image_w, net_h, net_w): new_w, new_h = net_w, net_h for i in range(len(boxes)): x_offset, x_scale = (net_w - new_w)/2./net_w, float(new_w)/net_w y_offset, y_scale = (net_h - new_h)/2./net_h, float(new_h)/net_h boxes[i].xmin = int((boxes[i].xmin - x_offset) / x_scale * image_w) boxes[i].xmax = int((boxes[i].xmax - x_offset) / x_scale * image_w) boxes[i].ymin = int((boxes[i].ymin - y_offset) / y_scale * image_h) boxes[i].ymax = int((boxes[i].ymax - y_offset) / y_scale * image_h) def _interval_overlap(interval_a, interval_b): x1, x2 = interval_a x3, x4 = interval_b if x3 < x1: if x4 < x1: return 0 else: return min(x2,x4) - x1 else: if x2 < x3: return 0 else: return min(x2,x4) - x3 def bbox_iou(box1, box2): intersect_w = _interval_overlap([box1.xmin, box1.xmax], [box2.xmin, box2.xmax]) intersect_h = _interval_overlap([box1.ymin, box1.ymax], [box2.ymin, box2.ymax]) intersect = 
intersect_w * intersect_h w1, h1 = box1.xmax-box1.xmin, box1.ymax-box1.ymin w2, h2 = box2.xmax-box2.xmin, box2.ymax-box2.ymin union = w1*h1 + w2*h2 - intersect return float(intersect) / union def do_nms(boxes, nms_thresh): if len(boxes) > 0: nb_class = len(boxes[0].classes) else: return for c in range(nb_class): sorted_indices = np.argsort([-box.classes[c] for box in boxes]) for i in range(len(sorted_indices)): index_i = sorted_indices[i] if boxes[index_i].classes[c] == 0: continue for j in range(i+1, len(sorted_indices)): index_j = sorted_indices[j] if bbox_iou(boxes[index_i], boxes[index_j]) >= nms_thresh: boxes[index_j].classes[c] = 0 # load and prepare an image def load_image_pixels(filename, shape): # load the image to get its shape image = load_img(filename) width, height = image.size # load the image with the required size image = load_img(filename, target_size=shape) # convert to numpy array image = img_to_array(image) # scale pixel values to [0, 1] image = image.astype('float32') image /= 255.0 # add a dimension so that we have one sample image = np.expand_dims(image, 0) return image, width, height # get all of the results above a threshold def get_boxes(boxes, labels, thresh): v_boxes, v_labels, v_scores = list(), list(), list() # enumerate all boxes for box in boxes: # enumerate all possible labels for i in range(len(labels)): # check if the threshold for this label is high enough if box.classes[i] > thresh: v_boxes.append(box) v_labels.append(labels[i]) v_scores.append(box.classes[i]*100) # don't break, many labels may trigger for one box return v_boxes, v_labels, v_scores # draw all results def draw_boxes(filename, v_boxes, v_labels, v_scores): # load the image data = plt.imread(filename) # plot the image plt.imshow(data) # get the context for drawing boxes ax = plt.gca() # plot each box for i in range(len(v_boxes)): box = v_boxes[i] # get coordinates y1, x1, y2, x2 = box.ymin, box.xmin, box.ymax, box.xmax # calculate width and height of the box width, height = x2 - x1, y2 - y1 # create the shape rect = plt.Rectangle((x1, y1), width, height, fill=False, color='white') # draw the box ax.add_patch(rect) # draw text and score in top left corner label = "%s (%.3f)" % (v_labels[i], v_scores[i]) plt.text(x1, y1, label, color='white') # show the plot plt.show() ``` ## Load model ``` # load yolov3 model model = load_model('yolov3_model.h5') # define the expected input shape for the model input_w, input_h = 416, 416 # define the anchors anchors = [[116,90, 156,198, 373,326], [30,61, 62,45, 59,119], [10,13, 16,30, 33,23]] # define the probability threshold for detected objects class_threshold = 0.6 # define the labels labels = ["person", "bicycle", "car", "motorbike", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "pottedplant", "bed", "diningtable", "toilet", "tvmonitor", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"] ``` ## 
Gather & concatenate all csv files ``` all_files = [] cat = 'book' for subdir, dirs, files in os.walk(os.path.join(imageDir,cat)): for filename in files: filepath = subdir + os.sep + filename if filepath.endswith(".csv"): all_files.append(filepath) print(filepath) li = [] for filename in all_files: df = pd.read_csv(filename, index_col=None, header=0) li.append(df) df_images = pd.concat(li, axis=0, ignore_index=True) df_images.head() ``` ## Apply prediction to multiple images ``` df_pred = pd.DataFrame(columns=['pred','pred_cat','pred_bbox']) iou_threshold = 0.5 for idx, item in df_images.iterrows(): file_path = os.path.join(item['path'], item['filename']) image, image_w, image_h = load_image_pixels(file_path, (input_w, input_h)) yhat = model.predict(image) boxes = list() for i in range(len(yhat)): # decode the output of the network boxes += decode_netout(yhat[i][0], anchors[i], class_threshold, input_h, input_w) # correct the sizes of the bounding boxes for the shape of the image correct_yolo_boxes(boxes, image_h, image_w, input_h, input_w) # suppress non-maximal boxes do_nms(boxes, 0.5) # get the details of the detected objects v_boxes, v_labels, v_scores = get_boxes(boxes, labels, class_threshold) ########## # summarize what we found # for i in range(len(v_boxes)): # print(v_labels[i], v_scores[i]) # draw what we found # draw_boxes(file_path, v_boxes, v_labels, v_scores) ########## boxes = item['bbox'].lstrip("[") boxes = boxes.rstrip("]") boxes = boxes.strip() x, y, w, h = list(map(int,boxes.split(","))) _box = BoundBox(x, y, x+w, y+h) is_detected = False for i, box in enumerate(v_boxes): # y1, x1, y2, x2 = box.ymin, box.xmin, box.ymax, box.xmax # print(bbox_iou(box, _box)) # print(bbox_iou(_box, box)) iou = bbox_iou(box, _box) if iou > iou_threshold: df_pred = df_pred.append({ 'pred': v_scores[i], 'pred_cat': v_labels[i], 'pred_bbox': [box.xmin, box.ymin, box.xmax-box.xmin, box.ymax-box.ymin] }, ignore_index=True) is_detected=True break if not is_detected: df_pred = df_pred.append({ 'pred': np.nan, 'pred_cat': np.nan, 'pred_bbox': np.nan }, ignore_index=True) df = pd.concat([df_images, df_pred], axis=1) df.info() df.head() df.to_csv(imageDir+cat+"/prediction_results.csv", index=False) ```
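To make the matching rule in the loop above concrete, here is a small sanity check. It is a sketch, not part of the original pipeline: it reuses the `BoundBox` class, the `bbox_iou` function, and the `iou_threshold` value defined in the cells above (so those cells must have been run), and the box coordinates are made-up examples. The COCO-style `[x, y, w, h]` annotation is converted to corner coordinates exactly as in the prediction loop.

```
# Hypothetical ground-truth annotation in [x, y, w, h] form, as stored in the CSV.
gt_x, gt_y, gt_w, gt_h = 50, 40, 120, 90
gt_box = BoundBox(gt_x, gt_y, gt_x + gt_w, gt_y + gt_h)

# Hypothetical detection returned by the model (corner coordinates).
pred_box = BoundBox(60, 45, 165, 140)

iou = bbox_iou(pred_box, gt_box)
print(f"IoU = {iou:.2f}")

if iou > iou_threshold:   # iou_threshold = 0.5, as set above
    print("Detection counts as a match for this annotation")
else:
    print("Detection is ignored for this annotation")
```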
``` from egocom import audio from egocom.multi_array_alignment import gaussian_kernel from egocom.transcription import async_srt_format_timestamp from scipy.io import wavfile import os import numpy as np import pandas as pd from sklearn.metrics import accuracy_score from egocom.transcription import write_subtitles def gaussian_smoothing(arr, samplerate = 44100, window_size = 0.1): '''Returns a locally-normalized array by dividing each point by a the sum of the points around it, with greater emphasis on the points nearest (using a Guassian convolution) Parameters ---------- arr : np.array samplerate : int window_size : float (in seconds) Returns ------- A Guassian smoothing of the input arr''' kern = gaussian_kernel(kernel_length=int(samplerate * window_size), nsigma=3) return np.convolve(arr, kern, 'same') # Tests for audio.avg_pool_1d def test_exact_recoverability( arr = range(10), pool_size = 4, weights = [0.2,0.3,0.3,0.2], ): '''Verify that downsampled signal can be fully recovered exactly.''' complete_result = audio.avg_pool_1d(range(10), pool_size, filler = True, weights = weights) downsampled_result = audio.avg_pool_1d(range(10), pool_size, filler = False, weights = weights) # Try to recover filled_pooled_mags using the downsampled pooled_mags upsampled_result = audio.upsample_1d(downsampled_result, len(arr), pool_size) assert(np.all(upsampled_result == complete_result)) def test_example( arr = range(10), pool_size = 4, weights = [0.2,0.3,0.3,0.2], ): '''Verify that avg_pool_1d produces the result we expect.''' result = audio.avg_pool_1d(range(10), pool_size, weights = weights) expected = np.array([1.5, 1.5, 1.5, 1.5, 5.5, 5.5, 5.5, 5.5, 8.5, 8.5]) assert(np.all(result - expected < 1e-6)) test_exact_recoverability() test_example() ``` # Generate speaker labels from max raw audio magnitudes ``` data_dir = '/Users/cgn/Dropbox (Facebook)/EGOCOM/raw_audio/wav/' fn_dict = {} for fn in sorted(os.listdir(data_dir)): key = fn[9:23] + fn[32:37] if 'part' in fn else fn[9:21] fn_dict[key] = fn_dict[key] + [fn] if key in fn_dict else [fn] samplerate = 44100 window = 1 # Averages signals with windows of N seconds. window_length = int(samplerate * window) labels = {} for key in list(fn_dict.keys()): print(key, end = " | ") fns = fn_dict[key] wavs = [wavfile.read(data_dir + fn)[1] for fn in fns] duration = min(len(w) for w in wavs) wavs = np.stack([w[:duration] for w in wavs]) # Only use the magnitudes of both left and right for each audio wav. mags = abs(wavs).sum(axis = 2) # DOWNSAMPLED (POOLED) Discretized/Fast (no overlap) gaussian smoothing with one-second time window. 
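    # audio.avg_pool_1d collapses each non-overlapping window of `window_length`
    # samples into a single Gaussian-weighted average. With filler=False the
    # result keeps one value per window (a downsampled envelope), which
    # upsample_1d can expand back, as the tests above verify.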
kwargs = { 'pool_size': window_length, 'weights': gaussian_kernel(kernel_length=window_length), 'filler': False, } pooled_mags = np.apply_along_axis(audio.avg_pool_1d, 1, mags, **kwargs) # Create noisy speaker labels threshold = np.percentile(pooled_mags, 10, axis = 1) no_one_speaking = (pooled_mags > np.expand_dims(threshold, axis = 1)).sum(axis = 0) == 0 speaker_labels = np.argmax(pooled_mags, axis = 0) speaker_labels[no_one_speaking] = -1 # User 1-based indexing for speaker labels (ie increase by 1) speaker_labels = [z if z < 0 else z + 1 for z in speaker_labels] # Store results labels[key] = speaker_labels # Write result to file loc = '/Users/cgn/Dropbox (Facebook)/EGOCOM/raw_audio_speaker_labels_{}.json'.format(str(window)) def default(o): if isinstance(o, np.int64): return int(o) raise TypeError import json with open(loc, 'w') as fp: json.dump(labels, fp, default = default) fp.close() # Read result into a dict import json with open(loc, 'r') as fp: labels = json.load(fp) fp.close() ``` ## Generate ground truth speaker labels ``` def create_gt_speaker_labels( df_times_speaker, duration_in_seconds, time_window_seconds = 0.5, ): stack = rev_times[::-1] stack_time = stack.pop() label_times = np.arange(0, duration_in_seconds, time_window_seconds) result = [-1] * len(label_times) for i, t in enumerate(label_times): while stack_time['endTime'] > t and stack_time['endTime'] <= t + time_window_seconds: result[i] = stack_time['speaker'] if len(stack) == 0: break stack_time = stack.pop() return result df = pd.read_csv("/Users/cgn/Dropbox (Facebook)/EGOCOM/ground_truth_transcriptions.csv")[ ["key", "endTime", "speaker", ] ].dropna() gt_speaker_labels = {} for key, sdf in df.groupby('key'): print(key, end = " | ") wavs = [wavfile.read(data_dir + fn)[1] for fn in fn_dict[key]] duration = min(len(w) for w in wavs) DL = sdf[["endTime", "speaker"]].to_dict('list') rev_times = [dict(zip(DL,t)) for t in zip(*DL.values())] duration_in_seconds = np.ceil(duration / float(samplerate)) gt_speaker_labels[key] = create_gt_speaker_labels(rev_times, duration_in_seconds, window) # Write result to file loc = '/Users/cgn/Dropbox (Facebook)/EGOCOM/rev_ground_truth_speaker_labels_{}.json'.format(str(window)) with open(loc, 'w') as fp: json.dump(gt_speaker_labels, fp, default = default) fp.close() # Read result into a dict with open(loc, 'r') as fp: gt_speaker_labels = json.load(fp) fp.close() scores = [] for key in labels.keys(): true = gt_speaker_labels[key] pred = labels[key] if len(true) > len(pred): true = true[:-1] # diff = round(accuracy_score(true[:-1], pred) - accuracy_score(true[1:], pred), 3) # scores.append(diff) # print(key, accuracy_score(true[1:], pred), accuracy_score(true[:-1], pred), diff) score = accuracy_score(true, pred) scores.append(score) print(key, np.round(score, 3)) print('Average accuracy:', str(np.round(np.mean(scores), 3)* 100) + '%') loc = '/Users/cgn/Dropbox (Facebook)/EGOCOM/subtitles/' for key in labels.keys(): gt = gt_speaker_labels[key] est = labels[key] with open(loc + "speaker_" + key + '.srt', 'w') as f: print(key, end = " | ") for t, s_est in enumerate(est): s_gt = gt[t] print(t + 1, file = f) print(async_srt_format_timestamp(t*window), end = "", file = f) print(' --> ', end = '', file = f) print(async_srt_format_timestamp(t*window+window), file = f) print('Rev.com Speaker:', end = " ", file = f) if s_gt == -1: print('No one is speaking', file = f) elif s_gt == 1: print('Curtis', file = f) else: print('Speaker ' + str(s_gt), file = f) print('MaxMag Speaker:', end = " ", file = 
f) if s_est == -1: print('No one is speaking', file = f) elif s_est == 1: print('Curtis', file = f) else: print('Speaker ' + str(s_est), file = f) print(file = f) ``` ## Generate subtitles ``` for key in labels.keys(): gt = labels[key] with open("subtitles/est_" + key + '.srt', 'w') as f: for t, s in enumerate(gt): print(t + 1, file = f) print(async_srt_format_timestamp(t*window), end = "", file = f) print(' --> ', end = '', file = f) print(async_srt_format_timestamp(t*window+window), file = f) print('Max mag of wavs speaker id', file = f) if s == -1: print('No one is speaking', file = f) elif s == 1: print('Curtis', file = f) else: print('Speaker ' + str(s), file = f) print(file = f) ```
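The speaker-labelling rule used above (threshold at the 10th percentile per channel, take the loudest channel, and use -1 when nobody clears the threshold) can be illustrated on a toy array. This is a standalone sketch with made-up magnitudes, not EGOCOM data.

```
import numpy as np

# Toy pooled magnitudes: rows = speakers (channels), columns = one-second windows.
pooled_mags = np.array([
    [0.9, 0.1, 0.1, 0.05, 0.2],
    [0.1, 0.8, 0.1, 0.05, 0.1],
    [0.1, 0.1, 0.7, 0.05, 0.1],
])

# Same rule as in the labelling loop above.
threshold = np.percentile(pooled_mags, 10, axis=1)
no_one_speaking = (pooled_mags > np.expand_dims(threshold, axis=1)).sum(axis=0) == 0
speaker_labels = np.argmax(pooled_mags, axis=0)
speaker_labels[no_one_speaking] = -1

# 1-based speaker ids, keeping -1 for silence.
speaker_labels = [z if z < 0 else z + 1 for z in speaker_labels]
print(speaker_labels)   # [1, 2, 3, -1, 1]
```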
```
from torchvision.models import *
import wandb
from sklearn.model_selection import train_test_split
import os,cv2
import numpy as np
import matplotlib.pyplot as plt
from torch.nn import *
import torch,torchvision
from tqdm import tqdm

device = 'cuda'
PROJECT_NAME = 'Intel-Image-Classification-V2'

def load_data():
    # Build (image, one-hot label) pairs from the ./data/<class>/ folders.
    data = []
    labels = {}
    labels_r = {}
    idx = 0
    for label in os.listdir('./data/'):
        idx += 1
        labels[label] = idx
        labels_r[idx] = label
    for folder in os.listdir('./data/'):
        for file in os.listdir(f'./data/{folder}/')[:1000]:
            img = cv2.imread(f'./data/{folder}/{file}')
            img = cv2.resize(img,(56,56))
            img = img / 255.0
            data.append([
                img,
                np.eye(labels[folder],len(labels))[labels[folder]-1]
            ])
    X = []
    y = []
    for d in data:
        X.append(d[0])
        y.append(d[1])
    X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.5,shuffle=False)
    X_train = torch.from_numpy(np.array(X_train)).to(device).view(-1,3,56,56).float()
    y_train = torch.from_numpy(np.array(y_train)).to(device).float()
    X_test = torch.from_numpy(np.array(X_test)).to(device).view(-1,3,56,56).float()
    y_test = torch.from_numpy(np.array(y_test)).to(device).float()
    return X,y,X_train,X_test,y_train,y_test,labels,labels_r,idx,data

X,y,X_train,X_test,y_train,y_test,labels,labels_r,idx,data = load_data()

torch.save(X_train,'X_train.pt')
torch.save(y_train,'y_train.pt')
torch.save(X_test,'X_test.pt')
torch.save(y_test,'y_test.pt')
torch.save(labels_r,'labels_r.pt')
torch.save(labels,'labels.pt')
torch.save(X_train,'X_train.pth')
torch.save(y_train,'y_train.pth')
torch.save(X_test,'X_test.pth')
torch.save(y_test,'y_test.pth')
torch.save(labels_r,'labels_r.pth')
torch.save(labels,'labels.pth')

def
get_loss(model,X,y,criterion): preds = model(X) loss = criterion(preds,y) return loss.item() def get_acuracy(model,X,y): preds = model(X) correct = 0 total = 0 for pred,yb in tqdm(zip(preds,y)): pred = int(torch.argmax(pred)) yb = int(torch.argmax(yb)) if pred == yb: correct += 1 total += 1 acc = round(correct/total,3)*100 return acc model = resnet18().to(device) model.fc = Linear(512,len(labels)) criterion = MSELoss() optimizer = Adam(model.parameters(),lr=0.001) epochs = 100 bathc_size = 32 model = resnet18().to(device) model.fc = Linear(512,len(labels)) criterion = MSELoss() optimizer = optim.Adam(model.parameters(),lr=0.001) epochs = 100 bathc_size = 32 model = resnet18().to(device) model.fc = Linear(512,len(labels)) criterion = MSELoss() optimizer = torch.optim.Adam(model.parameters(),lr=0.001) epochs = 100 bathc_size = 32 wandb.init(project=PROJECT_NAME,name='baseline-TL') for _ in tqdm(range(epochs)): for i in range(0,len(X_train),batch_size): X_batch = X_train[i:i+batch_size] y_batch = y_train[i:i+batch_size] model.to(device) preds = model(X_batch) loss = criterion(preds,y_batch) optimizer.zero_grad() loss.backward() optimizer.step() model.eval() torch.cuda.empty_cache() wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2}) torch.cuda.empty_cache() wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)}) torch.cuda.empty_cache() wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2}) torch.cuda.empty_cache() wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)}) torch.cuda.empty_cache() model.train() wandb.finish() model = resnet18().to(device) model.fc = Linear(512,len(labels)) criterion = MSELoss() optimizer = torch.optim.Adam(model.parameters(),lr=0.001) epochs = 100 batch_size = 32 wandb.init(project=PROJECT_NAME,name='baseline-TL') for _ in tqdm(range(epochs)): for i in range(0,len(X_train),batch_size): X_batch = X_train[i:i+batch_size] y_batch = y_train[i:i+batch_size] model.to(device) preds = model(X_batch) loss = criterion(preds,y_batch) optimizer.zero_grad() loss.backward() optimizer.step() model.eval() torch.cuda.empty_cache() wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2}) torch.cuda.empty_cache() wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)}) torch.cuda.empty_cache() wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2}) torch.cuda.empty_cache() wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)}) torch.cuda.empty_cache() model.train() wandb.finish() def get_accuracy(model,X,y): preds = model(X) correct = 0 total = 0 for pred,yb in tqdm(zip(preds,y)): pred = int(torch.argmax(pred)) yb = int(torch.argmax(yb)) if pred == yb: correct += 1 total += 1 acc = round(correct/total,3)*100 return acc model = resnet18().to(device) model.fc = Linear(512,len(labels)) criterion = MSELoss() optimizer = torch.optim.Adam(model.parameters(),lr=0.001) epochs = 100 batch_size = 32 wandb.init(project=PROJECT_NAME,name='baseline-TL') for _ in tqdm(range(epochs)): for i in range(0,len(X_train),batch_size): X_batch = X_train[i:i+batch_size] y_batch = y_train[i:i+batch_size] model.to(device) preds = model(X_batch) loss = criterion(preds,y_batch) optimizer.zero_grad() loss.backward() optimizer.step() model.eval() torch.cuda.empty_cache() wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2}) torch.cuda.empty_cache() 
wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)}) torch.cuda.empty_cache() wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2}) torch.cuda.empty_cache() wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)}) torch.cuda.empty_cache() model.train() wandb.finish() class Model(Module): def __init__(self): super().__init__() self.max_pool2d = MaxPool2d((2,2),(2,2)) self.activation = ReLU() self.conv1 = Conv2d(3,7,(5,5)) self.conv2 = Conv2d(7,14,(5,5)) self.conv2bn = BatchNorm2d(14) self.conv3 = Conv2d(14,21,(5,5)) self.linear1 = Linear(21*5*5,256) self.linear2 = Linear(256,512) self.linear2bn = BatchNorm1d(512) self.linear3 = Linear(512,256) self.output = Linear(256,len(labels)) def forward(self,X): preds = self.max_pool2d(self.activation(self.conv1(X))) preds = self.max_pool2d(self.activation(self.conv2bn(self.conv2(preds)))) preds = self.max_pool2d(self.activation(self.conv3(preds))) preds = preds.view(-1,21*5*5) preds = self.activation(self.linear1(preds)) preds = self.activation(self.linear2bn(self.linear2(preds))) preds = self.activation(self.linear3(preds)) preds = self.output(preds) return preds model = Model().to(device) criterion = MSELoss() optimizer = Adam(model.parameters(),lr=0.001) epochs = 100 bathc_size = 32 model = Model().to(device) criterion = MSELoss() optimizer = torch.optim.Adam(model.parameters(),lr=0.001) epochs = 100 bathc_size = 32 wandb.init(project=PROJECT_NAME,name='baseline-CNN') for _ in tqdm(range(epochs)): for i in range(0,len(X_train),batch_size): X_batch = X_train[i:i+batch_size] y_batch = y_train[i:i+batch_size] model.to(device) preds = model(X_batch) loss = criterion(preds,y_batch) optimizer.zero_grad() loss.backward() optimizer.step() model.eval() torch.cuda.empty_cache() wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2}) torch.cuda.empty_cache() wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)}) torch.cuda.empty_cache() wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2}) torch.cuda.empty_cache() wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)}) torch.cuda.empty_cache() model.train() wandb.finish() class Model(Module): def __init__(self): super().__init__() self.max_pool2d = MaxPool2d((2,2),(2,2)) self.activation = ReLU() self.conv1 = Conv2d(3,7,(5,5)) self.conv2 = Conv2d(7,14,(5,5)) self.conv2bn = BatchNorm2d(14) self.conv3 = Conv2d(14,21,(5,5)) self.linear1 = Linear(21*5*5,256) self.linear2 = Linear(256,512) self.linear2bn = BatchNorm1d(512) self.linear3 = Linear(512,256) self.output = Linear(256,len(labels)) def forward(self,X): preds = self.max_pool2d(self.activation(self.conv1(X))) preds = self.max_pool2d(self.activation(self.conv2bn(self.conv2(preds)))) preds = self.max_pool2d(self.activation(self.conv3(preds))) preds = preds.view(-1,21*3*3) preds = self.activation(self.linear1(preds)) preds = self.activation(self.linear2bn(self.linear2(preds))) preds = self.activation(self.linear3(preds)) preds = self.output(preds) return preds model = Model().to(device) criterion = MSELoss() optimizer = torch.optim.Adam(model.parameters(),lr=0.001) epochs = 100 bathc_size = 32 wandb.init(project=PROJECT_NAME,name='baseline-CNN') for _ in tqdm(range(epochs)): for i in range(0,len(X_train),batch_size): X_batch = X_train[i:i+batch_size] y_batch = y_train[i:i+batch_size] model.to(device) preds = model(X_batch) loss = criterion(preds,y_batch) optimizer.zero_grad() loss.backward() optimizer.step() 
model.eval() torch.cuda.empty_cache() wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2}) torch.cuda.empty_cache() wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)}) torch.cuda.empty_cache() wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2}) torch.cuda.empty_cache() wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)}) torch.cuda.empty_cache() model.train() wandb.finish() ```
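Neither training loop above shows how to use the fitted network afterwards. The following is a rough inference sketch of my own (not from the notebook): it assumes the `model`, `labels_r`, and `device` objects defined above, and preprocesses a single image the same way as `load_data` (read with cv2, resized to 56x56, scaled to [0, 1]); the function name and the example path are hypothetical.

```
import cv2
import numpy as np
import torch

def predict_image(path, model, labels_r, device='cuda'):
    # Same preprocessing as load_data: read, resize to 56x56, scale to [0, 1].
    img = cv2.imread(path)
    img = cv2.resize(img, (56, 56)) / 255.0
    x = torch.from_numpy(np.array(img)).view(-1, 3, 56, 56).float().to(device)
    model.eval()
    with torch.no_grad():
        pred = model(x)
    # labels_r maps the 1-based class index back to the folder name;
    # argmax over the one-hot output is 0-based, hence the +1.
    return labels_r[int(torch.argmax(pred)) + 1]

# Example (hypothetical path):
# print(predict_image('./data/forest/10007.jpg', model, labels_r))
```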
Caníbales y misioneros mediante búsqueda primero en anchura. ``` from copy import deepcopy from collections import deque import sys # (m, c, b) hace referencia a el número de misioneros, canibales y el bote class Estado(object): def __init__(self, misioneros, canibales, bote): self.misioneros = misioneros self.canibales = canibales self.bote = bote #se establecen los movimientos que estos tendran def siguientes(self): if self.bote == 1: signo = -1 direction = "Ida" else: signo = 1 direction = "Vuelta" for m in range(3): for c in range(3): nuevo_Estado = Estado(self.misioneros+signo*m, self.canibales+signo*c, self.bote+signo*1); if m+c >= 1 and m+c <= 2 and nuevo_Estado.validar(): # comprobar si la acción y el estado resultante son válidos accion = " %d misioneros y %d canibales %s. %r" % ( m, c, direction, nuevo_Estado) yield accion, nuevo_Estado def validar(self): # validacion inicial if self.misioneros < 0 or self.canibales < 0 or self.misioneros > 3 or self.canibales > 3 or (self.bote != 0 and self.bote != 1): return False # luego verifica si los misioneros superan en número a los canibales # más canibales que misioneros en la orilla original if self.canibales > self.misioneros and self.misioneros > 0: return False # más canibales que misioneros en otra orilla if self.canibales < self.misioneros and self.misioneros < 3: return False return True # valida estado objetivo def estado_final(self): return self.canibales == 0 and self.misioneros == 0 and self.bote == 0 # funcion para devolver los estados en los que se encuentra def __repr__(self): return "< Estado (%d, %d, %d) >" % (self.misioneros, self.canibales, self.bote) # clase nodo class Nodo(object): #se crea la clase nodo para hacer la relacion con sus adyacencias def __init__(self, nodo_pariente, estado, accion, profundidad): self.nodo_pariente = nodo_pariente self.estado = estado self.accion = accion self.profundidad = profundidad # metodo expandir #Busca lo nodos adyacentes def expandir(self): for (accion, estado_sig) in self.estado.siguientes(): nodo_sig = Nodo( nodo_pariente=self, estado=estado_sig, accion=accion, profundidad=self.profundidad + 1) yield nodo_sig # funcion para guardar y devolver la solucion del problema def devolver_solucion(self): solucion = [] nodo = self while nodo.nodo_pariente is not None: solucion.append(nodo.accion) nodo = nodo.nodo_pariente solucion.reverse() return solucion # metodo BFS - busqueda por anchura def BFS(Estado_inicial): nodo_inicial = Nodo( nodo_pariente=None, estado=Estado_inicial, accion=None, profundidad=0) # Se establece la conexion con los nodos hijos cola = deque([nodo_inicial]) #se crea la cola profundidad_maxima = -1 while True: if not cola: return None nodo = cola.popleft() if nodo.profundidad > profundidad_maxima: #se defienen los nuevos nodos profundidad_maxima = nodo.profundidad if nodo.estado.estado_final(): solucion = nodo.devolver_solucion() return solucion cola.extend(nodo.expandir()) def main(): #Caso prueba Estado_inicial = Estado(3,3,1) solucion = BFS(Estado_inicial) if solucion is None: print("no tiene solucion") else: for pasos in solucion: print("%s" % pasos) if __name__ == "__main__": main() ``` Función en Python que reciba un grafo, un nodo inicial y un nodo final y devuelva la ruta del nodo inicial al nodo final utilizando búsqueda primero en profundidad. Se deben crear las clases Grafo y Nodo con sus respectivos métodos y atributos. La función debe retornar None en caso de que no haya ninguna ruta posible. 
``` class Enlace: #pendientes def __init__(self, a=None, b=None): self.a = a self.b = b def __eq__(self, other): return self.a == other.a and self.b == other.b def __str__(self): return "(" + str(self.a) + "," + str(self.b) + ")" def __repr__(self): return self.__str__() class Nodo: def __init__(self, padre=None, nombre=""): self.padre = padre self.nombre = nombre def __eq__(self, other): return self.nombre == other.nombre def __str__(self): return self.nombre def __repr__(self): return self.__str__() class Grafo: #se crea el grafo con los nodos y enlaces entre si def __init__(self, nodos=[], enlaces=[]): self.nodos = nodos self.enlaces = enlaces def __str__(self): return "Nodos : " + str(self.nodos) + " Enlaces : " + str(self.enlaces) def __repr__(self): return self.__str__() def hallarRuta(grafo, nodoInicial, nodoFinal): #Se halla el recorrido final del grafo actual = nodoInicial p=[actual] estadoFinal = nodoFinal visitados = [actual] rutaFinal=[] while len(p) > 0: # se verifica que el grafo no este vacio, que a este llegen los nodos adyacentes if (actual!=estadoFinal): siguiente = generar_siguiente(actual, grafo, visitados) if siguiente != Nodo(nombre=""): p.append(siguiente) actual = p[len(p)-1] visitados.append(actual)#se toma cómo agrega a la "ruta" de visitados el nodo en el que nos hallamos else: p.pop() actual = p[len(p)-1] else: while len(p) > 0: rutaFinal.insert(0,p.pop())#finalmente se le insertan todos todos los visitados a rutaFinal break print("La ruta final es") print(rutaFinal) def generar_siguiente(actual, grafo, visitados):# una ves visitado uno de los nodos, pasa al siguiente nodo, al siguiente hijo #comprueba si este nodo ya fue visitado o no for i in range(len(grafo.enlaces)): if actual.nombre == grafo.enlaces[i].a: nodoSiguiente = Nodo(padre=actual.nombre, nombre=grafo.enlaces[i].b) if nodoSiguiente not in visitados: return nodoSiguiente break return Nodo(nombre="") #EJEMPLO #acontinuación se definen las adyacencia del grafo a consultar el recorrrido nodos = [Nodo(nombre="A"),Nodo(nombre="B"),Nodo(nombre="C"),Nodo(nombre="D"),Nodo(nombre="F"),Nodo(nombre="E")] E1 = Enlace(a = "A", b = "C" ) E2 = Enlace(a = "A", b = "D" ) E3 = Enlace(a = "A", b = "F" ) E4 = Enlace(a = "B", b = "E" ) E5 = Enlace(a = "C", b = "F" ) E6 = Enlace(a = "D", b = "E" ) E7 = Enlace(a = "E", b = "F" ) enlaces = [E1,E2,E3,E4,E5,E6,E7] grafo = Grafo(nodos=nodos,enlaces=enlaces) nodoA = Nodo(nombre="A") nodoB = Nodo(nombre="F") ruta = hallarRuta(grafo, nodoA, nodoB) print(ruta) ``` Desarrollar un programa en Python que solucione el problema del rompecabezas des- lizante para 8 números utilizando búsqueda en anchura . El programa debe leer el estado inicial desde un archivo. Algunas configuraciones no tienen solución. ``` class Estados(): def __init__(self,Mat,Npadre=None): self.Npadre=Npadre self.Mat=Mat #Mat=[[0 1 2],[3 4 5],[6 7 8]] def BuscZ(self): #Buscar el 0 dentro de la matriz itera=0 Pos=[] for y,i in enumerate(self.Mat): if 0 in i: itera+=1 Pos.append(y) Pos.append(i.index(0)) #Funcion Busca 0 return Pos #prueba=Estados([[1,2,0],[3,4,5],[6,7,8]]) #prueba.BuscZ() #Buscar Hijos- Movimientos def BuscaH(): #arriba #abajo #derecha #izquierda ``` Desarrollar un programa en Python que encuentre la ruta de salida en un laberinto representado por una matriz de 0 y 1. Un 0 significa que se puede pasar por esa casilla un 1 representa que hay pared en dicha casilla y 2 que es la salida. 
El programa debe leer la configuración del laberinto desde un archivo, solicitar al usuario el estado inicial y dibujar el laberinto con la ruta de salida. Se debe utilizar búsqueda primero en profundidad. ``` #creacion de ambiente from IPython.display import display import ipywidgets as widgets import time import random # se crea el tablero class Tablero: def __init__(self, tamanoCelda=(40, 40), nCeldas=(5,5)): #dimensiones del tablero self.out = widgets.HTML() display(self.out) self.tamanoCelda = tamanoCelda self.nCeldas = nCeldas def dibujar(self, agente , trashes, obstaculos,pila=[]): tablero = "<h1 style='color:green'>Encuentra la salida </h1>" tablero+="<br>" tablero += "<table border='1' >{}</table>" filas = "" for i in range(self.nCeldas[0]): s = "" for j in range(self.nCeldas[1]): agenteaux = Agente(x = j, y = i) if agente == agenteaux: contenido = \ "<span style='font-size:{tamanoEmoticon}px;'>{emoticon}</span>".\ format(tamanoEmoticon=agente.tamanoEmoticon, emoticon=agente.emoticon) elif agenteaux in trashes: index = trashes.index(agenteaux) contenido = \ "<span style='font-size:{tamanoEmoticon}px;'>{emoticon}</span>".\ format(tamanoEmoticon=trashes[index].tamanoEmoticon, emoticon=trashes[index].emoticon) elif agenteaux in obstaculos: index = obstaculos.index(agenteaux) contenido = \ "<span style='font-size:{tamanoEmoticon}px;'>{emoticon}</span>".\ format(tamanoEmoticon=obstaculos[index].tamanoEmoticon, emoticon=obstaculos[index].emoticon) elif Nodo(x=j,y=i) in pila: contenido = \ "<span style='font-size:{tamanoEmoticon}px;'>{emoticon}</span>".\ format(tamanoEmoticon=30, emoticon="👣") else: contenido="" s += "<td style='height:{alto}px;width:{ancho}px'>{contenido}</td>".\ format(alto=self.tamanoCelda[0], ancho=self.tamanoCelda[1], contenido=contenido) filas += "<tr>{}</tr>".format(s) tablero = tablero.format(filas) self.out.value=tablero return trashes #Agente #se crea la clase agente y los movimientos que este tendra class Agente: def __init__(self, x=2, y=2, emoticon="🧍‍♂️", tamanoEmoticon=30): #estado inicial del agente self.x = x self.y = y self.emoticon = emoticon self.tamanoEmoticon = tamanoEmoticon def __eq__(self, other): return self.x == other.x and self.y == other.y def __str__(self): return "("+str(self.x)+","+str(self.y)+","+self.emoticon+")" def __repr__(self): return self.__str__() #se definen los movimientos que tendra el agente #movimientos en "X" y en "Y" respectivamente def Abajo(self): # movimiento hacia abajo self.y += 1 def Arriba(self): #movimiento hacia arriba self.y -= 1 def Derecha(self): #movimiento hacia la derecha self.x += 1 def Izquierda(self): #Movimiento hacia la izquierda self.x -= 1 class Nodo: # se define la clase nodo def __init__(self, x=0, y=0): self.x = x self.y = y def __eq__(self, other): return self.x == other.x and self.y==other.y def __str__(self): return str("(posX: "+str(self.x)+", posY: "+str(self.y)+")") def __repr__(self): return self.__str__() def generar_siguiente(actual, visitados): #se comprueba si la posicion ya fue visitado o no, si ya lo fue busca otro nodo posx=int(actual.x) posy=int(actual.y) if posy < len(nombres)-1: #Movimiento hacia abajo Abajo=int(arreglo[posy+1][posx]) if Abajo==0 or Abajo==2: Siguiente=Nodo(x=posx,y=posy+1) if Siguiente not in visitados: return Siguiente #Movimiento hacia la derecha if posx < len(nombres)-1 : Derecha=int(arreglo[posy][posx+1]) if Derecha==0 or Derecha==2: Siguiente=Nodo(x=posx+1,y=posy) if Siguiente not in visitados: return Siguiente if posy > 0 : #MOVIMIENTO HACIA ARRIBA, MIENTRAS 
QUE NI SE SAlGA SE LAS DIMENSIONES Arriba=int(arreglo[posy-1][posx]) if Arriba==0 or Arriba==2: Siguiente=Nodo(x=posx,y=posy-1) if Siguiente not in visitados: return Siguiente if posx>0: #MOVIMIENTO HACIA LA IZQUIERDA, HASTA EL BORDE DEL ESCENARIO Izq=int(arreglo[posy][posx-1]) if Izq==0 or Izq==2: Siguiente=Nodo(x=posx-1,y=posy) if Siguiente not in visitados: return Siguiente return Nodo(x=-1,y=-1) def HallarRuta(agente , elementos): # SE DEFINE LA FUNCION, LA CUAL NOS AYUDARA A BUSCAR LA SALIDA DEL LABERINTO escenario=elementos[0] # print(arreglo) for i in range(len(arreglo)):# se recorre nombres.append(i) #rSE RECORRE TODO EL GRAFO PARA SABER EL NODO FINAL estadoFinal=Nodo() for i in range(len(nombres)): arreglito=arreglo[i] for j in range(len(arreglito)): if int(arreglito[j])==2: estadoFinal=Nodo(x=i,y=j) actual = Nodo(x=agente.x,y=agente.y) visitados = [actual] pila=[actual] while len(pila)!=0 and actual != estadoFinal: # MIENTRAS LA PILA EN LA QUE SE ALMACENAN LOS NO CONSULTADOS NO SEA 0 #Se busca el siguiente nodo, ignorando los ya visitados siguienteNodo=generar_siguiente(actual,visitados) if siguienteNodo != Nodo(x=-1,y=-1): pila.append(siguienteNodo)# el nodo actual pasa a ser visitado actual=siguienteNodo # se busca un nuevo nodo agente.x=int(actual.x) agente.y=int(actual.y) visitados.append(actual) else: pila.pop() actual=pila[len(pila)-1] agente.x=int(actual.x) agente.y=int(actual.y) escenario.dibujar(agente, elementos[1], elementos[2],pila) # se dibuja el escenario time.sleep(1) # tiempo de retraso #Importar para leer archivo csv from google.colab import files files.upload() import csv, operator def obtenerArregloArchivo(): Trashes=[] Obstaculos=[] cont=0 with open('EscenarioPunto#4.csv') as csvarchivo: entrada = csv.reader(csvarchivo) arreglo= [] for reg in entrada: arreglo.append(reg) if cont == 0: cantfilas = len(reg) for i in range(len(reg)): if len(reg) == 1: agente.energia = int(reg[0]) else: if reg[i] == '1': Obstaculos.append(Agente(x=i , y=cont , emoticon="⛓️" )) if reg[i] == '2': Trashes.append(Agente(x=i , y=cont , emoticon="🥇" )) cont+=1 escenario = Tablero(nCeldas=(cont,cantfilas)) elementos = [escenario , Trashes, Obstaculos, agente, arreglo] return elementos #crearEscenarioArchivo() #Posicion inicil del agente posx = input(" X: ") posy = input(" Y: ") nombres=[] agente = Agente(x=int(posx), y = int(posy)) elementos = obtenerArregloArchivo() arreglo=elementos[4] ruta = HallarRuta(agente , elementos) ```
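The sliding-puzzle section earlier in this notebook (class `Estados`) stops before implementing the successor generation (`BuscaH`). Below is a minimal, standalone sketch of how the neighbour states could be generated by swapping the blank (0) tile with each adjacent tile; the function names `find_zero` and `successors` are hypothetical, and the code works on plain 3x3 lists like the `Mat` attribute used there.

```
def find_zero(mat):
    # Locate the blank tile (0) and return its (row, col) position.
    for r, row in enumerate(mat):
        if 0 in row:
            return r, row.index(0)

def successors(mat):
    # Return the boards reachable by sliding one tile into the blank space.
    r, c = find_zero(mat)
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
    children = []
    for dr, dc in moves:
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            child = [row[:] for row in mat]       # copy the board
            child[r][c], child[nr][nc] = child[nr][nc], child[r][c]
            children.append(child)
    return children

# Example with the board used in the commented-out test above.
for child in successors([[1, 2, 0], [3, 4, 5], [6, 7, 8]]):
    print(child)
```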
# Running and Plotting Coeval Cubes The aim of this tutorial is to introduce you to how `21cmFAST` does the most basic operations: producing single coeval cubes, and visually verifying them. It is a great place to get started with `21cmFAST`. ``` %matplotlib inline import matplotlib.pyplot as plt import os # We change the default level of the logger so that # we can see what's happening with caching. import logging, sys, os logger = logging.getLogger('21cmFAST') logger.setLevel(logging.INFO) import py21cmfast as p21c # For plotting the cubes, we use the plotting submodule: from py21cmfast import plotting # For interacting with the cache from py21cmfast import cache_tools print(f"Using 21cmFAST version {p21c.__version__}") ``` Clear the cache so that we get the same results for the notebook every time (don't worry about this for now). Also, set the default output directory to `_cache/`: ``` if not os.path.exists('_cache'): os.mkdir('_cache') p21c.config['direc'] = '_cache' cache_tools.clear_cache(direc="_cache") ``` ## Basic Usage The simplest (and typically most efficient) way to produce a coeval cube is simply to use the `run_coeval` method. This consistently performs all steps of the calculation, re-using any data that it can without re-computation or increased memory overhead. ``` coeval8, coeval9, coeval10 = p21c.run_coeval( redshift = [8.0, 9.0, 10.0], user_params = {"HII_DIM": 100, "BOX_LEN": 100, "USE_INTERPOLATION_TABLES": True}, cosmo_params = p21c.CosmoParams(SIGMA_8=0.8), astro_params = p21c.AstroParams({"HII_EFF_FACTOR":20.0}), random_seed=12345 ) ``` There are a number of possible inputs for `run_coeval`, which you can check out either in the [API reference](../reference/py21cmfast.html) or by calling `help(p21c.run_coeval)`. Notably, the `redshift` must be given: it can be a single number, or a list of numbers, defining the redshift at which the output coeval cubes will be defined. Other params we've given here are `user_params`, `cosmo_params` and `astro_params`. These are all used for defining input parameters into the backend C code (there's also another possible input of this kind; `flag_options`). These can be given either as a dictionary (as `user_params` has been), or directly as a relevant object (like `cosmo_params` and `astro_params`). If creating the object directly, the parameters can be passed individually or via a single dictionary. So there's a lot of flexibility there! Nevertheless we *encourage* you to use the basic dictionary. The other ways of passing the information are there so we can use pre-defined objects later on. For more information about these "input structs", see the [API docs](../reference/_autosummary/py21cmfast.inputs.html). We've also given a `direc` option: this is the directory in which to search for cached data (and also where cached data should be written). Throughout this notebook we're going to set this directly to the `_cache` folder, which allows us to manage it directly. By default, the cache location is set in the global configuration in `~/.21cmfast/config.yml`. You'll learn more about caching further on in this tutorial. Finally, we've given a random seed. This sets all the random phases for the simulation, and ensures that we can exactly reproduce the same results on every run. The output of `run_coeval` is a list of `Coeval` instances, one for each input redshift (it's just a single object if a single redshift was passed, not a list). 
They store *everything* related to that simulation, so that it can be completely compared to other simulations. For example, the input parameters: ``` print("Random Seed: ", coeval8.random_seed) print("Redshift: ", coeval8.redshift) print(coeval8.user_params) ``` This is where the utility of being able to pass a *class instance* for the parameters arises: we could run another iteration of coeval cubes, with the same user parameters, simply by doing `p21c.run_coeval(user_params=coeval8.user_params, ...)`. Also in the `Coeval` instance are the various outputs from the different steps of the computation. You'll see more about what these steps are further on in the tutorial. But for now, we show that various boxes are available: ``` print(coeval8.hires_density.shape) print(coeval8.brightness_temp.shape) ``` Along with these, full instances of the output from each step are available as attributes that end with "struct". These instances themselves contain the `numpy` arrays of the data cubes, and some other attributes that make them easier to work with: ``` coeval8.brightness_temp_struct.global_Tb ``` By default, each of the components of the cube are cached to disk (in our `_cache/` folder) as we run it. However, the `Coeval` cube itself is _not_ written to disk by default. Writing it to disk incurs some redundancy, since that data probably already exists in the cache directory in seperate files. Let's save to disk. The save method by default writes in the current directory (not the cache!): ``` filename = coeval8.save(direc='_cache') ``` The filename of the saved file is returned: ``` print(os.path.basename(filename)) ``` Such files can be read in: ``` new_coeval8 = p21c.Coeval.read(filename, direc='.') ``` Some convenient plotting functions exist in the `plotting` module. These can work directly on `Coeval` objects, or any of the output structs (as we'll see further on in the tutorial). By default the `coeval_sliceplot` function will plot the `brightness_temp`, using the standard traditional colormap: ``` fig, ax = plt.subplots(1,3, figsize=(14,4)) for i, (coeval, redshift) in enumerate(zip([coeval8, coeval9, coeval10], [8,9,10])): plotting.coeval_sliceplot(coeval, ax=ax[i], fig=fig); plt.title("z = %s"%redshift) plt.tight_layout() ``` Any 3D field can be plotted, by setting the `kind` argument. For example, we could alternatively have plotted the dark matter density cubes perturbed to each redshift: ``` fig, ax = plt.subplots(1,3, figsize=(14,4)) for i, (coeval, redshift) in enumerate(zip([coeval8, coeval9, coeval10], [8,9,10])): plotting.coeval_sliceplot(coeval, kind='density', ax=ax[i], fig=fig); plt.title("z = %s"%redshift) plt.tight_layout() ``` To see more options for the plotting routines, see the [API Documentation](../reference/_autosummary/py21cmfast.plotting.html). `Coeval` instances are not cached themselves -- they are containers for data that is itself cached (i.e. each of the `_struct` attributes of `Coeval`). See the [api docs](../reference/_autosummary/py21cmfast.outputs.html) for more detailed information on these. You can see the filename of each of these structs (or the filename it would have if it were cached -- you can opt to *not* write out any given dataset): ``` coeval8.init_struct.filename ``` You can also write the struct anywhere you'd like on the filesystem. This will not be able to be automatically used as a cache, but it could be useful for sharing files with colleagues. 
``` coeval8.init_struct.save(fname='my_init_struct.h5') ``` This brief example covers most of the basic usage of `21cmFAST` (at least with `Coeval` objects -- there are also `Lightcone` objects for which there is a separate tutorial). For the rest of the tutorial, we'll cover a more advanced usage, in which each step of the calculation is done independently. ## Advanced Step-by-Step Usage Most users most of the time will want to use the high-level `run_coeval` function from the previous section. However, there are several independent steps when computing the brightness temperature field, and these can be performed one-by-one, adding any other effects between them if desired. This means that the new `21cmFAST` is much more flexible. In this section, we'll go through in more detail how to use the lower-level methods. Each step in the chain will receive a number of input-parameter classes which define how the calculation should run. These are the `user_params`, `cosmo_params`, `astro_params` and `flag_options` that we saw in the previous section. Conversely, each step is performed by running a function which will return a single object. Every major function returns an object of the same fundamental class (an ``OutputStruct``) which has various methods for reading/writing the data, and ensuring that it's in the right state to receive/pass to and from C. These are the objects stored as `init_box_struct` etc. in the `Coeval` class. As we move through each step, we'll outline some extra details, hints and tips about using these inputs and outputs. ### Initial Conditions The first step is to get the initial conditions, which defines the *cosmological* density field before any redshift evolution is applied. ``` initial_conditions = p21c.initial_conditions( user_params = {"HII_DIM": 100, "BOX_LEN": 100}, cosmo_params = p21c.CosmoParams(SIGMA_8=0.8), random_seed=54321 ) ``` We've already come across all these parameters as inputs to the `run_coeval` function. Indeed, most of the steps have very similar interfaces, and are able to take a random seed and parameters for where to look for the cache. We use a different seed than in the previous section so that all our boxes are "fresh" (we'll show how the caching works in a later section). These initial conditions have 100 cells per side, and a box length of 100 Mpc. Note again that they can either be passed as a dictionary containing the input parameters, or an actual instance of the class. While the former is the suggested way, one benefit of the latter is that it can be queried for the relevant parameters (by using ``help`` or a post-fixed ``?``), or even queried for defaults: ``` p21c.CosmoParams._defaults_ ``` (these defaults correspond to the Planck15 cosmology contained in Astropy). So what is in the ``initial_conditions`` object? It is what we call an ``OutputStruct``, and we have seen it before, as the `init_box_struct` attribute of `Coeval`. It contains a number of arrays specifying the density and velocity fields of our initial conditions, as well as the defining parameters. 
For example, we can easily show the cosmology parameters that are used (note the non-default $\sigma_8$ that we passed): ``` initial_conditions.cosmo_params ``` A handy tip is that the ``CosmoParams`` class also has a reference to a corresponding Astropy cosmology, which can be used more broadly: ``` initial_conditions.cosmo_params.cosmo ``` Merely printing the initial conditions object gives a useful representation of its dependent parameters: ``` print(initial_conditions) ``` (side-note: the string representation of the object is used to uniquely define it in order to save it to the cache... which we'll explore soon!). To see which arrays are defined in the object, access the ``fieldnames`` (this is true for *all* `OutputStruct` objects): ``` initial_conditions.fieldnames ``` The `coeval_sliceplot` function also works on `OutputStruct` objects (as well as the `Coeval` object as we've already seen). It takes the object, and a specific field name. By default, the field it plots is the _first_ field in `fieldnames` (for any `OutputStruct`). ``` plotting.coeval_sliceplot(initial_conditions, "hires_density"); ``` ### Perturbed Field After obtaining the initial conditions, we need to *perturb* the field to a given redshift (i.e. the redshift we care about). This step clearly requires the results of the previous step, which we can easily just pass in. Let's do that: ``` perturbed_field = p21c.perturb_field( redshift = 8.0, init_boxes = initial_conditions ) ``` Note that we didn't need to pass in any input parameters, because they are all contained in the `initial_conditions` object itself. The random seed is also taken from this object. Again, the output is an `OutputStruct`, so we can view its fields: ``` perturbed_field.fieldnames ``` This time, it has only density and velocity (the velocity direction is chosen without loss of generality). Let's view the perturbed density field: ``` plotting.coeval_sliceplot(perturbed_field, "density"); ``` It is clear here that the density used is the *low*-res density, but the overall structure of the field looks very similar. ### Ionization Field Next, we need to ionize the box. This is where things get a little more tricky. In the simplest case (which, let's be clear, is what we're going to do here) the ionization occurs at the *saturated limit*, which means we can safely ignore the contribution of the spin temperature. This means we can directly calculate the ionization on the density/velocity fields that we already have. A few more parameters are needed here, and so two more "input parameter dictionaries" are available, ``astro_params`` and ``flag_options``. Again, a reminder that their parameters can be viewed by using eg. `help(p21c.AstroParams)`, or by looking at the [API docs](../reference/_autosummary/py21cmfast.inputs.html). For now, let's leave everything as default. In that case, we can just do: ``` ionized_field = p21c.ionize_box( perturbed_field = perturbed_field ) ``` That was easy! All the information required by ``ionize_box`` was given directly by the ``perturbed_field`` object. If we had _also_ passed a redshift explicitly, this redshift would be checked against that from the ``perturbed_field`` and an error raised if they were incompatible: Let's see the fieldnames: ``` ionized_field.fieldnames ``` Here the ``first_box`` field is actually just a flag to tell the C code whether this has been *evolved* or not. Here, it hasn't been, it's the "first box" of an evolutionary chain. 
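As noted above, `ionize_box` also accepts an explicit `redshift`, and it is checked against the redshift stored in the `perturbed_field`. Here is a minimal sketch of that behaviour (the variable name and the exact exception type are assumptions on my part):

```
# Passing a redshift consistent with the perturbed field works as before.
ionized_field_explicit = p21c.ionize_box(redshift=8.0, perturbed_field=perturbed_field)

# An inconsistent redshift is rejected.
try:
    p21c.ionize_box(redshift=9.0, perturbed_field=perturbed_field)
except ValueError as err:   # assumed exception type
    print("Incompatible redshift:", err)
```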
Let's plot the neutral fraction: ``` plotting.coeval_sliceplot(ionized_field, "xH_box"); ``` ### Brightness Temperature Now we can use what we have to get the brightness temperature: ``` brightness_temp = p21c.brightness_temperature(ionized_box=ionized_field, perturbed_field=perturbed_field) ``` This has only a single field, ``brightness_temp``: ``` plotting.coeval_sliceplot(brightness_temp); ``` ### The Problem And there you have it -- you've computed each of the four steps (there's actually another, `spin_temperature`, that you require if you don't assume the saturated limit) individually. However, some problems quickly arise. What if you want the `perturb_field`, but don't care about the initial conditions? We know how to get the full `Coeval` object in one go, but it would seem that the sub-boxes have to _each_ be computed as the input to the next. A perhaps more interesting problem is that some quantities require *evolution*: i.e. a whole bunch of simulations at a string of redshifts must be performed in order to obtain the current redshift. This is true when not in the saturated limit, for example. That means you'd have to manually compute each redshift in turn, and pass it to the computation at the next redshift. While this is definitely possible, it becomes difficult to set up manually when all you care about is the box at the final redshift. `py21cmfast` solves this by making each of the functions recursive: if `perturb_field` is not passed the `init_boxes` that it needs, it will go and compute them, based on the parameters that you've passed it. If the previous `spin_temp` box required for the current redshift is not passed -- it will be computed (and if it doesn't have a previous `spin_temp` *it* will be computed, and so on). That's all good, but what if you now want to compute another `perturb_field`, with the same fundamental parameters (but at a different redshift)? Since you didn't ever see the `init_boxes`, they'll have to be computed all over again. That's where the automatic caching comes in, which is where we turn now... ## Using the Automatic Cache To solve all this, ``21cmFAST`` uses an on-disk caching mechanism, where all boxes are saved in HDF5 format in a default location. The cache allows for reading in previously-calculated boxes automatically if they match the parameters that are input. The functions used at every step (in the previous section) will try to use a cached box instead of calculating a new one, unless its explicitly asked *not* to. Thus, we could do this: ``` perturbed_field = p21c.perturb_field( redshift = 8.0, user_params = {"HII_DIM": 100, "BOX_LEN": 100}, cosmo_params = p21c.CosmoParams(SIGMA_8=0.8), ) plotting.coeval_sliceplot(perturbed_field, "density"); ``` Note that here we pass exactly the same parameters as were used in the previous section. It gives a message that the full box was found in the cache and immediately returns. However, if we change the redshift: ``` perturbed_field = p21c.perturb_field( redshift = 7.0, user_params = {"HII_DIM": 100, "BOX_LEN": 100}, cosmo_params = p21c.CosmoParams(SIGMA_8=0.8), ) plotting.coeval_sliceplot(perturbed_field, "density"); ``` Now it finds the initial conditions, but it must compute the perturbed field at the new redshift. 
If we had changed the initial parameters as well, it would have to calculate everything: ``` perturbed_field = p21c.perturb_field( redshift = 8.0, user_params = {"HII_DIM": 50, "BOX_LEN": 100}, cosmo_params = p21c.CosmoParams(SIGMA_8=0.8), ) plotting.coeval_sliceplot(perturbed_field, "density"); ``` This shows that we don't need to perform the *previous* step to do any of the steps; they will be calculated automatically. Now, let's get an ionized box, but this time we won't assume the saturated limit, so we need to use the spin temperature. We can do this directly in the ``ionize_box`` function, but let's do it explicitly. We will use the auto-generation of the initial conditions and perturbed field. However, the spin temperature is an *evolved* field, i.e. to compute the field at $z$, we need to know the field at $z+\Delta z$. This continues up to some redshift, labelled ``z_heat_max``, above which the spin temperature can be defined directly from the perturbed field. Thus, one option is to pass to the function a *previous* spin temperature box, to evolve to *this* redshift. However, we don't have a previous spin temperature box yet. Of course, the function itself will go and calculate that box if it's not given (or read it from cache if it's been calculated before!). When it tries to do that, it will go to the one before, and so on until it reaches ``z_heat_max``, at which point it will calculate it directly. To facilitate this recursive progression up the redshift ladder, there is a parameter, ``zprime_step_factor``, which is a multiplicative factor that determines the previous redshift at each step. We can also pass the dependent boxes explicitly, which provides the parameters necessary. **WARNING: THIS IS THE MOST TIME-CONSUMING STEP OF THE CALCULATION!** ``` spin_temp = p21c.spin_temperature( perturbed_field = perturbed_field, zprime_step_factor=1.05, ) plotting.coeval_sliceplot(spin_temp, "Ts_box"); ``` Let's note here that each of the functions accepts a few of the same arguments that modify how the boxes are cached. There is a ``write`` argument, which if set to ``False``, will disable writing that box to cache (and it is passed through the recursive hierarchy). There is also ``regenerate``, which if ``True``, forces this box and all its predecessors to be re-calculated even if they exist in the cache. Then there is ``direc``, which we have seen before. Finally note that by default, ``random_seed`` is set to ``None``. If this is the case, then any cached dataset matching all other parameters will be read in, and the ``random_seed`` will be set based on the file read in. If it is set to an integer number, then the cached dataset must also match the seed. If it is ``None``, and no matching dataset is found, a random seed will be autogenerated. (A short sketch illustrating these cache-control arguments is given at the end of this subsection.) Now if we calculate the ionized box, ensuring that it uses the spin temperature, then it will also need to be evolved. However, due to the fact that we cached each of the spin temperature steps, these should be read in accordingly: ``` ionized_box = p21c.ionize_box( spin_temp = spin_temp, zprime_step_factor=1.05, ) plotting.coeval_sliceplot(ionized_box, "xH_box"); ``` Great!
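Here is the minimal sketch of the cache-control arguments referred to above (an illustration added to this text, not one of the original cells; the keyword names follow the description above and exact signatures may vary between versions):
```
# Sketch only: force a fresh calculation, keep it out of the cache, and use a
# custom cache directory. "_my_cache" is a hypothetical path of your choosing.
spin_temp_fresh = p21c.spin_temperature(
    perturbed_field = perturbed_field,
    zprime_step_factor = 1.05,
    regenerate = True,    # recalculate even if a cached box exists
    write = False,        # do not write this box (or its predecessors) to the cache
    direc = "_my_cache",  # where to look for / store cached boxes
)
```
With ``random_seed`` left at ``None``, a matching cached box would normally be reused; ``regenerate=True`` bypasses exactly that behaviour.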
So again, we can just get the brightness temp: ``` brightness_temp = p21c.brightness_temperature( ionized_box = ionized_box, perturbed_field = perturbed_field, spin_temp = spin_temp ) ``` Now lets plot our brightness temperature, which has been evolved from high redshift with spin temperature fluctuations: ``` plotting.coeval_sliceplot(brightness_temp); ``` We can also check what the result would have been if we had limited the maximum redshift of heating. Note that this *recalculates* all previous spin temperature and ionized boxes, because they depend on both ``z_heat_max`` and ``zprime_step_factor``. ``` ionized_box = p21c.ionize_box( spin_temp = spin_temp, zprime_step_factor=1.05, z_heat_max = 20.0 ) brightness_temp = p21c.brightness_temperature( ionized_box = ionized_box, perturbed_field = perturbed_field, spin_temp = spin_temp ) plotting.coeval_sliceplot(brightness_temp); ``` As we can see, it's very similar!
``` import numpy as np import xarray as xr import scipy.ndimage as ndimage import cartopy.crs as ccrs import cartopy.feature as cfeature import matplotlib.gridspec as gridspec import matplotlib.pyplot as plt import metpy.calc as mpcalc from metpy.units import units from datetime import datetime # Get dataset from NOMADS Server ds = xr.open_dataset('http://nomads.ncep.noaa.gov:80/dods/gfs_0p25_1hr/gfs20201121/gfs_0p25_1hr_12z') # Select desired vars ds = ds[['hgtprs', 'tmpprs', 'ugrdprs', 'vgrdprs']] # Select time ds = ds.sel(time=ds.time[0]) # Select level ds = ds.sel(lev=850) # Select lat/lon slice ds = ds.sel(lon=slice(220, 310), lat=slice(15, 65)) ds # Calcualte dx and dy dx, dy = mpcalc.lat_lon_grid_deltas(ds.lon.values, ds.lat.values) # Set calculation units T, u, v = ds.tmpprs.values * units('K'), ds.ugrdprs.values * units('m/s'), ds.vgrdprs.values * units('m/s') # Calculate temperature advection T_adv = mpcalc.advection(scalar=T, wind=[u, v], deltas=(dx, dy), dim_order='yx') # Smooth data z = ndimage.gaussian_filter(ds.hgtprs, sigma=3, order=0) T_adv = ndimage.gaussian_filter(T_adv, sigma=3, order=0) * units('K / s') # Set plot units u = u.to('kt') v = v.to('kt') T = T.to('degC') T_adv = T_adv.to('degC/hr') * 3 # Set Projection of Data datacrs = ccrs.PlateCarree() # Set Projection of Plot plotcrs = ccrs.LambertConformal(central_latitude=[30, 60], central_longitude=-100) # Create new figure fig = plt.figure(figsize=(15, 12.5)) gs = gridspec.GridSpec(2, 1, height_ratios=[1, .02], bottom=.07, top=.99, hspace=0.01, wspace=0.01) # Add the map and set the extent ax = plt.subplot(gs[0], projection=plotcrs) ax.set_extent([235, 290, 20, 55]) # Add state/country boundaries to plot country_borders=cfeature.NaturalEarthFeature(category='cultural', name='admin_0_countries', scale='10m', facecolor='none') ax.add_feature(country_borders, edgecolor='black', linewidth=1.0) state_borders=cfeature.NaturalEarthFeature(category='cultural', name='admin_1_states_provinces_lakes', scale='10m', facecolor='none') ax.add_feature(state_borders, edgecolor='black', linewidth=0.5) # Plot Height Contours clev = np.arange(900, 3000, 30) cs = ax.contour(ds.lon, ds.lat, z, clev, colors='black', linewidths=2, transform=datacrs) plt.clabel(cs, fontsize=10, inline=1, inline_spacing=10, fmt='%i', rightside_up=True, use_clabeltext=True) # Plot Temperature Contours clevtemp850 = np.arange(-20, 20, 2) cs2 = ax.contour(ds.lon, ds.lat, T, clevtemp850, colors='grey', linewidths=1.25, linestyles='--', transform=datacrs) plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=10, fmt='%i', rightside_up=True, use_clabeltext=True) # Plot Colorfill of Temperature Advection cint = np.arange(-8, 9) cf = ax.contourf(ds.lon, ds.lat, T_adv, cint[cint != 0], extend='both', cmap='bwr', transform=datacrs) cb = plt.colorbar(cf, ax=ax, pad=0, aspect=50, orientation='horizontal', extendrect=True, ticks=cint) cb.set_label(r'$^{\circ}$C 3h$^{-1}$', size='large') # Plot Wind Barbs ax.barbs(ds.lon, ds.lat, u.magnitude, v.magnitude, length=6, regrid_shape=20, pivot='middle', transform=datacrs) # Change datetiem64 to datetime valid = datetime.utcfromtimestamp(ds.time.values.astype('O')/1e9) # Add plot headers plt.title(f'GFS 850mb Temperature Advection', loc='left') plt.title(f'Run: {valid.strftime("%a %Y-%m-%d %H:%M")} UTC\nValid: {valid.strftime("%a %Y-%m-%d %H:%M")} UTC', loc='right') # Add title plt.suptitle(f'weather.carterhumphreys.com', fontsize=16, x=0.50, y=0.90) # Export plot and close plt.show() ```
``` # Copyright 2019 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== ``` # Composing a pipeline from reusable, pre-built, and lightweight components This tutorial describes how to build a Kubeflow pipeline from reusable, pre-built, and lightweight components. The following provides a summary of the steps involved in creating and using a reusable component: - Write the program that contains your component’s logic. The program must use files and command-line arguments to pass data to and from the component. - Containerize the program. - Write a component specification in YAML format that describes the component for the Kubeflow Pipelines system. - Use the Kubeflow Pipelines SDK to load your component, use it in a pipeline and run that pipeline. Then, we will compose a pipeline from a reusable component, a pre-built component, and a lightweight component. The pipeline will perform the following steps: - Train an MNIST model and export it to Google Cloud Storage. - Deploy the exported TensorFlow model on AI Platform Prediction service. - Test the deployment by calling the endpoint with test data. Note: Ensure that you have Docker installed, if you want to build the image locally, by running the following command: `which docker` The result should be something like: `/usr/bin/docker` ``` import kfp import kfp.gcp as gcp import kfp.dsl as dsl import kfp.compiler as compiler import kfp.components as comp import datetime import kubernetes as k8s # Required Parameters PROJECT_ID='<ADD GCP PROJECT HERE>' GCS_BUCKET='gs://<ADD STORAGE LOCATION HERE>' ``` ## Create client If you run this notebook **outside** of a Kubeflow cluster, run the following command: - `host`: The URL of your Kubeflow Pipelines instance, for example "https://`<your-deployment>`.endpoints.`<your-project>`.cloud.goog/pipeline" - `client_id`: The client ID used by Identity-Aware Proxy - `other_client_id`: The client ID used to obtain the auth codes and refresh tokens. - `other_client_secret`: The client secret used to obtain the auth codes and refresh tokens. ```python client = kfp.Client(host, client_id, other_client_id, other_client_secret) ``` If you run this notebook **within** a Kubeflow cluster, run the following command: ```python client = kfp.Client() ``` You'll need to create OAuth client ID credentials of type `Other` to get `other_client_id` and `other_client_secret`. 
Learn more about [creating OAuth credentials]( https://cloud.google.com/iap/docs/authentication-howto#authenticating_from_a_desktop_app) ``` # Optional Parameters, but required for running outside Kubeflow cluster # The host for 'AI Platform Pipelines' ends with 'pipelines.googleusercontent.com' # The host for pipeline endpoint of 'full Kubeflow deployment' ends with '/pipeline' # Examples are: # https://7c021d0340d296aa-dot-us-central2.pipelines.googleusercontent.com # https://kubeflow.endpoints.kubeflow-pipeline.cloud.goog/pipeline HOST = '<ADD HOST NAME TO TALK TO KUBEFLOW PIPELINE HERE>' # For 'full Kubeflow deployment' on GCP, the endpoint is usually protected through IAP, therefore the following # will be needed to access the endpoint. CLIENT_ID = '<ADD OAuth CLIENT ID USED BY IAP HERE>' OTHER_CLIENT_ID = '<ADD OAuth CLIENT ID USED TO OBTAIN AUTH CODES HERE>' OTHER_CLIENT_SECRET = '<ADD OAuth CLIENT SECRET USED TO OBTAIN AUTH CODES HERE>' # This is to ensure the proper access token is present to reach the end point for 'AI Platform Pipelines' # If you are not working with 'AI Platform Pipelines', this step is not necessary ! gcloud auth print-access-token # Create kfp client in_cluster = True try: k8s.config.load_incluster_config() except: in_cluster = False pass if in_cluster: client = kfp.Client() else: if HOST.endswith('googleusercontent.com'): CLIENT_ID = None OTHER_CLIENT_ID = None OTHER_CLIENT_SECRET = None client = kfp.Client(host=HOST, client_id=CLIENT_ID, other_client_id=OTHER_CLIENT_ID, other_client_secret=OTHER_CLIENT_SECRET) ``` # Build reusable components ## Writing the program code The following cell creates a file `app.py` that contains a Python script. The script downloads MNIST dataset, trains a Neural Network based classification model, writes the training log and exports the trained model to Google Cloud Storage. Your component can create outputs that the downstream components can use as inputs. Each output must be a string and the container image must write each output to a separate local text file. For example, if a training component needs to output the path of the trained model, the component writes the path into a local file, such as `/output.txt`. ``` %%bash # Create folders if they don't exist. mkdir -p tmp/reuse_components_pipeline/mnist_training # Create the Python file that lists GCS blobs. 
cat > ./tmp/reuse_components_pipeline/mnist_training/app.py <<HERE import argparse from datetime import datetime import tensorflow as tf parser = argparse.ArgumentParser() parser.add_argument( '--model_path', type=str, required=True, help='Name of the model file.') parser.add_argument( '--bucket', type=str, required=True, help='GCS bucket name.') args = parser.parse_args() bucket=args.bucket model_path=args.model_path model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(512, activation=tf.nn.relu), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation=tf.nn.softmax) ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) print(model.summary()) mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 callbacks = [ tf.keras.callbacks.TensorBoard(log_dir=bucket + '/logs/' + datetime.now().date().__str__()), # Interrupt training if val_loss stops improving for over 2 epochs tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'), ] model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=callbacks, validation_data=(x_test, y_test)) from tensorflow import gfile gcs_path = bucket + "/" + model_path # The export require the folder is new if gfile.Exists(gcs_path): gfile.DeleteRecursively(gcs_path) tf.keras.experimental.export_saved_model(model, gcs_path) with open('/output.txt', 'w') as f: f.write(gcs_path) HERE ``` ## Create a Docker container Create your own container image that includes your program. ### Creating a Dockerfile Now create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The `FROM` statement specifies the Base Image from which you are building. `WORKDIR` sets the working directory. When you assemble the Docker image, `COPY` copies the required files and directories (for example, `app.py`) to the file system of the container. `RUN` executes a command (for example, install the dependencies) and commits the results. ``` %%bash # Create Dockerfile. # AI platform only support tensorflow 1.14 cat > ./tmp/reuse_components_pipeline/mnist_training/Dockerfile <<EOF FROM tensorflow/tensorflow:1.14.0-py3 WORKDIR /app COPY . /app EOF ``` ### Build docker image Now that we have created our Dockerfile for creating our Docker image. Then we need to build the image and push to a registry to host the image. There are three possible options: - Use the `kfp.containers.build_image_from_working_dir` to build the image and push to the Container Registry (GCR). This requires [kaniko](https://cloud.google.com/blog/products/gcp/introducing-kaniko-build-container-images-in-kubernetes-and-google-container-builder-even-without-root-access), which will be auto-installed with 'full Kubeflow deployment' but not 'AI Platform Pipelines'. - Use [Cloud Build](https://cloud.google.com/cloud-build), which would require the setup of GCP project and enablement of corresponding API. If you are working with GCP 'AI Platform Pipelines' with GCP project running, it is recommended to use Cloud Build. - Use [Docker](https://www.docker.com/get-started) installed locally and push to e.g. GCR. **Note**: If you run this notebook **within Kubeflow cluster**, **with Kubeflow version >= 0.7** and exploring **kaniko option**, you need to ensure that valid credentials are created within your notebook's namespace. 
- With Kubeflow version >= 0.7, the credential is supposed to be copied automatically while creating a notebook through `Configurations`, which doesn't work properly at the time of creating this notebook. - You can also add credentials to the new namespace by either [copying credentials from an existing Kubeflow namespace, or by creating a new service account](https://www.kubeflow.org/docs/gke/authentication/#kubeflow-v0-6-and-before-gcp-service-account-key-as-secret). - The following cell demonstrates how to copy the default secret to your own namespace. ```bash %%bash NAMESPACE=<your notebook name space> SOURCE=kubeflow NAME=user-gcp-sa SECRET=$(kubectl get secrets \${NAME} -n \${SOURCE} -o jsonpath="{.data.\${NAME}\.json}" | base64 -D) kubectl create -n \${NAMESPACE} secret generic \${NAME} --from-literal="\${NAME}.json=\${SECRET}" ``` ``` IMAGE_NAME="mnist_training_kf_pipeline" TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)" GCR_IMAGE="gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}".format( PROJECT_ID=PROJECT_ID, IMAGE_NAME=IMAGE_NAME, TAG=TAG ) APP_FOLDER='./tmp/reuse_components_pipeline/mnist_training/' # In the following, for the purpose of demonstration # Cloud Build is chosen for 'AI Platform Pipelines' # kaniko is chosen for 'full Kubeflow deployment' if HOST.endswith('googleusercontent.com'): # kaniko is not pre-installed with 'AI Platform Pipelines' import subprocess # ! gcloud builds submit --tag ${IMAGE_NAME} ${APP_FOLDER} cmd = ['gcloud', 'builds', 'submit', '--tag', GCR_IMAGE, APP_FOLDER] build_log = (subprocess.run(cmd, stdout=subprocess.PIPE).stdout[:-1].decode('utf-8')) print(build_log) else: if kfp.__version__ <= '0.1.36': # kfp version 0.1.36+ introduced a breaking change that makes the following code not work import subprocess builder = kfp.containers._container_builder.ContainerBuilder( gcs_staging=GCS_BUCKET + "/kfp_container_build_staging" ) kfp.containers.build_image_from_working_dir( image_name=GCR_IMAGE, working_dir=APP_FOLDER, builder=builder ) else: raise RuntimeError("Please build the docker image using either [Docker] or [Cloud Build]") ``` #### If you want to use docker to build the image Run the following in a cell ```bash %%bash -s "{PROJECT_ID}" IMAGE_NAME="mnist_training_kf_pipeline" TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)" # Create script to build docker image and push it. cat > ./tmp/reuse_components_pipeline/mnist_training/build_image.sh <<HERE PROJECT_ID="${1}" IMAGE_NAME="${IMAGE_NAME}" TAG="${TAG}" GCR_IMAGE="gcr.io/\${PROJECT_ID}/\${IMAGE_NAME}:\${TAG}" docker build -t \${IMAGE_NAME} . docker tag \${IMAGE_NAME} \${GCR_IMAGE} docker push \${GCR_IMAGE} docker image rm \${IMAGE_NAME} docker image rm \${GCR_IMAGE} HERE cd tmp/reuse_components_pipeline/mnist_training bash build_image.sh ``` ``` image_name = GCR_IMAGE ``` ## Writing your component definition file To create a component from your containerized program, you must write a component specification in YAML that describes the component for the Kubeflow Pipelines system. For the complete definition of a Kubeflow Pipelines component, see the [component specification](https://www.kubeflow.org/docs/pipelines/reference/component-spec/). However, for this tutorial you don’t need to know the full schema of the component specification. The notebook provides enough information to complete the tutorial.
Start writing the component definition (component.yaml) by specifying your container image in the component’s implementation section: ``` %%bash -s "{image_name}" GCR_IMAGE="${1}" echo ${GCR_IMAGE} # Create Yaml # the image uri should be changed according to the above docker image push output cat > mnist_pipeline_component.yaml <<HERE name: Mnist training description: Train a mnist model and save to GCS inputs: - name: model_path description: 'Path of the tf model.' type: String - name: bucket description: 'GCS bucket name.' type: String outputs: - name: gcs_model_path description: 'Trained model path.' type: GCSPath implementation: container: image: ${GCR_IMAGE} command: [ python, /app/app.py, --model_path, {inputValue: model_path}, --bucket, {inputValue: bucket}, ] fileOutputs: gcs_model_path: /output.txt HERE import os mnist_train_op = kfp.components.load_component_from_file(os.path.join('./', 'mnist_pipeline_component.yaml')) mnist_train_op.component_spec ``` # Define deployment operation on AI Platform ``` mlengine_deploy_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.2/components/gcp/ml_engine/deploy/component.yaml') def deploy( project_id, model_uri, model_id, runtime_version, python_version): return mlengine_deploy_op( model_uri=model_uri, project_id=project_id, model_id=model_id, runtime_version=runtime_version, python_version=python_version, replace_existing_version=True, set_default=True) ``` Kubeflow serving deployment component as an option. **Note that, the deployed Endppoint URI is not availabe as output of this component.** ```python kubeflow_deploy_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.2/components/gcp/ml_engine/deploy/component.yaml') def deploy_kubeflow( model_dir, tf_server_name): return kubeflow_deploy_op( model_dir=model_dir, server_name=tf_server_name, cluster_name='kubeflow', namespace='kubeflow', pvc_name='', service_type='ClusterIP') ``` # Create a lightweight component for testing the deployment ``` def deployment_test(project_id: str, model_name: str, version: str) -> str: model_name = model_name.split("/")[-1] version = version.split("/")[-1] import googleapiclient.discovery def predict(project, model, data, version=None): """Run predictions on a list of instances. Args: project: (str), project where the Cloud ML Engine Model is deployed. model: (str), model name. data: ([[any]]), list of input instances, where each input instance is a list of attributes. version: str, version of the model to target. Returns: Mapping[str: any]: dictionary of prediction results defined by the model. 
""" service = googleapiclient.discovery.build('ml', 'v1') name = 'projects/{}/models/{}'.format(project, model) if version is not None: name += '/versions/{}'.format(version) response = service.projects().predict( name=name, body={ 'instances': data }).execute() if 'error' in response: raise RuntimeError(response['error']) return response['predictions'] import tensorflow as tf import json mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 result = predict( project=project_id, model=model_name, data=x_test[0:2].tolist(), version=version) print(result) return json.dumps(result) # # Test the function with already deployed version # deployment_test( # project_id=PROJECT_ID, # model_name="mnist", # version='ver_bb1ebd2a06ab7f321ad3db6b3b3d83e6' # previous deployed version for testing # ) deployment_test_op = comp.func_to_container_op( func=deployment_test, base_image="tensorflow/tensorflow:1.15.0-py3", packages_to_install=["google-api-python-client==1.7.8"]) ``` # Create your workflow as a Python function Define your pipeline as a Python function. ` @kfp.dsl.pipeline` is a required decoration, and must include `name` and `description` properties. Then compile the pipeline function. After the compilation is completed, a pipeline file is created. ``` # Define the pipeline @dsl.pipeline( name='Mnist pipeline', description='A toy pipeline that performs mnist model training.' ) def mnist_reuse_component_deploy_pipeline( project_id: str = PROJECT_ID, model_path: str = 'mnist_model', bucket: str = GCS_BUCKET ): train_task = mnist_train_op( model_path=model_path, bucket=bucket ).apply(gcp.use_gcp_secret('user-gcp-sa')) deploy_task = deploy( project_id=project_id, model_uri=train_task.outputs['gcs_model_path'], model_id="mnist", runtime_version="1.14", python_version="3.5" ).apply(gcp.use_gcp_secret('user-gcp-sa')) deploy_test_task = deployment_test_op( project_id=project_id, model_name=deploy_task.outputs["model_name"], version=deploy_task.outputs["version_name"], ).apply(gcp.use_gcp_secret('user-gcp-sa')) return True ``` ### Submit a pipeline run ``` pipeline_func = mnist_reuse_component_deploy_pipeline experiment_name = 'minist_kubeflow' arguments = {"model_path":"mnist_model", "bucket":GCS_BUCKET} run_name = pipeline_func.__name__ + ' run' # Submit pipeline directly from pipeline function run_result = client.create_run_from_pipeline_func(pipeline_func, experiment_name=experiment_name, run_name=run_name, arguments=arguments) ``` **As an alternative, you can compile the pipeline into a package.** The compiled pipeline can be easily shared and reused by others to run the pipeline. ```python pipeline_filename = pipeline_func.__name__ + '.pipeline.zip' compiler.Compiler().compile(pipeline_func, pipeline_filename) experiment = client.create_experiment('python-functions-mnist') run_result = client.run_pipeline( experiment_id=experiment.id, job_name=run_name, pipeline_package_path=pipeline_filename, params=arguments) ```
<table> <tr> <td style="background-color:#ffffff;"> <a href="http://qworld.lu.lv" target="_blank"><img src="..\images\qworld.jpg" width="25%" align="left"> </a></td> <td style="background-color:#ffffff;vertical-align:bottom;text-align:right;"> prepared by <a href="http://abu.lu.lv" target="_blank">Abuzer Yakaryilmaz</a> (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>) </td> </tr></table> <table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table> $ \newcommand{\bra}[1]{\langle #1|} $ $ \newcommand{\ket}[1]{|#1\rangle} $ $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ $ \newcommand{\dot}[2]{ #1 \cdot #2} $ $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ $ \newcommand{\mypar}[1]{\left( #1 \right)} $ $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ $ \newcommand{\onehalf}{\frac{1}{2}} $ $ \newcommand{\donehalf}{\dfrac{1}{2}} $ $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ $ \newcommand{\vzero}{\myvector{1\\0}} $ $ \newcommand{\vone}{\myvector{0\\1}} $ $ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $ $ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ $ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $ <h2>Visualization of a (Real-Valued) Qubit</h2> _We use certain tools from python library "<b>matplotlib.pyplot</b>" for drawing. Check the notebook [Python: Drawing](../python/Python06_Drawing.ipynb) for the list of these tools._ Suppose that we have a single qubit. Each possible (real-valued) quantum state of this qubit is a point on 2-dimensional space. It can also be represented as a vector from origin to that point. We start with the visual representation of the following quantum states: $$ \ket{0} = \myvector{1\\0}, ~~ \ket{1} = \myvector{0\\1} , ~~ -\ket{0} = \myrvector{-1\\0}, ~~\mbox{and}~~ -\ket{1} = \myrvector{0\\-1}. $$ We draw these quantum states as points. We use one of our predefined functions for drawing axes: "draw_axes()". 
We include our predefined functions with the following line of code: %run qlatvia.py ``` # import the drawing methods from matplotlib.pyplot import plot, figure, show # draw a figure figure(figsize=(6,6), dpi=80) # include our predefined functions %run qlatvia.py # draw the axes draw_axes() # draw the origin plot(0,0,'ro') # a point in red color # draw these quantum states as points (in blue color) plot(1,0,'bo') plot(0,1,'bo') plot(-1,0,'bo') plot(0,-1,'bo') show() ``` Now, we draw the quantum states as arrows (vectors): ``` # import the drawing methods from matplotlib.pyplot import figure, arrow, show # draw a figure figure(figsize=(6,6), dpi=80) # include our predefined functions %run qlatvia.py # draw the axes draw_axes() # draw the quantum states as vectors (in blue color) arrow(0,0,0.92,0,head_width=0.04, head_length=0.08, color="blue") arrow(0,0,0,0.92,head_width=0.04, head_length=0.08, color="blue") arrow(0,0,-0.92,0,head_width=0.04, head_length=0.08, color="blue") arrow(0,0,0,-0.92,head_width=0.04, head_length=0.08, color="blue") show() ``` <h3> Task 1 </h3> Write a function that returns a randomly created 2-dimensional (real-valued) quantum state. _You can use your code written for [a task given in notebook "Quantum State](B28_Quantum_State.ipynb#task2)._ Create 100 random quantum states by using your function and then draw all of them as points. Create 1000 random quantum states by using your function and then draw all of them as points. The different colors can be used when drawing the points ([matplotlib.colors](https://matplotlib.org/2.0.2/api/colors_api.html)). ``` # randomly creating a 2-dimensional quantum state from random import randrange def random_quantum_state(): first_entry = randrange(-100,101) second_entry = randrange(-100,101) length_square = first_entry**2+second_entry**2 while length_square == 0: first_entry = randrange(-100,101) second_entry = randrange(-100,101) length_square = first_entry**2+second_entry**2 first_entry = first_entry / length_square**0.5 second_entry = second_entry / length_square**0.5 return [first_entry,second_entry] # import the drawing methods from matplotlib.pyplot import plot, figure # draw a figure figure(figsize=(6,6), dpi=60) # draw the origin plot(0,0,'ro') from random import randrange colors = ['ro','bo','go','yo','co','mo','ko'] for i in range(100): # create a random quantum state quantum_state = random_quantum_state(); # draw a blue point for the random quantum state x = quantum_state[0]; y = quantum_state[1]; plot(x,y,colors[randrange(len(colors))]) show() # randomly creating a 2-dimensional quantum state from random import randrange def random_quantum_state(): first_entry = randrange(-1000,1001) second_entry = randrange(-1000,1001) length_square = first_entry**2+second_entry**2 while length_square == 0: first_entry = randrange(-100,101) second_entry = randrange(-100,101) length_square = first_entry**2+second_entry**2 first_entry = first_entry / length_square**0.5 second_entry = second_entry / length_square**0.5 return [first_entry,second_entry] # import the drawing methods from matplotlib.pyplot import plot, figure # draw a figure figure(figsize=(6,6), dpi=60) # draw the origin plot(0,0,'ro') from random import randrange colors = ['ro','bo','go','yo','co','mo','ko'] for i in range(1000): # create a random quantum state quantum_state = random_quantum_state(); # draw a blue point for the random quantum state x = quantum_state[0]; y = quantum_state[1]; plot(x,y,colors[randrange(len(colors))]) show() ``` <a 
href="B30_Visualization_of_a_Qubit_Solutions.ipynb#task1">click for our solution</a> <h3> Task 2 </h3> Repeat the previous task by drawing the quantum states as vectors (arrows) instead of points. The different colors can be used when drawing the points ([matplotlib.colors](https://matplotlib.org/2.0.2/api/colors_api.html)). _Please keep the codes below for drawing axes for getting a better visual focus._ ``` # randomly creating a 2-dimensional quantum state from random import randrange def random_quantum_state(): first_entry = randrange(-1000,1001) second_entry = randrange(-1000,1001) length_square = first_entry**2+second_entry**2 while length_square == 0: first_entry = randrange(-100,101) second_entry = randrange(-100,101) length_square = first_entry**2+second_entry**2 first_entry = first_entry / length_square**0.5 second_entry = second_entry / length_square**0.5 return [first_entry,second_entry] # import the drawing methods from matplotlib.pyplot import plot, figure, arrow # draw a figure figure(figsize=(6,6), dpi=60) # include our predefined functions %run qlatvia.py # draw the axes draw_axes() # draw the origin plot(0,0,'ro') from random import randrange colors = ['r','b','g','y','b','c','m'] for i in range(500): quantum_state = random_quantum_state(); x = quantum_state[0]; y = quantum_state[1]; x = 0.92 * x y = 0.92 * y arrow(0,0,x,y,head_width=0.04,head_length=0.08,color=colors[randrange(len(colors))]) show() ``` <a href="B30_Visualization_of_a_Qubit_Solutions.ipynb#task2">click for our solution</a> <h3> Unit circle </h3> All quantum states of a qubit form the unit circle. The length of each quantum state is 1. All points that are 1 unit away from the origin form the circle with radius 1 unit. We can draw the unit circle with python. We have a predefined function for drawing the unit circle: "draw_unit_circle()". ``` # import the drawing methods from matplotlib.pyplot import figure figure(figsize=(6,6), dpi=80) # size of the figure # include our predefined functions %run qlatvia.py # draw axes draw_axes() # draw the unit circle draw_unit_circle() ``` <h3>Quantum state of a qubit</h3> Suppose that we have a single qubit. Each possible (real-valued) quantum state of this qubit is a point on 2-dimensional space. It can also be represented as a vector from origin to that point. We draw the quantum state $ \myvector{3/5 \\ 4/5} $ and its elements. <i style="font-size:10pt;"> Our predefined function "draw_qubit()" draws a figure, the origin, the axes, the unit circle, and base quantum states. <br> Our predefined function "draw_quantum_state(x,y,name)" draws an arrow from (0,0) to (x,y) and associates it with <u>name</u>. <br> We include our predefined functions with the following line of code: %run qlatvia.py </i> ``` %run qlatvia.py draw_qubit() draw_quantum_state(3/5,4/5,"|v>") ``` Now, we draw its angle with $ \ket{0} $-axis and its projections on both axes. <i> For drawing the angle, we use the method "Arc" from library "matplotlib.patches". 
</i> ``` %run qlatvia.py draw_qubit() draw_quantum_state(3/5,4/5,"|v>") from matplotlib.pyplot import arrow, text, gca # the projection on |0>-axis arrow(0,0,3/5,0,color="blue",linewidth=1.5) arrow(0,4/5,3/5,0,color="blue",linestyle='dotted') text(0.1,-0.1,"cos(a)=3/5") # the projection on |1>-axis arrow(0,0,0,4/5,color="blue",linewidth=1.5) arrow(3/5,0,0,4/5,color="blue",linestyle='dotted') text(-0.1,0.55,"sin(a)=4/5",rotation="90") # drawing the angle with |0>-axis from matplotlib.patches import Arc gca().add_patch( Arc((0,0),0.4,0.4,angle=0,theta1=0,theta2=53) ) text(0.08,0.05,'.',fontsize=30) text(0.21,0.09,'a') ``` <b> Observations: </b> <ul> <li> The angle of the quantum state with state $ \ket{0} $ is $a$.</li> <li> The amplitude of state $ \ket{0} $ is $ \cos(a) = \frac{3}{5} $.</li> <li> The probability of observing state $ \ket{0} $ is $ \cos^2(a) = \frac{9}{25} $.</li> <li> The amplitude of state $ \ket{1} $ is $ \sin(a) = \frac{4}{5} $.</li> <li> The probability of observing state $ \ket{1} $ is $ \sin^2(a) = \frac{16}{25} $.</li> </ul> <h3> The angle of a quantum state </h3> The angle of a vector (in radians) on the unit circle is the length of the arc in the counter-clockwise direction that starts from $ (1,0) $ and ends at the point representing the vector. We execute the following code a couple of times to see different examples, where the angle is picked randomly in each run. You can also set the value of "myangle" manually to see a specific angle. ``` # set the angle from random import randrange myangle = randrange(361) ################################################ from matplotlib.pyplot import figure,gca,arrow from matplotlib.patches import Arc from math import sin,cos,pi # draw a figure figure(figsize=(6,6), dpi=60) %run qlatvia.py draw_axes() print("the selected angle is",myangle,"degrees") ratio_of_arc = ((1000*myangle/360)//1)/1000 print("it is",ratio_of_arc,"of a full circle") print("its length is",ratio_of_arc,"x 2\u03C0","=",ratio_of_arc*2*pi) myangle_in_radian = 2*pi*(myangle/360) print("its radian value is",myangle_in_radian) gca().add_patch( Arc((0,0),0.2,0.2,angle=0,theta1=0,theta2=myangle,color="red",linewidth=2) ) gca().add_patch( Arc((0,0),2,2,angle=0,theta1=0,theta2=myangle,color="brown",linewidth=2) ) x = cos(myangle_in_radian) y = sin(myangle_in_radian) draw_quantum_state(x,y,"|v>") # the projection on |0>-axis arrow(0,0,x,0,color="blue",linewidth=1) arrow(0,y,x,0,color="blue",linestyle='dashed') # the projection on |1>-axis arrow(0,0,0,y,color="blue",linewidth=1) arrow(x,0,0,y,color="blue",linestyle='dashed') print() print("the amplitude of state |0> is",x) print("the amplitude of state |1> is",y) print() print("the probability of observing state |0> is",x*x) print("the probability of observing state |1> is",y*y) print("the total probability is",round(x*x+y*y,6)) ``` <h3> Random quantum states </h3> Any quantum state of a (real-valued) qubit is a point on the unit circle. We use this fact to create random quantum states by picking a random point on the unit circle. For this purpose, we randomly pick an angle between zero and 360 degrees and then find the amplitudes of the quantum state by using the basic trigonometric functions. <a id="task3"></a> <h3> Task 3 </h3> Define a function randomly creating a quantum state based on this idea. Randomly create a quantum state by using this function. Draw the quantum state on the unit circle. Repeat the task a few times. Randomly create 100 quantum states and draw all of them.
<i> You can save your function for later use: uncomment the first line, give an appropriate file name, and then run the cell.</i> ``` # %%writefile FILENAME.py # randomly creating a 2-dimensional quantum state from random import randrange from math import cos, sin, pi def random_quantum_state2(): angle_degree = randrange(360) angle_radian = 2*pi*angle_degree/360 return [cos(angle_radian),sin(angle_radian)] ``` <i style="font-size:10pt;"> Our predefined function "draw_qubit()" draws a figure, the origin, the axes, the unit circle, and base quantum states. <br> Our predefined function "draw_quantum_state(x,y,name)" draws an arrow from (0,0) to (x,y) and associates it with <u>name</u>. <br> We include our predefined functions with the following line of code: %run qlatvia.py </i> ``` # visually test your function %run qlatvia.py # draw the axes draw_qubit() from random import randrange for i in range(6): [x,y]=random_quantum_state2() draw_quantum_state(x,y,"|v"+str(i)+">") # include our predefined functions %run qlatvia.py # draw the axes draw_qubit() for i in range(100): [x,y]=random_quantum_state2() draw_quantum_state(x,y,"") ``` <a href="B30_Visualization_of_a_Qubit_Solutions.ipynb#task3">click for our solution</a>
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import statsmodels.api as sm import pylab df1 = pd.read_csv('data/power_act_.csv') ``` 'power_act_.csv' In total we have 18 columns and 64328 rows Column names: ['dt_start_utc', 'power_act_21', 'power_act_24', 'power_act_47' .....] Date ranges from '2019-06-30' to '2021-04-30' Data seems to be recorded every 15 minutes All the columns contain missing values: only 1.5% for 'power_act_21', whereas >20% for the other features ``` df1.tail(10) df1.info() df1.describe() df1.isnull().sum().sort_values(ascending=False)/len(df1)*100 df1['dt_start_utc'] = df1['dt_start_utc'].apply(pd.to_datetime) df1 = df1.set_index('dt_start_utc') plt.figure(figsize=(10,6)) df1['power_act_21'].plot() plt.figure(figsize=(10,6)) df1['power_act_47'].plot() plt.figure(figsize=(10,6)) df1['power_act_196'].plot() df2 = pd.read_csv('data/power_fc_.csv') ``` 'power_fc_.csv' In total we have 23 columns and 66020 rows Column names: ['dt_start_utc', 'power_act_21', 'power_act_24', 'power_act_47' .....] Date ranges from '2019-06-13 07:00' to '2021-04-30 23:45' Data seems to be recorded every 15 minutes No null values for 'power_act_21', whereas the other features have >17% null values ``` df2['dt_start_utc'].max() df2.head() df2.shape df2.info() df2.isnull().sum().sort_values(ascending=False)/len(df2)*100 df2['dt_start_utc'] = df2['dt_start_utc'].apply(pd.to_datetime) #df2 = df2.reset_index('dt_start_utc') df2['dt_start_utc'] = df2['dt_start_utc'].apply(pd.to_datetime) df2.head() df3 = pd.read_csv('data/regelleistung_aggr_results.csv') ``` 'regelleistung_aggr_results.csv' In total we have 17 columns and 16068 rows Column names: ['date_start', 'date_end', 'product', 'reserve_type', 'total_min_capacity_price_eur_mw', 'total_average_capacity_price_eur_mw', 'total_marginal_capacity_price_eur_mw','total_min_energy_price_eur_mwh', 'total_average_energy_price_eur_mwh', 'total_marginal_energy_price_eur_mwh', 'germany_min_capacity_price_eur_mw', 'germany_average_capacity_price_eur_mw', 'germany_marginal_capacity_price_eur_mw','germany_min_energy_price_eur_mwh', 'germany_average_energy_price_eur_mwh', 'germany_marginal_energy_price_eur_mwh', 'germany_import_export_mw'] 2 unique reserve types ['MRL', 'SRL'] 12 unique product types ['NEG_00_04', 'NEG_04_08', 'NEG_08_12', 'NEG_12_16', 'NEG_16_20','NEG_20_24', 'POS_00_04', 'POS_04_08', 'POS_08_12', 'POS_12_16', 'POS_16_20', 'POS_20_24'] Date ranges from '2019-01-01' to '2021-03-19' Data seems to be recorded every hour (24 values for each day) A few columns contain about 37% missing values ``` df3.shape df3.info() df3.groupby(by='date_start').count().head(2) df3['reserve_type'].unique() df3['product'].unique() #sns.pairplot(df3) df3.info() df3.isnull().sum().sort_values(ascending=False)/len(df3)*100 df3.shape df4 = pd.read_csv('data/regelleistung_demand.csv') df4.head() ``` 'regelleistung_demand.csv' In total we have 6 columns and 16188 rows Column names: ['date_start', 'date_end', 'product', 'total_demand_mw', 'germany_block_demand_mw', 'reserve_type'] 2 unique reserve types ['MRL', 'SRL'] 12 unique product types ['NEG_00_04', 'NEG_04_08', 'NEG_08_12', 'NEG_12_16', 'NEG_16_20','NEG_20_24', 'POS_00_04', 'POS_04_08', 'POS_08_12', 'POS_12_16', 'POS_16_20', 'POS_20_24'] Date ranges from '2019-01-01' to '2021-03-18' Data seems to be recorded every hour (24 values for each day) No missing values ``` df4.isnull().sum().sort_values(ascending=False) def check_unique(df): '''
check unique values for each column and print them if there are fewer than 15''' for col in df.columns: n = df[col].nunique() print(f'{col} has {n} unique values') if n < 15: print(df[col].unique()) df4.info() sns.pairplot(df4) ```
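As a small usage note added here for illustration (not an original cell), the `check_unique` helper defined above can be called on `df4` to confirm the two reserve types and twelve products listed in the summary:
```
# Illustrative call of the helper defined above
check_unique(df4)
```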
# Determinant Quantum Monte Carlo ## 1 Hubbard model The Hubbard model is defined as \begin{align} \label{eq:ham} \tag{1} H &= -\sum_{ij\sigma} t_{ij} \left( \hat{c}_{i\sigma}^\dagger \hat{c}_{j\sigma} + hc \right) + \sum_{i\sigma} (\varepsilon_i - \mu) \hat{n}_{i\sigma} + U \sum_{i} \left( \hat{n}_{i\uparrow} - \tfrac{1}{2} \right) \left( \hat{n}_{i\downarrow} - \tfrac{1}{2} \right) \end{align} where $U$ is the interaction strength, $\varepsilon_i$ the on-site energy at site $i$ and $t_{ij}$ the hopping energy between the sites $i$ and $j$. The chemical potential is defined to be $\mu = 0$ for a half-filled Hubbard model since the total chemical potential is given as $\mu + \tfrac{U}{2}$: \begin{equation} H = -\sum_{ij\sigma} t_{ij} \left( \hat{c}_{i\sigma}^\dagger \hat{c}_{j\sigma} + hc \right) + \sum_{i\sigma} \left(\varepsilon_i - \mu - \tfrac{U}{2}\right) \hat{n}_{i\sigma} + U \sum_{i} \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}. \end{equation} The non-interacting part of the Hubbard Hamiltonian is quadratic in the creation and annihilation operators, \begin{equation} K = -\sum_{ij\sigma} t_{ij} \left( \hat{c}_{i\sigma}^\dagger \hat{c}_{j\sigma} + hc \right) + \sum_{i\sigma} (\varepsilon_i - \mu) \hat{n}_{i\sigma}, \label{eq:ham_kin} \tag{2} \end{equation} while the interaction part is quartic in the fermion operators: \begin{equation} V = U \sum_{i} \left( \hat{n}_{i\uparrow} - \tfrac{1}{2} \right) \left( \hat{n}_{i\downarrow} - \tfrac{1}{2} \right). \label{eq:ham_inter} \tag{3} \end{equation} ## 2 Distribution operator The expectation value of an observable $O$ is given by \begin{equation} \langle O \rangle = \text{Tr}(O \mathcal{P}), \end{equation} where $\mathcal{P}$ is the distribution operator, \begin{equation} \mathcal{P} = \frac{1}{\mathcal{Z}} e^{-\beta H}, \label{eq:distop} \tag{4} \end{equation} $\mathcal{Z}$ is the partition function, \begin{equation} \mathcal{Z} = \text{Tr}(e^{-\beta H}), \label{eq:partition} \end{equation} and $\beta = 1/k_B T$ is the inverse temperature. The trace is taken over the Hilbert space describing all occupied states of the system: \begin{equation} \text{Tr}(e^{-\beta H}) = \sum_i \langle \psi_i | e^{-\beta H} | \psi_i \rangle. \end{equation} In order to obtain a computable approximation of the distribution operator the partition function has to be approximated. Since the quadratic term $ K $ and quartic term $ V $ of the Hubbard Hamiltonian do not commute, a Trotter-Suzuki decomposition has to be used to approximate $\mathcal{Z}$. By dividing the imaginary-time interval from $0$ to $\beta$ into $L$ intervals of size $\Delta \tau = \beta / L$, the partition function can be written as \begin{equation} \label{eq:partitionTrotterDecomp} \mathcal{Z} = \text{Tr}(e^{-\beta H }) = \text{Tr}( \prod_{l=1}^{L} e^{-\Delta \tau H}) = \text{Tr}( \prod_{l=1}^{L} e^{-\Delta \tau K} e^{-\Delta \tau V}) + \mathcal{O}(\Delta \tau^2). \end{equation} The spin-up and spin-down operators of the quadratic kinetic energy term are independent and can be written as \begin{equation} e^{-\Delta \tau K} = e^{-\Delta \tau K_\uparrow} e^{-\Delta \tau K_\downarrow}.
\end{equation} The particle number operators $\hat{n}_{i\sigma}$ on different sites commute, so the interacting term of the Hubbard Hamiltonian factorizes over the site index $i$: \begin{equation} e^{-\Delta \tau V} = e^{- U \Delta\tau \sum_{i=1}^N \left( \hat{n}_{i\uparrow} - \tfrac{1}{2} \right) \left( \hat{n}_{i\downarrow} - \tfrac{1}{2} \right)} = \prod_{i=1}^N e^{- U \Delta\tau \left( \hat{n}_{i\uparrow} - \tfrac{1}{2} \right) \left( \hat{n}_{i\downarrow} - \tfrac{1}{2} \right)} \end{equation} The quartic contributions of the interacting term need to be represented in a quadratic form. This can be achieved by using the discrete \emph{Hubbard-Stratonovich} transformation, which replaces the term $\left( \hat{n}_{i\uparrow} - \tfrac{1}{2} \right) \left( \hat{n}_{i\downarrow} - \tfrac{1}{2} \right)$ by a quadratic term of the form $\left( \hat{n}_{i\uparrow} - \hat{n}_{i\downarrow} \right)$. For $U>0$, this yields \begin{equation} \label{eq:hubbardStratanovichInteractionTerm} e^{- U \Delta\tau \left( \hat{n}_{i\uparrow} - \tfrac{1}{2} \right) \left( \hat{n}_{i\downarrow} - \tfrac{1}{2} \right)} = C \sum_{h_i = \pm 1} e^{\nu h_i \left( \hat{n}_{i\uparrow} - \hat{n}_{i\downarrow} \right)}, \end{equation} where $C=\frac{1}{2} e^{-\frac{U \Delta\tau}{4}}$ and the constant $\nu$ is defined by \begin{equation} \label{eq:lambda} \cosh \nu = e^{\frac{U \Delta\tau}{2}}. \end{equation} The set of auxiliary variables $\lbrace h_i \rbrace$ is called the *Hubbard-Stratonovich field* or *configuration*. The variables $h_i$ take the values $\pm 1$ for an up- or down-spin, respectively. Using the Hubbard-Stratonovich transformation, the interaction term can be formulated as \begin{equation} \begin{split} \label{eq:hubbardStratanovichInteractionFull} e^{-\Delta\tau V} &= \prod_{i=1}^N \left(C \sum_{h_i = \pm 1} e^{\nu h_i \left( \hat{n}_{i\uparrow} - \hat{n}_{i\downarrow} \right)}\right) \\ &= C^N \sum_{h_i = \pm 1} e^{\sum_{i=1}^N \nu h_i \left( \hat{n}_{i\uparrow} - \hat{n}_{i\downarrow} \right)} \\ &= C^N \text{Tr}_h e^{\sum_{i=1}^N \nu h_i \left( \hat{n}_{i\uparrow} - \hat{n}_{i\downarrow} \right)} \\ &= C^N \text{Tr}_h e^{\sum_{i=1}^N \nu h_i \hat{n}_{i\uparrow}} e^{-\sum_{i=1}^N \nu h_i \hat{n}_{i\downarrow}} \\ &= C^N \text{Tr}_h e^{V_\uparrow} e^{V_\downarrow} \end{split} \end{equation} where \begin{equation} V_\sigma = \sigma \nu \sum_{i=1}^N h_i \hat{n}_{i\sigma} = \sigma \nu \boldsymbol{\hat{c}}_\sigma^\dagger V(h) \boldsymbol{\hat{c}}_\sigma \end{equation} and $V(h)$ is a diagonal matrix of the configurations $V(h) = \text{diag}(h_1, h_2, \dots, h_N)$. Taking into account the $L$ imaginary time slices, the Hubbard-Stratonovich variables are expanded to have two indices $h_{i, l}$, where $i$ represents the site index and $l$ the imaginary time slice: \begin{equation} h_i \longrightarrow h_{il}, \quad V(h) \longrightarrow V_l(h_l), \quad V_\sigma \longrightarrow V_{l\sigma}. \end{equation} The Hubbard-Stratonovich field or configuration now is an $N \times L$ matrix for a system of $N$ sites with $L$ time steps. Therefore, the partition function can be approximated by \begin{equation} \label{eq:partitionApproximation} \tag{5} \mathcal{Z} = \eta_d \text{Tr}_h \text{Tr} \left( \prod_{l=1}^L e^{-\Delta\tau K_\uparrow} e^{V_{l\uparrow}} \right) \left( \prod_{l=1}^L e^{-\Delta\tau K_\downarrow} e^{V_{l\downarrow}} \right), \end{equation} where $\eta_d = C^{NL}$ is a normalization constant. At this point, all operators are quadratic in the fermion operators.
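As a quick consistency check (a worked evaluation added to this text, not part of the original derivation), the discrete Hubbard-Stratonovich identity can be verified on the four occupation states, which also fixes the constants $C$ and $\nu$: \begin{equation} \begin{split} (n_{i\uparrow}, n_{i\downarrow}) = (0,0) \text{ or } (1,1): &\quad e^{-\frac{U \Delta\tau}{4}} = C \left( e^{0} + e^{0} \right) = 2C, \\ (n_{i\uparrow}, n_{i\downarrow}) = (1,0) \text{ or } (0,1): &\quad e^{+\frac{U \Delta\tau}{4}} = C \left( e^{\nu} + e^{-\nu} \right) = 2C \cosh \nu, \end{split} \end{equation} so that $C = \tfrac{1}{2} e^{-\frac{U \Delta\tau}{4}}$ and $\cosh \nu = e^{\frac{U \Delta\tau}{2}}$, in agreement with the definitions above.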
For any quadratic operator \begin{equation} H_l = \sum_{ij} \hat{c}_i^\dagger (H_l)_{ij} \hat{c}_j \end{equation} the trace can be computed via a determinant: \begin{equation} \text{Tr}(e^{-H_1}e^{-H_2} \dots e^{-H_L}) = \det(I + e^{-H_L} e^{-H_{L-1}} \dots e^{-H_1} ) \end{equation} Using this identity, the trace in the expression of the partition function \eqref{eq:partitionApproximation} can be turned into a computable form: \begin{equation} \label{eq:partitionComputable} \tag{6} \mathcal{Z}_h = \eta_d \text{Tr}_h \det[M_\uparrow(h)] \det[M_\downarrow(h)], \end{equation} where for $\sigma = \pm1$ and $h=(h_1, h_2, \dots, h_L)$ the matrix \begin{equation} \label{eq:mMatrix} \tag{7} M_\sigma(h) = I + B_{L,\sigma}(h_L) B_{L-1,\sigma}(h_{L-1}) \dots B_{1,\sigma}(h_1) \end{equation} consists of the time step matrices $B_{l,\sigma}(h_l)$, which are associated with the operators $e^{-\Delta\tau K_\sigma} e^{V_{l\sigma}}$: \begin{equation} \label{eq:bMatrix} B_{l,\sigma}(h_l) = e^{-\Delta\tau K_\sigma} e^{\sigma \nu V_l(h_l)}. \end{equation} With the approximation \eqref{eq:partitionComputable} the distribution operator $\mathcal{P}$, defined in \eqref{eq:distop}, can be expressed as the computable approximation \begin{equation} \label{eq:distopComputable} \tag{8} \mathcal{P}(h) = \frac{\eta_d}{\mathcal{Z}_h} \det[M_\uparrow(h)] \det[M_\downarrow(h)]. \end{equation} The Green's function $G$ associated with the configuration $h$ is defined as the inverse of the matrix $M_\sigma(h)$: \begin{equation} G_\sigma(h) = \left[M_\sigma(h)\right]^{-1} \end{equation} ## 3 Determinant Quantum Monte Carlo algorithm The simulation of the Hubbard model is a classical Monte Carlo problem. The Hubbard-Stratonovich variables or configurations $h$ are sampled such that they follow the probability distribution $\mathcal{P}(h)$. The determinant QMC (DQMC) algorithm can be summarized by the following steps: First, the configuration $h$ is initialized with an array of randomly distributed values of $\pm 1$. Starting from the time slice $l=1$, a change in the Hubbard-Stratonovich field on the lattice site $i=1$ is proposed: \begin{equation} h^\prime_{1, 1} = -h_{1, 1}. \end{equation} With the new configuration $h^\prime$ the Metropolis ratio \begin{equation} d_{1, 1} = \frac{\det[M_\uparrow(h^\prime)] \det[M_\downarrow(h^\prime)]}{\det[M_\uparrow(h)] \det[M_\downarrow(h)]}, \end{equation} can be computed. A random number generator is then used to sample a uniformly distributed random number $r$. If $r < d_{1, 1}$, the proposed update to the configuration is accepted: \begin{equation} h = h^\prime. \end{equation} After all lattice sites $i = 1, \dots, N$ of the imaginary time slice $l=1$ have been updated, the procedure is continued with the next time slice $l=2$ until all imaginary time slices $l=1, \dots, L$ are updated. This concludes one \emph{sweep} through the Hubbard-Stratonovich field. After a few hundred "warmup-sweeps" have been performed, measurements can be made after completing an entire set of updates to all space-time points of the system (see section \ref{sec:measurements}). The measurement results have to be normalized by the number of "measurement-sweeps". One iteration (sweep) of the DQMC algorithm can be summarized as 1. Set $l=1$ and $i=1$ 2. Propose a new configuration $h^\prime$ by flipping a spin: \begin{equation} h^\prime_{i, l} = -h_{i, l}. \end{equation} 3.
Compute the Metropolis ratio: \begin{equation} d_{i, l} = \frac{\det[M_\uparrow(h^\prime)] \det[M_\downarrow(h^\prime)]}{\det[M_\uparrow(h)] \det[M_\downarrow(h)]}, \end{equation} where \begin{equation} M_\sigma(h) = I + B_{l-1,\sigma} \dots B_{1, \sigma} B_{L,\sigma}B_{L-1,\sigma} \dots B_{l,\sigma}. \end{equation} 4. Sample a uniformly distributed random number $r$ 5. Accept the new configuration if $r < d_{i, l}$: \begin{equation} h_{i, l} = \begin{cases} h^\prime_{i, l} &\text{if } r < d_{i, l} \\ h_{i, l} &\text{else } \end{cases} \end{equation} 6. Increment the site $i = i+1$ if $i < N$ 7. Increment the time slice $l = l+1$ if $i=N$ ### 3.1 Rank-one update scheme The one-site update of the inner DQMC loop results in a simple rank-one update of the matrix $M_\sigma(h)$, which allows the Metropolis ratio $d_{i,l}$ and the Green's function $G_\sigma$ to be computed efficiently. The DQMC simulation requires the computation of $NL$ Metropolis ratios \begin{equation} d = \frac{\det[M_\uparrow(h^\prime)] \det[M_\downarrow(h^\prime)]}{\det[M_\uparrow(h)] \det[M_\downarrow(h)]} \end{equation} for the configurations $h = (h_1, h_2, \dots, h_L)$ and $h^\prime = (h^\prime_1, h^\prime_2, \dots, h^\prime_L)$ per sweep. The matrix $M_\sigma(h)$, defined in \eqref{eq:mMatrix}, is given by \begin{equation} M_\sigma = I + B_{L,\sigma} B_{L-1,\sigma} \dots B_{1,\sigma} \end{equation} with the time step matrices \begin{equation} B_{l,\sigma} = e^{-\Delta\tau K_\sigma} e^{\sigma \nu V_l(h_l)}. \end{equation} The Green's function of the configuration $h$ is defined as \begin{equation} G_\sigma(h) = M_\sigma^{-1}. \end{equation} The configurations $h$ and $h^\prime$ differ only in one element at a specific time slice $l$ and spatial site $i$, which gets inverted by a proposed update: \begin{equation} h^\prime_{i,l} = - h_{i, l}. \end{equation} During one sweep of the QMC iteration the inner loops run over the $l=1, \dots, L$ imaginary time slices and the $i=1, \dots, N$ lattice sites. Starting with the first time slice, $l=1$, the Metropolis ratio $d_{i, 1}$ for each lattice site $i$ is given by \begin{equation} d_{i, 1} = d_\uparrow d_\downarrow, \end{equation} where for $\sigma = \pm 1$ \begin{equation} d_\sigma = 1 + \alpha_{i,\sigma} \left[ 1 - \boldsymbol{e}_i^T M_\sigma^{-1}(h) \boldsymbol{e}_i \right] = 1 + \alpha_{i, \sigma} \left[ 1 - G_{ii}^\sigma(h) \right] \end{equation} and \begin{equation} \alpha_{i,\sigma} = e^{-2\sigma \nu h_{i,1}} - 1. \end{equation} The Metropolis ratio $d_{i, 1}$ can therefore be obtained by computing the inverse of the matrix $M_\sigma(h)$, which corresponds to the Green's function $G_\sigma(h)$. If $G_\sigma(h)$ has already been computed in a previous step, the Metropolis ratio is essentially free to compute. If the proposed update $h^\prime$ to the configuration is accepted, the Green's function needs to be updated by a rank-one matrix update: \begin{equation} G_\sigma(h^\prime) = G_\sigma(h) - \frac{\alpha_{i, \sigma}}{d_{\sigma}} \boldsymbol{u}_\sigma \boldsymbol{w}_\sigma^T, \end{equation} where \begin{equation} \boldsymbol{u}_\sigma = \left[I - G_\sigma(h)\right] \boldsymbol{e}_i, \qquad \boldsymbol{w}_\sigma = \left[G_\sigma(h)\right]^T \boldsymbol{e}_i. \end{equation} After all spatial sites $i=1, \dots, N$ have been updated, we can move to the next time slice $l=2$.
The matrices $M_\sigma(h)$ and $M_\sigma(h^\prime)$ can be written as \begin{equation} \begin{split} M_\sigma(h) &= B_{1, \sigma}^{-1}(h_1) \hat{M}_\sigma(h) B_{1, \sigma}(h_1)\\ M_\sigma(h^\prime) &= B_{1, \sigma}^{-1}(h_1^\prime) \hat{M}_\sigma(h^\prime) B_{1, \sigma}(h_1^\prime) \end{split} \end{equation} where \begin{equation} \begin{split} \hat{M}_\sigma(h) &= I + B_{1, \sigma}(h_1) B_{L, \sigma}(h_L) B_{L-1, \sigma}(h_{L-1}) \dots B_{2, \sigma}(h_2)\\ \hat{M}_\sigma(h^\prime) &= I + B_{1, \sigma}(h_1^\prime) B_{L, \sigma}(h_L^\prime) B_{L-1, \sigma}(h_{L-1}^\prime) \dots B_{2, \sigma}(h_2^\prime) \end{split} \end{equation} are obtained by a cyclic permutation of the time step matrices $B_{l,\sigma}(h_l)$. The Metropolis ratios $d_{i, 2}$ corresponding to the time slice $l=2$ can therefore be written as \begin{equation} d_{i,2} = \frac{\det\big[M_\uparrow(h^\prime)\big] \det\big[M_\downarrow(h^\prime)\big]}{\det\big[M_\uparrow(h)\big] \det\big[M_\downarrow(h)\big]} = \frac{\det\big[\hat{M}_\uparrow(h^\prime)\big] \det\big[\hat{M}_\downarrow(h^\prime)\big]}{\det\big[\hat{M}_\uparrow(h)\big] \det\big[\hat{M}_\downarrow(h)\big]}. \end{equation} The associated Green's functions are given by "wrapping": \begin{equation} \hat{G}_\sigma(h) = B^{-1}_{1,\sigma}(h_1) G_\sigma(h) B_{1,\sigma}(h_1). \end{equation} Wrapping the Green's function ensures that the configurations $h_2$ and $h_2^\prime$ associated with the time slice $l=2$ appear at the same location of the matrices $\hat{M}_\sigma(h)$ and $\hat{M}_\sigma(h^\prime)$ as the configurations $h_1$ and $h_1^\prime$ at the time slice $l=1$. Therefore the same formulation can be used for the time slice $l=2$ as for the time slice $l=1$ to compute the Metropolis ratio and update the Green's functions. This procedure can be repeated for all the remaining time slices $l=3, 4, \dots, L$.
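As an illustration of the update scheme described above, the following NumPy sketch implements the naive building blocks: the Green's function from the time step matrices, the per-site Metropolis ratio, the rank-one update, and the wrapping to the next time slice. It treats a single spin species and omits the numerical stabilization that a production DQMC code needs; a full implementation accepts with the product $d_\uparrow d_\downarrow$ and applies the same field flip to both spin species.

```
import numpy as np

def greens_function(B_list):
    """G = M^{-1} with M = I + B_L B_{L-1} ... B_1 (naive, no stabilization)."""
    N = B_list[0].shape[0]
    prod = np.eye(N)
    for B in B_list:               # B_list = [B_1, ..., B_L]
        prod = B @ prod            # accumulates B_L ... B_1
    return np.linalg.inv(np.eye(N) + prod)

def update_slice(G, h, l, nu, sigma, rng):
    """One pass over the lattice sites of time slice l for one spin species."""
    N = G.shape[0]
    for i in range(N):
        alpha = np.exp(-2.0 * sigma * nu * h[i, l]) - 1.0
        d = 1.0 + alpha * (1.0 - G[i, i])             # Metropolis ratio d_sigma
        if rng.random() < d:                          # accept the proposed flip
            u = (np.eye(N) - G)[:, i]                 # u = (I - G) e_i
            w = G.T[:, i]                             # w = G^T e_i
            G = G - (alpha / d) * np.outer(u, w)      # rank-one update
            h[i, l] = -h[i, l]
    return G, h

def wrap(G, B_l):
    """Move from time slice l to l+1: G -> B_l^{-1} G B_l."""
    return np.linalg.solve(B_l, G @ B_l)
```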
```
import os.path
from collections import Counter
from glob import glob
import inspect
import os
import pickle
import sys

from cltk.corpus.latin.phi5_index import PHI5_INDEX
from cltk.corpus.readers import get_corpus_reader
from cltk.stem.latin.j_v import JVReplacer
from cltk.stem.lemma import LemmaReplacer
from cltk.tokenize.latin.sentence import SentenceTokenizer
from cltk.tokenize.word import WordTokenizer
from random import sample
from tqdm import tqdm
from typing import List, Dict, Tuple

currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0, parentdir)

from mlyoucanuse.aeoe_replacer import AEOEReplacer
from mlyoucanuse.text_cleaners import (
    normalize_accents, disappear_angle_brackets, drop_punct,
    disappear_round_brackets, truecase, dehyphenate, accept_editorial,
    swallow_braces, swallow_obelized_words, swallow_square_brackets)

import cltk
cltk.__version__
```

## Text Cleaning

From http://udallasclassics.org/wp-content/uploads/maurer_files/APPARATUSABBREVIATIONS.pdf:

* **[...]** Square brackets, or in recent editions wavy brackets ʺ{...}ʺ, enclose words etc. that an editor thinks should be deleted (see ʺdel.ʺ) or marked as out of place (see ʺsecl.ʺ).
* **[...]** Square brackets in a papyrus text, or in an inscription, enclose places where words have been lost through physical damage. If this happens in mid-line, editors use ʺ[...]ʺ. If only the end of the line is missing, they use a single bracket ʺ[...ʺ. If the lineʹs beginning is missing, they use ʺ...]ʺ. Within the brackets, often each dot represents one missing letter.
* **[[...]]** Double brackets enclose letters or words deleted by the medieval copyist himself.
* **(...)** Round brackets are used to supplement words abbreviated by the original copyist; e.g. in an inscription: ʺtrib(unus) mil(itum) leg(ionis) IIIʺ
* **<...>** Diamond ( = elbow = angular) brackets enclose words etc. that an editor has added (see ʺsuppl.ʺ).
* **†** An obelus (pl. obeli) means that the word(s etc.) is very plainly corrupt, but the editor cannot see how to emend. If only one word is corrupt, there is only one obelus, which precedes the word; if two or more words are corrupt, two obeli enclose them. (Such at least is the rule--but that rule is often broken, especially in older editions, which sometimes dagger several words using only one obelus.) To dagger words in this way is to ʺobelizeʺ them.
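The cleaning helpers imported above (for example `swallow_square_brackets`, `disappear_round_brackets`, and `swallow_obelized_words`) encapsulate these conventions. As a rough illustration of what such rules can look like, here is a hypothetical regex-based sketch; it is not the actual `mlyoucanuse.text_cleaners` implementation, and the sample string is invented.

```
import re

# Illustrative approximations of the editorial-markup rules quoted above.
def drop_square_brackets(text: str) -> str:
    """Remove '[...]' spans: text lost to physical damage is not recoverable."""
    return re.sub(r'\[[^\]]*\]', ' ', text)

def unwrap_round_brackets(text: str) -> str:
    """Keep the content of '(...)': editors expand copyist abbreviations there."""
    return re.sub(r'\(([^)]*)\)', r'\1', text)

def unwrap_angle_brackets(text: str) -> str:
    """Keep the content of '<...>': words supplied by the editor."""
    return re.sub(r'<([^>]*)>', r'\1', text)

def drop_obelized(text: str) -> str:
    """Drop words marked with an obelus: the editor considers them corrupt."""
    text = re.sub(r'†[^†]*†', ' ', text)      # two obeli enclose a corrupt span
    return re.sub(r'†\s*\S+', ' ', text)      # a single obelus marks the next word

sample = "trib(unus) mil(itum) [desunt] <et> †corruptum† legionis"
for clean in (drop_square_brackets, unwrap_round_brackets, unwrap_angle_brackets, drop_obelized):
    sample = clean(sample)
print(' '.join(sample.split()))   # -> "tribunus militum et legionis"
```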
## Load/Build Truecasing dictionary; count all cased tokens, use to normalize cases later ``` truecase_file = 'truecase_counter.latin.pkl' if os.path.exists(truecase_file): with open(truecase_file, 'rb') as fin: case_counts = pickle.load(fin) else: tesserae = get_corpus_reader(corpus_name='latin_text_tesserae', language='latin') case_counts = Counter() jv_replacer = JVReplacer() aeoe_replacer = AEOEReplacer() toker = WordTokenizer('latin') sent_toker = SentenceTokenizer() lemmatizer = LemmaReplacer('latin') for file in tqdm(tesserae.fileids(), total=len(tesserae.fileids())): for sent in tesserae.sents(file): sent = aeoe_replacer.replace(jv_replacer.replace(drop_punct(sent))) sent = normalize_accents(sent) sent = accept_editorial(sent) for token in toker.tokenize(sent): case_counts.update({token:1}) with open(truecase_file, 'wb') as fout: pickle.dump(case_counts, fout) len(case_counts) # 344393, 322711 # 318451 # 316722 # 311399 # 310384 # 310567 # 309529 print(sample(list(case_counts.items()), 25)) def get_word_counts(files:List[str])->Tuple[Dict[str, int], Dict[str, int]]: """ Given a list of files, clean & tokenize the documents return Counters for: lemmatized words in the documents inflected words in the documents """ word_counter = Counter() inflected_word_counter = Counter() jv_replacer = JVReplacer() aeoe_replacer = AEOEReplacer() toker = WordTokenizer('latin') sent_toker = SentenceTokenizer() lemmatizer = LemmaReplacer('latin') for file in tqdm(files , total=len(files), unit='files'): with open(file, 'rt') as fin: text = fin.read() text = text.replace("-\n", "") text = text.replace("\n", " ") text = aeoe_replacer.replace(jv_replacer.replace( text)) for sent in sent_toker.tokenize(text): sent = dehyphenate(sent) # because it's Phi5 sent = swallow_braces(sent) sent = swallow_square_brackets(sent) sent = disappear_round_brackets(sent) sent = swallow_obelized_words(sent) sent = disappear_angle_brackets(sent) sent = drop_punct(sent) sent = normalize_accents(sent) # lemmatizer prefers lower # sent = lemmatizer.lemmatize(sent.lower(), return_string=True) for word in toker.tokenize(sent): if word.isnumeric(): continue inflected_word_counter.update({truecase(word, case_counts):1}) word = lemmatizer.lemmatize(word.lower(), return_string=True) # normalize capitals word_counter.update({truecase(word, case_counts) : 1}) return word_counter, inflected_word_counter def word_stats(author:str, lemma_counter:Counter, inflected_counter:Counter)->Tuple[float, float]: """ """ nw = sum(lemma_counter.values()) print(f"Total count of all tokens in {author} corpus: {nw:,}") print(f"Total number of distinct inflected words/tokens in {author} corpus: {len(inflected_counter):,}") print(f"Total number of lemmatized words/tokens in {author} corpus {len(lemma_counter):,}") ciw1 = sum([1 for key, val in inflected_counter.items() if val == 1]) print(f"Count of inflected tokens only occuring once {ciw1:,}") cw1 = sum([1 for key, val in lemma_counter.items() if val == 1]) print(f"Count of lemmatized tokens only occuring once {cw1:,}") Piu_one = ciw1 / nw print(f"Probability of a single count unigram occuring in the {author} corpus: {Piu_one:.3f}") Plu_one = cw1 / nw print(f"Probability of a single count unigram in the lemmatized {author} corpus: {Plu_one:.3f}") return (Piu_one, Plu_one) # Cicero works cicero_files = glob(f"{os.path.expanduser('~')}/cltk_data/latin/text/phi5/individual_works/LAT0474.TXT-0*.txt") len (cicero_files) cicero_lemmas, cicero_inflected_words = get_word_counts(cicero_files) 
word_stats(author='Cicero', lemma_counter=cicero_lemmas, inflected_counter=cicero_inflected_words) cicero_lemmas_counter_file = 'cicero_lemmas_counter.pkl' cicero_inflected_counter_file = 'cicero_inflected_counter.pkl' if not os.path.exists(cicero_lemmas_counter_file): with open(cicero_lemmas_counter_file, 'wb') as fout: pickle.dump(cicero_lemmas, fout) if not os.path.exists(cicero_inflected_counter_file): with open(cicero_inflected_counter_file, 'wb') as fout: pickle.dump(cicero_inflected_words, fout) author_index = {val:key for key,val in PHI5_INDEX.items() if val != 'Marcus Tullius Cicero, Cicero, Tully'} def get_phi5_author_files(author_name, author_index): stub = author_index[author_name] return glob(os.path.expanduser(f'~/cltk_data/latin/text/phi5/individual_works/{stub}*.txt')) ``` ## Visualization of our corpus comparison: If you took one page from one author and placed it into Cicero, how surprising would it be? If the other author's vocabulary was substantially different, it would be noticeable. We can quantify this. As a result, since we want to predict as close as possible to the author, we should only train a language model where the underlying corpus vocabularies are within a reasonable window of surprise. ``` results = [] for author in author_index: files = get_phi5_author_files(author, author_index) # cicero_lemmas, cicero_inflected_words = get_word_counts(cicero_files) author_lemmas, author_inflected_words = get_word_counts(files) author_words = set(author_lemmas.keys()) cicero_words = set(cicero_lemmas.keys()) common = author_words & cicero_words author_uniq = author_words - common P_one_x_lemma_unigram = len(author_uniq) / sum(author_lemmas.values()) author_words = set(author_inflected_words.keys()) cicero_words = set(cicero_inflected_words.keys()) common = author_words & cicero_words author_uniq = author_words - common P_one_x_inflected_unigram = len(author_uniq) / sum(author_inflected_words.values()) results.append((author, P_one_x_lemma_unigram, P_one_x_inflected_unigram )) # sorted(results, key=lambda x:x[1]) results_map = {key: (val, val2) for key,val,val2 in results} for author in author_index: files = get_phi5_author_files(author, author_index) if len(files) >= 3: print(author, results_map[author]) # the values analogous to Cicero are: (0.02892407263780054, 0.008905886443261747) # grab prose authors # grab poets # consider individual files # Gaius Iulius Caesar, Caesar (0.016170899832329378, 0.0464137117307334) # Apuleius Madaurensis (0.039956560814859196, 0.12101183343319354) # Caelius Apicius (0.04383594547528974, 0.09950159130486999) # Anonymi Comici et Tragici (0.05979473449352968, 0.10397144132083891) # C. Iul. Caes. 
Augustus Octavianus (0.16793743890518084, 0.20527859237536658) # Publius Papinius Statius (0.03662215849687846, 0.1022791767482152) # Lucius Accius (0.0845518118245391, 0.16634880271243907) # Gaius Caesius Bassus (0.040359504832965916, 0.07953196540613872) # Publius Vergilius Maro, Virgil, Vergil (0.03315200072836527, 0.0929348568307006) # Publius Ovidius Naso (0.023965644822556705, 0.06525858344775079) # Gnaeus Naevius (0.11655300681959083, 0.20644761314321142) # Fragmenta Bobiensia (0.07398076042143839, 0.1385707741639945) # Scriptores Historiae Augustae (0.03177853760216489, 0.071072022819111) # Publius Terentius Afer, Terence (0.028577576089507863, 0.058641733823644474) # Aulus Cornelius Celsus (0.017332921313593843, 0.0558848592109822) # Gaius Suetonius Tranquillus (0.033629947836759745, 0.0958944461491255) # Marcus Terentius Varro, Varro (0.045866176600832524, 0.093891152245151) # Appendix Vergiliana (0.0500247341083354, 0.1418501113034875) # Annius Florus (0.038297569987210456, 0.09140969162995595) # Pomponius Porphyrio (0.04030915576694411, 0.09312987184568636) # Marcus Valerius Probus (0.03835521769177609, 0.08431237042156185) # Quintus Ennius (0.05652467883705206, 0.12021636240703178) # Didascaliae et Per. in Terentium (0.0782967032967033, 0.13598901098901098) # Cornelius Tacitus (0.02469418086200983, 0.07631488690859423) # Titus Livius, Livy (0.011407436246836674, 0.03913716547549524) # Lucius Annaeus Seneca senior (0.01619733327917297, 0.052095498258405856) # Quintus Horatius Flaccus, Horace (0.04486396446418656, 0.12253192670738479) # Gaius Asinius Pollio (0.03592814371257485, 0.08982035928143713) # Gaius Sallustius Crispus (0.020570966643975494, 0.059330326752893126) # C. Plinius Caecilius Secundus, Pliny (0.01694301397770358, 0.06551977816761927) # Marcus Fabius Quintilianus (0.009342494688624445, 0.0416682017463066) # Hyginus Gromaticus (0.0285692634131555, 0.08320703243407093) # Titus Lucretius Carus (0.022190184885737107, 0.06787585965048998) # Claudius Caesar Germanicus (0.04035804020100502, 0.12861180904522612) # Gaius, iur., Gaius (0.011268643689753487, 0.035144203727768185) # Quintus Terentius Scaurus (0.04715169618092597, 0.09174311926605505) # Lucius Livius Andronicus (0.14615384615384616, 0.25) # Marcus Cornelius Fronto (0.03605195520469984, 0.08350927115843583) # Didascaliae et Argum. in Plautum (0.07712590639419907, 0.14831905075807514) # Argum. Aen. et Tetrast. (0.07066381156316917, 0.1441827266238401) # Anonymi Epici et Lyrici (0.09684487291849254, 0.19237510955302367) # Marcus Porcius Cato, Cato (0.061287538049157236, 0.13079823724501385) # Sextus Iulius Frontinus (0.03041633518960488, 0.09337045876425351) # Lucius Annaeus Seneca iunior (0.012655345175352984, 0.05447654369184723) # Titus Maccius Plautus (0.02682148990105487, 0.062141513731995376) # Maurus Servius Honoratus, Servius (0.025347881711764008, 0.05923711189138313) # Quintus Asconius Pedianus (0.010382059800664452, 0.029663028001898434) ```
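The ratio computed above — the number of word types unique to an author divided by that author's total token count — can be made concrete with a toy example. The words and counts below are invented, purely for illustration.

```
from collections import Counter

# Invented toy vocabularies, purely for illustration
cicero_toy = Counter({'res': 40, 'publica': 25, 'senatus': 10, 'consul': 5})
poet_toy = Counter({'arma': 12, 'virum': 8, 'res': 6, 'senatus': 2})

unseen_types = set(poet_toy) - set(cicero_toy)        # {'arma', 'virum'}
p_surprise = len(unseen_types) / sum(poet_toy.values())
print(f"{len(unseen_types)} unseen types / {sum(poet_toy.values())} tokens = {p_surprise:.3f}")
```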
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_07_2_Keras_gan.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # T81-558: Applications of Deep Neural Networks **Module 7: Generative Adversarial Networks** * Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx) * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). # Module 7 Material * Part 7.1: Introduction to GANS for Image and Data Generation [[Video]](https://www.youtube.com/watch?v=0QnCH6tlZgc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_1_gan_intro.ipynb) * **Part 7.2: Implementing a GAN in Keras** [[Video]](https://www.youtube.com/watch?v=T-MCludVNn4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_2_Keras_gan.ipynb) * Part 7.3: Face Generation with StyleGAN and Python [[Video]](https://www.youtube.com/watch?v=Wwwyr7cOBlU&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_3_style_gan.ipynb) * Part 7.4: GANS for Semi-Supervised Learning in Keras [[Video]](https://www.youtube.com/watch?v=ZPewmEu7644&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_4_gan_semi_supervised.ipynb) * Part 7.5: An Overview of GAN Research [[Video]](https://www.youtube.com/watch?v=cvCvZKvlvq4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_07_5_gan_research.ipynb) ``` # Nicely formatted time string def hms_string(sec_elapsed): h = int(sec_elapsed / (60 * 60)) m = int((sec_elapsed % (60 * 60)) / 60) s = sec_elapsed % 60 return "{}:{:>02}:{:>05.2f}".format(h, m, s) ``` # Part 7.2: Implementing DCGANs in Keras Paper that described the type of DCGAN that we will create in this module. [[Cite:radford2015unsupervised]](https://arxiv.org/abs/1511.06434) This paper implements a DCGAN as follows: * No pre-processing was applied to training images besides scaling to the range of the tanh activation function [-1, 1]. * All models were trained with mini-batch stochastic gradient descent (SGD) with a mini-batch size of 128. * All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02. * In the LeakyReLU, the slope of the leak was set to 0.2 in all models. * we used the Adam optimizer(Kingma & Ba, 2014) with tuned hyperparameters. We found the suggested learning rate of 0.001, to be too high, using 0.0002 instead. * Additionally, we found leaving the momentum term $\beta{1}$ at the suggested value of 0.9 resulted in training oscillation and instability while reducing it to 0.5 helped stabilize training. The paper also provides the following architecture guidelines for stable Deep Convolutional GANs: * Replace any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator). * Use batchnorm in both the generator and the discriminator. * Remove fully connected hidden layers for deeper architectures. * Use ReLU activation in generator for all layers except for the output, which uses Tanh. * Use LeakyReLU activation in the discriminator for all layers. 
While creating the material for this module I used a number of Internet resources; some of the most helpful were: * [Deep Convolutional Generative Adversarial Network (TensorFlow 2.0 example code)](https://www.tensorflow.org/tutorials/generative/dcgan) * [Keep Calm and train a GAN. Pitfalls and Tips on training Generative Adversarial Networks](https://medium.com/@utk.is.here/keep-calm-and-train-a-gan-pitfalls-and-tips-on-training-generative-adversarial-networks-edd529764aa9) * [Collection of Keras implementations of Generative Adversarial Networks GANs](https://github.com/eriklindernoren/Keras-GAN) * [dcgan-facegenerator](https://github.com/platonovsimeon/dcgan-facegenerator), [Semi-Paywalled Article by GitHub Author](https://medium.com/datadriveninvestor/generating-human-faces-with-keras-3ccd54c17f16) The program created next will generate faces similar to these. While these faces are not perfect, they demonstrate how we can construct and train a GAN on our own. Later we will see how to import very advanced weights from nVidia to produce high-resolution, realistic-looking faces. Figure 7.GAN-GRID shows images from GAN training. **Figure 7.GAN-GRID: GAN Neural Network Training** ![GAN](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan-3.png "GAN Images") As discussed in the previous module, the GAN is made up of two different neural networks: the discriminator and the generator. The generator generates the images, while the discriminator detects if a face is real or was generated. These two neural networks work as shown in Figure 7.GAN-EVAL: **Figure 7.GAN-EVAL: Evaluating GANs** ![GAN](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan_fig_1.png "GAN") The discriminator accepts an image as its input and produces a number that is the probability of the input image being real. The generator accepts a random seed vector and generates an image from that seed. An unlimited number of new images can be created by providing additional seeds. I suggest running this code with a GPU; it will be very slow on a CPU alone. The following code mounts your Google drive for use with Google CoLab. If you are not using CoLab, the following code will not work.

```
try:
    from google.colab import drive
    drive.mount('/content/drive', force_remount=True)
    COLAB = True
    print("Note: using Google CoLab")
    %tensorflow_version 2.x
except:
    print("Note: not using Google CoLab")
    COLAB = False
```

The following packages will be used to implement a basic GAN system in Python/Keras.

```
import tensorflow as tf
from tensorflow.keras.layers import Input, Reshape, Dropout, Dense
from tensorflow.keras.layers import Flatten, BatchNormalization
from tensorflow.keras.layers import Activation, ZeroPadding2D
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import UpSampling2D, Conv2D
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.optimizers import Adam
import numpy as np
from PIL import Image
from tqdm import tqdm
import os
import time
import matplotlib.pyplot as plt
```

These are the constants that define how the GANs will be created for this example. The higher the resolution, the more memory that will be needed. Higher resolution will also result in longer run times. For Google CoLab (with GPU) 128x128 resolution is as high as can be used (due to memory). Note that the resolution is specified as a multiple of 32. So **GENERATE_RES** of 1 is 32, 2 is 64, etc.
To run this you will need training data. The training data can be any collection of images. I suggest using training data from the following two locations. Simply unzip and combine them into a common directory. This directory should be uploaded to Google Drive (if you are using CoLab). The constant **DATA_PATH** defines where these images are stored. The source data (faces) used in this module can be found here: * [Kaggle Faces Data New](https://www.kaggle.com/gasgallo/faces-data-new) * [Kaggle Lag Dataset: Dataset of faces, from more than 1k different subjects](https://www.kaggle.com/gasgallo/lag-dataset)

```
# Generation resolution - Must be square
# Training data is also scaled to this.
# Note GENERATE_RES 4 or higher
# will blow Google CoLab's memory and has not
# been tested extensively.
GENERATE_RES = 3 # Generation resolution factor
# (1=32, 2=64, 3=96, 4=128, etc.)
GENERATE_SQUARE = 32 * GENERATE_RES # rows/cols (should be square)
IMAGE_CHANNELS = 3

# Preview image
PREVIEW_ROWS = 4
PREVIEW_COLS = 7
PREVIEW_MARGIN = 16

# Size vector to generate images from
SEED_SIZE = 100

# Configuration
DATA_PATH = '/content/drive/My Drive/projects/faces'
EPOCHS = 50
BATCH_SIZE = 32
BUFFER_SIZE = 60000

print(f"Will generate {GENERATE_SQUARE}px square images.")
```

Next we will load and preprocess the images. This can take a while; Google CoLab took around an hour to process. Because of this we store the processed file as a binary. This way we can simply reload the processed training data and quickly use it. It is most efficient to only perform this operation once. The dimensions of the image are encoded into the filename of the binary file because we need to regenerate it if these change.

```
# Image set has 11,682 images. Can take over an hour
# for initial preprocessing.
# Because of this time needed, save a Numpy preprocessed file.
# Note, that file is large enough to cause problems for
# some versions of Pickle,
# so Numpy binary files are used.
training_binary_path = os.path.join(DATA_PATH,
        f'training_data_{GENERATE_SQUARE}_{GENERATE_SQUARE}.npy')

print(f"Looking for file: {training_binary_path}")

if not os.path.isfile(training_binary_path):
    start = time.time()
    print("Loading training images...")

    training_data = []
    faces_path = os.path.join(DATA_PATH, 'face_images')
    for filename in tqdm(os.listdir(faces_path)):
        path = os.path.join(faces_path, filename)
        image = Image.open(path).resize((GENERATE_SQUARE,
            GENERATE_SQUARE), Image.ANTIALIAS)
        training_data.append(np.asarray(image))
    training_data = np.reshape(training_data, (-1, GENERATE_SQUARE,
            GENERATE_SQUARE, IMAGE_CHANNELS))
    training_data = training_data.astype(np.float32)
    training_data = training_data / 127.5 - 1.

    print("Saving training image binary...")
    np.save(training_binary_path, training_data)
    elapsed = time.time() - start
    print(f'Image preprocess time: {hms_string(elapsed)}')
else:
    print("Loading previous training pickle...")
    training_data = np.load(training_binary_path)
```

We will use a TensorFlow **Dataset** object to actually hold the images. This allows the data to be quickly shuffled and divided into the appropriate batch sizes for training.

```
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(training_data) \
    .shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
```

The code below creates the generator and the discriminator. Both will be trained with the Adam optimizer.
``` def build_generator(seed_size, channels): model = Sequential() model.add(Dense(4*4*256,activation="relu",input_dim=seed_size)) model.add(Reshape((4,4,256))) model.add(UpSampling2D()) model.add(Conv2D(256,kernel_size=3,padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(Activation("relu")) model.add(UpSampling2D()) model.add(Conv2D(256,kernel_size=3,padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(Activation("relu")) # Output resolution, additional upsampling model.add(UpSampling2D()) model.add(Conv2D(128,kernel_size=3,padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(Activation("relu")) if GENERATE_RES>1: model.add(UpSampling2D(size=(GENERATE_RES,GENERATE_RES))) model.add(Conv2D(128,kernel_size=3,padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(Activation("relu")) # Final CNN layer model.add(Conv2D(channels,kernel_size=3,padding="same")) model.add(Activation("tanh")) return model def build_discriminator(image_shape): model = Sequential() model.add(Conv2D(32, kernel_size=3, strides=2, input_shape=image_shape, padding="same")) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.25)) model.add(Conv2D(64, kernel_size=3, strides=2, padding="same")) model.add(ZeroPadding2D(padding=((0,1),(0,1)))) model.add(BatchNormalization(momentum=0.8)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.25)) model.add(Conv2D(128, kernel_size=3, strides=2, padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.25)) model.add(Conv2D(256, kernel_size=3, strides=1, padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.25)) model.add(Conv2D(512, kernel_size=3, strides=1, padding="same")) model.add(BatchNormalization(momentum=0.8)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(1, activation='sigmoid')) return model ``` As we progress through training images will be produced to show the progress. These images will contain a number of rendered faces that show how good the generator has become. These faces will be ``` def save_images(cnt,noise): image_array = np.full(( PREVIEW_MARGIN + (PREVIEW_ROWS * (GENERATE_SQUARE+PREVIEW_MARGIN)), PREVIEW_MARGIN + (PREVIEW_COLS * (GENERATE_SQUARE+PREVIEW_MARGIN)), 3), 255, dtype=np.uint8) generated_images = generator.predict(noise) generated_images = 0.5 * generated_images + 0.5 image_count = 0 for row in range(PREVIEW_ROWS): for col in range(PREVIEW_COLS): r = row * (GENERATE_SQUARE+16) + PREVIEW_MARGIN c = col * (GENERATE_SQUARE+16) + PREVIEW_MARGIN image_array[r:r+GENERATE_SQUARE,c:c+GENERATE_SQUARE] \ = generated_images[image_count] * 255 image_count += 1 output_path = os.path.join(DATA_PATH,'output') if not os.path.exists(output_path): os.makedirs(output_path) filename = os.path.join(output_path,f"train-{cnt}.png") im = Image.fromarray(image_array) im.save(filename) ``` ``` generator = build_generator(SEED_SIZE, IMAGE_CHANNELS) noise = tf.random.normal([1, SEED_SIZE]) generated_image = generator(noise, training=False) plt.imshow(generated_image[0, :, :, 0]) ``` ``` image_shape = (GENERATE_SQUARE,GENERATE_SQUARE,IMAGE_CHANNELS) discriminator = build_discriminator(image_shape) decision = discriminator(generated_image) print (decision) ``` Loss functions must be developed that allow the generator and discriminator to be trained in an adversarial way. 
Because these two neural networks are being trained independently, they must be trained in two separate passes. This requires two separate loss functions and also two separate updates to the gradients. When the discriminator's gradients are applied to decrease the discriminator's loss, it is important that only the discriminator's weights are updated. It is not fair, nor will it produce good results, to adversarially damage the weights of the generator to help the discriminator. A simple backpropagation would do this. It would simultaneously affect the weights of both generator and discriminator to lower whatever loss it was assigned to lower. Figure 7.TDIS shows how the discriminator is trained. **Figure 7.TDIS: Training the Discriminator** ![Training the Discriminator](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan_fig_2.png "Training the Discriminator") Here a training set is generated with an equal number of real and fake images. The real images are randomly sampled (chosen) from the training data. An equal number of random images are generated from random seeds. For the discriminator training set, the $x$ contains the input images and the $y$ contains a value of 1 for real images and 0 for generated ones. Likewise, Figure 7.TGEN shows how the generator is trained. **Figure 7.TGEN: Training the Generator** ![Training the Generator](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/gan_fig_3.png "Training the Generator") For the generator training set, the $x$ contains the random seeds to generate images and the $y$ always contains the value of 1, because the goal is for the generator to produce images so good that the discriminator is fooled into assigning them a probability near 1.

```
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy()

def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss

def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)
```

Both the generator and discriminator use Adam with the same learning rate and momentum. This does not need to be the case. If you use a **GENERATE_RES** greater than 3, you may need to tune these learning rates, as well as other training hyperparameters.

```
generator_optimizer = tf.keras.optimizers.Adam(1.5e-4, 0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(1.5e-4, 0.5)
```

The following function is where most of the training takes place for both the discriminator and the generator. This function was based on the GAN provided by the [TensorFlow Keras examples](https://www.tensorflow.org/tutorials/generative/dcgan) documentation. The first thing you should notice about this function is that it is annotated with the **tf.function** annotation. This causes the function to be precompiled and improves performance. This function trains the networks differently from the code we previously saw for training. This code makes use of **GradientTape** to allow the discriminator and generator to be trained together, yet separately.

```
# Notice the use of `tf.function`
# This annotation causes the function to be "compiled".
@tf.function
def train_step(images):
    seed = tf.random.normal([BATCH_SIZE, SEED_SIZE])

    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(seed, training=True)

        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated_images, training=True)

        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)

    gradients_of_generator = gen_tape.gradient(
        gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(
        disc_loss, discriminator.trainable_variables)

    generator_optimizer.apply_gradients(zip(
        gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(
        gradients_of_discriminator, discriminator.trainable_variables))
    return gen_loss, disc_loss

def train(dataset, epochs):
    fixed_seed = np.random.normal(0, 1, (PREVIEW_ROWS * PREVIEW_COLS, SEED_SIZE))
    start = time.time()

    for epoch in range(epochs):
        epoch_start = time.time()

        gen_loss_list = []
        disc_loss_list = []

        for image_batch in dataset:
            t = train_step(image_batch)
            gen_loss_list.append(t[0])
            disc_loss_list.append(t[1])

        g_loss = sum(gen_loss_list) / len(gen_loss_list)
        d_loss = sum(disc_loss_list) / len(disc_loss_list)

        epoch_elapsed = time.time() - epoch_start
        print(f'Epoch {epoch+1}, gen loss={g_loss}, disc loss={d_loss}, '
              f'{hms_string(epoch_elapsed)}')
        save_images(epoch, fixed_seed)

    elapsed = time.time() - start
    print(f'Training time: {hms_string(elapsed)}')

train(train_dataset, EPOCHS)
```

```
generator.save(os.path.join(DATA_PATH, "face_generator.h5"))
```
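As a quick check that the saved weights are usable, the generator can be reloaded and sampled with a fresh seed vector. This cell is not part of the original notebook; it assumes training has finished and that `face_generator.h5` was written by the previous cell.

```
# Reload the saved generator and sample one new face from a fresh latent vector.
from tensorflow.keras.models import load_model
import numpy as np
import os
import matplotlib.pyplot as plt

saved_generator = load_model(os.path.join(DATA_PATH, "face_generator.h5"))
seed = np.random.normal(0, 1, (1, SEED_SIZE))   # one fresh seed vector
img = saved_generator.predict(seed)[0]
img = 0.5 * img + 0.5                           # tanh output [-1, 1] -> [0, 1]
plt.imshow(img)
plt.axis('off')
plt.show()
```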
```
from __future__ import print_function
import numpy as np
np.random.seed(1337)  # for reproducibility

import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
```

Preprocess data

```
nb_classes = 10

# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
pool_size = (2, 2)
# convolution kernel size
kernel_size = (3, 3)

# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()

if K.image_dim_ordering() == 'th':
    X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
    X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
    X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
```

Build a Keras model using the `Sequential` API

```
batch_size = 50
nb_epoch = 10

model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size,
                        padding='valid',
                        input_shape=input_shape,
                        activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(nb_filters, kernel_size, activation='relu'))
model.add(Dropout(rate=0.25))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dropout(rate=0.5))  # dropout rate must be in [0, 1)
model.add(Dense(nb_classes, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()
```

Train and evaluate the model

```
model.fit(X_train[0:10000, ...], Y_train[0:10000, ...],
          batch_size=batch_size, epochs=nb_epoch,
          verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
```

Save the model

```
model.save('example_keras_mnist_model.h5')
```
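As a quick sanity check (not part of the original notebook), the saved model can be reloaded and used to classify a few test digits; this assumes the variables from the cells above are still in memory.

```
# Reload the saved model and classify the first few test digits.
from keras.models import load_model
import numpy as np

restored = load_model('example_keras_mnist_model.h5')
predictions = restored.predict(X_test[:5])
print('Predicted classes:', np.argmax(predictions, axis=1))
print('True classes:     ', y_test[:5])
```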
# Compute forcing for 1%CO2 data ``` import os import numpy as np import pandas as pd import matplotlib.pyplot as plt filedir1 = '/Users/hege-beatefredriksen/OneDrive - UiT Office 365/Data/CMIP5_globalaverages/Forcingpaperdata' storedata = False # store anomalies in file? storeforcingdata = False createnewfile = False # if it is the first time this is run, new files should be created, that can later be loaded exp = '1pctCO2' filenameT = 'annualTanom_1pctCO2.txt' filenameN = 'annualNanom_1pctCO2.txt' filenameFF = 'forcing_F13method_1pctCO2.txt' filenameNF = 'forcing_estimates_1pctCO2.txt' # create file first time code is run: if createnewfile == True: cols = ['year','ACCESS1-0','ACCESS1-3','CanESM2','CCSM4','CNRM-CM5','CSIRO-Mk3-6-0','GFDL-CM3','GFDL-ESM2G','GFDL-ESM2M','GISS-E2-H','GISS-E2-R','HadGEM2-ES','inmcm4','IPSL-CM5A-LR','IPSL-CM5B-LR','MIROC-ESM','MIROC5','MPI-ESM-LR','MPI-ESM-MR','MRI-CGCM3','NorESM1-M'] dfT = pd.DataFrame(np.full((140, len(cols)),'-'), columns = cols); dfT['year'] = np.arange(1,140+1) dfN = pd.DataFrame(np.full((140, len(cols)),'-'), columns = cols); dfN['year'] = np.arange(1,140+1) dfT.to_csv(filenameT, sep='\t'); dfN.to_csv(filenameN, sep='\t'); dfFF = pd.DataFrame(np.full((140, len(cols)),'-'), columns = cols); dfFF['year'] = np.arange(1,140+1) dfNF = pd.DataFrame(np.full((140, len(cols)),'-'), columns = cols); dfNF['year'] = np.arange(1,140+1) dfFF.to_csv(filenameFF, sep='\t'); dfNF.to_csv(filenameNF, sep='\t'); #model = 'ACCESS1-0' #model = 'ACCESS1-3' #model = 'CanESM2' #model = 'CCSM4' #model = 'CNRM-CM5' #model = 'CSIRO-Mk3-6-0' #model = 'GFDL-CM3' #model = 'GFDL-ESM2G' # 1pctco2 only for 70 years #model = 'GFDL-ESM2M' # 1pctco2 only for 70 years #model = 'GISS-E2-H' #model = 'GISS-E2-R' #model = 'HadGEM2-ES' #model = 'inmcm4' #model = 'IPSL-CM5A-LR' #model = 'IPSL-CM5B-LR' #model = 'MIROC-ESM' #model = 'MIROC5' #model = 'MPI-ESM-LR' #model = 'MPI-ESM-MR' #model = 'MRI-CGCM3' model = 'NorESM1-M' realm = 'Amon' ensemble = 'r1i1p1' ## define time periods of data: if model == 'ACCESS1-0': controltimeperiod = '030001-079912' exptimeperiod = '030001-043912' control_branch_yr = 300 elif model == 'ACCESS1-3': controltimeperiod = '025001-074912' exptimeperiod = '025001-038912' control_branch_yr = 250 elif model == 'CanESM2': controltimeperiod = '201501-301012' exptimeperiod = '185001-198912' control_branch_yr = 2321 elif model == 'CCSM4': controltimeperiod = '025001-130012' exptimeperiod = '185001-200512' control_branch_yr = 251 elif model == 'CNRM-CM5': controltimeperiod = '185001-269912' exptimeperiod = '185001-198912' control_branch_yr = 1850 elif model == 'CSIRO-Mk3-6-0': controltimeperiod = '000101-050012' exptimeperiod = '000101-014012' control_branch_yr = 104 elif model == 'GFDL-CM3': controltimeperiod = '000101-050012' exptimeperiod = '000101-014012' control_branch_yr = 1 elif model == 'GFDL-ESM2G' or model == 'GFDL-ESM2M': controltimeperiod = '000101-050012' exptimeperiod = '000101-020012' # 1pctco2 only for 70 years. 
control_branch_yr = 1 elif model == 'GISS-E2-H': print(model + 'has control run for two different periods') #controltimeperiod = '118001-141912' controltimeperiod = '241001-294912' exptimeperiod = '185001-200012' control_branch_yr = 2410 elif model == 'GISS-E2-R': #controltimeperiod = '333101-363012' #controltimeperiod1 = '398101-453012' controltimeperiod2 = '398101-920512' exptimeperiod = '185001-200012' control_branch_yr = 3981 # Note: The two blocks of years that are present (3331-3630 and 3981-4530) represent different control runs elif model == 'HadGEM2-ES': controltimeperiod = '186001-243511' exptimeperiod = '186001-199912' control_branch_yr = 1860 # or actually december 1859, but I ignore this first month is the annual average elif model == 'inmcm4': controltimeperiod = '185001-234912' exptimeperiod = '209001-222912' control_branch_yr = 2090 elif model == 'IPSL-CM5A-LR': controltimeperiod = '180001-279912' exptimeperiod = '185001-198912' control_branch_yr = 1850 elif model == 'IPSL-CM5B-LR': controltimeperiod = '183001-212912' exptimeperiod = '185001-200012' control_branch_yr = 1850 elif model == 'MIROC-ESM': controltimeperiod = '180001-242912' exptimeperiod = '000101-014012' control_branch_yr = 1880 elif model == 'MIROC5': controltimeperiod = '200001-286912' exptimeperiod = '220001-233912' control_branch_yr = 2200 elif model == 'MPI-ESM-LR': controltimeperiod = '185001-284912' exptimeperiod = '185001-199912' control_branch_yr = 1880 elif model == 'MPI-ESM-MR': controltimeperiod = '185001-284912' exptimeperiod = '185001-199912' control_branch_yr = 1850 elif model == 'MRI-CGCM3': controltimeperiod = '185101-235012' exptimeperiod = '185101-199012' control_branch_yr = 1891 elif model == 'NorESM1-M': controltimeperiod = '070001-120012' exptimeperiod = '000101-014012' control_branch_yr = 700 #### load 1pctCO2 data #### var = 'tas' # temperatures strings = [var, realm, model, exp, ensemble, exptimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") temp=datatable.iloc[0:len(datatable),0] var = 'rlut' # rlut strings = [var, realm, model, exp, ensemble, exptimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") rlut=datatable.iloc[0:len(datatable),0] var = 'rsut' # rsut strings = [var, realm, model, exp, ensemble, exptimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") rsut=datatable.iloc[0:len(datatable),0] var = 'rsdt' # rsdt strings = [var, realm, model, exp, ensemble, exptimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") rsdt=datatable.iloc[0:len(datatable),0] # drop all data after 140 years temp = temp[:140]; rlut = rlut[:140]; rsut = rsut[:140]; rsdt = rsdt[:140] ###### load control run data ###### exp = 'piControl' var = 'tas' # temperatures if model == 'GISS-E2-R': controltimeperiod = controltimeperiod2 strings = [var, realm, model, exp, ensemble, controltimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") controltemp=datatable.iloc[:,0] var = 'rlut' # rlut strings = [var, realm, model, exp, ensemble, 
controltimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") controlrlut=datatable.iloc[0:len(controltemp),0] var = 'rsut' # rsut strings = [var, realm, model, exp, ensemble, controltimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") controlrsut=datatable.iloc[0:len(controltemp),0] var = 'rsdt' # rsdt strings = [var, realm, model, exp, ensemble, controltimeperiod] filename = 'glannual_' + "_".join(strings) + '.txt' file = os.path.join(filedir1, model, exp, filename) datatable = pd.read_table(file, header=None,sep=" ") controlrsdt=datatable.iloc[0:len(controltemp),0] years = np.arange(1,len(temp)+1) # create figure fig, ax = plt.subplots(nrows=2,ncols=2,figsize = [16,10]) # plot temperature var = temp[:]; label = 'tas' ax[0,0].plot(years,var,linewidth=2,color = "black") #ax[0,0].set_xlabel('t',fontsize = 18) ax[0,0].set_ylabel(label + '(t)',fontsize = 18) ax[0,0].set_title('1pctCO2 ' + label,fontsize = 18) ax[0,0].grid() ax[0,0].set_xlim(min(years),max(years)) ax[0,0].tick_params(axis='both',labelsize=18) # plot rlut var = rlut[:]; label = 'rlut' ax[0,1].plot(years,var,linewidth=2,color = "black") #ax[0,1].set_xlabel('t',fontsize = 18) ax[0,1].set_ylabel(label + '(t)',fontsize = 18) ax[0,1].set_title('1pctCO2 ' + label,fontsize = 18) ax[0,1].grid() ax[0,1].set_xlim(min(years),max(years)) ax[0,1].tick_params(axis='both',labelsize=18) # plot rsdt var = rsdt[:]; label = 'rsdt' ax[1,0].plot(years,var,linewidth=2,color = "black") ax[1,0].set_xlabel('t',fontsize = 18) ax[1,0].set_ylabel(label + '(t)',fontsize = 18) ax[1,0].set_title('1pctCO2 ' + label,fontsize = 18) ax[1,0].grid() ax[1,0].set_xlim(min(years),max(years)) ax[1,0].tick_params(axis='both',labelsize=18) # plot rsut var = rsut[:]; label = 'rsut' ax[1,1].plot(years,var,linewidth=2,color = "black") ax[1,1].set_xlabel('t',fontsize = 18) ax[1,1].set_ylabel(label + '(t)',fontsize = 18) ax[1,1].set_title('1pctCO2 ' + label,fontsize = 18) ax[1,1].grid() ax[1,1].set_xlim(min(years),max(years)) ax[1,1].tick_params(axis='both',labelsize=18) # plot control run data and linear trends controlyears = np.arange(0,len(controltemp)) branchindex = control_branch_yr - int(controltimeperiod[0:4]) print(branchindex) # create figure fig, ax = plt.subplots(nrows=2,ncols=2,figsize = [16,10]) # plot temperature var = controltemp[:]; label = 'tas' ax[0,0].plot(controlyears,var,linewidth=2,color = "black") # find linear fits to control T and nettoarad in the same period as exp: p1 = np.polyfit(controlyears[branchindex:(branchindex + len(temp))], controltemp[branchindex:(branchindex + len(temp))], deg = 1) lintrendT = np.polyval(p1,controlyears[branchindex:(branchindex + len(temp))]) ax[0,0].plot(controlyears[branchindex:(branchindex + len(temp))], lintrendT, linewidth = 4) ax[0,0].set_ylabel(label + '(t)',fontsize = 18) ax[0,0].set_title('Control ' + label,fontsize = 18) ax[0,0].grid() ax[0,0].set_xlim(min(controlyears),max(controlyears)) ax[0,0].tick_params(axis='both',labelsize=18) # plot rlut var = controlrlut[:]; label = 'rlut' ax[0,1].plot(controlyears,var,linewidth=2,color = "black") p2 = np.polyfit(controlyears[branchindex:(branchindex + len(temp))], controlrlut[branchindex:(branchindex + len(temp))], deg = 1) lintrend_rlut = np.polyval(p2,controlyears[branchindex:(branchindex + len(temp))]) 
ax[0,1].plot(controlyears[branchindex:(branchindex + len(temp))], lintrend_rlut, linewidth = 4) ax[0,1].set_ylabel(label + '(t)',fontsize = 18) ax[0,1].set_title('Control ' + label,fontsize = 18) ax[0,1].grid() ax[0,1].set_xlim(min(controlyears),max(controlyears)) ax[0,1].tick_params(axis='both',labelsize=18) # plot rsdt var = controlrsdt[:]; label = 'rsdt' ax[1,0].plot(controlyears,var,linewidth=2,color = "black") p3 = np.polyfit(controlyears[branchindex:(branchindex + len(temp))], controlrsdt[branchindex:(branchindex + len(temp))], deg = 1) lintrend_rsdt = np.polyval(p3,controlyears[branchindex:(branchindex + len(temp))]) ax[1,0].plot(controlyears[branchindex:(branchindex + len(temp))], lintrend_rsdt, linewidth = 4) ax[1,0].set_xlabel('t',fontsize = 18) ax[1,0].set_ylabel(label + '(t)',fontsize = 18) ax[1,0].set_title('Control ' + label,fontsize = 18) ax[1,0].grid() ax[1,0].set_xlim(min(controlyears),max(controlyears)) ax[1,0].set_ylim(var[0]-2,var[0]+2) ax[1,0].tick_params(axis='both',labelsize=18) # plot rsut var = controlrsut[:]; label = 'rsut' ax[1,1].plot(controlyears,var,linewidth=2,color = "black") p4 = np.polyfit(controlyears[branchindex:(branchindex + len(temp))], controlrsut[branchindex:(branchindex + len(temp))], deg = 1) lintrend_rsut = np.polyval(p4,controlyears[branchindex:(branchindex + len(temp))]) ax[1,1].plot(controlyears[branchindex:(branchindex + len(temp))], lintrend_rsut, linewidth = 4) ax[1,1].set_xlabel('t',fontsize = 18) ax[1,1].set_ylabel(label + '(t)',fontsize = 18) ax[1,1].set_title('Control ' + label,fontsize = 18) ax[1,1].grid() ax[1,1].set_xlim(min(controlyears),max(controlyears)) ax[1,1].tick_params(axis='both',labelsize=18) nettoarad = rsdt - rsut - rlut controlnettoarad = controlrsdt - controlrsut - controlrlut lintrendN = lintrend_rsdt - lintrend_rsut - lintrend_rlut deltaN = nettoarad - lintrendN deltaT = temp - lintrendT # create figure fig, ax = plt.subplots(nrows=1,ncols=2,figsize = [16,5]) # plot 1pctCO2 net TOA rad var = nettoarad[:]; label = 'net TOA rad' ax[0,].plot(years,var,linewidth=2,color = "black") ax[0,].set_xlabel('t',fontsize = 18) ax[0,].set_ylabel(label + '(t)',fontsize = 18) ax[0,].set_title('1pctCO2 ' + label,fontsize = 18) ax[0,].grid() ax[0,].set_xlim(min(years),max(years)) ax[0,].tick_params(axis='both',labelsize=18) # plot control net TOA rad var = controlnettoarad[:]; label = 'net TOA rad' ax[1,].plot(controlyears,var,linewidth=2,color = "black") ax[1,].plot(controlyears[branchindex:(branchindex + len(temp))],lintrendN,linewidth=4) ax[1,].set_xlabel('t',fontsize = 18) ax[1,].set_ylabel(label + '(t)',fontsize = 18) ax[1,].set_title('Control ' + label,fontsize = 18) ax[1,].grid() ax[1,].set_xlim(min(controlyears),max(controlyears)) ax[1,].tick_params(axis='both',labelsize=18) ########### plot also anomalies: ########### # create figure fig, ax = plt.subplots(nrows=1,ncols=1,figsize = [8,5]) var = deltaN; label = 'net TOA rad' ax.plot(years,var,linewidth=2,color = "black") ax.set_xlabel('t',fontsize = 18) ax.set_ylabel(label + '(t)',fontsize = 18) ax.set_title('1pctCO2 ' + label + ' anomaly',fontsize = 18) ax.grid() ax.set_xlim(min(years),max(years)) ax.tick_params(axis='both',labelsize=18) # write time series to a dataframe? 
if storedata == True: dfT = pd.read_table(filenameT, index_col=0); dfN = pd.read_table(filenameN, index_col=0); # load files dfT[model] = deltaT; dfN[model] = deltaN dfT.to_csv(filenameT, sep='\t'); dfN.to_csv(filenameN, sep='\t') # save files again ``` ## Load my estimated parameters ``` filename = 'best_estimated_parameters.txt' parameter_table = pd.read_table(filename,index_col=0) GregoryT2x = parameter_table.loc[model,'GregoryT2x'] GregoryF2x = parameter_table.loc[model,'GregoryF2x'] fbpar = GregoryF2x/GregoryT2x #feedback parameter from Gregory plot print(fbpar) F = deltaN + fbpar*deltaT fig, ax = plt.subplots(figsize = [9,5]) plt.plot(years,F,linewidth=2,color = "black") ax.set_xlabel('t (years)',fontsize = 18) ax.set_ylabel('F(t) [$W/m^2$]',fontsize = 18) ax.set_title('1pctCO2 forcing',fontsize = 18) ax.grid() ax.set_xlim(min(years),max(years)) ax.tick_params(axis='both',labelsize=22) if storeforcingdata == True: dfFF = pd.read_table(filenameFF, index_col=0); # load files dfFF[model] = F; dfFF.to_csv(filenameFF, sep='\t'); # save file again # load remaining parameters: taulist = np.array(parameter_table.loc[model,'tau1':'tau3']) a_n = np.array(parameter_table.loc[model,'a_1':'a_4']) b_n = np.array(parameter_table.loc[model,'b_1':'b_4']) F2x = parameter_table.loc[model,'F2x'] T2x = parameter_table.loc[model,'T2x'] # compute other needed parameters from these: dim = len(taulist) if any(a_n == 0): dim = np.count_nonzero(a_n[:dim]) zeroindex = np.where(a_n == 0)[0] a_n = np.delete(a_n,zeroindex) b_n = np.delete(b_n,zeroindex) taulist = np.delete(taulist,zeroindex) fbparlist = (b_n/a_n)[:dim] print(fbparlist) amplitudes = a_n[:dim]/(2*F2x*taulist) print(np.sum(a_n)/2) print(T2x) # compute components T_n(t) = exp(-t/tau_n)*F(t) (Here * is a convolution) dim = len(taulist) lf = len(F) predictors = np.full((lf,dim),np.nan) # compute exact predictors by integrating greens function for k in range(0,dim): intgreensti = np.full((lf,lf),0.) 
    # remember dot after 0 to create floating point number array instead of integer
    for t in range(0,lf):
        # compute one new contribution to the matrix:
        intgreensti[t,0] = taulist[k]*(np.exp(-t/taulist[k]) - np.exp(-(t+1)/taulist[k]))
        # take the rest from row above:
        if t > 0:
            intgreensti[t,1:(t+1)] = intgreensti[t-1,0:t]
    # compute discretized convolution integral by this matrix product:
    predictors[:,k] = intgreensti@np.array(F)

Tn = amplitudes*predictors

fig, ax = plt.subplots(figsize = [9,5])
plt.plot(years,Tn[:,0],linewidth=2,color = "black",label = 'Mode with time scale ' + str(np.round(taulist[0])) + ' years')
plt.plot(years,Tn[:,1],linewidth=2,color = "blue",label = 'Mode with time scale ' + str(np.round(taulist[1])) + ' years')
if dim>2:
    plt.plot(years,Tn[:,2],linewidth=2,color = "red",label = 'Mode with time scale ' + str(np.round(taulist[2],1)) + ' years')
ax.set_xlabel('t',fontsize = 18)
ax.set_ylabel('T(t)',fontsize = 18)
ax.set_title('Temperature responses to forcing',fontsize = 18)
ax.grid()
ax.set_xlim(min(years),max(years))
ax.tick_params(axis='both',labelsize=22)
ax.legend(loc=2, prop={'size': 18});

fig, ax = plt.subplots(figsize = [9,5])
plt.plot(years,np.sum(Tn, axis=1),linewidth=2,color = "black",label = 'Mode with time scale ' + str(np.round(taulist[0])) + ' years')
ax.set_xlabel('t (years)',fontsize = 18)
ax.set_ylabel('T(t) [°C]',fontsize = 18)
ax.set_title('Linear response to forcing',fontsize = 18)
ax.grid()
ax.set_xlim(min(years),max(years))
ax.tick_params(axis='both',labelsize=22)

# Compute new estimate of adjusted forcing
it = 20 # number of iterations
Fiarray = np.full((lf,it),np.nan)
Fi = F

for i in range(0,it): # iterate
    predictors = np.full((lf,dim),np.nan)

    # compute exact predictors by integrating greens function
    for k in range(0,dim):
        intgreensti = np.full((lf,lf),0.)
        # remember dot after 0 to create floating point number array instead of integer
        for t in range(0,lf):
            # compute one new contribution to the matrix:
            intgreensti[t,0] = taulist[k]*(np.exp(-t/taulist[k]) - np.exp(-(t+1)/taulist[k]))
            # take the rest from row above:
            if t > 0:
                intgreensti[t,1:(t+1)] = intgreensti[t-1,0:t]
        # compute discretized convolution integral by this matrix product:
        predictors[:,k] = intgreensti@np.array(Fi)

    Tni = amplitudes*predictors
    Fi = deltaN + Tni@fbparlist
    Fiarray[:,i] = Fi

fig, ax = plt.subplots(nrows=1,ncols=2,figsize = [16,5])
ax[0,].plot(years,F,linewidth=2,color = "black",label = "Old forcing")
for i in range(0,it-1):
    ax[0,].plot(years,Fiarray[:,i],linewidth=1,color = "gray")
ax[0,].plot(years,Fiarray[:,it-1],linewidth=1,color = "blue",label = "New forcing")
ax[0,].set_xlabel('t (years)',fontsize = 18)
ax[0,].set_ylabel('F(t) [$W/m^2$]',fontsize = 18)
ax[0,].grid()
ax[0,].set_xlim(min(years),max(years))
ax[0,].tick_params(axis='both',labelsize=18)

if model == 'GFDL-ESM2G' or model == 'GFDL-ESM2M': # linear fit for only 70 years
    # linear fit to forster forcing:
    linfitpar1 = np.polyfit(years[:70],F[:70],deg = 1)
    linfit_forcing1 = np.polyval(linfitpar1,years[:70])
    ax[0,].plot(years[:70],linfit_forcing1,'--',linewidth=1,color = "black")
    # linear fit to new forcing:
    linfitpar2 = np.polyfit(years[:70],Fiarray[:70,it-1],deg = 1)
    linfit_forcing2 = np.polyval(linfitpar2,years[:70])
    ax[0,].plot(years[:70],linfit_forcing2,'--',linewidth=1,color = "blue")
else: # linear fit for 140 years
    # linear fit to forster forcing:
    linfitpar1 = np.polyfit(years,F,deg = 1)
    linfit_forcing1 = np.polyval(linfitpar1,years)
    ax[0,].plot(years,linfit_forcing1,'--',linewidth=1,color = "black")
    # linear fit to new forcing:
    linfitpar2 = np.polyfit(years,Fiarray[:,it-1],deg = 1)
    linfit_forcing2 = np.polyval(linfitpar2,years)
    ax[0,].plot(years,linfit_forcing2,'--',linewidth=1,color = "blue")

# Estimate and print out 4xCO2 forcing from end values of linear fits:
print(linfit_forcing1[-1])
print(linfit_forcing2[-1])

# compare responses
label = 'temperature'

# plot temperature
ax[1,].plot(years,deltaT,linewidth=3,color = "black",label = model + " modelled response")
# plot response
ax[1,].plot(years,np.sum(Tn,axis=1),'--',linewidth=2,color = "black",label = "Linear response to old forcing")
ax[1,].plot(years,np.sum(Tni,axis=1),'--',linewidth=2,color = "blue",label = "Linear response to new forcing")
ax[1,].set_xlabel('t (years)',fontsize = 18)
ax[1,].set_ylabel('T(t) [°C]',fontsize = 18)
ax[1,].set_title('1% CO$_2$ ' + label,fontsize = 18)
ax[0,].set_title('1% CO$_2$ effective forcing',fontsize = 18)
ax[1,].grid()
ax[1,].set_xlim(min(years),max(years))
ax[1,].tick_params(axis='both',labelsize=18)
ax[0,].text(0,1.03,'a)',transform=ax[0,].transAxes, fontsize=20)
ax[1,].text(0,1.03,'b)',transform=ax[1,].transAxes, fontsize=20)
#plt.savefig('/Users/hege-beatefredriksen/OneDrive - UiT Office 365/Papers/Forcingpaper/Figures/' + model + '_1pctCO2_forcing_and_response.pdf', format='pdf', dpi=600, bbox_inches="tight")

if storeforcingdata == True:
    dfNF = pd.read_table(filenameNF, index_col=0); # load file
    dfNF[model] = Fiarray[:,it-1];
    dfNF.to_csv(filenameNF, sep='\t'); # save file again

# put results in pandas dataframe:
columnnames = ['4xCO2forcingest_1pctCO2', '4xCO2forcingest_1pctCO2_F13method'];

# if file is not already created, create a new file to store the results in:
filename = 'estimated_4xCO2forcing_from1pctCO2.txt'
#dataframe = pd.DataFrame([np.concatenate((linfit_forcing2[-1], linfit_forcing1[-1]), axis=None)], index = [model], columns=columnnames)
#dataframe.to_csv(filename, sep='\t')
#dataframe

# load existing dataframe, and append present result:
loaded_dataframe = pd.read_table(filename,index_col=0)
pd.set_option('display.expand_frame_repr', False)

# fill numbers into table:
if model == 'GFDL-ESM2G' or model == 'GFDL-ESM2M':
    loaded_dataframe.loc[model,columnnames] = [np.concatenate((2*linfit_forcing2[-1], 2*linfit_forcing1[-1]), axis=None)]
else:
    loaded_dataframe.loc[model,columnnames] = [np.concatenate((linfit_forcing2[-1], linfit_forcing1[-1]), axis=None)]

# write them to a file:
loaded_dataframe.to_csv(filename, sep='\t')
loaded_dataframe

timedep_fbpar1 = Tni@fbparlist/np.sum(Tni,axis=1) # two alternative definitions
timedep_fbpar2 = Tni@fbparlist/deltaT

fig, ax = plt.subplots(figsize = [9,5])
label = 'Instantaneous feedback parameter'
# plot response
ax.plot(years,timedep_fbpar1,linewidth=3,color = "black")
ax.plot(years,timedep_fbpar2,linewidth=1,color = "gray")
ax.plot(years,np.full((len(years),1),fbpar),linewidth=2,color = "green")
ax.set_xlabel('t',fontsize = 18)
ax.set_ylabel('$\lambda$ (t)',fontsize = 18)
ax.set_title(label,fontsize = 18)
ax.grid()
ax.set_xlim(min(years),max(years))
ax.set_ylim(0,3)
ax.tick_params(axis='both',labelsize=18)

fig, ax = plt.subplots(figsize = [9,5])
label = 'Instantaneous climate sensitivity parameter'
# plot response
ax.plot(years,1/timedep_fbpar1,linewidth=3,color = "black")
ax.plot(years,1/timedep_fbpar2,linewidth=1,color = "gray")
ax.plot(years,np.full((len(years),1),1/fbpar),linewidth=2,color = "green")
ax.set_xlabel('t',fontsize = 18)
ax.set_ylabel('S(t)',fontsize = 18)
ax.set_title(label,fontsize = 18)
ax.grid()
ax.set_xlim(min(years),max(years))
ax.set_ylim(0,2)
ax.tick_params(axis='both',labelsize=18)

fig, ax = plt.subplots(figsize = [9,5])
label = 'Instantaneous climate sensitivity'
# plot response
ax.plot(years,F2x/timedep_fbpar1,linewidth=3,color = "black")
ax.plot(years,F2x/timedep_fbpar2,linewidth=1,color = "gray")
ax.plot(years,np.full((len(years),1),F2x/fbpar),linewidth=2,color = "green")
ax.set_xlabel('t',fontsize = 18)
ax.set_ylabel('ECS(t)',fontsize = 18)
ax.set_title(label,fontsize = 18)
ax.grid()
ax.set_xlim(min(years),max(years))
ax.set_ylim(0,6)
ax.tick_params(axis='both',labelsize=18)
```
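The lower-triangular matrix `intgreensti` built above satisfies `intgreensti[t, j] = kernel[t-j]`, so the product with the forcing is a discretized convolution of an exponential response kernel, as the in-line comment notes. Below is a minimal sketch of the same predictor computation via `np.convolve`; it is illustrative only (not part of the original analysis) and assumes the names `taulist`, `F` and `lf` from the cell above.

```
import numpy as np

def predictor_via_convolution(F, tau, lf):
    # kernel[m] = tau*(exp(-m/tau) - exp(-(m+1)/tau)): the exponential response integrated over year m
    m = np.arange(lf)
    kernel = tau*(np.exp(-m/tau) - np.exp(-(m+1)/tau))
    # truncating the full convolution to the first lf years reproduces intgreensti @ np.array(F)
    return np.convolve(kernel, np.array(F))[:lf]
```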
# How to do video classification In this tutorial, we will show how to train a video classification model in Classy Vision. Given an input video, the video classification task is to predict the most probable class label. This is very similar to image classification, which was covered in other tutorials, but there are a few differences that make video special. As the video duration can be long, we sample short video clips of a small number of frames, use the classifier to make predictions, and finally average the clip-level predictions to get the final video-level predictions. In this tutorial we will: (1) load a video dataset; (2) configure a video model; (3) configure video meters; (4) build a task; (5) start training; Please note that these steps are being done separately in the tutorial for easy of exposition in the notebook format. As described in our [Getting started](https://classyvision.ai/tutorials/getting_started) tutorial, you can combine all configs used in this tutorial into a single config for ClassificationTask and train it using `classy_train.py`. # 1. Prepare the dataset All right! Let's start with the dataset. [UCF-101](https://www.crcv.ucf.edu/data/UCF101.php) is a canonical action recognition dataset. It has 101 action classes, and has 3 folds with different training/testing splitting . We use fold 1 in this tutorial. Classy Vision has implemented the dataset `ucf101`, which can be used to load the training and testing splits. ``` from classy_vision.dataset import build_dataset # set it to the folder where video files are saved video_dir = "[PUT YOUR VIDEO FOLDER HERE]" # set it to the folder where dataset splitting files are saved splits_dir = "[PUT THE FOLDER WHICH CONTAINS SPLITTING FILES HERE]" # set it to the file path for saving the metadata metadata_file = "[PUT THE FILE PATH OF DATASET META DATA HERE]" datasets = {} datasets["train"] = build_dataset({ "name": "ucf101", "split": "train", "batchsize_per_replica": 8, # For training, we use 8 clips in a minibatch in each model replica "use_shuffle": True, # We shuffle the clips in the training split "num_samples": 64, # We train on 16 clips in one training epoch "clips_per_video": 1, # For training, we randomly sample 1 clip from each video "frames_per_clip": 8, # The video clip contains 8 frames "video_dir": video_dir, "splits_dir": splits_dir, "metadata_file": metadata_file, "fold": 1, "transforms": { "video": [ { "name": "video_default_augment", "crop_size": 112, "size_range": [128, 160] } ] } }) datasets["test"] = build_dataset({ "name": "ucf101", "split": "test", "batchsize_per_replica": 10, # For testing, we will take 1 video once a time, and sample 10 clips per video "use_shuffle": False, # We do not shuffle clips in the testing split "num_samples": 80, # We test on 80 clips in one testing epoch "clips_per_video": 10, # We sample 10 clips per video "frames_per_clip": 8, "video_dir": video_dir, "splits_dir": splits_dir, "metadata_file": metadata_file, "fold": 1, "transforms": { "video": [ { "name": "video_default_no_augment", "size": 128 } ] } }) ``` Note we specify different transforms for training and testing split. For training split, we first randomly select a size from `size_range` [128, 160], and resize the video clip so that its short edge is equal to the random size. After that, we take a random crop of spatial size 112 x 112. We find such data augmentation helps the model generalize better, and use it as the default transform with data augmentation. 
For testing split, we resize the video clip to have short edge of size 128, and skip the random cropping to use the entire video clip. This is the default transform without data augmentation. # 2. Define a model trunk and a head Next, let's create the video model, which consists of a trunk and a head. The trunk can be viewed as a feature extractor for computing discriminative features from raw video pixels while the head is viewed as a classifier for producing the final predictions. Let's first create the trunk of architecture ResNet3D-18 by using the built-in `resnext3d` model in Classy Vision. ``` from classy_vision.models import build_model model = build_model({ "name": "resnext3d", "frames_per_clip": 8, # The number of frames we have in each video clip "input_planes": 3, # We use RGB video frames. So the input planes is 3 "clip_crop_size": 112, # We take croppings of size 112 x 112 from the video frames "skip_transformation_type": "postactivated_shortcut", # The type of skip connection in residual unit "residual_transformation_type": "basic_transformation", # The type of residual connection in residual unit "num_blocks": [2, 2, 2, 2], # The number of residual blocks in each of the 4 stages "input_key": "video", # The key used to index into the model input of dict type "stage_planes": 64, "num_classes": 101 # the number of classes }) ``` We also need to create a model head, which consists of an average pooling layer and a linear layer, by using the `fully_convolutional_linear` head. At test time, the shape (channels, frames, height, width) of input tensor is typically `(3 x 8 x 128 x 173)`. The shape of input tensor to the average pooling layer is `(2048, 1, 8, 10)`. Since we do not use a global average pooling but an average pooling layer of kernel size `(1, 7, 7)`, the pooled feature map has shape `(2048, 1, 2, 5)`. The shape of prediction tensor from the linear layer is `(1, 2, 5, 101)`, which indicates the model computes a 101-D prediction vector densely over a `2 x 5` grid. That's why we name the head as `FullyConvolutionalLinearHead` because we use the linear layer as a `1x1` convolution layer to produce spatially dense predictions. Finally, predictions over the `2 x 5` grid are averaged. ``` from classy_vision.heads import build_head from collections import defaultdict unique_id = "default_head" head = build_head({ "name": "fully_convolutional_linear", "unique_id": unique_id, "pool_size": [1, 7, 7], "num_classes": 101, "in_plane": 512 }) # In Classy Vision, the head can be attached to any residual block in the trunk. # Here we attach the head to the last block as in the standard ResNet model fork_block = "pathway0-stage4-block1" heads = defaultdict(dict) heads[fork_block][unique_id] = head model.set_heads(heads) ``` # 3. Choose the meters This is the biggest difference between video and image classification. For images we used `AccuracyMeter` to measure top-1 and top-5 accuracy. For videos you can also use both `AccuracyMeter` and `VideoAccuracyMeter`, but they behave differently: * `AccuracyMeter` takes one clip-level prediction and compare it with groundtruth video label. It reports the clip-level accuracy. * `VideoAccuracyMeter` takes multiple clip-level predictions from the same video, averages them and compares that with groundtruth video label. It reports the video-level accuracy which is usually higher than clip-level accuracy. Both meters report top-1 and top-5 accuracy. 
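To make the clip-level vs. video-level distinction concrete, here is a toy sketch in plain numpy (not the Classy Vision meters API; the scores and label below are invented) for one video with 10 sampled clips: averaging the clip predictions before taking the argmax smooths out individual bad clips, which is why video-level accuracy is usually higher.

```
import numpy as np

clip_scores = np.random.rand(10, 101)   # hypothetical per-clip class scores: 10 clips x 101 classes
video_label = 7                         # hypothetical groundtruth label for this video

# Clip-level (AccuracyMeter-style): compare each clip prediction with the video label
clip_level_acc = np.mean(np.argmax(clip_scores, axis=1) == video_label)

# Video-level (VideoAccuracyMeter-style): average the clip predictions first, then compare
video_level_acc = float(np.argmax(clip_scores.mean(axis=0)) == video_label)
```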
``` from classy_vision.meters import build_meters, AccuracyMeter, VideoAccuracyMeter meters = build_meters({ "accuracy": { "topk": [1, 5] }, "video_accuracy": { "topk": [1, 5], "clips_per_video_train": 1, "clips_per_video_test": 10 } }) ``` # 4. Build a task Great! we have defined the minimal set of components necessary for video classification, including dataset, model, loss function, meters and optimizer. We proceed to define a video classification task, and populate it with all the components. ``` from classy_vision.tasks import ClassificationTask from classy_vision.optim import build_optimizer from classy_vision.losses import build_loss loss = build_loss({"name": "CrossEntropyLoss"}) optimizer = build_optimizer({ "name": "sgd", "lr": { "name": "multistep", "values": [0.005, 0.0005], "milestones": [1] }, "num_epochs": 2, "weight_decay": 0.0001, "momentum": 0.9 }) num_epochs = 2 task = ( ClassificationTask() .set_num_epochs(num_epochs) .set_loss(loss) .set_model(model) .set_optimizer(optimizer) .set_meters(meters) ) for phase in ["train", "test"]: task.set_dataset(datasets[phase], phase) ``` # 5. Start training After creating a task, you can simply pass that to a Trainer to start training. Here we will train on a single node and configure logging and checkpoints for training: ``` import time import os from classy_vision.trainer import LocalTrainer from classy_vision.hooks import CheckpointHook from classy_vision.hooks import LossLrMeterLoggingHook hooks = [LossLrMeterLoggingHook(log_freq=4)] checkpoint_dir = f"/tmp/classy_checkpoint_{time.time()}" os.mkdir(checkpoint_dir) hooks.append(CheckpointHook(checkpoint_dir, input_args={})) task = task.set_hooks(hooks) trainer = LocalTrainer() trainer.train(task) ``` As the training progresses, you should see `LossLrMeterLoggingHook` printing the loss, learning rate and meter metrics. Checkpoints will be available in the folder created above. ## 6. Conclusion Video classification is very similar to image classification in Classy Vision, you just need to use an appropriate dataset, model and meters. This tutorial glossed over many details about training, please take a look at our [Getting started](https://classyvision.ai/tutorials/getting_started) tutorial to learn more. Refer to our API reference for more details about [ResNeXt3D](https://classyvision.ai/api/models.html#classy_vision.models.ResNeXt3D) models, [UCF101](https://classyvision.ai/api/dataset.html#classy_vision.dataset.UCF101Dataset) dataset and [VideoAccuracy](http://classyvision.ai/api/meters.html#classy_vision.meters.VideoAccuracyMeter) meters.
# Bayesian Parametric Regression Notebook version: 1.5 (Sep 24, 2019) Author: Jerónimo Arenas García ([email protected]) Jesús Cid-Sueiro ([email protected]) Changes: v.1.0 - First version v.1.1 - ML Model selection included v.1.2 - Some typos corrected v.1.3 - Rewriting text, reorganizing content, some exercises. v.1.4 - Revised introduction v.1.5 - Revised notation. Solved exercise 5 Pending changes: * Include regression on the stock data ``` # Import some libraries that will be necessary for working with data and displaying plots # To visualize plots in the notebook %matplotlib inline from IPython import display import matplotlib import matplotlib.pyplot as plt import numpy as np import scipy.io # To read matlab files import pylab import time ``` ## A quick note on the mathematical notation In this notebook we will make extensive use of probability distributions. In general, we will use capital letters ${\bf X}$, $S$, $E$ ..., to denote random variables, and lower-case letters ${\bf x}$, $s$, $\epsilon$ ..., to denote the values they can take. In general, we will use letter $p$ for probability density functions (pdf). When necessary, we will use, capital subindices to make the random variable explicit. For instance, $p_{{\bf X}, S}({\bf x}, s)$ would be the joint pdf of random variables ${\bf X}$ and $S$ at values ${\bf x}$ and $s$, respectively. However, to avoid a notation overload, we will omit subindices when they are clear from the context. For instance, we will use $p({\bf x}, s)$ instead of $p_{{\bf X}, S}({\bf x}, s)$. ## 1. Model-based parametric regression ### 1.1. The regression problem. Given an observation vector ${\bf x}$, the goal of the regression problem is to find a function $f({\bf x})$ providing *good* predictions about some unknown variable $s$. To do so, we assume that a set of *labelled* training examples, $\{{\bf x}_k, s_k\}_{k=0}^{K-1}$ is available. The predictor function should make good predictions for new observations ${\bf x}$ not used during training. In practice, this is tested using a second set (the *test set*) of labelled samples. ### 1.2. Model-based parametric regression Model-based regression methods assume that all data in the training and test dataset have been generated by some stochastic process. In parametric regression, we assume that the probability distribution generating the data has a known parametric form, but the values of some parameters are unknown. In particular, in this notebook we will assume the target variables in all pairs $({\bf x}_k, s_k)$ from the training and test sets have been generated independently from some posterior distribution $p(s| {\bf x}, {\bf w})$, were ${\bf w}$ is some unknown parameter. The training dataset is used to estimate ${\bf w}$. <img src="figs/ParametricReg.png" width=300> ### 1.3. Model assumptions In order to estimate ${\bf w}$ from the training data in a mathematicaly rigorous and compact form let us group the target variables into a vector $$ {\bf s} = \left(s_0, \dots, s_{K-1}\right)^\top $$ and the input vectors into a matrix $$ {\bf X} = \left({\bf x}_0, \dots, {\bf x}_{K-1}\right)^\top $$ We will make the following assumptions: * A1. All samples in ${\cal D}$ have been generated by the same distribution, $p({\bf x}, s \mid {\bf w})$ * A2. Input variables ${\bf x}$ do not depend on ${\bf w}$. This implies that $$ p({\bf X} \mid {\bf w}) = p({\bf X}) $$ * A3. 
Targets $s_0, \dots, s_{K-1}$ are statistically independent, given ${\bf w}$ and the inputs ${\bf x}_0,\ldots, {\bf x}_{K-1}$, that is: $$ p({\bf s} \mid {\bf X}, {\bf w}) = \prod_{k=0}^{K-1} p(s_k \mid {\bf x}_k, {\bf w}) $$ ## 2. Bayesian inference. ### 2.1. The Bayesian approach The main idea of Bayesian inference is the following: assume we want to estimate some unknown variable $U$ given an observed variable $O$. If $U$ and $O$ are random variables, we can describe the relation between $U$ and $O$ through the following functions: * **Prior distribution**: $p_U(u)$. It describes our uncertainty on the true value of $U$ before observing $O$. * **Likelihood function**: $p_{O \mid U}(o \mid u)$. It describes how the value of the observation is generated for a given value of $U$. * **Posterior distribution**: $p_{U|O}(u \mid o)$. It describes our uncertainty on the true value of $U$ once the true value of $O$ is observed. The major component of the Bayesian inference is the posterior distribution. All Bayesian estimates are computed as some of its central statistics (e.g. the mean, the median or the mode), for instance * **Maximum A Posteriori (MAP) estimate**: $\qquad{\widehat{u}}_{\text{MAP}} = \arg\max_u p_{U \mid O}(u \mid o)$ * **Minimum Mean Square Error (MSE) estimate**: $\qquad\widehat{u}_{\text{MSE}} = \mathbb{E}\{U \mid O=o\}$ The choice between the MAP or the MSE estimate may depend on practical or computational considerations. From a theoretical point of view, $\widehat{u}_{\text{MSE}}$ has some nice properties: it minimizes $\mathbb{E}\{(U-\widehat{u})^2\}$ among all possible estimates, $\widehat{u}$, so it is a natural choice. However, it involves the computation of an integral, which may not have a closed-form solution. In such cases, the MAP estimate can be a better choice. The prior and the likelihood function are auxiliary distributions: if the posterior distribution is unknown, it can be computed from them using the Bayes rule: \begin{equation} p_{U|O}(u \mid o) = \frac{p_{O|U}(o \mid u) \cdot p_{U}(u)}{p_{O}(o)} \end{equation} In the next two sections we show that the Bayesian approach can be applied to both the prediction and the estimation problems. ### 2.2. Bayesian prediction under a known model Assuming that the model parameters ${\bf w}$ are known, we can apply the Bayesian approach to predict ${\bf s}$ for an input ${\bf x}$. In that case, we can take * Unknown variable: ${\bf s}$, and * Observations: ${\bf x}$ the MAP and MSE predictions become * Maximum A Posterior (MAP): $\qquad\widehat{s}_{\text{MAP}} = \arg\max_s p(s| {\bf x}, {\bf w})$ * Minimum Mean Square Error (MSE): $\qquad\widehat{s}_{\text{MSE}} = \mathbb{E}\{S |{\bf x}, {\bf w}\}$ #### Exercise 1: Assuming $$ p(s\mid x, w) = \frac{s}{w x^2} \exp\left({-\frac{s^2}{2 w x^2}}\right), \qquad s \geq 0, $$ compute the MAP and MSE predictions of $s$ given $x$ and $w$. #### Solution: <SOL> \begin{align} \widehat{s}_\text{MAP} &= \arg\max_s \left\{\frac{s}{w x^2} \exp\left({-\frac{s^2}{2 w x^2}}\right) \right\} \\ &= \arg\max_s \left\{\log(s) - \log(w x^2) -\frac{s^2}{2 w x^2} \right\} \\ &= \sqrt{w}x \end{align} where the last step results from maximizing by differentiation. 
\begin{align} \widehat{s}_\text{MSE} &= \mathbb{E}\{s | x, w\} \\ &= \int_0^\infty \frac{s^2}{w x^2} \exp\left({-\frac{s^2}{2 w x^2}}\right) \\ &= \frac{1}{2} \int_{-\infty}^\infty \frac{s^2}{w x^2} \exp\left({-\frac{s^2}{2 w x^2}}\right) \\ &= \frac{\sqrt{2\pi}}{2\sqrt{w x^2}} \int_{-\infty}^\infty \frac{s^2}{\sqrt{2\pi w x^2}} \exp\left({-\frac{s^2}{2 w x^2}}\right) \end{align} Noting that the last integral corresponds to the variance of a zero-mean Gaussian distribution, we get \begin{align} \widehat{s}_\text{MSE} &= \frac{\sqrt{2\pi}}{2\sqrt{w x^2}} w x^2 \\ &= \sqrt{\frac{\pi w}{2}}x \end{align} </SOL> #### 2.2.1. The Gaussian case A particularly interesting case arises when the data model is Gaussian: $$p(s|{\bf x}, {\bf w}) = \frac{1}{\sqrt{2\pi}\sigma_\varepsilon} \exp\left(-\frac{(s-{\bf w}^\top{\bf z})^2}{2\sigma_\varepsilon^2}\right) $$ where ${\bf z}=T({\bf x})$ is a vector with components which can be computed directly from the observed variables. For a Gaussian distribution (and for any unimodal symetric distributions) the mean and the mode are the same and, thus, $$ \widehat{s}_\text{MAP} = \widehat{s}_\text{MSE} = {\bf w}^\top{\bf z} $$ Such expression includes a linear regression model, where ${\bf z} = [1; {\bf x}]$, as well as any other non-linear model as long as it can be expressed as a <i>"linear in the parameters"</i> model. ### 2.3. Bayesian Inference for Parameter Estimation In a similar way, we can apply Bayesian inference to estimate the model parameters ${\bf w}$ from a given dataset, $\cal{D}$. In that case * the unknown variable is ${\bf w}$, and * the observation is $\cal{D} \equiv \{{\bf X}, {\bf s}\}$ so that * Maximum A Posterior (MAP): $\qquad\widehat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p({\bf w}| {\cal D})$ * Minimum Mean Square Error (MSE): $\qquad\widehat{\bf w}_{\text{MSE}} = \mathbb{E}\{{\bf W} | {\cal D}\}$ ## 3. Bayesian parameter estimation NOTE: Since the training data inputs are known, all probability density functions and expectations in the remainder of this notebook will be conditioned on the data matrix, ${\bf X}$. To simplify the mathematical notation, from now on we will remove ${\bf X}$ from all conditions. For instance, we will write $p({\bf s}|{\bf w})$ instead of $p({\bf s}|{\bf w}, {\bf X})$, etc. Keep in mind that, in any case, all probabilities and expectations may depend on ${\bf X}$ implicitely. Summarizing, the steps to design a Bayesian parametric regresion algorithm are the following: 1. Assume a parametric data model $p(s| {\bf x},{\bf w})$ and a prior distribution $p({\bf w})$. 2. Using the data model and the i.i.d. assumption, compute $p({\bf s}|{\bf w})$. 3. Applying the bayes rule, compute the posterior distribution $p({\bf w}|{\bf s})$. 4. Compute the MAP or the MSE estimate of ${\bf w}$ given ${\bf x}$. 5. Compute predictions using the selected estimate. ### 3.1. Bayesian Inference and Maximum Likelihood. Applying the Bayes rule the MAP estimate can be alternatively expressed as \begin{align} \qquad\widehat{\bf w}_{\text{MAP}} &= \arg\max_{\bf w} \frac{p({\cal D}| {\bf w}) \cdot p({\bf w})}{p({\cal D})} \\ &= \arg\max_{\bf w} p({\cal D}| {\bf w}) \cdot p({\bf w}) \end{align} By comparisons, the ML (Maximum Likelihood) estimate has the form: $$ \widehat{\bf w}_{\text{ML}} = \arg \max_{\bf w} p(\mathcal{D}|{\bf w}) $$ This shows that the MAP estimate takes into account the prior distribution on the unknown parameter. 
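To make this comparison concrete, here is a minimal numpy sketch for the Gaussian model treated later in this notebook (hypothetical inputs `Z`, `s`, `sigma_eps`, `V_p`; the closed-form expressions are the ones derived in Sections 4.3 and 5.1 below). The ML estimate depends on the data alone, whereas the Bayesian posterior mean also involves the prior covariance:

```
import numpy as np

def w_ML(Z, s):
    # Maximum likelihood / least squares estimate: (Z^T Z)^{-1} Z^T s
    return np.linalg.solve(Z.T @ Z, Z.T @ s)

def w_MSE(Z, s, sigma_eps, V_p):
    # Posterior mean for the Gaussian model: sigma_eps^{-2} V_w Z^T s,
    # with V_w = [sigma_eps^{-2} Z^T Z + V_p^{-1}]^{-1}
    V_w = np.linalg.inv(Z.T @ Z / sigma_eps**2 + np.linalg.inv(V_p))
    return V_w @ Z.T @ s / sigma_eps**2
```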
Another advantage of the Bayesian approach is that it provides not only a point estimate of the unknown parameter, but a whole funtion, the posterior distribution, which encompasses our belief on the unknown parameter given the data. For instance, we can take second order statistics like the variance of the posterior distributions to measure the uncertainty on the true value of the parameter around the mean. ### 3.2. The prior distribution Since each value of ${\bf w}$ determines a regression function, by stating a prior distribution over the weights we state also a prior distribution over the space of regression functions. For instance, assume that the data likelihood follows the Gaussian model in sec. 2.2.1, with $T(x) = (1, x, x^2, x^3)$, i.e. the regression functions have the form $$ w_0 + w_1 x + w_2 x^2 + w_3 x^3 $$ Each value of ${\bf w}$ determines a specific polynomial of degree 3. Thus, the prior distribution over ${\bf w}$ describes which polynomials are more likely before observing the data. For instance, assume a Gaussian prior with zero mean and variance ${\bf V}_p$, i.e., $$ p({\bf w}) = \frac{1}{(2\pi)^{D/2} |{\bf V}_p|^{1/2}} \exp \left(-\frac{1}{2} {\bf w}^\intercal {\bf V}_{p}^{-1}{\bf w} \right) $$ where $D$ is the dimension of ${\bf w}$. To abbreviate, we will also express this as $${\bf w} \sim {\cal N}\left({\bf 0},{\bf V}_{p} \right)$$ The following code samples ${\bf w}$ according to this distribution for ${\bf V}_p = 0.002 \, {\bf I}$, and plots the resulting polynomial over the scatter plot of an arbitrary dataset. You can check the effect of modifying the variance of the prior distribution. ``` n_grid = 200 degree = 3 nplots = 20 # Prior distribution parameters mean_w = np.zeros((degree+1,)) v_p = 0.2 ### Try increasing this value var_w = v_p * np.eye(degree+1) xmin = -1 xmax = 1 X_grid = np.linspace(xmin, xmax, n_grid) fig = plt.figure() ax = fig.add_subplot(111) for k in range(nplots): #Draw weigths fromt the prior distribution w_iter = np.random.multivariate_normal(mean_w, var_w) S_grid_iter = np.polyval(w_iter, X_grid) ax.plot(X_grid, S_grid_iter,'g-') ax.set_xlim(xmin, xmax) ax.set_ylim(-1, 1) ax.set_xlabel('$x$') ax.set_ylabel('$s$') plt.show() ``` The data observation will modify our belief about the true data model according to the posterior distribution. In the following we will analyze this in a Gaussian case. ## 4. Bayesian regression for a Gaussian model. We will apply the above steps to derive a Bayesian regression algorithm for a Gaussian model. ### 4.1. Step 1: The Gaussian model. Let us assume that the likelihood function is given by the Gaussian model described in Sec. 1.3.2. $$ s~|~{\bf w} \sim {\cal N}\left({\bf z}^\top{\bf w}, \sigma_\varepsilon^2 \right) $$ that is $$p(s|{\bf x}, {\bf w}) = \frac{1}{\sqrt{2\pi}\sigma_\varepsilon} \exp\left(-\frac{(s-{\bf w}^\top{\bf z})^2}{2\sigma_\varepsilon^2}\right) $$ Assume, also, that the prior is Gaussian $$ {\bf w} \sim {\cal N}\left({\bf 0},{\bf V}_{p} \right) $$ ### 4.2. Step 2: Complete data likelihood Using the assumptions A1, A2 and A3, it can be shown that $$ {\bf s}~|~{\bf w} \sim {\cal N}\left({\bf Z}{\bf w},\sigma_\varepsilon^2 {\bf I} \right) $$ that is $$ p({\bf s}| {\bf w}) = \frac{1}{\left(\sqrt{2\pi}\sigma_\varepsilon\right)^K} \exp\left(-\frac{1}{2\sigma_\varepsilon^2}\|{\bf s}-{\bf Z}{\bf w}\|^2\right) $$ ### 4.3. 
Step 3: Posterior weight distribution The posterior distribution of the weights can be computed using the Bayes rule $$p({\bf w}|{\bf s}) = \frac{p({\bf s}|{\bf w})~p({\bf w})}{p({\bf s})}$$ Since both $p({\bf s}|{\bf w})$ and $p({\bf w})$ follow a Gaussian distribution, we know also that the joint distribution and the posterior distribution of ${\bf w}$ given ${\bf s}$ are also Gaussian. Therefore, $${\bf w}~|~{\bf s} \sim {\cal N}\left({\bf w}_\text{MSE}, {\bf V}_{\bf w}\right)$$ After some algebra, it can be shown that mean and the covariance matrix of the distribution are: $${\bf V}_{\bf w} = \left[\frac{1}{\sigma_\varepsilon^2} {\bf Z}^{\top}{\bf Z} + {\bf V}_p^{-1}\right]^{-1}$$ $${\bf w}_\text{MSE} = {\sigma_\varepsilon^{-2}} {\bf V}_{\bf w} {\bf Z}^\top {\bf s}$$ #### Exercise 2: Consider the dataset with one-dimensional inputs given by ``` # True data parameters w_true = 3 std_n = 0.4 # Generate the whole dataset n_max = 64 X_tr = 3 * np.random.random((n_max,1)) - 0.5 S_tr = w_true * X_tr + std_n * np.random.randn(n_max,1) # Plot data plt.figure() plt.plot(X_tr, S_tr, 'b.') plt.xlabel('$x$') plt.ylabel('$s$') plt.show() ``` Fit a Bayesian linear regression model assuming $z= x$ and ``` # Model parameters sigma_eps = 0.4 mean_w = np.zeros((1,)) sigma_p = 1e6 Var_p = sigma_p**2* np.eye(1) ``` To do so, compute the posterior weight distribution using the first $k$ samples in the complete dataset, for $k = 1,2,4,8,\ldots 128$. Draw all these posteriors along with the prior distribution in the same plot. ``` # No. of points to analyze n_points = [1, 2, 4, 8, 16, 32, 64] # Prepare plots w_grid = np.linspace(2.7, 3.4, 5000) # Sample the w axis plt.figure() # Compute the prior distribution over the grid points in w_grid # p = <FILL IN> p = 1.0/(sigma_p*np.sqrt(2*np.pi)) * np.exp(-(w_grid**2)/(2*sigma_p**2)) plt.plot(w_grid, p,'g-') for k in n_points: # Select the first k samples Zk = X_tr[0:k, :] Sk = S_tr[0:k] # Parameters of the posterior distribution # 1. Compute the posterior variance. # (Make sure that the resulting variable, Var_w, is a 1x1 numpy array.) # Var_w = <FILL IN> Var_w = np.linalg.inv(np.dot(Zk.T, Zk)/(sigma_eps**2) + np.linalg.inv(Var_p)) # 2. Compute the posterior mean. # (Make sure that the resulting variable, w_MSE, is a scalar) # w_MSE = <FILL IN> w_MSE = (Var_w.dot(Zk.T).dot(Sk)/(sigma_eps**2)).flatten() # Compute the posterior distribution over the grid points in w_grid sigma_w = np.sqrt(Var_w.flatten()) # First we take a scalar standard deviation # p = <FILL IN> p = 1.0/(sigma_w*np.sqrt(2*np.pi)) * np.exp(-((w_grid-w_MSE)**2)/(2*sigma_w**2)) plt.plot(w_grid, p,'g-') plt.fill_between(w_grid, 0, p, alpha=0.8, edgecolor='#1B2ACC', facecolor='#089FFF', linewidth=1, antialiased=True) plt.title('Posterior distribution after {} samples'.format(k)) plt.xlim(w_grid[0], w_grid[-1]) plt.ylim(0, np.max(p)) plt.xlabel('$w$') plt.ylabel('$p(w|s)$') display.clear_output(wait=True) display.display(plt.gcf()) time.sleep(2.0) # Remove the temporary plots and fix the last one display.clear_output(wait=True) plt.show() ``` #### Exercise 3: Note that, in the example above, the model assumptions are correct: the target variables have been generated by a linear model with noise standard deviation `sigma_n` which is exactly equal to the value assumed by the model, stored in variable `sigma_eps`. Check what happens if we take `sigma_eps=4*sigma_n` or `sigma_eps=sigma_n/4`. * Does the algorithm fail in that cases? 
* What differences can you observe with respect to the ideal case `sigma_eps=sigma_n`? ### 4.4. Step 4: Weight estimation. Since the posterior weight distribution is Gaussian, both the MAP and the MSE estimates are equal to the posterior mean, which has been already computed in step 3: $$\widehat{\bf w}_\text{MAP} = \widehat{\bf w}_\text{MSE} = {\sigma_\varepsilon^{-2}} {\bf V}_{\bf w} {\bf Z}^\top {\bf s}$$ ### 4.5. Step 5: Prediction Using the MSE estimate, the final predictions are given by $$ \widehat{s}_\text{MSE} = \widehat{\bf w}_\text{MSE}^\top{\bf z} $$ #### Exercise 4: Plot the minimum MSE predictions of $s$ for inputs $x$ in the interval [-1, 3]. ``` # <SOL> x = np.array([-1.0, 3.0]) s_pred = w_MSE * x plt.figure() plt.plot(X_tr, S_tr,'b.') plt.plot(x, s_pred) plt.show() # </SOL> ``` ## 5. Maximum likelihood vs Bayesian Inference. ### 5.1. The Maximum Likelihood Estimate. For comparative purposes, it is interesting to see here that the likelihood function is enough to compute the Maximum Likelihood (ML) estimate \begin{align} {\bf w}_\text{ML} &= \arg \max_{\bf w} p(\mathcal{D}|{\bf w}) \\ &= \arg \min_{\bf w} \|{\bf s}-{\bf Z}{\bf w}\|^2 \end{align} which leads to the Least Squares (LS) solution $$ {\bf w}_\text{ML} = ({\bf Z}^\top{\bf Z})^{-1}{\bf Z}^\top{\bf s} $$ ML estimation is prone to overfiting. In general, if the number of parameters (i.e. the dimension of ${\bf w}$) is large in relation to the size of the training data, the predictor based on the ML estimate may have a small square error over the training set but a large error over the test set. Therefore, in practice, some cross validation procedure is required to keep the complexity of the predictor function under control depending on the size of the training set. By defining a prior distribution over the unknown parameters, and using the Bayesian inference methods, the overfitting problems can be alleviated ### 5.2 Making predictions - Following an **ML approach**, we retain a single model, ${\bf w}_{ML} = \arg \max_{\bf w} p({\bf s}|{\bf w})$. Then, the predictive distribution of the target value for a new point would be obtained as: $$p({s^*}|{\bf w}_{ML},{\bf x}^*) $$ For the generative model of Section 3.1.2 (additive i.i.d. Gaussian noise), this distribution is: $$p({s^*}|{\bf w}_{ML},{\bf x}^*) = \frac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left(-\frac{\left(s^* - {\bf w}_{ML}^\top {\bf z}^*\right)^2}{2 \sigma_\varepsilon^2} \right)$$ * The mean of $s^*$ is just the same as the prediction of the LS model, and the same uncertainty is assumed independently of the observation vector (i.e., the variance of the noise of the model). * If a single value is to be kept, we would probably keep the mean of the distribution, which is equivalent to the LS prediction. - Using <b>Bayesian inference</b>, we retain all models. Then, the inference of the value $s^* = s({\bf x}^*)$ is carried out by mixing all models, according to the weights given by the posterior distribution. \begin{align} p({s^*}|{\bf x}^*,{\bf s}) & = \int p({s^*}~|~{\bf w},{\bf x}^*) p({\bf w}~|~{\bf s}) d{\bf w} \end{align} where: * $p({s^*}|{\bf w},{\bf x}^*) = \dfrac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left(-\frac{\left(s^* - {\bf w}^\top {\bf z}^*\right)^2}{2 \sigma_\varepsilon^2} \right)$ * $p({\bf w} \mid {\bf s})$ is the posterior distribution of the weights, that can be computed using Bayes' Theorem. In general the integral expression of the posterior distribution $p({s^*}|{\bf x}^*,{\bf s})$ cannot be computed analytically. 
Fortunately, for the Gaussian model, the computation of the posterior is simple, as we will show in the following section. ## 6. Posterior distribution of the target variable In the same way that we have computed a distribution on ${\bf w}$, we can compute a distribution on the target variable for a given input ${\bf x}$ and given the whole dataset. Since ${\bf w}$ is a random variable, the noise-free component of the target variable for an arbitrary input ${\bf x}$, that is, $f = f({\bf x}) = {\bf w}^\top{\bf z}$ is also a random variable, and we can compute its distribution from the posterior distribution of ${\bf w}$ Since ${\bf w}$ is Gaussian and $f$ is a linear transformation of ${\bf w}$, $f$ is also a Gaussian random variable, whose posterior mean and variance can be calculated as follows: \begin{align} \mathbb{E}\{f \mid {\bf s}, {\bf z}\} &= \mathbb{E}\{{\bf w}^\top {\bf z}~|~{\bf s}, {\bf z}\} = \mathbb{E}\{{\bf w} ~|~{\bf s}, {\bf z}\}^\top {\bf z} \\ &= \widehat{\bf w}_\text{MSE} ^\top {\bf z} \\ % &= {\sigma_\varepsilon^{-2}} {{\bf z}}^\top {\bf V}_{\bf w} {\bf Z}^\top {\bf s} \end{align} \begin{align} \text{Cov}\left[{{\bf z}}^\top {\bf w}~|~{\bf s}, {\bf z}\right] &= {\bf z}^\top \text{Cov}\left[{\bf w}~|~{\bf s}\right] {\bf z} \\ &= {\bf z}^\top {\bf V}_{\bf w} {{\bf z}} \end{align} Therefore, $$ f^*~|~{\bf s}, {\bf x} \sim {\cal N}\left(\widehat{\bf w}_\text{MSE} ^\top {\bf z}, ~~ {\bf z}^\top {\bf V}_{\bf w} {\bf z} \right) $$ Finally, for $s = f + \varepsilon$, the posterior distribution is $$ s ~|~{\bf s}, {\bf z}^* \sim {\cal N}\left(\widehat{\bf w}_\text{MSE} ^\top {\bf z}, ~~ {\bf z}^\top {\bf V}_{\bf w} {\bf z} + \sigma_\varepsilon^2\right) $$ #### Example: The next figure shows a one-dimensional dataset with 15 points, which are noisy samples from a cosine signal (shown in the dotted curve) ``` n_points = 15 n_grid = 200 frec = 3 std_n = 0.2 # Data generation X_tr = 3 * np.random.random((n_points,1)) - 0.5 S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1) # Signal xmin = np.min(X_tr) - 0.1 xmax = np.max(X_tr) + 0.1 X_grid = np.linspace(xmin, xmax, n_grid) S_grid = - np.cos(frec*X_grid) #Noise free for the true model # Compute matrix with training input data for the polynomial model Z = [] for x_val in X_tr.tolist(): Z.append([x_val[0]**k for k in range(degree+1)]) Z = np.asmatrix(Z) # Plot data fig = plt.figure() ax = fig.add_subplot(111) ax.plot(X_tr,S_tr,'b.',markersize=10) # Plot noise-free function ax.plot(X_grid, S_grid, 'b:', label='Noise-free signal') # Set axes ax.set_xlim(xmin, xmax) ax.set_ylim(S_tr[0] - 2, S_tr[-1] + 2) ax.legend(loc='best') plt.show() ``` Let us assume that the cosine form of the noise-free signal is unknown, and we assume a polynomial model with a high degree. The following code plots the LS estimate ``` degree = 12 # We plot also the least square solution w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree) S_grid_LS = np.polyval(w_LS,X_grid) # Plot data fig = plt.figure() ax = fig.add_subplot(111) ax.plot(X_tr,S_tr,'b.',markersize=10) # Plot noise-free function ax.plot(X_grid, S_grid, 'b:', label='Noise-free signal') # Plot LS regression function ax.plot(X_grid, S_grid_LS, 'm-', label='LS regression') # Set axis ax.set_xlim(xmin, xmax) ax.set_ylim(S_tr[0] - 2, S_tr[-1] + 2) ax.legend(loc='best') plt.show() ``` The following fragment of code computes the posterior weight distribution, draws random vectors from $p({\bf w}|{\bf s})$, and plots the corresponding regression curves along with the training points. 
Compare these curves with those extracted from the prior distribution of ${\bf w}$ and with the LS solution. ``` nplots = 6 # Prior distribution parameters sigma_eps = 0.2 mean_w = np.zeros((degree+1,)) sigma_p = .5 Var_p = sigma_p**2 * np.eye(degree+1) # Compute matrix with training input data for the polynomial model Z = [] for x_val in X_tr.tolist(): Z.append([x_val[0]**k for k in range(degree+1)]) Z = np.asmatrix(Z) #Compute posterior distribution parameters Var_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(Var_p)) posterior_mean = Var_w.dot(Z.T).dot(S_tr)/(sigma_eps**2) posterior_mean = np.array(posterior_mean).flatten() # Plot data fig = plt.figure() ax = fig.add_subplot(111) ax.plot(X_tr,S_tr,'b.',markersize=10) # Plot noise-free function ax.plot(X_grid, S_grid, 'b:', label='Noise-free signal') # Plot LS regression function ax.plot(X_grid, S_grid_LS, 'm-', label='LS regression') for k in range(nplots): # Draw weights from the posterior distribution w_iter = np.random.multivariate_normal(posterior_mean, Var_w) # Note that polyval assumes the first element of weight vector is the coefficient of # the highest degree term. Thus, we need to reverse w_iter S_grid_iter = np.polyval(w_iter[::-1], X_grid) ax.plot(X_grid,S_grid_iter,'g-') # Set axis ax.set_xlim(xmin, xmax) ax.set_ylim(S_tr[0] - 2, S_tr[-1] + 2) ax.legend(loc='best') plt.show() ``` Not only do we obtain a better predictive model, but we also have confidence intervals (error bars) for the predictions. ``` # Compute standard deviation std_x = [] for el in X_grid: x_ast = np.array([el**k for k in range(degree+1)]) std_x.append(np.sqrt(x_ast.dot(Var_w).dot(x_ast)[0,0])) std_x = np.array(std_x) # Plot data fig = plt.figure(figsize=(10,6)) ax = fig.add_subplot(111) ax.plot(X_tr,S_tr,'b.',markersize=10) # Plot the posterior mean # Note that polyval assumes the first element of weight vector is the coefficient of # the highest degree term. Thus, we need to reverse w_iter S_grid_iter = np.polyval(posterior_mean[::-1],X_grid) ax.plot(X_grid,S_grid_iter,'g-',label='Predictive mean, BI') #Plot confidence intervals for the Bayesian Inference plt.fill_between(X_grid, S_grid_iter-std_x, S_grid_iter+std_x, alpha=0.4, edgecolor='#1B2ACC', facecolor='#089FFF', linewidth=2, antialiased=True) #We plot also the least square solution w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree) S_grid_iter = np.polyval(w_LS,X_grid) # Plot noise-free function ax.plot(X_grid, S_grid, 'b:', label='Noise-free signal') # Plot LS regression function ax.plot(X_grid, S_grid_LS, 'm-', label='LS regression') # Set axis ax.set_xlim(xmin, xmax) ax.set_ylim(S_tr[0]-2,S_tr[-1]+2) ax.set_title('Predicting the target variable') ax.set_xlabel('Input variable') ax.set_ylabel('Target variable') ax.legend(loc='best') plt.show() ``` #### Exercise 5: Assume the dataset ${\cal{D}} = \left\{ x_k, s_k \right\}_{k=0}^{K-1}$ containing $K$ i.i.d. samples from a distribution $$p(s|x,w) = w x \exp(-w x s), \qquad s>0,\quad x> 0,\quad w> 0$$ We model also our uncertainty about the value of $w$ assuming a prior distribution for $w$ following a Gamma distribution with parameters $\alpha>0$ and $\beta>0$. 
$$ w \sim \text{Gamma}\left(\alpha, \beta \right) = \frac{\beta^\alpha}{\Gamma(\alpha)} w^{\alpha-1} \exp\left(-\beta w\right), \qquad w>0 $$ Note that the mean and the mode of a Gamma distribution can be calculated in closed-form as $$ \mathbb{E}\left\{w\right\}=\frac{\alpha}{\beta}; \qquad $$ $$ \text{mode}\{w\} = \arg\max_w p(w) = \frac{\alpha-1}{\beta} $$ **1.** Determine an expression for the likelihood function. #### Solution: [comment]: # (<SOL>) \begin{align} p({\bf s}| w) &= \prod_{k=0}^{K-1} p(s_k|w, x_k) = \prod_{k=0}^{K-1} \left(w x_k \exp(-w x_k s_k)\right) \nonumber\\ &= w^K \cdot \left(\prod_{k=0}^{K-1} x_k \right) \exp\left( -w \sum_{k=0}^{K-1} x_k s_k\right) \end{align} [comment]: # (</SOL>) **2.** Determine the maximum likelihood coefficient, $\widehat{w}_{\text{ML}}$. #### Solution: [comment]: # (<SOL>) \begin{align} \widehat{w}_{\text{ML}} &= \arg\max_w w^K \cdot \left(\prod_{k=0}^{K-1} x_k \right) \exp\left( -w \sum_{k=0}^{K-1} x_k s_k\right) \\ &= \arg\max_w \left(w^K \cdot \exp\left( -w \sum_{k=0}^{K-1} x_k s_k\right)\right) \\ &= \arg\max_w \left(K \log(w) - w \sum_{k=0}^{K-1} x_k s_k \right) \\ &= \frac{K}{\sum_{k=0}^{K-1} x_k s_k} \end{align} [comment]: # (</SOL>) **3.** Obtain the posterior distribution $p(w|{\bf s})$. Note that you do not need to calculate $p({\bf s})$ since the posterior distribution can be readily identified as another Gamma distribution. #### Solution: [comment]: # (<SOL>) \begin{align} p(w|{\bf s}) &= \frac{p({\bf s}|w) p(w)}{p(s)} \\ &= \frac{1}{p(s)} \left(w^K \cdot \left(\prod_{k=0}^{K-1} x_k \right) \exp\left( -w \sum_{k=0}^{K-1} x_k s_k\right) \right) \left(\frac{\beta^\alpha}{\Gamma(\alpha)} w^{\alpha-1} \exp\left(-\beta w\right)\right) \\ &= \frac{1}{p(s)} \frac{\beta^\alpha}{\Gamma(\alpha)} \left(\prod_{k=0}^{K-1} x_k \right) \left(w^{K + \alpha - 1} \cdot \exp\left( -w \left(\beta + \sum_{k=0}^{K-1} x_k s_k\right) \right) \right) \end{align} that is $$ w \mid {\bf s} \sim Gamma\left(K+\alpha, \beta + \sum_{k=0}^{K-1} x_k s_k \right) $$ [comment]: # (</SOL>) **4.** Determine the MSE and MAP a posteriori estimators of $w$: $w_\text{MSE}=\mathbb{E}\left\{w|{\bf s}\right\}$ and $w_\text{MAP} = \max_w p(w|{\bf s})$. #### Solution: [comment]: # (<SOL>) $$ w_{\text{MSE}} = \mathbb{E}\left\{w \mid {\bf s} \right\} = \frac{K + \alpha}{\beta + \sum_{k=0}^{K-1} x_k s_k} $$ $$ w_{\text{MAP}} = \text{mode}\{w\} = \arg\max_w p(w) = \frac{K + \alpha-1}{\beta + \sum_{k=0}^{K-1} x_k s_k} $$ [comment]: # (</SOL>) **5.** Compute the following estimators of $S$: $\qquad\widehat{s}_1 = \mathbb{E}\{s|w_\text{ML},x\}$ $\qquad\widehat{s}_2 = \mathbb{E}\{s|w_\text{MSE},x\}$ $\qquad\widehat{s}_3 = \mathbb{E}\{s|w_\text{MAP},x\}$ #### Solution: [comment]: # (<SOL>) $$ \widehat{s}_1 = \mathbb{E}\{s|w_\text{ML},x\} = w_\text{ML} x $$ $$ \widehat{s}_2 = \mathbb{E}\{s|w_\text{MSE},x\} = w_\text{MSE} x $$ $$ \widehat{s}_3 = \mathbb{E}\{s|w_\text{MAP},x\} = w_\text{MAP} x $$ [comment]: # (</SOL>) ## 7. Maximum evidence model selection We have already addressed with Bayesian Inference the following two issues: - For a given degree, how do we choose the weights? - Should we focus on just one model, or can we use several models at once? However, we still needed some assumptions: a parametric model (i.e., polynomial function and <i>a priori</i> degree selection) and several parameters needed to be adjusted. Though we can recur to cross-validation, Bayesian inference opens the door to other strategies. 
- We could argue that rather than keeping single selections of these parameters, we could simultaneously use several sets of parameters (and/or several parametric forms), and average them in a probabilistic way ... (like we did with the models)
- We will follow a simpler strategy, selecting just the most likely set of parameters according to an ML criterion

### 7.1 Model evidence

The evidence of a model is defined as

$$L = p({\bf s}~|~{\cal M})$$

where ${\cal M}$ denotes the model itself and any free parameters it may have. For instance, for the polynomial model we have assumed so far, ${\cal M}$ would represent the degree of the polynomial, the variance of the additive noise, and the <i>a priori</i> covariance matrix of the weights.

Applying the Theorem of Total probability, we can compute the evidence of the model as

$$L = \int p({\bf s}~|~{\bf f},{\cal M}) p({\bf f}~|~{\cal M}) d{\bf f} $$

For the linear model $f({\bf x}) = {\bf w}^\top{\bf z}$, the evidence can be computed as

$$L = \int p({\bf s}~|~{\bf w},{\cal M}) p({\bf w}~|~{\cal M}) d{\bf w} $$

It is important to notice that these probability density functions are exactly the ones we computed in the previous section. We are just making explicit that they depend on a particular model and the selection of its parameters. Therefore:

- $p({\bf s}~|~{\bf w},{\cal M})$ is the likelihood of ${\bf w}$
- $p({\bf w}~|~{\cal M})$ is the <i>a priori</i> distribution of the weights

### 7.2 Model selection via evidence maximization

- As we have already mentioned, we could propose a prior distribution for the model parameters, $p({\cal M})$, and use it to infer the posterior. However, this can be very involved (usually no closed-form expressions can be derived)
- Alternatively, maximizing the evidence is normally good enough

$${\cal M}_\text{ML} = \arg\max_{\cal M} p({\bf s}~|~{\cal M})$$

Note that we are using the subscript 'ML' because the evidence can also be referred to as the likelihood of the model.

### 7.3 Example: Selection of the degree of the polynomial

For the previous example we had (we consider a spherical Gaussian for the weights):

- ${\bf s}~|~{\bf w},{\cal M}~\sim~{\cal N}\left({\bf Z}{\bf w},~\sigma_\varepsilon^2 {\bf I} \right)$
- ${\bf w}~|~{\cal M}~\sim~{\cal N}\left({\bf 0},~\sigma_p^2 {\bf I} \right)$

In this case, $p({\bf s}~|~{\cal M})$ also follows a Gaussian distribution, and it can be shown that

- $L = p({\bf s}~|~{\cal M}) = {\cal N}\left({\bf 0},\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I} \right)$

If we just pursue the maximization of $L$, this is equivalent to maximizing the log of the evidence

$$\log(L) = -\frac{M}{2} \log(2\pi) -{\frac{1}{2}}\log\mid\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I}\mid - \frac{1}{2} {\bf s}^\top \left(\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I}\right)^{-1} {\bf s}$$

where $M$ denotes the length of vector ${\bf z}$ (the degree of the polynomial plus one).
The following fragment of code evaluates the evidence of the model as a function of the degree of the polynomia ``` from math import pi n_points = 15 frec = 3 std_n = 0.2 max_degree = 12 #Prior distribution parameters sigma_eps = 0.2 mean_w = np.zeros((degree+1,)) sigma_p = 0.5 X_tr = 3 * np.random.random((n_points,1)) - 0.5 S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1) #Compute matrix with training input data for the polynomial model Z = [] for x_val in X_tr.tolist(): Z.append([x_val[0]**k for k in range(degree+1)]) Z=np.asmatrix(Z) #Evaluate the posterior evidence logE = [] for deg in range(max_degree): Z_iter = Z[:,:deg+1] logE_iter = -((deg+1)*np.log(2*pi)/2) \ -np.log(np.linalg.det((sigma_p**2)*Z_iter.dot(Z_iter.T) + (sigma_eps**2)*np.eye(n_points)))/2 \ -S_tr.T.dot(np.linalg.inv((sigma_p**2)*Z_iter.dot(Z_iter.T) + (sigma_eps**2)*np.eye(n_points))).dot(S_tr)/2 logE.append(logE_iter[0,0]) plt.plot(np.array(range(max_degree))+1,logE) plt.xlabel('Polynomia degree') plt.ylabel('log evidence') plt.show() ``` The above curve may change the position of its maximum from run to run. We conclude the notebook by plotting the result of the Bayesian inference for $M=6$ ``` n_points = 15 n_grid = 200 frec = 3 std_n = 0.2 degree = 5 #M-1 nplots = 6 #Prior distribution parameters sigma_eps = 0.1 mean_w = np.zeros((degree+1,)) sigma_p = .5 * np.eye(degree+1) X_tr = 3 * np.random.random((n_points,1)) - 0.5 S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1) X_grid = np.linspace(-1,3,n_grid) S_grid = - np.cos(frec*X_grid) #Noise free for the true model fig = plt.figure() ax = fig.add_subplot(111) ax.plot(X_tr,S_tr,'b.',markersize=10) #Compute matrix with training input data for the polynomial model Z = [] for x_val in X_tr.tolist(): Z.append([x_val[0]**k for k in range(degree+1)]) Z=np.asmatrix(Z) #Compute posterior distribution parameters Sigma_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(sigma_p)) posterior_mean = Sigma_w.dot(Z.T).dot(S_tr)/(sigma_eps**2) posterior_mean = np.array(posterior_mean).flatten() #Plot the posterior mean #Note that polyval assumes the first element of weight vector is the coefficient of #the highest degree term. Thus, we need to reverse w_iter S_grid_iter = np.polyval(posterior_mean[::-1],X_grid) ax.plot(X_grid,S_grid_iter,'g-',label='Predictive mean, BI') #Plot confidence intervals for the Bayesian Inference std_x = [] for el in X_grid: x_ast = np.array([el**k for k in range(degree+1)]) std_x.append(np.sqrt(x_ast.dot(Sigma_w).dot(x_ast)[0,0])) std_x = np.array(std_x) plt.fill_between(X_grid, S_grid_iter-std_x, S_grid_iter+std_x, alpha=0.2, edgecolor='#1B2ACC', facecolor='#089FFF', linewidth=4, linestyle='dashdot', antialiased=True) #We plot also the least square solution w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree) S_grid_iter = np.polyval(w_LS,X_grid) ax.plot(X_grid,S_grid_iter,'m-',label='LS regression') ax.set_xlim(-1,3) ax.set_ylim(S_tr[0]-2,S_tr[-1]+2) ax.legend(loc='best') plt.show() ``` We can check, that now the model also seems quite appropriate for LS regression, but keep in mind that selection of such parameter was itself carried out using Bayesian inference.
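As a small follow-up, the maximum-evidence degree can also be read off programmatically (a one-line sketch, assuming the `logE` list computed in the evidence cell above):

```
# Degree (as plotted on the x-axis above) with maximum log evidence
best_degree = int(np.argmax(logE)) + 1
print("Maximum-evidence polynomial degree:", best_degree)
```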
# Goals ### 1. Learn to implement Resnet V2 Block (Type - 1) using monk - Monk's Keras - Monk's Pytorch - Monk's Mxnet ### 2. Use network Monk's debugger to create complex blocks ### 3. Understand how syntactically different it is to implement the same using - Traditional Keras - Traditional Pytorch - Traditional Mxnet # Resnet V2 Block - Type 1 - Note: The block structure can have variations too, this is just an example ``` from IPython.display import Image Image(filename='imgs/resnet_v2_with_downsample.png') ``` # Table of contents [1. Install Monk](#1) [2. Block basic Information](#2) - [2.1) Visual structure](#2-1) - [2.2) Layers in Branches](#2-2) [3) Creating Block using monk visual debugger](#3) - [3.1) Create the first branch](#3-1) - [3.2) Create the second branch](#3-2) - [3.3) Merge the branches](#3-3) - [3.4) Debug the merged network](#3-4) - [3.5) Compile the network](#3-5) - [3.6) Visualize the network](#3-6) - [3.7) Run data through the network](#3-7) [4) Creating Block Using MONK one line API call](#4) - [Mxnet Backend](#4-1) - [Pytorch Backend](#4-2) - [Keras Backend](#4-3) [5) Appendix](#5) - [Study Material](#5-1) - [Creating block using traditional Mxnet](#5-2) - [Creating block using traditional Pytorch](#5-3) - [Creating block using traditional Keras](#5-4) <a id='1'></a> # Install Monk - git clone https://github.com/Tessellate-Imaging/monk_v1.git - cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt - (Select the requirements file as per OS and CUDA version) ``` !git clone https://github.com/Tessellate-Imaging/monk_v1.git ``` # Imports ``` # Common import numpy as np import math import netron from collections import OrderedDict from functools import partial # Monk import os import sys sys.path.append("monk_v1/monk/"); ``` <a id='2'></a> # Block Information <a id='2_1'></a> ## Visual structure ``` from IPython.display import Image Image(filename='imgs/resnet_v2_with_downsample.png') ``` <a id='2_2'></a> ## Layers in Branches - Number of branches: 2 - Common element - batchnorm -> relu - Branch 1 - conv_1x1 - Branch 2 - conv_3x3 -> batchnorm -> relu -> conv_3x3 - Branches merged using - Elementwise addition (See Appendix to read blogs on resnets) <a id='3'></a> # Creating Block using monk debugger ``` # Imports and setup a project # To use pytorch backend - replace gluon_prototype with pytorch_prototype # To use keras backend - replace gluon_prototype with keras_prototype from gluon_prototype import prototype # Create a sample project gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1"); ``` <a id='3-1'></a> ## Create the first branch ``` def first_branch(output_channels=128, stride=1): network = []; network.append(gtf.convolution(output_channels=output_channels, kernel_size=1, stride=stride)); return network; # Debug the branch branch_1 = first_branch(output_channels=128, stride=1) network = []; network.append(branch_1); gtf.debug_custom_model_design(network); ``` <a id='3-2'></a> ## Create the second branch ``` def second_branch(output_channels=128, stride=1): network = []; network.append(gtf.convolution(output_channels=output_channels, kernel_size=3, stride=stride)); network.append(gtf.batch_normalization()); network.append(gtf.relu()); network.append(gtf.convolution(output_channels=output_channels, kernel_size=3, stride=1)); return network; # Debug the branch branch_2 = second_branch(output_channels=128, stride=1) network = []; network.append(branch_2); gtf.debug_custom_model_design(network); ``` <a id='3-3'></a> ## 
Merge the branches ``` def final_block(output_channels=128, stride=1): network = []; # Common elements network.append(gtf.batch_normalization()); network.append(gtf.relu()); #Create subnetwork and add branches subnetwork = []; branch_1 = first_branch(output_channels=output_channels, stride=stride) branch_2 = second_branch(output_channels=output_channels, stride=stride) subnetwork.append(branch_1); subnetwork.append(branch_2); # Add merging element subnetwork.append(gtf.add()); # Add the subnetwork network.append(subnetwork); return network; ``` <a id='3-4'></a> ## Debug the merged network ``` final = final_block(output_channels=128, stride=1) network = []; network.append(final); gtf.debug_custom_model_design(network); ``` <a id='3-5'></a> ## Compile the network ``` gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False); ``` <a id='3-6'></a> ## Run data through the network ``` import mxnet as mx x = np.zeros((1, 3, 224, 224)); x = mx.nd.array(x); y = gtf.system_dict["local"]["model"].forward(x); print(x.shape, y.shape) ``` <a id='3-7'></a> ## Visualize network using netron ``` gtf.Visualize_With_Netron(data_shape=(3, 224, 224)) ``` <a id='4'></a> # Creating Using MONK LOW code API <a id='4-1'></a> ## Mxnet backend ``` from gluon_prototype import prototype gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1"); network = []; # Single line addition of blocks network.append(gtf.resnet_v2_block(output_channels=128)); gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False); ``` <a id='4-2'></a> ## Pytorch backend - Only the import changes ``` #Change gluon_prototype to pytorch_prototype from pytorch_prototype import prototype gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1"); network = []; # Single line addition of blocks network.append(gtf.resnet_v2_block(output_channels=128)); gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False); ``` <a id='4-3'></a> ## Keras backend - Only the import changes ``` #Change gluon_prototype to keras_prototype from keras_prototype import prototype gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1"); network = []; # Single line addition of blocks network.append(gtf.resnet_v1_block(output_channels=128)); gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False); ``` <a id='5'></a> # Appendix <a id='5-1'></a> ## Study links - https://towardsdatascience.com/residual-blocks-building-blocks-of-resnet-fd90ca15d6ec - https://medium.com/@MaheshNKhatri/resnet-block-explanation-with-a-terminology-deep-dive-989e15e3d691 - https://medium.com/analytics-vidhya/understanding-and-implementation-of-residual-networks-resnets-b80f9a507b9c - https://hackernoon.com/resnet-block-level-design-with-deep-learning-studio-part-1-727c6f4927ac <a id='5-2'></a> ## Creating block using traditional Mxnet - Code credits - https://mxnet.incubator.apache.org/ ``` # Traditional-Mxnet-gluon import mxnet as mx from mxnet.gluon import nn from mxnet.gluon.nn import HybridBlock, BatchNorm from mxnet.gluon.contrib.nn import HybridConcurrent, Identity from mxnet import gluon, init, nd def _conv3x3(channels, stride, in_channels): return nn.Conv2D(channels, kernel_size=3, strides=stride, padding=1, use_bias=False, in_channels=in_channels) class ResnetBlockV2(HybridBlock): def __init__(self, channels, stride, in_channels=0, last_gamma=False, norm_layer=BatchNorm, norm_kwargs=None, **kwargs): super(ResnetBlockV2, self).__init__(**kwargs) #Branch - 1 
self.downsample = nn.Conv2D(channels, 1, stride, use_bias=False, in_channels=in_channels) # Branch - 2 self.bn1 = norm_layer(**({} if norm_kwargs is None else norm_kwargs)) self.conv1 = _conv3x3(channels, stride, in_channels) if not last_gamma: self.bn2 = norm_layer(**({} if norm_kwargs is None else norm_kwargs)) else: self.bn2 = norm_layer(gamma_initializer='zeros', **({} if norm_kwargs is None else norm_kwargs)) self.conv2 = _conv3x3(channels, 1, channels) def hybrid_forward(self, F, x): residual = x x = self.bn1(x) x = F.Activation(x, act_type='relu') residual = self.downsample(x) x = self.conv1(x) x = self.bn2(x) x = F.Activation(x, act_type='relu') x = self.conv2(x) return x + residual # Invoke the block block = ResnetBlockV2(64, 1) # Initialize network and load block on machine ctx = [mx.cpu()]; block.initialize(init.Xavier(), ctx = ctx); block.collect_params().reset_ctx(ctx) block.hybridize() # Run data through network x = np.zeros((1, 3, 224, 224)); x = mx.nd.array(x); y = block.forward(x); print(x.shape, y.shape) # Export Model to Load on Netron block.export("final", epoch=0); netron.start("final-symbol.json", port=8082) ``` <a id='5-3'></a> ## Creating block using traditional Pytorch - Code credits - https://pytorch.org/ ``` # Traiditional-Pytorch import torch from torch import nn from torch.jit.annotations import List import torch.nn.functional as F def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): """3x3 convolution with padding""" return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=dilation, groups=groups, bias=False, dilation=dilation) def conv1x1(in_planes, out_planes, stride=1): """1x1 convolution""" return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) class ResnetBlock(nn.Module): expansion = 1 __constants__ = ['downsample'] def __init__(self, inplanes, planes, stride=1, groups=1, base_width=64, dilation=1, norm_layer=None): super(ResnetBlock, self).__init__() norm_layer = nn.BatchNorm2d # Common Element self.bn0 = norm_layer(inplanes) self.relu0 = nn.ReLU(inplace=True) # Branch - 1 self.downsample = conv1x1(inplanes, planes, stride) # Branch - 2 self.conv1 = conv3x3(inplanes, planes, stride) self.bn1 = norm_layer(planes) self.relu = nn.ReLU(inplace=True) self.conv2 = conv3x3(planes, planes) self.stride = stride def forward(self, x): x = self.bn0(x); x = self.relu0(x); identity = self.downsample(x) out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out += identity out = self.relu(out) return out # Invoke the block block = ResnetBlock(3, 64, stride=1); # Initialize network and load block on machine layers = [] layers.append(block); net = nn.Sequential(*layers); # Run data through network x = torch.randn(1, 3, 224, 224) y = net(x) print(x.shape, y.shape); # Export Model to Load on Netron torch.onnx.export(net, # model being run x, # model input (or a tuple for multiple inputs) "model.onnx", # where to save the model (can be a file or file-like object) export_params=True, # store the trained parameter weights inside the model file opset_version=10, # the ONNX version to export the model to do_constant_folding=True, # whether to execute constant folding for optimization input_names = ['input'], # the model's input names output_names = ['output'], # the model's output names dynamic_axes={'input' : {0 : 'batch_size'}, # variable lenght axes 'output' : {0 : 'batch_size'}}) netron.start('model.onnx', port=9998); ``` <a id='5-4'></a> ## Creating block using traditional Keras - 
Code credits: https://keras.io/ ``` # Traditional-Keras import keras import keras.layers as kla import keras.models as kmo import tensorflow as tf from keras.models import Model backend = 'channels_last' from keras import layers def resnet_conv_block(input_tensor, kernel_size, filters, stage, block, strides=(2, 2)): filters1, filters2, filters3 = filters bn_axis = 3 conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' # Common Element start = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '0a')(input_tensor) start = layers.Activation('relu')(start) #Branch - 1 shortcut = layers.Conv2D(filters3, (1, 1), strides=strides, kernel_initializer='he_normal', name=conv_name_base + '1')(start) #Branch - 2 x = layers.Conv2D(filters1, (1, 1), strides=strides, kernel_initializer='he_normal', name=conv_name_base + '2a')(start) x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x) x = layers.Activation('relu')(x) x = layers.Conv2D(filters2, kernel_size, padding='same', kernel_initializer='he_normal', name=conv_name_base + '2b')(x) x = layers.add([x, shortcut]) x = layers.Activation('relu')(x) return x def create_model(input_shape, kernel_size, filters, stage, block): img_input = layers.Input(shape=input_shape); x = resnet_conv_block(img_input, kernel_size, filters, stage, block) return Model(img_input, x); # Invoke the block kernel_size=3; filters=[64, 64, 64]; input_shape=(224, 224, 3); model = create_model(input_shape, kernel_size, filters, 0, "0"); # Run data through network x = tf.placeholder(tf.float32, shape=(1, 224, 224, 3)) y = model(x) print(x.shape, y.shape) # Export Model to Load on Netron model.save("final.h5"); netron.start("final.h5", port=8082) ```
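Note that the three "traditional" appendix cells above use `netron` (and the MXNet cell uses `np`) without ever importing them. They assume something like the following has already been run — an assumption, since these imports are not shown in the original cells:

```python
# Assumed, not shown in the original appendix cells
import numpy as np   # used to build the dummy (1, 3, 224, 224) input tensor
import netron        # viewer used by netron.start(...); available via `pip install netron`
```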
# Equations of General Relativistic Hydrodynamics (GRHD) ## Authors: Zach Etienne & Patrick Nelson ## This notebook documents and constructs a number of quantities useful for building symbolic (SymPy) expressions for the equations of general relativistic hydrodynamics (GRHD), using the same (Valencia) formalism as `IllinoisGRMHD`. **Notebook Status:** <font color='orange'><b> Self-Validated </b></font> **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)** ## Introduction We write the equations of general relativistic hydrodynamics in conservative form as follows (adapted from Eqs. 41-44 of [Duez et al](https://arxiv.org/pdf/astro-ph/0503420.pdf)): \begin{eqnarray} \ \partial_t \rho_* &+& \partial_j \left(\rho_* v^j\right) = 0 \\ \partial_t \tilde{\tau} &+& \partial_j \left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right) = s \\ \partial_t \tilde{S}_i &+& \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}, \end{eqnarray} where we assume $T^{\mu\nu}$ is the stress-energy tensor of a perfect fluid: $$ T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu}, $$ the $s$ source term is given in terms of ADM quantities via $$ s = \alpha \sqrt{\gamma}\left[\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij} - \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha \right], $$ and \begin{align} v^j &= \frac{u^j}{u^0} \\ \rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\ h &= 1 + \epsilon + \frac{P}{\rho_0}. \end{align} Also we will write the 4-metric in terms of the ADM 3-metric, lapse, and shift using standard equations. Thus the full set of input variables includes: * Spacetime quantities: * ADM quantities $\alpha$, $\beta^i$, $\gamma_{ij}$, $K_{ij}$ * Hydrodynamical quantities: * Rest-mass density $\rho_0$ * Pressure $P$ * Internal energy $\epsilon$ * 4-velocity $u^\mu$ For completeness, the rest of the conservative variables are given by \begin{align} \tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\ \tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i \end{align} ### A Note on Notation As is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates the temporal (time) component. * Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction. 
For instance, in calculating the first term of $b^2 u^\mu u^\nu$, we use Greek indices: ```python T4EMUU = ixp.zerorank2(DIM=4) for mu in range(4): for nu in range(4): # Term 1: b^2 u^{\mu} u^{\nu} T4EMUU[mu][nu] = smallb2*u4U[mu]*u4U[nu] ``` When we calculate $\beta_i = \gamma_{ij} \beta^j$, we use Latin indices: ```python betaD = ixp.zerorank1(DIM=3) for i in range(3): for j in range(3): betaD[i] += gammaDD[i][j] * betaU[j] ``` As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). This can be seen when we handle $\frac{1}{2} \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu}$: ```python # \alpha \sqrt{\gamma} T^{\mu \nu}_{\rm EM} \partial_i g_{\mu \nu} / 2 for i in range(3): for mu in range(4): for nu in range(4): S_tilde_rhsD[i] += alpsqrtgam * T4EMUU[mu][nu] * g4DD_zerotimederiv_dD[mu][nu][i+1] / 2 ``` <a id='toc'></a> # Table of Contents $$\label{toc}$$ Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows 1. [Step 1](#importmodules): Import needed NRPy+ & Python modules 1. [Step 2](#stressenergy): Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$: * **compute_enthalpy()**, **compute_T4UU()**, **compute_T4UD()**: 1. [Step 3](#primtoconserv): Writing the conservative variables in terms of the primitive variables: * **compute_sqrtgammaDET()**, **compute_rho_star()**, **compute_tau_tilde()**, **compute_S_tildeD()** 1. [Step 4](#grhdfluxes): Define the fluxes for the GRHD equations 1. [Step 4.a](#rhostarfluxterm): Define $\rho_*$ flux term for GRHD equations: * **compute_vU_from_u4U__no_speed_limit()**, **compute_rho_star_fluxU()**: 1. [Step 4.b](#taustildesourceterms) Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations: * **compute_tau_tilde_fluxU()**, **compute_S_tilde_fluxUD()** 1. [Step 5](#grhdsourceterms): Define source terms on RHSs of GRHD equations 1. [Step 5.a](#ssourceterm): Define $s$ source term on RHS of $\tilde{\tau}$ equation: * **compute_s_source_term()** 1. [Step 5.b](#stildeisourceterm): Define source term on RHS of $\tilde{S}_i$ equation 1. [Step 5.b.i](#fourmetricderivs): Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives: * **compute_g4DD_zerotimederiv_dD()** 1. [Step 5.b.ii](#stildeisource): Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$: * **compute_S_tilde_source_termD()** 1. [Step 6](#convertvtou): Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson): * **u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit()**, **u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit()** 1. [Step 7](#declarevarsconstructgrhdeqs): Declare ADM and hydrodynamical input variables, and construct GRHD equations 1. [Step 8](#code_validation): Code Validation against `GRHD.equations` NRPy+ module 1. 
[Step 9](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file <a id='importmodules'></a> # Step 1: Import needed NRPy+ & Python modules \[Back to [top](#toc)\] $$\label{importmodules}$$ ``` # Step 1: Import needed core NRPy+ modules from outputC import nrpyAbs # NRPy+: Core C code output module import NRPy_param_funcs as par # NRPy+: parameter interface import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support ``` <a id='stressenergy'></a> # Step 2: Define the stress-energy tensor $T^{\mu\nu}$ and $T^\mu{}_\nu$ \[Back to [top](#toc)\] $$\label{stressenergy}$$ Recall from above that $$ T^{\mu\nu} = \rho_0 h u^{\mu} u^{\nu} + P g^{\mu\nu}, $$ where $h = 1 + \epsilon + \frac{P}{\rho_0}$. Also $$ T^\mu{}_{\nu} = T^{\mu\delta} g_{\delta \nu} $$ ``` # Step 2.a: First define h, the enthalpy: def compute_enthalpy(rho_b,P,epsilon): global h h = 1 + epsilon + P/rho_b # Step 2.b: Define T^{mu nu} (a 4-dimensional tensor) def compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U): global T4UU compute_enthalpy(rho_b,P,epsilon) # Then define g^{mu nu} in terms of the ADM quantities: import BSSN.ADMBSSN_tofrom_4metric as AB4m AB4m.g4UU_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha) # Finally compute T^{mu nu} T4UU = ixp.zerorank2(DIM=4) for mu in range(4): for nu in range(4): T4UU[mu][nu] = rho_b * h * u4U[mu]*u4U[nu] + P*AB4m.g4UU[mu][nu] # Step 2.c: Define T^{mu}_{nu} (a 4-dimensional tensor) def compute_T4UD(gammaDD,betaU,alpha, T4UU): global T4UD # Next compute T^mu_nu = T^{mu delta} g_{delta nu}, needed for S_tilde flux. # First we'll need g_{alpha nu} in terms of ADM quantities: import BSSN.ADMBSSN_tofrom_4metric as AB4m AB4m.g4DD_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha) T4UD = ixp.zerorank2(DIM=4) for mu in range(4): for nu in range(4): for delta in range(4): T4UD[mu][nu] += T4UU[mu][delta]*AB4m.g4DD[delta][nu] ``` <a id='primtoconserv'></a> # Step 3: Writing the conservative variables in terms of the primitive variables \[Back to [top](#toc)\] $$\label{primtoconserv}$$ Recall from above that the conservative variables may be written as \begin{align} \rho_* &= \alpha\sqrt{\gamma} \rho_0 u^0 \\ \tilde{\tau} &= \alpha^2\sqrt{\gamma} T^{00} - \rho_* \\ \tilde{S}_i &= \alpha \sqrt{\gamma} T^0{}_i \end{align} $T^{\mu\nu}$ and $T^\mu{}_\nu$ have already been defined $-$ all in terms of primitive variables. Thus we'll just need $\sqrt{\gamma}=$`gammaDET`, and all conservatives can then be written in terms of other defined quantities, which themselves are written in terms of primitive variables and the ADM metric. 
``` # Step 3: Writing the conservative variables in terms of the primitive variables def compute_sqrtgammaDET(gammaDD): global sqrtgammaDET gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD) sqrtgammaDET = sp.sqrt(gammaDET) def compute_rho_star(alpha, sqrtgammaDET, rho_b,u4U): global rho_star # Compute rho_star: rho_star = alpha*sqrtgammaDET*rho_b*u4U[0] def compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star): global tau_tilde tau_tilde = alpha**2*sqrtgammaDET*T4UU[0][0] - rho_star def compute_S_tildeD(alpha, sqrtgammaDET, T4UD): global S_tildeD S_tildeD = ixp.zerorank1(DIM=3) for i in range(3): S_tildeD[i] = alpha*sqrtgammaDET*T4UD[0][i+1] ``` <a id='grhdfluxes'></a> # Step 4: Define the fluxes for the GRHD equations \[Back to [top](#toc)\] $$\label{grhdfluxes}$$ <a id='rhostarfluxterm'></a> ## Step 4.a: Define $\rho_*$ flux term for GRHD equations \[Back to [top](#toc)\] $$\label{rhostarfluxterm}$$ Recall from above that \begin{array} \ \partial_t \rho_* &+ \partial_j \left(\rho_* v^j\right) = 0. \end{array} Here we will define the $\rho_* v^j$ that constitutes the flux of $\rho_*$, first defining $v^j=u^j/u^0$: ``` # Step 4: Define the fluxes for the GRHD equations # Step 4.a: vU from u4U may be needed for computing rho_star_flux from u4U def compute_vU_from_u4U__no_speed_limit(u4U): global vU # Now compute v^i = u^i/u^0: vU = ixp.zerorank1(DIM=3) for j in range(3): vU[j] = u4U[j+1]/u4U[0] # Step 4.b: rho_star flux def compute_rho_star_fluxU(vU, rho_star): global rho_star_fluxU rho_star_fluxU = ixp.zerorank1(DIM=3) for j in range(3): rho_star_fluxU[j] = rho_star*vU[j] ``` <a id='taustildesourceterms'></a> ## Step 4.b: Define $\tilde{\tau}$ and $\tilde{S}_i$ flux terms for GRHD equations \[Back to [top](#toc)\] $$\label{taustildesourceterms}$$ Recall from above that \begin{array} \ \partial_t \tilde{\tau} &+ \partial_j \underbrace{\left(\alpha^2 \sqrt{\gamma} T^{0j} - \rho_* v^j \right)} &= s \\ \partial_t \tilde{S}_i &+ \partial_j \underbrace{\left(\alpha \sqrt{\gamma} T^j{}_i \right)} &= \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}. 
\end{array} Here we will define all terms that go inside the $\partial_j$'s on the left-hand side of the above equations (i.e., the underbraced expressions): ``` # Step 4.c: tau_tilde flux def compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star): global tau_tilde_fluxU tau_tilde_fluxU = ixp.zerorank1(DIM=3) for j in range(3): tau_tilde_fluxU[j] = alpha**2*sqrtgammaDET*T4UU[0][j+1] - rho_star*vU[j] # Step 4.d: S_tilde flux def compute_S_tilde_fluxUD(alpha, sqrtgammaDET, T4UD): global S_tilde_fluxUD S_tilde_fluxUD = ixp.zerorank2(DIM=3) for j in range(3): for i in range(3): S_tilde_fluxUD[j][i] = alpha*sqrtgammaDET*T4UD[j+1][i+1] ``` <a id='grhdsourceterms'></a> # Step 5: Define source terms on RHSs of GRHD equations \[Back to [top](#toc)\] $$\label{grhdsourceterms}$$ <a id='ssourceterm'></a> ## Step 5.a: Define $s$ source term on RHS of $\tilde{\tau}$ equation \[Back to [top](#toc)\] $$\label{ssourceterm}$$ Recall again from above the $s$ source term on the right-hand side of the $\tilde{\tau}$ evolution equation is given in terms of ADM quantities and the stress-energy tensor via $$ s = \underbrace{\alpha \sqrt{\gamma}}_{\text{Term 3}}\left[\underbrace{\left(T^{00}\beta^i\beta^j + 2 T^{0i}\beta^j + T^{ij} \right)K_{ij}}_{\text{Term 1}} \underbrace{- \left(T^{00}\beta^i + T^{0i} \right)\partial_i\alpha}_{\text{Term 2}} \right], $$ ``` def compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU): global s_source_term s_source_term = sp.sympify(0) # Term 1: for i in range(3): for j in range(3): s_source_term += (T4UU[0][0]*betaU[i]*betaU[j] + 2*T4UU[0][i+1]*betaU[j] + T4UU[i+1][j+1])*KDD[i][j] # Term 2: for i in range(3): s_source_term += -(T4UU[0][0]*betaU[i] + T4UU[0][i+1])*alpha_dD[i] # Term 3: s_source_term *= alpha*sqrtgammaDET ``` <a id='stildeisourceterm'></a> ## Step 5.b: Define source term on RHS of $\tilde{S}_i$ equation \[Back to [top](#toc)\] $$\label{stildeisourceterm}$$ Recall from above $$ \partial_t \tilde{S}_i + \partial_j \left(\alpha \sqrt{\gamma} T^j{}_i \right) = \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}. $$ Our goal here will be to compute $$ \frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}. $$ <a id='fourmetricderivs'></a> ### Step 5.b.i: Compute $g_{\mu\nu,i}$ in terms of ADM quantities and their derivatives \[Back to [top](#toc)\] $$\label{fourmetricderivs}$$ To compute $g_{\mu\nu,i}$ we need to evaluate the first derivative of $g_{\mu\nu}$ in terms of ADM variables. We are given $\gamma_{ij}$, $\alpha$, and $\beta^i$, and the 4-metric is given in terms of these quantities via $$ g_{\mu\nu} = \begin{pmatrix} -\alpha^2 + \beta^k \beta_k & \beta_i \\ \beta_j & \gamma_{ij} \end{pmatrix}. $$ Thus $$ g_{\mu\nu,k} = \begin{pmatrix} -2 \alpha\alpha_{,i} + \beta^j_{,k} \beta_j + \beta^j \beta_{j,k} & \beta_{i,k} \\ \beta_{j,k} & \gamma_{ij,k} \end{pmatrix}, $$ where $\beta_{i} = \gamma_{ij} \beta^j$, so $$ \beta_{i,k} = \gamma_{ij,k} \beta^j + \gamma_{ij} \beta^j_{,k} $$ ``` def compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD): global g4DD_zerotimederiv_dD # Eq. 2.121 in B&S betaD = ixp.zerorank1(DIM=3) for i in range(3): for j in range(3): betaD[i] += gammaDD[i][j]*betaU[j] betaDdD = ixp.zerorank2(DIM=3) for i in range(3): for j in range(3): for k in range(3): # Recall that betaD[i] = gammaDD[i][j]*betaU[j] (Eq. 2.121 in B&S) betaDdD[i][k] += gammaDD_dD[i][j][k]*betaU[j] + gammaDD[i][j]*betaU_dD[j][k] # Eq. 
2.122 in B&S g4DD_zerotimederiv_dD = ixp.zerorank3(DIM=4) for k in range(3): # Recall that g4DD[0][0] = -alpha^2 + betaU[j]*betaD[j] g4DD_zerotimederiv_dD[0][0][k+1] += -2*alpha*alpha_dD[k] for j in range(3): g4DD_zerotimederiv_dD[0][0][k+1] += betaU_dD[j][k]*betaD[j] + betaU[j]*betaDdD[j][k] for i in range(3): for k in range(3): # Recall that g4DD[i][0] = g4DD[0][i] = betaD[i] g4DD_zerotimederiv_dD[i+1][0][k+1] = g4DD_zerotimederiv_dD[0][i+1][k+1] = betaDdD[i][k] for i in range(3): for j in range(3): for k in range(3): # Recall that g4DD[i][j] = gammaDD[i][j] g4DD_zerotimederiv_dD[i+1][j+1][k+1] = gammaDD_dD[i][j][k] ``` <a id='stildeisource'></a> ### Step 5.b.ii: Compute source term of the $\tilde{S}_i$ equation: $\frac{1}{2} \alpha\sqrt{\gamma} T^{\mu\nu} g_{\mu\nu,i}$ \[Back to [top](#toc)\] $$\label{stildeisource}$$ Now that we've computed `g4DD_zerotimederiv_dD`$=g_{\mu\nu,i}$, the $\tilde{S}_i$ evolution equation source term may be quickly constructed. ``` # Step 5.b.ii: Compute S_tilde source term def compute_S_tilde_source_termD(alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU): global S_tilde_source_termD S_tilde_source_termD = ixp.zerorank1(DIM=3) for i in range(3): for mu in range(4): for nu in range(4): S_tilde_source_termD[i] += sp.Rational(1,2)*alpha*sqrtgammaDET*T4UU[mu][nu]*g4DD_zerotimederiv_dD[mu][nu][i+1] ``` <a id='convertvtou'></a> # Step 6: Conversion of $v^i$ to $u^\mu$ (Courtesy Patrick Nelson) \[Back to [top](#toc)\] $$\label{convertvtou}$$ According to Eqs. 9-11 of [the IllinoisGRMHD paper](https://arxiv.org/pdf/1501.07276.pdf), the Valencia 3-velocity $v^i_{(n)}$ is related to the 4-velocity $u^\mu$ via \begin{align} \alpha v^i_{(n)} &= \frac{u^i}{u^0} + \beta^i \\ \implies u^i &= u^0 \left(\alpha v^i_{(n)} - \beta^i\right) \end{align} Defining $v^i = \frac{u^i}{u^0}$, we get $$v^i = \alpha v^i_{(n)} - \beta^i,$$ and in terms of this variable we get \begin{align} g_{00} \left(u^0\right)^2 + 2 g_{0i} u^0 u^i + g_{ij} u^i u^j &= \left(u^0\right)^2 \left(g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j\right)\\ \implies u^0 &= \pm \sqrt{\frac{-1}{g_{00} + 2 g_{0i} v^i + g_{ij} v^i v^j}} \\ &= \pm \sqrt{\frac{-1}{(-\alpha^2 + \beta^2) + 2 \beta_i v^i + \gamma_{ij} v^i v^j}} \\ &= \pm \sqrt{\frac{1}{\alpha^2 - \gamma_{ij}\left(\beta^i + v^i\right)\left(\beta^j + v^j\right)}}\\ &= \pm \sqrt{\frac{1}{\alpha^2 - \alpha^2 \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\ &= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}} \end{align} Generally speaking, numerical errors will occasionally drive expressions under the radical to either negative values or potentially enormous values (corresponding to enormous Lorentz factors). Thus a reliable approach for computing $u^0$ requires that we first rewrite the above expression in terms of the Lorentz factor squared: $\Gamma^2=\left(\alpha u^0\right)^2$: \begin{align} u^0 &= \pm \frac{1}{\alpha}\sqrt{\frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}}}\\ \implies \left(\alpha u^0\right)^2 &= \frac{1}{1 - \gamma_{ij}v^i_{(n)}v^j_{(n)}} \\ \implies \gamma_{ij}v^i_{(n)}v^j_{(n)} &= 1 - \frac{1}{\left(\alpha u^0\right)^2} \\ &= 1 - \frac{1}{\Gamma^2} \end{align} In order for the bottom expression to hold true, the left-hand side must be between 0 and 1. Again, this is not guaranteed due to the appearance of numerical errors. In fact, a robust algorithm will not allow $\Gamma^2$ to become too large (which might contribute greatly to the stress-energy of a given gridpoint), so let's define the largest allowed Lorentz factor as $\Gamma_{\rm max}$. 
Then our algorithm for computing $u^0$ is as follows: If $$R=\gamma_{ij}v^i_{(n)}v^j_{(n)}>1 - \frac{1}{\Gamma_{\rm max}^2},$$ then adjust the 3-velocity $v^i$ as follows: $$v^i_{(n)} \to \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}.$$ After this rescaling, we are then guaranteed that if $R$ is recomputed, it will be set to its ceiling value $R=R_{\rm max} = 1 - \frac{1}{\Gamma_{\rm max}^2}$. Then, regardless of whether the ceiling on $R$ was applied, $u^0$ can be safely computed via $$ u^0 = \frac{1}{\alpha \sqrt{1-R}}, $$ and the remaining components $u^i$ via $$ u^i = u^0 v^i. $$ In summary our algorithm for computing $u^{\mu}$ from $v^i = \frac{u^i}{u^0}$ is as follows: 1. Choose a maximum Lorentz factor $\Gamma_{\rm max}$=`GAMMA_SPEED_LIMIT`, and define $v^i_{(n)} = \frac{1}{\alpha}\left( \frac{u^i}{u^0} + \beta^i\right)$. 1. Compute $R=\gamma_{ij}v^i_{(n)}v^j_{(n)}=1 - \frac{1}{\Gamma^2}$ 1. If $R \le 1 - \frac{1}{\Gamma_{\rm max}^2}$, then skip the next step. 1. Otherwise if $R > 1 - \frac{1}{\Gamma_{\rm max}^2}$ then adjust $v^i_{(n)}\to \sqrt{\frac{1 - \frac{1}{\Gamma_{\rm max}^2}}{R}}v^i_{(n)}$, which will force $R=R_{\rm max}$. 1. Given the $R$ computed in the above step, $u^0 = \frac{1}{\alpha \sqrt{1-R}}$, and $u^i=u^0 v^i$. While the above algorithm is quite robust, its `if()` statement in the fourth step is not very friendly to NRPy+ or an optimizing C compiler, as it would require NRPy+ to generate separate C kernels for each branch of the `if()`. Let's instead try the following trick, which Roland Haas taught us. Define $R^*$ as $$ R^* = \frac{1}{2} \left(R_{\rm max} + R - |R_{\rm max} - R| \right). $$ If $R>R_{\rm max}$, then $|R_{\rm max} - R|=R - R_{\rm max}$, and we get: $$ R^* = \frac{1}{2} \left(R_{\rm max} + R - (R - R_{\rm max}) \right) = \frac{1}{2} \left(2 R_{\rm max}\right) = R_{\rm max} $$ If $R\le R_{\rm max}$, then $|R_{\rm max} - R|=R_{\rm max} - R$, and we get: $$ R^* = \frac{1}{2} \left(R_{\rm max} + R - (R_{\rm max} - R) \right) = \frac{1}{2} \left(2 R\right) = R $$ Then we can rescale *all* $v^i_{(n)}$ via $$ v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R}}, $$ though we must be very careful to carefully handle the case in which $R=0$. To avoid any problems in this case, we simply adjust the above rescaling by adding a tiny number [`TINYDOUBLE`](https://en.wikipedia.org/wiki/Tiny_Bubbles) to $R$ in the denominator, typically `1e-100`: $$ v^i_{(n)} \to v^i_{(n)} \sqrt{\frac{R^*}{R + {\rm TINYDOUBLE}}}. $$ Finally, $u^0$ can be immediately and safely computed, via: $$ u^0 = \frac{1}{\alpha \sqrt{1-R^*}}, $$ and $u^i$ via $$ u^i = u^0 v^i = u^0 \left(\alpha v^i_{(n)} - \beta^i\right). $$ ``` # Step 6.a: Convert Valencia 3-velocity v_{(n)}^i into u^\mu, and apply a speed limiter # Speed-limited ValenciavU is output to rescaledValenciavU global. def u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU): # Inputs: Metric lapse alpha, shift betaU, 3-metric gammaDD, Valencia 3-velocity ValenciavU # Outputs (as globals): u4U_ito_ValenciavU, rescaledValenciavU # R = gamma_{ij} v^i v^j R = sp.sympify(0) for i in range(3): for j in range(3): R += gammaDD[i][j]*ValenciavU[i]*ValenciavU[j] thismodule = "GRHD" # The default value isn't terribly important here, since we can overwrite in the main C code GAMMA_SPEED_LIMIT = par.Cparameters("REAL", thismodule, "GAMMA_SPEED_LIMIT", 10.0) # Default value based on # IllinoisGRMHD. 
# GiRaFFE default = 2000.0 Rmax = 1 - 1 / (GAMMA_SPEED_LIMIT * GAMMA_SPEED_LIMIT) # Now, we set Rstar = min(Rmax,R): # If R < Rmax, then Rstar = 0.5*(Rmax+R-Rmax+R) = R # If R >= Rmax, then Rstar = 0.5*(Rmax+R+Rmax-R) = Rmax Rstar = sp.Rational(1, 2) * (Rmax + R - nrpyAbs(Rmax - R)) # We add TINYDOUBLE to R below to avoid a 0/0, which occurs when # ValenciavU == 0 for all Valencia 3-velocity components. # "Those tiny *doubles* make me warm all over # with a feeling that I'm gonna love you till the end of time." # - Adapted from Connie Francis' "Tiny Bubbles" TINYDOUBLE = par.Cparameters("#define",thismodule,"TINYDOUBLE",1e-100) # The rescaled (speed-limited) Valencia 3-velocity # is given by, v_{(n)}^i = sqrt{Rstar/R} v^i global rescaledValenciavU rescaledValenciavU = ixp.zerorank1(DIM=3) for i in range(3): # If R == 0, then Rstar == 0, so sqrt( Rstar/(R+TINYDOUBLE) )=sqrt(0/1e-100) = 0 # If your velocities are of order 1e-100 and this is physically # meaningful, there must be something wrong with your unit conversion. rescaledValenciavU[i] = ValenciavU[i]*sp.sqrt(Rstar/(R + TINYDOUBLE)) # Finally compute u^mu in terms of Valenciav^i # u^0 = 1/(alpha-sqrt(1-R^*)) global u4U_ito_ValenciavU u4U_ito_ValenciavU = ixp.zerorank1(DIM=4) u4U_ito_ValenciavU[0] = 1/(alpha*sp.sqrt(1-Rstar)) # u^i = u^0 ( alpha v^i_{(n)} - beta^i ), where v^i_{(n)} is the Valencia 3-velocity for i in range(3): u4U_ito_ValenciavU[i+1] = u4U_ito_ValenciavU[0] * (alpha * rescaledValenciavU[i] - betaU[i]) # Step 6.b: Convert v^i into u^\mu, and apply a speed limiter. # Speed-limited vU is output to rescaledvU global. def u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, vU): ValenciavU = ixp.zerorank1(DIM=3) for i in range(3): ValenciavU[i] = (vU[i] + betaU[i])/alpha u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, ValenciavU) # Since ValenciavU is written in terms of vU, # u4U_ito_ValenciavU is actually u4U_ito_vU global u4U_ito_vU u4U_ito_vU = ixp.zerorank1(DIM=4) for mu in range(4): u4U_ito_vU[mu] = u4U_ito_ValenciavU[mu] # Finally compute the rescaled (speed-limited) vU global rescaledvU rescaledvU = ixp.zerorank1(DIM=3) for i in range(3): rescaledvU[i] = alpha * rescaledValenciavU[i] - betaU[i] ``` <a id='declarevarsconstructgrhdeqs'></a> # Step 7: Declare ADM and hydrodynamical input variables, and construct GRHD equations \[Back to [top](#toc)\] $$\label{declarevarsconstructgrhdeqs}$$ ``` # First define hydrodynamical quantities u4U = ixp.declarerank1("u4U", DIM=4) rho_b,P,epsilon = sp.symbols('rho_b P epsilon',real=True) # Then ADM quantities gammaDD = ixp.declarerank2("gammaDD","sym01",DIM=3) KDD = ixp.declarerank2("KDD" ,"sym01",DIM=3) betaU = ixp.declarerank1("betaU", DIM=3) alpha = sp.symbols('alpha', real=True) # First compute stress-energy tensor T4UU and T4UD: compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U) compute_T4UD(gammaDD,betaU,alpha, T4UU) # Next sqrt(gamma) compute_sqrtgammaDET(gammaDD) # Compute conservative variables in terms of primitive variables compute_rho_star( alpha, sqrtgammaDET, rho_b,u4U) compute_tau_tilde(alpha, sqrtgammaDET, T4UU,rho_star) compute_S_tildeD( alpha, sqrtgammaDET, T4UD) # Then compute v^i from u^mu compute_vU_from_u4U__no_speed_limit(u4U) # Next compute fluxes of conservative variables compute_rho_star_fluxU( vU, rho_star) compute_tau_tilde_fluxU(alpha, sqrtgammaDET, vU,T4UU, rho_star) compute_S_tilde_fluxUD( alpha, sqrtgammaDET, T4UD) # Then declare derivatives & compute g4DD_zerotimederiv_dD 
gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3) betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3) alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3) compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD) # Then compute source terms on tau_tilde and S_tilde equations compute_s_source_term(KDD,betaU,alpha, sqrtgammaDET,alpha_dD, T4UU) compute_S_tilde_source_termD( alpha, sqrtgammaDET,g4DD_zerotimederiv_dD, T4UU) # Then compute the 4-velocities in terms of an input Valencia 3-velocity testValenciavU[i] testValenciavU = ixp.declarerank1("testValenciavU",DIM=3) u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha,betaU,gammaDD, testValenciavU) # Finally compute the 4-velocities in terms of an input 3-velocity testvU[i] = u^i/u^0 testvU = ixp.declarerank1("testvU",DIM=3) u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, testvU) ``` <a id='code_validation'></a> # Step 8: Code Validation against `GRHD.equations` NRPy+ module \[Back to [top](#toc)\] $$\label{code_validation}$$ As a code validation check, we verify agreement in the SymPy expressions for the GRHD equations generated in 1. this tutorial versus 2. the NRPy+ [GRHD.equations](../edit/GRHD/equations.py) module. ``` import GRHD.equations as Ge # First compute stress-energy tensor T4UU and T4UD: Ge.compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U) Ge.compute_T4UD(gammaDD,betaU,alpha, Ge.T4UU) # Next sqrt(gamma) Ge.compute_sqrtgammaDET(gammaDD) # Compute conservative variables in terms of primitive variables Ge.compute_rho_star( alpha, Ge.sqrtgammaDET, rho_b,u4U) Ge.compute_tau_tilde(alpha, Ge.sqrtgammaDET, Ge.T4UU,Ge.rho_star) Ge.compute_S_tildeD( alpha, Ge.sqrtgammaDET, Ge.T4UD) # Then compute v^i from u^mu Ge.compute_vU_from_u4U__no_speed_limit(u4U) # Next compute fluxes of conservative variables Ge.compute_rho_star_fluxU ( Ge.vU, Ge.rho_star) Ge.compute_tau_tilde_fluxU(alpha, Ge.sqrtgammaDET, Ge.vU,Ge.T4UU, Ge.rho_star) Ge.compute_S_tilde_fluxUD (alpha, Ge.sqrtgammaDET, Ge.T4UD) # Then declare derivatives & compute g4DD_zerotimederiv_dD # gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01",DIM=3) # betaU_dD = ixp.declarerank2("betaU_dD" ,"nosym",DIM=3) # alpha_dD = ixp.declarerank1("alpha_dD" ,DIM=3) Ge.compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD) # Finally compute source terms on tau_tilde and S_tilde equations Ge.compute_s_source_term(KDD,betaU,alpha, Ge.sqrtgammaDET,alpha_dD, Ge.T4UU) Ge.compute_S_tilde_source_termD( alpha, Ge.sqrtgammaDET,Ge.g4DD_zerotimederiv_dD,Ge.T4UU) GetestValenciavU = ixp.declarerank1("testValenciavU") Ge.u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, GetestValenciavU) GetestvU = ixp.declarerank1("testvU") Ge.u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit( alpha, betaU, gammaDD, GetestvU) all_passed=True def comp_func(expr1,expr2,basename,prefixname2="Ge."): if str(expr1-expr2)!="0": print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2)) all_passed=False def gfnm(basename,idx1,idx2=None,idx3=None): if idx2 is None: return basename+"["+str(idx1)+"]" if idx3 is None: return basename+"["+str(idx1)+"]["+str(idx2)+"]" return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]" expr_list = [] exprcheck_list = [] namecheck_list = [] namecheck_list.extend(["sqrtgammaDET","rho_star","tau_tilde","s_source_term"]) exprcheck_list.extend([Ge.sqrtgammaDET,Ge.rho_star,Ge.tau_tilde,Ge.s_source_term]) 
expr_list.extend([sqrtgammaDET,rho_star,tau_tilde,s_source_term]) for mu in range(4): namecheck_list.extend([gfnm("u4_ito_ValenciavU",mu),gfnm("u4U_ito_vU",mu)]) exprcheck_list.extend([ Ge.u4U_ito_ValenciavU[mu], Ge.u4U_ito_vU[mu]]) expr_list.extend( [ u4U_ito_ValenciavU[mu], u4U_ito_vU[mu]]) for nu in range(4): namecheck_list.extend([gfnm("T4UU",mu,nu),gfnm("T4UD",mu,nu)]) exprcheck_list.extend([Ge.T4UU[mu][nu],Ge.T4UD[mu][nu]]) expr_list.extend([T4UU[mu][nu],T4UD[mu][nu]]) for delta in range(4): namecheck_list.extend([gfnm("g4DD_zerotimederiv_dD",mu,nu,delta)]) exprcheck_list.extend([Ge.g4DD_zerotimederiv_dD[mu][nu][delta]]) expr_list.extend([g4DD_zerotimederiv_dD[mu][nu][delta]]) for i in range(3): namecheck_list.extend([gfnm("S_tildeD",i),gfnm("vU",i),gfnm("rho_star_fluxU",i), gfnm("tau_tilde_fluxU",i),gfnm("S_tilde_source_termD",i), gfnm("rescaledValenciavU",i), gfnm("rescaledvU",i)]) exprcheck_list.extend([Ge.S_tildeD[i],Ge.vU[i],Ge.rho_star_fluxU[i], Ge.tau_tilde_fluxU[i],Ge.S_tilde_source_termD[i], Ge.rescaledValenciavU[i],Ge.rescaledvU[i]]) expr_list.extend([S_tildeD[i],vU[i],rho_star_fluxU[i], tau_tilde_fluxU[i],S_tilde_source_termD[i], rescaledValenciavU[i],rescaledvU[i]]) for j in range(3): namecheck_list.extend([gfnm("S_tilde_fluxUD",i,j)]) exprcheck_list.extend([Ge.S_tilde_fluxUD[i][j]]) expr_list.extend([S_tilde_fluxUD[i][j]]) for i in range(len(expr_list)): comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i]) import sys if all_passed: print("ALL TESTS PASSED!") else: print("ERROR: AT LEAST ONE TEST DID NOT PASS") sys.exit(1) ``` <a id='latex_pdf_output'></a> # Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\] $$\label{latex_pdf_output}$$ The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-GRHD_Equations-Cartesian.pdf](Tutorial-GRHD_Equations-Cartesian.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.) ``` import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GRHD_Equations-Cartesian") ```
``` from misc import HP import argparse import random import time import pickle import copy import SYCLOP_env as syc from misc import * import sys import os import cv2 import argparse import tensorflow.keras as keras from keras_networks import rnn_model_102, rnn_model_multicore_201, rnn_model_multicore_202 from curriculum_utils import create_mnist_dataset, bad_res102 def generate_trajectory(n_steps,max_q,acceleration_mode): starting_point = np.array([max_q[0] // 2, max_q[1] // 2]) steps = [] qdot=0 for j in range(n_steps): steps.append(starting_point * 1) if acceleration_mode: qdot += np.random.randint(-1, 2, 2) starting_point += qdot else: starting_point += np.random.randint(-5, 6, 2) return np.array(steps) def split_dataset_xy(dataset): dataset_x1 = [uu[0] for uu in dataset] dataset_x2 = [uu[1] for uu in dataset] dataset_y = [uu[-1] for uu in dataset] return (np.array(dataset_x1)[...,np.newaxis],np.array(dataset_x2)[:,:n_timesteps,:]),np.array(dataset_y) #parse hyperparameters lsbjob = os.getenv('LSB_JOBID') lsbjob = '' if lsbjob is None else lsbjob hp = HP() hp.save_path = 'saved_runs' hp.description='' parser = argparse.ArgumentParser() parser.add_argument('--tau_int', default=4., type=float, help='Integration timescale for adaaptation') parser.add_argument('--resize', default=1.0, type=float, help='resize of images') parser.add_argument('--run_name_suffix', default='', type=str, help='suffix for runname') parser.add_argument('--eval_dir', default=None, type=str, help='eval dir') parser.add_argument('--dqn_initial_network', default=None, type=str, help='dqn_initial_network') parser.add_argument('--decoder_initial_network', default=None, type=str, help='decoder_initial_network') parser.add_argument('--decoder_arch', default='default', type=str, help='decoder_network architecture: default / multicore_201') parser.add_argument('--decoder_n_cores', default=1, type=int, help='decoder number of cores') parser.add_argument('--decoder_learning_rate', default=1e-3, type=float, help='decoder learning rate') parser.add_argument('--decoder_dropout', default=0.0, type=float, help='decoder dropout') parser.add_argument('--decoder_rnn_type', default='gru', type=str, help='gru or rnn') parser.add_argument('--decoder_rnn_units', default=100, type=int, help='decoder rnn units') parser.add_argument('--decoder_rnn_layers', default=1, type=int, help='decoder rnn units') parser.add_argument('--decoder_ignore_position', dest='decoder_ignore_position', action='store_true') parser.add_argument('--no-decoder_ignore_position', dest='decoder_ignore_position', action='store_false') parser.add_argument('--syclop_learning_rate', default=2.5e-3, type=float, help='syclop (RL) learning rate') parser.add_argument('--color', default='grayscale', type=str, help='grayscale/rgb') parser.add_argument('--speed_reward', default=0.0, type=float, help='speed reward, typically negative') parser.add_argument('--intensity_reward', default=0.0, type=float, help='speed penalty reward') parser.add_argument('--loss_reward', default=-1.0, type=float, help='reward for loss, typically negative') parser.add_argument('--resolution', default=28, type=int, help='resolution') parser.add_argument('--max_eval_episodes', default=10000, type=int, help='episodes for evaluation mode') parser.add_argument('--steps_per_episode', default=5, type=int, help='time steps in each episode in ') parser.add_argument('--fit_verbose', default=1, type=int, help='verbose level for model.fit ') parser.add_argument('--steps_between_learnings', default=100, type=int, 
help='steps_between_learnings') parser.add_argument('--num_epochs', default=100, type=int, help='steps_between_learnings') parser.add_argument('--alpha_increment', default=0.01, type=float, help='reward for loss, typically negative') parser.add_argument('--beta_t1', default=400000, type=int, help='time rising bete') parser.add_argument('--beta_t2', default=700000, type=int, help='end rising beta') parser.add_argument('--beta_b1', default=0.1, type=float, help='beta initial value') parser.add_argument('--beta_b2', default=1.0, type=float, help='beta final value') parser.add_argument('--curriculum_enable', dest='curriculum_enable', action='store_true') parser.add_argument('--no-curriculum_enable', dest='curriculum_enable', action='store_false') parser.add_argument('--conv_fe', dest='conv_fe', action='store_true') parser.add_argument('--no-conv_fe', dest='conv_fe', action='store_false') parser.add_argument('--acceleration_mode', dest='acceleration_mode', action='store_true') parser.add_argument('--no-acceleration_mode', dest='acceleration_mode', action='store_false') parser.set_defaults(eval_mode=False, decode_from_dvs=False,test_mode=False,rising_beta_schedule=True,decoder_ignore_position=False, curriculum_enable=True, conv_fe=False, acceleration_mode=True) config = parser.parse_args('') # config = parser.parse_args() config = vars(config) hp.upadte_from_dict(config) hp.this_run_name = sys.argv[0] + '_noname_' + hp.run_name_suffix + '_' + lsbjob + '_' + str(int(time.time())) #define model n_timesteps = hp.steps_per_episode ## # deploy_logs() ## # if hp.decoder_arch == 'multicore_201': # decoder = rnn_model_multicore_201(n_cores=hp.decoder_n_cores,lr=hp.decoder_learning_rate,ignore_input_B=hp.decoder_ignore_position,dropout=hp.decoder_dropout,rnn_type=hp.decoder_rnn_type, # input_size=(hp.resolution,hp.resolution, 1),rnn_layers=hp.decoder_rnn_layers,conv_fe=hp.conv_fe, rnn_units=hp.decoder_rnn_units, n_timesteps=hp.steps_per_episode) # if hp.decoder_arch == 'multicore_202': # decoder = rnn_model_multicore_202(n_cores=hp.decoder_n_cores, lr=hp.decoder_learning_rate, # ignore_input_B=hp.decoder_ignore_position, dropout=hp.decoder_dropout, # rnn_type=hp.decoder_rnn_type, # input_size=(hp.resolution, hp.resolution, 1), # rnn_layers=hp.decoder_rnn_layers, conv_fe=hp.conv_fe, # rnn_units=hp.decoder_rnn_units, n_timesteps=hp.steps_per_episode) # elif hp.decoder_arch == 'default': # decoder = rnn_model_102(lr=hp.decoder_learning_rate,ignore_input_B=hp.decoder_ignore_position,dropout=hp.decoder_dropout,rnn_type=hp.decoder_rnn_type, # input_size=(hp.resolution,hp.resolution, 1),rnn_layers=hp.decoder_rnn_layers,conv_fe=hp.conv_fe,rnn_units=hp.decoder_rnn_units, n_timesteps=hp.steps_per_episode) decoder_initial_network = 'saved_runs/trajectory_curriculum101.py_noname__613128_1624010531_1//final_decoder.nwk' decoder = keras.models.load_model(decoder_initial_network) #define dataset (images, labels), (images_test, labels_test) = keras.datasets.mnist.load_data(path="mnist.npz") #fit one epoch in a time # scheduler = Scheduler(hp.lambda_schedule) # for epoch in range(hp.num_epochs): # lambda_epoch = scheduler.step(epoch) hp.acceleration_mode alpha=0 hp.num_trials = 30 trajectories = [] train_pred_pred = [] val_pred_pred = [] for trial in range(hp.num_trials): this_trajectory=generate_trajectory(hp.steps_per_episode,[72,72],hp.acceleration_mode) # this_trajectory=trajectories[trial] train_dataset, test_dataset = create_mnist_dataset(images, labels, 6, sample=hp.steps_per_episode, bad_res_func=bad_res102, 
return_datasets=True, q_0=this_trajectory, alpha=0.0, random_trajectories=True,acceleration_mode=hp.acceleration_mode) train_dataset_x, train_dataset_y = split_dataset_xy(train_dataset) test_dataset_x, test_dataset_y = split_dataset_xy(test_dataset) q_prime = train_dataset_x[1][0] # print('epoch', epoch, ' CONTROL!!!',' first q --', q_prime.reshape([-1])) print("evaluating trajectory ", trial) train_preds = decoder.predict( train_dataset_x, batch_size=64, verbose=hp.fit_verbose, # We pass some validation for # monitoring validation loss and metrics # at the end of each epoch ) val_preds = decoder.predict( test_dataset_x, batch_size=64, verbose=hp.fit_verbose, # We pass some validation for # monitoring validation loss and metrics # at the end of each epoch ) accuracy = np.mean(np.argmax(val_preds, axis=1)==test_dataset_y) print('accuracy:', accuracy) trajectories.append(this_trajectory+0.) train_pred_pred.append(train_preds+0.0) val_pred_pred.append(val_preds+0.0) accuracy = np.mean(np.argmax(val_preds, axis=1)==test_dataset_y) accuracy ent = np.zeros([np.shape(test_dataset_y)[0],hp.num_trials]) lablab = np.zeros([np.shape(test_dataset_y)[0],hp.num_trials]) for jj,preds in enumerate(val_pred_pred): ent[:,jj]=np.sum(-preds*np.log(preds),axis=1) lablab[:,jj]=np.argmax(preds, axis=1) ii=np.argmin(ent,axis=1) best_lbl=[] for jj,uu in enumerate(ii): best_lbl.append(lablab[jj,uu]) np.mean(best_lbl==test_dataset_y) #random syclop, np.mean(lablab==test_dataset_y.reshape([-1,1])) accuracies=np.mean(lablab==test_dataset_y.reshape([-1,1]),axis=0) best_ii=np.argmax(np.mean(lablab==test_dataset_y.reshape([-1,1]),axis=0)) np.mean(ii==best_ii) np.mean(np.any(lablab==test_dataset_y.reshape([-1,1]),axis=1)) best_ent=np.min(ent,axis=1) _=plt.hist(best_ent,bins=20) _=plt.hist(best_ent[best_lbl!=test_dataset_y],bins=20) _=plt.hist(best_ent[best_lbl==test_dataset_y],bins=20) super_pred=np.sum(val_pred_pred,axis=0) super_label=np.argmax(super_pred,axis=1) np.mean(super_label==test_dataset_y) super_label.shape with open('committee103s5_traj_30.pkl','wb') as f: pickle.dump(trajectories,f) def super_pred_fun(pred,T=1): logits = np.log(pred) pred_T = np.exp(1./T*logits) pred_T = pred_T/np.sum(pred_T,axis=-1)[...,np.newaxis] super_pred=np.sum(pred_T,axis=0) return super_pred super_pred = super_pred_fun(train_pred_pred) super_pred = super_pred_fun(val_pred_pred,T=1000) super_label=np.argmax(super_pred,axis=1) print(np.mean(super_label==test_dataset_y)) np.linspace(0.1,5.0,100) super_pred = super_pred_fun(val_pred_pred[:15],T=1000) super_label=np.argmax(super_pred,axis=1) print(np.mean(super_label==test_dataset_y)) super_pred = super_pred_fun(val_pred_pred[:5],T=1000) super_label=np.argmax(super_pred,axis=1) print(np.mean(super_label==test_dataset_y)) super_pred = super_pred_fun(val_pred_pred[:2],T=1000) super_label=np.argmax(super_pred,axis=1) print(np.mean(super_label==test_dataset_y)) # x = np.linspace(0, 2*np.pi, 64) # y = np.cos(x) # pl.figure() # pl.plot(x,y) n = hp.num_trials # colors = plt.cm.jet(accuracies) colors = plt.cm.jet((accuracies-np.min(accuracies))/(np.max(accuracies)-np.min(accuracies))) # for trial in range(hp.num_trials): plt.plot(trajectories[trial][:,0],trajectories[trial][:,1], color=colors[trial]) # plt.colorbar() colors = plt.cm.jet((accuracies-np.min(accuracies))/(np.max(accuracies)-np.min(accuracies))) n = hp.num_trials # colors = plt.cm.jet(accuracies) colors = plt.cm.RdYlGn((accuracies-np.min(accuracies))/(np.max(accuracies)-np.min(accuracies))) # for trial in range(hp.num_trials): 
plt.plot(trajectories[trial][:,0],trajectories[trial][:,1], color=colors[trial],linewidth=3) plt.cm.jet(1.0) n_lines = hp.num_trials x = np.arange(100) yint = np.arange(0, n_lines*10, 10) ys = np.array([x + b for b in yint]) xs = np.array([x for i in range(n_lines)]) # could also use np.tile colors = np.arange(n_lines) fig, ax = plt.subplots() lc = multiline(xs, ys, yint, cmap='bwr', lw=2) axcb = fig.colorbar(lc) axcb.set_label('Y-intercept') ax.set_title('Line Collection with mapped colors') # Set the input shape input_shape = (300,) # print(f'Feature shape: {input_shape}') # Create the model model = keras.Sequential() model.add(keras.layers.Dense(300, input_shape=input_shape, activation='relu')) model.add(keras.layers.Dropout(0.4)) model.add(keras.layers.Dense(100, activation='relu')) model.add(keras.layers.Dropout(0.4)) model.add(keras.layers.Dense(50, activation='relu')) model.add(keras.layers.Dropout(0.2)) model.add(keras.layers.Dense(10, activation='softmax')) # Configure the model and start training model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(np.transpose(train_pred_pred,[1,2,0]).reshape([-1,300]), train_dataset_y.astype(int), epochs=100, batch_size=250, verbose=1, validation_split=0.2) plt.hist(accuracies,bins=30) for pred in np.array(val_pred_pred)[:,7,:]: plt.plot(pred) plt.xlabel('label') plt.ylabel('probability') np.array(val_pred_pred[:,0,:]) ```
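The plotting cells above call a `multiline` helper (and rely on `plt`/`np`) that is never defined or imported in this excerpt. A minimal sketch of such a helper, assuming it follows the common Matplotlib `LineCollection` recipe — the name and signature are taken from the call site `multiline(xs, ys, yint, cmap='bwr', lw=2)`, while the body is an assumption:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

def multiline(xs, ys, c, ax=None, **kwargs):
    """Plot several lines, coloring each by the scalar in c; return the LineCollection."""
    ax = plt.gca() if ax is None else ax
    # One (n_points, 2) segment of x/y pairs per line
    segments = [np.column_stack([x, y]) for x, y in zip(xs, ys)]
    lc = LineCollection(segments, **kwargs)      # cmap, lw, etc. pass through here
    lc.set_array(np.asarray(c))                  # scalar values mapped by the colormap
    ax.add_collection(lc)
    ax.autoscale()
    return lc
```

Because the returned collection carries a scalar array, `fig.colorbar(lc)` in the cell above works directly with it.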
### K-Means ``` class Kmeans: """K-Means Clustering Algorithm""" def __init__(self, k, centers=None, cost=None,iter=None, labels=None, max_iter = 1000): """Initialize Parameters""" self.max_iter = max_iter self.k = k self.centers = np.empty(1) self.cost = [] self.iter = 1 self.labels = np.empty(1) def calc_distances(self, data, centers, weights): """Distance Matrix""" distance = pairwise_distances(data, centers)**2 min_distance = np.min(distance, axis = 1) D = min_distance*weights return D def fit(self, data): """Clustering Process""" ## Initial centers if type(data) == pd.DataFrame: data = data.values nrow = data.shape[0] index = np.random.choice(range(nrow), self.k, False) self.centers = data[index] while (self.iter <= self.max_iter): distance = pairwise_distances(data, self.centers)**2 self.cost.append(sum(np.min(distance, axis=1))) self.labels = np.argmin(distance, axis=1) centers_new = np.array([np.mean(data[self.labels == i], axis=0) for i in np.unique(self.labels)]) ## sanity check if(np.all(self.centers == centers_new)): break self.centers = centers_new self.iter += 1 ## convergence check if (sum(np.min(pairwise_distances(data, self.centers)**2, axis=1)) != self.cost[-1]): warnings.warn("Algorithm Did Not Converge In {} Iterations".format(self.max_iter)) return self ``` ### K-Means++ ``` class Kmeanspp: """K-Means++ Clustering Algorithm""" def __init__(self, k, centers=None, cost=None,iter=None, labels=None, max_iter = 1000): """Initialize Parameters""" self.max_iter = max_iter self.k = k self.centers = np.empty(1) self.cost = [] self.iter = 1 self.labels = np.empty(1) def calc_distances(self, data, centers, weights): """Distance Matrix""" distance = pairwise_distances(data, centers)**2 min_distance = np.min(distance, axis = 1) D = min_distance*weights return D def initial_centers_Kmeansapp(self, data, k, weights): """Initialize centers for K-Means++""" centers = [] centers.append(random.choice(data)) while(len(centers) < k): distances = self.calc_distances(data, centers, weights) prob = distances/sum(distances) c = np.random.choice(range(data.shape[0]), 1, p=prob) centers.append(data[c[0]]) return centers def fit(self, data, weights=None): """Clustering Process""" if weights is None: weights = np.ones(len(data)) if type(data) == pd.DataFrame: data=data.values nrow = data.shape[0] self.centers = self.initial_centers_Kmeansapp(data, self.k, weights) while (self.iter <= self.max_iter): distance = pairwise_distances(data, self.centers)**2 self.cost.append(sum(np.min(distance, axis=1))) self.labels = np.argmin(distance, axis=1) centers_new = np.array([np.mean(data[self.labels == i], axis=0) for i in np.unique(self.labels)]) ## sanity check if(np.all(self.centers == centers_new)): break self.centers = centers_new self.iter += 1 ## convergence check if (sum(np.min(pairwise_distances(data, self.centers)**2, axis=1)) != self.cost[-1]): warnings.warn("Algorithm Did Not Converge In {} Iterations".format(self.max_iter)) return self ``` ### K-Meansll ``` class Kmeansll: """K-Meansll Clustering Algorithm""" def __init__(self, k, omega, centers=None, cost=None,iter=None, labels=None, max_iter = 1000): """Initialize Parameters""" self.max_iter = max_iter self.k = k self.omega = omega self.centers = np.empty(1) self.cost = [] self.iter = 1 self.labels = np.empty(1) def calc_weight(self, data, centers): """Weight Calculation""" l = len(centers) distance = pairwise_distances(data, centers) labels = np.argmin(distance, axis=1) weights = [sum(labels == i) for i in range(l)] return 
(weights/sum(weights)) def calc_distances(self, data, centers, weights): """Distance Matrix""" distance = pairwise_distances(data, centers)**2 min_distance = np.min(distance, axis = 1) D = min_distance*weights return D def initial_centers_Kmeansll(self, data, k, omega, weights): """Initialize Centers for K-Meansll""" centers = [] centers.append(random.choice(data)) phi = np.int(np.round(np.log(sum(self.calc_distances(data, centers, weights))))) l = k*omega ## oversampling factor for i in range(phi): dist = self.calc_distances(data, centers, weights) prob = l*dist/sum(dist) for i in range(len(prob)): if prob[i] > np.random.uniform(): centers.append(data[i]) centers = np.array(centers) recluster_weight = self.calc_weight(data, centers) reclusters = kmeanspp.Kmeanspp(k).fit(centers, recluster_weight).labels initial_centers = [] for i in np.unique(reclusters): initial_centers.append(np.mean(centers[reclusters == i], axis = 0)) return initial_centers def fit(self, data, weights=None): """Clustering Process""" if weights is None: weights = np.ones(len(data)) if type(data) == pd.DataFrame: data=data.values nrow = data.shape[0] self.centers = self.initial_centers_Kmeansll(data, self.k, self.omega, weights) while (self.iter <= self.max_iter): distance = pairwise_distances(data, self.centers)**2 self.cost.append(sum(np.min(distance, axis=1))) self.labels = np.argmin(distance, axis=1) centers_new = np.array([np.mean(data[self.labels == i], axis=0) for i in np.unique(self.labels)]) ## sanity check if(np.all(self.centers == centers_new)): break self.centers = centers_new self.iter += 1 ## convergence check if (sum(np.min(pairwise_distances(data, self.centers)**2, axis=1)) != self.cost[-1]): warnings.warn("Algorithm Did Not Converge In {} Iterations".format(self.max_iter)) return self ```
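The class definitions above do not show their imports (`numpy`, `pandas`, `random`, `warnings`, scikit-learn's `pairwise_distances`) or how they are meant to be called; `Kmeansll` additionally expects a `kmeanspp` module exposing the `Kmeanspp` class. A minimal usage sketch for `Kmeans`, assuming scikit-learn is available and using `make_blobs` purely as illustrative data — both assumptions, not part of the original notebook:

```python
# Assumed imports -- the original cells do not show them
import random
import warnings
import numpy as np
import pandas as pd
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances

# Illustrative data: 500 two-dimensional points drawn around 3 centers
X, _ = make_blobs(n_samples=500, centers=3, random_state=0)

km = Kmeans(k=3).fit(X)          # Lloyd iterations from random initial centers
print("iterations run:", km.iter)
print("final cost:    ", km.cost[-1])
print("cluster sizes: ", np.bincount(km.labels))
```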
``` import os import pandas as pd import numpy as np import plotly.graph_objects as go from sklearn.preprocessing import MinMaxScaler from plotly.subplots import make_subplots df_dict = {} for path in os.listdir('Data/FINAL INDEX'): var = path.split('.')[0] vars()[var] = pd.read_csv('Data/FINAL INDEX/' + path, thousands = ',') df_dict[var] = vars()[var] for df_name in df_dict.keys(): df_dict[df_name]['Date'] = pd.to_datetime(df_dict[df_name]['Date']) dj_pharma.info() scatters = [[ dict( type="scatter", x=df_dict[df_name]["Date"].sort_values(), y=df_dict[df_name]["Price"].map(lambda x : np.log(x)), # Since we just want to look at a trend use the log values name=None, line_color="rgba(217, 217, 217, 1.0)", # Same color for all the graphics in the plot except for the highlighted ) if df_name != df_name_main else dict( type="scatter", x=df_dict[df_name]["Date"].sort_values(), y=df_dict[df_name]["Price"].map(lambda x : np.log(x)), name=False, line_color="rgba(156, 165, 196, 1.0)", # Highlight color ) for df_name in df_dict ] for df_name_main in df_dict] # titles = [df_name.split('_')[1] for df_name in df_dict] titles = ['Financials', 'Pharmaceuticals', 'Retail', 'Technology', 'Telecommunications', 'Travel'] plot = make_subplots(rows=2, cols=3, subplot_titles=titles, specs=[ [dict(type='xy', rowspan=1), dict(type='xy', rowspan=1), dict(type='xy', rowspan=1)], [dict(type='xy', rowspan=1), dict(type='xy', rowspan=1), dict(type='xy', rowspan=1)]]) i = 0 r = 1 c = 1 while True: for j in range(len(scatters[i])): plot.add_trace(scatters[i][j], row=r, col = c) i += 1 c += 1 if c == 4 and r == 1: r += 1 c = 1 elif c == 4 and r == 2: break # Define the new layout layout_scatter =dict(title=dict(text='', font=dict(size=12), x=0.5), plot_bgcolor='rgba(0,0,0,0)', # paper_bgcolor='rgba(0,0,0,0)', showlegend = False, ) plot.update_yaxes(title_text="Stock Value", row=1, col=1) plot.update_yaxes(title_text="Stock Value", row=2, col=1) plot.update_yaxes(visible = False, row=1, col=2) plot.update_yaxes(visible = False, row=2, col=2) plot.update_yaxes(visible = False, row=1, col=3) plot.update_yaxes(visible = False, row=2, col=3) plot.update_xaxes(title_text="Date", row=2, col=1) plot.update_xaxes(title_text="Date", row=2, col=2) plot.update_xaxes(title_text="Date", row=2, col=3) plot.update_xaxes(visible = False, row=1, col=1) plot.update_xaxes(visible = False, row=1, col=2) plot.update_xaxes(visible = False, row=1, col=3) plot.update_layout(layout_scatter, height=600, paper_bgcolor='rgba(0,0,0,0)', plot_bgcolor='rgba(0,0,0,0)' ) plot.show() ```
``` import numpy as np import matplotlib.pyplot as plt ``` # 1. ## a) ``` def simetrica(A): "Check whether the matrix A is symmetric" return np.all(A == A.T) def pozitiv_definita(A): "Check whether the matrix A is positive definite" for i in range(1, len(A) + 1): d_minor = np.linalg.det(A[:i, :i]) if d_minor <= 0: # all leading principal minors must be strictly positive return False return True def fact_ll(A): # Step 1 if not simetrica(A): raise Exception("Not symmetric") if not pozitiv_definita(A): raise Exception("Not positive definite") N = A.shape[0] # Step 2 S = A.copy() L = np.zeros((N, N)) # Step 3 for i in range(N): # Update column i of the matrix L L[:, i] = S[:, i] / np.sqrt(S[i, i]) # Compute the new Schur complement S_21 = S[i + 1:, i] S_nou = np.eye(N) S_nou[i + 1:, i + 1:] = S[i + 1:, i + 1:] - np.outer(S_21, S_21.T) / S[i, i] S = S_nou # Return the computed matrix return L A = np.array([ [25, 15, -5], [15, 18, 0], [-5, 0, 11] ], dtype=np.float64) L = fact_ll(A) print("L is:") print(L) print("Check:") print(L @ L.T) ``` ## b) Since $A = LL^T$, solving $Ax = b$ amounts to forward substitution for $y$ in $Ly = b$, followed by backward substitution for $x$ in $L^T x = y$. ``` b = np.array([1, 2, 3], dtype=np.float64) y = np.zeros(3) x = np.zeros(3) # Forward substitution for i in range(0, 3): coefs = L[i, :i + 1] values = y[:i + 1] y[i] = (b[i] - coefs @ values) / L[i, i] L_t = L.T # Backward substitution for i in range(2, -1, -1): coefs = L_t[i, i + 1:] values = x[i + 1:] x[i] = (y[i] - coefs @ values) / L_t[i, i] print("x =", x) print() print("Check: A @ x =", A @ x) ``` ## 2. ``` def step(x, f, df): "Compute one step of the Newton-Raphson method." return x - f(x) / df(x) def newton_rhapson(f, df, x0, eps): "Find a solution of f(x) = 0 starting from x_0" # The first point is the one received as a parameter prev_x = x0 # Perform one iteration x = step(x0, f, df) N = 1 while True: # Check the stopping condition if abs(x - prev_x) / abs(prev_x) < eps: break # Perform one more step prev_x = x x = step(x, f, df) # Count the number of iterations N += 1 return x, N ``` The given function is $$ f(x) = x^3 + 3 x^2 - 18 x - 40 $$ and its derivatives are $$ f'(x) = 3x^2 + 6 x - 18 $$ $$ f''(x) = 6x + 6 $$ ``` f = lambda x: (x ** 3) + 3 * (x ** 2) - 18 * x - 40 df = lambda x: 3 * (x ** 2) + 6 * x - 18 ddf = lambda x: 6 * x + 6 left = -8 right = +8 x_grafic = np.linspace(left, right, 500) def set_spines(ax): # Move the coordinate axes ax.spines['bottom'].set_position('zero') ax.spines['top'].set_color('none') ax.spines['left'].set_position('zero') ax.spines['right'].set_color('none') fig, ax = plt.subplots(dpi=120) set_spines(ax) plt.plot(x_grafic, f(x_grafic), label='$f$') plt.plot(x_grafic, df(x_grafic), label="$f'$") plt.plot(x_grafic, ddf(x_grafic), label="$f''$") plt.legend() plt.show() ``` We choose subintervals such that $f(a) f(b) < 0$: - $[-8, -4]$ - $[-4, 0]$ - $[2, 6]$ For each of these, we look for a point $x_0$ such that $f(x_0) f''(x_0) > 0$: - $-6$ - $-1$ - $5$ ``` eps = 1e-3 x1, _ = newton_rhapson(f, df, -6, eps) x2, _ = newton_rhapson(f, df, -1, eps) x3, _ = newton_rhapson(f, df, 5, eps) fig, ax = plt.subplots(dpi=120) plt.suptitle('Solutions of $f(x) = 0$') set_spines(ax) plt.plot(x_grafic, f(x_grafic)) plt.scatter(x1, 0) plt.scatter(x2, 0) plt.scatter(x3, 0) plt.show() ```
# Input data representation as 2D array of 3D blocks > An easy way to represent input data to neural networks or any other machine learning algorithm in the form of a 2D array of 3D blocks - toc: false - branch: master - badges: true - comments: true - categories: [machine learning, jupyter, graphviz] - image: images/array_visualiser/thumbnail.png - search_exclude: false --- Often while working with machine learning algorithms, the developer has a good picture of how the input data looks, apart from knowing what the input data is. Also, most of the time the input data is represented or described with array terminology. Hence, this particular post is one such attempt to create simple 2D representations of 3D blocks symbolising the arrays used for input. [Graphviz](https://graphviz.readthedocs.io/en/stable/), a highly versatile graphing library that creates graphs based on the DOT language, is used to create the 2D array representation of 3D blocks with annotation and color uniformity, giving quick and concise graphs/pictures for good explanations of the input data used in various machine learning/deep learning algorithms. What follows is a script to create the 2D array representation of 3D blocks, mainly intended for time-series data. The script facilitates some features which include: * starting at time instant 0 or -1 * counting backwards, i.e. t-4 -> t-3 -> t-2 -> t-1 -> t-0, or counting forwards, t-0 -> t-1 -> t-2 -> t-3 -> t-4 -> t-5 ### Imports and global constants ``` import graphviz as G # to create the required graphs import random # to generate random hex codes for colors FORWARDS = True # to visualise array from left to right BACKWARDS = False # to visualise array from right to left ``` ### Properties of 2D representation of 3D array blocks The main features/properties of the array visualisation are defined here before actually creating the graph/picture. 1) Number of rows: similar to rows in a matrix, where each row corresponds to one particular data type, with data across different time instants arranged in columns 2) Blocks: the number of time instants in each row (jagged arrays can also be graphed) 3) Prefix: the annotation used to label each 3D block in the 2D array representation ``` ROW_NUMS = [1, 2] # Layer numbers corresponding to the number of rows of array data (must be contiguous) BLOCKS = [3, 3] # number of data fields in each row i.e., columns in each row diff = [x - ROW_NUMS[i] for i, x in enumerate(ROW_NUMS[1:])] assert diff == [1]*(len(ROW_NUMS) - 1), '"layer_num" should contain contiguous numbers only' assert len(ROW_NUMS) == len(BLOCKS), "'cells' list and 'layer_num' list should contain same number of entries" direction = BACKWARDS # control the direction of countdown of timesteps INCLUDE_ZERO = True # for time series based data START_AT = 0 if INCLUDE_ZERO else 1 # names = [['Softmax\nprobabilities', 'p1', 'p2', 'p3', 'p4', 'p5', 'p6', 'p7', 'p8', 'p9', 'p10'],['', ' +', ' +', ' +', ' +', ' +', ' +'],['GMM\nprobabilities', 'p1', 'p2', 'p3', 'p4', 'p5', 'p6']] # the trick to adding symbols like the "partial(dou)" i.e. 
# '∂' is to write these symbols in a markdown cell using the $\partial$ mathjax support and
# copying the content after being rendered, pasting it in the code as a string wherever needed
prefix = ['∂(i)-', '∂(v)-']

r = lambda: random.randint(0,255)  # to generate random colors for each row

# instantiate a directed graph with initial properties
dot = G.Digraph(comment='Matrix',
                graph_attr={'nodesep':'0.02', 'ranksep':'0.02', 'bgcolor':'transparent'},
                node_attr={'shape':'box3d', 'fixedsize':'true', 'width':'1.1'})

for row_no in ROW_NUMS:
    if row_no != 1:
        dot.edge(str(row_no-1)+str(START_AT), str(row_no)+str(START_AT), style='invis')  # invisible edges to constrain layout
    with dot.subgraph() as sg:
        sg.attr(rank='same')
        color = '#{:02x}{:02x}{:02x}'.format(r(),r(),r())
        for block_no in range(START_AT, BLOCKS[row_no-1]+START_AT):
            if direction:
                sg.node(str(row_no)+str(block_no), 't-'+str(block_no), style='filled', fillcolor=color)
            else:
                if START_AT == 0:
                    sg.node(str(row_no)+str(block_no), prefix[row_no-1]+str(BLOCKS[row_no-1]-block_no-1), style='filled', fillcolor=color)
                else:
                    sg.node(str(row_no)+str(block_no), prefix[row_no-1]+str(BLOCKS[row_no-1]-block_no-1), style='filled', fillcolor=color)
```

### Render

```
dot
```

### Save/Export

```
# dot.format = 'jpeg' # or PDF, SVG, JPEG, PNG, etc.

# to save the file, pdf is default
dot.render('./lstm_input')
```

### Additional script to just show the breakdown of train-test data of the dataset being used

```
import random

r = lambda: random.randint(0,255)  # to generate random colors for each row

folders = G.Digraph(node_attr={'style':'filled'},
                    graph_attr={'style':'invis', 'rankdir':'LR'},
                    edge_attr={'color':'black', 'arrowsize':'.2'})

color = '#{:02x}{:02x}{:02x}'.format(r(),r(),r())
with folders.subgraph(name='cluster0') as f:
    f.node('root', 'Dataset \n x2000', shape='folder', fillcolor=color)

color = '#{:02x}{:02x}{:02x}'.format(r(),r(),r())
with folders.subgraph(name='cluster1') as f:
    f.node('train', 'Train \n 1800', shape='note', fillcolor=color)
    f.node('test', 'Test \n x200', shape='note', fillcolor=color)

folders.edge('root', 'train')
folders.edge('root', 'test')

folders

folders.render('./dataset')
```
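For readers who want the core Graphviz idea in isolation, the following stripped-down sketch (not part of the original script; node names and the color are chosen here for illustration) draws a single row of three `box3d` blocks kept on the same rank:

```
import graphviz as G

# Minimal single-row version of the visualisation built above:
# three box3d nodes on one rank, labelled t-0, t-1, t-2
demo = G.Digraph(graph_attr={'nodesep': '0.02', 'bgcolor': 'transparent'},
                 node_attr={'shape': 'box3d', 'fixedsize': 'true', 'width': '1.1'})

with demo.subgraph() as row:
    row.attr(rank='same')  # keep all blocks on one horizontal row
    for t in range(3):
        row.node('blk' + str(t), 't-' + str(t), style='filled', fillcolor='#87cefa')

demo  # in a notebook, displaying the object renders the graph
```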
``` import jupman; jupman.init() ``` # Jupman Tests Tests and cornercases. The page Title has one sharp, the Sections always have two sharps. ## Sezione 1 bla bla ## Sezione 2 Subsections always have three sharps ### Subsection 1 bla bla ### Subsection 2 bla bla ## Quotes > I'm quoted with **greater than** symbol > on multiple lines > Am I readable? I'm quoted with **spaces** on multiple lines Am I readable? ## Download links Files manually put in `_static` : * Download [trial.odt](_static/trial.odt) * Download [trial.pdf](_static/trial.pdf) Files in arbitrary folder position : * Download [requirements.txt](requirements.txt) NOTE: download links are messy, [see issue 8](https://github.com/DavidLeoni/jupman/issues/8) ## Info/Warning Boxes Until there is an info/warning extension for Markdown/CommonMark (see this issue), such boxes can be created by using HTML <div> elements like this: <div class="alert alert-info"> **Note:** This is an info! </div> <div class="alert alert-warning"> **Note:** This is a warn! </div> For this to work reliably, you should obey the following guidelines: * The class attribute has to be either "alert alert-info" or "alert alert-warning", other values will not be converted correctly. * No further attributes are allowed. * For compatibility with CommonMark, you should add an empty line between the <div> start tag and the beginning of the content. ## Math For math stuff, [see npshpinx docs](https://nbsphinx.readthedocs.io/en/0.2.14/markdown-cells.html#Equations) Here we put just some equation to show it behaves fine in Jupman This is infinity: $\infty$ ## Unicode Unicode characters should display an HTML, but with latex you might have problems, and need to manually map characters in conf.py You should see a star in a black circle: ✪ You should see a check: ✓ table characters: │ ├ └ ─ ## Image ### SVG Images SVG images work in notebook, but here it is commented since it breaks Latex, [see issue](https://github.com/DavidLeoni/jupman/issues/1) ``` ![An image](img/cc-by.svg) ``` This one also doesn't works (and shows ugly code in the notebook anyway) ``` from IPython.display import SVG SVG(filename='img/cc-by.svg') ``` ### PNG Images ![A PNG image](_static/img/notebook_icon.png) ### Inline images - pure markdown Bla ![A PNG image](_static/img/notebook_icon.png) bli blo Bla ![A PNG image](_static/img/notebook_icon.png) bli blo ### Inline images - markdown and img bla <img style="display:inline" src="_static/img/notebook_icon.png"> bli blo bla <img style="display:inline !important" src="_static/img/notebook_icon.png"> bli blo ### Img class If we pass a class, it will to be present in the website: <img class="jupman-inline-img" src="_static/img/notebook_icon.png"> This <img class="jupman-inline-img" src="_static/img/notebook_icon.png"> should be inline ## Expressions list Highlighting **does** work both in Jupyter and Sphinx Three quotes, multiple lines - Careful: put **exactly 4 spaces** indentation 1. ```python [2,3,1] != "[2,3,1]" ``` 1. ```python [4,8,12] == [2*2,"4*2",6*2] ``` 1. ```python [][:] == [] ``` Three quotes, multiple lines, more compact - works in Jupyter, **doesn't** in Sphinx 1. ```python [2,3,1] != "[2,3,1]"``` 1. ```python [4,8,12] == [2*2,"4*2",6*2]``` 1. ```python [][:] == []``` Highlighting **doesn't** work in Jupyter neither in Sphinx: Three quotes, single line 1. ```python [2,3,1] != ["2",3,1]``` 1. ```python [4,8,12] == [2*2,"4*2",6*2]``` 1. ```python [][:] == "[]"``` Single quote, single line 1. `python [2,3,1] != ["2",3,1]` 1. 
`python [4,8,12] == [2*2,"4*2",6*2]` 1. `python [][:] == "[]"` ## Togglable cells There are various ways to have togglable cells. ### Show/hide exercises (PREFERRED) If you need clickable show/hide buttons for exercise solutions , see here: [Usage - Exercise types](https://jupman.softpython.org/en/latest/usage.html#Type-of-exercises). It manages comprehensively use cases for display in website, student zips, exams, etc If you have other needs, we report here some test we made, but keep in mind this sort of hacks tend to change behaviour with different versions of jupyter. ### Toggling with Javascript * Works in MarkDown * Works while in Jupyter * Works in HTML * Does not show in Latex (which might be a good point, if you intend to put somehow solutions at the end of the document) * NOTE: after creating the text to see the results you have to run the initial cell with jupman.init (as for the toc) * NOTE: you can't use Markdown block code since of Sept 2017 doesn't show well in HTML output <div class="jupman-togglable"> <code> <pre> # SOME CODE color = raw_input("What's your eyes' color?") if color == "": sys.exit() </pre> </code> </div> <div class="jupman-togglable" data-jupman-show="Customized show msg" data-jupman-hide="Customized hide msg"> <code> <pre> # SOME OTHER CODE how_old = raw_input("How old are you?") x = random.randint(1,8) if question == "": sys.exit() </pre> </code> </div> ### HTML details in Markdown, code tag * Works while in Jupyter * Doesn't work in HTML output * as of Sept Oct 2017, not yet supported in Microsoft browsers <details> <summary>Click here to see the code</summary> <code> question = raw_input("What?") answers = random.randint(1,8) if question == "": sys.exit() </code> </details> ### HTML details in Markdown, Markdown mixed code * Works while in Jupyter * Doesn't work in HTML output * as of Sept Oct 2017, not yet supported in Microsoft browsers <details> <summary>Click here to see the code</summary> ```python question = raw_input("What?") answers = random.randint(1,8) if question == "": sys.exit() ``` </details> ### HTML details in HTML, raw NBConvert Format * Doesn't work in Jupyter * Works in HTML output * NOTE: as of Sept Oct 2017, not yet supported in Microsoft browsers * Doesn't show at all in PDF output Some other Markdown cell afterwards .... ## Files in templates Since Dec 2019 they are not accessible [see issue 10](https://github.com/DavidLeoni/jupman/issues/10), but it is not a great problem, you can always put a link to Github, see for example [exam-yyyy-mm-dd.ipynb](https://github.com/DavidLeoni/jupman/tree/master/_templates/exam/exam-yyyy-mm-dd.ipynb) ## Python tutor There are various ways to embed Python tutor, first we put the recommended one. ### jupman.pytut **RECOMMENDED**: You can put a call to `jupman.pytut()` at the end of a cell, and the cell code will magically appear in python tutor in the output (except the call to `pytut()` of course). Does not need internet connection. 
``` x = [5,8,4,10,30,20,40,50,60,70,20,30] y= {3:9} z = [x] jupman.pytut() ``` **jupman.pytut scope**: BEWARE of variables which were initialized in previous cells, they WILL NOT be available in Python Tutor: ``` w = 8 x = w + 5 jupman.pytut() ``` **jupman.pytut window overflow**: When too much right space is taken, it might be difficult to scroll: ``` x = [3,2,5,2,42,34,2,4,34,2,3,4,23,4,23,4,2,34,23,4,23,4,23,4,234,34,23,4,23,4,23,4,2] jupman.pytut() x = w + 5 jupman.pytut() ``` **jupman.pytut execution:** Some cells might execute in Jupyter but not so well in Python Tutor, due to [its inherent limitations](https://github.com/pgbovine/OnlinePythonTutor/blob/master/unsupported-features.md): ``` x = 0 for i in range(10000): x += 1 print(x) jupman.pytut() ``` **jupman.pytut infinite loops**: Since execution occurs first in Jupyter and then in Python tutor, if you have an infinite loop no Python Tutor instance will be spawned: ```python while True: pass jupman.pytut() ``` **jupman.pytut() resizability:** long vertical and horizontal expansion should work: ``` x = {0:'a'} for i in range(1,30): x[i] = x[i-1]+str(i*10000) jupman.pytut() ``` **jupman.pytut cross arrows**: With multiple visualizations, arrows shouldn't cross from one to the other even if underlying script is loaded multiple times (relates to visualizerIdOverride) ``` x = [1,2,3] jupman.pytut() ``` **jupman.pytut print output**: With only one line of print, Print output panel shouldn't be too short: ``` print("hello") jupman.pytut() y = [1,2,3,4] jupman.pytut() ``` ### HTML magics Another option is to directly paste Python Tutor iframe in the cells, and use Jupyter `%%HTML` magics command. HTML should be available both in notebook and website - of course, requires an internet connection. Beware: you need the HTTP**S** ! ``` %%HTML <iframe width="800" height="300" frameborder="0" src="https://pythontutor.com/iframe-embed.html#code=x+%3D+5%0Ay+%3D+10%0Az+%3D+x+%2B+y&cumulative=false&py=2&curInstr=3"> </iframe> ``` ### NBTutor To show Python Tutor in notebooks, there is already a jupyter extension called [NBTutor](https://github.com/lgpage/nbtutor) , afterwards you can use magic `%%nbtutor` to show the interpreter. Unfortunately, it doesn't show in the generated HTML :-/ ``` %reload_ext nbtutor %%nbtutor for x in range(1,4): print("ciao") x=5 y=7 x +y ``` ## Stripping answers For stripping answers examples, see [jupyter-example/jupyter-example-sol](jupyter-example/jupyter-example-sol.ipynb). 
For explanation, see [usage](usage.ipynb#Tags-to-strip) ## Metadata to HTML classes ## Formatting problems ### Characters per line Python standard for code has limit to 79, many styles have 80 (see [Wikipedia](https://en.wikipedia.org/wiki/Characters_per_line)) We can keep 80: ``` -------------------------------------------------------------------------------- ``` ```python -------------------------------------------------------------------------------- ``` Errors hold 75 dashes: Plain: ``` --------------------------------------------------------------------------- ZeroDivisionError Traceback (most recent call last) <ipython-input-15-9e1622b385b6> in <module>() ----> 1 1/0 ZeroDivisionError: division by zero ``` As Python markup: ```python --------------------------------------------------------------------------- ZeroDivisionError Traceback (most recent call last) <ipython-input-15-9e1622b385b6> in <module>() ----> 1 1/0 ZeroDivisionError: division by zero ``` ``` len('---------------------------------------------------------------------------') ``` On website this **may** display a scroll bar, because it will actually print `'` apexes plus the dashes ``` '-'*80 ``` This should **not** display a scrollbar: ``` '-'*78 ``` This should **not** display a scrollbar: ``` print('-'*80) ``` ### Very large input In Jupyter: default behaviour, show scrollbar On the website: should expand in horizontal as much as it wants, the rationale is that for input code since it may be printed to PDF you should always manually put line breaks. ``` # line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment line with an exceedingly long comment # line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an 
out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment line with an an out-of-this-world long comment ``` **Very long HTML** (and long code line) Should expand in vertical as much as it wants. 
``` %%HTML <iframe width="100%" height="1300px" frameBorder="0" src="https://umap.openstreetmap.fr/en/map/mia-mappa-agritur_182055?scaleControl=false&miniMap=false&scrollWheelZoom=false&zoomControl=true&allowEdit=false&moreControl=true&searchControl=null&tilelayersControl=null&embedControl=null&datalayersControl=true&onLoadPanel=undefined&captionBar=false#11/46.0966/11.4024"></iframe><p><a href="http://umap.openstreetmap.fr/en/map/mia-mappa-agritur_182055">See full screen</a></p> ``` ### Very long output In Jupyter: by clicking, you can collapse On the website: a scrollbar should appear ``` for x in range(150): print('long output ...', x) ```
## Load Dataset

```
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns

# To draw the plots inside the notebook
%matplotlib inline

# Work around broken minus-sign glyphs in the plots
mpl.rcParams['axes.unicode_minus'] = False

import warnings
warnings.filterwarnings('ignore')

train = pd.read_csv("data/train.csv", parse_dates=["datetime"])
train.shape

test = pd.read_csv("data/test.csv", parse_dates=["datetime"])
test.shape
```

## Feature Engineering

```
train["year"] = train["datetime"].dt.year
train["month"] = train["datetime"].dt.month
train["day"] = train["datetime"].dt.day
train["hour"] = train["datetime"].dt.hour
train["minute"] = train["datetime"].dt.minute
train["second"] = train["datetime"].dt.second
train["dayofweek"] = train["datetime"].dt.dayofweek
train.shape

test["year"] = test["datetime"].dt.year
test["month"] = test["datetime"].dt.month
test["day"] = test["datetime"].dt.day
test["hour"] = test["datetime"].dt.hour
test["minute"] = test["datetime"].dt.minute
test["second"] = test["datetime"].dt.second
test["dayofweek"] = test["datetime"].dt.dayofweek
test.shape

# windspeed has 0 as its most frequent value => this looks like badly recorded data that needs to be fixed
fig, axes = plt.subplots(nrows=2)
fig.set_size_inches(18,10)

plt.sca(axes[0])
plt.xticks(rotation=30, ha='right')
axes[0].set(ylabel='Count', title="train windspeed")
sns.countplot(data=train, x="windspeed", ax=axes[0])

plt.sca(axes[1])
plt.xticks(rotation=30, ha='right')
axes[1].set(ylabel='Count', title="test windspeed")
sns.countplot(data=test, x="windspeed", ax=axes[1])

# Fill the zero windspeed values with something sensible.
# We could fill them all with the mean, but that seems unlikely to improve prediction accuracy.
# train.loc[train["windspeed"] == 0, "windspeed"] = train["windspeed"].mean()
# test.loc[train["windspeed"] == 0, "windspeed"] = train["windspeed"].mean()

# Split the data into rows where windspeed is zero and rows where it is not.
trainWind0 = train.loc[train['windspeed'] == 0]
trainWindNot0 = train.loc[train['windspeed'] != 0]
print(trainWind0.shape)
print(trainWindNot0.shape)

# Instead, predict the windspeed with machine learning and fill it in.
from sklearn.ensemble import RandomForestClassifier

def predict_windspeed(data):

    # Split into rows with zero and non-zero windspeed.
    dataWind0 = data.loc[data['windspeed'] == 0]
    dataWindNot0 = data.loc[data['windspeed'] != 0]

    # Select the features used to predict windspeed.
    wCol = ["season", "weather", "humidity", "month", "temp", "year", "atemp"]

    # Cast the non-zero windspeed values to strings (class labels for the classifier).
    dataWindNot0["windspeed"] = dataWindNot0["windspeed"].astype("str")

    # Use a random forest classifier.
    rfModel_wind = RandomForestClassifier()

    # Learn windspeed from the features in wCol.
    rfModel_wind.fit(dataWindNot0[wCol], dataWindNot0["windspeed"])

    # Predict the windspeed for the rows where it was recorded as 0.
    wind0Values = rfModel_wind.predict(X = dataWind0[wCol])

    # To compare the values after prediction,
    # create new data frames that will hold the predicted values.
    predictWind0 = dataWind0
    predictWindNot0 = dataWindNot0

    # Insert the predicted values where windspeed was recorded as 0.
    predictWind0["windspeed"] = wind0Values

    # Append the data frame with the predictions to the one with non-zero windspeed.
    data = predictWindNot0.append(predictWind0)

    # Set the windspeed dtype back to float.
    data["windspeed"] = data["windspeed"].astype("float")

    data.reset_index(inplace=True)
    data.drop('index', inplace=True, axis=1)

    return data

# Fix the zero values.
train = predict_windspeed(train)
# test = predict_windspeed(test)

# Visualize the data after fixing the zero windspeed values
fig, ax1 = plt.subplots()
fig.set_size_inches(18,6)

plt.sca(ax1)
plt.xticks(rotation=30, ha='right')
ax1.set(ylabel='Count', title="train windspeed")
sns.countplot(data=train, x="windspeed", ax=ax1)
```

## Feature Selection

* We need to tell signal from noise.
* Having many features does not automatically give better performance.
* Add and change features one at a time, and remove the ones that do not improve performance (see the sketch below).
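The following is a minimal sketch of that iterative process and is not part of the original notebook: the candidate feature sets and the simple RMSE-on-log-counts scorer are illustrative choices, not the final setup used later.

```
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer, mean_squared_error

# Simple RMSE-on-log-counts scorer, only for this illustration
def rmse_log(y_true, y_pred):
    return np.sqrt(mean_squared_error(np.log1p(y_true), np.log1p(y_pred)))

candidate_feature_sets = {
    "base":             ["temp", "humidity", "hour"],
    "base + season":    ["temp", "humidity", "hour", "season"],
    "base + windspeed": ["temp", "humidity", "hour", "windspeed"],
}

for name, cols in candidate_feature_sets.items():
    model = RandomForestRegressor(n_estimators=10, n_jobs=-1, random_state=0)
    scores = cross_val_score(model, train[cols], train["count"], cv=3,
                             scoring=make_scorer(rmse_log, greater_is_better=False))
    # keep the feature set with the lowest score, drop changes that do not help
    print("{}: {:.5f}".format(name, -scores.mean()))
```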
```
# Continuous features and categorical features
# continuous features = ["temp", "humidity", "windspeed", "atemp"]

# Convert the categorical features to the "category" dtype.
categorical_feature_names = ["season", "holiday", "workingday", "weather",
                             "dayofweek", "month", "year", "hour"]

for var in categorical_feature_names:
    train[var] = train[var].astype("category")
    test[var] = test[var].astype("category")

feature_names = ["season", "weather", "temp", "atemp", "humidity", "windspeed",
                 "year", "hour", "dayofweek", "holiday", "workingday"]

feature_names

X_train = train[feature_names]

print(X_train.shape)
X_train.head()

X_test = test[feature_names]

print(X_test.shape)
X_test.head()

label_name = "count"

y_train = train[label_name]

print(y_train.shape)
y_train.head()
```

# Score

## RMSLE

It penalizes under-predicted items more heavily than over-predicted ones. It is the square root of the mean of the squared (log) errors, so the smaller the value, the higher the precision; a value close to 0 means very precise predictions.

Submissions are evaluated on the Root Mean Squared Logarithmic Error (RMSLE)

$$ \sqrt{\frac{1}{n} \sum_{i=1}^n (\log(p_i + 1) - \log(a_i+1))^2 } $$

* \\({n}\\) is the number of hours in the test set
* \\(p_i\\) is your predicted count
* \\(a_i\\) is the actual count
* \\(\log(x)\\) is the natural logarithm
* For a more detailed explanation, see: [RMSLE cost function](https://www.slideshare.net/KhorSoonHin/rmsle-cost-function)
* The logarithm is applied to the residuals before averaging => this is what penalizes under-predicted items more than over-predicted ones.
* It expresses the error with respect to the ground truth as a single number: the larger the value, the larger the error.
* The smaller the value, the smaller the error.

![image.png](https://upload.wikimedia.org/wikipedia/commons/thumb/7/73/Logarithms.svg/456px-Logarithms.svg.png)
Image source: Wikipedia https://ko.wikipedia.org/wiki/로그

```
from sklearn.metrics import make_scorer

def rmsle(predicted_values, actual_values):
    # Convert to numpy arrays.
    predicted_values = np.array(predicted_values)
    actual_values = np.array(actual_values)

    # Add 1 to the predicted and actual values and take the log.
    log_predict = np.log(predicted_values + 1)
    log_actual = np.log(actual_values + 1)

    # Subtract the actual (log) values from the predicted ones and square the difference.
    difference = log_predict - log_actual
    # difference = (log_predict - log_actual) ** 2
    difference = np.square(difference)

    # Take the mean.
    mean_difference = difference.mean()

    # Take the square root.
    score = np.sqrt(mean_difference)

    return score

rmsle_scorer = make_scorer(rmsle)
rmsle_scorer
```

### Cross Validation

* To measure generalization performance, the data is split repeatedly and several models are trained.

![image.png](https://www.researchgate.net/profile/Halil_Bisgin/publication/228403467/figure/fig2/AS:302039595798534@1449023259454/Figure-4-k-fold-cross-validation-scheme-example.png)
Image source: https://www.researchgate.net/figure/228403467_fig2_Figure-4-k-fold-cross-validation-scheme-example

* KFold cross-validation
    * Split the data into subsets of similar size, called folds (n_splits), and measure the accuracy for each fold.
    * Use the first fold as the test set and train a model on the remaining folds.
    * Evaluate the model trained on the remaining folds against the first fold.
    * Next, the second fold becomes the test set, and the model trained on the remaining folds is evaluated on the second fold.
    * Repeat this process up to the last fold.
    * The final accuracy is the average of the accuracies measured over these N train/test splits.

```
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score

k_fold = KFold(n_splits=10, shuffle=True, random_state=0)
```
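Before fitting the model, a quick numerical check (not part of the original notebook) makes the claim above concrete: for the same absolute error of 50 counts, under-prediction yields a larger RMSLE than over-prediction.

```
actual = np.array([100, 100, 100])

# Same absolute error, opposite direction
print("over-prediction  RMSLE:", rmsle(actual + 50, actual))   # ~0.40
print("under-prediction RMSLE:", rmsle(actual - 50, actual))   # ~0.68
```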
## RandomForest

```
from sklearn.ensemble import RandomForestRegressor

max_depth_list = []

model = RandomForestRegressor(n_estimators=100,  # more estimators are better, but slower
                              n_jobs=-1,
                              random_state=0)
model

%time score = cross_val_score(model, X_train, y_train, cv=k_fold, scoring=rmsle_scorer)
score = score.mean()
# The closer the score is to 0, the better.
print("Score= {0:.5f}".format(score))
```

## Train

```
# Fit the model (think of fitting clothes): give it the features and the labels and it learns by itself
model.fit(X_train, y_train)

# Predict
predictions = model.predict(X_test)

print(predictions.shape)
predictions[0:10]

# Visualize the predicted data.
fig, (ax1, ax2) = plt.subplots(ncols=2)
fig.set_size_inches(12,5)
sns.distplot(y_train, ax=ax1, bins=50)
ax1.set(title="train")
sns.distplot(predictions, ax=ax2, bins=50)
ax2.set(title="test")
```

# Submit

```
submission = pd.read_csv("data/sampleSubmission.csv")
submission

submission["count"] = predictions

print(submission.shape)
submission.head()

submission.to_csv("data/Score_{0:.5f}_submission.csv".format(score), index=False)
```

References:
* [EDA & Ensemble Model (Top 10 Percentile) | Kaggle](https://www.kaggle.com/viveksrinivasan/eda-ensemble-model-top-10-percentile)
* [How to finish top 10 percentile in Bike Sharing Demand Competition In Kaggle? (part -1)](https://medium.com/@viveksrinivasan/how-to-finish-top-10-percentile-in-bike-sharing-demand-competition-in-kaggle-part-1-c816ea9c51e1)
* [How to finish top 10 percentile in Bike Sharing Demand Competition In Kaggle? (part -2)](https://medium.com/@viveksrinivasan/how-to-finish-top-10-percentile-in-bike-sharing-demand-competition-in-kaggle-part-2-29e854aaab7d)
# From Variables to Classes ## A short Introduction Python - as any programming language - has many extensions and libraries at its disposal. Basically, there exist libraries for everything. <center>But what are **libraries**? </center> Basically, **libraries** are a collection of methods (_small pieces of code where you put sth in and get sth else out_) which you can use to analyse your data, visualise your data, run models ... do anything you like. As said, methods usually take _something_ as input. That _something_ is usually a **variable**. In the following, we will work our way from **variables** to **libraries**. ## Variables Variables are one of the simplest types of objects in a programming language. An [object](https://en.wikipedia.org/wiki/Object_(computer_science) is a value stored in the memory of your computer, marked by a specific identifyer. Variables can have different types, such as [strings, numbers, and booleans](https://www.learnpython.org/en/Variables_and_Types). Differently to other programming languages, you do not need to declare the type of a variable, as variables are handled as objects in Python. ```python x = 4.2 # floating point number y = 'Hello World!' # string z = True # boolean ``` ``` x = 4.24725723 print(type(x)) y = 'Hello World! Hello universe' print(y) z = True print(type(z)) ``` We can use operations (normal arithmetic operations) to use variables for getting results we want. With numbers, you can add, substract, multiply, divide, basically taking the values from the memory assigned to the variable name and performing calculations. Let's have a look at operations with numbers and strings. We leave booleans to the side for the moment. We will simply add the variables below. ```python n1 = 7 n2 = 42 s1 = 'Looking good, ' s2 = 'you are.' ``` ``` n1 = 7 n2 = 42 s1 = 'Looking good, ' s2 = 'you are.' first_sum = n1 + n2 print(first_sum) first_conc = s1 + s2 print(first_conc) ``` Variables can be more than just a number. If you think of an Excel-Spreadsheet, a variable can be the content of a single cell, or multiple cells can be combined in one variable (e.g. one column of an Excel table). So let's create a list -_a collection of variables_ - from `x`, `n1`, and `n2`. Lists in python are created using [ ]. Now, if you want to calculate the sum of this list, it is really exhausting to sum up every item of this list manually. ```python first_list = [x, n1, n2] # a sum of a list could look like second_sum = some_list[0] + some_list[1] + ... + some_list[n] # where n is the last item of the list, e.g. 2 for first_list. ``` Actually, writing the second sum like this is the same as before. It would be great, if this step of calculating the sum could be used many times without writing it out. And this is, what functions are for. For example, there already exists a sum function: ```python sum(first_list)``` ``` first_list = [x, n1, n2] second_sum = first_list[0] + first_list[1] + first_list[2] print('manual sum {}'.format(second_sum)) # This can also be done with a function print('sum function {}'.format(sum(first_list))) ``` ## Functions The `sum()` method we used above is a **function**. Functions (later we will call them methods) are pieces of code, which take an input, perform some kind of operation, and (_optionally_) return an output. 
In Python, functions are written like: ```python def func(input): """ Description of the functions content # called the function header """ some kind of operation on input # called the function body return output ``` As an example, we write a `sumup` function which sums up a list. ``` def sumup(inp): """ input: inp - list/array with floating point or integer numbers return: sumd - scalar value of the summed up list """ val = 0 for i in inp: val = val + i return val # let's compare the implemented standard sum function with the new sumup function sum1 = sum(first_list) sum2 = sumup(first_list) print("The python sum function yields {}, \nand our sumup function yields {}.".format(*(sum1,sum2))) # summing up the numbers from 1 to 100 import numpy as np ar_2_sum = np.linspace(1,100,100, dtype='i') print("the sum of the array is: {}".format(sumup(ar_2_sum))) ``` As we see above, functions are quite practical and save a lot of time. Further, they help structuring your code. Some functions are directly available in python without any libraries or other external software. In the example above however, you might have noticed, that we `import`ed a library called `numpy`. In those libraries, functions are merged to one package, having the advantage that you don't need to import each single function at a time. Imagine you move and have to pack all your belongings. You can think of libraries as packing things with similar purpose in the same box (= library). ## Functions to Methods as part of classes When we talk about functions in the environment of classes, we usually call them methods. But what are **classes**? [Classes](https://docs.python.org/3/tutorial/classes.html) are ways to bundle functionality together. Logically, functionality with similar purpose (or different kind of similarity). One example could be: think of **apples**. Apples are now a class. You can apply methods to this class, such as `eat()` or `cut()`. Or more sophisticated methods including various recipes using apples comprised in a cookbook. The `eat()` method is straight forward. But the `cut()` method may be more interesting, since there are various ways to cut an apple. Let's assume there are two apples to be cut differently. In python, once you have assigned a class to a variable, you have created an **instance** of that class. Then, methods of are applied to that instance by using a . notation. ```python Golden_Delicious = apple() Yoya = apple() Golden_Delicious.cut(4) Yoya.cut(8) ``` The two apples Golden Delicious and Yoya are _instances_ of the class apple. Real _incarnations_ of the abstract concept _apple_. The Golden Delicious is cut into 4 pieces, while the Yoya is cut into 8 pieces. This is similar to more complex libraries, such as the `scikit-learn`. In one exercise, you used the command: ```python from sklearn.cluster import KMeans ``` which simply imports the **class** `KMeans` from the library part `sklearn.cluster`. `KMeans` comprises several methods for clustering, which you can use by calling them similar to the apple example before. For this, you need to create an _instance_ of the `KMeans` class. ```python ... kmeans_inst = KMeans(n_clusters=n_clusters) # first we create the instance of the KMeans class called kmeans_inst kmeans_inst.fit(data) # then we apply a method to the instance kmeans_inst ... 
``` An example: ``` # here we just create the data for clustering from sklearn.datasets.samples_generator import make_blobs import matplotlib.pyplot as plt %matplotlib inline X, y = make_blobs(n_samples=100, centers=3, cluster_std= 0.5, random_state=0) plt.scatter(X[:,0], X[:,1], s=70) # now we create an instance of the KMeans class from sklearn.cluster import KMeans nr_of_clusters = 3 # because we see 3 clusters in the plot above kmeans_inst = KMeans(n_clusters= nr_of_clusters) # create the instance kmeans_inst kmeans_inst.fit(X) # apply a method to the instance y_predict = kmeans_inst.predict(X) # apply another method to the instance and save it in another variable # lets plot the predicted cluster centers colored in the cluster color plt.scatter(X[:, 0], X[:, 1], c=y_predict, s=50, cmap='Accent') centers = kmeans_inst.cluster_centers_ # apply the method to find the new centers of the determined clusters plt.scatter(centers[:, 0], centers[:, 1], c='red', s=200, alpha=0.6); # plot the cluster centers ``` ## Summary This short presentation is meant to make you familiar with the concept of variables, functions, methods and classes. All of which are objects! * Variables are normally declared by the user and link a value stored in the memory of your pc to a variable name. They are usually the input of functions * Functions are pieces of code taking an input and performing some operation on said input. Optionally, they return directly an output value * To facilitate the use of functions, they are sometimes bundled as methods within classes. Classes in turn can build up whole libraries in python. * Similar to real book libraries, python libraries contain a collection of _recipes_ which can be applied to your data. * In terms of apples: You own different kinds of apples. A book about apple dishes (_class_) from the library contains different recipes (_methods_) which can be used for your different apples (_instances of the class_). ## Further links * [Data Science Handbook](https://jakevdp.github.io/PythonDataScienceHandbook/) * [Python for Geosciences](https://github.com/koldunovn/python_for_geosciences) * [Introduction to Python for Geoscientists](http://ggorman.github.io/Introduction-to-programming-for-geoscientists/) * [Full Video course on Object Oriented Programming](https://www.youtube.com/watch?v=ZDa-Z5JzLYM&list=PL-osiE80TeTsqhIuOqKhwlXsIBIdSeYtc)
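Tying the apple metaphor back to real syntax, here is a minimal, self-contained sketch; the class and its methods are invented for illustration and are not part of any library:

```
class Apple:
    """A tiny class following the apple metaphor used above."""

    def __init__(self, variety):
        self.variety = variety  # each instance remembers its own variety
        self.pieces = 1         # a whole apple to start with

    def cut(self, n):
        """Cut the apple into n pieces."""
        self.pieces = n
        return self.pieces

    def eat(self):
        """Eat whatever is left of the apple."""
        self.pieces = 0
        return "The {} is gone.".format(self.variety)


# Two instances (incarnations) of the same class
golden_delicious = Apple("Golden Delicious")
yoya = Apple("Yoya")

print(golden_delicious.cut(4))  # 4 pieces
print(yoya.cut(8))              # 8 pieces
print(yoya.eat())               # The Yoya is gone.
```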
``` # Dependencies from bs4 import BeautifulSoup as bs import requests import pymongo from splinter import Browser from webdriver_manager.chrome import ChromeDriverManager import pandas as pd ``` ``` #setup splinter executable_path = {'executable_path': ChromeDriverManager().install()} browser = Browser('chrome', **executable_path, headless=False) # Save url and visit the page url = 'https://redplanetscience.com/' browser.visit(url) html = browser.html soup = bs(html, "html.parser") #collect the latest tand extract text News Title form html document title = soup.find('div', class_='content_title').text #collect Paragraph Text new_p = soup.find('div', class_='article_teaser_body').text # Print title and paragraph print(title) print('------------------------------------------------------') print(new_p) # get the image url new_url="https://spaceimages-mars.com/" browser.visit(new_url) # set the path xpath = "/html/body/div[1]/img" #bring the full resolution full = browser.find_by_xpath(xpath) image_ = full[0] image_.click() #create a beautiful object html = browser.html soup = bs(html, "html.parser") image_url = soup.find("img", class_="headerimage fade-in")["src"] #concatenate to find the image url featured_image_url = url + image_url featured_image_url ``` ``` #set the URL mars_url ="https://galaxyfacts-mars.com/" #Extract the Facts Table from the URL using pandas tables = pd.read_html(mars_url) tables # set the columns in the dataframes mars_df = tables[1] mars_df.rename(columns ={0 :'Description', 1: 'Dimension'}, inplace = True) mars_df #convert the dataframe back to html format html = mars_df.to_html() html ``` ``` #extract the image from html file astro ="https://marshemispheres.com/" browser.visit(astro) #create a beautiful object html = browser.html soup = bs(html, "html.parser") print(soup.prettify()) ``` Step 2 - MongoDB and Flask Application ``` # Extract the hemisphere element results = soup.find("div", class_="collapsible results") hemispheres = results.find_all("div", class_="item") #create the list to store the image urls hemisphere_image_urls=[] #Iterate trough each image for hemisphere in hemispheres : # Scrape the titles title = hemisphere.find("h3").text # the hem links link = hemisphere.find("a")["href"] u_url ="https://marshemispheres.com/" hem_link = u_url + link browser.visit(hem_link) # Parse link HTMLs with Beautiful Soup html = browser.html soup = bs(html, "html.parser") print(soup.prettify()) # Scrape the full size images load = soup.find("div", class_="downloads") load_url = load.find("a")["href"] # Add URLs and Titles hemisphere_image_urls.append({"title": title, "image_url": url + load_url}) # Print image title and url print(hemisphere_image_urls) mars = {} mars["title"] = title, mars["paragraph"] = new_p, mars["featured_image"] = featured_image_url, mars[ "facts"] = html, mars["hemispheres"] = hemisphere_image_urls mars # Finally, close the Google Chrome window... browser.quit() ```
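This notebook imports `pymongo` at the top and its title mentions MongoDB, but the scraped `mars` dictionary is never stored. A minimal sketch of that step might look like the following; the connection string, the `mars_app` database name, and the `mars` collection name are assumptions for illustration, not taken from the original notebook:

```
import pymongo

# Connect to a locally running MongoDB instance (connection string is an assumption)
client = pymongo.MongoClient("mongodb://localhost:27017")
db = client.mars_app      # hypothetical database name
collection = db.mars      # hypothetical collection name

# Upsert the single scraped document so that re-running the scrape replaces the old data
collection.update_one({}, {"$set": mars}, upsert=True)
print(collection.find_one())
```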
# Visualize Counts for the three classes The number of volume-wise predictions for each of the three classes can be visualized in a 2D-space (with two classes as the axes and the remained or class1-class2 as the value of the third class). Also, the percentage of volume-wise predictions can be shown in a modified pie-chart, i.e. a doughnut plot. ### import modules ``` import os import pickle import numpy as np import pandas as pd from sklearn import preprocessing from sklearn import svm import scipy.misc from scipy import ndimage from scipy.stats import beta from PIL import Image import matplotlib import matplotlib.pyplot as plt import seaborn as sns sns.set_context('poster') sns.set_style('ticks') # after converstion to .py, we can use __file__ to get the module folder try: thisDir = os.path.realpath(__file__) # in notebook form, we take the current working directory (we need to be in 'notebooks/' for this!) except: thisDir = '.' # convert relative path into absolute path, so this will work with notebooks and py modules supDir = os.path.abspath(os.path.join(os.path.dirname(thisDir), '..')) supDir ``` ## Outline the WTA prediction model ### make all possible values ``` def make_all_dummy(): my_max = 100 d = {} count = 0 for bi in np.arange(0,my_max+(10**-10),0.5): left_and_right = my_max - bi for left in np.arange(0,left_and_right+(10**-10),0.5): right = left_and_right-left d[count] = {'left':left,'bilateral':bi,'right':right} count+=1 df = pd.DataFrame(d).T assert np.unique(df.sum(axis=1))[-1] == my_max df['pred'] = df.idxmax(axis=1) return df dummy_df = make_all_dummy() dummy_df.tail() ``` ### transform labels into numbers ``` my_labeler = preprocessing.LabelEncoder() my_labeler.fit(['left','bilateral','right','inconclusive']) my_labeler.classes_ ``` ### 2d space where highest number indiciates class membership (WTA) ``` def make_dummy_space(dummy_df): space_df = dummy_df.copy() space_df['pred'] = my_labeler.transform(dummy_df['pred']) space_df.index = [space_df.left, space_df.right] space_df = space_df[['pred']] space_df = space_df.unstack(1)['pred'] return space_df dummy_space_df = make_dummy_space(dummy_df) dummy_space_df.tail() ``` ### define color map ``` colors_file = os.path.join(supDir,'models','colors.p') with open(colors_file, 'rb') as f: color_dict = pickle.load(f) my_cols = {} for i, j in zip(['red','yellow','blue','trans'], ['left','bilateral','right','inconclusive']): my_cols[j] = color_dict[i] my_col_order = np.array([my_cols[g] for g in my_labeler.classes_]) cmap = matplotlib.colors.LinearSegmentedColormap.from_list("", my_col_order) ``` ### plot WTA predictions ``` plt.figure(figsize=(6,6)) plt.imshow(dummy_space_df, origin='image',cmap=cmap,extent=(0,100,0,100),alpha=0.8) plt.contour(dummy_space_df[::-1],colors='white',alpha=1,origin='image',extent=(0,100,0,100),antialiased=True) plt.xlabel('right',fontsize=32) plt.xticks(range(0,101,25),np.arange(0,1.01,.25),fontsize=28) plt.yticks(range(0,101,25),np.arange(0,1.01,.25),fontsize=28) plt.ylabel('left',fontsize=32) sns.despine() plt.show() ``` ### load data ``` groupdata_filename = '../data/processed/csv/withinconclusive_prediction_df.csv' prediction_df = pd.read_csv(groupdata_filename,index_col=[0,1],header=0) ``` #### toolbox use ``` #groupdata_filename = os.path.join(supDir,'models','withinconclusive_prediction_df.csv') #prediction_df = pd.read_csv(groupdata_filename,index_col=[0,1],header=0) prediction_df.tail() ``` ### show data and WTA space ``` plt.figure(figsize=(6,6)) plt.imshow(dummy_space_df, 
origin='image',cmap=cmap,extent=(0,100,0,100),alpha=0.8) plt.contour(dummy_space_df[::-1],colors='white',alpha=1,origin='image',extent=(0,100,0,100),antialiased=True) for c in ['left','right','bilateral']: a_df = prediction_df.loc[c,['left','right']] * 100 plt.scatter(a_df['right'],a_df['left'],c=[my_cols[c]],edgecolor='k',linewidth=2,s=200,alpha=0.6) plt.xlabel('right',fontsize=32) plt.xticks(range(0,101,25),np.arange(0,1.01,.25),fontsize=28) plt.yticks(range(0,101,25),np.arange(0,1.01,.25),fontsize=28) plt.ylabel('left',fontsize=32) sns.despine() plt.savefig('../reports/figures/14-prediction-space.png',dpi=300,bbox_inches='tight') plt.show() ``` ## show one patient's data ### doughnut plot ``` p_name = 'pat###' p_count_df = pd.read_csv('../data/processed/csv/%s_counts_df.csv'%p_name,index_col=[0,1],header=0) p_count_df def make_donut(p_count_df, ax, my_cols=my_cols): """show proportion of the number of volumes correlating highest with one of the three groups""" percentages = p_count_df/p_count_df.sum(axis=1).values[-1] * 100 ## donut plot visualization adapted from https://gist.github.com/krishnakummar/ad00d05311977732764f#file-donut-example-py ax.pie( percentages.values[-1], pctdistance=0.75, colors=[my_cols[x] for x in percentages.columns], autopct='%0.0f%%', shadow=False, textprops={'fontsize': 40}) centre_circle = plt.Circle((0, 0), 0.55, fc='white') ax.add_artist(centre_circle) ax.set_aspect('equal') return ax fig,ax = plt.subplots(1,1,figsize=(8,8)) ax = make_donut(p_count_df,ax) plt.savefig('../examples/%s_donut.png'%p_name,dpi=300,bbox_inches='tight') plt.show() ``` ### prediction space ``` def make_pred_space(p_count_df, prediction_df, ax, dummy_space_df=dummy_space_df): ax.imshow(dummy_space_df, origin='image',cmap=cmap,extent=(0,100,0,100),alpha=0.8) ax.contour(dummy_space_df[::-1],colors='white',alpha=1,origin='image',extent=(0,100,0,100),antialiased=True) for c in ['left','right','bilateral']: a_df = prediction_df.loc[c,['left','right']] * 100 ax.scatter(a_df['right'],a_df['left'],c=[my_cols[c]],edgecolor='k',linewidth=2,s=200,alpha=0.6) percentages = p_count_df/p_count_df.sum(axis=1).values[-1] * 100 y_pred = percentages.idxmax(axis=1).values[-1] ax.scatter(percentages['right'],percentages['left'],c=[my_cols[y_pred]],edgecolor='white',linewidth=4,s=1500,alpha=1) plt.xlabel('right',fontsize=32) plt.xticks(range(0,101,25),np.arange(0,1.01,.25),fontsize=28) plt.yticks(range(0,101,25),np.arange(0,1.01,.25),fontsize=28) plt.ylabel('left',fontsize=32) sns.despine() return ax fig,ax = plt.subplots(1,1,figsize=(8,8)) ax = make_pred_space(p_count_df,prediction_df,ax) plt.savefig('../examples/%s_predSpace.png'%p_name,dpi=300,bbox_inches='tight') plt.show() ``` #### toolbox use ``` #def make_p(pFolder,pName,prediction_df=prediction_df): # # count_filename = os.path.join(pFolder,''.join([pName,'_counts_df.csv'])) # p_count_df = pd.read_csv(count_filename,index_col=[0,1],header=0) # # fig = plt.figure(figsize=(8,8)) # ax = plt.subplot(111) # ax = make_donut(p_count_df,ax) # out_name_donut = os.path.join(pFolder,''.join([pName,'_donut.png'])) # plt.savefig(out_name_donut,dpi=300,bbox_inches='tight') # plt.close() # # fig = plt.figure(figsize=(8,8)) # with sns.axes_style("ticks"): # ax = plt.subplot(111) # ax = make_pred_space(p_count_df,prediction_df,ax) # out_name_space = os.path.join(pFolder,''.join([pName,'_predSpace.png'])) # plt.savefig(out_name_space,dpi=300,bbox_inches='tight') # plt.close() # # return out_name_donut, out_name_space ``` ### summary The prediction space allows to 
see the results on the group level. If used in an application on the level of N=1, the value of the patient of interest in relation to the rest of the group can be seen. If one is interested in the precise numbers, scaled to sum up to 100%, the doughnut plot supplements the prediction space plot in this regard. ************** < [Previous](13-mw-make-group-predictions.ipynb) | [Contents](00-mw-overview-notebook.ipynb) | [Next >](15-mw-visualize-logistic-regression.ipynb)
``` #default_exp torch_core #export from fastai2.imports import * from fastai2.torch_imports import * from nbdev.showdoc import * from PIL import Image #export _all_ = ['progress_bar','master_bar'] #export if torch.cuda.is_available(): if torch.cuda.current_device()==0: def_gpu = int(os.environ.get('DEFAULT_GPU') or 0) if torch.cuda.device_count()>=def_gpu: torch.cuda.set_device(def_gpu) torch.backends.cudnn.benchmark = True ``` # Torch Core > Basic pytorch functions used in the fastai library ## Arrays and show ``` #export @delegates(plt.subplots, keep=True) def subplots(nrows=1, ncols=1, figsize=None, imsize=3, add_vert=0, **kwargs): if figsize is None: figsize=(ncols*imsize, nrows*imsize+add_vert) fig,ax = plt.subplots(nrows, ncols, figsize=figsize, **kwargs) if nrows*ncols==1: ax = array([ax]) return fig,ax #hide _,axs = subplots() test_eq(axs.shape,[1]) plt.close() _,axs = subplots(2,3) test_eq(axs.shape,[2,3]) plt.close() #export def _fig_bounds(x): r = x//32 return min(5, max(1,r)) #export def show_image(im, ax=None, figsize=None, title=None, ctx=None, **kwargs): "Show a PIL or PyTorch image on `ax`." # Handle pytorch axis order if hasattrs(im, ('data','cpu','permute')): im = im.data.cpu() if im.shape[0]<5: im=im.permute(1,2,0) elif not isinstance(im,np.ndarray): im=array(im) # Handle 1-channel images if im.shape[-1]==1: im=im[...,0] ax = ifnone(ax,ctx) if figsize is None: figsize = (_fig_bounds(im.shape[0]), _fig_bounds(im.shape[1])) if ax is None: _,ax = plt.subplots(figsize=figsize) ax.imshow(im, **kwargs) if title is not None: ax.set_title(title) ax.axis('off') return ax ``` `show_image` can show PIL images... ``` im = Image.open(TEST_IMAGE_BW) ax = show_image(im, cmap="Greys") ``` ...and color images with standard `CHW` dim order... ``` im2 = np.array(Image.open(TEST_IMAGE)) ax = show_image(im2, figsize=(2,2)) ``` ...and color images with `HWC` dim order... ``` im3 = torch.as_tensor(im2).permute(2,0,1) ax = show_image(im3, figsize=(2,2)) #export def show_titled_image(o, **kwargs): "Call `show_image` destructuring `o` to `(img,title)`" show_image(o[0], title=str(o[1]), **kwargs) show_titled_image((im3,'A puppy'), figsize=(2,2)) #export @delegates(subplots) def show_images(ims, nrows=1, ncols=None, titles=None, **kwargs): "Show all images `ims` as subplots with `rows` using `titles`" if ncols is None: ncols = int(math.ceil(len(ims)/nrows)) if titles is None: titles = [None]*len(ims) axs = subplots(nrows, ncols, **kwargs)[1].flat for im,t,ax in zip(ims, titles, axs): show_image(im, ax=ax, title=t) show_images((im,im3), titles=('number','puppy'), imsize=2) ``` `ArrayImage`, `ArrayImageBW` and `ArrayMask` are subclasses of `ndarray` that know how to show themselves. 
``` #export class ArrayBase(ndarray): @classmethod def _before_cast(cls, x): return x if isinstance(x,ndarray) else array(x) #export class ArrayImageBase(ArrayBase): _show_args = {'cmap':'viridis'} def show(self, ctx=None, **kwargs): return show_image(self, ctx=ctx, **{**self._show_args, **kwargs}) #export class ArrayImage(ArrayImageBase): pass #export class ArrayImageBW(ArrayImage): _show_args = {'cmap':'Greys'} #export class ArrayMask(ArrayImageBase): _show_args = {'alpha':0.5, 'cmap':'tab20', 'interpolation':'nearest'} im = Image.open(TEST_IMAGE) im_t = cast(im, ArrayImage) test_eq(type(im_t), ArrayImage) ax = im_t.show(figsize=(2,2)) test_fig_exists(ax) ``` ## Basics ``` #export @patch def __array_eq__(self:Tensor,b): return torch.equal(self,b) if self.dim() else self==b #export def _array2tensor(x): if x.dtype==np.uint16: x = x.astype(np.float32) return torch.from_numpy(x) #export def tensor(x, *rest, **kwargs): "Like `torch.as_tensor`, but handle lists too, and can pass multiple vector elements directly." if len(rest): x = (x,)+rest # There was a Pytorch bug in dataloader using num_workers>0. Haven't confirmed if fixed # if isinstance(x, (tuple,list)) and len(x)==0: return tensor(0) res = (x if isinstance(x, Tensor) else torch.tensor(x, **kwargs) if isinstance(x, (tuple,list)) else _array2tensor(x) if isinstance(x, ndarray) else as_tensor(x.values, **kwargs) if isinstance(x, (pd.Series, pd.DataFrame)) else as_tensor(x, **kwargs) if hasattr(x, '__array__') or is_iter(x) else _array2tensor(array(x), **kwargs)) if res.dtype is torch.float64: return res.float() return res test_eq(tensor(torch.tensor([1,2,3])), torch.tensor([1,2,3])) test_eq(tensor(array([1,2,3])), torch.tensor([1,2,3])) test_eq(tensor(1,2,3), torch.tensor([1,2,3])) test_eq_type(tensor(1.0), torch.tensor(1.0)) #export def set_seed(s): "Set random seed for `random`, `torch`, and `numpy` (where available)" try: torch.manual_seed(s) except NameError: pass try: np.random.seed(s%(2**32-1)) except NameError: pass random.seed(s) set_seed(2*33) a1 = np.random.random() a2 = torch.rand(()) a3 = random.random() set_seed(2*33) b1 = np.random.random() b2 = torch.rand(()) b3 = random.random() test_eq(a1,b1) test_eq(a2,b2) test_eq(a3,b3) #export def unsqueeze(x, dim=-1, n=1): "Same as `torch.unsqueeze` but can add `n` dims" for _ in range(n): x = x.unsqueeze(dim) return x t = tensor([1]) t2 = unsqueeze(t, n=2) test_eq(t2,t[:,None,None]) #export def unsqueeze_(x, dim=-1, n=1): "Same as `torch.unsqueeze_` but can add `n` dims" for _ in range(n): x.unsqueeze_(dim) return x t = tensor([1]) unsqueeze_(t, n=2) test_eq(t, tensor([1]).view(1,1,1)) #export def _fa_rebuild_tensor (cls, *args, **kwargs): return cls(torch._utils._rebuild_tensor_v2(*args, **kwargs)) def _fa_rebuild_qtensor(cls, *args, **kwargs): return cls(torch._utils._rebuild_qtensor (*args, **kwargs)) #export def apply(func, x, *args, **kwargs): "Apply `func` recursively to `x`, passing on args" if is_listy(x): return type(x)([apply(func, o, *args, **kwargs) for o in x]) if isinstance(x,dict): return {k: apply(func, v, *args, **kwargs) for k,v in x.items()} res = func(x, *args, **kwargs) return res if x is None else retain_type(res, x) #export def maybe_gather(x, axis=0): "Gather copies of `x` on `axis` (if training is distributed)" if num_distrib()<=1: return x ndim = x.ndim res = [x.new_zeros(*x.shape if ndim > 0 else (1,)) for _ in range(num_distrib())] torch.distributed.all_gather(res, x if ndim > 0 else x[None]) return torch.cat(res, dim=axis) if ndim > 0 else torch.cat(res, 
dim=axis).mean() #export def to_detach(b, cpu=True, gather=True): "Recursively detach lists of tensors in `b `; put them on the CPU if `cpu=True`." def _inner(x, cpu=True, gather=True): if not isinstance(x,Tensor): return x x = x.detach() if gather: x = maybe_gather(x) return x.cpu() if cpu else x return apply(_inner, b, cpu=cpu, gather=gather) ``` `gather` only applies during distributed training and the result tensor will be the one gathered accross processes if `gather=True` (as a result, the batch size will be multiplied by the number of processes). ``` #export def to_half(b): "Recursively map lists of tensors in `b ` to FP16." return apply(lambda x: x.half() if torch.is_floating_point(x) else x, b) #export def to_float(b): "Recursively map lists of int tensors in `b ` to float." return apply(lambda x: x.float() if torch.is_floating_point(x) else x, b) #export # None: True if available; True: error if not availabe; False: use CPU defaults.use_cuda = None #export def default_device(use_cuda=-1): "Return or set default device; `use_cuda`: None - CUDA if available; True - error if not availabe; False - CPU" if use_cuda != -1: defaults.use_cuda=use_cuda use = defaults.use_cuda or (torch.cuda.is_available() and defaults.use_cuda is None) assert torch.cuda.is_available() or not use return torch.device(torch.cuda.current_device()) if use else torch.device('cpu') #cuda _td = torch.device(torch.cuda.current_device()) test_eq(default_device(None), _td) test_eq(default_device(True), _td) test_eq(default_device(False), torch.device('cpu')) default_device(None); #export def to_device(b, device=None): "Recursively put `b` on `device`." if defaults.use_cuda==False: device='cpu' elif device is None: device=default_device() def _inner(o): return o.to(device, non_blocking=True) if isinstance(o,Tensor) else o.to_device(device) if hasattr(o, "to_device") else o return apply(_inner, b) t = to_device((3,(tensor(3),tensor(2)))) t1,(t2,t3) = t test_eq_type(t,(3,(tensor(3).cuda(),tensor(2).cuda()))) test_eq(t2.type(), "torch.cuda.LongTensor") test_eq(t3.type(), "torch.cuda.LongTensor") #export def to_cpu(b): "Recursively map lists of tensors in `b ` to the cpu." return to_device(b,'cpu') t3 = to_cpu(t3) test_eq(t3.type(), "torch.LongTensor") test_eq(t3, 2) #export def to_np(x): "Convert a tensor to a numpy array." 
return apply(lambda o: o.data.cpu().numpy(), x) t3 = to_np(t3) test_eq(type(t3), np.ndarray) test_eq(t3, 2) #export def to_concat(xs, dim=0): "Concat the element in `xs` (recursively if they are tuples/lists of tensors)" if is_listy(xs[0]): return type(xs[0])([to_concat([x[i] for x in xs], dim=dim) for i in range_of(xs[0])]) if isinstance(xs[0],dict): return {k: to_concat([x[k] for x in xs], dim=dim) for k in xs[0].keys()} #We may receives xs that are not concatenatable (inputs of a text classifier for instance), # in this case we return a big list try: return retain_type(torch.cat(xs, dim=dim), xs[0]) except: return sum([L(retain_type(o_.index_select(dim, tensor(i)).squeeze(dim), xs[0]) for i in range_of(o_)) for o_ in xs], L()) test_eq(to_concat([tensor([1,2]), tensor([3,4])]), tensor([1,2,3,4])) test_eq(to_concat([tensor([[1,2]]), tensor([[3,4]])], dim=1), tensor([[1,2,3,4]])) test_eq_type(to_concat([(tensor([1,2]), tensor([3,4])), (tensor([3,4]), tensor([5,6]))]), (tensor([1,2,3,4]), tensor([3,4,5,6]))) test_eq_type(to_concat([[tensor([1,2]), tensor([3,4])], [tensor([3,4]), tensor([5,6])]]), [tensor([1,2,3,4]), tensor([3,4,5,6])]) test_eq_type(to_concat([(tensor([1,2]),), (tensor([3,4]),)]), (tensor([1,2,3,4]),)) test_eq(to_concat([tensor([[1,2]]), tensor([[3,4], [5,6]])], dim=1), [tensor([1]),tensor([3, 5]),tensor([4, 6])]) test_eq(type(to_concat([dict(foo=tensor([1,2]), bar=tensor(3,4))])), dict) ``` ## Tensor subtypes ``` #export @patch def set_meta(self:Tensor, x): "Set all metadata in `__dict__`" if hasattr(x,'__dict__'): self.__dict__ = x.__dict__ #export @patch def get_meta(self:Tensor, n, d=None): "Set `n` from `self._meta` if it exists and returns default `d` otherwise" return getattr(self, '_meta', {}).get(n, d) #export @patch def as_subclass(self:Tensor, typ): "Cast to `typ` (should be in future PyTorch version, so remove this then)" res = torch.Tensor._make_subclass(typ, self) return retain_meta(self, res) ``` `Tensor.set_meta` and `Tensor.as_subclass` work together to maintain `_meta` after casting. 
``` class _T(Tensor): pass t = tensor(1) t._meta = {'img_size': 1} t2 = t.as_subclass(_T) test_eq(t._meta, t2._meta) test_eq(t2.get_meta('img_size'), 1) #export class TensorBase(Tensor): def __new__(cls, x, **kwargs): res = cast(tensor(x), cls) res._meta = kwargs return res @classmethod def _before_cast(cls, x): return x if isinstance(x,Tensor) else tensor(x) def __reduce_ex__(self,proto): torch.utils.hooks.warn_if_has_hooks(self) args = (type(self), self.storage(), self.storage_offset(), tuple(self.size()), self.stride()) if self.is_quantized: args = args + (self.q_scale(), self.q_zero_point()) f = _fa_rebuild_qtensor if self.is_quantized else _fa_rebuild_tensor return (f, args + (self.requires_grad, OrderedDict())) def gi(self, i): res = self[i] return res.as_subclass(type(self)) if isinstance(res,Tensor) else res def __repr__(self): return re.sub('tensor', self.__class__.__name__, super().__repr__()) #export def _patch_tb(): if getattr(TensorBase,'_patched',False): return TensorBase._patched = True def get_f(fn): def _f(self, *args, **kwargs): cls = self.__class__ res = getattr(super(TensorBase, self), fn)(*args, **kwargs) return retain_type(res, self) return _f t = tensor([1]) skips = 'as_subclass __getitem__ __class__ __deepcopy__ __delattr__ __dir__ __doc__ __getattribute__ __hash__ __init__ \ __init_subclass__ __new__ __reduce__ __reduce_ex__ __repr__ __module__ __setstate__'.split() for fn in dir(t): if fn in skips: continue f = getattr(t, fn) if isinstance(f, (MethodWrapperType, BuiltinFunctionType, BuiltinMethodType, MethodType, FunctionType)): setattr(TensorBase, fn, get_f(fn)) _patch_tb() #export class TensorCategory(TensorBase): pass #export class TensorMultiCategory(TensorCategory): pass class _T(TensorBase): pass t = _T(range(5)) test_eq(t[0], 0) test_eq_type(t.gi(0), _T(0)) test_eq_type(t.gi(slice(2)), _T([0,1])) test_eq_type(t+1, _T(range(1,6))) test_eq(repr(t), '_T([0, 1, 2, 3, 4])') test_eq(type(pickle.loads(pickle.dumps(t))), _T) t = tensor([1,2,3]) m = TensorBase([False,True,True]) test_eq(t[m], tensor([2,3])) t = tensor([[1,2,3],[1,2,3]]) m = cast(tensor([[False,True,True], [False,True,True]]), TensorBase) test_eq(t[m], tensor([2,3,2,3])) t = tensor([[1,2,3],[1,2,3]]) t._meta = {'img_size': 1} t2 = cast(t, TensorBase) test_eq(t2._meta, t._meta) x = retain_type(tensor([4,5,6]), t2) test_eq(x._meta, t._meta) t3 = TensorBase([[1,2,3],[1,2,3]], img_size=1) test_eq(t3._meta, t._meta) #export class TensorImageBase(TensorBase): _show_args = ArrayImageBase._show_args def show(self, ctx=None, **kwargs): return show_image(self, ctx=ctx, **{**self._show_args, **kwargs}) #export class TensorImage(TensorImageBase): pass #export class TensorImageBW(TensorImage): _show_args = ArrayImageBW._show_args #export class TensorMask(TensorImageBase): _show_args = ArrayMask._show_args def show(self, ctx=None, **kwargs): codes = self.get_meta('codes') if codes is not None: kwargs = merge({'vmin': 1, 'vmax': len(codes)}, kwargs) return super().show(ctx=ctx, **kwargs) im = Image.open(TEST_IMAGE) im_t = cast(array(im), TensorImage) test_eq(type(im_t), TensorImage) im_t2 = cast(tensor(1), TensorMask) test_eq(type(im_t2), TensorMask) test_eq(im_t2, tensor(1)) ax = im_t.show(figsize=(2,2)) test_fig_exists(ax) #hide (last test of to_concat) test_eq_type(to_concat([TensorImage([1,2]), TensorImage([3,4])]), TensorImage([1,2,3,4])) #export class TitledTensorScalar(TensorBase): "A tensor containing a scalar that has a `show` method" def show(self, **kwargs): show_title(self.item(), **kwargs) ``` ## L - 
``` #export @patch def tensored(self:L): "`mapped(tensor)`" return self.map(tensor) @patch def stack(self:L, dim=0): "Same as `torch.stack`" return torch.stack(list(self.tensored()), dim=dim) @patch def cat (self:L, dim=0): "Same as `torch.cat`" return torch.cat (list(self.tensored()), dim=dim) show_doc(L.tensored) ``` There are shortcuts for `torch.stack` and `torch.cat` if your `L` contains tensors or something convertible. You can manually convert with `tensored`. ``` t = L(([1,2],[3,4])) test_eq(t.tensored(), [tensor(1,2),tensor(3,4)]) show_doc(L.stack) test_eq(t.stack(), tensor([[1,2],[3,4]])) show_doc(L.cat) test_eq(t.cat(), tensor([1,2,3,4])) ``` ## Chunks ``` #export def concat(*ls): "Concatenate tensors, arrays, lists, or tuples" if not len(ls): return [] it = ls[0] if isinstance(it,torch.Tensor): res = torch.cat(ls) elif isinstance(it,ndarray): res = np.concatenate(ls) else: res = itertools.chain.from_iterable(map(L,ls)) if isinstance(it,(tuple,list)): res = type(it)(res) else: res = L(res) return retain_type(res, it) a,b,c = [1],[1,2],[1,1,2] test_eq(concat(a,b), c) test_eq_type(concat(tuple (a),tuple (b)), tuple (c)) test_eq_type(concat(array (a),array (b)), array (c)) test_eq_type(concat(tensor(a),tensor(b)), tensor(c)) test_eq_type(concat(TensorBase(a),TensorBase(b)), TensorBase(c)) test_eq_type(concat([1,1],1), [1,1,1]) test_eq_type(concat(1,1,1), L(1,1,1)) test_eq_type(concat(L(1,2),1), L(1,2,1)) #export class Chunks: "Slice and int indexing into a list of lists" def __init__(self, chunks, lens=None): self.chunks = chunks self.lens = L(map(len,self.chunks) if lens is None else lens) self.cumlens = np.cumsum(0+self.lens) self.totlen = self.cumlens[-1] def __getitem__(self,i): if isinstance(i,slice): return retain_type(self.getslice(i), old=self.chunks[0]) di,idx = self.doc_idx(i) return retain_type(self.chunks[di][idx], old=self.chunks[0]) def getslice(self, i): st_d,st_i = self.doc_idx(ifnone(i.start,0)) en_d,en_i = self.doc_idx(ifnone(i.stop,self.totlen+1)) res = [self.chunks[st_d][st_i:(en_i if st_d==en_d else sys.maxsize)]] for b in range(st_d+1,en_d): res.append(self.chunks[b]) if st_d!=en_d and en_d<len(self.chunks): res.append(self.chunks[en_d][:en_i]) return concat(*res) def doc_idx(self, i): if i<0: i=self.totlen+i # count from end docidx = np.searchsorted(self.cumlens, i+1)-1 cl = self.cumlens[docidx] return docidx,i-cl docs = L(list(string.ascii_lowercase[a:b]) for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26))) b = Chunks(docs) test_eq([b[ o] for o in range(0,5)], ['a','b','c','d','e']) test_eq([b[-o] for o in range(1,6)], ['z','y','x','w','v']) test_eq(b[6:13], 'g,h,i,j,k,l,m'.split(',')) test_eq(b[20:77], 'u,v,w,x,y,z'.split(',')) test_eq(b[:5], 'a,b,c,d,e'.split(',')) test_eq(b[:2], 'a,b'.split(',')) t = torch.arange(26) docs = L(t[a:b] for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26))) b = Chunks(docs) test_eq([b[ o] for o in range(0,5)], range(0,5)) test_eq([b[-o] for o in range(1,6)], [25,24,23,22,21]) test_eq(b[6:13], torch.arange(6,13)) test_eq(b[20:77], torch.arange(20,26)) test_eq(b[:5], torch.arange(5)) test_eq(b[:2], torch.arange(2)) docs = L(TensorBase(t[a:b]) for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26))) b = Chunks(docs) test_eq_type(b[:2], TensorBase(range(2))) test_eq_type(b[:5], TensorBase(range(5))) test_eq_type(b[9:13], TensorBase(range(9,13))) ``` ## Simple types ``` #export def show_title(o, ax=None, ctx=None, label=None, color='black', **kwargs): "Set title of `ax` to `o`, or print `o` if `ax` is `None`" ax = ifnone(ax,ctx) 
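    # `ctx` may be a matplotlib axis or a pandas Series; if neither is passed, fall back to printing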
if ax is None: print(o) elif hasattr(ax, 'set_title'): t = ax.title.get_text() if len(t) > 0: o = t+'\n'+str(o) ax.set_title(o, color=color) elif isinstance(ax, pd.Series): while label in ax: label += '_' ax = ax.append(pd.Series({label: o})) return ax test_stdout(lambda: show_title("title"), "title") # ensure that col names are unique when showing to a pandas series assert show_title("title", ctx=pd.Series(dict(a=1)), label='a').equals(pd.Series(dict(a=1,a_='title'))) #export class ShowTitle: "Base class that adds a simple `show`" _show_args = {'label': 'text'} def show(self, ctx=None, **kwargs): "Show self" return show_title(str(self), ctx=ctx, **merge(self._show_args, kwargs)) class TitledInt(Int, ShowTitle): _show_args = {'label': 'text'} def show(self, ctx=None, **kwargs): "Show self" return show_title(str(self), ctx=ctx, **merge(self._show_args, kwargs)) class TitledFloat(Float, ShowTitle): _show_args = {'label': 'text'} def show(self, ctx=None, **kwargs): "Show self" return show_title(str(self), ctx=ctx, **merge(self._show_args, kwargs)) class TitledStr(Str, ShowTitle): _show_args = {'label': 'text'} def show(self, ctx=None, **kwargs): "Show self" return show_title(str(self), ctx=ctx, **merge(self._show_args, kwargs)) class TitledTuple(Tuple, ShowTitle): _show_args = {'label': 'text'} def show(self, ctx=None, **kwargs): "Show self" return show_title(str(self), ctx=ctx, **merge(self._show_args, kwargs)) add_docs(TitledInt, "An `int` with `show`"); add_docs(TitledStr, "An `str` with `show`"); add_docs(TitledFloat, "A `float` with `show`"); add_docs(TitledTuple, "A `Tuple` with `show`") show_doc(TitledInt, title_level=3) show_doc(TitledStr, title_level=3) show_doc(TitledFloat, title_level=3) test_stdout(lambda: TitledStr('s').show(), 's') test_stdout(lambda: TitledInt(1).show(), '1') show_doc(TitledTuple, title_level=3) #hide df = pd.DataFrame(index = range(1)) row = df.iloc[0] x = TitledFloat(2.56) row = x.show(ctx=row, label='lbl') test_eq(float(row.lbl), 2.56) #export @patch def truncate(self:TitledStr, n): "Truncate self to `n`" words = self.split(' ')[:n] return TitledStr(' '.join(words)) ``` ## Other functions ``` #export if not hasattr(pd.DataFrame,'_old_init'): pd.DataFrame._old_init = pd.DataFrame.__init__ #export @patch def __init__(self:pd.DataFrame, data=None, index=None, columns=None, dtype=None, copy=False): if data is not None and isinstance(data, Tensor): data = to_np(data) self._old_init(data, index=index, columns=columns, dtype=dtype, copy=copy) #export def get_empty_df(n): "Return `n` empty rows of a dataframe" df = pd.DataFrame(index = range(n)) return [df.iloc[i] for i in range(n)] #export def display_df(df): "Display `df` in a notebook or defaults to print" try: from IPython.display import display, HTML except: return print(df) display(HTML(df.to_html())) #export def get_first(c): "Get the first element of c, even if c is a dataframe" return getattr(c, 'iloc', c)[0] #export def one_param(m): "First parameter in `m`" return first(m.parameters()) #export def item_find(x, idx=0): "Recursively takes the `idx`-th element of `x`" if is_listy(x): return item_find(x[idx]) if isinstance(x,dict): key = list(x.keys())[idx] if isinstance(idx, int) else idx return item_find(x[key]) return x #export def find_device(b): "Recursively search the device of `b`." 
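    # the device of the first tensor found in the (possibly nested) batch is taken as the device of the whole batch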
return item_find(b).device t2 = to_device(tensor(0)) dev = default_device() test_eq(find_device(t2), dev) test_eq(find_device([t2,t2]), dev) test_eq(find_device({'a':t2,'b':t2}), dev) test_eq(find_device({'a':[[t2],[t2]],'b':t2}), dev) #export def find_bs(b): "Recursively search the batch size of `b`." return item_find(b).shape[0] x = torch.randn(4,5) test_eq(find_bs(x), 4) test_eq(find_bs([x, x]), 4) test_eq(find_bs({'a':x,'b':x}), 4) test_eq(find_bs({'a':[[x],[x]],'b':x}), 4) def np_func(f): "Convert a function taking and returning numpy arrays to one taking and returning tensors" def _inner(*args, **kwargs): nargs = [to_np(arg) if isinstance(arg,Tensor) else arg for arg in args] return tensor(f(*nargs, **kwargs)) functools.update_wrapper(_inner, f) return _inner ``` This decorator is particularly useful for using numpy functions as fastai metrics, for instance: ``` from sklearn.metrics import f1_score @np_func def f1(inp,targ): return f1_score(targ, inp) a1,a2 = array([0,1,1]),array([1,0,1]) t = f1(tensor(a1),tensor(a2)) test_eq(f1_score(a1,a2), t) assert isinstance(t,Tensor) #export class Module(nn.Module, metaclass=PrePostInitMeta): "Same as `nn.Module`, but no need for subclasses to call `super().__init__`" def __pre_init__(self, *args, **kwargs): super().__init__() def __init__(self): pass show_doc(Module, title_level=3) class _T(Module): def __init__(self): self.f = nn.Linear(1,1) def forward(self,x): return self.f(x) t = _T() t(tensor([1.])) # export from torch.nn.parallel import DistributedDataParallel def get_model(model): "Return the model maybe wrapped inside `model`." return model.module if isinstance(model, (DistributedDataParallel, nn.DataParallel)) else model # export def one_hot(x, c): "One-hot encode `x` with `c` classes." res = torch.zeros(c, dtype=torch.uint8) if isinstance(x, Tensor) and x.numel()>0: res[x] = 1. else: res[list(L(x, use_list=None))] = 1. 
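    # `res` is a uint8 vector of length `c` with ones at the positions listed in `x`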
return res test_eq(one_hot([1,4], 5), tensor(0,1,0,0,1).byte()) test_eq(one_hot(torch.tensor([]), 5), tensor(0,0,0,0,0).byte()) test_eq(one_hot(2, 5), tensor(0,0,1,0,0).byte()) #export def one_hot_decode(x, vocab=None): return L(vocab[i] if vocab else i for i,x_ in enumerate(x) if x_==1) test_eq(one_hot_decode(tensor(0,1,0,0,1)), [1,4]) test_eq(one_hot_decode(tensor(0,0,0,0,0)), [ ]) test_eq(one_hot_decode(tensor(0,0,1,0,0)), [2 ]) #export def params(m): "Return all parameters of `m`" return [p for p in m.parameters()] #export def trainable_params(m): "Return all trainable parameters of `m`" return [p for p in m.parameters() if p.requires_grad] m = nn.Linear(4,5) test_eq(trainable_params(m), [m.weight, m.bias]) m.weight.requires_grad_(False) test_eq(trainable_params(m), [m.bias]) #export norm_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d, nn.InstanceNorm1d, nn.InstanceNorm2d, nn.InstanceNorm3d) #export def bn_bias_params(m, with_bias=True): # TODO: Rename to `norm_bias_params` "Return all bias and BatchNorm parameters" if isinstance(m, norm_types): return L(m.parameters()) res = L(m.children()).map(bn_bias_params, with_bias=with_bias).concat() if with_bias and getattr(m, 'bias', None) is not None: res.append(m.bias) return res for norm_func in [nn.BatchNorm1d, partial(nn.InstanceNorm1d, affine=True)]: model = nn.Sequential(nn.Linear(10,20), norm_func(20), nn.Conv1d(3,4, 3)) test_eq(bn_bias_params(model), [model[0].bias, model[1].weight, model[1].bias, model[2].bias]) model = nn.ModuleList([nn.Linear(10,20, bias=False), nn.Sequential(norm_func(20), nn.Conv1d(3,4,3))]) test_eq(bn_bias_params(model), [model[1][0].weight, model[1][0].bias, model[1][1].bias]) model = nn.ModuleList([nn.Linear(10,20), nn.Sequential(norm_func(20), nn.Conv1d(3,4,3))]) test_eq(bn_bias_params(model, with_bias=False), [model[1][0].weight, model[1][0].bias]) #export def batch_to_samples(b, max_n=10): "'Transposes' a batch to (at most `max_n`) samples" if isinstance(b, Tensor): return retain_types(list(b[:max_n]), [b]) else: res = L(b).map(partial(batch_to_samples,max_n=max_n)) return retain_types(res.zip(), [b]) t = tensor([1,2,3]) test_eq(batch_to_samples([t,t+1], max_n=2), ([1,2],[2,3])) test_eq(batch_to_samples(tensor([1,2,3]), 10), [1, 2, 3]) test_eq(batch_to_samples([tensor([1,2,3]), tensor([4,5,6])], 10), [(1, 4), (2, 5), (3, 6)]) test_eq(batch_to_samples([tensor([1,2,3]), tensor([4,5,6])], 2), [(1, 4), (2, 5)]) test_eq(batch_to_samples([tensor([1,2,3]), [tensor([4,5,6]),tensor([7,8,9])]], 10), [(1, (4, 7)), (2, (5, 8)), (3, (6, 9))]) test_eq(batch_to_samples([tensor([1,2,3]), [tensor([4,5,6]),tensor([7,8,9])]], 2), [(1, (4, 7)), (2, (5, 8))]) t = Tuple(tensor([1,2,3]),TensorBase([2,3,4])) test_eq_type(batch_to_samples(t)[0][1], TensorBase(2)) test_eq(batch_to_samples(t).map(type), [Tuple]*3) #export @patch def interp_1d(x:Tensor, xp, fp): "Same as `np.interp`" slopes = (fp[1:]-fp[:-1])/(xp[1:]-xp[:-1]) incx = fp[:-1] - (slopes*xp[:-1]) locs = (x[:,None]>=xp[None,:]).long().sum(1)-1 locs = locs.clamp(0,len(slopes)-1) return slopes[locs]*x + incx[locs] brks = tensor(0,1,2,4,8,64).float() ys = tensor(range_of(brks)).float() ys /= ys[-1].item() pts = tensor(0.2,0.5,0.8,3,5,63) preds = pts.interp_1d(brks, ys) test_close(preds.numpy(), np.interp(pts.numpy(), brks.numpy(), ys.numpy())) plt.scatter(brks,ys) plt.scatter(pts,preds) plt.legend(['breaks','preds']); #export @patch def pca(x:Tensor, k=2): "Compute PCA of `x` with `k` dimensions." 
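    # center the data, then project onto the first `k` left singular vectors of `x.t()`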
x = x-torch.mean(x,0) U,S,V = torch.svd(x.t()) return torch.mm(x,U[:,:k]) # export def logit(x): "Logit of `x`, clamped to avoid inf." x = x.clamp(1e-7, 1-1e-7) return -(1/x-1).log() #export def num_distrib(): "Return the number of processes in distributed training (if applicable)." return int(os.environ.get('WORLD_SIZE', 0)) #export def rank_distrib(): "Return the distributed rank of this process (if applicable)." return int(os.environ.get('RANK', 0)) #export def distrib_barrier(): "Place a synchronization barrier in distributed training so that ALL sub-processes in the pytorch process group must arrive here before proceeding." if num_distrib() > 1: torch.distributed.barrier() #export # Saving arrays requires pytables - optional dependency try: import tables except: pass #export def _comp_filter(lib='lz4',lvl=3): return tables.Filters(complib=f'blosc:{lib}', complevel=lvl) #export @patch def save_array(p:Path, o, complib='lz4', lvl=3): "Save numpy array to a compressed `pytables` file, using compression level `lvl`" if isinstance(o,Tensor): o = to_np(o) with tables.open_file(p, mode='w', filters=_comp_filter(lib=complib,lvl=lvl)) as f: f.create_carray('/', 'data', obj=o) ``` Compression lib can be any of: blosclz, lz4, lz4hc, snappy, zlib or zstd. ``` #export @patch def load_array(p:Path): "Save numpy array to a `pytables` file" with tables.open_file(p, 'r') as f: return f.root.data.read() inspect.getdoc(load_array) str(inspect.signature(load_array)) #export def base_doc(elt): "Print a base documentation of `elt`" name = getattr(elt, '__qualname__', getattr(elt, '__name__', '')) print(f'{name}{inspect.signature(elt)}\n{inspect.getdoc(elt)}\n') print('To get a prettier result with hyperlinks to source code and documentation, install nbdev: pip install nbdev') #export def doc(elt): "Try to use doc form nbdev and fall back to `base_doc`" try: from nbdev.showdoc import doc doc(elt) except: base_doc(elt) #export def nested_reorder(t, idxs): "Reorder all tensors in `t` using `idxs`" if isinstance(t, (Tensor,L)): return t[idxs] elif is_listy(t): return type(t)(nested_reorder(t_, idxs) for t_ in t) if t is None: return t raise TypeError(f"Expected tensor, tuple, list or L but got {type(t)}") x = tensor([0,1,2,3,4,5]) idxs = tensor([2,5,1,0,3,4]) test_eq_type(nested_reorder(([x], x), idxs), ([idxs], idxs)) y = L(0,1,2,3,4,5) z = L(i.item() for i in idxs) test_eq_type(nested_reorder((y, x), idxs), (z,idxs)) ``` ## Image helpers ``` #export def to_image(x): if isinstance(x,Image.Image): return x if isinstance(x,Tensor): x = to_np(x.permute((1,2,0))) if x.dtype==np.float32: x = (x*255).astype(np.uint8) return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4]) #export def make_cross_image(bw=True): "Create a tensor containing a cross image, either `bw` (True) or color" if bw: im = torch.zeros(5,5) im[2,:] = 1. im[:,2] = 1. else: im = torch.zeros(3,5,5) im[0,2,:] = 1. im[1,:,2] = 1. 
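    # color version: horizontal bar in the red channel, vertical bar in the green channel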
return im plt.imshow(make_cross_image(), cmap="Greys"); plt.imshow(make_cross_image(False).permute(1,2,0)); #export def show_image_batch(b, show=show_titled_image, items=9, cols=3, figsize=None, **kwargs): "Display batch `b` in a grid of size `items` with `cols` width" if items<cols: cols=items rows = (items+cols-1) // cols if figsize is None: figsize = (cols*3, rows*3) fig,axs = plt.subplots(rows, cols, figsize=figsize) for *o,ax in zip(*to_cpu(b), axs.flatten()): show(o, ax=ax, **kwargs) show_image_batch(([Image.open(TEST_IMAGE_BW),Image.open(TEST_IMAGE)],['bw','color']), items=2) ``` ## Model init ``` #export def requires_grad(m): "Check if the first parameter of `m` requires grad or not" ps = list(m.parameters()) return ps[0].requires_grad if len(ps)>0 else False tst = nn.Linear(4,5) assert requires_grad(tst) for p in tst.parameters(): p.requires_grad_(False) assert not requires_grad(tst) #export def init_default(m, func=nn.init.kaiming_normal_): "Initialize `m` weights with `func` and set `bias` to 0." if func: if hasattr(m, 'weight'): func(m.weight) if hasattr(m, 'bias') and hasattr(m.bias, 'data'): m.bias.data.fill_(0.) return m tst = nn.Linear(4,5) tst.weight.data.uniform_(-1,1) tst.bias.data.uniform_(-1,1) tst = init_default(tst, func = lambda x: x.data.fill_(1.)) test_eq(tst.weight, torch.ones(5,4)) test_eq(tst.bias, torch.zeros(5)) #export def cond_init(m, func): "Apply `init_default` to `m` unless it's a batchnorm module" if (not isinstance(m, norm_types)) and requires_grad(m): init_default(m, func) tst = nn.Linear(4,5) tst.weight.data.uniform_(-1,1) tst.bias.data.uniform_(-1,1) cond_init(tst, func = lambda x: x.data.fill_(1.)) test_eq(tst.weight, torch.ones(5,4)) test_eq(tst.bias, torch.zeros(5)) tst = nn.BatchNorm2d(5) init = [tst.weight.clone(), tst.bias.clone()] cond_init(tst, func = lambda x: x.data.fill_(1.)) test_eq(tst.weight, init[0]) test_eq(tst.bias, init[1]) #export def apply_leaf(m, f): "Apply `f` to children of `m`." c = m.children() if isinstance(m, nn.Module): f(m) for l in c: apply_leaf(l,f) tst = nn.Sequential(nn.Linear(4,5), nn.Sequential(nn.Linear(4,5), nn.Linear(4,5))) apply_leaf(tst, partial(init_default, func=lambda x: x.data.fill_(1.))) for l in [tst[0], *tst[1]]: test_eq(l.weight, torch.ones(5,4)) for l in [tst[0], *tst[1]]: test_eq(l.bias, torch.zeros(5)) #export def apply_init(m, func=nn.init.kaiming_normal_): "Initialize all non-batchnorm layers of `m` with `func`." 
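    # `cond_init` skips the norm layers, so only the remaining trainable leaves are re-initialized with `func`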
apply_leaf(m, partial(cond_init, func=func)) tst = nn.Sequential(nn.Linear(4,5), nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(5))) init = [tst[1][1].weight.clone(), tst[1][1].bias.clone()] apply_init(tst, func=lambda x: x.data.fill_(1.)) for l in [tst[0], tst[1][0]]: test_eq(l.weight, torch.ones(5,4)) for l in [tst[0], tst[1][0]]: test_eq(l.bias, torch.zeros(5)) test_eq(tst[1][1].weight, init[0]) test_eq(tst[1][1].bias, init[1]) ``` ## Multiprocessing ``` #export from multiprocessing import Process, Queue #export def set_num_threads(nt): "Get numpy (and others) to use `nt` threads" try: import mkl; mkl.set_num_threads(nt) except: pass torch.set_num_threads(1) os.environ['IPC_ENABLE']='1' for o in ['OPENBLAS_NUM_THREADS','NUMEXPR_NUM_THREADS','OMP_NUM_THREADS','MKL_NUM_THREADS']: os.environ[o] = str(nt) #export @delegates(concurrent.futures.ProcessPoolExecutor) class ProcessPoolExecutor(concurrent.futures.ProcessPoolExecutor): def __init__(self, max_workers=None, on_exc=print, **kwargs): self.not_parallel = max_workers==0 self.on_exc = on_exc if self.not_parallel: max_workers=1 super().__init__(max_workers, **kwargs) def map(self, f, items, *args, **kwargs): g = partial(f, *args, **kwargs) if self.not_parallel: return map(g, items) try: return super().map(g, items) except Exception as e: self.on_exc(e) #export def parallel(f, items, *args, n_workers=defaults.cpus, total=None, progress=True, **kwargs): "Applies `func` in parallel to `items`, using `n_workers`" with ProcessPoolExecutor(n_workers) as ex: r = ex.map(f,items, *args, **kwargs) if progress: if total is None: total = len(items) r = progress_bar(r, total=total, leave=False) return L(r) def add_one(x, a=1): time.sleep(random.random()/100) return x+a inp,exp = range(50),range(1,51) test_eq(parallel(add_one, inp, n_workers=2), exp) test_eq(parallel(add_one, inp, n_workers=0), exp) test_eq(parallel(add_one, inp, n_workers=1, a=2), range(2,52)) test_eq(parallel(add_one, inp, n_workers=0, a=2), range(2,52)) #export def run_procs(f, f_done, args): "Call `f` for each item in `args` in parallel, yielding `f_done`" processes = L(args).map(Process, args=arg0, target=f) for o in processes: o.start() try: yield from f_done() except Exception as e: print(e) finally: processes.map(Self.join()) #export def parallel_gen(cls, items, n_workers=defaults.cpus, as_gen=False, **kwargs): "Instantiate `cls` in `n_workers` procs & call each on a subset of `items` in parallel." batches = np.array_split(items, n_workers) idx = np.cumsum(0 + L(batches).map(len)) queue = Queue() def f(batch, start_idx): for i,b in enumerate(cls(**kwargs)(batch)): queue.put((start_idx+i,b)) def done(): return (queue.get() for _ in progress_bar(items, leave=False)) yield from run_procs(f, done, L(batches,idx).zip()) ``` `cls` is any class with `__call__`. It will be passed `args` and `kwargs` when initialized. Note that `n_workers` instances of `cls` are created, one in each process. `items` are then split in `n_workers` batches and one is sent to each `cls`. The function then returns a list of all the results, matching the order of `items` (if not `as_gen`) or a generator of tuples of item indices and results (if `as_gen`). 
``` class SleepyBatchFunc: def __init__(self): self.a=1 def __call__(self, batch): for k in batch: time.sleep(random.random()/4) yield k+self.a x = np.linspace(0,0.99,20) res = L(parallel_gen(SleepyBatchFunc, x, n_workers=2)) test_eq(res.sorted().itemgot(1), x+1) ``` ## autograd jit functions ``` #export def script_use_ctx(f): "Decorator: create jit script and pass everything in `ctx.saved_variables to `f`, after `*args`" sf = torch.jit.script(f) def _f(ctx, *args, **kwargs): return sf(*args, *ctx.saved_variables, **kwargs) return update_wrapper(_f,f) #export def script_save_ctx(static, *argidx): "Decorator: create jit script and save args with indices `argidx` using `ctx.save_for_backward`" def _dec(f): sf = torch.jit.script(f) def _f(ctx, *args, **kwargs): if argidx: save = [args[o] for o in argidx] ctx.save_for_backward(*save) if not argidx: args = [ctx]+args return sf(*args, **kwargs) if static: _f = staticmethod(_f) return update_wrapper(_f,f) return _dec #export def script_fwd(*argidx): "Decorator: create static jit script and save args with indices `argidx` using `ctx.save_for_backward`" return script_save_ctx(True, *argidx) #export def script_bwd(f): "Decorator: create static jit script and pass everything in `ctx.saved_variables to `f`, after `*args`" return staticmethod(script_use_ctx(f)) #export def grad_module(cls): "Decorator: convert `cls` into an autograd function" class _c(nn.Module): def forward(self, *args, **kwargs): return cls.apply(*args, **kwargs) return _c ``` # Export - ``` #hide from nbdev.export import notebook2script notebook2script() ```
# Soft Computing ## Vežba 1 - Digitalna slika, computer vision, OpenCV ### OpenCV Open source biblioteka namenjena oblasti računarske vizije (eng. computer vision). Dokumentacija dostupna <a href="https://opencv.org/">ovde</a>. ### matplotlib Plotting biblioteka za programski jezik Python i njegov numerički paket NumPy. Dokumentacija dostupna <a href="https://matplotlib.org/">ovde</a>. ### Učitavanje slike OpenCV metoda za učitavanje slike sa diska je <b>imread(path_to_image)</b>, koja kao parametar prima putanju do slike na disku. Učitana slika <i>img</i> je zapravo NumPy matrica, čije dimenzije zavise od same prirode slike. Ako je slika u boji, onda je <i>img</i> trodimenzionalna matrica, čije su prve dve dimenzije visina i širina slike, a treća dimenzija je veličine 3, zato što ona predstavlja boju (RGB, po jedan segment za svaku osnonvu boju). ``` import numpy as np import cv2 # OpenCV biblioteka import matplotlib import matplotlib.pyplot as plt # iscrtavanje slika i grafika unutar samog browsera %matplotlib inline # prikaz vecih slika matplotlib.rcParams['figure.figsize'] = 16,12 img = cv2.imread('images/girl.jpg') # ucitavanje slike sa diska img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # konvertovanje iz BGR u RGB model boja (OpenCV ucita sliku kao BGR) plt.imshow(img) # prikazivanje slike ``` ### Prikazivanje dimenzija slike ``` print(img.shape) # shape je property Numpy array-a za prikaz dimenzija ``` Obratiti pažnju da slika u boji ima 3 komponente za svaki piksel na slici - R (red), G (green) i B (blue). ![images/cat_rgb.png](images/cat_rgb.png) ``` img ``` Primetite da je svaki element matrice **uint8** (unsigned 8-bit integer), odnosno celobroja vrednost u interval [0, 255]. ``` img.dtype ``` ### Osnovne operacije pomoću NumPy Predstavljanje slike kao NumPy array je vrlo korisna stvar, jer omogućava jednostavnu manipulaciju i izvršavanje osnovih operacija nad slikom. #### Isecanje (crop) ``` img_crop = img[100:200, 300:600] # prva koordinata je po visini (formalno red), druga po širini (formalo kolona) plt.imshow(img_crop) ``` #### Okretanje (flip) ``` img_flip_h = img[:, ::-1] # prva koordinata ostaje ista, a kolone se uzimaju unazad plt.imshow(img_flip_h) img_flip_v = img[::-1, :] # druga koordinata ostaje ista, a redovi se uzimaju unazad plt.imshow(img_flip_v) img_flip_c = img[:, :, ::-1] # možemo i izmeniti redosled boja (RGB->BGR), samo je pitanje koliko to ima smisla plt.imshow(img_flip_c) ``` #### Invertovanje ``` img_inv = 255 - img # ako su pikeli u intervalu [0,255] ovo je ok, a ako su u intervalu [0.,1.] onda bi bilo 1. - img plt.imshow(img_inv) ``` ### Konvertovanje iz RGB u "grayscale" Konvertovanjem iz RGB modela u nijanse sivih (grayscale) se gubi informacija o boji piksela na slici, ali sama slika postaje mnogo lakša za dalju obradu. Ovo se može uraditi na više načina: 1. **Srednja vrednost** RGB komponenti - najjednostavnija varijanta $$ G = \frac{R+G+B}{3} $$ 2. **Metod osvetljenosti** - srednja vrednost najjače i najslabije boje $$ G = \frac{max(R,G,B) + min(R,G,B)}{2} $$ 3. **Metod perceptivne osvetljenosti** - težinska srednja vrednost koja uzima u obzir ljudsku percepciju (npr. 
najviše smo osetljivi na zelenu boju, pa to treba uzeti u obzir)$$ G = 0.21*R + 0.72*G + 0.07*B $$ ``` # implementacija metode perceptivne osvetljenosti def my_rgb2gray(img_rgb): img_gray = np.ndarray((img_rgb.shape[0], img_rgb.shape[1])) # zauzimanje memorije za sliku (nema trece dimenzije) img_gray = 0.21*img_rgb[:, :, 0] + 0.77*img_rgb[:, :, 1] + 0.07*img_rgb[:, :, 2] img_gray = img_gray.astype('uint8') # u prethodnom koraku smo mnozili sa float, pa sada moramo da vratimo u [0,255] opseg return img_gray img_gray = my_rgb2gray(img) plt.imshow(img_gray, 'gray') # kada se prikazuje slika koja nije RGB, obavezno je staviti 'gray' kao drugi parametar ``` Ipak je najbolje se držati implementacije u **OpenCV** biblioteci :). ``` img_gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) img_gray.shape plt.imshow(img_gray, 'gray') img_gray ``` ### Binarna slika Slika čiji pikseli imaju samo dve moguće vrednosti: crno i belo. U zavisnosti da li interval realan (float32) ili celobrojan (uint8), ove vrednosti mogu biti {0,1} ili {0,255}. U binarnoj slici često izdvajamo ono što nam je bitno (**foreground**), od ono što nam je nebitno (**background**). Formalnije, ovaj postupak izdvajanja bitnog od nebitnog na slici nazivamo **segmentacija**. Najčešći način dobijanja binarne slike je korišćenje nekog praga (**threshold**), pa ako je vrednost piksela veća od zadatog praga taj piksel dobija vrednost 1, u suprotnom 0. Postoji više tipova threshold-ovanja: 1. Globalni threshold - isti prag se primenjuje na sve piksele 2. Lokalni threshold - različiti pragovi za različite delove slike 3. Adaptivni threshold - prag se ne određuje ručno (ne zadaje ga čovek), već kroz neki postupak. Može biti i globalni i lokalni. #### Globalni threshold Kako izdvojiti npr. samo lice? ``` img_tr = img_gray > 127 # svi piskeli koji su veci od 127 ce dobiti vrednost True, tj. 1, i obrnuto plt.imshow(img_tr, 'gray') ``` OpenCV ima metodu <b>threshold</b> koja kao prvi parametar prima sliku koja se binarizuje, kao drugi parametar prima prag binarizacije, treći parametar je vrednost rezultujućeg piksela ako je veći od praga (255=belo), poslednji parametar je tip thresholda (u ovo slučaju je binarizacija). ``` ret, image_bin = cv2.threshold(img_gray, 100, 255, cv2.THRESH_BINARY) # ret je vrednost praga, image_bin je binarna slika print(ret) plt.imshow(image_bin, 'gray') ``` #### Otsu threshold <a href="https://en.wikipedia.org/wiki/Otsu%27s_method">Otsu metoda</a> se koristi za automatsko pronalaženje praga za threshold na slici. ``` ret, image_bin = cv2.threshold(img_gray, 0, 255, cv2.THRESH_OTSU) # ret je izracunata vrednost praga, image_bin je binarna slika print("Otsu's threshold: " + str(ret)) plt.imshow(image_bin, 'gray') ``` #### Adaptivni threshold U nekim slučajevima primena globalnog praga za threshold ne daje dobre rezultate. Dobar primer su slike na kojima se menja osvetljenje, gde globalni threshold praktično uništi deo slike koji je previše osvetljen ili zatamnjen. Adaptivni threshold je drugačiji pristup, gde se za svaki piksel na slici izračunava zaseban prag, na osnovu njemu okolnnih piksela. <a href="https://docs.opencv.org/master/d7/d4d/tutorial_py_thresholding.html#gsc.tab=0">Primer</a> ``` image_ada = cv2.imread('images/sonnet.png') image_ada = cv2.cvtColor(image_ada, cv2.COLOR_BGR2GRAY) plt.imshow(image_ada, 'gray') ret, image_ada_bin = cv2.threshold(image_ada, 100, 255, cv2.THRESH_BINARY) plt.imshow(image_ada_bin, 'gray') ``` Loši rezultati su dobijeni upotrebom globalnog thresholda. 
Poboljšavamo rezultate korišćenjem adaptivnog thresholda. Pretposlednji parametar metode <b>adaptiveThreshold</b> je ključan, jer predstavlja veličinu bloka susednih piksela (npr. 15x15) na osnovnu kojih se računa lokalni prag. ``` # adaptivni threshold gde se prag racuna = srednja vrednost okolnih piksela image_ada_bin = cv2.adaptiveThreshold(image_ada, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 15, 5) plt.figure() # ako je potrebno da se prikaze vise slika u jednoj celiji plt.imshow(image_ada_bin, 'gray') # adaptivni threshold gde se prag racuna = tezinska suma okolnih piksela, gde su tezine iz gausove raspodele image_ada_bin = cv2.adaptiveThreshold(image_ada, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 15, 5) plt.figure() plt.imshow(image_ada_bin, 'gray') ``` ### Histogram Možemo koristiti **histogram**, koji će nam dati informaciju o distribuciji osvetljenosti piksela. Vrlo koristan kada je potrebno odrediti prag za globalni threshold. Pseudo-kod histograma za grayscale sliku: ```code inicijalizovati nula vektor od 256 elemenata za svaki piksel na slici: preuzeti inicijalni intezitet piksela uvecati za 1 broj piksela tog inteziteta plotovati histogram ``` ``` def hist(image): height, width = image.shape[0:2] x = range(0, 256) y = np.zeros(256) for i in range(0, height): for j in range(0, width): pixel = image[i, j] y[pixel] += 1 return (x, y) x,y = hist(img_gray) plt.plot(x, y, 'b') plt.show() ``` Koristeći <b>matplotlib</b>: ``` plt.hist(img_gray.ravel(), 255, [0, 255]) plt.show() ``` Koristeći <b>OpenCV</b>: ``` hist_full = cv2.calcHist([img_gray], [0], None, [255], [0, 255]) plt.plot(hist_full) plt.show() ``` Pretpostavimo da su vrednosti piksela lica između 100 i 200. ``` img_tr = (img_gray > 100) * (img_gray < 200) plt.imshow(img_tr, 'gray') ``` ### Konverovanje iz "grayscale" u RGB Ovo je zapravo trivijalna operacija koja za svaki kanal boje (RGB) napravi kopiju od originalne grayscale slike. Ovo je zgodno kada nešto što je urađeno u grayscale modelu treba iskoristiti zajedno sa RGB slikom. ``` img_tr_rgb = cv2.cvtColor(img_tr.astype('uint8'), cv2.COLOR_GRAY2RGB) plt.imshow(img*img_tr_rgb) # množenje originalne RGB slike i slike sa izdvojenim pikselima lica ``` ### Morfološke operacije Veliki skup operacija za obradu digitalne slike, gde su te operacije zasnovane na oblicima, odnosno **strukturnim elementima**. U morfološkim operacijama, vrednost svakog piksela rezultujuće slike se zasniva na poređenju odgovarajućeg piksela na originalnoj slici sa svojom okolinom. Veličina i oblik ove okoline predstavljaju strukturni element. ``` kernel = np.ones((3, 3)) # strukturni element 3x3 blok print(kernel) ``` #### Erozija Morfološka erozija postavlja vrednost piksela rez. slike na ```(i,j)``` koordinatama na **minimalnu** vrednost svih piksela u okolini ```(i,j)``` piksela na orig. slici. U suštini erozija umanjuje regione belih piksela, a uvećava regione crnih piksela. Često se koristi za uklanjanje šuma (u vidu sitnih regiona belih piksela). ![images/erosion.gif](images/erosion.gif) ``` plt.imshow(cv2.erode(image_bin, kernel, iterations=1), 'gray') ``` #### Dilacija Morfološka dilacija postavlja vrednost piksela rez. slike na ```(i,j)``` koordinatama na **maksimalnu** vrednost svih piksela u okolini ```(i,j)``` piksela na orig. slici. U suštini dilacija uvećava regione belih piksela, a umanjuje regione crnih piksela. Zgodno za izražavanje regiona od interesa. 
![images/dilation.gif](images/dilation.gif) ``` # drugaciji strukturni element kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (5,5)) # MORPH_ELIPSE, MORPH_RECT... print(kernel) plt.imshow(cv2.dilate(image_bin, kernel, iterations=5), 'gray') # 5 iteracija ``` #### Otvaranje i zatvaranje **```otvaranje = erozija + dilacija```**, uklanjanje šuma erozijom i vraćanje originalnog oblika dilacijom. **```zatvaranje = dilacija + erozija```**, zatvaranje sitnih otvora među belim pikselima ``` kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)) print(kernel) img_ero = cv2.erode(image_bin, kernel, iterations=1) img_open = cv2.dilate(img_ero, kernel, iterations=1) plt.imshow(img_open, 'gray') img_dil = cv2.dilate(image_bin, kernel, iterations=1) img_close = cv2.erode(img_dil, kernel, iterations=1) plt.imshow(img_close, 'gray') ``` Primer detekcije ivica na binarnoj slici korišćenjem dilatacije i erozije: ``` kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)) image_edges = cv2.dilate(image_bin, kernel, iterations=1) - cv2.erode(image_bin, kernel, iterations=1) plt.imshow(image_edges, 'gray') ``` ### Zamućenje (blur) Zamućenje slike se dobija tako što se za svaki piksel slike kao nova vrednost uzima srednja vrednost okolnih piksela, recimo u okolini 5 x 5. Kernel <b>k</b> predstavlja kernel za <i>uniformno zamućenje</i>. Ovo je jednostavnija verzija <a href="https://en.wikipedia.org/wiki/Gaussian_blur">Gausovskog zamućenja</a>. <img src="https://render.githubusercontent.com/render/math?math=k%285x5%29%3D%0A%20%20%5Cbegin%7Bbmatrix%7D%0A%20%20%20%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%20%5C%5C%0A%20%20%20%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%20%5C%5C%0A%20%20%20%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%20%5C%5C%0A%20%20%20%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%20%5C%5C%0A%20%20%20%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%20%26amp%3B%201%2F25%0A%20%20%5Cend%7Bbmatrix%7D&mode=display"> ``` from scipy import signal k_size = 5 k = (1./k_size*k_size) * np.ones((k_size, k_size)) image_blur = signal.convolve2d(img_gray, k) plt.imshow(image_blur, 'gray') ``` ### Regioni i izdvajanje regiona Najjednostavnije rečeno, region je skup međusobno povezanih belih piksela. Kada se kaže povezanih, misli se na to da se nalaze u neposrednoj okolini. Razlikuju se dve vrste povezanosti: tzv. **4-connectivity** i **8-connectivity**: ![images/48connectivity.png](images/48connectivity.png) Postupak kojim se izdvajanju/obeležavaju regioni se naziva **connected components labelling**. Ovo ćemo primeniti na problemu izdvajanja barkoda. ``` # ucitavanje slike i convert u RGB img_barcode = cv2.cvtColor(cv2.imread('images/barcode.jpg'), cv2.COLOR_BGR2RGB) plt.imshow(img_barcode) ``` Recimo da želimo da izdvojimo samo linije barkoda sa slike. Za početak, uradimo neke standardne operacije, kao što je konvertovanje u grayscale i adaptivni threshold. 
``` img_barcode_gs = cv2.cvtColor(img_barcode, cv2.COLOR_RGB2GRAY) # konvert u grayscale plt.imshow(img_barcode_gs, 'gray') #ret, image_barcode_bin = cv2.threshold(img_barcode_gs, 80, 255, cv2.THRESH_BINARY) image_barcode_bin = cv2.adaptiveThreshold(img_barcode_gs, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 35, 10) plt.imshow(image_barcode_bin, 'gray') ``` ### Pronalaženje kontura/regiona Konture, odnosno regioni na slici su grubo rečeno grupe crnih piksela. OpenCV metoda <b>findContours</b> pronalazi sve ove grupe crnih piksela, tj. regione. Druga povratna vrednost metode, odnosno <i>contours</i> je lista pronađeih kontura na slici. Ove konture je zaim moguće iscrtati metodom <b>drawContours</b>, gde je prvi parametar slika na kojoj se iscrtavaju pronađene konture, drugi parametar je lista kontura koje je potrebno iscrtati, treći parametar određuje koju konturu po redosledu iscrtati (-1 znači iscrtavanje svih kontura), četvrti parametar je boja kojom će se obeležiti kontura, a poslednji parametar je debljina linije. ``` contours, hierarchy = cv2.findContours(image_barcode_bin, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) img = img_barcode.copy() cv2.drawContours(img, contours, -1, (255, 0, 0), 1) plt.imshow(img) ``` #### Osobine regiona Svi pronađeni regioni imaju neke svoje karakteristične osobine: površina, obim, konveksni omotač, konveksnost, obuhvatajući pravougaonik, ugao... Ove osobine mogu biti izuzetno korisne kada je neophodno izdvojiti samo određene regione sa slike koji ispoljavaju neku osobinu. Za sve osobine pogledati <a href="https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.html">ovo</a> i <a href="https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.html">ovo</a>. Izdvajamo samo bar-kod sa slike. ``` contours_barcode = [] #ovde ce biti samo konture koje pripadaju bar-kodu for contour in contours: # za svaku konturu center, size, angle = cv2.minAreaRect(contour) # pronadji pravougaonik minimalne povrsine koji ce obuhvatiti celu konturu width, height = size if width > 3 and width < 30 and height > 300 and height < 400: # uslov da kontura pripada bar-kodu contours_barcode.append(contour) # ova kontura pripada bar-kodu img = img_barcode.copy() cv2.drawContours(img, contours_barcode, -1, (255, 0, 0), 1) plt.imshow(img) print('Ukupan broj regiona: %d' % len(contours_barcode)) ``` Naravno, u ogromnom broj slučajeva odnos visine i širine neće biti dovoljan, već se moraju koristiti i ostale osobine. ## Zadaci * Sa slike sa sijalicama (**images/bulbs.jpg**) prebrojati koliko ima sijalica. * Sa slike barkoda (**images/barcode.jpg**) izdvojiti samo brojeve i slova, bez linija barkoda. * Na slici sa snouborderima (**images/snowboarders.jpg**) prebrojati koliko ima snoubordera. * Na slici sa fudbalerima (**images/football.jpg**) izdvojiti samo one fudbalere u belim dresovima. * Na slici sa crvenim krvnim zrncima (**images/bloodcells.jpg**), prebrojati koliko ima crvenih krvnih zrnaca.
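As a hedged starting point for the first task, the sketch below strings together the steps shown earlier (grayscale conversion, Otsu threshold, opening, contour filtering). The kernel size and the minimum contour area are assumptions that will need tuning for `images/bulbs.jpg`, and it assumes the bulbs appear brighter than the background:

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

img_bulbs = cv2.cvtColor(cv2.imread('images/bulbs.jpg'), cv2.COLOR_BGR2RGB)
img_bulbs_gs = cv2.cvtColor(img_bulbs, cv2.COLOR_RGB2GRAY)

# Otsu picks the binarization threshold automatically
ret, img_bulbs_bin = cv2.threshold(img_bulbs_gs, 0, 255, cv2.THRESH_OTSU)

# opening (erosion + dilation) removes small white noise regions
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
img_bulbs_open = cv2.dilate(cv2.erode(img_bulbs_bin, kernel, iterations=1), kernel, iterations=1)

# keep only contours large enough to be a bulb (min_area is a guess)
contours, hierarchy = cv2.findContours(img_bulbs_open, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
min_area = 500
bulbs = [c for c in contours if cv2.contourArea(c) > min_area]
print('Broj sijalica: %d' % len(bulbs))
plt.imshow(img_bulbs_open, 'gray')
```

The same recipe (binarize, clean up, filter contours by size/shape) carries over to the other tasks; only the threshold strategy and the contour criteria change.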
## <center>Ensemble models from machine learning: an example of wave runup and coastal dune erosion</center> ### <center>Tomas Beuzen<sup>1</sup>, Evan B. Goldstein<sup>2</sup>, Kristen D. Splinter<sup>1</sup></center> <center><sup>1</sup>Water Research Laboratory, School of Civil and Environmental Engineering, UNSW Sydney, NSW, Australia</center> <center><sup>2</sup>Department of Geography, Environment, and Sustainability, University of North Carolina at Greensboro, Greensboro, NC, USA</center> This notebook contains the code required to develop the Gaussian Process (GP) runup predictor developed in the manuscript "*Ensemble models from machine learning: an example of wave runup and coastal dune erosion*" by Beuzen et al. **Citation:** Beuzen, T, Goldstein, E.B., Splinter, K.S. (In Review). Ensemble models from machine learning: an example of wave runup and coastal dune erosion, Natural Hazards and Earth Systems Science, SI Advances in computational modeling of geoprocesses and geohazards. ### Table of Contents: 1. [Imports](#bullet-0) 2. [Load and Visualize Data](#bullet-1) 3. [Develop GP Runup Predictor](#bullet-2) 4. [Test GP Runup Predictor](#bullet-3) 5. [Explore GP Prediction Uncertainty](#bullet-4) ## 1. Imports <a class="anchor" id="bullet-0"></a> ``` # Required imports # Standard computing packages import numpy as np import pandas as pd import matplotlib.pyplot as plt # Gaussian Process tools from sklearn.metrics import mean_squared_error from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.gaussian_process.kernels import RBF, WhiteKernel # Notebook functionality %matplotlib inline ``` ## 2. Load and Visualize Data <a class="anchor" id="bullet-1"></a> In this section, we will load and visualise the wave, beach slope, and runup data we will use to develop the Gaussian process (GP) runup predictor. 
``` # Read in .csv data file as a pandas dataframe df = pd.read_csv('../data_repo_temporary/lidar_runup_data_for_GP_training.csv',index_col=0) # Print the size and head of the dataframe print('Data size:', df.shape) df.head() # This cell plots histograms of the data # Initialize the figure and axes fig, axes = plt.subplots(2,2,figsize=(6,6)) plt.tight_layout(w_pad=0.1, h_pad=3) # Subplot (0,0): Hs ax = axes[0,0] ax.hist(df.Hs,28,color=(0.6,0.6,0.6),edgecolor='k',lw=0.5) # Plot histogram ax.set_xlabel('H$_s$ (m)') # Format plot ax.set_ylabel('Frequency') ax.set_xticks((0,1.5,3,4.5)) ax.set_xlim((0,4.5)) ax.set_ylim((0,50)) ax.grid(lw=0.5,alpha=0.7) ax.text(-1.1, 52, 'A)', fontsize=12) ax.tick_params(direction='in') ax.set_axisbelow(True) # Subplot (0,1): Tp ax = axes[0,1] ax.hist(df.Tp,20,color=(0.6,0.6,0.6),edgecolor='k',lw=0.5) # Plot histogram ax.set_xlabel('T$_p$ (s)') # Format plot ax.set_xticks((0,6,12,18)) ax.set_xlim((0,18)) ax.set_ylim((0,50)) ax.set_yticklabels([]) ax.grid(lw=0.5,alpha=0.7) ax.text(-2.1, 52, 'B)', fontsize=12) ax.tick_params(direction='in') ax.set_axisbelow(True) # Subplot (1,0): beta ax = axes[1,0] ax.hist(df.beach_slope,20,color=(0.6,0.6,0.6),edgecolor='k',lw=0.5) # Plot histogram ax.set_xlabel(r'$\beta$') # Format plot ax.set_ylabel('Frequency') ax.set_xticks((0,0.1,0.2,0.3)) ax.set_xlim((0,0.3)) ax.set_ylim((0,50)) ax.grid(lw=0.5,alpha=0.7) ax.text(-0.073, 52, 'C)', fontsize=12) ax.tick_params(direction='in') ax.set_axisbelow(True) # Subplot (1,1): R2 ax = axes[1,1] ax.hist(df.runup,24,color=(0.9,0.2,0.2),edgecolor='k',lw=0.5) # Plot histogram ax.set_xlabel('R$_2$ (m)') # Format plot ax.set_xticks((0,1,2,3)) ax.set_xlim((0,3)) ax.set_ylim((0,50)) ax.set_yticklabels([]) ax.grid(lw=0.5,alpha=0.7) ax.text(-0.35, 52, 'D)', fontsize=12) ax.tick_params(direction='in') ax.set_axisbelow(True); ``` ## 3. Develop GP Runup Predictor <a class="anchor" id="bullet-2"></a> In this section we will develop the GP runup predictor. We standardize the data for use in the GP by removing the mean and scaling to unit variance. This does not really affect GP performance but improves computational efficiency (see sklearn documentation for more information). A kernel must be specified to develop the GP. Many kernels were trialled in initial GP development. The final kernel is a combination of the RBF and WhiteKernel. See **Section 2.1** and **Section 2.2** of the manuscript for further discussion. ``` # Define features and response data X = df.drop(columns=df.columns[-1]) # Drop the last column to retain input features (Hs, Tp, slope) y = df[[df.columns[-1]]] # The last column is the predictand (R2) ``` ``` # Specify the kernel to use in the GP kernel = RBF(0.1, (1e-2, 1e2)) + WhiteKernel(1,(1e-2,1e2)) # Train GP model on training dataset gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9, normalize_y=True, random_state=123) gp.fit(X, y); ``` ## 4. Test GP Runup Predictor <a class="anchor" id="bullet-3"></a> This section now shows how the GP runup predictor can be used to test 50 test samples not previosuly used in training. 
``` # Read in .csv test data file as a pandas dataframe df_test = pd.read_csv('../data_repo_temporary/lidar_runup_data_for_GP_testing.csv',index_col=0) # Print the size and head of the dataframe print('Data size:', df_test.shape) df_test.head() # Predict the data X_test = df_test.drop(columns=df.columns[-1]) # Drop the last column to retain input features (Hs, Tp, slope) y_test = df_test[[df_test.columns[-1]]] # The last column is the predictand (R2) y_test_predictions = gp.predict(X_test) print('GP RMSE on test data =', format(np.sqrt(mean_squared_error(y_test,y_test_predictions)),'.2f')) # This cell plots a figure comparing GP predictions to observations for the testing dataset # Similar to Figure 4 in the manuscript # Initialize the figure and axes fig, axes = plt.subplots(figsize=(6,6)) plt.tight_layout(pad=2.2) # Plot and format axes.scatter(y_test,y_test_predictions,s=20,c='b',marker='.') axes.plot([0,4],[0,4],'k--') axes.set_ylabel('Predicted R$_2$ (m)') axes.set_xlabel('Observed R$_2$ (m)') axes.grid(lw=0.5,alpha=0.7) axes.set_xlim(0,1.5) axes.set_ylim(0,1.5) # Print some statistics print('GP RMSE on test data =', format(np.sqrt(mean_squared_error(y_test,y_test_predictions)),'.2f')) print('GP bias on test data =', format(np.mean(y_test_predictions-y_test.values),'.2f')) ``` ## 5. Explore GP Prediction Uncertainty <a class="anchor" id="bullet-3"></a> This section explores how we can draw random samples from the GP to explain scatter in the runup predictions. We randomly draw 100 samples from the GP and calculate how much of the scatter in the runup predictions is captured by the ensemble envelope for different ensemble sizes. The process is repeated 100 times for robustness. See **Section 3.3** of the manuscript for further discussion. We then plot the prediction with prediction uncertainty to help visualize. 
``` # Draw 100 samples from the GP model using the testing dataset GP_draws = gp.sample_y(X_test, n_samples=100, random_state=123).squeeze() # Draw 100 random samples from the GP # Initialize result arrays perc_ens = np.zeros((100,100)) # Initialize ensemble capture array perc_err = np.zeros((100,)) # Initialise arbitray error array # Loop to get results for i in range(0,perc_ens.shape[0]): # Caclulate capture % in envelope created by adding arbitrary, uniform error to mean GP prediction lower = y_test_predictions*(1-i/100) # Lower bound upper = y_test_predictions*(1+i/100) # Upper bound perc_err[i] = sum((np.squeeze(y_test)>=np.squeeze(lower)) & (np.squeeze(y_test)<=np.squeeze(upper)))/y_test.shape[0] # Store percent capture for j in range(0,perc_ens.shape[1]): ind = np.random.randint(0,perc_ens.shape[0],i+1) # Determine i random integers lower = np.min(GP_draws[:,ind],axis=1) # Lower bound of ensemble of i random members upper = np.max(GP_draws[:,ind],axis=1) # Upper bound of ensemble of i random members perc_ens[i,j] = sum((np.squeeze(y_test)>=lower) & (np.squeeze(y_test)<=upper))/y_test.shape[0] # Store percent capture # This cell plots a figure showing how samples from the GP can help to capture uncertainty in predictions # Similar to Figure 5 from the manuscript # Initialize the figure and axes fig, axes = plt.subplots(1,2,figsize=(9,4)) plt.tight_layout() lim = 0.95 # Desired limit to test # Plot ensemble results ax = axes[0] perc_ens_mean = np.mean(perc_ens,axis=1) ax.plot(perc_ens_mean*100,'k-',lw=2) ind = np.argmin(abs(perc_ens_mean-lim)) # Find where the capture rate > lim ax.plot([ind,ind],[0,perc_ens_mean[ind]*100],'r--') ax.plot([0,ind],[perc_ens_mean[ind]*100,perc_ens_mean[ind]*100],'r--') ax.set_xlabel('# Draws from GP') ax.set_ylabel('Observations captured \n within ensemble range (%)') ax.grid(lw=0.5,alpha=0.7) ax.minorticks_on() ax.set_xlim(0,100); ax.set_ylim(0,100); ax.text(-11.5, 107, 'A)', fontweight='bold', fontsize=12) print('# draws needed for ' + format(lim*100,'.0f') + '% capture = ' + str(ind)) print('Mean/Min/Max for ' + str(ind) + ' draws = ' + format(np.mean(perc_ens[ind,:])*100,'.1f') + '%/' + format(np.min(perc_ens[ind,:])*100,'.1f') + '%/' + format(np.max(perc_ens[ind,:])*100,'.1f') + '%') # Plot arbitrary error results ax = axes[1] ax.plot(perc_err*100,'k-',lw=2) ind = np.argmin(abs(perc_err-lim)) # Find where the capture rate > lim ax.plot([ind,ind],[0,perc_err[ind]*100],'r--') ax.plot([0,ind],[perc_err[ind]*100,perc_err[ind]*100],'r--') ax.set_xlabel('% Error added to mean GP estimate') ax.grid(lw=0.5,alpha=0.7) ax.minorticks_on() ax.set_xlim(0,100); ax.set_ylim(0,100); ax.text(-11.5, 107, 'B)', fontweight='bold', fontsize=12) print('% added error needed for ' + format(lim*100,'.0f') + '% capture = ' + str(ind) + '%') # This cell plots predictions for all 50 test samples with prediction uncertainty from 12 ensemble members. # In the cell above, 12 members was identified as optimal for capturing 95% of observations. 
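# GP_draws (drawn earlier with gp.sample_y) has shape (n_test_samples, 100); only its first 12 columns are used as the ensemble below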
# Initialize the figure and axes fig, axes = plt.subplots(1,1,figsize=(10,6)) # Make some data for plotting x = np.arange(1, len(y_test)+1) lower = np.min(GP_draws[:,:12],axis=1) # Lower bound of ensemble of 12 random members upper = np.max(GP_draws[:,:12],axis=1) # Upper bound of ensemble of 12 random members # Plot axes.plot(x,y_test,'o',linestyle='-',color='C0',mfc='C0',mec='k',zorder=10,label='Observed') axes.plot(x,y_test_predictions,'k',marker='o',color='C1',mec='k',label='GP Ensemble Mean') axes.fill_between(x, lower, upper, alpha=0.2, facecolor='C1', label='GP Ensemble Range') # Formatting axes.set_xlim(0,50) axes.set_ylim(0,2.5) axes.set_xlabel('Observation') axes.set_ylabel('R2 (m)') axes.grid() axes.legend(framealpha=1) ```
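Not part of the original analysis, but as a quick illustration of the point-wise uncertainty the trained GP exposes, the sketch below queries the predictive mean and standard deviation for one hypothetical input. The values are arbitrary, and it assumes the training feature order is `Hs`, `Tp`, `beach_slope` as in the dataframe above:

```python
# A single hypothetical condition: Hs = 2.0 m, Tp = 10 s, beach slope = 0.1 (arbitrary values)
X_new = pd.DataFrame([[2.0, 10.0, 0.1]], columns=X.columns)

# sklearn's GaussianProcessRegressor can return the predictive standard deviation directly
y_mean, y_std = gp.predict(X_new, return_std=True)
print('Predicted R2 =', format(float(np.squeeze(y_mean)), '.2f'),
      'm +/-', format(float(np.squeeze(y_std)), '.2f'), 'm (1 std)')
```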
``` import torch from torch import nn import torch.nn.functional as F from torchvision import datasets, transforms import matplotlib.pyplot as plt import numpy as np # Define a transform to normalize the data - change the range of values in the image [histogram stretch] transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ]) # Download and load the training data trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) #batch_size up indicates that 64 images are taken at a time from the data loader #create iterator to go through images detaiter = iter(trainloader) images,labels = detaiter.next() print(type(images)) print(images.shape)#64 images per batch, 1 color channel and 28 x 28 pixels print(labels.shape)#1 label per image #plot a sample image plt.imshow(images[1].numpy().squeeze(),cmap= 'Greys_r'); #Need to convert the current tensor to vector such that one image is a vector of (1 * 28 * 28) or (784) #This will lead to each batch of 64 having size 784 => (64, 784) #Source: https://www.aiworkbox.com/lessons/flatten-a-pytorch-tensor flattened = images.view(images.shape[0],-1) print(flattened) ##Random sampling creation ops are contained in torch.randn #Iterator in Python is simply an object that can be iterated upon. #An object which will return data, one element at a time. #images.shape[0] returns the dimensions o fhte array => basically take the first batch of images and flatten #Use sigmoid for activation layer """"Same as before we start by defining the activation function""" #Define the sigmoid activtion function def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) #Sample solution for flattening images inputs = images.view(images.shape[0],-1) #change the size of the image iterator based on the first image and flatten #Create the weights and bias parameters #build network with 784 input units, 256 hidden units and 10 output units for weights and biases #Input layer w1 = torch.randn(784,256) b1 = torch.randn(256) #hidden layer w2 = torch.randn(256,10) b2 = torch.randn(10) #Connected layers h = activation(torch.mm(inputs,w1)+b1) out = torch.mm(h,w2)+b2 print(out.shape) #Define the softmax activation function to obtain a probability distribution of the result def softmax(x): return torch.exp(x)/torch.sum(torch.exp(x), dim=1).view(-1, 1) probabilities = softmax(out) #Setting dim=0 takes the sum across the rows while dim=1 takes the sum across the columns. #=> Sanity checks # Does it have the right shape? Should be (64, 10) print(probabilities.shape) # Does it sum to 1? print(probabilities.sum(dim=1)) from torch import nn class Network(nn.Module): def __init__(self): super().__init__() # Inputs to hidden layer linear transformation self.hidden = nn.Linear(784, 256) # Output layer, 10 units - one for each digit self.output = nn.Linear(256, 10) # Define sigmoid activation and softmax output self.sigmoid = nn.Sigmoid() self.softmax = nn.Softmax(dim=1) def forward(self, x): # Pass the input tensor through each of our operations x = self.hidden(x) x = self.sigmoid(x) x = self.output(x) x = self.softmax(x) return x ``` Let's go through this bit by bit. ```python class Network(nn.Module): ``` Here we're inheriting from `nn.Module`. 
Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything. ```python self.hidden = nn.Linear(784, 256) ``` This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`. ```python self.output = nn.Linear(256, 10) ``` Similarly, this creates another linear transformation with 256 inputs and 10 outputs. ```python self.sigmoid = nn.Sigmoid() self.softmax = nn.Softmax(dim=1) ``` Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns. ```python def forward(self, x): ``` PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method. ```python x = self.hidden(x) x = self.sigmoid(x) x = self.output(x) x = self.softmax(x) ``` Here the input tensor `x` is passed through each operation a reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method. Now we can create a `Network` object. ``` #Text version of model architecture model = Network() model # #Common way to define model using PyTorch # import torch.nn.functional as F # class Network(nn.Module): # def __init__(self): # super().__init__() # # Inputs to hidden layer linear transformation # self.hidden = nn.Linear(784, 256) # # Output layer, 10 units - one for each digit # self.output = nn.Linear(256, 10) # def forward(self, x): # # Hidden layer with sigmoid activation # x = F.sigmoid(self.hidden(x)) # # Output layer with softmax activation # x = F.softmax(self.output(x), dim=1) # return x from torch import nn import torch.nn.functional as F #Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, #then a hidden layer with 64 units and a ReLU activation, #and finally an output layer with a softmax activation as shown above. 
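# One possible solution, using torch.nn.functional (imported as F) for the ReLU and softmax activations: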
class Network(nn.Module): def __init__(self): super().__init__() # Defining the layers, 128, 64, 10 units each self.fc1 = nn.Linear(784, 128) self.fc2 = nn.Linear(128, 64) # Output layer, 10 units - one for each digit self.fc3 = nn.Linear(64, 10) def forward(self, x): ''' Forward pass through the network, returns the output logits ''' x = self.fc1(x) x = F.relu(x) x = self.fc2(x) x = F.relu(x) x = self.fc3(x) x = F.softmax(x, dim=1) return x model1 = Network() model1 # Import necessary packages %matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import torch import helper import matplotlib.pyplot as plt # Hyperparameters for our network input_size = 784 hidden_sizes = [128, 64] output_size = 10 # Build a feed-forward network model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]), nn.ReLU(), nn.Linear(hidden_sizes[0], hidden_sizes[1]), nn.ReLU(), nn.Linear(hidden_sizes[1], output_size), nn.Softmax(dim=1)) print(model) # Forward pass through the network and display output images, labels = next(iter(trainloader)) images.resize_(images.shape[0], 1, 784) ps = model.forward(images[0,:]) helper.view_classify(images[0].view(1, 28, 28), ps) ```
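If the positional indexing of a plain `nn.Sequential` gets hard to read, you can also pass an `OrderedDict` so every layer gets a name. The cell below is an illustrative sketch rather than part of the original exercise: it rebuilds the same 784-128-64-10 architecture and sanity-checks the forward pass with a random tensor instead of a batch from `trainloader`.
```
from collections import OrderedDict
import torch
from torch import nn

# Same architecture as above, but each layer gets a readable name
model_named = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(784, 128)),
    ('relu1', nn.ReLU()),
    ('fc2', nn.Linear(128, 64)),
    ('relu2', nn.ReLU()),
    ('output', nn.Linear(64, 10)),
    ('softmax', nn.Softmax(dim=1))
]))
print(model_named)

# Named layers are attributes, just like net.hidden.weight earlier
print(model_named.fc1.weight.shape)   # torch.Size([128, 784])
print(model_named.fc1.bias.shape)     # torch.Size([128])

# Check the forward pass with a fake batch of flattened images
fake_batch = torch.randn(64, 784)
ps = model_named(fake_batch)
print(ps.shape)           # torch.Size([64, 10])
print(ps.sum(dim=1)[:5])  # each row sums to 1
```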
``` !wget https://LoanStats_2019Q1.csv.zip !wget https://LoanStats_2019Q2.csv.zip !wget https://LoanStats_2019Q3.csv.zip !wget https://LoanStats_2019Q4.csv.zip //LoanStats_2020Q1.csv.zip import numpy as np import pandas as pd from pathlib import Path from collections import Counter from sklearn.model_selection import train_test_split columns = [ "loan_amnt", "int_rate", "installment", "home_ownership", "annual_inc", "verification_status", "pymnt_plan", "dti", "delinq_2yrs", "inq_last_6mths", "open_acc", "pub_rec", "revol_bal", "total_acc", "initial_list_status", "out_prncp", "out_prncp_inv", "total_pymnt", "total_pymnt_inv", "total_rec_prncp", "total_rec_int", "total_rec_late_fee", "recoveries", "collection_recovery_fee", "last_pymnt_amnt", "collections_12_mths_ex_med", "policy_code", "application_type", "acc_now_delinq", "tot_coll_amt", "tot_cur_bal", "open_acc_6m", "open_act_il", "open_il_12m", "open_il_24m", "mths_since_rcnt_il", "total_bal_il", "il_util", "open_rv_12m", "open_rv_24m", "max_bal_bc", "all_util", "total_rev_hi_lim", "inq_fi", "total_cu_tl", "inq_last_12m", "acc_open_past_24mths", "avg_cur_bal", "bc_open_to_buy", "bc_util", "chargeoff_within_12_mths", "delinq_amnt", "mo_sin_old_il_acct", "mo_sin_old_rev_tl_op", "mo_sin_rcnt_rev_tl_op", "mo_sin_rcnt_tl", "mort_acc", "mths_since_recent_bc", "mths_since_recent_inq", "num_accts_ever_120_pd", "num_actv_bc_tl", "num_actv_rev_tl", "num_bc_sats", "num_bc_tl", "num_il_tl", "num_op_rev_tl", "num_rev_accts", "num_rev_tl_bal_gt_0", "num_sats", "num_tl_120dpd_2m", "num_tl_30dpd", "num_tl_90g_dpd_24m", "num_tl_op_past_12m", "pct_tl_nvr_dlq", "percent_bc_gt_75", "pub_rec_bankruptcies", "tax_liens", "tot_hi_cred_lim", "total_bal_ex_mort", "total_bc_limit", "total_il_high_credit_limit", "hardship_flag", "debt_settlement_flag", "loan_status" ] target = "loan_status" # Load the data df1 = pd.read_csv(Path('../Resources/LoanStats_2019Q1.csv.zip'), skiprows=1)[:-2] df2 = pd.read_csv(Path('../Resources/LoanStats_2019Q2.csv.zip'), skiprows=1)[:-2] df3 = pd.read_csv(Path('../Resources/LoanStats_2019Q3.csv.zip'), skiprows=1)[:-2] df4 = pd.read_csv(Path('../Resources/LoanStats_2019Q4.csv.zip'), skiprows=1)[:-2] df = pd.concat([df1, df2, df3, df4]).loc[:, columns].copy() # Drop the null columns where all values are null df = df.dropna(axis='columns', how='all') # Drop the null rows df = df.dropna() # Remove the `Issued` loan status issued_mask = df['loan_status'] != 'Issued' df = df.loc[issued_mask] # convert interest rate to numerical df['int_rate'] = df['int_rate'].str.replace('%', '') df['int_rate'] = df['int_rate'].astype('float') / 100 # Convert the target column values to low_risk and high_risk based on their values x = {'Current': 'low_risk'} df = df.replace(x) x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period'], 'high_risk') df = df.replace(x) low_risk_rows = df[df[target] == 'low_risk'] high_risk_rows = df[df[target] == 'high_risk'] #df = pd.concat([low_risk_rows, high_risk_rows.sample(n=len(low_risk_rows), replace=True)]) df = pd.concat([low_risk_rows.sample(n=len(high_risk_rows), random_state=42), high_risk_rows]) df = df.reset_index(drop=True) df = df.rename({target:'target'}, axis="columns") df df.to_csv('2019loans.csv', index=False) # Load the data validate_df = pd.read_csv(Path('../Resources/LoanStats_2020Q1.csv.zip'), skiprows=1)[:-2] validate_df = validate_df.loc[:, columns].copy() # Drop the null columns where all values are null validate_df = validate_df.dropna(axis='columns', how='all') # 
Drop the null rows validate_df = validate_df.dropna() # Remove the `Issued` loan status issued_mask = validate_df[target] != 'Issued' validate_df = validate_df.loc[issued_mask] # convert interest rate to numerical validate_df['int_rate'] = validate_df['int_rate'].str.replace('%', '') validate_df['int_rate'] = validate_df['int_rate'].astype('float') / 100 # Convert the target column values to low_risk and high_risk based on their values x = dict.fromkeys(['Current', 'Fully Paid'], 'low_risk') validate_df = validate_df.replace(x) x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period', 'Charged Off'], 'high_risk') validate_df = validate_df.replace(x) low_risk_rows = validate_df[validate_df[target] == 'low_risk'] high_risk_rows = validate_df[validate_df[target] == 'high_risk'] validate_df = pd.concat([low_risk_rows.sample(n=len(high_risk_rows), random_state=37), high_risk_rows]) validate_df = validate_df.reset_index(drop=True) validate_df = validate_df.rename({target:'target'}, axis="columns") validate_df validate_df.to_csv('2020Q1loans.csv', index=False) ```
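The notebook stops after writing the two CSV files. Purely as a hedged illustration of how they might be consumed downstream (this baseline model is not part of the original preprocessing), the sketch below one-hot encodes the features, trains on the 2019 file, and validates on the 2020 Q1 file.
```
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Assumes the two files written above are in the working directory
train_df = pd.read_csv('2019loans.csv')
test_df = pd.read_csv('2020Q1loans.csv')

# One-hot encode categorical predictors; the target stays as a string label
X_train = pd.get_dummies(train_df.drop(columns='target'))
X_test = pd.get_dummies(test_df.drop(columns='target'))

# The two years can produce different dummy columns, so align the 2020 frame
# to the 2019 columns and fill any gaps with 0
X_test = X_test.reindex(columns=X_train.columns, fill_value=0)

y_train = train_df['target']
y_test = test_df['target']

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print('2019 (train) accuracy  :', clf.score(X_train, y_train))
print('2020 Q1 (holdout) score:', clf.score(X_test, y_test))
```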
``` import os os.environ['CUDA_VISIBLE_DEVICES'] = '' # !git pull import tensorflow as tf import malaya_speech import malaya_speech.train from malaya_speech.train.model import fastspeech2 import numpy as np _pad = 'pad' _start = 'start' _eos = 'eos' _punctuation = "!'(),.:;? " _special = '-' _letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' _rejected = '\'():;"' MALAYA_SPEECH_SYMBOLS = ( [_pad, _start, _eos] + list(_special) + list(_punctuation) + list(_letters) ) input_ids = tf.placeholder(tf.int32, [None, None]) lens = tf.placeholder(tf.int32, [None, None]) mel_outputs = tf.placeholder(tf.float32, [None, None, 80]) mel_lengths = tf.placeholder(tf.int32, [None]) energies = tf.placeholder(tf.float32, [None, None]) energies_lengths = tf.placeholder(tf.int32, [None]) f0s = tf.placeholder(tf.float32, [None, None]) f0s_lengths = tf.placeholder(tf.int32, [None]) config = malaya_speech.config.fastspeech2_config config = fastspeech2.Config( vocab_size = len(MALAYA_SPEECH_SYMBOLS), **config ) model = fastspeech2.Model(config) r_training = model(input_ids, lens, f0s, energies, training = False) speed_ratios = tf.placeholder(tf.float32, [None], name = 'speed_ratios') f0_ratios = tf.placeholder(tf.float32, [None], name = 'f0_ratios') energy_ratios = tf.placeholder(tf.float32, [None], name = 'energy_ratios') r = model.inference(input_ids, speed_ratios, f0_ratios, energy_ratios) r decoder_output = tf.identity(r[0], name = 'decoder_output') post_mel_outputs = tf.identity(r[1], name = 'post_mel_outputs') sess = tf.InteractiveSession() sess.run(tf.global_variables_initializer()) path = 'fastspeech2-haqkiem' ckpt_path = tf.train.latest_checkpoint(path) ckpt_path saver = tf.train.Saver() saver.restore(sess, ckpt_path) import re from unidecode import unidecode import malaya normalizer = malaya.normalize.normalizer(date = False, time = False) pad_to = 8 def tts_encode(string: str, add_eos: bool = True): r = [MALAYA_SPEECH_SYMBOLS.index(c) for c in string if c in MALAYA_SPEECH_SYMBOLS] if add_eos: r = r + [MALAYA_SPEECH_SYMBOLS.index('eos')] return r def put_spacing_num(string): string = re.sub('[A-Za-z]+', lambda ele: ' ' + ele[0] + ' ', string) return re.sub(r'[ ]+', ' ', string).strip() def convert_to_ascii(string): return unidecode(string) def collapse_whitespace(string): return re.sub(_whitespace_re, ' ', string) def cleaning(string, normalize = True, add_eos = False): sequence = [] string = convert_to_ascii(string) if string[-1] in '-,': string = string[:-1] if string[-1] not in '.,?!': string = string + '.' 
string = string.replace('&', ' dan ') string = string.replace(':', ',').replace(';', ',') if normalize: t = normalizer._tokenizer(string) for i in range(len(t)): if t[i] == '-': t[i] = ',' string = ' '.join(t) string = normalizer.normalize(string, check_english = False, normalize_entity = False, normalize_text = False, normalize_url = True, normalize_email = True, normalize_year = True) string = string['normalize'] else: string = string string = put_spacing_num(string) string = ''.join([c for c in string if c in MALAYA_SPEECH_SYMBOLS and c not in _rejected]) string = re.sub(r'[ ]+', ' ', string).strip() string = string.lower() ids = tts_encode(string, add_eos = add_eos) text_input = np.array(ids) num_pad = pad_to - ((len(text_input) + 2) % pad_to) text_input = np.pad( text_input, ((1, 1)), 'constant', constant_values = ((1, 2)) ) text_input = np.pad( text_input, ((0, num_pad)), 'constant', constant_values = 0 ) return string, text_input import matplotlib.pyplot as plt # https://umno-online.my/2020/12/28/isu-kartel-daging-haram-lagi-pihak-gesa-kerajaan-ambil-tindakan-tegas-drastik/ t, ids = cleaning('Haqkiem adalah pelajar tahun akhir yang mengambil Ijazah Sarjana Muda Sains Komputer Kecerdasan Buatan utama dari Universiti Teknikal Malaysia Melaka (UTeM) yang kini berusaha untuk latihan industri di mana dia secara praktikal dapat menerapkan pengetahuannya dalam Perisikan Perisian dan Pengaturcaraan ke arah organisasi atau industri yang berkaitan.') t, ids %%time o = sess.run([decoder_output, post_mel_outputs], feed_dict = {input_ids: [ids], speed_ratios: [1.0], f0_ratios: [1.0], energy_ratios: [1.0]}) o[1].shape mel_outputs_ = np.reshape(o[1], [-1, 80]) fig = plt.figure(figsize=(10, 8)) ax1 = fig.add_subplot(311) ax1.set_title(f'Predicted Mel-before-Spectrogram') im = ax1.imshow(np.rot90(mel_outputs_), aspect='auto', interpolation='none') fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1) plt.show() mel_outputs_ = np.reshape(o[0], [-1, 80]) fig = plt.figure(figsize=(10, 8)) ax1 = fig.add_subplot(311) ax1.set_title(f'Predicted Mel-before-Spectrogram') im = ax1.imshow(np.rot90(mel_outputs_), aspect='auto', interpolation='none') fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1) plt.show() import pickle with open('a.pkl', 'wb') as fopen: pickle.dump([np.reshape(o[0], [-1, 80]), np.reshape(o[1], [-1, 80])], fopen) saver = tf.train.Saver() saver.save(sess, 'fastspeech2-haqkiem-output/model.ckpt') strings = ','.join( [ n.name for n in tf.get_default_graph().as_graph_def().node if ('Variable' in n.op or 'gather' in n.op.lower() or 'Placeholder' in n.name or 'ratios' in n.name or 'post_mel_outputs' in n.name or 'decoder_output' in n.name or 'alignment_histories' in n.name) and 'adam' not in n.name and 'global_step' not in n.name and 'Assign' not in n.name and 'ReadVariableOp' not in n.name and 'Gather' not in n.name and 'IsVariableInitialized' not in n.name ] ) strings.split(',') def freeze_graph(model_dir, output_node_names): if not tf.gfile.Exists(model_dir): raise AssertionError( "Export directory doesn't exists. 
Please specify an export " 'directory: %s' % model_dir ) checkpoint = tf.train.get_checkpoint_state(model_dir) input_checkpoint = checkpoint.model_checkpoint_path absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1]) output_graph = absolute_model_dir + '/frozen_model.pb' clear_devices = True with tf.Session(graph = tf.Graph()) as sess: saver = tf.train.import_meta_graph( input_checkpoint + '.meta', clear_devices = clear_devices ) saver.restore(sess, input_checkpoint) output_graph_def = tf.graph_util.convert_variables_to_constants( sess, tf.get_default_graph().as_graph_def(), output_node_names.split(','), ) with tf.gfile.GFile(output_graph, 'wb') as f: f.write(output_graph_def.SerializeToString()) print('%d ops in the final graph.' % len(output_graph_def.node)) freeze_graph('fastspeech2-haqkiem-output', strings) def load_graph(frozen_graph_filename): with tf.gfile.GFile(frozen_graph_filename, 'rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) with tf.Graph().as_default() as graph: tf.import_graph_def(graph_def) return graph g = load_graph('fastspeech2-haqkiem-output/frozen_model.pb') test_sess = tf.InteractiveSession(graph = g) X = g.get_tensor_by_name('import/Placeholder:0') speed_ratios = g.get_tensor_by_name('import/speed_ratios:0') f0_ratios = g.get_tensor_by_name('import/f0_ratios:0') energy_ratios = g.get_tensor_by_name('import/energy_ratios:0') output_nodes = ['decoder_output', 'post_mel_outputs'] outputs = {n: g.get_tensor_by_name(f'import/{n}:0') for n in output_nodes} %%time o = test_sess.run(outputs, feed_dict = {X: [ids], speed_ratios: [1.0], f0_ratios: [1.0], energy_ratios: [1.0]}) mel_outputs_ = np.reshape(o['decoder_output'], [-1, 80]) fig = plt.figure(figsize=(10, 8)) ax1 = fig.add_subplot(311) ax1.set_title(f'Predicted Mel-before-Spectrogram') im = ax1.imshow(np.rot90(mel_outputs_), aspect='auto', interpolation='none') fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1) plt.show() mel_outputs_ = np.reshape(o['post_mel_outputs'], [-1, 80]) fig = plt.figure(figsize=(10, 8)) ax1 = fig.add_subplot(311) ax1.set_title(f'Predicted Mel-before-Spectrogram') im = ax1.imshow(np.rot90(mel_outputs_), aspect='auto', interpolation='none') fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1) plt.show() from tensorflow.tools.graph_transforms import TransformGraph transforms = ['add_default_attributes', 'remove_nodes(op=Identity, op=CheckNumerics)', 'fold_batch_norms', 'fold_old_batch_norms', 'quantize_weights(fallback_min=-1024, fallback_max=1024)', 'strip_unused_nodes', 'sort_by_execution_order'] pb = 'fastspeech2-haqkiem-output/frozen_model.pb' input_graph_def = tf.GraphDef() with tf.gfile.FastGFile(pb, 'rb') as f: input_graph_def.ParseFromString(f.read()) transformed_graph_def = TransformGraph(input_graph_def, ['Placeholder', 'speed_ratios', 'f0_ratios', 'energy_ratios'], output_nodes, transforms) with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f: f.write(transformed_graph_def.SerializeToString()) g = load_graph('fastspeech2-haqkiem-output/frozen_model.pb.quantized') test_sess = tf.InteractiveSession(graph = g) X = g.get_tensor_by_name(f'import/Placeholder:0') speed_ratios = g.get_tensor_by_name('import/speed_ratios:0') f0_ratios = g.get_tensor_by_name('import/f0_ratios:0') energy_ratios = g.get_tensor_by_name('import/energy_ratios:0') outputs = {n: g.get_tensor_by_name(f'import/{n}:0') for n in output_nodes} %%time o = test_sess.run(outputs, feed_dict = {X: [ids], speed_ratios: [1.0], f0_ratios: [1.0], 
energy_ratios: [1.0]}) mel_outputs_ = np.reshape(o['decoder_output'], [-1, 80]) fig = plt.figure(figsize=(10, 8)) ax1 = fig.add_subplot(311) ax1.set_title(f'Predicted Mel-before-Spectrogram') im = ax1.imshow(np.rot90(mel_outputs_), aspect='auto', interpolation='none') fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1) plt.show() mel_outputs_ = np.reshape(o['post_mel_outputs'], [-1, 80]) fig = plt.figure(figsize=(10, 8)) ax1 = fig.add_subplot(311) ax1.set_title(f'Predicted Mel-before-Spectrogram') im = ax1.imshow(np.rot90(mel_outputs_), aspect='auto', interpolation='none') fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1) plt.show() ```
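One thing the notebook never checks explicitly is what the weight quantization actually bought. The sketch below is a hedged addition, assuming the frozen and quantized `.pb` files and the `a.pkl` dump written above are still on disk: it compares file sizes and measures how far the quantized post-net mel output drifts from the checkpoint-session output.
```
import os
import pickle
import numpy as np

frozen_pb = 'fastspeech2-haqkiem-output/frozen_model.pb'
quantized_pb = frozen_pb + '.quantized'

# Disk footprint before and after quantize_weights
for path in (frozen_pb, quantized_pb):
    print(f'{path}: {os.path.getsize(path) / 1e6:.1f} MB')

# Compare against the checkpoint-session output saved to a.pkl earlier
with open('a.pkl', 'rb') as fopen:
    ckpt_decoder, ckpt_postmel = pickle.load(fopen)

quant_postmel = np.reshape(o['post_mel_outputs'], [-1, 80])
if ckpt_postmel.shape == quant_postmel.shape:
    print('mean |difference| vs checkpoint run:', np.mean(np.abs(ckpt_postmel - quant_postmel)))
else:
    print('output lengths differ:', ckpt_postmel.shape, quant_postmel.shape)
```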
# Subplots ``` %matplotlib notebook import matplotlib.pyplot as plt import numpy as np plt.subplot? plt.figure() # subplot with 1 row, 2 columns, and current axis is 1st subplot axes plt.subplot(1, 2, 1) linear_data = np.array([1,2,3,4,5,6,7,8]) plt.plot(linear_data, '-o') exponential_data = linear_data**2 # subplot with 1 row, 2 columns, and current axis is 2nd subplot axes plt.subplot(1, 2, 2) plt.plot(exponential_data, '-o') # plot exponential data on 1st subplot axes plt.subplot(1, 2, 1) plt.plot(exponential_data, '-x') plt.figure() ax1 = plt.subplot(1, 2, 1) plt.plot(linear_data, '-o') # pass sharey=ax1 to ensure the two subplots share the same y axis ax2 = plt.subplot(1, 2, 2, sharey=ax1) plt.plot(exponential_data, '-x') plt.figure() # the right hand side is equivalent shorthand syntax plt.subplot(1,2,1) == plt.subplot(121) # create a 3x3 grid of subplots fig, ((ax1,ax2,ax3), (ax4,ax5,ax6), (ax7,ax8,ax9)) = plt.subplots(3, 3, sharex=True, sharey=True) # plot the linear_data on the 5th subplot axes ax5.plot(linear_data, '-') # set inside tick labels to visible for ax in plt.gcf().get_axes(): for label in ax.get_xticklabels() + ax.get_yticklabels(): label.set_visible(True) # necessary on some systems to update the plot plt.gcf().canvas.draw() ``` # Histograms ``` # create 2x2 grid of axis subplots fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex=True) axs = [ax1,ax2,ax3,ax4] # draw n = 10, 100, 1000, and 10000 samples from the normal distribution and plot corresponding histograms for n in range(0,len(axs)): sample_size = 10**(n+1) sample = np.random.normal(loc=0.0, scale=1.0, size=sample_size) axs[n].hist(sample) axs[n].set_title('n={}'.format(sample_size)) # repeat with number of bins set to 100 fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex=True) axs = [ax1,ax2,ax3,ax4] for n in range(0,len(axs)): sample_size = 10**(n+1) sample = np.random.normal(loc=0.0, scale=1.0, size=sample_size) axs[n].hist(sample, bins=100) axs[n].set_title('n={}'.format(sample_size)) plt.figure() Y = np.random.normal(loc=0.0, scale=1.0, size=10000) X = np.random.random(size=10000) plt.scatter(X,Y) # use gridspec to partition the figure into subplots import matplotlib.gridspec as gridspec plt.figure() gspec = gridspec.GridSpec(3, 3) top_histogram = plt.subplot(gspec[0, 1:]) side_histogram = plt.subplot(gspec[1:, 0]) lower_right = plt.subplot(gspec[1:, 1:]) Y = np.random.normal(loc=0.0, scale=1.0, size=10000) X = np.random.random(size=10000) lower_right.scatter(X, Y) top_histogram.hist(X, bins=100) s = side_histogram.hist(Y, bins=100, orientation='horizontal') # clear the histograms and plot normed histograms top_histogram.clear() top_histogram.hist(X, bins=100, normed=True) side_histogram.clear() side_histogram.hist(Y, bins=100, orientation='horizontal', normed=True) # flip the side histogram's x axis side_histogram.invert_xaxis() # change axes limits for ax in [top_histogram, lower_right]: ax.set_xlim(0, 1) for ax in [side_histogram, lower_right]: ax.set_ylim(-5, 5) %%HTML <img src='http://educationxpress.mit.edu/sites/default/files/journal/WP1-Fig13.jpg' /> ``` # Box and Whisker Plots ``` import pandas as pd normal_sample = np.random.normal(loc=0.0, scale=1.0, size=10000) random_sample = np.random.random(size=10000) gamma_sample = np.random.gamma(2, size=10000) df = pd.DataFrame({'normal': normal_sample, 'random': random_sample, 'gamma': gamma_sample}) df.describe() plt.figure() # create a boxplot of the normal data, assign the output to a variable to supress output _ = 
plt.boxplot(df['normal'], whis='range') # clear the current figure plt.clf() # plot boxplots for all three of df's columns _ = plt.boxplot([ df['normal'], df['random'], df['gamma'] ], whis='range') plt.figure() _ = plt.hist(df['gamma'], bins=100) import mpl_toolkits.axes_grid1.inset_locator as mpl_il plt.figure() plt.boxplot([ df['normal'], df['random'], df['gamma'] ], whis='range') # overlay axis on top of another ax2 = mpl_il.inset_axes(plt.gca(), width='60%', height='40%', loc=2) ax2.hist(df['gamma'], bins=100) ax2.margins(x=0.5) # switch the y axis ticks for ax2 to the right side ax2.yaxis.tick_right() # if `whis` argument isn't passed, boxplot defaults to showing 1.5*interquartile (IQR) whiskers with outliers plt.figure() _ = plt.boxplot([ df['normal'], df['random'], df['gamma'] ] ) ``` # Heatmaps ``` plt.figure() Y = np.random.normal(loc=0.0, scale=1.0, size=10000) X = np.random.random(size=10000) _ = plt.hist2d(X, Y, bins=25) plt.figure() _ = plt.hist2d(X, Y, bins=100) # add a colorbar legend plt.colorbar() ``` # Animations ``` import matplotlib.animation as animation n = 100 x = np.random.randn(n) # create the function that will do the plotting, where curr is the current frame def update(curr): # check if animation is at the last frame, and if so, stop the animation a if curr == n: a.event_source.stop() plt.cla() bins = np.arange(-4, 4, 0.5) plt.hist(x[:curr], bins=bins) plt.axis([-4,4,0,30]) plt.gca().set_title('Sampling the Normal Distribution') plt.gca().set_ylabel('Frequency') plt.gca().set_xlabel('Value') plt.annotate('n = {}'.format(curr), [3,27]) fig = plt.figure() a = animation.FuncAnimation(fig, update, interval=1000) ``` # Interactivity ``` plt.figure() data = np.random.rand(10) plt.plot(data) def onclick(event): plt.cla() plt.plot(data) plt.gca().set_title('Event at pixels {},{} \nand data {},{}'.format(event.x, event.y, event.xdata, event.ydata)) # tell mpl_connect we want to pass a 'button_press_event' into onclick when the event is detected plt.gcf().canvas.mpl_connect('button_press_event', onclick) from random import shuffle origins = ['China', 'Brazil', 'India', 'USA', 'Canada', 'UK', 'Germany', 'Iraq', 'Chile', 'Mexico'] shuffle(origins) df = pd.DataFrame({'height': np.random.rand(10), 'weight': np.random.rand(10), 'origin': origins}) df plt.figure() # picker=5 means the mouse doesn't have to click directly on an event, but can be up to 5 pixels away plt.scatter(df['height'], df['weight'], picker=5) plt.gca().set_ylabel('Weight') plt.gca().set_xlabel('Height') def onpick(event): origin = df.iloc[event.ind[0]]['origin'] plt.gca().set_title('Selected item came from {}'.format(origin)) # tell mpl_connect we want to pass a 'pick_event' into onpick when the event is detected plt.gcf().canvas.mpl_connect('pick_event', onpick) ```
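A portability note on the histogram cells above: the `normed` keyword was deprecated and later removed from Matplotlib, so on current releases those calls raise an error. The self-contained sketch below shows the drop-in replacement, `density=True`, next to a plain count histogram (it is an illustration, not tied to the gridspec axes above).
```
fig, (ax_counts, ax_density) = plt.subplots(1, 2, figsize=(8, 3))
sample = np.random.normal(loc=0.0, scale=1.0, size=10000)

# Raw bin counts on the y axis
ax_counts.hist(sample, bins=100)
ax_counts.set_title('default: counts')

# Probability density on the y axis (replacement for normed=True)
ax_density.hist(sample, bins=100, density=True)
ax_density.set_title('density=True')
```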
``` %pylab inline %config InlineBackend.figure_format = 'retina' from ipywidgets import interact import scipy import scipy.special ``` # Question #1 Assume that $f(\cdot)$ is an infinitely smooth and continuous scalar function. Suppose that $a\in \mathbb{R}$ is a given constant in the domain of the function $f$ and that $h>0$ is a given parameter assumed to be small. Consider the following numerical approximation of a first derivative, $$ f'(a) \approx c_h(a) = \frac{f(a+h) - f(a - h)}{2h}.$$ A. Use a Taylor's series expansion of the function $f$ around $a$ to show that the approximation error is $O(h^2)$ provided that $f'''(a) \neq 0$. B. What happens to the error if $f'''(a) = 0$? ------------------------------------- ## Solution A. The absolute error is $$\mathcal{E}_{\rm abs} = \left \vert\frac{f(x+h) - f(x - h)}{2h} - f'(x) \right \vert. $$ To derive the error, we expand our function in a Taylor's series, with $$ f(a \pm h) = f(a) \pm h f'(a) + \frac{h^2}{2}f''(a) \pm \frac{h^3}{6} f'''(a) + O(h^4) $$ Substituting the Taylor's series into the absolute error yields \begin{align*} \mathcal{E}_{\rm abs} &= \left \vert \frac{1}{2h}\left(hf'(a) + \frac{h^2}{2}f''(a) + \frac{h^3}{6} f'''(a) + O(h^4) + hf'(a) - \frac{h^2}{2}f''(a) + \frac{h^3}{6} f'''(a) - O(h^4)\right) \right \vert \\ &= \left \vert f'(a) + \frac{h^2}{6}f'''(a) + O(h^4) - f'(a)\right \vert \\ &= \left \vert \frac{h^2}{6}f'''(a) + O(h^4) \right \vert \\ &= \frac{h^2}{6}\left \vert f'''(a)\right \vert + O(h^4) \end{align*} B. The next nonzero term in the Taylor's series expansion of the error is $O(h^4)$, namely $$ \frac{h^4}{5!}f^{(5)}(a). $$ Note that the $O(h^3)$ cancels out. # Question #2 Use Example 2 in the Week 2 Jupyter notebook as a starting point. Copy the code and paste it into a new cell (you should be using a copy of the Week 2 notebook or a new notebook). A. Compute the derivative approximation derived in Q1 for the function $f(x) = \sin(x)$ at the point $x=1.2$ for a range of values $10^{-20} \leq h \leq 10^{-1}$. $$$$ B. Compute the absolute error between the approximation and the exact derivative for a range of values $10^{-20} \leq h \leq 10^{-1}$. (For parts A and B, turn in a screen shot of your code.) C. Create a plot of the absolute error. Add a plot of the discretization error that you derived in Q1. Is the derivative approximation that you derived in Q2 more accurate than the approximation used in Example 2? ``` x0 = 1.2 f0 = sin(x0) fp = cos(x0) fpp = -sin(x0) fppp = -cos(x0) i = linspace(-20, 0, 40) h = 10.0**i fp_approx = (sin(x0 + h) - f0)/h fp_center_diff_approx = (sin(x0 + h) - sin(x0 - h))/(2*h) err = absolute(fp - fp_approx) err2 = absolute(fp - fp_center_diff_approx) d_err = h/2*absolute(fpp) d2_err = h**2/6*absolute(fppp) figure(1, [7, 5]) loglog(h, err, '-*') loglog(h, err2, '-*') loglog(h, d_err, 'r-', label=r'$\frac{h}{2}\vert f^{\prime\prime}(x) \vert $') loglog(h, d2_err, label=r'$\frac{h^2}{6}\vert f^{\prime\prime\prime}(x) \vert $') xlabel('h', fontsize=20) ylabel('absolute error', fontsize=20) ylim(1e-15, 1) legend(fontsize=24); ``` The centered difference formula is more accurate for (roughly) $h>10^{-8}$. After the cancelation error takes over, the two errors are roughly comparable.
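As a small follow-up to part C, the standalone sketch below (plain NumPy rather than `%pylab` globals) wraps the centered difference in a function and estimates its order of accuracy from the slope of the log-log error curve. The step sizes are deliberately kept above the region where cancellation error takes over, so the fitted slope should come out close to 2.
```
import numpy as np

def central_diff(f, a, h):
    """Second-order centered-difference approximation of f'(a)."""
    return (f(a + h) - f(a - h)) / (2.0 * h)

a = 1.2
hs = np.logspace(-4, -1, 7)   # truncation error dominates round-off in this range
errs = np.abs(central_diff(np.sin, a, hs) - np.cos(a))

# Slope of log(error) vs log(h) estimates the order of accuracy
slope = np.polyfit(np.log(hs), np.log(errs), 1)[0]
print('estimated order of accuracy:', round(slope, 2))   # expect roughly 2
```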
# Building and using data schemas for computer vision

This tutorial illustrates how to use raymon profiling to guard image quality in your production system.

The image data is taken from [Kaggle](https://www.kaggle.com/ravirajsinh45/real-life-industrial-dataset-of-casting-product) and is courtesy of PILOT TECHNOCAST, Shapar, Rajkot. Commercial use of this data is not permitted, but we have received permission to use this data in our tutorials.

Note that some outputs may not work when viewing on Github since they are shown in iframes. We recommend cloning this repo and executing the notebooks locally.
```
%load_ext autoreload
%autoreload 2
from PIL import Image
from pathlib import Path
```
First, let's load some data. In this tutorial, we'll take the example of quality inspection in manufacturing. The purpose of our system may be to determine whether a manufactured part passes the required quality checks. These checks may measure the roundness of the part, the smoothness of the edges, the smoothness of the part overall, and so on. Let's assume you have automated those checks with an ML-based system. What we demonstrate here is how you can easily set up quality checks on the incoming data, like whether the image is sharp enough and whether it is similar enough to the data the model was trained on. Checks like this matter because people's actions, periodic maintenance, and wear and tear may have an impact on what data exactly is sent to your system. If your data changes, your system may keep running but will suffer from reduced performance, resulting in lower business value.
```
DATA_PATH = Path("../raymon/tests/sample_data/castinginspection/ok_front/")
LIM = 150

def load_data(dpath, lim):
    files = dpath.glob("*.jpeg")
    images = []
    for n, fpath in enumerate(files):
        if n == lim:
            break
        img = Image.open(fpath)
        images.append(img)
    return images

loaded_data = load_data(dpath=DATA_PATH, lim=LIM)
loaded_data[0]
```
## Constructing and building a profile
For this tutorial, we'll construct a profile that checks the image sharpness and calculates an outlier score on the image. This way, we hope to get alerts when something seems off with the input data. Just like in the case of structured data, we need to start by specifying a profile and its components.
```
from raymon import ModelProfile, InputComponent
from raymon.profiling.extractors.vision import Sharpness, DN2AnomalyScorer

profile = ModelProfile(
    name="casting-inspection",
    version="0.0.1",
    components=[
        InputComponent(name="sharpness", extractor=Sharpness()),
        InputComponent(name="outlierscore", extractor=DN2AnomalyScorer(k=16))
    ],
)
profile.build(input=loaded_data)

## Inspect the schema
profile.view(poi=loaded_data[-1], mode="external")
```
## Use the profile to check new data
We can save the schema to JSON, load it again (in your production system), and use it to validate incoming data.
```
profile.save(".")
profile = ModelProfile.load("[email protected]")
tags = profile.validate_input(loaded_data[-1])
tags
```
As you can see, all the extracted feature values are returned. This is useful when you want to track feature distributions on your monitoring backend (which is what happens on the Raymon.ai platform). Also note that these features are not necessarily the ones going into your ML model.
## Corrupting inputs
Let's see what happens when we blur an image.
```
from PIL import ImageFilter

img_blur = loaded_data[-1].copy().filter(ImageFilter.GaussianBlur(radius=5))
img_blur
profile.validate_input(img_blur)
```
As you can see, every feature extractor now gives rise to two tags: one carrying the feature value and one flagging a schema error, indicating that the datum has failed both sanity checks. Awesome. We can visualize this datum while inspecting the profile.
```
profile.view(poi=img_blur, mode="external")
```
As we can see, the calculated feature values are far outside the ranges seen during training. Having alerting set up for this is crucial for delivering reliable systems.
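To build some intuition for why blurring trips the sharpness check, here is a small standalone sketch using only PIL and NumPy. The gradient-variance measure below is an illustrative proxy chosen for this tutorial, not the `Sharpness` extractor raymon actually ships, but it moves in the same direction as the blur radius grows.
```
import numpy as np
from PIL import ImageFilter

def sharpness_proxy(img):
    """Crude sharpness proxy: variance of the gradient magnitude of the grayscale image."""
    arr = np.asarray(img.convert('L'), dtype=float)
    gy, gx = np.gradient(arr)
    return float(np.var(np.hypot(gx, gy)))

original = loaded_data[-1]
print(f'no blur       : {sharpness_proxy(original):.2f}')
for radius in (1, 2, 5, 10):
    blurred = original.copy().filter(ImageFilter.GaussianBlur(radius=radius))
    print(f'blur radius {radius:>2}: {sharpness_proxy(blurred):.2f}')
```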
```
pip install pandas
pip install numpy
pip install scikit-learn
pip install matplotlib

from sklearn import cluster
from sklearn.cluster import KMeans
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt

df = pd.read_csv("sample_stocks.csv")
df
df.describe()
df.head()
df.info()

# x = df['returns']
# idx = np.argsort(x)
# dividen = df['dividendyield']
# plt.figure(figsize = (20, 7))

# Plot the scatter of returns vs dividendyield
plt.scatter(df["returns"], df["dividendyield"])
plt.show()

# Normalize the data
from sklearn.preprocessing import StandardScaler
normalize = StandardScaler()
x = pd.DataFrame(normalize.fit_transform(df))

# Plot the scatter again
plt.scatter(x[0], x[1])
plt.show()

# Create and train the KMeans model
from sklearn import cluster
kmeans = cluster.KMeans(n_clusters = 2)
kmeans = kmeans.fit(x)

# Plot the scatter together with KMeans.cluster_centers_
# "c" is used in the first call because it accepts: color, sequence, or sequence of colors
plt.scatter(x[0], x[1], c = kmeans.labels_, cmap = "viridis_r")
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], color = "blue")
plt.show()

# Analyze K using the Elbow method
inertia = []
for i in range(1, 15):
    kmeans = KMeans(n_clusters = i)
    kmeans = kmeans.fit(x)
    inertia.append(kmeans.inertia_)
    print(kmeans.inertia_)

plt.plot(range(1, 15), inertia, "bx-")
plt.show()
```
# Hierarchical clustering
```
# required imports
from sklearn.cluster import AgglomerativeClustering
from scipy.cluster.hierarchy import dendrogram

# Implement hierarchical clustering
modelo = AgglomerativeClustering(distance_threshold = 0, n_clusters = None, linkage = "single")
modelo.fit_predict(x)
# clusters.children_

# Plot the dendrogram
def plot_dendrogram(modelo, **kwargs):
    counts = np.zeros(modelo.children_.shape[0])
    n_samples = len(modelo.labels_)
    for i, merge in enumerate(modelo.children_):
        current_count = 0
        for child_index in merge:
            if child_index < n_samples:
                current_count += 1
            else:
                current_count += counts[child_index - n_samples]
        counts[i] = current_count
    linkage_matrix = np.column_stack([modelo.children_, modelo.distances_, counts]).astype(float)
    dendrogram(linkage_matrix, **kwargs)

plot_dendrogram(modelo, truncate_mode = 'level', p = 12)
plt.show()

# DBSCAN
# https://scikit-learn.org/stable/modules/generated/sklearn.cluster.dbscan.html?highlight=dbscan#sklearn.cluster.dbscan
from sklearn.cluster import DBSCAN
dbscan = DBSCAN(eps = .5, min_samples = 15).fit(x)
# This form of clustering was left unfinished
```
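The last cell above notes that the DBSCAN step was left unfinished. As a hedged sketch of one way to complete it (reusing the fitted `dbscan` estimator and the normalized frame `x` from earlier), the labels can be inspected and plotted just like the K-means result; points labelled `-1` are treated as noise.
```
import numpy as np

labels = dbscan.labels_
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
n_noise = int(np.sum(labels == -1))
print(f'clusters found: {n_clusters}, noise points: {n_noise}')

plt.scatter(x[0], x[1], c=labels, cmap='viridis_r')
plt.title('DBSCAN on the normalized data')
plt.show()
```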
This notebook demonstrates how to perform regression analysis using scikit-learn and the watson-machine-learning-client package. Some familiarity with Python is helpful. This notebook is compatible with Python 3.7. You will use the sample data set, **sklearn.datasets.load_boston** which is available in scikit-learn, to predict house prices. ## Learning goals In this notebook, you will learn how to: - Load a sample data set from ``scikit-learn`` - Explore data - Prepare data for training and evaluation - Create a scikit-learn pipeline - Train and evaluate a model - Store a model in the Watson Machine Learning (WML) repository - Deploy a model as Core ML ## Contents 1. [Set up the environment](#setup) 2. [Load and explore data](#load) 3. [Build a scikit-learn linear regression model](#model) 4. [Set up the WML instance and save the model in the WML repository](#upload) 5. [Deploy the model via Core ML](#deploy) 6. [Clean up](#cleanup) 7. [Summary and next steps](#summary) <a id="setup"></a> ## 1. Set up the environment Before you use the sample code in this notebook, you must perform the following setup tasks: - Contact with your Cloud Pack for Data administrator and ask him for your account credentials ### Connection to WML Authenticate the Watson Machine Learning service on IBM Cloud Pack for Data. You need to provide platform `url`, your `username` and `password`. ``` username = 'PASTE YOUR USERNAME HERE' password = 'PASTE YOUR PASSWORD HERE' url = 'PASTE THE PLATFORM URL HERE' wml_credentials = { "username": username, "password": password, "url": url, "instance_id": 'openshift', "version": '3.5' } ``` ### Install and import the `ibm-watson-machine-learning` package **Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener no referrer">here</a>. ``` !pip install -U ibm-watson-machine-learning from ibm_watson_machine_learning import APIClient client = APIClient(wml_credentials) ``` ### Working with spaces First of all, you need to create a space that will be used for your work. If you do not have space already created, you can use `{PLATFORM_URL}/ml-runtime/spaces?context=icp4data` to create one. - Click New Deployment Space - Create an empty space - Go to space `Settings` tab - Copy `space_id` and paste it below **Tip**: You can also use SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd3.5/notebooks/python_sdk/instance-management/Space%20management.ipynb). **Action**: Assign space ID below ``` space_id = 'PASTE YOUR SPACE ID HERE' ``` You can use `list` method to print all existing spaces. ``` client.spaces.list(limit=10) ``` To be able to interact with all resources available in Watson Machine Learning, you need to set **space** which you will be using. ``` client.set.default_space(space_id) ``` <a id="load"></a> ## 2. Load and explore data The sample data set contains boston house prices. The data set can be found <a href="https://archive.ics.uci.edu/ml/machine-learning-databases/housing/" target="_blank" rel="noopener no referrer">here</a>. In this section, you will learn how to: - [2.1 Explore Data](#dataset) - [2.2 Check the correlations between predictors and the target](#corr) ### 2.1 Explore data<a id="dataset"></a> In this subsection, you will perform exploratory data analysis of the boston house prices data set. 
``` !pip install --upgrade scikit-learn==0.23.1 seaborn import sklearn from sklearn import datasets import pandas as pd boston_data = datasets.load_boston() ``` Let's check the names of the predictors. ``` print(boston_data.feature_names) ``` **Tip:** Run `print(boston_data.DESCR)` to view a detailed description of the data set. ``` print(boston_data.DESCR) ``` Create a pandas DataFrame and display some descriptive statistics. ``` boston_pd = pd.DataFrame(boston_data.data) boston_pd.columns = boston_data.feature_names boston_pd['PRICE'] = boston_data.target ``` The describe method generates summary statistics of numerical predictors. ``` boston_pd.describe() ``` ### 2.2 Check the correlations between predictors and the target<a id="corr"></a> ``` import seaborn as sns %matplotlib inline corr_coeffs = boston_pd.corr() sns.heatmap(corr_coeffs, xticklabels=corr_coeffs.columns, yticklabels=corr_coeffs.columns); ``` <a id="model"></a> ## 3. Build a scikit-learn linear regression model In this section, you will learn how to: - [3.1 Split data](#prep) - [3.2 Create a scikit-learn pipeline](#pipe) - [3.3 Train the model](#train) ### 3.1 Split data<a id="prep"></a> In this subsection, you will split the data set into: - Train data set - Test data set ``` from sklearn.model_selection import train_test_split X = boston_pd.drop('PRICE', axis = 1) y = boston_pd['PRICE'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.33, random_state = 5) print('Number of training records: ' + str(X_train.shape[0])) print('Number of test records: ' + str(X_test.shape[0])) ``` Your data has been successfully split into two data sets: - The train data set, which is the largest group, is used for training. - The test data set will be used for model evaluation and is used to test the model. ### 3.2 Create a scikit-learn pipeline<a id="pipe"></a> In this subsection, you will create a scikit-learn pipeline. First, import the scikit-learn machine learning packages that are needed in the subsequent steps. ``` from sklearn.pipeline import Pipeline from sklearn import preprocessing from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error ``` Standardize the features by removing the mean and by scaling to unit variance. ``` scaler = preprocessing.StandardScaler() ``` Next, define the regressor you want to use. This notebook uses the Linear Regression model. ``` lr = LinearRegression() ``` Build the pipeline. A pipeline consists of a transformer (Standard Scaler) and an estimator (Linear Regression model). ``` pipeline = Pipeline([('scaler', scaler), ('lr', lr)]) ``` ### 3.3 Train the model<a id="train"></a> Now, you can use the **pipeline** and **train data** you defined previously to train your SVM model. ``` model = pipeline.fit(X_train, y_train) ``` Check the model quality. ``` y_pred = model.predict(X_test) mse = sklearn.metrics.mean_squared_error(y_test, y_pred) print('MSE: ' + str(mse)) ``` Plot the scatter plot of prices vs. predicted prices. ``` import matplotlib.pyplot as plt plt.style.use('ggplot') plt.title('Predicted prices vs prices') plt.ylabel('Prices') plt.xlabel('Predicted prices') plot = plt.scatter(y_pred, y_test) ``` **Note:** You can tune your model to achieve better accuracy. To keep this example simple, the tuning section is omitted. <a id="upload"></a> ## 4. Save the model in the WML repository In this section, you will learn how to use the common Python client to manage your model in the WML repository. 
``` sofware_spec_uid = client.software_specifications.get_id_by_name("default_py3.7") metadata = { client.repository.ModelMetaNames.NAME: 'Boston house price', client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.23', client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sofware_spec_uid } published_model = client.repository.store_model( model=model, meta_props=metadata, training_data=X_train, training_target=y_train) model_uid = client.repository.get_model_uid(published_model) ``` #### Get information about all of the models in the WML repository. ``` models_details = client.repository.list_models() ``` <a id="deploy"></a> ## 5. Deploy the model via Core ML In this section, you will learn how to use the WML client to create a **virtual** deployment via the `Core ML`. You will also learn how to use `download_url` to download a Core ML model for your <a href="https://developer.apple.com/xcode/" target="_blank" rel="noopener no referrer">Xcode</a> project. - [5.1 Create a virtual deployment for the model](#create) - [5.2 Download the Core ML file from the deployment](#getdeploy) - [5.3 Test the CoreML model](#testcoreML) ### 5.1 Create a virtual deployment for the model<a id="create"></a> ``` metadata = { client.deployments.ConfigurationMetaNames.NAME: "Virtual deployment of Boston model", client.deployments.ConfigurationMetaNames.VIRTUAL: {"export_format": "coreml"} } created_deployment = client.deployments.create(model_uid, meta_props=metadata) ``` Now, you can define and print the download endpoint. You can use this endpoint to download the Core ML model. ### 5.2 Download the `Core ML` file from the deployment<a id="getdeploy"></a> ``` client.deployments.list() ``` <a id="score"></a> #### Download the virtual deployment content: Core ML model. ``` deployment_uid = client.deployments.get_uid(created_deployment) deployment_content = client.deployments.download(deployment_uid) ``` Use the code in the cell below to create the download link. ``` from ibm_watson_machine_learning.utils import create_download_link create_download_link(deployment_content) ``` **Note:** You can use <a href="https://developer.apple.com/xcode/" target="_blank" rel="noopener no referrer">Xcode</a> to preview the model's metadata (after unzipping). ### 5.3 Test the `Core ML` model<a id="testcoreML"></a> Use the following steps to run a test against the downloaded Core ML model. ``` !pip install --upgrade coremltools ``` Use the ``coremltools`` to load the model and check some basic metadata. First, extract the model. ``` from ibm_watson_machine_learning.utils import extract_mlmodel_from_archive extracted_model_path = extract_mlmodel_from_archive('mlartifact.tar.gz', model_uid) ``` Load the model and check the description. ``` import coremltools loaded_model = coremltools.models.MLModel(extracted_model_path) print(loaded_model.get_spec()) ``` The model looks good and can be used on your iPhone now. <a id="cleanup"></a> ## 6. Clean up If you want to clean up all created assets: - experiments - trainings - pipelines - model definitions - models - functions - deployments please follow up this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd3.5/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb). <a id="summary"></a> ## 7. Summary and next steps You successfully completed this notebook! You learned how to use scikit-learn to create a Core ML model. 
If you are interested in a sample Swift application (for iOS), please visit <a href="https://github.com/IBM/watson-machine-learning-samples/tree/master/cloud/applications/go-digits" target="_blank" rel="noopener noreferrer">here</a>. Check out our <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/wml-setup.html" target="_blank" rel="noopener noreferrer">Online Documentation</a> for more samples, tutorials, documentation, how-tos, and blog posts.

### Authors

**Lukasz Cmielowski**, Ph.D., is a Lead Data Scientist at IBM developing enterprise-level applications that substantially increase clients' ability to turn data into actionable knowledge.

**Jihyoung Kim**, Ph.D., is a Data Scientist at IBM who strives to make data science easy for everyone through Watson Studio.

Copyright © 2020, 2021 IBM. This notebook and its source code are released under the terms of the MIT License.
# Copyright Netherlands eScience Center <br> ** Function : Computing AMET with Surface & TOA flux** <br> ** Author : Yang Liu ** <br> ** First Built : 2019.08.09 ** <br> ** Last Update : 2019.09.09 ** <br> Description : This notebook aims to compute AMET with TOA/surface flux fields from NorESM model. The NorESM model is launched by NERSC in Blue Action Work Package 3 as coordinated experiments for joint analysis. It contributes to the Deliverable 3.1. <br> Return Values : netCDF4 <br> Caveat : The fields used here are post-processed monthly mean fields. Hence there is no accumulation that need to be taken into account.<br> The **positive sign** for each variable varies:<br> * Latent heat flux (LHF) - downward <br> * Sensible heat flux (SHF) - downward <br> * Net solar radiation flux at TOA (NTopSol & UTopSol) - downward <br> * Net solar radiation flux at surface (NSurfSol) - downward <br> * Net longwave radiation flux at surface (NSurfTherm) - downward <br> * Net longwave radiation flux at TOA (OLR) - downward <br> ``` %matplotlib inline import numpy as np import sys sys.path.append("/home/ESLT0068/NLeSC/Computation_Modeling/Bjerknes/Scripts/META") import scipy as sp import pygrib import time as tttt from netCDF4 import Dataset,num2date import os import meta.statistics import meta.visualizer # constants constant = {'g' : 9.80616, # gravititional acceleration [m / s2] 'R' : 6371009, # radius of the earth [m] 'cp': 1004.64, # heat capacity of air [J/(Kg*K)] 'Lv': 2264670, # Latent heat of vaporization [J/Kg] 'R_dry' : 286.9, # gas constant of dry air [J/(kg*K)] 'R_vap' : 461.5, # gas constant for water vapour [J/(kg*K)] } ################################ Input zone ###################################### # specify starting and ending time start_year = 1979 end_year = 2013 # specify data path datapath = '/home/ESLT0068/WorkFlow/Core_Database_BlueAction_WP3/MPIESM_MPI' # specify output path for figures output_path = '/home/ESLT0068/WorkFlow/Core_Database_BlueAction_WP3/AMET_netCDF' # ensemble number ensemble = 10 # experiment number exp = 4 # example file #datapath_example = os.path.join(datapath, 'SHF', 'Amon2d_amip_bac_rg_1_SHF_1979-2013.grb') #datapath_example = os.path.join(datapath, 'LHF', 'Amon2d_amip_bac_rg_1_LHF_1979-2013.grb') #datapath_example = os.path.join(datapath, 'NSurfSol', 'Amon2d_amip_bac_rg_1_NSurfSol_1979-2014.grb') #datapath_example = os.path.join(datapath, 'NTopSol', 'Amon2d_amip_bac_rg_1_DTopSol_1979-2014.grb') #datapath_example = os.path.join(datapath, 'UTopSol', 'Amon2d_amip_bac_rg_1_UTopSol_1979-2014.grb') #datapath_example = os.path.join(datapath, 'NSurfTherm', 'Amon2d_amip_bac_rg_1_NSurfTherm_1979-2014.grb') datapath_example = os.path.join(datapath, 'OLR', 'Amon2d_amip_bac_rg_1_OLR_1979-2014.grb') #################################################################################### def var_key_retrieve(datapath, exp_num, ensemble_num): # get the path to each datasets print ("Start retrieving datasets of experiment {} ensemble number {}".format(exp_num+1, ensemble_num)) # get data path if exp_num == 0 : # exp 1 datapath_slhf = os.path.join(datapath, 'LHF', 'Amon2d_amip_bac_rg_{}_LHF_1979-2013.grb'.format(ensemble_num)) datapath_sshf = os.path.join(datapath, 'SHF', 'Amon2d_amip_bac_rg_{}_SHF_1979-2013.grb'.format(ensemble_num)) datapath_ssr = os.path.join(datapath, 'NSurfSol', 'Amon2d_amip_bac_rg_{}_NSurfSol_1979-2014.grb'.format(ensemble_num)) datapath_str = os.path.join(datapath, 'NSurfTherm', 'Amon2d_amip_bac_rg_{}_NSurfTherm_1979-2014.grb'.format(ensemble_num)) 
datapath_tsr_in = os.path.join(datapath, 'NTopSol', 'Amon2d_amip_bac_rg_{}_DTopSol_1979-2014.grb'.format(ensemble_num)) datapath_tsr_out = os.path.join(datapath, 'UTopSol', 'Amon2d_amip_bac_rg_{}_UTopSol_1979-2014.grb'.format(ensemble_num)) datapath_ttr = os.path.join(datapath, 'OLR', 'Amon2d_amip_bac_rg_{}_OLR_1979-2014.grb'.format(ensemble_num)) elif exp_num == 1: datapath_slhf = os.path.join(datapath, 'LHF', 'Amon2d_amip_bac_exp{}_rg_{}_LHF_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_sshf = os.path.join(datapath, 'SHF', 'Amon2d_amip_bac_exp{}_rg_{}_SHF_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_ssr = os.path.join(datapath, 'NSurfSol', 'Amon2d_amip_bac_exp{}_rg_{}_NSurfSol_1979-2014.grb'.format(exp_num+1, ensemble_num)) datapath_str = os.path.join(datapath, 'NSurfTherm', 'Amon2d_amip_bac_exp{}_rg_{}_NSurfTherm_1979-2014.grb'.format(exp_num+1, ensemble_num)) datapath_tsr_in = os.path.join(datapath, 'NTopSol', 'Amon2d_amip_bac_exp{}_rg_{}_DTopSol_1979-2014.grb'.format(exp_num+1, ensemble_num)) datapath_tsr_out = os.path.join(datapath, 'UTopSol', 'Amon2d_amip_bac_exp{}_rg_{}_UTopSol_1979-2014.grb'.format(exp_num+1, ensemble_num)) datapath_ttr = os.path.join(datapath, 'OLR', 'Amon2d_amip_bac_exp{}_rg_{}_OLR_1979-2014.grb'.format(exp_num+1, ensemble_num)) else: datapath_slhf = os.path.join(datapath, 'LHF', 'Amon2d_amip_bac_exp{}_rg_{}_LHF_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_sshf = os.path.join(datapath, 'SHF', 'Amon2d_amip_bac_exp{}_rg_{}_SHF_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_ssr = os.path.join(datapath, 'NSurfSol', 'Amon2d_amip_bac_exp{}_rg_{}_NSurfSol_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_str = os.path.join(datapath, 'NSurfTherm', 'Amon2d_amip_bac_exp{}_rg_{}_NSurfTherm_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_tsr_in = os.path.join(datapath, 'NTopSol', 'Amon2d_amip_bac_exp{}_rg_{}_DTopSol_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_tsr_out = os.path.join(datapath, 'UTopSol', 'Amon2d_amip_bac_exp{}_rg_{}_UTopSol_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_ttr = os.path.join(datapath, 'OLR', 'Amon2d_amip_bac_exp{}_rg_{}_OLR_1979-2013.grb'.format(exp_num+1, ensemble_num)) # get the variable keys grbs_slhf = pygrib.open(datapath_slhf) grbs_sshf = pygrib.open(datapath_sshf) grbs_ssr = pygrib.open(datapath_ssr) grbs_str = pygrib.open(datapath_str) grbs_tsr_in = pygrib.open(datapath_tsr_in) grbs_tsr_out = pygrib.open(datapath_tsr_out) grbs_ttr = pygrib.open(datapath_ttr) print ("Retrieving datasets successfully and return the variable key!") return grbs_slhf, grbs_sshf, grbs_ssr, grbs_str, grbs_tsr_in, grbs_tsr_out, grbs_ttr def amet(grbs_slhf, grbs_sshf, grbs_ssr, grbs_str, grbs_tsr_in, grbs_tsr_out, grbs_ttr, period_1979_2013, lat, lon): # get all the varialbes # make sure we know the sign of all the input variables!!! 
# ascending lat var_slhf = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) # surface latent heat flux W/m2 var_sshf = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) # surface sensible heat flux W/m2 var_ssr = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) var_str = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) var_tsr_in = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) var_tsr_out = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) var_ttr = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) # load data counter = 1 for i in np.arange(len(period_1979_2013)*12): key_slhf = grbs_slhf.message(counter) key_sshf = grbs_sshf.message(counter) key_ssr = grbs_ssr.message(counter) key_str = grbs_str.message(counter) key_tsr_in = grbs_tsr_in.message(counter) key_tsr_out = grbs_tsr_out.message(counter) key_ttr = grbs_ttr.message(counter) var_slhf[i,:,:] = key_slhf.values var_sshf[i,:,:] = key_sshf.values var_ssr[i,:,:] = key_ssr.values var_str[i,:,:] = key_str.values var_tsr_in[i,:,:] = key_tsr_in.values var_tsr_out[i,:,:] = key_tsr_out.values var_ttr[i,:,:] = key_ttr.values # counter update counter +=1 #size of the grid box dx = 2 * np.pi * constant['R'] * np.cos(2 * np.pi * lat / 360) / len(lon) dy = np.pi * constant['R'] / len(lat) # calculate total net energy flux at TOA/surface net_flux_surf = var_slhf + var_sshf + var_ssr + var_str net_flux_toa = var_tsr_in + var_tsr_out + var_ttr net_flux_surf_area = np.zeros(net_flux_surf.shape, dtype=float) # unit W net_flux_toa_area = np.zeros(net_flux_toa.shape, dtype=float) grbs_slhf.close() grbs_sshf.close() grbs_ssr.close() grbs_str.close() grbs_tsr_in.close() grbs_tsr_out.close() grbs_ttr.close() for i in np.arange(len(lat)): # change the unit to terawatt net_flux_surf_area[:,i,:] = net_flux_surf[:,i,:]* dx[i] * dy / 1E+12 net_flux_toa_area[:,i,:] = net_flux_toa[:,i,:]* dx[i] * dy / 1E+12 # take the zonal integral of flux net_flux_surf_int = np.sum(net_flux_surf_area,2) / 1000 # PW net_flux_toa_int = np.sum(net_flux_toa_area,2) / 1000 # AMET as the residual of net flux at TOA & surface AMET_res_ERAI = np.zeros(net_flux_surf_int.shape) for i in np.arange(len(lat)): AMET_res_ERAI[:,i] = -(np.sum(net_flux_toa_int[:,0:i+1],1) - np.sum(net_flux_surf_int[:,0:i+1],1)) AMET_res_ERAI = AMET_res_ERAI.reshape(-1,12,len(lat)) return AMET_res_ERAI def create_netcdf_point (pool_amet, lat, output_path, exp): print ('*******************************************************************') print ('*********************** create netcdf file*************************') print ('*******************************************************************') #logging.info("Start creating netcdf file for the 2D fields of ERAI at each grid point.") # get the basic dimensions ens, year, month, _ = pool_amet.shape # wrap the datasets into netcdf file # 'NETCDF3_CLASSIC', 'NETCDF3_64BIT', 'NETCDF4_CLASSIC', and 'NETCDF4' data_wrap = Dataset(os.path.join(output_path, 'amet_MPIESM_MPI_exp{}.nc'.format(exp+1)),'w',format = 'NETCDF4') # create dimensions for netcdf data ens_wrap_dim = data_wrap.createDimension('ensemble', ens) year_wrap_dim = data_wrap.createDimension('year', year) month_wrap_dim = data_wrap.createDimension('month', month) lat_wrap_dim = data_wrap.createDimension('latitude', len(lat)) # create coordinate variable ens_wrap_var = data_wrap.createVariable('ensemble',np.int32,('ensemble',)) year_wrap_var = 
data_wrap.createVariable('year',np.int32,('year',)) month_wrap_var = data_wrap.createVariable('month',np.int32,('month',)) lat_wrap_var = data_wrap.createVariable('latitude',np.float32,('latitude',)) # create the actual 4d variable amet_wrap_var = data_wrap.createVariable('amet',np.float64,('ensemble','year','month','latitude'),zlib=True) # global attributes data_wrap.description = 'Monthly mean atmospheric meridional energy transport' # variable attributes lat_wrap_var.units = 'degree_north' amet_wrap_var.units = 'PW' amet_wrap_var.long_name = 'atmospheric meridional energy transport' # writing data ens_wrap_var[:] = np.arange(ens) month_wrap_var[:] = np.arange(month)+1 year_wrap_var[:] = np.arange(year)+1979 lat_wrap_var[:] = lat amet_wrap_var[:] = pool_amet # close the file data_wrap.close() print ("The generation of netcdf files is complete!!") if __name__=="__main__": #################################################################### ###### Create time namelist matrix for variable extraction ####### #################################################################### # date and time arrangement # namelist of month and days for file manipulation namelist_month = ['01','02','03','04','05','06','07','08','09','10','11','12'] ensemble_list = ['01','02','03','04','05','06','07','08','09','10', '11','12','13','14','15','16','17','18','19','20', '21','22','23','24','25','26','27','28','29','30',] # index of months period_1979_2013 = np.arange(start_year,end_year+1,1) index_month = np.arange(1,13,1) #################################################################### ###### Extract invariant and calculate constants ####### #################################################################### # get basic dimensions from sample file grbs_example = pygrib.open(datapath_example) key_example = grbs_example.message(1) lats, lons = key_example.latlons() lat = lats[:,0] lon = lons[0,:] grbs_example.close() # get invariant from benchmark file Dim_year_1979_2013 = len(period_1979_2013) Dim_month = len(index_month) Dim_latitude = len(lat) Dim_longitude = len(lon) ############################################# ##### Create space for stroing data ##### ############################################# # loop for calculation for i in range(exp): pool_amet = np.zeros((ensemble,Dim_year_1979_2013,Dim_month,Dim_latitude),dtype = float) for j in range(ensemble): # get variable keys grbs_slhf, grbs_sshf, grbs_ssr, grbs_str, grbs_tsr_in,\ grbs_tsr_out, grbs_ttr = var_key_retrieve(datapath, i, j) # compute amet pool_amet[j,:,:,:] = amet(grbs_slhf, grbs_sshf, grbs_ssr, grbs_str, grbs_tsr_in,\ grbs_tsr_out, grbs_ttr, period_1979_2013, lat, lon) #################################################################### ###### Data Wrapping (NetCDF) ####### #################################################################### # save netcdf create_netcdf_point(pool_amet, lat, output_path, i) print ('Packing AMET is complete!!!') print ('The output is in sleep, safe and sound!!!') ############################################################################ ############################################################################ # first check grbs_example = pygrib.open(datapath_example) key_example = grbs_example.message(1) lats, lons = key_example.latlons() lat = lats[:,0] lon = lons[0,:] print(lat) print(lon) #k = key_example.values #print(k[30:40,330:340]) #print(key_example.unit) # print all the credentials #for i in grbs_example: # print(i) grbs_example.close() # index of months period_1979_2013 = 
np.arange(start_year,end_year+1,1) values = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) counter = 1 grbs_example = pygrib.open(datapath_example) for i in np.arange(len(period_1979_2013)*12): key = grbs_example.message(counter) values[i,:,:] = key.values counter +=1 value_max = np.amax(values) value_min = np.amin(values) print(value_max) print(value_min) ```
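Once the loop above has finished writing the netCDF files, a quick hedged check (not part of the original workflow) is to read one of them back and plot the ensemble-mean, climatological-mean transport profile; the file name and variable layout follow `create_netcdf_point` above.
```
import os
import numpy as np
from netCDF4 import Dataset
import matplotlib.pyplot as plt

nc_path = os.path.join(output_path, 'amet_MPIESM_MPI_exp1.nc')
nc = Dataset(nc_path)
amet = nc.variables['amet'][:]        # (ensemble, year, month, latitude)
lat_out = nc.variables['latitude'][:]
nc.close()

# Average over ensemble members, years and months
amet_mean = np.mean(amet, axis=(0, 1, 2))
plt.plot(lat_out, amet_mean)
plt.xlabel('latitude (degrees north)')
plt.ylabel('AMET (PW)')
plt.title('Ensemble-mean AMET, experiment 1')
plt.show()
```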
``` import numpy as np import sklearn import pandas as pd import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline # Load the Boston Housing Dataset from sklearn from sklearn.datasets import load_boston boston_dataset = load_boston() print(boston_dataset.keys()) print(boston_dataset.DESCR) # Create the dataset boston = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names) boston['MEDV'] = boston_dataset.target boston.head() # Introductory Data Analysis # First, let us make sure there are no missing values or NANs in the dataset print(boston.isnull().sum()) # Next, let us plot the target vaqriable MEDV sns.set(rc={'figure.figsize':(11.7,8.27)}) sns.distplot(boston['MEDV'], bins=30) plt.show() # Finally, let us get the correlation matrix correlation_matrix = boston.corr().round(2) # annot = True to print the values inside the square sns.heatmap(data=correlation_matrix, annot=True) # Let us take few of the features and see how they relate to the target in a 1D plot plt.figure(figsize=(20, 5)) features = ['LSTAT', 'RM','CHAS','NOX','AGE','DIS'] target = boston['MEDV'] for i, col in enumerate(features): plt.subplot(1, len(features) , i+1) x = boston[col] y = target plt.scatter(x, y, marker='o') plt.title(col) plt.xlabel(col) plt.ylabel('MEDV') from sklearn.model_selection import train_test_split X = boston.to_numpy() X = np.delete(X, 13, 1) y = boston['MEDV'].to_numpy() X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=5) print(X_train.shape) print(X_test.shape) print(y_train.shape) print(y_test.shape) # Lets now train the model from sklearn.linear_model import LinearRegression lin_model = LinearRegression() lin_model.fit(X_train, y_train) # Model Evaluation # Lets first evaluate on training set from sklearn.metrics import r2_score def rmse(predictions, targets): return np.sqrt(((predictions - targets) ** 2).mean()) y_pred_train = lin_model.predict(X_train) rmse_train = rmse(y_pred_train, y_train) r2_train = r2_score(y_train, y_pred_train) print("Training RMSE = " + str(rmse_train)) print("Training R2 = " + str(r2_train)) # Let us now evaluate on the test set y_pred_test = lin_model.predict(X_test) rmse_test = rmse(y_pred_test, y_test) r2_test = r2_score(y_test, y_pred_test) print("Test RMSE = " + str(rmse_test)) print("Test R2 = " + str(r2_test)) # Finally, let us see the learnt weights! np.set_printoptions(precision=3) print(lin_model.coef_) # Now, what if we use lesser number of features? # For example, suppose we choose two of the highly correlated features 'LSTAT' and 'RM' X = pd.DataFrame(np.c_[boston['LSTAT'], boston['RM']], columns = ['LSTAT','RM']) y = boston['MEDV'] X = np.array(X) y = np.array(y) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=5) # Training Phase lin_model = LinearRegression() lin_model.fit(X_train, y_train) # Evaluation Phase y_pred_train = lin_model.predict(X_train) rmse_train = rmse(y_pred_train, y_train) r2_train = r2_score(y_train, y_pred_train) print("Training RMSE = " + str(rmse_train)) print("Training R2 = " + str(r2_train)) # Let us now evaluate on the test set y_pred_test = lin_model.predict(X_test) rmse_test = rmse(y_pred_test, y_test) r2_test = r2_score(y_test, y_pred_test) print("Test RMSE = " + str(rmse_test)) print("Test R2 = " + str(r2_test)) ```
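One optional extension, not in the original notebook: a k-fold cross-validation run gives an error estimate that depends less on the particular 80/20 split used above. A minimal sketch using the two-feature `X`, `y` defined in the last cell:

```
# Hedged sketch: 5-fold cross-validation on the two-feature model (LSTAT, RM)
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
import numpy as np

cv_r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring='r2')
cv_neg_mse = cross_val_score(LinearRegression(), X, y, cv=5, scoring='neg_mean_squared_error')
cv_rmse = np.sqrt(-cv_neg_mse)
print("5-fold R2:   %.3f +/- %.3f" % (cv_r2.mean(), cv_r2.std()))
print("5-fold RMSE: %.3f +/- %.3f" % (cv_rmse.mean(), cv_rmse.std()))
```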
``` from IPython.core.display import display, HTML import pandas as pd import numpy as np import copy import os %load_ext autoreload %autoreload 2 import sys sys.path.insert(0,"/local/rankability_toolbox") PATH_TO_RANKLIB='/local/ranklib' from numpy import ix_ import numpy as np D = np.loadtxt(PATH_TO_RANKLIB+"/problem_instances/instances/graphs/NFL-2007-D_Matrix.txt",delimiter=",") Dsmall = D[ix_(np.arange(8),np.arange(8))] Dsmall import pyrankability (6*6-6)/2.-9 import itertools import random from collections import Counter import math D = np.zeros((6,6),dtype=int) for i in range(D.shape[0]): for j in range(i+1,D.shape[0]): D[i,j] = 1 Dtest = np.zeros((6,6),dtype=int) Dtest[0,5] = 1 Dtest[0,4] = 1 Dtest[0,1] = 1 Dtest[1,2] = 1 Dtest[1,3] = 1 Dtest[2,1] = 1 Dtest[3,0] = 1 Dtest[3,5] = 1 Dtest[5,1] = 1 Dtest[5,2] = 1 Dtest[5,4] = 1 D = Dtest target_k = 9 target_p = 12 match_k = [] match_p = [] match_both = [] max_count = 100000 for num_ones in [1]:#[target_k]: possible_inxs = list(set(list(range(D.shape[0]*D.shape[0]))) - set([0,6+1,6+6+2,6+6+6+3,6+6+6+6+4,6+6+6+6+6+5])) n = len(possible_inxs) r = num_ones total = math.factorial(n) / math.factorial(r) / math.factorial(n-r) print(total) count = 0 for one_inxs in itertools.combinations(possible_inxs,num_ones): count += 1 if count > max_count: print("reached max") break if count % 100 == 0: print(count/total) remaining_inxs = list(set(possible_inxs) - set(one_inxs)) Dcopy = copy.copy(D) for ix in one_inxs: if Dcopy.flat[ix] == 1: Dcopy.flat[ix] = 0 else: Dcopy.flat[ix] = 1 k,P = pyrankability.exact.find_P_simple(Dcopy) if len(P) != target_p: continue P = np.array(P)+1 d = dict(Counter(P[:,0])) t1 = len(d.values()) vs = list(d.values()) vs.sort() d2 = dict(Counter(P[:,1])) t2 = len(d2.values()) vs2 = list(d2.values()) vs2.sort() if tuple(vs) == (2,10) and tuple(vs2) == (2,2,2,6): print(Dcopy) match_p.append(Dcopy) print('finished') Dcopy k,P = pyrankability.exact.find_P_simple(match_p[-1]) print(k) np.array(P).transpose() match_p Dtest = np.zeros((6,6),dtype=int) Dtest[0,5] = 1 Dtest[0,4] = 1 Dtest[0,1] = 1 Dtest[1,2] = 1 Dtest[1,3] = 1 Dtest[2,1] = 1 #Dtest[3,0] = 1 Dtest[3,5] = 1 Dtest[5,1] = 1 Dtest[5,2] = 1 Dtest[5,4] = 1 k,P = pyrankability.exact.find_P_simple(Dtest) k,P from collections import Counter for Dcopy in [match_p[-1]]: k,P = pyrankability.exact.find_P_simple(Dcopy) P = np.array(P)+1 #t1 = len(dict(Counter(P[:,0])).values()) print("k",k) print(P.transpose()) for i in range(6): d = dict(Counter(P[:,i])) t = list(d.values()) t.sort() print(t) perm = np.array([1,2,5,4,3,6])-1 Dnew = pyrankability.common.permute_D(match_p[-1],perm) rows,cols = np.where(Dnew == 0) inxs = [] for i in range(len(rows)): if rows[i] == cols[i]: continue inxs.append((rows[i],cols[i])) saved = [] for choice in itertools.combinations(inxs,2): Dcopy = copy.copy(Dnew) for item in choice: Dcopy[item[0],item[1]] = 1 k,P = pyrankability.exact.find_P_simple(Dcopy) P = np.array(P)+1 if len(P) == 2 and k == 7: saved.append((Dcopy,choice)) from collections import Counter i = 1 for Dcopy,choice in saved: print("Option",i) k,P = pyrankability.exact.find_P_simple(Dcopy) P = np.array(P)+1 #t1 = len(dict(Counter(P[:,0])).values()) print(Dcopy) print(np.array(choice)+1) print("k",k) print(P.transpose()) i+=1 P_target = [[5,4,1,6,3,2], [5,4,1,6,2,3], [4,5,1,6,2,3], [4,5,1,6,3,2], [4,1,6,3,5,2], [4,1,6,3,2,5], [4,1,6,2,5,3], [4,1,6,2,3,5], [4,1,6,5,2,3], [4,1,6,5,3,2], [4,6,5,1,3,2], [4,6,5,1,2,3] ] for i in range(len(P_target)): P_target[i] = tuple(P_target[i]) for perm in 
P: if tuple(perm) in P_target: print('here') else: print('not') P_target = [[5,4,1,6,3,2], [5,4,1,6,2,3], [4,5,1,6,2,3], [4,5,1,6,3,2], [4,1,6,3,5,2], [4,1,6,3,2,5], [4,1,6,2,5,3], [4,1,6,2,3,5], [4,1,6,5,2,3], [4,1,6,5,3,2], [4,6,5,1,3,2], [4,6,5,1,2,3] ] P_determined = [[4 1 6 2 3 5] [4 1 6 2 5 3] [4 1 6 3 2 5] [4 1 6 3 5 2] [4 1 6 5 2 3] [4 1 6 5 3 2] [4 5 1 6 2 3] [4 5 1 6 3 2] [4 6 5 1 2 3] [4 6 5 1 3 2] [5 4 1 6 2 3] [5 4 1 6 3 2]] P_target = np.array(P_target) print(P_target.transpose()) for i in range(6): d = dict(Counter(P_target[:,i])) t = list(d.values()) t.sort() print(t) [2, 5] [2, 5, 6] [4, 5, 6] [3, 4, 5, 7] [3, 5, 7] [3, 5, 7] Dtilde, changes, output = pyrankability.improve.greedy(D,1,verbose=False) Dchanges if D.shape[0] <= 8: # Only solve small problems search = pyrankability.exact.ExhaustiveSearch(Dsmall) search.find_P() print(pyrankability.common.as_json(search.k,search.P,{})) p = len(search.P) k = search.k def greedy(D,l): D = np.copy(D) # Leave the original untouched for niter in range(l): n=D.shape[0] k,P,X,Y,k2 = pyrankability.lp.lp(D) mult = 100 X = np.round(X*mult)/mult Y = np.round(Y*mult)/mult T0 = np.zeros((n,n)) T1 = np.zeros((n,n)) inxs = np.where(D + D.transpose() == 0) T0[inxs] = 1 inxs = np.where(D + D.transpose() == 2) T1[inxs] = 1 T0[np.arange(n),np.arange(n)]= 0 T1[np.arange(n),np.arange(n)] = 0 DOM = D + X - Y Madd=T0*DOM # note: DOM = P_> in paper M1 = Madd # Copy Madd into M, % Madd identifies values >0 in P_> that have 0-tied values in D M1[Madd<=0] = np.nan # Set anything <= 0 to NaN min_inx = np.nanargmin(M1) # Find min value and index bestlinktoadd_i, bestlinktoadd_j = np.unravel_index(min_inx,M1.shape) # adding (i,j) link associated with # smallest nonzero value in Madd is likely to produce greatest improvement in rankability minMadd = M1[bestlinktoadd_i, bestlinktoadd_j] Mdelete=T1*DOM # note: DOM = P_> in paper Mdelete=Mdelete*(Mdelete<1) # Mdelete identifies values <1 in P_> that have 1-tied values in D bestlinktodelete_i, bestlinktodelete_j=np.unravel_index(np.nanargmax(Mdelete), Mdelete.shape) # deleting (i,j) link associated with # largest non-unit (less than 1) value in Mdelete is likely to produce greatest improvement in rankability maxMdelete = Mdelete[bestlinktodelete_i, bestlinktodelete_j] # This next section modifies D to create Dtilde Dtilde = np.copy(D) # initialize Dtilde # choose whether to add or remove a link depending on which will have the biggest # impact on reducing the size of the set P # PAUL: Or if we only want to do link addition, you don't need to form # Mdelete and find the largest non-unit value in it. And vice versa, if # only link removal is desired, don't form Madd. if (1-minMadd)>maxMdelete and p>=2: formatSpec = 'The best one-link way to improve rankability is by adding a link from %d to %d.\nThis one modification removes about %.10f percent of the rankings in P.'%(bestlinktoadd_i,bestlinktoadd_j,(1-minMadd)*100) print(formatSpec) Dtilde[bestlinktoadd_i,bestlinktoadd_j]=1 # adds this link, creating one-mod Dtilde elif 1-minMadd<maxMdelete and p>=2: formatSpec = 'The best one-link way to improve rankability is by deleting the link from %d to %d.\nThis one modification removes about %.10f percent of the rankings in P.' 
% (bestlinktodelete_i,bestlinktodelete_j,maxMdelete*100) print(formatSpec) Dtilde[bestlinktodelete_i,bestlinktodelete_j] = 0 # removes this link, creating one-mod Dtilde D = Dtilde Dtilde = greedy(D,1) search = pyrankability.exact.ExhaustiveSearch(Dtilde) search.find_P() print(pyrankability.common.as_json(search.k,search.P,{})) bestlinktoadd_i, bestlinktoadd_j % Form modification matrices Madd (M_+) and Mdelete (M_-), which are used % to determine which link modification most improves rankability Mdelete=T1.*DOM; % note: DOM = P_> in paper Mdelete=Mdelete.*(Mdelete<1); % Mdelete identifies values <1 in P_> that have 1-tied values in D maxMdelete=max(max(Mdelete)); [bestlinktodelete_i bestlinktodelete_j]=find(Mdelete==maxMdelete); % deleting (i,j) link associated with % largest non-unit (less than 1) value in Mdelete is likely to produce greatest improvement in rankability % This next section modifies D to create Dtilde Dtilde=D; % initialize Dtilde % choose whether to add or remove a link depending on which will have the biggest % impact on reducing the size of the set P % PAUL: Or if we only want to do link addition, you don't need to form % Mdelete and find the largest non-unit value in it. And vice versa, if % only link removal is desired, don't form Madd. if 1-minMadd>maxMdelete & p>=2 formatSpec = 'The best one-link way to improve rankability is by adding a link from %4.f to %4.f.\nThis one modification removes about %2.f percent of the rankings in P.'; fprintf(formatSpec,bestlinktoadd_i(1),bestlinktoadd_j(1),(1-minMadd)*100) Dtilde(bestlinktoadd_i(1),bestlinktoadd_j(1))=1; % adds this link, creating one-mod Dtilde elseif 1-minMadd<maxMdelete & p>=2 formatSpec = 'The best one-link way to improve rankability is by deleting the link from %4.f to %4.f.\nThis one modification removes about %2.f percent of the rankings in P.'; fprintf(formatSpec,bestlinktodelete_i(1),bestlinktodelete_j(1),maxMdelete*100) Dtilde(bestlinktodelete_i(1),bestlinktodelete_j(1))=0; % removes this link, creating one-mod Dtilde end % set D=Dtilde and repeat until l link modifications have been made or % p=1 D=Dtilde; ```
``` # Annulus_Simple_Matplotlib # mjm June 20, 2016 # # solve Poisson eqn with Vin = V0 and Vout = 0 for an annulus # with inner radius r1, outer radius r2 # Vin = 10, Vout =0 # from dolfin import * from mshr import * # need for Circle object to make annulus import numpy as np import matplotlib.pyplot as plt import matplotlib.tri as tri from mpl_toolkits.mplot3d import Axes3D #parameters["plotting_backend"] = "matplotlib" import logging logging.getLogger("FFC").setLevel(logging.WARNING) #from matplotlib import cm %matplotlib inline ``` # Commands for plotting These are used so the the usual "plot" will use matplotlib. ``` # commands for plotting, "plot" works with matplotlib def mesh2triang(mesh): xy = mesh.coordinates() return tri.Triangulation(xy[:, 0], xy[:, 1], mesh.cells()) def mplot_cellfunction(cellfn): C = cellfn.array() tri = mesh2triang(cellfn.mesh()) return plt.tripcolor(tri, facecolors=C) def mplot_function(f): mesh = f.function_space().mesh() if (mesh.geometry().dim() != 2): raise AttributeError('Mesh must be 2D') # DG0 cellwise function if f.vector().size() == mesh.num_cells(): C = f.vector().array() return plt.tripcolor(mesh2triang(mesh), C) # Scalar function, interpolated to vertices elif f.value_rank() == 0: C = f.compute_vertex_values(mesh) return plt.tripcolor(mesh2triang(mesh), C, shading='gouraud') # Vector function, interpolated to vertices elif f.value_rank() == 1: w0 = f.compute_vertex_values(mesh) if (len(w0) != 2*mesh.num_vertices()): raise AttributeError('Vector field must be 2D') X = mesh.coordinates()[:, 0] Y = mesh.coordinates()[:, 1] U = w0[:mesh.num_vertices()] V = w0[mesh.num_vertices():] return plt.quiver(X,Y,U,V) # Plot a generic dolfin object (if supported) def plot(obj): plt.gca().set_aspect('equal') if isinstance(obj, Function): return mplot_function(obj) elif isinstance(obj, CellFunctionSizet): return mplot_cellfunction(obj) elif isinstance(obj, CellFunctionDouble): return mplot_cellfunction(obj) elif isinstance(obj, CellFunctionInt): return mplot_cellfunction(obj) elif isinstance(obj, Mesh): if (obj.geometry().dim() != 2): raise AttributeError('Mesh must be 2D') return plt.triplot(mesh2triang(obj), color='#808080') raise AttributeError('Failed to plot %s'%type(obj)) # end of commands for plotting ``` # Annulus This is the field in an annulus. We specify boundary conditions and solve the problem. ``` r1 = 1 # inner circle radius r2 = 10 # outer circle radius # shapes of inner/outer boundaries are circles c1 = Circle(Point(0.0, 0.0), r1) c2 = Circle(Point(0.0, 0.0), r2) domain = c2 - c1 # solve between circles res = 20 mesh = generate_mesh(domain, res) class outer_boundary(SubDomain): def inside(self, x, on_boundary): tol = 1e-2 return on_boundary and (abs(sqrt(x[0]*x[0] + x[1]*x[1])) - r2) < tol class inner_boundary(SubDomain): def inside(self, x, on_boundary): tol = 1e-2 return on_boundary and (abs(sqrt(x[0]*x[0] + x[1]*x[1])) - r1) < tol outerradius = outer_boundary() innerradius = inner_boundary() boundaries = FacetFunction("size_t", mesh) boundaries.set_all(0) outerradius.mark(boundaries,2) innerradius.mark(boundaries,1) V = FunctionSpace(mesh,'Lagrange',1) n = Constant(10.0) bcs = [DirichletBC(V, 0, boundaries, 2), DirichletBC(V, n, boundaries, 1)] # DirichletBC(V, nx, boundaries, 1)] u = TrialFunction(V) v = TestFunction(V) f = Constant(0.0) a = inner(nabla_grad(u), nabla_grad(v))*dx L = f*v*dx u = Function(V) solve(a == L, u, bcs) ``` # Plotting with matplotlib Now the usual "plot" commands will work for plotting the mesh and the function. 
```
plot(mesh)  # usual FEniCS command, will use matplotlib
plot(u)     # usual FEniCS command, will use matplotlib
```

If you want to do the usual matplotlib operations, you still need the "plt." prefix on the commands.

```
plt.figure()
plt.subplot(1,2,1)
plot(mesh)
plt.xlabel('x')
plt.ylabel('y')
plt.subplot(1,2,2)
plot(u)
plt.title('annulus solution')
```

# Plotting along a line

It turns out that the solution "u" is a function that can be evaluated at a point. So in the next cell we loop along a line and build a vector of values for plotting. You just need to give it coordinates $u(x,y)$.

```
y = np.linspace(r1, r2*0.99, 100)
uu = []
for i in range(len(y)):
    yy = y[i]
    uu.append(u(0.0, yy))  # evaluate u along the y axis

plt.figure()
plt.plot(y, uu)
plt.grid(True)
plt.xlabel('y')
plt.ylabel('V')
```
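Because the geometry is a simple annulus, the numerical result can also be checked against the closed-form solution of Laplace's equation, $V(r) = V_0 \ln(r_2/r)/\ln(r_2/r_1)$, with $V_0 = 10$ on the inner circle and $V = 0$ on the outer circle. The comparison below is an addition to the original notebook, reusing the `u`, `r1`, and `r2` already defined above.

```
# Sketch: compare the FEniCS solution along the y axis with the analytic solution
V0 = 10.0
r = np.linspace(r1, r2*0.99, 100)
V_exact = V0*np.log(r2/r)/np.log(r2/r1)
V_fem = np.array([u(0.0, ri) for ri in r])   # u can be evaluated pointwise

plt.figure()
plt.plot(r, V_fem, label='FEniCS')
plt.plot(r, V_exact, '--', label='analytic')
plt.xlabel('r')
plt.ylabel('V')
plt.legend()
plt.grid(True)
print('max abs error:', np.abs(V_fem - V_exact).max())
```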
# Handwritten Digits Recognition 02 - TensorFlow

From the table below, we see that the MNIST database is much larger than the scikit-learn digits database, which we modelled in the previous notebook. Both the number of samples and the size of each sample are significantly higher. The good news is that, with TensorFlow and Keras, we can build neural networks that are powerful enough to handle the MNIST database! In this notebook, we are going to use a Convolutional Neural Network (CNN) to perform image recognition.

| | Scikit-learn database | MNIST database |
|-----------|-----------------------|----------------|
| Samples | 1797 | 70,000 |
| Dimensions | 64 (8x8) | 784 (28x28) |

1. More information about the Scikit-learn Database: https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html
2. More information about the MNIST Database: https://en.wikipedia.org/wiki/MNIST_database

## Loading MNIST database

We are going to load the MNIST database using utilities provided by TensorFlow. When importing TensorFlow, I always first check whether it is using the GPU.

```
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt

print("TensorFlow Version", tf.__version__)
if tf.test.is_gpu_available():   # call the function; the bare attribute is always truthy
    print("Device:", tf.test.gpu_device_name())
```

Now, load the MNIST database using TensorFlow. From the output, we can see that the images are 28x28. The database contains 60,000 training and 10,000 testing images, with no missing entries.

```
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
```

Before we get our hands dirty with all the hard work, let's take a moment and look at some digits in the dataset. The digits displayed are the first eight digits in the training set. We can see that the image quality is quite high, significantly better than the ones in the scikit-learn digits set.

```
fig, axes = plt.subplots(2, 4)
for i, ax in zip(range(8), axes.flatten()):
    ax.imshow(X_train[i], cmap=plt.cm.gray_r, interpolation='nearest')
    ax.set_title("Number %d" % y_train[i])
    ax.set_axis_off()
fig.suptitle("Image of Digits in MNIST Database")
plt.show()
```

## Training a convolutional neural network with TensorFlow

Each pixel in the images is stored as an integer ranging from 0 to 255. For the CNN we normalize the values to the range 0 to 1. We also add a channel dimension so that the images can be fed into the CNN. Also, convert the labels (*y_train, y_test*) to one-hot encoding since we are categorizing images.

```
# Normalize the images and add a channel dimension
x_train = X_train.reshape((60000, 28, 28, 1)).astype('float32') / 255
x_test = X_test.reshape((10000, 28, 28, 1)).astype('float32') / 255

# Convert to one-hot encoding
from keras.utils import np_utils
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
```

This is the structure of the convolutional neural network. We have two convolution layers to extract features, along with two pooling layers to reduce the dimension of the feature maps. The dropout layer discards 20% of the activations to prevent overfitting. The multi-dimensional data is then flattened into vectors. Two dense layers with 128 neurons each are trained to do the classification. Lastly, the dense layer with 10 neurons outputs the result.
``` model = keras.Sequential([ keras.layers.Conv2D(32, (5,5), activation = 'relu'), keras.layers.MaxPool2D(pool_size = (2,2)), keras.layers.Conv2D(32, (5,5), activation = 'relu'), keras.layers.MaxPool2D(pool_size = (2,2)), keras.layers.Dropout(rate = 0.2), keras.layers.Flatten(), keras.layers.Dense(units = 128, activation = 'relu'), keras.layers.Dense(units = 128, activation = 'relu'), keras.layers.Dense(units = 10, activation = 'softmax') ]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(x_train, y_train, epochs=10) # Test the accuracy of the model on the testing set test_loss, test_acc = model.evaluate(x_test, y_test, verbose = 2) print() print('Test accuracy:', test_acc) ``` The accuracy of the CNN is 99.46% and its performance on the testing set is 99.21%. No overfitting. We have a robust model! ## Saving the trained model Below is the summary of the model. It is amazing that we have trained 109,930 parameters! Now, save this model so we don't have to train it again in the future. ``` # Show the model architecture model.summary() ``` Just like in the previous notebook, we can save this model as well. ``` model.save("CNN_model.h5") ```
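As a quick follow-up (not part of the original notebook), the saved file can be reloaded with `keras.models.load_model` and used for prediction right away, for example on the first test image:

```
# Hedged sketch: reload the saved CNN and classify one test digit
reloaded = keras.models.load_model("CNN_model.h5")
probs = reloaded.predict(x_test[:1])                 # shape (1, 10): one probability per class
print("Predicted digit:", np.argmax(probs, axis=1)[0])
print("True label:     ", np.argmax(y_test[:1], axis=1)[0])
```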
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier from sklearn.metrics import mean_squared_error, accuracy_score, f1_score, r2_score, explained_variance_score, roc_auc_score from sklearn.preprocessing import MinMaxScaler, OneHotEncoder, LabelBinarizer from sklearn.neural_network import MLPClassifier, MLPRegressor from sklearn.linear_model import Lasso import torch from torch import nn import torch.nn.functional as F from dp_wgan import Generator, Discriminator from dp_autoencoder import Autoencoder from evaluation import * import dp_optimizer, sampling, analysis, evaluation torch.manual_seed(0) np.random.seed(0) names = ['age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'salary'] train = pd.read_csv('adult.data', names=names) test = pd.read_csv('adult.test', names=names) df = pd.concat([train, test]) df class Processor: def __init__(self, datatypes): self.datatypes = datatypes def fit(self, matrix): preprocessors, cutoffs = [], [] for i, (column, datatype) in enumerate(self.datatypes): preprocessed_col = matrix[:,i].reshape(-1, 1) if 'categorical' in datatype: preprocessor = LabelBinarizer() else: preprocessor = MinMaxScaler() preprocessed_col = preprocessor.fit_transform(preprocessed_col) cutoffs.append(preprocessed_col.shape[1]) preprocessors.append(preprocessor) self.cutoffs = cutoffs self.preprocessors = preprocessors def transform(self, matrix): preprocessed_cols = [] for i, (column, datatype) in enumerate(self.datatypes): preprocessed_col = matrix[:,i].reshape(-1, 1) preprocessed_col = self.preprocessors[i].transform(preprocessed_col) preprocessed_cols.append(preprocessed_col) return np.concatenate(preprocessed_cols, axis=1) def fit_transform(self, matrix): self.fit(matrix) return self.transform(matrix) def inverse_transform(self, matrix): postprocessed_cols = [] j = 0 for i, (column, datatype) in enumerate(self.datatypes): postprocessed_col = self.preprocessors[i].inverse_transform(matrix[:,j:j+self.cutoffs[i]]) if 'categorical' in datatype: postprocessed_col = postprocessed_col.reshape(-1, 1) else: if 'positive' in datatype: postprocessed_col = postprocessed_col.clip(min=0) if 'int' in datatype: postprocessed_col = postprocessed_col.round() postprocessed_cols.append(postprocessed_col) j += self.cutoffs[i] return np.concatenate(postprocessed_cols, axis=1) datatypes = [ ('age', 'positive int'), ('workclass', 'categorical'), ('education-num', 'categorical'), ('marital-status', 'categorical'), ('occupation', 'categorical'), ('relationship', 'categorical'), ('race', 'categorical'), ('sex', 'categorical binary'), ('capital-gain', 'positive float'), ('capital-loss', 'positive float'), ('hours-per-week', 'positive int'), ('native-country', 'categorical'), ('salary', 'categorical binary'), ] np.random.seed(0) processor = Processor(datatypes) relevant_df = df.drop(columns=['education', 'fnlwgt']) for column, datatype in datatypes: if 'categorical' in datatype: relevant_df[column] = relevant_df[column].astype('category').cat.codes train_df = relevant_df.head(32562) X_real = torch.tensor(relevant_df.values.astype('float32')) X_encoded = torch.tensor(processor.fit_transform(X_real).astype('float32')) train_cutoff = 32562 X_train_real = X_real[:train_cutoff] X_test_real = X_real[:train_cutoff] X_train_encoded = X_encoded[:train_cutoff] X_test_encoded = 
X_encoded[train_cutoff:] X_encoded.shape print(X_train_encoded) print(X_test_encoded) ae_params = { 'b1': 0.9, 'b2': 0.999, 'binary': False, 'compress_dim': 15, 'delta': 1e-5, 'device': 'cuda', 'iterations': 20000, 'lr': 0.005, 'l2_penalty': 0., 'l2_norm_clip': 0.012, 'minibatch_size': 64, 'microbatch_size': 1, 'noise_multiplier': 2.5, 'nonprivate': True, } autoencoder = Autoencoder( example_dim=len(X_train_encoded[0]), compression_dim=ae_params['compress_dim'], binary=ae_params['binary'], device=ae_params['device'], ) decoder_optimizer = dp_optimizer.DPAdam( l2_norm_clip=ae_params['l2_norm_clip'], noise_multiplier=ae_params['noise_multiplier'], minibatch_size=ae_params['minibatch_size'], microbatch_size=ae_params['microbatch_size'], nonprivate=ae_params['nonprivate'], params=autoencoder.get_decoder().parameters(), lr=ae_params['lr'], betas=(ae_params['b1'], ae_params['b2']), weight_decay=ae_params['l2_penalty'], ) encoder_optimizer = torch.optim.Adam( params=autoencoder.get_encoder().parameters(), lr=ae_params['lr'] * ae_params['microbatch_size'] / ae_params['minibatch_size'], betas=(ae_params['b1'], ae_params['b2']), weight_decay=ae_params['l2_penalty'], ) weights, ds = [], [] for name, datatype in datatypes: if 'categorical' in datatype: num_values = len(np.unique(relevant_df[name])) if num_values == 2: weights.append(1.) ds.append((datatype, 1)) else: for i in range(num_values): weights.append(1. / num_values) ds.append((datatype, num_values)) else: weights.append(1.) ds.append((datatype, 1)) weights = torch.tensor(weights).to(ae_params['device']) #autoencoder_loss = (lambda input, target: torch.mul(weights, torch.pow(input-target, 2)).sum(dim=1).mean(dim=0)) #autoencoder_loss = lambda input, target: torch.mul(weights, F.binary_cross_entropy(input, target, reduction='none')).sum(dim=1).mean(dim=0) autoencoder_loss = nn.BCELoss() #autoencoder_loss = nn.MSELoss() print(autoencoder) print('Achieves ({}, {})-DP'.format( analysis.epsilon( len(X_train_encoded), ae_params['minibatch_size'], ae_params['noise_multiplier'], ae_params['iterations'], ae_params['delta'] ), ae_params['delta'], )) minibatch_loader, microbatch_loader = sampling.get_data_loaders( minibatch_size=ae_params['minibatch_size'], microbatch_size=ae_params['microbatch_size'], iterations=ae_params['iterations'], nonprivate=ae_params['nonprivate'], ) train_losses, validation_losses = [], [] X_train_encoded = X_train_encoded.to(ae_params['device']) X_test_encoded = X_test_encoded.to(ae_params['device']) for iteration, X_minibatch in enumerate(minibatch_loader(X_train_encoded)): encoder_optimizer.zero_grad() decoder_optimizer.zero_grad() for X_microbatch in microbatch_loader(X_minibatch): decoder_optimizer.zero_microbatch_grad() output = autoencoder(X_microbatch) loss = autoencoder_loss(output, X_microbatch) loss.backward() decoder_optimizer.microbatch_step() validation_loss = autoencoder_loss(autoencoder(X_test_encoded).detach(), X_test_encoded) encoder_optimizer.step() decoder_optimizer.step() train_losses.append(loss.item()) validation_losses.append(validation_loss.item()) if iteration % 1000 == 0: print ('[Iteration %d/%d] [Loss: %f] [Validation Loss: %f]' % ( iteration, ae_params['iterations'], loss.item(), validation_loss.item()) ) pd.DataFrame(data={'train': train_losses, 'validation': validation_losses}).plot() with open('ae_eps_inf.dat', 'wb') as f: torch.save(autoencoder, f) gan_params = { 'alpha': 0.99, 'binary': False, 'clip_value': 0.01, 'd_updates': 15, 'delta': 1e-5, 'device': 'cuda', 'iterations': 15000, 
'latent_dim': 64, 'lr': 0.005, 'l2_penalty': 0., 'l2_norm_clip': 0.022, 'minibatch_size': 128, 'microbatch_size': 1, 'noise_multiplier': 3.5, 'nonprivate': False, } with open('ae_eps_inf.dat', 'rb') as f: autoencoder = torch.load(f) decoder = autoencoder.get_decoder() generator = Generator( input_dim=gan_params['latent_dim'], output_dim=autoencoder.get_compression_dim(), binary=gan_params['binary'], device=gan_params['device'], ) g_optimizer = torch.optim.RMSprop( params=generator.parameters(), lr=gan_params['lr'], alpha=gan_params['alpha'], weight_decay=gan_params['l2_penalty'], ) discriminator = Discriminator( input_dim=len(X_train_encoded[0]), device=gan_params['device'], ) d_optimizer = dp_optimizer.DPRMSprop( l2_norm_clip=gan_params['l2_norm_clip'], noise_multiplier=gan_params['noise_multiplier'], minibatch_size=gan_params['minibatch_size'], microbatch_size=gan_params['microbatch_size'], nonprivate=gan_params['nonprivate'], params=discriminator.parameters(), lr=gan_params['lr'], alpha=gan_params['alpha'], weight_decay=gan_params['l2_penalty'], ) print(generator) print(discriminator) print('Achieves ({}, {})-DP'.format( analysis.epsilon( len(X_train_encoded), gan_params['minibatch_size'], gan_params['noise_multiplier'], gan_params['iterations'], gan_params['delta'] ), gan_params['delta'], )) minibatch_loader, microbatch_loader = sampling.get_data_loaders( minibatch_size=gan_params['minibatch_size'], microbatch_size=gan_params['microbatch_size'], iterations=gan_params['iterations'], nonprivate=gan_params['nonprivate'], ) X_train_encoded = X_train_encoded.to(gan_params['device']) X_test_encoded = X_test_encoded.to(ae_params['device']) for iteration, X_minibatch in enumerate(minibatch_loader(X_train_encoded)): d_optimizer.zero_grad() for real in microbatch_loader(X_minibatch): z = torch.randn(real.size(0), gan_params['latent_dim'], device=gan_params['device']) fake = decoder(generator(z)).detach() d_optimizer.zero_microbatch_grad() d_loss = -torch.mean(discriminator(real)) + torch.mean(discriminator(fake)) d_loss.backward() d_optimizer.microbatch_step() d_optimizer.step() for parameter in discriminator.parameters(): parameter.data.clamp_(-gan_params['clip_value'], gan_params['clip_value']) if iteration % gan_params['d_updates'] == 0: z = torch.randn(X_minibatch.size(0), gan_params['latent_dim'], device=gan_params['device']) fake = decoder(generator(z)) g_optimizer.zero_grad() g_loss = -torch.mean(discriminator(fake)) g_loss.backward() g_optimizer.step() if iteration % 1000 == 0: print('[Iteration %d/%d] [D loss: %f] [G loss: %f]' % ( iteration, gan_params['iterations'], d_loss.item(), g_loss.item() )) z = torch.randn(len(X_train_real), gan_params['latent_dim'], device=gan_params['device']) X_synthetic_encoded = decoder(generator(z)).cpu().detach().numpy() X_synthetic_real = processor.inverse_transform(X_synthetic_encoded) X_synthetic_encoded = processor.transform(X_synthetic_real) synthetic_data = pd.DataFrame(X_synthetic_real, columns=relevant_df.columns) i = 0 columns = relevant_df.columns relevant_df[columns[i]].hist() synthetic_data[columns[i]].hist() plt.show() #pca_evaluation(pd.DataFrame(X_train_real), pd.DataFrame(X_synthetic_real)) #plt.show() with open('gen_eps_inf.dat', 'wb') as f: torch.save(generator, f) X_train_encoded = X_train_encoded.cpu() X_test_encoded = X_test_encoded.cpu() clf = RandomForestClassifier(n_estimators=100) clf.fit(X_train_encoded[:,:-1], X_train_encoded[:,-1]) prediction = clf.predict(X_test_encoded[:,:-1]) print(accuracy_score(X_test_encoded[:,-1], 
prediction)) print(f1_score(X_test_encoded[:,-1], prediction)) with open('gen_eps_inf.dat', 'rb') as f: generator = torch.load(f) with open('ae_eps_inf.dat', 'rb') as f: autoencoder = torch.load(f) decoder = autoencoder.get_decoder() z = torch.randn(len(X_train_real), gan_params['latent_dim'], device=gan_params['device']) X_synthetic_encoded = decoder(generator(z)).cpu().detach().numpy() X_synthetic_real = processor.inverse_transform(X_synthetic_encoded) X_synthetic_encoded = processor.transform(X_synthetic_real) #pd.DataFrame(X_encoded.numpy()).to_csv('real.csv') pd.DataFrame(X_synthetic_encoded).to_csv('synthetic.csv') with open('gen_eps_inf.dat', 'rb') as f: generator = torch.load(f) with open('ae_eps_inf.dat', 'rb') as f: autoencoder = torch.load(f) decoder = autoencoder.get_decoder() X_test_encoded = X_test_encoded.cpu() z = torch.randn(len(X_train_real), gan_params['latent_dim'], device=gan_params['device']) X_synthetic_encoded = decoder(generator(z)).cpu().detach().numpy() X_synthetic_real = processor.inverse_transform(X_synthetic_encoded) X_synthetic_encoded = processor.transform(X_synthetic_real) clf = RandomForestClassifier(n_estimators=100) clf.fit(X_synthetic_encoded[:,:-1], X_synthetic_encoded[:,-1]) prediction = clf.predict(X_test_encoded[:,:-1]) print(accuracy_score(X_test_encoded[:,-1], prediction)) print(f1_score(X_test_encoded[:,-1], prediction)) with open('gen_eps_inf.dat', 'rb') as f: generator = torch.load(f) with open('ae_eps_inf.dat', 'rb') as f: autoencoder = torch.load(f) decoder = autoencoder.get_decoder() z = torch.randn(len(X_train_real), gan_params['latent_dim'], device=gan_params['device']) X_synthetic_encoded = decoder(generator(z)).cpu().detach().numpy() X_synthetic_real = processor.inverse_transform(X_synthetic_encoded) synthetic_data = pd.DataFrame(X_synthetic_real, columns=relevant_df.columns) column = 'age' fig = plt.figure() ax = fig.add_subplot() ax.hist(train_df[column].values,)# bins=) ax.hist(synthetic_data[column].values, color='red', alpha=0.35,)# bins10) with open('gen_eps_inf.dat', 'rb') as f: generator = torch.load(f) with open('ae_eps_inf.dat', 'rb') as f: autoencoder = torch.load(f) decoder = autoencoder.get_decoder() z = torch.randn(len(X_train_real), gan_params['latent_dim'], device=gan_params['device']) X_synthetic_encoded = decoder(generator(z)).cpu().detach().numpy() X_synthetic_real = processor.inverse_transform(X_synthetic_encoded) synthetic_data = pd.DataFrame(X_synthetic_real, columns=relevant_df.columns) regression_real = [] classification_real = [] regression_synthetic = [] classification_synthetic = [] target_real = [] target_synthetic = [] for column, datatype in datatypes: p = Processor([datatype for datatype in datatypes if datatype[0] != column]) train_cutoff = 32562 p.fit(relevant_df.drop(columns=[column]).values) X_enc = p.transform(relevant_df.drop(columns=[column]).values) y_enc = relevant_df[column] X_enc_train = X_enc[:train_cutoff] X_enc_test = X_enc[train_cutoff:] y_enc_train = y_enc[:train_cutoff] y_enc_test = y_enc[train_cutoff:] X_enc_syn = p.transform(synthetic_data.drop(columns=[column]).values) y_enc_syn = synthetic_data[column] if 'binary' in datatype: model = lambda: RandomForestClassifier(n_estimators=10) score = lambda true, pred: f1_score(true, pred) elif 'categorical' in datatype: model = lambda: RandomForestClassifier(n_estimators=10) score = lambda true, pred: f1_score(true, pred, average='micro') else: model = lambda: Lasso() explained_var = lambda true, pred: explained_variance_score(true, pred) score = 
r2_score real, synthetic = model(), model() real.fit(X_enc_train, y_enc_train) synthetic.fit(X_enc_syn, y_enc_syn) real_preds = real.predict(X_enc_test) synthetic_preds = synthetic.predict(X_enc_test) print(column, datatype) if column == 'salary': target_real.append(score(y_enc_test, real_preds)) target_synthetic.append(score(y_enc_test, synthetic_preds)) elif 'categorical' in datatype: classification_real.append(score(y_enc_test, real_preds)) classification_synthetic.append(score(y_enc_test, synthetic_preds)) else: regression_real.append(score(y_enc_test, real_preds)) regression_synthetic.append(score(y_enc_test, synthetic_preds)) print(score.__name__) print('Real: {}'.format(score(y_enc_test, real_preds))) print('Synthetic: {}'.format(score(y_enc_test, synthetic_preds))) print('') plt.scatter(classification_real, classification_synthetic, c='blue') plt.scatter(regression_real, regression_synthetic, c='red') plt.scatter(target_real, target_synthetic, c='green') plt.xlabel('Real Data') plt.ylabel('Synthetic Data') plt.axis((0., 1., 0., 1.)) plt.plot((0, 1), (0, 1)) plt.show() ```
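One more lightweight check, added here as a sketch rather than part of the original evaluation: comparing per-column means of the real training data and the synthetic sample on the original (unscaled) feature scale gives a first impression of how well the marginals are preserved. Note that for the categorically encoded columns the mean of the codes is only a rough indicator.

```
# Hedged sketch: per-column mean comparison, real vs. synthetic (original scale)
marginal_check = pd.DataFrame({
    'real_mean': train_df.mean(),
    'synthetic_mean': synthetic_data.mean(),
})
marginal_check['abs_diff'] = (marginal_check['real_mean'] - marginal_check['synthetic_mean']).abs()
print(marginal_check.sort_values('abs_diff', ascending=False))
```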
``` import pandas as pd import numpy as np np.warnings.filterwarnings('ignore') import xarray as xr from metpy.units import units from metpy.plots import SkewT import metpy.calc as mpcalc import matplotlib.pyplot as plt import seaborn as sns import sys sys.path.append('/home/franzihe/Documents/Python/Thesis/') import createFolder as cF # plot cosmetics sns.set_context('paper', font_scale=1.6) sns.set(font = 'Serif', font_scale = 1.6, ) sns.set_style('ticks', {'font.family':'serif', #'font.serif':'Helvetica' 'grid.linestyle': '--', 'axes.grid': True, }, ) # Set the palette to the "pastel" default palette: sns.set_palette("colorblind") savefig = 1 if savefig == 1: figdir = '/home/franzihe/Documents/Figures/Weathermast_MEPS_Retrieval/Haukeliseter/MEPS_CTRL_ICET/' cF.createFolder('%s/' %figdir) form = 'png' hour = '12' m = ['12','01', '02'] h = ['00', '12'] meps_run = [ 'CTRL', 'ICE-T', ] # Select col_names to be importet for the sounding plot col_names = ['PRES', 'HGHT', 'TEMP', 'DWPT', 'MIXR', 'DRCT', 'SKNT', 'THTA'] header = np.arange(0,6) def concat_profile_all_days(df, Date, observation, _pres, _temp, _dwpt, _xwind, _ywind): _lev = np.arange(1000,-25, -25) _averaged = pd.DataFrame() for i in _lev: filter1 = np.logical_and(df.PRES > i-25, df.PRES <= i+25 ) _averaged = pd.concat([_averaged, df.where(filter1).mean()], axis = 1) _averaged = _averaged.rename(columns = {0:i}) _averaged = _averaged.T # concat the pressure, height, temperature, dewpoint, mixing ration, wind direction, wind speed, # potential temperature of all dates _pres = pd.concat([_pres, _averaged.PRES], axis = 1).rename(columns = {'PRES':Date}) # _hght = pd.concat([_hght, _averaged.HGHT], axis = 1).rename(columns = {'HGHT':Date}) _temp = pd.concat([_temp, _averaged.TEMP], axis = 1).rename(columns = {'TEMP':Date}) _dwpt = pd.concat([_dwpt, _averaged.DWPT], axis = 1).rename(columns = {'DWPT':Date}) # _mixr = pd.concat([_mixr, _averaged.MIXR], axis = 1).rename(columns = {'MIXR':Date}) # _drct = pd.concat([_drct, _averaged.DRCT], axis = 1).rename(columns = {'DRCT':Date}) # _sknt = pd.concat([_sknt, _averaged.SKNT], axis = 1).rename(columns = {'SKNT':Date}) # _thta = pd.concat([_thta, _averaged.THTA], axis = 1).rename(columns = {'THTA':Date}) _xwind = pd.concat([_xwind, _averaged.x_wind], axis = 1).rename(columns = {'x_wind':Date}) _ywind = pd.concat([_ywind, _averaged.y_wind], axis = 1).rename(columns = {'y_wind':Date}) return(_pres, _temp, _dwpt, _xwind, _ywind) p = dict() T = dict() Td = dict() u = dict() v = dict() p_meps = dict() T_meps = dict() Td_meps = dict() u_meps = dict() v_meps = dict() for hour in h: _temp = pd.DataFrame() _pres = pd.DataFrame() _hght = pd.DataFrame() _temp = pd.DataFrame() _dwpt = pd.DataFrame() _mixr = pd.DataFrame() _drct = pd.DataFrame() _sknt = pd.DataFrame() _thta = pd.DataFrame() _xwind = pd.DataFrame() _ywind = pd.DataFrame() _pres_meps = pd.DataFrame() _temp_meps = pd.DataFrame() _dwpt_meps = pd.DataFrame() _xwind_meps = pd.DataFrame() _ywind_meps = pd.DataFrame() for month in m: if month == '12': t = np.array([8, 9, 10, 12, 15, 20, 21, 22, 23, 24, 25, 26, 29, 31]) if month == '01': t = np.array([2, 3, 5, 6, 8, 9, 10, 11, 12, 28]) if month == '02': t = np.array([2, 3, 4]) if month == '12': year = '2016' if month == '01' or month == '02': year = '2017' for day in t: if day < 10: day = '0%s' %day Date = year+month+str(day) stn = '01415' #1415 is ID for Stavanger Sounding_filename = '/home/franzihe/Documents/Data/Sounding/{}/{}{}{}_{}.txt'.format(stn,year,month,str(day),hour) df = 
pd.read_table(Sounding_filename, delim_whitespace=True, skiprows = header, \ usecols=[0, 1, 2, 3, 5, 6, 7, 8], names=col_names) ### the footer changes depending on how high the sound measured --> lines change from Radiosonde to Radiosonde # 1. find idx of first value matching the name 'Station' lines = df.index[df['PRES'].str.match('Station')] if len(lines) == 0: print('no file found: %s%s%s_%s' %(year,month,day,hour)) # continue else: # read in the Sounding files idx = lines[0] footer = np.arange((idx+header.size),220) skiprow = np.append(header,footer) df = pd.read_table(Sounding_filename, delim_whitespace=True, skiprows = skiprow, \ usecols=[0, 1, 2, 3, 5, 6, 7, 8], names=col_names) df['x_wind'], df['y_wind'] = mpcalc.wind_components(df.SKNT.values *units.knots, df.DRCT.values*units.degrees) # _lev = np.arange(1000,-25, -25) # _averaged = pd.DataFrame() # for i in _lev: # filter1 = np.logical_and(df.PRES > i-25, # df.PRES <= i+25 ) # # _averaged = pd.concat([_averaged, df.where(filter1).mean()], axis = 1) # _averaged = _averaged.rename(columns = {0:i}) # # _averaged = _averaged.T # concat the pressure, height, temperature, dewpoint, mixing ration, wind direction, wind speed, # potential temperature of all dates # _pres = pd.concat([_pres, _averaged.PRES], axis = 1).rename(columns = {'PRES':Date}) # _hght = pd.concat([_hght, _averaged.HGHT], axis = 1).rename(columns = {'HGHT':Date}) # _temp = pd.concat([_temp, _averaged.TEMP], axis = 1).rename(columns = {'TEMP':Date}) # _dwpt = pd.concat([_dwpt, _averaged.DWPT], axis = 1).rename(columns = {'DWPT':Date}) # _mixr = pd.concat([_mixr, _averaged.MIXR], axis = 1).rename(columns = {'MIXR':Date}) # _drct = pd.concat([_drct, _averaged.DRCT], axis = 1).rename(columns = {'DRCT':Date}) # _sknt = pd.concat([_sknt, _averaged.SKNT], axis = 1).rename(columns = {'SKNT':Date}) # _thta = pd.concat([_thta, _averaged.THTA], axis = 1).rename(columns = {'THTA':Date}) # _xwind = pd.concat([_xwind, _averaged.x_wind], axis = 1).rename(columns = {'x_wind':Date}) # _ywind = pd.concat([_ywind, _averaged.y_wind], axis = 1).rename(columns = {'y_wind':Date}) _pres, _temp, _dwpt, _xwind, _ywind = concat_profile_all_days(df, Date, 'RS', _pres, _temp, _dwpt, _xwind, _ywind) # read in the MEPS runs # for meps in meps_run: meps = 'CTRL' stn = 'Stavanger' meps_dirnc = '/home/franzihe/Documents/Data/MEPS/%s/%s/%s_00.nc' %(stn,meps,Date) meps_f = xr.open_dataset(meps_dirnc, drop_variables ={'air_temperature_0m','liquid_water_content_of_surface_snow','rainfall_amount', 'snowfall_amount', 'graupelfall_amount', 'surface_air_pressure', 'surface_geopotential', 'precipitation_amount_acc', 'integral_of_snowfall_amount_wrt_time', 'integral_of_rainfall_amount_wrt_time', 'integral_of_graupelfall_amount_wrt_time', 'surface_snow_sublimation_amount_acc', 'air_temperature_2m','relative_humidity_2m', 'specific_humidity_2m', 'x_wind_10m', 'y_wind_10m', 'air_pressure_at_sea_level', 'atmosphere_cloud_condensed_water_content_ml', 'atmosphere_cloud_ice_content_ml', 'atmosphere_cloud_snow_content_ml','atmosphere_cloud_rain_content_ml', 'atmosphere_cloud_graupel_content_ml', 'pressure_departure', 'layer_thickness', 'geop_layer_thickness'}, ).reset_index(dims_or_levels = ['height0', 'height1', 'height3', 'height_above_msl', ], drop=True).sortby('hybrid', ascending = False) # pressuer into hPa meps_f['pressure_ml'] = meps_f.pressure_ml/100 # air temperature has to be flipped, something was wrong when reading the data from Stavanger meps_f['air_temperature_ml'] = (('time', 
'hybrid',),meps_f.air_temperature_ml.values[:,::-1] - 273.15) meps_f['specific_humidity_ml'] = (('time', 'hybrid',),meps_f.specific_humidity_ml.values[:,::-1]) meps_f['x_wind_ml'] = (('time', 'hybrid',), meps_f.x_wind_ml.values[:,::-1]) meps_f['y_wind_ml'] = (('time', 'hybrid',), meps_f.y_wind_ml.values[:,::-1]) # calculate the dewpoint by first calculating the relative humidity from the specific humidity meps_f['relative_humidity'] = (('time', 'hybrid', ), mpcalc.relative_humidity_from_specific_humidity(meps_f.pressure_ml.values * units.hPa, meps_f.air_temperature_ml.values * units.degC, meps_f.specific_humidity_ml.values * units('kg/kg'))) meps_f['DWPT'] = (('time', 'hybrid',), mpcalc.dewpoint_from_relative_humidity(meps_f.air_temperature_ml.values * units.degC, meps_f.relative_humidity)) if hour == '12': meps_f = meps_f.isel(time = 11).to_dataframe() elif hour == '00': meps_f = meps_f.isel(time = 23).to_dataframe() meps_f = meps_f.rename(columns = {'x_wind_ml':'x_wind', 'y_wind_ml':'y_wind', 'pressure_ml':'PRES', 'air_temperature_ml':'TEMP'}) _pres_meps, _temp_meps, _dwpt_meps, _xwind_meps, _ywind_meps = concat_profile_all_days(meps_f, Date, 'MEPS', _pres_meps, _temp_meps, _dwpt_meps, _xwind_meps, _ywind_meps) ## average pressure, height, temperature, dewpoint, mixing ration, wind direction, wind speed, # potential temperature over time to get seasonal mean and assign units. p[hour] = _pres.mean(axis = 1, skipna=True).values * units.hPa #z = _hght.mean(axis = 1, skipna=True).values * units.meter T[hour] = _temp.mean(axis = 1, skipna=True).values * units.degC Td[hour] = _dwpt.mean(axis = 1, skipna=True).values * units.degC #qv = _mixr.mean(axis = 1, skipna=True).values * units('g/kg') #WD = _drct.mean(axis = 1, skipna=True).values * units.degrees #WS = _sknt.mean(axis = 1, skipna=True).values * units.knots #th = _thta.mean(axis = 1, skipna=True).values * units.kelvin u[hour] = _xwind.mean(axis = 1, skipna = True) v[hour] = _ywind.mean(axis = 1, skipna = True) p_meps[hour] = _pres_meps.mean(axis = 1, skipna=True).values * units.hPa T_meps[hour] = _temp_meps.mean(axis = 1, skipna=True).values * units.degC Td_meps[hour] = _dwpt_meps.mean(axis = 1, skipna=True).values * units.degC u_meps[hour] = _xwind_meps.mean(axis = 1, skipna = True) v_meps[hour] = _ywind_meps.mean(axis = 1, skipna = True) #u, v = mpcalc.wind_components(WS, WD) def plt_skewT(skew, p, T, p_meps, T_meps, Td, Td_meps, u, v, profile_time): # Plot the data using normal plotting functions, in this case using # log scaling in Y, as dictated by the typical meteorological plot skew.plot(p, T, ) skew.plot(p_meps, T_meps, ) skew.plot(p, Td, ) skew.plot(p_meps, Td_meps, ) skew.plot_barbs(p, u, v) skew.ax.set_ylim(1000, 100) # Add the relevant special lines skew.plot_dry_adiabats() skew.plot_moist_adiabats() skew.plot_mixing_lines() # Good bounds for aspect ratio skew.ax.set_xlim(-30, 40) skew.ax.text(0.05, 1, 'Vertical profile mean - Stavanger: {} UTC'.format(profile_time), transform=skew.ax.transAxes, fontsize=14, verticalalignment='bottom',)# bbox='fancy') fig = plt.figure(figsize=(18, 9)) #plot skewT for 00UTC skew = SkewT(fig, rotation=45,subplot=121) plt_skewT(skew, p['00'], T['00'], p_meps['00'], T_meps['00'], Td['00'], Td_meps['00'], u['00'], v['00'], '00') skew = SkewT(fig, rotation=45,subplot=122) plt_skewT(skew, p['12'], T['12'], p_meps['12'], T_meps['12'], Td['12'], Td_meps['12'], u['12'], v['12'], '12') if savefig == 1: cF.createFolder('%s/' %(figdir)) fig_name = 'winter_16_17_vertical_profile.'+form plt.savefig('%s/%s' 
%(figdir, fig_name), format = form, bbox_inches='tight', transparent=True) print('plot saved: %s/%s' %(figdir, fig_name)) plt.close() ```
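A small addition, not in the original notebook: since the radiosonde and MEPS profiles were averaged onto the same 25 hPa grid, a quick mean temperature bias per launch time can be printed directly from the dictionaries built above.

```
# Hedged sketch: mean MEPS-minus-sounding temperature bias for 00 and 12 UTC
for hour in h:
    t_rs = np.asarray(T[hour].magnitude)        # seasonal-mean sounding profile (degC)
    t_mod = np.asarray(T_meps[hour].magnitude)  # seasonal-mean MEPS profile (degC)
    bias = np.nanmean(t_mod - t_rs)
    print('%s UTC: mean T bias (MEPS - sounding) = %.2f degC' % (hour, bias))
```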
# Let's play with a funny fake dataset

This dataset contains a few features and a dependent variable that says whether a student will eventually graduate or not.

Importing a few libraries:

```
from sklearn import datasets, model_selection
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from ipywidgets import interactive
from sklearn.preprocessing import MinMaxScaler
from sklearn import model_selection
```

Then we load our fake dataset and split it into two parts, one for training and one for testing.

```
student = pd.read_csv('LionForests-Bot/students2.csv')
feature_names = list(student.columns)[:-1]
class_names = ["Won't graduate", 'Will graduate (eventually)']
X = student.iloc[:, 0:-1].values
y = student.iloc[:, -1].values
x_train, x_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.3, random_state=0)

fig, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(1, 5, figsize=(20,4), dpi=200)
ax1.hist(X[:,0:1], bins='auto')
ax1.set(xlabel='Years in school')
ax2.hist(X[:,1:2], bins='auto')
ax2.set(xlabel='# of courses completed')
ax3.hist(X[:,2:3], bins='auto')
ax3.set(xlabel='Attending class per week')
ax4.hist(X[:,3:4], bins='auto')
ax4.set(xlabel='Owns car')
ax5.hist(X[:,4:], bins='auto')
ax5.set(xlabel='# of roommates')
plt.show()
```

We also scale the data into the range [0,1] so that the interpretations are comparable later.

```
scaler = MinMaxScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
```

Now we train a linear model, logistic regression, on our dataset and evaluate its performance.

```
#lin_model = LogisticRegression(solver="newton-cg", penalty='l2', max_iter=1000, C=100, random_state=0)
lin_model = LogisticRegression(solver="liblinear", penalty='l1', max_iter=1000, C=10, random_state=0)
lin_model.fit(x_train, y_train)

predicted_train = lin_model.predict(x_train)
predicted_test = lin_model.predict(x_test)
predicted_proba_test = lin_model.predict_proba(x_test)

print("Logistic Regression Model Performance:")
print("Accuracy in Train Set", accuracy_score(y_train, predicted_train))
print("Accuracy in Test Set", accuracy_score(y_test, predicted_test))
```

To globally interpret this model, we plot the weight of each variable/feature.

```
weights = lin_model.coef_
model_weights = pd.DataFrame({'features': list(feature_names), 'weights': list(weights[0])})
#model_weights = model_weights.sort_values(by='weights', ascending=False)  # Normal sort
model_weights = model_weights.reindex(model_weights['weights'].abs().sort_values(ascending=False).index)  # Sort by absolute value
model_weights = model_weights[(model_weights["weights"] != 0)]
print("Number of features:", len(model_weights.values))

plt.figure(num=None, figsize=(8, 6), dpi=100, facecolor='w', edgecolor='k')
sns.barplot(x="weights", y="features", data=model_weights)
plt.title("Intercept (Bias): " + str(lin_model.intercept_[0]), loc='right')
plt.xticks(rotation=90)
plt.show()
```
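The bar plot above is a *global* view of the model. As an extra illustration (not in the original notebook), the same weights also give a *local* explanation: for logistic regression the log-odds of a single prediction decompose additively into weight times (scaled) feature value, plus the intercept. A sketch for the first test instance, assuming class 1 corresponds to "Will graduate (eventually)":

```
# Hedged sketch: local explanation of one prediction via weight * value contributions
instance = x_test[0]
contributions = pd.DataFrame({
    'features': feature_names,
    'scaled value': instance,
    'contribution': lin_model.coef_[0] * instance,
})
contributions = contributions.reindex(
    contributions['contribution'].abs().sort_values(ascending=False).index)
log_odds = lin_model.intercept_[0] + contributions['contribution'].sum()
print(contributions)
print('log-odds:', log_odds)
print('P(will graduate):', 1.0 / (1.0 + np.exp(-log_odds)))
print('check with predict_proba:', lin_model.predict_proba(instance.reshape(1, -1))[0][1])
```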
# Linear Regression Implementation from Scratch :label:`sec_linear_scratch` Now that you understand the key ideas behind linear regression, we can begin to work through a hands-on implementation in code. In this section, (**we will implement the entire method from scratch, including the data pipeline, the model, the loss function, and the minibatch stochastic gradient descent optimizer.**) While modern deep learning frameworks can automate nearly all of this work, implementing things from scratch is the only way to make sure that you really know what you are doing. Moreover, when it comes time to customize models, defining our own layers or loss functions, understanding how things work under the hood will prove handy. In this section, we will rely only on tensors and auto differentiation. Afterwards, we will introduce a more concise implementation, taking advantage of bells and whistles of deep learning frameworks. ``` %matplotlib inline import random import tensorflow as tf from d2l import tensorflow as d2l ``` ## Generating the Dataset To keep things simple, we will [**construct an artificial dataset according to a linear model with additive noise.**] Our task will be to recover this model's parameters using the finite set of examples contained in our dataset. We will keep the data low-dimensional so we can visualize it easily. In the following code snippet, we generate a dataset containing 1000 examples, each consisting of 2 features sampled from a standard normal distribution. Thus our synthetic dataset will be a matrix $\mathbf{X}\in \mathbb{R}^{1000 \times 2}$. (**The true parameters generating our dataset will be $\mathbf{w} = [2, -3.4]^\top$ and $b = 4.2$, and**) our synthetic labels will be assigned according to the following linear model with the noise term $\epsilon$: (**$$\mathbf{y}= \mathbf{X} \mathbf{w} + b + \mathbf\epsilon.$$**) You could think of $\epsilon$ as capturing potential measurement errors on the features and labels. We will assume that the standard assumptions hold and thus that $\epsilon$ obeys a normal distribution with mean of 0. To make our problem easy, we will set its standard deviation to 0.01. The following code generates our synthetic dataset. ``` def synthetic_data(w, b, num_examples): #@save """Generate y = Xw + b + noise.""" X = tf.zeros((num_examples, w.shape[0])) X += tf.random.normal(shape=X.shape) y = tf.matmul(X, tf.reshape(w, (-1, 1))) + b y += tf.random.normal(shape=y.shape, stddev=0.01) y = tf.reshape(y, (-1, 1)) return X, y true_w = tf.constant([2, -3.4]) true_b = 4.2 features, labels = synthetic_data(true_w, true_b, 1000) ``` Note that [**each row in `features` consists of a 2-dimensional data example and that each row in `labels` consists of a 1-dimensional label value (a scalar).**] ``` print('features:', features[0],'\nlabel:', labels[0]) ``` By generating a scatter plot using the second feature `features[:, 1]` and `labels`, we can clearly observe the linear correlation between the two. ``` d2l.set_figsize() # The semicolon is for displaying the plot only d2l.plt.scatter(features[:, (1)].numpy(), labels.numpy(), 1); ``` ## Reading the Dataset Recall that training models consists of making multiple passes over the dataset, grabbing one minibatch of examples at a time, and using them to update our model. Since this process is so fundamental to training machine learning algorithms, it is worth defining a utility function to shuffle the dataset and access it in minibatches. 
In the following code, we [**define the `data_iter` function**] (~~that~~) to demonstrate one possible implementation of this functionality. The function (**takes a batch size, a matrix of features, and a vector of labels, yielding minibatches of the size `batch_size`.**) Each minibatch consists of a tuple of features and labels. ``` def data_iter(batch_size, features, labels): num_examples = len(features) indices = list(range(num_examples)) # The examples are read at random, in no particular order random.shuffle(indices) for i in range(0, num_examples, batch_size): j = tf.constant(indices[i: min(i + batch_size, num_examples)]) yield tf.gather(features, j), tf.gather(labels, j) ``` In general, note that we want to use reasonably sized minibatches to take advantage of the GPU hardware, which excels at parallelizing operations. Because each example can be fed through our models in parallel and the gradient of the loss function for each example can also be taken in parallel, GPUs allow us to process hundreds of examples in scarcely more time than it might take to process just a single example. To build some intuition, let us read and print the first small batch of data examples. The shape of the features in each minibatch tells us both the minibatch size and the number of input features. Likewise, our minibatch of labels will have a shape given by `batch_size`. ``` batch_size = 10 for X, y in data_iter(batch_size, features, labels): print(X, '\n', y) break ``` As we run the iteration, we obtain distinct minibatches successively until the entire dataset has been exhausted (try this). While the iteration implemented above is good for didactic purposes, it is inefficient in ways that might get us in trouble on real problems. For example, it requires that we load all the data in memory and that we perform lots of random memory access. The built-in iterators implemented in a deep learning framework are considerably more efficient and they can deal with both data stored in files and data fed via data streams. ## Initializing Model Parameters [**Before we can begin optimizing our model's parameters**] by minibatch stochastic gradient descent, (**we need to have some parameters in the first place.**) In the following code, we initialize weights by sampling random numbers from a normal distribution with mean 0 and a standard deviation of 0.01, and setting the bias to 0. ``` w = tf.Variable(tf.random.normal(shape=(2, 1), mean=0, stddev=0.01), trainable=True) b = tf.Variable(tf.zeros(1), trainable=True) ``` After initializing our parameters, our next task is to update them until they fit our data sufficiently well. Each update requires taking the gradient of our loss function with respect to the parameters. Given this gradient, we can update each parameter in the direction that may reduce the loss. Since nobody wants to compute gradients explicitly (this is tedious and error prone), we use automatic differentiation, as introduced in :numref:`sec_autograd`, to compute the gradient. ## Defining the Model Next, we must [**define our model, relating its inputs and parameters to its outputs.**] Recall that to calculate the output of the linear model, we simply take the matrix-vector dot product of the input features $\mathbf{X}$ and the model weights $\mathbf{w}$, and add the offset $b$ to each example. Note that below $\mathbf{Xw}$ is a vector and $b$ is a scalar. Recall the broadcasting mechanism as described in :numref:`subsec_broadcasting`. 
When we add a vector and a scalar, the scalar is added to each component of the vector. ``` def linreg(X, w, b): #@save """The linear regression model.""" return tf.matmul(X, w) + b ``` ## Defining the Loss Function Since [**updating our model requires taking the gradient of our loss function,**] we ought to (**define the loss function first.**) Here we will use the squared loss function as described in :numref:`sec_linear_regression`. In the implementation, we need to transform the true value `y` into the predicted value's shape `y_hat`. The result returned by the following function will also have the same shape as `y_hat`. ``` def squared_loss(y_hat, y): #@save """Squared loss.""" return (y_hat - tf.reshape(y, y_hat.shape)) ** 2 / 2 ``` ## Defining the Optimization Algorithm As we discussed in :numref:`sec_linear_regression`, linear regression has a closed-form solution. However, this is not a book about linear regression: it is a book about deep learning. Since none of the other models that this book introduces can be solved analytically, we will take this opportunity to introduce your first working example of minibatch stochastic gradient descent. [~~Despite linear regression has a closed-form solution, other models in this book don't. Here we introduce minibatch stochastic gradient descent.~~] At each step, using one minibatch randomly drawn from our dataset, we will estimate the gradient of the loss with respect to our parameters. Next, we will update our parameters in the direction that may reduce the loss. The following code applies the minibatch stochastic gradient descent update, given a set of parameters, a learning rate, and a batch size. The size of the update step is determined by the learning rate `lr`. Because our loss is calculated as a sum over the minibatch of examples, we normalize our step size by the batch size (`batch_size`), so that the magnitude of a typical step size does not depend heavily on our choice of the batch size. ``` def sgd(params, grads, lr, batch_size): #@save """Minibatch stochastic gradient descent.""" for param, grad in zip(params, grads): param.assign_sub(lr*grad/batch_size) ``` ## Training Now that we have all of the parts in place, we are ready to [**implement the main training loop.**] It is crucial that you understand this code because you will see nearly identical training loops over and over again throughout your career in deep learning. In each iteration, we will grab a minibatch of training examples, and pass them through our model to obtain a set of predictions. After calculating the loss, we initiate the backwards pass through the network, storing the gradients with respect to each parameter. Finally, we will call the optimization algorithm `sgd` to update the model parameters. In summary, we will execute the following loop: * Initialize parameters $(\mathbf{w}, b)$ * Repeat until done * Compute gradient $\mathbf{g} \leftarrow \partial_{(\mathbf{w},b)} \frac{1}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} l(\mathbf{x}^{(i)}, y^{(i)}, \mathbf{w}, b)$ * Update parameters $(\mathbf{w}, b) \leftarrow (\mathbf{w}, b) - \eta \mathbf{g}$ In each *epoch*, we will iterate through the entire dataset (using the `data_iter` function) once passing through every example in the training dataset (assuming that the number of examples is divisible by the batch size). The number of epochs `num_epochs` and the learning rate `lr` are both hyperparameters, which we set here to 3 and 0.03, respectively. 
Unfortunately, setting hyperparameters is tricky and requires some adjustment by trial and error. We elide these details for now but revise them later in :numref:`chap_optimization`. ``` lr = 0.03 num_epochs = 3 net = linreg loss = squared_loss for epoch in range(num_epochs): for X, y in data_iter(batch_size, features, labels): with tf.GradientTape() as g: l = loss(net(X, w, b), y) # Minibatch loss in `X` and `y` # Compute gradient on l with respect to [`w`, `b`] dw, db = g.gradient(l, [w, b]) # Update parameters using their gradient sgd([w, b], [dw, db], lr, batch_size) train_l = loss(net(features, w, b), labels) print(f'epoch {epoch + 1}, loss {float(tf.reduce_mean(train_l)):f}') ``` In this case, because we synthesized the dataset ourselves, we know precisely what the true parameters are. Thus, we can [**evaluate our success in training by comparing the true parameters with those that we learned**] through our training loop. Indeed they turn out to be very close to each other. ``` print(f'error in estimating w: {true_w - tf.reshape(w, true_w.shape)}') print(f'error in estimating b: {true_b - b}') ``` Note that we should not take it for granted that we are able to recover the parameters perfectly. However, in machine learning, we are typically less concerned with recovering true underlying parameters, and more concerned with parameters that lead to highly accurate prediction. Fortunately, even on difficult optimization problems, stochastic gradient descent can often find remarkably good solutions, owing partly to the fact that, for deep networks, there exist many configurations of the parameters that lead to highly accurate prediction. ## Summary * We saw how a deep network can be implemented and optimized from scratch, using just tensors and auto differentiation, without any need for defining layers or fancy optimizers. * This section only scratches the surface of what is possible. In the following sections, we will describe additional models based on the concepts that we have just introduced and learn how to implement them more concisely. ## Exercises 1. What would happen if we were to initialize the weights to zero. Would the algorithm still work? 1. Assume that you are [Georg Simon Ohm](https://en.wikipedia.org/wiki/Georg_Ohm) trying to come up with a model between voltage and current. Can you use auto differentiation to learn the parameters of your model? 1. Can you use [Planck's Law](https://en.wikipedia.org/wiki/Planck%27s_law) to determine the temperature of an object using spectral energy density? 1. What are the problems you might encounter if you wanted to compute the second derivatives? How would you fix them? 1. Why is the `reshape` function needed in the `squared_loss` function? 1. Experiment using different learning rates to find out how fast the loss function value drops. 1. If the number of examples cannot be divided by the batch size, what happens to the `data_iter` function's behavior? [Discussions](https://discuss.d2l.ai/t/201)
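As an optional cross-check (an addition, not part of the original notebook), the closed-form solution mentioned in the optimization section can be computed directly with least squares and compared with the parameters learned by minibatch SGD. The sketch below assumes `features`, `labels`, `w`, and `b` from the cells above are still in scope.

```
# Optional cross-check: solve the same regression in closed form and compare
# with the SGD estimates. Assumes `features`, `labels`, `w`, `b` exist above.
X_aug = tf.concat([features, tf.ones((features.shape[0], 1))], axis=1)  # append a column of ones for the bias
theta = tf.linalg.lstsq(X_aug, labels)  # least-squares solution [w1, w2, b]^T
print('closed-form w:', tf.reshape(theta[:2], (-1,)).numpy(), ' b:', float(theta[2]))
print('SGD-learned w:', tf.reshape(w, (-1,)).numpy(), ' b:', float(b))
```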
<a href="https://qworld.net" target="_blank" align="left"><img src="../qworld/images/header.jpg" align="left"></a> $ \newcommand{\bra}[1]{\langle #1|} $ $ \newcommand{\ket}[1]{|#1\rangle} $ $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ $ \newcommand{\dot}[2]{ #1 \cdot #2} $ $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ $ \newcommand{\mypar}[1]{\left( #1 \right)} $ $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ $ \newcommand{\onehalf}{\frac{1}{2}} $ $ \newcommand{\donehalf}{\dfrac{1}{2}} $ $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ $ \newcommand{\vzero}{\myvector{1\\0}} $ $ \newcommand{\vone}{\myvector{0\\1}} $ $ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $ $ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ $ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $ $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ $ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $ $ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $ $ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $ $ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $ $ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $ $ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $ <font style="font-size:28px;" align="left"><b> <font color="blue"> Solutions for </font> Matrices: Tensor Product</b></font> <br> _prepared by Abuzer Yakaryilmaz_ <br><br> <a id="task1"></a> <h3> Task 1 </h3> Find $ u \otimes v $ and $ v \otimes u $ for the given vectors $ u = \myrvector{-2 \\ -1 \\ 0 \\ 1} $ and $ v = \myrvector{ 1 \\ 2 \\ 3 } $. <h3>Solution</h3> ``` u = [-2,-1,0,1] v = [1,2,3] uv = [] vu = [] for i in range(len(u)): # one element of u is picked for j in range(len(v)): # now we iteratively select every element of v uv.append(u[i]*v[j]) # this one element of u is iteratively multiplied with every element of v print("u-tensor-v is",uv) for i in range(len(v)): # one element of v is picked for j in range(len(u)): # now we iteratively select every element of u vu.append(v[i]*u[j]) # this one element of v is iteratively multiplied with every element of u print("v-tensor-u is",vu) ``` <a id="task2"></a> <h3> Task 2 </h3> Find $ A \otimes B $ for the given matrices $ A = \mymatrix{rrr}{-1 & 0 & 1 \\ -2 & -1 & 2} ~~\mbox{and}~~ B = \mymatrix{rr}{0 & 2 \\ 3 & -1 \\ -1 & 1 }. 
$ <h3>Solution</h3> ``` A = [ [-1,0,1], [-2,-1,2] ] B = [ [0,2], [3,-1], [-1,1] ] print("A =") for i in range(len(A)): print(A[i]) print() # print a line print("B =") for i in range(len(B)): print(B[i]) # let's define A-tensor-B as a (6x6)-dimensional zero matrix AB = [] for i in range(6): AB.append([]) for j in range(6): AB[i].append(0) # let's find A-tensor-B for i in range(2): for j in range(3): # for each A(i,j) we execute the following codes a = A[i][j] # we access each element of B for m in range(3): for n in range(2): b = B[m][n] # now we put (a*b) in the appropriate index of AB AB[3*i+m][2*j+n] = a * b print() # print a line print("A-tensor-B =") print() # print a line for i in range(6): print(AB[i]) ``` <a id="task3"></a> <h3> Task 3 </h3> Find $ B \otimes A $ for the given matrices $ A = \mymatrix{rrr}{-1 & 0 & 1 \\ -2 & -1 & 2} ~~\mbox{and}~~ B = \mymatrix{rr}{0 & 2 \\ 3 & -1 \\ -1 & 1 }. $ <h3>Solution</h3> ``` A = [ [-1,0,1], [-2,-1,2] ] B = [ [0,2], [3,-1], [-1,1] ] print() # print a line print("B =") for i in range(len(B)): print(B[i]) print("A =") for i in range(len(A)): print(A[i]) # let's define B-tensor-A as a (6x6)-dimensional zero matrix BA = [] for i in range(6): BA.append([]) for j in range(6): BA[i].append(0) # let's find B-tensor-A for i in range(3): for j in range(2): # for each B(i,j) we execute the following codes b = B[i][j] # we access each element of A for m in range(2): for n in range(3): a = A[m][n] # now we put (a*b) in the appropriate index of AB BA[2*i+m][3*j+n] = b * a print() # print a line print("B-tensor-A =") print() # print a line for i in range(6): print(BA[i]) ```
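As a quick cross-check (an addition, not part of the original tasks), NumPy's `kron` function computes the same tensor products, so it should reproduce the lists and matrices obtained with the explicit loops above.

```
import numpy as np

# optional cross-check of the loop-based results above with numpy.kron
u = np.array([-2, -1, 0, 1])
v = np.array([1, 2, 3])
print("u-tensor-v =", np.kron(u, v))   # should match the uv list from Task 1
print("v-tensor-u =", np.kron(v, u))   # should match the vu list from Task 1

A = np.array([[-1, 0, 1], [-2, -1, 2]])
B = np.array([[0, 2], [3, -1], [-1, 1]])
print("A-tensor-B =\n", np.kron(A, B))  # should match AB from Task 2
print("B-tensor-A =\n", np.kron(B, A))  # should match BA from Task 3
```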
# Mask R-CNN - Train on Nuclei Dataset (updated from train_shape.ipynb) This notebook shows how to train Mask R-CNN on your own dataset. To keep things simple we use a synthetic dataset of shapes (squares, triangles, and circles) which enables fast training. You'd still need a GPU, though, because the network backbone is a Resnet101, which would be too slow to train on a CPU. On a GPU, you can start to get okay-ish results in a few minutes, and good results in less than an hour. The code of the *Shapes* dataset is included below. It generates images on the fly, so it doesn't require downloading any data. And it can generate images of any size, so we pick a small image size to train faster. ``` import os import sys import random import math import re import time import tqdm import numpy as np import cv2 import matplotlib import matplotlib.pyplot as plt from config import Config import utils import model as modellib import visualize from model import log %matplotlib inline # Root directory of the project ROOT_DIR = os.getcwd() # Directory to save logs and trained model # MODEL_DIR = os.path.join(ROOT_DIR, "logs") MODEL_DIR = "/data/lf/Nuclei/logs" DATA_DIR = os.path.join(ROOT_DIR, "data") # Local path to trained weights file COCO_MODEL_PATH = os.path.join(ROOT_DIR, "models", "mask_rcnn_coco.h5") # Download COCO trained weights from Releases if needed if not os.path.exists(COCO_MODEL_PATH): utils.download_trained_weights(COCO_MODEL_PATH) ``` ## Configurations ``` class NucleiConfig(Config): """Configuration for training on the toy shapes dataset. Derives from the base Config class and overrides values specific to the toy shapes dataset. """ # Give the configuration a recognizable name NAME = "nuclei" # Train on 1 GPU and 8 images per GPU. We can put multiple images on each # GPU because the images are small. Batch size is 8 (GPUs * images/GPU). GPU_COUNT = 1 IMAGES_PER_GPU = 4 # Number of classes (including background) NUM_CLASSES = 1 + 1 # background + 3 shapes # Use small images for faster training. Set the limits of the small side # the large side, and that determines the image shape. IMAGE_MIN_DIM = 512 IMAGE_MAX_DIM = 512 # Use smaller anchors because our image and objects are small RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128) # anchor side in pixels # Reduce training ROIs per image because the images are small and have # few objects. Aim to allow ROI sampling to pick 33% positive ROIs. TRAIN_ROIS_PER_IMAGE = 32 # Use a small epoch since the data is simple STEPS_PER_EPOCH = 100 # use small validation steps since the epoch is small VALIDATION_STEPS = 5 config = NucleiConfig() config.display() type(config.display()) ``` ## Notebook Preferences ``` def get_ax(rows=1, cols=1, size=8): """Return a Matplotlib Axes array to be used in all visualizations in the notebook. Provide a central point to control graph sizes. 
Change the default size attribute to control the size of rendered images """ _, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows)) return ax ``` ## Dataset Load the nuclei dataset Extend the Dataset class and add a method to get the nuclei dataset, `load_image_info()`, and override the following methods: * load_image() * load_mask() * image_reference() ``` class NucleiDataset(utils.Dataset): """Load the images and masks from dataset.""" def load_image_info(self, set_path, img_set): """Get the picture names(ids) of the dataset.""" # Add classes self.add_class("nucleis", 1, "regular") # TO DO : Three different image types into three classes # Add images # Get the images ids of training/testing set # train_ids = next(os.walk(set_path))[1] with open(img_set) as f: read_data = f.readlines() train_ids = [read_data[i][:-1] for i in range(0,len(read_data))] # Get the info of the images for i, id_ in enumerate(train_ids): file_path = os.path.join(set_path, id_) img_path = os.path.join(file_path, "images") masks_path = os.path.join(file_path, "masks") img_name = id_ + ".png" img = cv2.imread(os.path.join(img_path, img_name)) width, height, _ = img.shape self.add_image("nucleis", image_id=id_, path=file_path, img_path=img_path, masks_path=masks_path, width=width, height=height, nucleis="nucleis") def load_image(self, image_id): """Load image from file of the given image ID.""" info = self.image_info[image_id] img_path = info["img_path"] img_name = info["id"] + ".png" image = cv2.imread(os.path.join(img_path, img_name)) return image def image_reference(self, image_id): """Return the path of the given image ID.""" info = self.image_info[image_id] if info["source"] == "nucleis": return info["path"] else: super(self.__class__).image_reference(self, image_id) def load_mask(self, image_id): """Load the instance masks of the given image ID.""" info = self.image_info[image_id] mask_files = next(os.walk(info["masks_path"]))[2] masks = np. 
zeros([info['width'], info['height'], len(mask_files)], dtype=np.uint8) for i, id_ in enumerate(mask_files): single_mask = cv2.imread(os.path.join(info["masks_path"], id_), 0) masks[:, :, i:i+1] = single_mask[:, :, np.newaxis] class_ids = np.ones(len(mask_files)) return masks, class_ids.astype(np.int32) kFOLD_DIR = os.path.join(ROOT_DIR, "kfold_dataset") with open(kFOLD_DIR + '/10-fold-val-3.txt') as f: read_data = f.readlines() train_ids = [read_data[i][:-1] for i in range(0,len(read_data))] print(train_ids) # Training dataset TRAINSET_DIR = os.path.join(DATA_DIR, "stage1_train_fixed") # VALSET_DIR = os.path.join(DATA_DIR, "stage1_val") TESTSET_DIR = os.path.join(DATA_DIR, "stage1_test") kFOLD_DIR = os.path.join(ROOT_DIR, "kfold_dataset") dataset_train = NucleiDataset() dataset_train.load_image_info(TRAINSET_DIR, os.path.join(kFOLD_DIR, "10-fold-train-10.txt")) dataset_train.prepare() dataset_val = NucleiDataset() dataset_val.load_image_info(TRAINSET_DIR, os.path.join(kFOLD_DIR, "10-fold-val-10.txt")) dataset_val.prepare() print("Loading {} training images, {} validation images" .format(dataset_train.num_images, dataset_val.num_images)) # Load and display random samples image_ids = np.random.choice(dataset_train.image_ids, 4) print(dataset_train.num_images) for i, image_id in enumerate(image_ids): image = dataset_train.load_image(image_id) mask, class_ids = dataset_train.load_mask(image_id) visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names) ``` ## Bounding Boxes Although we don't have the specific box coordinates in the dataset, we can compute the bounding boxes from masks instead. This allows us to handle bounding boxes consistently regardless of the source dataset, and it also makes it easier to resize, rotate, or crop images because we simply generate the bounding boxes from the updates masks rather than computing bounding box transformation for each type of image transformation. ``` # Load random image and mask. image_id = random.choice(dataset_train.image_ids) image = dataset_train.load_image(image_id) mask, class_ids = dataset_train.load_mask(image_id) # Compute Bounding box bbox = utils.extract_bboxes(mask) # Display image and additional stats print("image_id ", image_id, dataset_train.image_reference(image_id)) log("image", image) log("mask", mask) log("class_ids", class_ids) log("bbox", bbox) # Display image and instances visualize.display_instances(image, bbox, mask, class_ids, dataset_train.class_names) ``` ## Ceate Model ``` # Create model in training mode model = modellib.MaskRCNN(mode="training", config=config, model_dir=MODEL_DIR) # Which weights to start with? init_with = "coco" # imagenet, coco, or last if init_with == "imagenet": model.load_weights(model.get_imagenet_weights(), by_name=True) elif init_with == "coco": # Load weights trained on MS COCO, but skip layers that # are different due to the different number of classes # See README for instructions to download the COCO weights model.load_weights(COCO_MODEL_PATH, by_name=True, exclude=["mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox", "mrcnn_mask"]) elif init_with == "last": # Load the last model you trained and continue training model.load_weights(model.find_last()[1], by_name=True) ``` ## Training Train in two stages: 1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones that we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function. 2. 
Fine-tune all layers. For this simple example it's not necessary, but we're including it to show the process. Simply pass `layers="all` to train all layers. ``` # Train the head branches # Passing layers="heads" freezes all layers except the head # layers. You can also pass a regular expression to select # which layers to train by name pattern. model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE, epochs=1, layers='heads') # Fine tune all layers # Passing layers="all" trains all layers. You can also # pass a regular expression to select which layers to # train by name pattern. model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE / 10, epochs=2, layers="all") import datetime print(now) import time rq = "config-" + time.strftime('%Y%m%d%H%M', time.localtime(time.time())) +".log" print(rq) # Save weights # Typically not needed because callbacks save after every epoch # Uncomment to save manually # model_path = os.path.join(MODEL_DIR, "mask_rcnn_shapes.h5") # model.keras_model.save_weights(model_path) ``` ## Detection Example ``` class InferenceConfig(NucleiConfig): GPU_COUNT = 1 IMAGES_PER_GPU = 1 DETECTION_NMS_THRESHOLD = 0.3 DETECTION_MAX_INSTANCES = 300 inference_config = InferenceConfig() # Recreate the model in inference mode model = modellib.MaskRCNN(mode="inference", config=inference_config, model_dir=MODEL_DIR) # Get path to saved weights # Either set a specific path or find last trained weights # model_path = os.path.join(ROOT_DIR, ".h5 file name here") model_path = "/data2/liangfeng/nuclei_models/nuclei20180202T1847/mask_rcnn_nuclei_0080.h5" # Load trained weights (fill in path to trained weights here) assert model_path != "", "Provide path to trained weights" print("Loading weights from ", model_path) model.load_weights(model_path, by_name=True) # Test on a random image(load_image_gt will resize the image!) image_id = random.choice(dataset_val.image_ids) original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\ modellib.load_image_gt(dataset_val, inference_config, image_id, use_mini_mask=False) print("image_id ", image_id, dataset_val.image_reference(image_id)) log("original_image", original_image) log("image_meta", image_meta) log("gt_class_id", gt_class_id) log("gt_bbox", gt_bbox) log("gt_mask", gt_mask) visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id, dataset_train.class_names, figsize=(8, 8)) results = model.detect([original_image], verbose=1) r = results[0] # print(r) visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'], dataset_val.class_names, r['scores'], ax=get_ax()) ``` ## Evaluation ``` # Compute VOC-Style mAP @ IoU=0.5 # Running on 10 images. Increase for better accuracy. # image_ids = np.random.choice(dataset_val.image_ids, 10) image_ids = dataset_val.image_ids APs = [] for image_id in image_ids: # Load image and ground truth data image, image_meta, gt_class_id, gt_bbox, gt_mask =\ modellib.load_image_gt(dataset_val, inference_config, image_id, use_mini_mask=False) molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0) # Run object detection results = model.detect([image], verbose=0) r = results[0] # Compute AP AP, precisions, recalls, overlaps =\ utils.compute_ap(gt_bbox, gt_class_id, r["rois"], r["class_ids"], r["scores"]) APs.append(AP) print("mAP: ", np.mean(APs)) ``` ## Writing the Results ``` # Get the Test set. 
TESTSET_DIR = os.path.join(DATA_DIR, "stage1_test") dataset_test = NucleiDataset() dataset_test.load_image_info(TESTSET_DIR) dataset_test.prepare() print("Predict {} images".format(dataset_test.num_images)) # Load random image and mask(Original Size). image_id = np.random.choice(dataset_test.image_ids) image = dataset_test.load_image(image_id) plt.figure() plt.imshow(image) plt.title(image_id, fontsize=9) plt.axis('off') # images = dataset_test.load_image(image_ids) # mask, class_ids = dataset_test.load_mask(image_id) # Compute Bounding box # bbox = utils.extract_bboxes(mask) # Display image and additional stats # print("image_id ", image_id, dataset_test.image_reference(image_id)) # log("image", image) # log("mask", mask) # log("class_ids", class_ids) # log("bbox", bbox) # Display image and instances # visualize.display_instances(image, bbox, mask, class_ids, dataset_test.class_names) results = model.detect([image], verbose=1) r = results[0] mask_exist = np.zeros(r['masks'].shape[:-1], dtype=np.uint8) mask_sum = np.zeros(r['masks'].shape[:-1], dtype=np.uint8) for i in range(r['masks'].shape[-1]): _mask = r['masks'][:,:,i] mask_sum += _mask # print(np.multiply(mask_exist, _mask)) # print(np.where(np.multiply(mask_exist, _mask) == 1)) index_ = np.where(np.multiply(mask_exist, _mask) == 1) _mask[index_] = 0 mask_exist += _mask # masks_sum = np.sum(r['masks'] ,axis=2) # overlap = np.where(masks_sum > 1) # print(overlap) # plt.figure() plt.subplot(1,2,1) plt.imshow(mask_exist) plt.subplot(1,2,2) plt.imshow(mask_sum) # visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'], # dataset_test.class_names, r['scores'], ax=get_ax()) a = [[0, 1],[0, 0]] np.any(a) def rle_encoding(x): dots = np.where(x.T.flatten() == 1)[0] run_lengths = [] prev = -2 for b in dots: if (b>prev+1): run_lengths.extend((b + 1, 0)) run_lengths[-1] += 1 prev = b return run_lengths import pandas as pd test_ids = [] test_rles = [] id_ = dataset_val.image_info[image_id]["id"] results = model.detect([image], verbose=1) r = results[0] for i in range(len(r['scores'])): test_ids.append(id_) test_rles.append(rle_encoding(r['masks'][:, : , i])) sub = pd.DataFrame() sub['ImageId'] = test_ids sub['EncodedPixels'] = pd.Series(test_rles).apply(lambda x: ' '.join(str(y) for y in x)) model_path csvpath = "{}.csv".format(model_path) print(csvpath) sub.to_csv(csvpath, index=False) # plt.imshow('image',r['masks'][0]) ```
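One way to sanity-check the `rle_encoding` helper above is to decode the run lengths back into a mask and confirm the round trip. The `rle_decoding` function below is an addition (not part of the original notebook) and assumes the same 1-indexed, column-major (Fortran-order) convention used by the encoder.

```
def rle_decoding(rle, shape):
    """Inverse of `rle_encoding`: rebuild a binary mask of the given (height, width) shape."""
    flat = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    starts, lengths = rle[0::2], rle[1::2]
    for start, length in zip(starts, lengths):
        flat[start - 1:start - 1 + length] = 1  # starts are 1-indexed
    return flat.reshape((shape[1], shape[0])).T  # undo the column-major flattening

# quick round-trip check on a small toy mask
toy = np.array([[0, 1, 1],
                [0, 1, 0]], dtype=np.uint8)
rle = rle_encoding(toy)
print(rle)
print(np.array_equal(rle_decoding(rle, toy.shape), toy))
```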
# Socioeconomic data validation

---

The literature indicates that the most important factor for school performance is the socioeconomic level of the students. We assume that nearby schools have students of a similar socioeconomic level, but this assumption needs to be tested. We used data from [INSE](http://portal.inep.gov.br/web/guest/indicadores-educacionais) to measure the socioeconomic level of each school's students in 2015.

### Goals

Examining the geolocated IDEB data from schools and modeling *risk* and *model* schools for the research. Combining the schools' IDEB (SAEB + approval rate) marks with Rio de Janeiro's municipal shapefile, we hope to discover some local patterns in school performance over the years. The time interval we will analyze is from 2011 until today.

### Data sources

- `ideb_merged.csv`: resulting data from the geolocation step, with IDEB by year in columns
- `ideb_merged_kepler.csv`: resulting data from the geolocation step, formatted as kepler input

### Methodology

The goal is to determine the "model" schools within a certain radius. We define these "models" as schools that showed strong growth and stand near "high risk" schools, the ones in the lowest strata. For that, we construct the model below with suggestions by Ragazzo.

We are interested in the following groups:

- Group 1: Schools from very low (< 4) to high (> 6)
- Group 2: Schools from low (4 < x < 5) to high (> 6)
- Group 3: Schools that went to high (> 6) with delta > 2

The *attention level* (or risk) of a school is defined by which quartile it belongs to in the IDEB 2017 distribution (most recent), from the lowest quartile (level 4) to the highest (level 1).

### Results

1. [Identify the schools with the largest IDEB variation from 2005 to 2017](#1)
2. [Identify schools that jumped from low / very low IDEB (<5 / <4) to high IDEB (> 6), from 2005 to 2017](#2)
3. [Model neighbors: which schools had a large delta and were nearby schools on the highest attention level (4)?](#3)
4. [See if the education census contains information on who was the principal of each school each year.](#4) - actually, we use an indicator of the school's "management complexity" together with the IDEB data. We did not find any difference between levels of "management complexity" in the IDEB marks of the schools at each level.

#### Outputs

- `model_neighboors_closest_multiple.csv`: database with the risk schools and closest model schools
- `top_15_delta.csv`, `bottom_15_delta.csv`: top and bottom schools by evolution from 2005 to 2017
- `kepler_with_filters.csv`: database for plotting in kepler with the school categories (from the methodology)

### Authors

Original code by Guilherme Almeida here, adapted by Fernanda Scovino - 2019.

```
# Import config
import os
import sys
sys.path.insert(0, '../')
from config import RAW_PATH, TREAT_PATH, OUTPUT_PATH

# DATA ANALYSIS & VIZ TOOLS
from copy import deepcopy
import pandas as pd
import numpy as np
pd.options.display.max_columns = 999

import geopandas as gpd
from shapely.wkt import loads

import matplotlib.pyplot as plt
import seaborn as sns
%pylab inline
pylab.rcParams['figure.figsize'] = (12, 15)

# CONFIGS
%load_ext autoreload
#%autoreload 2

#import warnings
#warnings.filterwarnings('ignore')

palette = ['#FEC300', '#F1920E', '#E3611C', '#C70039', '#900C3F', '#5A1846', '#3a414c', '#29323C']
sns.set()
```

## Import data

```
inse = pd.read_excel(RAW_PATH / "INSE_2015.xlsx")
schools_ideb = pd.read_csv(OUTPUT_PATH / "kepler_with_filters.csv")
```

## INSE data analysis

```
inse.rename(columns={"CO_ESCOLA" : "cod_inep"}, inplace=True)
inse.head()
schools_ideb['ano'] = pd.to_datetime(schools_ideb['ano'])
schools_ideb.head()
```

### Filtering model (`reference`) and risk (`attention`) schools

```
reference = schools_ideb[(schools_ideb['ano'].dt.year == 2017) & ((schools_ideb['pessimo_pra_bom_bin'] == 1) | (schools_ideb['ruim_pra_bom_bin'] == 1))]
reference.info()
attention = schools_ideb[(schools_ideb['ano'].dt.year == 2017) & (schools_ideb['nivel_atencao'] == 4)]
attention.info()
```

### Join INSE data

```
inse_cols = ["cod_inep", "NOME_ESCOLA", "INSE_VALOR_ABSOLUTO", "INSE_CLASSIFICACAO"]
reference = pd.merge(reference, inse[inse_cols], how = "left", on = "cod_inep")
attention = pd.merge(attention, inse[inse_cols], how = "left", on = "cod_inep")
reference['tipo_escola'] = 'Escola referência'
reference.info()
attention['tipo_escola'] = 'Escola de risco'
attention.info()
df_inse = attention.append(reference)
df_inse['escola_risco'] = df_inse['nivel_atencao'].apply(lambda x : 1 if x == 4 else 0)
df_inse['tipo_especifico'] = df_inse[['pessimo_pra_bom_bin', 'ruim_pra_bom_bin', 'escola_risco']].idxmax(axis=1)
del df_inse['escola_risco']
df_inse.head()
df_inse['tipo_especifico'].value_counts()
df_inse.to_csv(TREAT_PATH / "risk_and_model_schools_inse.csv", index = False)
```

### Comparing INSE data in categories

```
sns.distplot(attention["INSE_VALOR_ABSOLUTO"].dropna(), bins='fd', label='Escolas de risco')
sns.distplot(reference["INSE_VALOR_ABSOLUTO"].dropna(), bins='fd', label='Escolas modelo')
plt.legend()
pylab.rcParams['figure.figsize'] = (10, 8)
title = "Comparação do nível sócio-econômico das escolas selecionadas"
ylabel="INSE (2015) médio da escola"
xlabel="Tipo da escola"
sns.boxplot(y ="INSE_VALOR_ABSOLUTO", x="tipo_escola", data=df_inse).set(ylabel=ylabel, xlabel=xlabel, title=title)
pylab.rcParams['figure.figsize'] = (10, 8)
xlabel = "Tipo da escola (específico)"
sns.boxplot(y = "INSE_VALOR_ABSOLUTO", x="tipo_especifico", data=df_inse).set(ylabel=ylabel, xlabel=xlabel, title=title)
```

## Statistical INSE analysis

### Normality test

From [this article:](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3693611/)

> According to the available literature, **assessing the normality assumption should be taken into account for using parametric statistical tests.** It seems that the most popular test for normality, that is, the K-S test, should no longer be used owing to its low power. It is preferable that normality be assessed both visually and through normality tests, of which the Shapiro-Wilk test, provided by the SPSS software, is highly recommended. The normality assumption also needs to be considered for validation of data presented in the literature as it shows whether correct statistical tests have been used.

```
from scipy.stats import normaltest, shapiro, probplot
```

#### D'Agostino and Pearson's

```
normaltest(attention["INSE_VALOR_ABSOLUTO"].dropna())
normaltest(reference["INSE_VALOR_ABSOLUTO"].dropna())
```

#### Shapiro-Wilk

```
shapiro(attention["INSE_VALOR_ABSOLUTO"].dropna())
qs = probplot(reference["INSE_VALOR_ABSOLUTO"].dropna(), plot=plt)
shapiro(reference["INSE_VALOR_ABSOLUTO"].dropna())
ws = probplot(attention["INSE_VALOR_ABSOLUTO"].dropna(), plot=plt)
```

### *t* test

About parametric tests: [here](https://www.healthknowledge.org.uk/public-health-textbook/research-methods/1b-statistical-methods/parametric-nonparametric-tests)

We can test whether the mean INSE of the risk schools ($\mu_r$) differs from that of the model schools ($\mu_m$), which would indicate that INSE is related to the IDEB-based grouping:

$H_0: \mu_r = \mu_m$

$H_a: \mu_r \neq \mu_m$

For the *t* test, we need to ensure that:

1. the variances are equal (1.94 is close enough to 2.05)
2. the samples have the same size (?)
3.

```
from scipy.stats import ttest_ind as ttest, normaltest, kstest

attention["INSE_VALOR_ABSOLUTO"].dropna().describe()
reference["INSE_VALOR_ABSOLUTO"].dropna().describe()
```

#### Model x risk schools

```
ttest(attention["INSE_VALOR_ABSOLUTO"], reference["INSE_VALOR_ABSOLUTO"], nan_policy="omit", equal_var=True)
ttest(attention["INSE_VALOR_ABSOLUTO"], reference["INSE_VALOR_ABSOLUTO"], nan_policy="omit", equal_var=False)
```

### Cohen's D

My preferred effect-size metric is Cohen's D, but apparently there is no canonical implementation of it. I will use the one I found [on this site](https://machinelearningmastery.com/effect-size-measures-in-python/).
```
from numpy.random import randn
from numpy.random import seed
from numpy import mean
from numpy import var
from math import sqrt

# == Code made by Guilherme Almeida, 2019 ==
# function to calculate Cohen's d for independent samples
def cohend(d1, d2):
    # calculate the size of samples
    n1, n2 = len(d1), len(d2)
    # calculate the variance of the samples
    s1, s2 = var(d1, ddof=1), var(d2, ddof=1)
    # calculate the pooled standard deviation
    s = sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    # calculate the means of the samples
    u1, u2 = mean(d1), mean(d2)
    # calculate the effect size
    result = abs(u1 - u2) / s
    return result
```

#### Model x risk schools

```
ttest(attention["INSE_VALOR_ABSOLUTO"], reference["INSE_VALOR_ABSOLUTO"], nan_policy="omit")
cohend(reference["INSE_VALOR_ABSOLUTO"], attention["INSE_VALOR_ABSOLUTO"])
```

#### Best evolution model x risk schools

```
best_evolution = df_inse[df_inse['tipo_especifico'] == "pessimo_pra_bom_bin"]
ttest(attention["INSE_VALOR_ABSOLUTO"], best_evolution["INSE_VALOR_ABSOLUTO"], nan_policy="omit")
cohend(attention["INSE_VALOR_ABSOLUTO"], best_evolution["INSE_VALOR_ABSOLUTO"])
```

#### Other model x risk schools

```
medium_evolution = df_inse[df_inse['tipo_especifico'] == "ruim_pra_bom_bin"]
ttest(attention["INSE_VALOR_ABSOLUTO"], medium_evolution["INSE_VALOR_ABSOLUTO"], nan_policy="omit")
cohend(attention["INSE_VALOR_ABSOLUTO"], medium_evolution["INSE_VALOR_ABSOLUTO"])
```

```
referencias.head()
referencias = pd.merge(referencias, inse[["cod_inep", "NOME_ESCOLA", "INSE_VALOR_ABSOLUTO", "INSE_CLASSIFICACAO"]], how = "left", on = "cod_inep")
risco = pd.merge(risco, inse[["cod_inep", "NOME_ESCOLA", "INSE_VALOR_ABSOLUTO", "INSE_CLASSIFICACAO"]], how="left", on="cod_inep")
referencias.INSE_VALOR_ABSOLUTO.describe()
risco.INSE_VALOR_ABSOLUTO.describe()
risco["tipo"] = "Escolas com desempenho abaixo do esperado"
referencias["tipo"] = "Escolas-referência"
df = risco.append(referencias)
df.to_csv("risco_referencia_inse.csv", index = False)
df = pd.read_csv("risco_referencia_inse.csv")
sen.sen_boxplot(x = "tipo", y = "INSE_VALOR_ABSOLUTO",
                y_label = "INSE (2015) médio da escola", x_label = " ",
                plot_title = "Comparação do nível sócio-econômico das escolas selecionadas",
                palette = {"Escolas com desempenho abaixo do esperado" : "indianred", "Escolas-referência" : "skyblue"},
                data = df, output_path = "inse_op1.png")
df = pd.read_csv("risco_referencia_inse.csv")
sen.sen_boxplot(x = "tipo_especifico", y = "INSE_VALOR_ABSOLUTO",
                y_label = "INSE (2015) médio da escola", x_label = " ",
                plot_title = "Comparação do nível sócio-econômico das escolas selecionadas",
                palette = {"Desempenho abaixo\ndo esperado" : "indianred", "Ruim para bom" : "skyblue", "Muito ruim para bom" : "lightblue"},
                data = df, output_path = "inse_op2.png")
```

# Statistical tests

## Cohen's D

My preferred effect-size metric is Cohen's D, but apparently there is no canonical implementation of it. I will use the one I found [on this site](https://machinelearningmastery.com/effect-size-measures-in-python/).
```
from numpy.random import randn
from numpy.random import seed
from numpy import mean
from numpy import var
from math import sqrt

# function to calculate Cohen's d for independent samples
def cohend(d1, d2):
    # calculate the size of samples
    n1, n2 = len(d1), len(d2)
    # calculate the variance of the samples
    s1, s2 = var(d1, ddof=1), var(d2, ddof=1)
    # calculate the pooled standard deviation
    s = sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    # calculate the means of the samples
    u1, u2 = mean(d1), mean(d2)
    # calculate the effect size
    return (u1 - u2) / s
```

All reference schools vs. risk schools

```
ttest(risco["INSE_VALOR_ABSOLUTO"], referencias["INSE_VALOR_ABSOLUTO"], nan_policy="omit")
cohend(referencias["INSE_VALOR_ABSOLUTO"], risco["INSE_VALOR_ABSOLUTO"])
```

Only the "very bad to good" schools vs. risk schools

```
ttest(risco["INSE_VALOR_ABSOLUTO"], referencias.query("tipo_especifico == 'Muito ruim para bom'")["INSE_VALOR_ABSOLUTO"], nan_policy="omit")
cohend(referencias.query("tipo_especifico == 'Muito ruim para bom'")["INSE_VALOR_ABSOLUTO"], risco["INSE_VALOR_ABSOLUTO"])
```

# Trying to infer causality

We know there is a significant difference between the socioeconomic levels of the two groups. But to what extent can this difference in INSE explain the difference in IDEB? Is there any remaining effect that could be attributed to management practices? These tests look for an answer to that question.

## Linear regressions

```
# get the IDEB score to serve as the dependent variable
ideb = pd.read_csv("./pr-educacao/data/output/ideb_merged_kepler.csv")
ideb["ano_true"] = ideb["ano"].apply(lambda x: int(x[0:4]))
ideb = ideb.query("ano_true == 2017").copy()
nota_ideb = ideb[["cod_inep", "ideb"]]
df = pd.merge(df, nota_ideb, how = "left", on = "cod_inep")
df.dropna(subset=["INSE_VALOR_ABSOLUTO"], inplace = True)
df["tipo_bin"] = np.where(df["tipo"] == "Escolas-referência", 1, 0)

from statsmodels.regression.linear_model import OLS as ols_py
from statsmodels.tools.tools import add_constant

ivs_multi = add_constant(df[["tipo_bin", "INSE_VALOR_ABSOLUTO"]])
modelo_multi = ols_py(df[["ideb"]], ivs_multi).fit()
print(modelo_multi.summary())
```

The problem with running the regression the way I set it up above is that `tipo_bin` was created partly as a function of IDEB (see the histograms below), so it is not a truly independent variable. One strategy might be to compare simple models with only INSE and with only `tipo_bin`.

```
df.ideb.hist()
df.query("tipo_bin == 0").ideb.hist()
df.query("tipo_bin == 1").ideb.hist()

# simple correlation
from scipy.stats import pearsonr
pearsonr(df[["ideb"]], df[["INSE_VALOR_ABSOLUTO"]])

iv_inse = add_constant(df[["INSE_VALOR_ABSOLUTO"]])
iv_ideb = add_constant(df[["tipo_bin"]])
modelo_inse = ols_py(df[["ideb"]], iv_inse).fit()
modelo_tipo = ols_py(df[["ideb"]], iv_ideb).fit()
print(modelo_inse.summary())
print("-----------------------------------------------------------")
print(modelo_tipo.summary())
```

## Paired tests

Our unit of observation should actually not be a single school but a pair of schools. Below, I run the analyses considering the INSE delta and the IDEB delta for each pair of schools. This matters: we know that INSE makes a difference for IDEB overall, but the question is whether it can explain the differences in performance within each pair.
```
pairs = pd.read_csv("sponsors_mais_proximos.csv")
pairs.head()
pairs.shape

inse_risco = inse[["cod_inep", "INSE_VALOR_ABSOLUTO"]]
inse_risco.columns = ["cod_inep_risco","inse_risco"]
inse_ref = inse[["cod_inep", "INSE_VALOR_ABSOLUTO"]]
inse_ref.columns = ["cod_inep_referencia","inse_referencia"]
pairs = pd.merge(pairs, inse_risco, how = "left", on = "cod_inep_risco")
pairs = pd.merge(pairs, inse_ref, how = "left", on = "cod_inep_referencia")

# compute the deltas
pairs["delta_inse"] = pairs["inse_referencia"] - pairs["inse_risco"]
pairs["delta_ideb"] = pairs["ideb_referencia"] - pairs["ideb_risco"]
pairs["delta_inse"].describe()
pairs["delta_inse"].hist()
pairs["delta_ideb"].describe()
pairs["delta_ideb"].hist()
pairs[pairs["delta_inse"].isnull()]
clean_pairs = pairs.dropna(subset = ["delta_inse"])

import seaborn as sns
import matplotlib.pyplot as plt

plt.figure(figsize = sen.aspect_ratio_locker([16, 9], 0.6))
inse_plot = sns.regplot("delta_inse", "delta_ideb", data = clean_pairs)
plt.title("Correlação entre as diferenças do IDEB (2017) e do INSE (2015)\npara cada par de escolas mais próximas")
plt.xlabel("$INSE_{referência} - INSE_{desempenho\,abaixo\,do\,esperado}$", fontsize = 12)
plt.ylabel("$IDEB_{referência} - IDEB_{desempenh\,abaixo\,do\,esperado}$", fontsize = 12)
inse_plot.get_figure().savefig("delta_inse.png", dpi = 600)

pearsonr(clean_pairs[["delta_ideb"]], clean_pairs[["delta_inse"]])
X = add_constant(clean_pairs[["delta_inse"]])
modelo_pairs = ols_py(clean_pairs[["delta_ideb"]], X).fit()
print(modelo_pairs.summary())
```

Testing the assumption that physical distance correlates with INSE distance

```
pairs.head()
sns.regplot("distancia", "delta_inse", data = clean_pairs.query("distancia < 4000"))
multi_iv = add_constant(clean_pairs[["distancia", "delta_inse"]])
modelo_ze = ols_py(clean_pairs[["delta_ideb"]], multi_iv).fit()
print(modelo_ze.summary())
```
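As a quick sanity check of the `cohend` effect-size helper used throughout the analysis above (an addition, not part of the original notebook), two synthetic normal samples with unit variance whose means differ by 0.5 should give a d close to 0.5:

```
# sanity check of `cohend` on synthetic data (assumes the function defined above)
from numpy.random import seed, randn

seed(42)
sample_a = randn(10000)          # mean ~0, sd ~1
sample_b = randn(10000) + 0.5    # mean ~0.5, sd ~1
print(cohend(sample_b, sample_a))  # expected to be roughly 0.5
```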
### Intro & Resources * [Sutton/Barto ebook](https://goo.gl/7utZaz); [Silver online course](https://goo.gl/AWcMFW) ### Learning to Optimize Rewards * Definitions: software *agents* make *observations* & take *actions* within an *environment*. In return they can receive *rewards* (positive or negative). ### Policy Search * **Policy**: the algorithm used by an agent to determine a next action. ### OpenAI Gym ([link:](https://gym.openai.com/)) * A toolkit for various simulated environments. ``` !pip3 install --upgrade gym import gym env = gym.make("CartPole-v0") obs = env.reset() obs env.render() ``` * **make()** creates environment * **reset()** returns a 1st env't * **CartPole()** - each observation = 1D numpy array (hposition, velocity, angle, angularvelocity) ![cartpole](pics/cartpole.png) ``` img = env.render(mode="rgb_array") img.shape # what actions are possible? # in this case: 0 = accelerate left, 1 = accelerate right env.action_space # pole is leaning right. let's go further to the right. action = 1 obs, reward, done, info = env.step(action) obs, reward, done, info ``` * new observation: * hpos = obs[0]<0 * velocity = obs[1]>0 = moving to the right * angle = obs[2]>0 = leaning right * ang velocity = obs[3]<0 = slowing down? * reward = 1.0 * done = False (episode not over) * info = (empty) ``` # example policy: # (1) accelerate left when leaning left, (2) accelerate right when leaning right # average reward over 500 episodes? def basic_policy(obs): angle = obs[2] return 0 if angle < 0 else 1 totals = [] for episode in range(500): episode_rewards = 0 obs = env.reset() for step in range(1000): # 1000 steps max, we don't want to run forever action = basic_policy(obs) obs, reward, done, info = env.step(action) episode_rewards += reward if done: break totals.append(episode_rewards) import numpy as np np.mean(totals), np.std(totals), np.min(totals), np.max(totals) ``` ### NN Policies * observations as inputs - actions to be executed as outputs - determined by p(action) * approach lets agent find best balance between **exploring new actions** & **reusing known good actions**. ### Evaluating Actions: Credit Assignment problem * Reinforcement Learning (RL) training not like supervised learning. * RL feedback is via rewards (often sparse & delayed) * How to determine which previous steps were "good" or "bad"? (aka "*credit assigmnment problem*") * Common tactic: applying a **discount rate** to older rewards. * Use normalization across many episodes to increase score reliability. NN Policy | Discounts & Rewards - | - ![nn-policy](pics/nn-policy.png) | ![discount-rewards](pics/discount-rewards.png) ``` import tensorflow as tf from tensorflow.contrib.layers import fully_connected # 1. Specify the neural network architecture n_inputs = 4 # == env.observation_space.shape[0] n_hidden = 4 # simple task, don't need more hidden neurons n_outputs = 1 # only output prob(accelerating left) initializer = tf.contrib.layers.variance_scaling_initializer() # 2. Build the neural network X = tf.placeholder( tf.float32, shape=[None, n_inputs]) hidden = fully_connected( X, n_hidden, activation_fn=tf.nn.elu, weights_initializer=initializer) logits = fully_connected( hidden, n_outputs, activation_fn=None, weights_initializer=initializer) outputs = tf.nn.sigmoid(logits) # logistic (sigmoid) ==> return 0.0-1.0 # 3. 
Select a random action based on the estimated probabilities p_left_and_right = tf.concat( axis=1, values=[outputs, 1 - outputs]) action = tf.multinomial( tf.log(p_left_and_right), num_samples=1) init = tf.global_variables_initializer() ``` ### Policy Gradient (PG) algorithms * example: ["reinforce" algo, 1992](https://goo.gl/tUe4Sh) ### Markov Decision processes (MDPs) * Markov chains = stochastic processes, no memory, fixed #states, random transitions * Markov decision processes = similar to MCs - agent can choose action; transition probabilities depend on the action; transitions can return reward/punishment. * Goal: find policy with maximum rewards over time. Markov Chain | Markov Decision Process - | - ![markov-chain](pics/markov-chain.png) | ![alt](pics/markov-decision-process.png) * **Bellman Optimality Equation**: a method to estimate optimal state value of any state *s*. * Knowing optimal states = useful, but doesn't tell agent what to do. **Q-Value algorithm** helps solve this problem. Optimal Q-Value of a state-action pair = sum of discounted future rewards the agent can expect on average. ``` # Define MDP: nan=np.nan # represents impossible actions T = np.array([ # shape=[s, a, s'] [[0.7, 0.3, 0.0], [1.0, 0.0, 0.0], [0.8, 0.2, 0.0]], [[0.0, 1.0, 0.0], [nan, nan, nan], [0.0, 0.0, 1.0]], [[nan, nan, nan], [0.8, 0.1, 0.1], [nan, nan, nan]], ]) R = np.array([ # shape=[s, a, s'] [[10., 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]], [[10., 0.0, 0.0], [nan, nan, nan], [0.0, 0.0, -50.]], [[nan, nan, nan], [40., 0.0, 0.0], [nan, nan, nan]], ]) possible_actions = [[0, 1, 2], [0, 2], [1]] # run Q-Value Iteration algo Q = np.full((3, 3), -np.inf) for state, actions in enumerate(possible_actions): Q[state, actions] = 0.0 # Initial value = 0.0, for all possible actions learning_rate = 0.01 discount_rate = 0.95 n_iterations = 100 for iteration in range(n_iterations): Q_prev = Q.copy() for s in range(3): for a in possible_actions[s]: Q[s, a] = np.sum([ T[s, a, sp] * (R[s, a, sp] + discount_rate * np.max(Q_prev[sp])) for sp in range(3) ]) print("Q: \n",Q) print("Optimal action for each state:\n",np.argmax(Q, axis=1)) # change discount rate to 0.9, see how policy changes: discount_rate = 0.90 for iteration in range(n_iterations): Q_prev = Q.copy() for s in range(3): for a in possible_actions[s]: Q[s, a] = np.sum([ T[s, a, sp] * (R[s, a, sp] + discount_rate * np.max(Q_prev[sp])) for sp in range(3) ]) print("Q: \n",Q) print("Optimal action for each state:\n",np.argmax(Q, axis=1)) ``` ### Temporal Difference Learning & Q-Learning * In general - agent has no knowledge of transition probabilities or rewards * **Temporal Difference Learning** (TD Learning) similar to value iteration, but accounts for this lack of knowlege. * Algorithm tracks running average of most recent awards & anticipated rewards. * **Q-Learning** algorithm adaptation of Q-Value Iteration where initial transition probabilities & rewards are unknown. 
``` import numpy.random as rnd learning_rate0 = 0.05 learning_rate_decay = 0.1 n_iterations = 20000 s = 0 # start in state 0 Q = np.full((3, 3), -np.inf) # -inf for impossible actions for state, actions in enumerate(possible_actions): Q[state, actions] = 0.0 # Initial value = 0.0, for all possible actions for iteration in range(n_iterations): a = rnd.choice(possible_actions[s]) # choose an action (randomly) sp = rnd.choice(range(3), p=T[s, a]) # pick next state using T[s, a] reward = R[s, a, sp] learning_rate = learning_rate0 / (1 + iteration * learning_rate_decay) Q[s, a] = learning_rate * Q[s, a] + (1 - learning_rate) * (reward + discount_rate * np.max(Q[sp])) s = sp # move to next state print("Q: \n",Q) print("Optimal action for each state:\n",np.argmax(Q, axis=1)) ``` ### Exploration Policies * Q-Learning works only if exploration is thorough - not always possible. * Better alternative: explore more interesting routes using a *sigma* probability ### Approximate Q-Learning * TODO ### Ms Pac-Man with Deep Q-Learning ``` env = gym.make('MsPacman-v0') obs = env.reset() obs.shape, env.action_space # action_space = 9 possible joystick actions # observations = atari screenshots as 3D NumPy arrays mspacman_color = np.array([210, 164, 74]).mean() # crop image, shrink to 88x80 pixels, convert to grayscale, improve contrast def preprocess_observation(obs): img = obs[1:176:2, ::2] # crop and downsize img = img.mean(axis=2) # to greyscale img[img==mspacman_color] = 0 # improve contrast img = (img - 128) / 128 - 1 # normalize from -1. to 1. return img.reshape(88, 80, 1) ``` Ms PacMan Observation | Deep-Q net - | - ![observation](pics/mspacman-before-after.png) | ![alt](pics/mspacman-deepq.png) ``` # Create DQN # 3 convo layers, then 2 FC layers including output layer from tensorflow.contrib.layers import convolution2d, fully_connected input_height = 88 input_width = 80 input_channels = 1 conv_n_maps = [32, 64, 64] conv_kernel_sizes = [(8,8), (4,4), (3,3)] conv_strides = [4, 2, 1] conv_paddings = ["SAME"]*3 conv_activation = [tf.nn.relu]*3 n_hidden_in = 64 * 11 * 10 # conv3 has 64 maps of 11x10 each n_hidden = 512 hidden_activation = tf.nn.relu n_outputs = env.action_space.n # 9 discrete actions are available initializer = tf.contrib.layers.variance_scaling_initializer() # training will need ***TWO*** DQNs: # one to train the actor # another to learn from trials & errors (critic) # q_network is our net builder. 
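As a quick check of the preprocessing step (an addition, not part of the original notes), we can run `preprocess_observation` on one raw frame and inspect the shapes and the value range produced by the normalization:

```
# Quick check of the preprocessing: raw Atari frames are 210x160 RGB,
# the preprocessed state should be 88x80x1 greyscale.
obs = env.reset()
state = preprocess_observation(obs)
print("raw observation shape:", obs.shape)        # (210, 160, 3)
print("preprocessed state shape:", state.shape)   # (88, 80, 1)
print("value range:", state.min(), state.max())   # result of the (img - 128) / 128 - 1 normalization
```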
def q_network(X_state, scope): prev_layer = X_state conv_layers = [] with tf.variable_scope(scope) as scope: for n_maps, kernel_size, stride, padding, activation in zip( conv_n_maps, conv_kernel_sizes, conv_strides, conv_paddings, conv_activation): prev_layer = convolution2d( prev_layer, num_outputs=n_maps, kernel_size=kernel_size, stride=stride, padding=padding, activation_fn=activation, weights_initializer=initializer) conv_layers.append(prev_layer) last_conv_layer_flat = tf.reshape( prev_layer, shape=[-1, n_hidden_in]) hidden = fully_connected( last_conv_layer_flat, n_hidden, activation_fn=hidden_activation, weights_initializer=initializer) outputs = fully_connected( hidden, n_outputs, activation_fn=None, weights_initializer=initializer) trainable_vars = tf.get_collection( tf.GraphKeys.TRAINABLE_VARIABLES, scope=scope.name) trainable_vars_by_name = {var.name[len(scope.name):]: var for var in trainable_vars} return outputs, trainable_vars_by_name # create input placeholders & two DQNs X_state = tf.placeholder( tf.float32, shape=[None, input_height, input_width, input_channels]) actor_q_values, actor_vars = q_network(X_state, scope="q_networks/actor") critic_q_values, critic_vars = q_network(X_state, scope="q_networks/critic") copy_ops = [actor_var.assign(critic_vars[var_name]) for var_name, actor_var in actor_vars.items()] # op to copy all trainable vars of critic DQN to actor DQN... # use tf.group() to group all assignment ops together copy_critic_to_actor = tf.group(*copy_ops) # Critic DQN learns by matching Q-Value predictions # to actor's Q-Value estimations during game play # Actor will use a "replay memory" (5 tuples): # state, action, next-state, reward, (0=over/1=continue) # use normal supervised training ops # occasionally copy critic DQN to actor DQN # DQN normally returns one Q-Value for every poss. action # only need Q-Value of action actually chosen # So, convert action to one-hot vector [0...1...0], multiple by Q-values # then sum over 1st axis. X_action = tf.placeholder( tf.int32, shape=[None]) q_value = tf.reduce_sum( critic_q_values * tf.one_hot(X_action, n_outputs), axis=1, keep_dims=True) # training setup tf.reset_default_graph() y = tf.placeholder( tf.float32, shape=[None, 1]) cost = tf.reduce_mean( tf.square(y - q_value)) # non-trainable. 
minimize() op will manage incrementing it global_step = tf.Variable( 0, trainable=False, name='global_step') optimizer = tf.train.AdamOptimizer(learning_rate) training_op = optimizer.minimize(cost, global_step=global_step) init = tf.global_variables_initializer() saver = tf.train.Saver() # use a deque list to build the replay memory from collections import deque replay_memory_size = 10000 replay_memory = deque( [], maxlen=replay_memory_size) def sample_memories(batch_size): indices = rnd.permutation( len(replay_memory))[:batch_size] cols = [[], [], [], [], []] # state, action, reward, next_state, continue for idx in indices: memory = replay_memory[idx] for col, value in zip(cols, memory): col.append(value) cols = [np.array(col) for col in cols] return (cols[0], cols[1], cols[2].reshape(-1, 1), cols[3], cols[4].reshape(-1, 1)) # create an actor # use epsilon-greedy policy # gradually decrease epsilon from 1.0 to 0.05 across 50K training steps eps_min = 0.05 eps_max = 1.0 eps_decay_steps = 50000 def epsilon_greedy(q_values, step): epsilon = max(eps_min, eps_max - (eps_max-eps_min) * step/eps_decay_steps) if rnd.rand() < epsilon: return rnd.randint(n_outputs) # random action else: return np.argmax(q_values) # optimal action # training setup: the variables n_steps = 100000 # total number of training steps training_start = 1000 # start training after 1,000 game iterations training_interval = 3 # run a training step every 3 game iterations save_steps = 50 # save the model every 50 training steps copy_steps = 25 # copy the critic to the actor every 25 training steps discount_rate = 0.95 skip_start = 90 # skip the start of every game (it's just waiting time) batch_size = 50 iteration = 0 # game iterations checkpoint_path = "./my_dqn.ckpt" done = True # env needs to be reset # let's get busy import os with tf.Session() as sess: # restore models if checkpoint file exists if os.path.isfile(checkpoint_path): saver.restore(sess, checkpoint_path) # otherwise normally initialize variables else: init.run() while True: step = global_step.eval() if step >= n_steps: break # iteration = total number of game steps from beginning iteration += 1 if done: # game over, start again obs = env.reset() for skip in range(skip_start): # skip the start of each game obs, reward, done, info = env.step(0) state = preprocess_observation(obs) # Actor evaluates what to do q_values = actor_q_values.eval(feed_dict={X_state: [state]}) action = epsilon_greedy(q_values, step) # Actor plays obs, reward, done, info = env.step(action) next_state = preprocess_observation(obs) # Let's memorize what just happened replay_memory.append((state, action, reward, next_state, 1.0 - done)) state = next_state if iteration < training_start or iteration % training_interval != 0: continue # Critic learns X_state_val, X_action_val, rewards, X_next_state_val, continues = ( sample_memories(batch_size)) next_q_values = actor_q_values.eval( feed_dict={X_state: X_next_state_val}) max_next_q_values = np.max( next_q_values, axis=1, keepdims=True) y_val = rewards + continues * discount_rate * max_next_q_values training_op.run( feed_dict={X_state: X_state_val, X_action: X_action_val, y: y_val}) # Regularly copy critic to actor if step % copy_steps == 0: copy_critic_to_actor.run() # And save regularly if step % save_steps == 0: saver.save(sess, checkpoint_path) print("\n",np.average(y_val)) ```
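Once training has produced a checkpoint, the trained actor DQN can be evaluated greedily, i.e. without the ε-exploration used during training. The cell below is only a minimal sketch, assuming the graph (`X_state`, `actor_q_values`), `saver`, `checkpoint_path`, `preprocess_observation` and `env` defined above are still live in the session:

```
# Sketch: play a few episodes greedily with the trained actor DQN.
# Assumes the graph, saver, checkpoint_path, preprocess_observation and env
# from the cells above are still defined.
n_test_episodes = 5

with tf.Session() as sess:
    saver.restore(sess, checkpoint_path)           # load the trained weights
    for episode in range(n_test_episodes):
        obs = env.reset()
        done = False
        total_reward = 0.0
        while not done:
            state = preprocess_observation(obs)
            q_values = actor_q_values.eval(feed_dict={X_state: [state]})
            action = np.argmax(q_values)           # greedy action, no exploration
            obs, reward, done, info = env.step(action)
            total_reward += reward
        print("Episode", episode, "reward:", total_reward)
```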
# SQLAlchemy Homework - Surfs Up! ### Before You Begin 1. Create a new repository for this project called `sqlalchemy-challenge`. **Do not add this homework to an existing repository**. 2. Clone the new repository to your computer. 3. Add your Jupyter notebook and `app.py` to this folder. These will be the main scripts to run for analysis. 4. Push the above changes to GitHub or GitLab. ![surfs-up.png](Images/surfs-up.png) Congratulations! You've decided to treat yourself to a long holiday vacation in Honolulu, Hawaii! To help with your trip planning, you need to do some climate analysis on the area. The following outlines what you need to do. ## Step 1 - Climate Analysis and Exploration To begin, use Python and SQLAlchemy to do basic climate analysis and data exploration of your climate database. All of the following analysis should be completed using SQLAlchemy ORM queries, Pandas, and Matplotlib. * Use the provided [starter notebook](climate_starter.ipynb) and [hawaii.sqlite](Resources/hawaii.sqlite) files to complete your climate analysis and data exploration. * Choose a start date and end date for your trip. Make sure that your vacation range is approximately 3-15 days total. * Use SQLAlchemy `create_engine` to connect to your sqlite database. * Use SQLAlchemy `automap_base()` to reflect your tables into classes and save a reference to those classes called `Station` and `Measurement`. ### Precipitation Analysis * Design a query to retrieve the last 12 months of precipitation data. * Select only the `date` and `prcp` values. * Load the query results into a Pandas DataFrame and set the index to the date column. * Sort the DataFrame values by `date`. * Plot the results using the DataFrame `plot` method. ![precipitation](Images/precipitation.png) * Use Pandas to print the summary statistics for the precipitation data. ### Station Analysis * Design a query to calculate the total number of stations. * Design a query to find the most active stations. * List the stations and observation counts in descending order. * Which station has the highest number of observations? * Hint: You will need to use a function such as `func.min`, `func.max`, `func.avg`, and `func.count` in your queries. * Design a query to retrieve the last 12 months of temperature observation data (TOBS). * Filter by the station with the highest number of observations. * Plot the results as a histogram with `bins=12`. ![station-histogram](Images/station-histogram.png) - - - ## Step 2 - Climate App Now that you have completed your initial analysis, design a Flask API based on the queries that you have just developed. * Use Flask to create your routes. ### Routes * `/` * Home page. * List all routes that are available. * `/api/v1.0/precipitation` * Convert the query results to a dictionary using `date` as the key and `prcp` as the value. * Return the JSON representation of your dictionary. * `/api/v1.0/stations` * Return a JSON list of stations from the dataset. * `/api/v1.0/tobs` * Query the dates and temperature observations of the most active station for the last year of data. * Return a JSON list of temperature observations (TOBS) for the previous year. * `/api/v1.0/<start>` and `/api/v1.0/<start>/<end>` * Return a JSON list of the minimum temperature, the average temperature, and the max temperature for a given start or start-end range. * When given the start only, calculate `TMIN`, `TAVG`, and `TMAX` for all dates greater than and equal to the start date. 
* When given the start and the end date, calculate the `TMIN`, `TAVG`, and `TMAX` for dates between the start and end date inclusive. ## Hints * You will need to join the station and measurement tables for some of the queries. * Use Flask `jsonify` to convert your API data into a valid JSON response object. - - - ``` %matplotlib inline from matplotlib import style style.use('fivethirtyeight') import matplotlib.pyplot as plt import numpy as np import pandas as pd import datetime as dt import seaborn as sns from scipy.stats import linregress from sklearn import datasets ``` # Reflect Tables into SQLAlchemy ORM ### Precipitation Analysis * Design a query to retrieve the last 12 months of precipitation data. * Select only the `date` and `prcp` values. * Load the query results into a Pandas DataFrame and set the index to the date column. * Sort the DataFrame values by `date`. * Plot the results using the DataFrame `plot` method. *![precipitation](Images/precipitation.png) * Use Pandas to print the summary statistics for the precipitation data. ``` # Python SQL toolkit and Object Relational Mapper import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, func, inspect engine = create_engine("sqlite:///Resources/hawaii.sqlite") #Base.metadata.create_all(engine) inspector = inspect(engine) inspector.get_table_names() # reflect an existing database into a new model Base = automap_base() # reflect the tables Base.prepare(engine,reflect= True) # Reflect Database into ORM class #Base.classes.measurement # Create our session (link) from Python to the DB session = Session(bind=engine) session = Session(engine) # We can view all of the classes that automap found Base.classes.keys() # Save references to each table Measurement = Base.classes.measurement Station = Base.classes.station engine.execute('Select * from measurement').fetchall() # Get columns of 'measurement' table columns = inspector.get_columns('measurement') for c in columns: print(c) # A very odd way to get all column values if they are made by tuples with keys and values, it's more straightforward # and sensible to just do columns = inspector.get_columns('measurement') the a for loop: for c in columns: print(c) columns = inspector.get_columns('measurement') for c in columns: print(c.keys()) for c in columns: print(c.values()) ``` # Exploratory Climate Analysis ``` # Design a query to retrieve the last 12 months of precipitation data and plot the results # Design a query to retrieve the last 12 months of precipitation data. max_date = session.query(func.max(Measurement.date)).all()[0][0] # Select only the date and prcp values. #datetime.datetime.strptime(date_time_str, '%Y-%m-%d %H:%M:%S.%f') import datetime print(max_date) print(type(max_date)) # Calculate the date 1 year ago from the last data point in the database min_date = datetime.datetime.strptime(max_date,'%Y-%m-%d') - datetime.timedelta(days = 365) print(min_date) print(min_date.year, min_date.month, min_date.day) # Perform a query to retrieve the data and precipitation scores results = session.query(Measurement.prcp, Measurement.date).filter(Measurement.date >= min_date).all() results # Load the query results into a Pandas DataFrame and set the index to the date column. prcp_anal_df = pd.DataFrame(results, columns = ['prcp','date']).set_index('date') # Sort the DataFrame values by date. 
prcp_anal_df.sort_values(by=['date'], inplace=True) prcp_anal_df # Create Plot(s) prcp_anal_df.plot(rot = 90) plt.xlabel('Date') plt.ylabel('Precipitation (inches)') plt.title('Precipitation over One Year in Hawaii') plt.savefig("histo_prcp_date.png") plt.show() sns.set() plot1 = prcp_anal_df.plot(figsize = (10, 5)) fig = plot1.get_figure() plt.title('Precipitation in Hawaii') plt.xlabel('Date') plt.ylabel('Precipitation') plt.legend(["Precipitation"],loc="best") plt.xticks(rotation=45) plt.tight_layout() plt.savefig("Precipitation in Hawaii_bar.png") plt.show() prcp_anal_df.describe() # I wanted a range of precipitation amounts for plotting purposes...the code on line 3 and 4 and 5 didn't work ## prcp_anal.max_prcp = session.query(func.max(Measurement.prcp.filter(Measurement.date >= '2016-08-23' ))).\ ## order_by(func.max(Items.UnitPrice * Items.Quantity).desc()).all() ## prcp_anal.max_prcp prcp_anal_max_prcp = session.query(Measurement.prcp, func.max(Measurement.prcp)).\ filter(Measurement.date >= '2016-08-23').\ group_by(Measurement.date).\ order_by(func.max(Measurement.prcp).asc()).all() prcp_anal_max_prcp # I initially did the following in a cell below. Again, I wanted a range of prcp values for the year in our DataFrame # so here I got the min but realized both the min and the max, or both queries are useless to me here unless I were # use plt.ylim in my plots, which I don't, I just allow the DF to supply its intrinsic values # and both give identical results. I will leave it here in thes assignment just to show my thought process # prcp_anal_min_prcp = session.query(Measurement.prcp, func.min(Measurement.prcp)).\ # filter(Measurement.date > '2016-08-23').\ # group_by(Measurement.date).\ # order_by(func.min(Measurement.prcp).asc()).all() # prcp_anal_min_prcp ``` ***STATION ANALYSIS***.\ 1) Design a query to calculate the total number of stations.\ 2) Design a query to find the most active stations.\ 3) List the stations and observation counts in descending order.\ 4) Which station has the highest number of observations?.\ Hint: You will need to use a function such as func.min, func.max, func.avg, and func.count in your queries..\ 5) Design a query to retrieve the last 12 months of temperature observation data (TOBS)..\ 6) Filter by the station with the highest number of observations..\ 7) Plot the results as a histogram with bins=12. ``` Station = Base.classes.station session = Session(engine) # Getting column values from each table, here 'station' columns = inspector.get_columns('station') for c in columns: print(c) # Get columns of 'measurement' table columns = inspector.get_columns('measurement') for c in columns: print(c) engine.execute('Select * from station').fetchall() # Design a query to show how many stations are available in this dataset? session.query(Station.station).count() # What are the most active stations? (i.e. what stations have the most rows)? # List the stations and the counts in descending order. # List the stations and the counts in descending order. 
Think about somehow using this from extra activity Active_Stations = session.query(Station.station ,func.count(Measurement.tobs)).filter(Station.station == Measurement.station).\ group_by(Station.station).order_by(func.count(Measurement.tobs).desc()).all() print(f"The most active station {Active_Stations[0][0]} has {Active_Stations[0][1]} observations!") Active_Stations # Using the station id from the previous query, calculate the lowest temperature recorded, # highest temperature recorded, and average temperature of the most active station? Station_Name = session.query(Station.name).filter(Station.station == Active_Stations[0][0]).all() print(Station_Name) Temp_Stats = session.query(func.min(Measurement.tobs),func.max(Measurement.tobs),func.avg(Measurement.tobs)).\ filter(Station.station == Active_Stations[0][0]).all() print(Temp_Stats) # Choose the station with the highest number of temperature observations. Station_Name = session.query(Station.name).filter(Station.station == Active_Stations[0][0]).all() Station_Name # Query the last 12 months of temperature observation data for this station results_WAHIAWA = session.query(Measurement.date,Measurement.tobs).filter(Measurement.date > min_date).\ filter(Station.station == Active_Stations[0][0]).all() results_WAHIAWA # Make a DataFrame from the query results above showing dates and temp observation at the most active station results_WAHIAWA_df = pd.DataFrame(results_WAHIAWA) results_WAHIAWA_df # Plot the results as a histogram sns.set() plt.figure(figsize=(10,5)) plt.hist(results_WAHIAWA_df['tobs'],bins=12,color='magenta') plt.xlabel('Temperature',weight='bold') plt.ylabel('Frequency',weight='bold') plt.title('Station Analysis',weight='bold') plt.legend(["Temperature Observation"],loc="best") plt.savefig("Station_Analysis_hist.png") plt.show() ``` ## Bonus Challenge Assignment ``` # This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d' # and return the minimum, average, and maximum temperatures for that range of dates def calc_temps(start_date, end_date): """TMIN, TAVG, and TMAX for a list of dates. Args: start_date (string): A date string in the format %Y-%m-%d end_date (string): A date string in the format %Y-%m-%d Returns: TMIN, TAVE, and TMAX """ return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\ filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all() # function usage example print(calc_temps('2012-02-28', '2012-03-05')) # Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax calc_temps('2017-06-22', '2017-07-05') # for your trip using the previous year's data for those same dates. (calc_temps('2016-06-22', '2016-07-05')) # Plot the results from your previous query as a bar chart. # Use "Trip Avg Temp" as your Title # Use the average temperature for the y value # Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr) # Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates. # Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation # Create a query that will calculate the daily normals # (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day) def daily_normals(date): """Daily Normals. 
Args: date (str): A date string in the format '%m-%d' Returns: A list of tuples containing the daily normals, tmin, tavg, and tmax """ sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)] return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all() daily_normals("01-01") # calculate the daily normals for your trip # push each tuple of calculations into a list called `normals` # Set the start and end date of the trip # Use the start and end date to create a range of dates # Stip off the year and save a list of %m-%d strings # Loop through the list of %m-%d strings and calculate the normals for each date # Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index # Plot the daily normals as an area plot with `stacked=False` ``` ## Step 2 - Climate App Now that you have completed your initial analysis, design a Flask API based on the queries that you have just developed. * Use Flask to create your routes. ### Routes * `/` * Home page. * List all routes that are available. * `/api/v1.0/precipitation` * Convert the query results to a dictionary using `date` as the key and `prcp` as the value. * Return the JSON representation of your dictionary. * `/api/v1.0/stations` * Return a JSON list of stations from the dataset. * `/api/v1.0/tobs` * Query the dates and temperature observations of the most active station for the last year of data. * Return a JSON list of temperature observations (TOBS) for the previous year. * `/api/v1.0/<start>` and `/api/v1.0/<start>/<end>` * Return a JSON list of the minimum temperature, the average temperature, and the max temperature for a given start or start-end range. * When given the start only, calculate `TMIN`, `TAVG`, and `TMAX` for all dates greater than and equal to the start date. * When given the start and the end date, calculate the `TMIN`, `TAVG`, and `TMAX` for dates between the start and end date inclusive. ## Hints * You will need to join the station and measurement tables for some of the queries. * Use Flask `jsonify` to convert your API data into a valid JSON response object. 
- - - ``` import numpy as np import datetime as dt from datetime import timedelta, datetime import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, func, distinct, text, desc from flask import Flask, jsonify ################################################# # Database Setup ################################################# #engine = create_engine("sqlite:///Resources/hawaii.sqlite") engine = create_engine("sqlite:///Resources/hawaii.sqlite?check_same_thread=False") # reflect an existing database into a new model Base = automap_base() # reflect the tables Base.prepare(engine, reflect=True) # Save reference to the table Measurement = Base.classes.measurement Station = Base.classes.station ################################################# # Flask Setup ################################################# app = Flask(__name__) ################################################# # Flask Routes ################################################# @app.route("/") def welcome(): """List all available api routes.""" return ( f"Available Routes:<br/>" f"/api/v1.0/precipitation<br/>" f"/api/v1.0/stations<br/>" f"/api/v1.0/tobs<br/>" f"/api/v1.0/<br/>" f"/api/v1.0/" ) @app.route("/api/v1.0/precipitation") def precipitation(): # Create our session (link) from Python to the DB session = Session(engine) """Return a list of all precipitation data""" # Query Precipitation data annual_rainfall = session.query(Measurement.date, Measurement.prcp).order_by(Measurement.date).all() session.close() # Convert list of tuples into normal list all_rain = dict(annual_rainfall) return jsonify(all_rain) if __name__ == '__main__': app.run(debug=True) ```
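The cell above only implements the home and precipitation routes. Below is a minimal sketch of two of the remaining routes described earlier (`/api/v1.0/stations` and `/api/v1.0/<start>[/<end>]`), reusing the `engine`, `Station`, `Measurement` and `func` objects already defined. Treat it as one possible implementation rather than the reference solution; in `app.py` these functions would sit above the `if __name__ == '__main__':` guard.

```
# Sketch of the /stations and /<start>[/<end>] routes described in the assignment.
# Assumes Session, engine, Station, Measurement, func and jsonify from the cell above.

@app.route("/api/v1.0/stations")
def stations():
    session = Session(engine)
    results = session.query(Station.station).all()
    session.close()
    # flatten the list of 1-tuples before returning JSON
    return jsonify([station for (station,) in results])


@app.route("/api/v1.0/<start>")
@app.route("/api/v1.0/<start>/<end>")
def temp_stats(start, end=None):
    session = Session(engine)
    sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
    query = session.query(*sel).filter(Measurement.date >= start)
    if end is not None:
        query = query.filter(Measurement.date <= end)
    tmin, tavg, tmax = query.all()[0]
    session.close()
    return jsonify({"TMIN": tmin, "TAVG": tavg, "TMAX": tmax})
```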
###### Text provided under a Creative Commons Attribution license, CC-BY. Code under MIT license. (c)2014 Lorena A. Barba, Olivier Mesnard. Thanks: NSF for support via CAREER award #1149784. ##### Version 0.2 -- February 2014 # Doublet Welcome to the third lesson of *AeroPython*! We created some very interesting potential flows in lessons 1 and 2, with our [Source & Sink](01_Lesson01_sourceSink.ipynb) notebook, and our [Source & Sink in a Freestream](02_Lesson02_sourceSinkFreestream.ipynb) notebook. Think about the Source & Sink again, and now imagine that you are looking at this flow pattern from very far away. The streamlines that are between the source and the sink will be very short, from this vantage point. And the other streamlines will start looking like two groups of circles, tangent at the origin. If you look from far enough away, the distance between source and sink approaches zero, and the pattern you see is called a *doublet*. Let's see what this looks like. First, load our favorite libraries. ``` import math import numpy from matplotlib import pyplot # embed figures into the notebook %matplotlib inline ``` In the previous notebook, we saw that a source-sink pair in a uniform flow can be used to represent the streamlines around a particular shape, named a Rankine oval. In this notebook, we will turn that source-sink pair into a doublet. First, consider a source of strength $\sigma$ at $\left(-\frac{l}{2},0\right)$ and a sink of opposite strength located at $\left(\frac{l}{2},0\right)$. Here is a sketch to help you visualize the situation: <center><img src="resources/doubletSketch1.png"><center> The stream-function associated to the source-sink pair, evaluated at point $\text{P}\left(x,y\right)$, is $$\psi\left(x,y\right) = \frac{\sigma}{2\pi}\left(\theta_1-\theta_2\right) = -\frac{\sigma}{2\pi}\Delta\theta$$ Let the distance $l$ between the two singularities approach zero while the strength magnitude is increasing so that the product $\sigma l$ remains constant. In the limit, this flow pattern is a *doublet* and we define its strength by $\kappa = \sigma l$. The stream-function of a doublet, evaluated at point $\text{P}\left(x,y\right)$, is given by $$\psi\left(x,y\right) = \lim \limits_{l \to 0} \left(-\frac{\sigma}{2\pi}d\theta\right) \quad \text{and} \quad \sigma l = \text{constant}$$ <center><img src="resources/doubletSketch2.png"></center> Considering the case where $d\theta$ is infinitesimal, we deduce from the figure above that $$a = l\sin\theta$$ $$b = r-l\cos\theta$$ $$d\theta = \frac{a}{b} = \frac{l\sin\theta}{r-l\cos\theta}$$ so the stream function becomes $$\psi\left(r,\theta\right) = \lim \limits_{l \to 0} \left(-\frac{\sigma l}{2\pi}\frac{\sin\theta}{r-l\cos\theta}\right) \quad \text{and} \quad \sigma l = \text{constant}$$ i.e. $$\psi\left(r,\theta\right) = -\frac{\kappa}{2\pi}\frac{\sin\theta}{r}$$ In Cartesian coordinates, a doublet located at the origin has the stream function $$\psi\left(x,y\right) = -\frac{\kappa}{2\pi}\frac{y}{x^2+y^2}$$ from which we can derive the velocity components $$u\left(x,y\right) = \frac{\partial\psi}{\partial y} = -\frac{\kappa}{2\pi}\frac{x^2-y^2}{\left(x^2+y^2\right)^2}$$ $$v\left(x,y\right) = -\frac{\partial\psi}{\partial x} = -\frac{\kappa}{2\pi}\frac{2xy}{\left(x^2+y^2\right)^2}$$ Now we have done the math, it is time to code and visualize what the streamlines look like. We start by creating a mesh grid. 
``` N = 50 # Number of points in each direction x_start, x_end = -2.0, 2.0 # x-direction boundaries y_start, y_end = -1.0, 1.0 # y-direction boundaries x = numpy.linspace(x_start, x_end, N) # creates a 1D-array for x y = numpy.linspace(y_start, y_end, N) # creates a 1D-array for y X, Y = numpy.meshgrid(x, y) # generates a mesh grid ``` We consider a doublet of strength $\kappa=1.0$ located at the origin. ``` kappa = 1.0 # strength of the doublet x_doublet, y_doublet = 0.0, 0.0 # location of the doublet ``` As seen in the previous notebook, we play smart by defining functions to calculate the stream function and the velocity components that could be re-used if we decide to insert more than one doublet in our domain. ``` def get_velocity_doublet(strength, xd, yd, X, Y): """ Returns the velocity field generated by a doublet. Parameters ---------- strength: float Strength of the doublet. xd: float x-coordinate of the doublet. yd: float y-coordinate of the doublet. X: 2D Numpy array of floats x-coordinate of the mesh points. Y: 2D Numpy array of floats y-coordinate of the mesh points. Returns ------- u: 2D Numpy array of floats x-component of the velocity vector field. v: 2D Numpy array of floats y-component of the velocity vector field. """ u = (- strength / (2 * math.pi) * ((X - xd)**2 - (Y - yd)**2) / ((X - xd)**2 + (Y - yd)**2)**2) v = (- strength / (2 * math.pi) * 2 * (X - xd) * (Y - yd) / ((X - xd)**2 + (Y - yd)**2)**2) return u, v def get_stream_function_doublet(strength, xd, yd, X, Y): """ Returns the stream-function generated by a doublet. Parameters ---------- strength: float Strength of the doublet. xd: float x-coordinate of the doublet. yd: float y-coordinate of the doublet. X: 2D Numpy array of floats x-coordinate of the mesh points. Y: 2D Numpy array of floats y-coordinate of the mesh points. Returns ------- psi: 2D Numpy array of floats The stream-function. """ psi = - strength / (2 * math.pi) * (Y - yd) / ((X - xd)**2 + (Y - yd)**2) return psi ``` Once the functions have been defined, we call them using the parameters of the doublet: its strength `kappa` and its location `x_doublet`, `y_doublet`. ``` # compute the velocity field on the mesh grid u_doublet, v_doublet = get_velocity_doublet(kappa, x_doublet, y_doublet, X, Y) # compute the stream-function on the mesh grid psi_doublet = get_stream_function_doublet(kappa, x_doublet, y_doublet, X, Y) ``` We are ready to do a nice visualization. ``` # plot the streamlines width = 10 height = (y_end - y_start) / (x_end - x_start) * width pyplot.figure(figsize=(width, height)) pyplot.xlabel('x', fontsize=16) pyplot.ylabel('y', fontsize=16) pyplot.xlim(x_start, x_end) pyplot.ylim(y_start, y_end) pyplot.streamplot(X, Y, u_doublet, v_doublet, density=2, linewidth=1, arrowsize=1, arrowstyle='->') pyplot.scatter(x_doublet, y_doublet, color='#CD2305', s=80, marker='o'); ``` Just like we imagined that the streamlines of a source-sink pair would look from very far away. What is this good for, you might ask? It does not look like any streamline pattern that has a practical use in aerodynamics. If that is what you think, you would be wrong! ## Uniform flow past a doublet A doublet alone does not give so much information about how it can be used to represent a practical flow pattern in aerodynamics. But let's use our superposition powers: our doublet in a uniform flow turns out to be a very interesting flow pattern. Let's first define a uniform horizontal flow. 
``` u_inf = 1.0 # freestream speed ``` Remember from our previous lessons that the Cartesian velocity components of a uniform flow in the $x$-direction are given by $u=U_\infty$ and $v=0$. Integrating, we find the stream-function, $\psi = U_\infty y$. So let's calculate velocities and stream function values for all points in our grid. And as we now know, we can calculate them all together with one line of code per array. ``` u_freestream = u_inf * numpy.ones((N, N), dtype=float) v_freestream = numpy.zeros((N, N), dtype=float) psi_freestream = u_inf * Y ``` Below, the stream function of the flow created by superposition of a doublet in a free stream is obtained by simple addition. Like we did before in the [Source & Sink in a Freestream](02_Lesson02_sourceSinkFreestream.ipynb) notebook, we find the *dividing streamline* and plot it separately in red. The plot shows that this pattern can represent the flow around a cylinder with center at the location of the doublet. All the streamlines remaining outside the cylinder originated from the uniform flow. All the streamlines inside the cylinder can be ignored and this area assumed to be a solid object. This will turn out to be more useful than you may think. ``` # superposition of the doublet on the freestream flow u = u_freestream + u_doublet v = v_freestream + v_doublet psi = psi_freestream + psi_doublet # plot the streamlines width = 10 height = (y_end - y_start) / (x_end - x_start) * width pyplot.figure(figsize=(width, height)) pyplot.xlabel('x', fontsize=16) pyplot.ylabel('y', fontsize=16) pyplot.xlim(x_start, x_end) pyplot.ylim(y_start, y_end) pyplot.streamplot(X, Y, u, v, density=2, linewidth=1, arrowsize=1, arrowstyle='->') pyplot.contour(X, Y, psi, levels=[0.], colors='#CD2305', linewidths=2, linestyles='solid') pyplot.scatter(x_doublet, y_doublet, color='#CD2305', s=80, marker='o') # calculate the stagnation points x_stagn1, y_stagn1 = +math.sqrt(kappa / (2 * math.pi * u_inf)), 0.0 x_stagn2, y_stagn2 = -math.sqrt(kappa / (2 * math.pi * u_inf)), 0.0 # display the stagnation points pyplot.scatter([x_stagn1, x_stagn2], [y_stagn1, y_stagn2], color='g', s=80, marker='o'); ``` ##### Challenge question What is the radius of the circular cylinder created when a doublet of strength $\kappa$ is added to a uniform flow $U_\infty$ in the $x$-direction? ##### Challenge task You have the streamfunction of the doublet in cylindrical coordinates above. Add the streamfunction of the free stream in those coordinates, and study it. You will see that $\psi=0$ at $r=a$ for all values of $\theta$. The line $\psi=0$ represents the circular cylinder of radius $a$. Now write the velocity components in cylindrical coordinates, find the speed of the flow at the surface. What does this tell you? ### Bernoulli's equation and the pressure coefficient A very useful measurement of a flow around a body is the *coefficient of pressure* $C_p$. To evaluate the pressure coefficient, we apply *Bernoulli's equation* for ideal flow, which says that along a streamline we can apply the following between two points: $$p_\infty + \frac{1}{2}\rho U_\infty^2 = p + \frac{1}{2}\rho U^2$$ We define the pressure coefficient as the ratio between the pressure difference with the free stream, and the dynamic pressure: $$C_p = \frac{p-p_\infty}{\frac{1}{2}\rho U_\infty^2}$$ i.e., $$C_p = 1 - \left(\frac{U}{U_\infty}\right)^2$$ In an incompressible flow, $C_p=1$ at a stagnation point. Let's plot the pressure coefficient in the whole domain. 
``` # compute the pressure coefficient field cp = 1.0 - (u**2 + v**2) / u_inf**2 # plot the pressure coefficient field width = 10 height = (y_end - y_start) / (x_end - x_start) * width pyplot.figure(figsize=(1.1 * width, height)) pyplot.xlabel('x', fontsize=16) pyplot.ylabel('y', fontsize=16) pyplot.xlim(x_start, x_end) pyplot.ylim(y_start, y_end) contf = pyplot.contourf(X, Y, cp, levels=numpy.linspace(-2.0, 1.0, 100), extend='both') cbar = pyplot.colorbar(contf) cbar.set_label('$C_p$', fontsize=16) cbar.set_ticks([-2.0, -1.0, 0.0, 1.0]) pyplot.scatter(x_doublet, y_doublet, color='#CD2305', s=80, marker='o') pyplot.contour(X,Y,psi, levels=[0.], colors='#CD2305', linewidths=2, linestyles='solid') pyplot.scatter([x_stagn1, x_stagn2], [y_stagn1, y_stagn2], color='g', s=80, marker='o'); ``` ##### Challenge task Show that the pressure coefficient distribution on the surface of the circular cylinder is given by $$C_p = 1-4\sin^2\theta$$ and plot the coefficient of pressure versus the angle. ##### Think Don't you find it a bit fishy that the pressure coefficient (and the surface distribution of pressure) is symmetric about the vertical axis? That means that the pressure in the front of the cylinder is the same as the pressure in the *back* of the cylinder. In turn, this means that the horizontal components of forces are zero. We know that, even at very low Reynolds number (creeping flow), there *is* in fact a drag force. The theory is unable to reflect that experimentally observed fact! This discrepancy is known as *d'Alembert's paradox*. Here's how creeping flow around a cylinder *really* looks like: ``` from IPython.display import YouTubeVideo YouTubeVideo('Ekd8czwELOc') ``` If you look carefully, there is a slight asymmetry in the flow pattern. Can you explain it? What is the consequence of that? Here's a famous visualization of actual flow around a cylinder at a Reynolds number of 1.54. This image was obtained by S. Taneda and it appears in the "Album of Fluid Motion", by Milton Van Dyke. A treasure of a book. <center><img src="resources/Cylinder-Re=1dot54.png"></center> --- ``` from IPython.core.display import HTML def css_styling(filepath): styles = open(filepath, 'r').read() return HTML(styles) css_styling('../styles/custom.css') ```
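For the surface-pressure challenge task, one possible sketch (reusing the `kappa`, `u_inf`, `x_doublet`, `y_doublet` and helper functions defined earlier in this notebook) is to sample the velocity on the cylinder surface $r=R=\sqrt{\kappa/(2\pi U_\infty)}$ and compare the resulting $C_p$ with the analytical $1-4\sin^2\theta$:

```
# Sketch for the challenge task: Cp on the cylinder surface versus theta.
# Reuses kappa, u_inf, x_doublet, y_doublet and get_velocity_doublet from above.
R = math.sqrt(kappa / (2 * math.pi * u_inf))       # cylinder radius
theta = numpy.linspace(0.0, 2 * numpy.pi, 100)
x_surf, y_surf = R * numpy.cos(theta), R * numpy.sin(theta)

u_s, v_s = get_velocity_doublet(kappa, x_doublet, y_doublet, x_surf, y_surf)
u_s = u_s + u_inf                                  # superpose the freestream
cp_surface = 1.0 - (u_s**2 + v_s**2) / u_inf**2

pyplot.figure(figsize=(6, 4))
pyplot.plot(theta, cp_surface, label='numerical')
pyplot.plot(theta, 1.0 - 4.0 * numpy.sin(theta)**2, 'k--', label=r'$1-4\sin^2\theta$')
pyplot.xlabel(r'$\theta$', fontsize=16)
pyplot.ylabel('$C_p$', fontsize=16)
pyplot.legend();
```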
``` import MySQLdb from sklearn.svm import LinearSVC from tensorflow import keras from keras.models import load_model import tensorflow as tf from random import seed import pandas as pd import numpy as np import re from re import sub import os import string import tempfile import pickle import tarfile from unidecode import unidecode import nltk from nltk.corpus import stopwords from keras.callbacks import EarlyStopping, ReduceLROnPlateau path = "." username = "remote_root" password = "Faltan_4Ks" host = "ci-oand-apps-02.hi.inet" db = "alejandro_test_db" scheme = "MARKETv2" query = f"Select * from {scheme}" conn = MySQLdb.connect(host=host, user=username, passwd=password, db=db) try: cursor = conn.cursor() cursor.execute(f"describe {scheme}") columns_tuple = cursor.fetchall() columns = [i[0] for i in columns_tuple] cursor.execute(query) results = cursor.fetchall() except Exception as e: print("Exception occur:", e) finally: conn.close() data = pd.DataFrame(columns=columns, data=results) data.head() def text_to_word_list(text, stem=False, stopw=True): from nltk.stem import SnowballStemmer ''' Data Preprocess handler version 1.1 Pre process and convert texts to a list of words ''' text = unidecode(text) text = str(text) text = text.lower() # Clean the text text = re.sub(r"<u.+>", "", text) # Remove emojis text = re.sub(r"[^A-Za-z0-9^,!?.\/'+]", " ", text) text = re.sub(r",", " ", text) text = re.sub(r"\.", " ", text) text = re.sub(r"!", " ! ", text) text = re.sub(r"\?", " ? ", text) text = re.sub(r"'", " ", text) text = re.sub(r":", " : ", text) text = re.sub(r"\s{2,}", " ", text) text = text.split() if stopw: # Remove stopw stopw = stopwords.words("spanish") stopw.remove("no") text = [word for word in text if word not in stopw and len(word) > 1] if stem: stemmer = SnowballStemmer("spanish") text = [stemmer.stem(word) for word in text] text = " ".join(text) return text def clean_dataset(df): # df["Review.Last.Update.Date.and.Time"] = df["Review.Last.Update.Date.and.Time"].astype('datetime64') df["State"] = True df.loc[df.Pais == "Brasil", "State"] = False df.loc[df.Comentario.isna(), "State"] = False df["Clean_text"] = " " df.loc[df.State, "Clean_text"] = df[df.State == True].Comentario.apply(lambda x: text_to_word_list(x)) df.loc[df.Clean_text.str.len() == 0, "State"] = False df.loc[df.Clean_text.isna(), "State"] = False df.Clean_text = df.Clean_text.str.replace("a{2,}", "a") df.Clean_text = df.Clean_text.str.replace("e{3,}", "e") df.Clean_text = df.Clean_text.str.replace("i{3,}", "i") df.Clean_text = df.Clean_text.str.replace("o{3,}", "o") df.Clean_text = df.Clean_text.str.replace("u{3,}", "u") df.Clean_text = df.Clean_text.str.replace("y{2,}", "y") df.Clean_text= df.Clean_text.str.replace(r"\bapp[s]?\b", " aplicacion ") df.Clean_text= df.Clean_text.str.replace("^ns$", "no sirve") df.Clean_text= df.Clean_text.str.replace("^ns .+", "no se") df.Clean_text= df.Clean_text.str.replace("tlf", "telefono") df.Clean_text= df.Clean_text.str.replace(" si no ", " sino ") df.Clean_text= df.Clean_text.str.replace(" nose ", " no se ") df.Clean_text= df.Clean_text.str.replace("extreno", "estreno") df.Clean_text= df.Clean_text.str.replace("atravez", "a traves") df.Clean_text= df.Clean_text.str.replace("root(\w+)?", "root") df.Clean_text= df.Clean_text.str.replace("(masomenos)|(mas menos)]", "mas_menos") df.Clean_text= df.Clean_text.str.replace("tbn", "tambien") df.Clean_text= df.Clean_text.str.replace("deverian", "deberian") df.Clean_text= df.Clean_text.str.replace("malicima", "mala") return df seed(20) df2 
= clean_dataset(data) df2.sample(20) def load_ml_model(path, model_name, data_type="stopw"): # Open tarfile tar = tarfile.open(mode="r:gz", fileobj=open(os.path.join(path, f"{model_name}_{data_type}.tar.gz"), "rb")) for filename in tar.getnames(): if filename == f"{model_name}.pickle": clf = pickle.loads(tar.extractfile(filename).read()) if filename == "vectorizer.pickle": vectorizer = pickle.loads(tar.extractfile(filename).read()) if filename == "encoder.pickle": encoder = pickle.loads(tar.extractfile(filename).read()) return clf, vectorizer, encoder clf, vectorizer, encoder = load_ml_model(path, "linearSVM") sample = df2.loc[df2.State, "Clean_text"].values # df2.head() sample_vect = vectorizer.transform(sample) categorias = encoder.classes_[clf.predict(sample_vect)] df2.loc[df2.State, "Categorias"] = categorias df2.sample(20) df2["Star.Rating"] = df2["Star.Rating"].astype("int") df2.loc[(df2.State==False)&(df2["Star.Rating"]<3), "Categorias"] = "Valoración negativa" df2.loc[(df2.State==False)&(df2["Star.Rating"]>=3), "Categorias"] = "Valoración positiva" df2.head() df2["Review.Last.Update.Date.and.Time"] = pd.to_datetime(df2["Review.Last.Update.Date.and.Time"]) df2 = df2.drop("tipo", axis=1) df2.tipo_equivalencias.value_counts() df2.tipo_equivalencias = df2.Categorias df2.loc[df2.Categorias.str.contains("Actualización"), "tipo_equivalencias"] = "Actualizaciones" df2.loc[df2.Categorias.str.contains("Error de Reproducción"), "tipo_equivalencias"] = "Error de Reproducción" df2 = df2.rename(columns={"Categorias":"tipo"}) df_good = df2.loc[:,columns] df_good.head() scheme = "MARKETv2" fields = ["%s" for i in range(len(df_good.columns))] query = f"INSERT INTO {scheme} VALUES ({', '.join(fields)})" values = df_good.values.tolist() values = [tuple(x) for x in values] # query = f"Select * from {scheme}" conn = MySQLdb.connect(host=host, user=username, passwd=password, db=db) try: cursor = conn.cursor() cursor.executemany(query, values) conn.commit() except Exception as e: print("Exception occur:", e) finally: conn.close() data.sort_values(by="Review.Last.Update.Date.and.Time", ascending=False).head(10) texto = "Mediocre" # texto = text_to_word_list(texto) print(texto) # v = vectorizer.transform(np.array([texto])) # encoder.inverse_transform(clf.predict(v)) df2.loc[(df2.State)&(df2.Comentario.str.contains("Muy mala aplicacion, ")), "Comentario"] data.to_csv("data_db.csv", index=False) model = Word2VecKeras() model.load(path, filename="word2vec_kerasv2_base.tar.gz") sample = df2.loc[df2.State, "Comentario"].values[:200] df2.loc[df2.State, ["Comentarios", "Categoría"] # preds2.Predictions = "Valoración negativa" preds2 = preds2.rename(columns={"Comments":"Comentarios", "Predictions":"Categorias"}) model.retrain(data=preds.iloc[:200,:]) class Word2VecKeras(object): """ Wrapper class that combines word2vec with keras model in order to build strong text classifier. 
This class is adapted from the Word2VecKeras module that can be found on pypi, to fit the requirenments of our case """ def __init__(self, w2v_model=None): """ Initialize empty classifier """ self.w2v_size = None self.w2v_window = None self.w2v_min_count = None self.w2v_epochs = None self.label_encoder = None self.num_classes = None self.tokenizer = None self.k_max_sequence_len = None self.k_batch_size = None self.k_epochs = None self.k_lstm_neurons = None self.k_hidden_layer_neurons = None self.w2v_model = w2v_model self.k_model = None def train(self, x_train, y_train, corpus, x_test, y_test, w2v_size=300, w2v_window=5, w2v_min_count=1, w2v_epochs=100, k_max_sequence_len=350, k_batch_size=128, k_epochs=20, k_lstm_neurons=128, k_hidden_layer_neurons=(128, 64, 32), verbose=1): """ Train new Word2Vec & Keras model :param x_train: list of sentence for trainning :param y_train: list of categories for trainning :param x_test: list of sentence for testing :param y_test: list of categories for testing :param corpus: text corpus to create vocabulary :param w2v_size: Word2Vec vector size (embeddings dimensions) :param w2v_window: Word2Vec windows size :param w2v_min_count: Word2Vec min word count :param w2v_epochs: Word2Vec epochs number :param k_max_sequence_len: Max sequence length :param k_batch_size: Keras training batch size :param k_epochs: Keras epochs number :param k_lstm_neurons: neurons number for Keras LSTM layer :param k_hidden_layer_neurons: array of keras hidden layers :param verbose: Verbosity """ # Set variables self.w2v_size = w2v_size self.w2v_window = w2v_window self.w2v_min_count = w2v_min_count self.w2v_epochs = w2v_epochs self.k_max_sequence_len = k_max_sequence_len self.k_batch_size = k_batch_size self.k_epochs = k_epochs self.k_lstm_neurons = k_lstm_neurons self.k_hidden_layer_neurons = k_hidden_layer_neurons # split text in tokens # x_train = [gensim.utils.simple_preprocess(text) for text in x_train] # x_test = [gensim.utils.simple_preprocess(text) for text in x_test] corpus = [gensim.utils.simple_preprocess(corpus_text) for corpus_text in corpus] logging.info("Build & train Word2Vec model") self.w2v_model = gensim.models.Word2Vec(min_count=self.w2v_min_count, window=self.w2v_window, size=self.w2v_size, workers=multiprocessing.cpu_count()) self.w2v_model.build_vocab(corpus) self.w2v_model.train(corpus, total_examples=self.w2v_model.corpus_count, epochs=self.w2v_epochs) w2v_words = list(self.w2v_model.wv.vocab) logging.info("Vocabulary size: %i" % len(w2v_words)) logging.info("Word2Vec trained") logging.info("Fit LabelEncoder") self.label_encoder = LabelEncoder() y_train = self.label_encoder.fit_transform(y_train) self.num_classes = len(self.label_encoder.classes_) y_train = tf.keras.utils.to_categorical(y_train, self.num_classes) y_test = self.label_encoder.transform(y_test) y_test = tf.keras.utils.to_categorical(y_test, self.num_classes) logging.info("Fit Tokenizer") self.tokenizer = Tokenizer() self.tokenizer.fit_on_texts(corpus) x_train = tf.keras.preprocessing.sequence.pad_sequences(self.tokenizer.texts_to_sequences(x_train), maxlen=self.k_max_sequence_len, padding="post", truncating="post") x_test = tf.keras.preprocessing.sequence.pad_sequences(self.tokenizer.texts_to_sequences(x_test), maxlen=self.k_max_sequence_len, padding="post", truncating="post") num_words = len(self.tokenizer.word_index) + 1 logging.info("Number of unique words: %i" % num_words) logging.info("Create Embedding matrix") word_index = self.tokenizer.word_index vocab_size = len(word_index) + 1 
embedding_matrix = np.zeros((vocab_size, self.w2v_size)) for word, idx in word_index.items(): if word in w2v_words: embedding_vector = self.w2v_model.wv.get_vector(word) if embedding_vector is not None: embedding_matrix[idx] = self.w2v_model.wv[word] logging.info("Embedding matrix: %s" % str(embedding_matrix.shape)) logging.info("Build Keras model") logging.info('x_train shape: %s' % str(x_train.shape)) logging.info('y_train shape: %s' % str(y_train.shape)) self.k_model = Sequential() self.k_model.add(Embedding(vocab_size, self.w2v_size, weights=[embedding_matrix], input_length=self.k_max_sequence_len, trainable=False, name="w2v_embeddings")) self.k_model.add(Bidirectional(LSTM(self.k_lstm_neurons, dropout=0.5, return_sequences=True), name="Bidirectional_LSTM_1")) self.k_model.add(Bidirectional(LSTM(self.k_lstm_neurons, dropout=0.5), name="Bidirectional_LSTM_2")) for hidden_layer in self.k_hidden_layer_neurons: self.k_model.add(Dense(hidden_layer, activation='relu', name="dense_%s"%hidden_layer)) self.k_model.add(Dropout(0.2)) if self.num_classes > 1: self.k_model.add(Dense(self.num_classes, activation='softmax', name="output_layer")) else: self.k_model.add(Dense(self.num_classes, activation='sigmoid')) self.k_model.compile(loss='categorical_crossentropy' if self.num_classes > 1 else 'binary_crossentropy', optimizer="adam", metrics=['accuracy']) logging.info(self.k_model.summary()) print(tf.keras.utils.plot_model(self.k_model, show_shapes=True, rankdir="LR")) # Callbacks early_stopping = EarlyStopping(monitor='val_accuracy', patience=6, verbose=0, mode='max', restore_best_weights=True) rop = ReduceLROnPlateau(monitor='val_accuracy', factor=0.1, patience=3, verbose=1, min_delta=1e-4, mode='max') callbacks = [early_stopping, rop] logging.info("Fit Keras model") self.history = self.k_model.fit(x_train, y_train, batch_size=self.k_batch_size, epochs=self.k_epochs, callbacks=callbacks, verbose=verbose, validation_data=(x_test, y_test)) logging.info("Done") return self.history def preprocess(self, text): """Not implemented""" pass def retrain(self, data=None, filename="new_data.csv"): """ Method to train incrementally :param filename: CSV file that contains the new data to feed the algorithm. 
This CSV must contains as columns ("Comentarios", "Categorías") """ if data.empty: df = pd.read_csv(filename) else: df = data comments = df.Comentarios tokens = [self.text_to_word_list(text) for text in comments] labels = df.Categorias labels = self.label_encoder.fit_transform(labels) labels = tf.keras.utils.to_categorical(labels, self.num_classes) sequences = keras.preprocessing.sequence.pad_sequences(self.tokenizer.texts_to_sequences(tokens), maxlen=self.k_max_sequence_len, padding="post", truncating="post") early_stopping = EarlyStopping(monitor='val_accuracy', patience=6, verbose=0, mode='max', restore_best_weights=True) rop = ReduceLROnPlateau(monitor='val_accuracy', factor=0.1, patience=3, verbose=1, min_delta=1e-4, mode='max') callbacks = [early_stopping, rop] # logging.info("Fit Keras model") history = self.k_model.fit(sequences, labels, batch_size=self.k_batch_size, epochs=10, callbacks=callbacks, verbose=1) def predict(self, texts: np.array, return_df=False): """ Predict and array of comments :param text: numpy array of shape (n_samples,) :param return_df: Whether return only predictions labels or a dataframe containing sentences and predicted labels """ comments = [self.text_to_word_list(text) for text in texts] sequences = keras.preprocessing.sequence.pad_sequences(self.tokenizer.texts_to_sequences(comments), maxlen=self.k_max_sequence_len, padding="post", truncating="post") confidences = self.k_model.predict(sequences, verbose=1) preds = [self.label_encoder.classes_[np.argmax(c)] for c in confidences] if return_df: results = pd.DataFrame(data={"Comments": texts, "Predictions": preds}) else: results = np.array(preds) return results def evaluate(self, x_test, y_test): """ Evaluate Model with several KPI :param x_test: Text to test :param y_test: labels for text :return: dictionary with KPIs """ result = {} results = [] # Prepare test x_test = [self.text_to_word_list(text) for text in x_test] x_test = keras.preprocessing.sequence.pad_sequences( self.tokenizer.texts_to_sequences(x_test), maxlen=self.k_max_sequence_len, padding="post", truncating="post") # Predict confidences = self.k_model.predict(x_test, verbose=1) y_pred_1d = [] for confidence in confidences: idx = np.argmax(confidence) y_pred_1d.append(self.label_encoder.classes_[idx]) y_pred_bin = [] for i in range(0, len(results)): y_pred_bin.append(1 if y_pred_1d[i] == y_test[i] else 0) # Classification report result["CLASSIFICATION_REPORT"] = classification_report(y_test, y_pred_1d, output_dict=True) result["CLASSIFICATION_REPORT_STR"] = classification_report(y_test, y_pred_1d) # Confusion matrix result["CONFUSION_MATRIX"] = confusion_matrix(y_test, y_pred_1d) # Accuracy result["ACCURACY"] = accuracy_score(y_test, y_pred_1d) return result def save(self, path="word2vec_keras.tar.gz"): """ Save all models in pickles file :param path: path to save """ tokenizer_path = os.path.join(tempfile.gettempdir(), "tokenizer.pkl") label_encoder_path = os.path.join(tempfile.gettempdir(), "label_encoder.pkl") params_path = os.path.join(tempfile.gettempdir(), "params.pkl") keras_path = os.path.join(tempfile.gettempdir(), "model.h5") w2v_path = os.path.join(tempfile.gettempdir(), "model.w2v") # Dump pickle pickle.dump(self.tokenizer, open(tokenizer_path, "wb")) pickle.dump(self.label_encoder, open(label_encoder_path, "wb")) pickle.dump(self.__attributes__(), open(params_path, "wb")) pickle.dump(self.w2v_model, open(w2v_path, "wb")) self.k_model.save(keras_path) # self.w2v_model.save(w2v_path) # Create Tar file tar = tarfile.open(path, 
"w:gz") for name in [tokenizer_path, label_encoder_path, params_path, keras_path, w2v_path]: tar.add(name, arcname=os.path.basename(name)) tar.close() # Remove temp file for name in [tokenizer_path, label_encoder_path, params_path, keras_path, w2v_path]: os.remove(name) def load(self, path, filename="word2vec_keras.tar.gz"): """ Load all attributes from path :param path: tar.gz dump """ # Open tarfile tar = tarfile.open(mode="r:gz", fileobj=open(os.path.join(path, filename), "rb")) # Extract keras model temp_dir = tempfile.gettempdir() tar.extract("model.h5", temp_dir) self.k_model = load_model(os.path.join(temp_dir, "model.h5")) os.remove(os.path.join(temp_dir, "model.h5")) # Iterate over every member for filename in tar.getnames(): if filename == "model.w2v": self.w2v_model = pickle.loads(tar.extractfile(filename).read()) if filename == "tokenizer.pkl": self.tokenizer = pickle.loads(tar.extractfile(filename).read()) if filename == "label_encoder.pkl": self.label_encoder = pickle.loads(tar.extractfile(filename).read()) if filename == "params.pkl": params = pickle.loads(tar.extractfile(filename).read()) for k, v in params.items(): self.__setattr__(k, v) def text_to_word_list(self, text, stem=False, stopw=False): ''' Pre process and convert texts to a list of words ''' text = unidecode(text) text = str(text) text = text.lower() # Clean the text text = re.sub(r"<u.+>", "", text) # Remove emojis text = re.sub(r"[^A-Za-z0-9^,!?.\/'+]", " ", text) text = re.sub(r",", " ", text) text = re.sub(r"\.", " ", text) text = re.sub(r"!", " ! ", text) text = re.sub(r"\?", " ? ", text) text = re.sub(r"'", " ", text) text = re.sub(r":", " : ", text) text = re.sub(r"\s{2,}", " ", text) text = text.split() if stopw: # Remove stopw stopw = stopwords.words("spanish") stopw.remove("no") text = [word for word in text if word not in stopw and len(word) > 1] # if stem: # stemmer = SnowballStemmer("spanish") # text = [stemmer.stem(word) for word in text] # text = " ".join(text) return text def __attributes__(self): """ Attributes to dump :return: dictionary """ return { "w2v_size": self.w2v_size, "w2v_window": self.w2v_window, "w2v_min_count": self.w2v_min_count, "w2v_epochs": self.w2v_epochs, "num_classes": self.num_classes, "k_max_sequence_len": self.k_max_sequence_len, "k_batch_size": self.k_batch_size, "k_epochs": self.k_epochs, "k_lstm_neurons": self.k_lstm_neurons, "k_hidden_layer_neurons": self.k_hidden_layer_neurons, "history": self.history.history } ```
# Maximum Likelihood Estimation (Generic models) This tutorial explains how to quickly implement new maximum likelihood models in `statsmodels`. We give two examples: 1. Probit model for binary dependent variables 2. Negative binomial model for count data The `GenericLikelihoodModel` class eases the process by providing tools such as automatic numeric differentiation and a unified interface to ``scipy`` optimization functions. Using ``statsmodels``, users can fit new MLE models simply by "plugging-in" a log-likelihood function. ## Example 1: Probit model ``` import numpy as np from scipy import stats import statsmodels.api as sm from statsmodels.base.model import GenericLikelihoodModel ``` The ``Spector`` dataset is distributed with ``statsmodels``. You can access a vector of values for the dependent variable (``endog``) and a matrix of regressors (``exog``) like this: ``` data = sm.datasets.spector.load_pandas() exog = data.exog endog = data.endog print(sm.datasets.spector.NOTE) print(data.exog.head()) ``` Them, we add a constant to the matrix of regressors: ``` exog = sm.add_constant(exog, prepend=True) ``` To create your own Likelihood Model, you simply need to overwrite the loglike method. ``` class MyProbit(GenericLikelihoodModel): def loglike(self, params): exog = self.exog endog = self.endog q = 2 * endog - 1 return stats.norm.logcdf(q*np.dot(exog, params)).sum() ``` Estimate the model and print a summary: ``` sm_probit_manual = MyProbit(endog, exog).fit() print(sm_probit_manual.summary()) ``` Compare your Probit implementation to ``statsmodels``' "canned" implementation: ``` sm_probit_canned = sm.Probit(endog, exog).fit() print(sm_probit_canned.params) print(sm_probit_manual.params) print(sm_probit_canned.cov_params()) print(sm_probit_manual.cov_params()) ``` Notice that the ``GenericMaximumLikelihood`` class provides automatic differentiation, so we did not have to provide Hessian or Score functions in order to calculate the covariance estimates. ## Example 2: Negative Binomial Regression for Count Data Consider a negative binomial regression model for count data with log-likelihood (type NB-2) function expressed as: $$ \mathcal{L}(\beta_j; y, \alpha) = \sum_{i=1}^n y_i ln \left ( \frac{\alpha exp(X_i'\beta)}{1+\alpha exp(X_i'\beta)} \right ) - \frac{1}{\alpha} ln(1+\alpha exp(X_i'\beta)) + ln \Gamma (y_i + 1/\alpha) - ln \Gamma (y_i+1) - ln \Gamma (1/\alpha) $$ with a matrix of regressors $X$, a vector of coefficients $\beta$, and the negative binomial heterogeneity parameter $\alpha$. 
Using the ``nbinom`` distribution from ``scipy``, we can write this likelihood simply as: ``` import numpy as np from scipy.stats import nbinom def _ll_nb2(y, X, beta, alph): mu = np.exp(np.dot(X, beta)) size = 1/alph prob = size/(size+mu) ll = nbinom.logpmf(y, size, prob) return ll ``` ### New Model Class We create a new model class which inherits from ``GenericLikelihoodModel``: ``` from statsmodels.base.model import GenericLikelihoodModel class NBin(GenericLikelihoodModel): def __init__(self, endog, exog, **kwds): super(NBin, self).__init__(endog, exog, **kwds) def nloglikeobs(self, params): alph = params[-1] beta = params[:-1] ll = _ll_nb2(self.endog, self.exog, beta, alph) return -ll def fit(self, start_params=None, maxiter=10000, maxfun=5000, **kwds): # we have one additional parameter and we need to add it for summary self.exog_names.append('alpha') if start_params == None: # Reasonable starting values start_params = np.append(np.zeros(self.exog.shape[1]), .5) # intercept start_params[-2] = np.log(self.endog.mean()) return super(NBin, self).fit(start_params=start_params, maxiter=maxiter, maxfun=maxfun, **kwds) ``` Two important things to notice: + ``nloglikeobs``: This function should return one evaluation of the negative log-likelihood function per observation in your dataset (i.e. rows of the endog/X matrix). + ``start_params``: A one-dimensional array of starting values needs to be provided. The size of this array determines the number of parameters that will be used in optimization. That's it! You're done! ### Usage Example The [Medpar](https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/doc/COUNT/medpar.html) dataset is hosted in CSV format at the [Rdatasets repository](https://raw.githubusercontent.com/vincentarelbundock/Rdatasets). We use the ``read_csv`` function from the [Pandas library](https://pandas.pydata.org) to load the data in memory. We then print the first few columns: ``` import statsmodels.api as sm medpar = sm.datasets.get_rdataset("medpar", "COUNT", cache=True).data medpar.head() ``` The model we are interested in has a vector of non-negative integers as dependent variable (``los``), and 5 regressors: ``Intercept``, ``type2``, ``type3``, ``hmo``, ``white``. For estimation, we need to create two variables to hold our regressors and the outcome variable. These can be ndarrays or pandas objects. ``` y = medpar.los X = medpar[["type2", "type3", "hmo", "white"]].copy() X["constant"] = 1 ``` Then, we fit the model and extract some information: ``` mod = NBin(y, X) res = mod.fit() ``` Extract parameter estimates, standard errors, p-values, AIC, etc.: ``` print('Parameters: ', res.params) print('Standard errors: ', res.bse) print('P-values: ', res.pvalues) print('AIC: ', res.aic) ``` As usual, you can obtain a full list of available information by typing ``dir(res)``. We can also look at the summary of the estimation results. ``` print(res.summary()) ``` ### Testing We can check the results by using the statsmodels implementation of the Negative Binomial model, which uses the analytic score function and Hessian. ``` res_nbin = sm.NegativeBinomial(y, X).fit(disp=0) print(res_nbin.summary()) print(res_nbin.params) print(res_nbin.bse) ``` Or we could compare them to results obtained using the MASS implementation for R: url = 'https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/csv/COUNT/medpar.csv' medpar = read.csv(url) f = los~factor(type)+hmo+white library(MASS) mod = glm.nb(f, medpar) coef(summary(mod)) Estimate Std. 
```
              Estimate     Std. Error   z value     Pr(>|z|)
(Intercept)    2.31027893  0.06744676   34.253370   3.885556e-257
factor(type)2  0.22124898  0.05045746    4.384861   1.160597e-05
factor(type)3  0.70615882  0.07599849    9.291748   1.517751e-20
hmo           -0.06795522  0.05321375   -1.277024   2.015939e-01
white         -0.12906544  0.06836272   -1.887951   5.903257e-02
``` ### Numerical precision The ``statsmodels`` generic MLE and ``R`` parameter estimates agree up to the fourth decimal. The standard errors, however, agree only up to the second decimal. This discrepancy is the result of imprecision in our Hessian numerical estimates. In the current context, the difference between ``MASS`` and ``statsmodels`` standard error estimates is substantively irrelevant, but it highlights the fact that users who need very precise estimates may not always want to rely on default settings when using numerical derivatives. In such cases, it is better to use analytical derivatives with the ``LikelihoodModel`` class.
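To illustrate that last point, here is a hedged sketch of ours (not part of the original tutorial): the Probit model from Example 1 with an analytic gradient supplied by overriding the ``score`` method, which the fit routine can then use in place of a numerically approximated gradient during optimization.

```
# Hedged sketch: Probit with an analytic score (gradient of the log-likelihood).
import numpy as np
from scipy import stats
from statsmodels.base.model import GenericLikelihoodModel

class MyProbitAnalytic(GenericLikelihoodModel):
    def loglike(self, params):
        q = 2 * self.endog - 1
        return stats.norm.logcdf(q * np.dot(self.exog, params)).sum()

    def score(self, params):
        # d loglike / d beta = sum_i q_i * phi(q_i x_i'b) / Phi(q_i x_i'b) * x_i
        q = 2 * self.endog - 1
        xb = np.dot(self.exog, params)
        lam = q * stats.norm.pdf(q * xb) / stats.norm.cdf(q * xb)
        return np.dot(lam, self.exog)
```

Fitting ``MyProbitAnalytic(endog, exog).fit()`` should reproduce the earlier Probit estimates while relying less on numerical differentiation; for fully analytic covariances one would also override ``hessian``.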
# Connecting MLOS to a C++ application This notebook walks through connecting MLOS to a C++ application within a docker container. We will start a docker container, and run an MLOS Agent within it. The MLOS Agent will start the actual application, and communicate with it via a shared memory channel. In this example, the MLOS Agent controls the execution of the workloads on the application, and we will later connect to the agent to optimize the configuration of our application. The application is a "SmartCache" similar to the one in the SmartCacheOptimization notebook, though with some more parameters to tune. The source for this example is in the `source/Examples/SmartCache` folder. ## Building the application To build and run the necessary components for this example you need to create and run a docker image. To that end, open a separate terminal and go to the MLOS main folder. Within that folder, run the following commands: 1. [Build the Docker image](https://microsoft.github.io/MLOS/documentation/01-Prerequisites/#build-the-docker-image) using the [`Dockerfile`](../../Dockerfile#mlos-github-tree-view) at the root of the repository. ```shell docker build --build-arg=UbuntuVersion=20.04 -t mlos/build:ubuntu-20.04 . ``` 2. [Run the Docker image](https://microsoft.github.io/MLOS/documentation/02-Build/#create-a-new-container-instance) you just built. ```shell docker run -it -v $PWD:/src/MLOS -p 127.0.0.1:50051:50051/tcp \ --name mlos-build mlos/build:ubuntu-20.04 ``` This will open a shell inside the docker container. We're also exposing port 50051 on the docker container to port 50051 of our host machine. This will allow us later to connect to the optimizer that runs inside the docker container. 3. Inside the container, [build the compiled software](https://microsoft.github.io/MLOS/documentation/02-Build/#cli-make) with `make`: ```sh make dotnet-build cmake-build cmake-install ``` The relevant output will be at: - Mlos.Agent.Server: This file corresponds to the main entry point for MLOS, written in C#. You can find the source in `source/Mlos.Agent.Server/MlosAgentServer.cs` and the binary at `target/bin/Release/Mlos.Agent.Server.dll` - SmartCache: This is the C++ executable that implements the SmartCache and executes some workloads. You can find the source in `source/Examples/SmartCache/Main.cpp` and the binary at `target/bin/Release/SmartCache` - SmartCache.SettingsRegistry: This is the C# code that declares the configuration options for the SmartCache component, and defines the communication between the MLOS Agent and the SmartCache component. You can find the source in `source/Examples/SmartCache/SmartCache.SettingsRegistry/AssemblyInitializer.cs` and the binary at `target/bin/Release/SmartCache.SettingsRegistry.dll` ## Starting the MLOS Agent and executing the workloads: Within the docker container, we can now tell the agent where the configuration options are stored, by setting the `MLOS_SETTINGS_REGISTRY_PATH` environment variable. Then, we can run the MLOS Agent, which will in turn run the SmartCache executable.
```sh export MLOS_SETTINGS_REGISTRY_PATH="target/bin/Release" tools/bin/dotnet target/bin/Release/Mlos.Agent.Server.dll \ --executable target/bin/Release/SmartCache ``` The main loop of ``SmartCache`` contains the following: ```cpp for (int observations = 0; observations < 100; observations++) { // run 100 observations std::cout << "observations: " << observations << std::endl; for (int i = 0; i < 20; i++) { // run a workload 20 times CyclicalWorkload(2048, smartCache); } bool isConfigReady = false; std::mutex waitForConfigMutex; std::condition_variable waitForConfigCondVar; // Setup a callback. // // OMMITTED // [...] // Send a request to obtain a new configuration. SmartCache::RequestNewConfigurationMessage msg = { 0 }; mlosContext.SendTelemetryMessage(msg); // wait for MLOS Agent so send a message with a new configuration std::unique_lock<std::mutex> lock(waitForConfigMutex); while (!isConfigReady) { waitForConfigCondVar.wait(lock); } config.Update(); smartCache.Reconfigure(); } ``` After each iteration, a TelemetryMessage is sent to the MLOS Agent, and the SmartCache blocks until it receives a new configuration to run the next workload. By default, the agent is not connected to any optimizer, and will not change the original configuration, so the workload will just run uninterrupted. ## Starting an Optimizer We can now also start an Optimizer service for the MLOS Agent to connect to so that we can actually optimize the parameters for this workload. As the optimizer is running in a separate process, we need to create a new shell on the running docker container using the following command: ```shell docker exec -it mlos-build /bin/bash ``` Within the container, we now install the Python optimizer service: ```shell pip install -e source/Mlos.Python/ ``` And run it: ```shell start_optimizer_microservice launch --port 50051 ``` ## Connecting the Agent to the Optimizer Now we can start the agent again, this time also pointing it to the optimizer: ```sh tools/bin/dotnet target/bin/Release/Mlos.Agent.Server.dll \ --executable target/bin/Release/SmartCache \ --optimizer-uri http://localhost:50051 ``` This will run the workload again, this time using the optimizer to suggest better configurations. You should see output both in the terminal the agent is running in and in the terminal the OptimizerMicroservice is running in. ## Inspecting results After (or even while) the optimization is running, we can connect to the optimizer via another GRPC channel. The optimizer is running within the docker container, but when we started docker, we exposed the port 50051 as the same port 50051 on the host machine (on which this notebook is running). So we can now connect to the optimizer within the docker container at `127.0.0.1:50051`. This assumes this notebook runs in an environment with the `mlos` Python package installed ([see the documentation](https://microsoft.github.io/MLOS/documentation/01-Prerequisites/#python-quickstart)). 
``` from mlos.Grpc.OptimizerMonitor import OptimizerMonitor import grpc # create a grpc channel and instantiate the OptimizerMonitor channel = grpc.insecure_channel('127.0.0.1:50051') optimizer_monitor = OptimizerMonitor(grpc_channel=channel) optimizer_monitor # There should be one optimizer running in the docker container # corresponding to the C++ SmartCache optimization problem # An OptimizerMicroservice can run multiple optimizers, which would all be listed here optimizers = optimizer_monitor.get_existing_optimizers() optimizers ``` We can now get the observations exactly the same way as for the Python example in `SmartCacheOptimization.ipynb` ``` optimizer = optimizers[0] features_df, objectives_df = optimizer.get_all_observations() import pandas as pd features, targets = optimizer.get_all_observations() data = pd.concat([features, targets], axis=1) data.to_json("CacheLatencyMainCPP.json") data lru_data, mru_data = data.groupby('cache_implementation') import matplotlib.pyplot as plt line_lru = lru_data[1].plot( y='PushLatency', label='LRU', marker='o', linestyle='none', alpha=.6,figsize=(16, 6)) mru_data[1].plot( y='PushLatency', label='MRU', marker='o', linestyle='none', alpha=.6, ax=plt.gca(),figsize=(16, 6)) plt.ylabel("Cache Latency") plt.xlabel("Observations") plt.legend() plt.savefig("Cache Latency&Observations-Main.png") lru_data, mru_data = data.groupby('cache_implementation') import matplotlib.pyplot as plt line_lru = lru_data[1].plot(x='lru_cache_config.cache_size', y='PushLatency', label='LRU', marker='o', linestyle='none', alpha=.6,figsize=(16, 6)) mru_data[1].plot(x='mru_cache_config.cache_size', y='PushLatency', label='MRU', marker='o', linestyle='none', alpha=.6, ax=plt.gca(),figsize=(16, 6)) plt.ylabel("Cache Latency") plt.xlabel("Cache Size") plt.legend() plt.savefig("Cache Latency & Size - Main.png") ``` # Going Further 1. Instead of cache hit rate, use a metric based on runtime (e.g. latency, throughput, etc.) as the performance metric. Environment (context) sensitive metrics can also be measured (e.g. [time](https://bduvenhage.me/performance/2019/06/22/high-performance-timer.html)). How does the signal from the runtime-based metric compare to the application-specific one (hit rate)? How consistent are the runtime results across multiple runs? (See the sketch after this list.) 2. Pick another widely used [cache replacement policy](https://en.wikipedia.org/wiki/Cache_replacement_policies) such as LFU and construct a synthetic workload on which you expect this strategy to work well. Implement the policy and workload as part of the SmartCache example, and add a new option to the ``SmartCache.SettingsRegistry\AssemblyInitializer.cs``. Run the optimization again with your new workload. Does the optimizer find that your new policy performs best?
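Following up on item 1 above, here is a small hedged sketch of ours (not part of the MLOS example) that reuses the `data` frame and the `PushLatency` and `cache_implementation` columns from the cells above to summarize how consistent the runtime-based metric is for each cache implementation; the column names may differ in your build.

```
# Hedged sketch: spread of the runtime-based metric per cache implementation.
summary = (data
           .groupby('cache_implementation')['PushLatency']
           .agg(['count', 'mean', 'std', 'min', 'max']))
print(summary)

# A rough consistency measure: coefficient of variation per implementation.
print((summary['std'] / summary['mean']).rename('cv'))
```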
``` #Download the dataset from opensig import urllib.request urllib.request.urlretrieve('http://opendata.deepsig.io/datasets/2016.10/RML2016.10a.tar.bz2', 'RML2016.10a.tar.bz2') #decompress the .bz2 file into .tar file import sys import os import bz2 zipfile = bz2.BZ2File('./RML2016.10a.tar.bz2') # open the file data = zipfile.read() # get the decompressed data #write the .tar file open('./RML2016.10a.tar', 'wb').write(data) # write a uncompressed file #extract the .tar file import tarfile my_tar = tarfile.open('./RML2016.10a.tar') my_tar.extractall('./') # specify which folder to extract to my_tar.close() #extract the pickle file import pickle import numpy as np Xd = pickle.load(open("RML2016.10a_dict.pkl",'rb'),encoding="bytes") snrs,mods = map(lambda j: sorted(list(set(map(lambda x: x[j], Xd.keys())))), [1,0]) X = [] lbl = [] for mod in mods: for snr in snrs: X.append(Xd[(mod,snr)]) for i in range(Xd[(mod,snr)].shape[0]): lbl.append((mod,snr)) X = np.vstack(X) # Import all the things we need --- %matplotlib inline import random import tensorflow.keras.utils import tensorflow.keras.models as models from tensorflow.keras.layers import Reshape,Dense,Dropout,Activation,Flatten from tensorflow.keras.layers import GaussianNoise from tensorflow.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D from tensorflow.keras.regularizers import * from tensorflow.keras.optimizers import * import matplotlib.pyplot as plt import seaborn as sns import tensorflow.keras # Partition the data # into training and test sets of the form we can train/test on np.random.seed(2020) n_examples = X.shape[0] n_train = n_examples // 2 train_idx = np.random.choice(range(0,n_examples), size=n_train, replace=False) test_idx = list(set(range(0,n_examples))-set(train_idx)) X_train = X[train_idx] X_test = X[test_idx] #one-hot encoding the label from sklearn import preprocessing lb = preprocessing.LabelBinarizer() lb.fit(np.asarray(lbl)[:,0]) print(lb.classes_) lbl_encoded=lb.transform(np.asarray(lbl)[:,0]) y_train=lbl_encoded[train_idx] y_test=lbl_encoded[test_idx] in_shp = list(X_train.shape[1:]) print(X_train.shape, in_shp) classes = mods dr = 0.5 # dropout rate (%) model = models.Sequential() model.add(Reshape([1]+in_shp, input_shape=in_shp)) model.add(ZeroPadding2D((0, 2))) model.add(Convolution2D(256, 1, 3, activation="relu", name="conv1")) model.add(Dropout(dr)) model.add(ZeroPadding2D((0, 2))) model.add(Convolution2D(80, 1, 3, activation="relu", name="conv2")) model.add(Dropout(dr)) model.add(Flatten()) model.add(Dense(256, activation='relu', name="dense1")) model.add(Dropout(dr)) model.add(Dense( len(classes), name="dense2" )) model.add(Activation('softmax')) model.add(Reshape([len(classes)])) model.compile(loss='categorical_crossentropy', optimizer='adam') model.summary() # Set up some params nb_epoch = 100 # number of epochs to train on batch_size = 1024 # training batch size from sklearn.model_selection import train_test_split X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.2) # perform training ... 
# - call the main training loop in keras for our network+dataset filepath = 'convmodrecnets_CNN2_0.5.wts.h5' import time t_0=time.time() history = model.fit(X_train, y_train, batch_size=batch_size, epochs=nb_epoch, verbose=2, validation_data=(X_valid, y_valid), callbacks = [ tensorflow.keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=True, mode='auto'), tensorflow.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, verbose=0, mode='auto') ]) delta_t=time.time()-t_0 print(delta_t) # we re-load the best weights once training is finished model.load_weights(filepath) # Show simple version of performance score = model.evaluate(X_test, y_test, verbose=0, batch_size=batch_size) print(score) # Show loss curves plt.figure() plt.title('Training performance') plt.plot(history.epoch, history.history['loss'], label='train loss+error') plt.plot(history.epoch, history.history['val_loss'], label='val_error') plt.legend() def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues, labels=[]): plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(labels)) plt.xticks(tick_marks, labels, rotation=45) plt.yticks(tick_marks, labels) plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') # Plot confusion matrix test_Y_hat = model.predict(X_test, batch_size=batch_size) conf = np.zeros([len(classes),len(classes)]) confnorm = np.zeros([len(classes),len(classes)]) for i in range(0,X_test.shape[0]): j = list(y_test[i,:]).index(1) k = int(np.argmax(test_Y_hat[i,:])) conf[j,k] = conf[j,k] + 1 for i in range(0,len(classes)): confnorm[i,:] = conf[i,:] / np.sum(conf[i,:]) plot_confusion_matrix(confnorm, labels=classes) # Get the test accuracy for different SNRs acc = {} acc_array=[] snr_array=np.asarray(lbl)[:,1] lb_temp = preprocessing.LabelBinarizer() lb_temp.fit(snr_array) temp_array=lb_temp.classes_ snr_label_array = [] snr_label_array.append(temp_array[6]) snr_label_array.append(temp_array[4]) snr_label_array.append(temp_array[3]) snr_label_array.append(temp_array[2]) snr_label_array.append(temp_array[1]) snr_label_array.append(temp_array[0]) snr_label_array.append(temp_array[9]) snr_label_array.append(temp_array[8]) snr_label_array.append(temp_array[7]) snr_label_array.append(temp_array[5]) snr_label_array.append(temp_array[10]) snr_label_array.append(temp_array[16]) snr_label_array.append(temp_array[17]) snr_label_array.append(temp_array[18]) snr_label_array.append(temp_array[19]) snr_label_array.append(temp_array[11]) snr_label_array.append(temp_array[12]) snr_label_array.append(temp_array[13]) snr_label_array.append(temp_array[14]) snr_label_array.append(temp_array[15]) #print(snr_label_array) y_test_snr=snr_array[test_idx] for snr in snr_label_array: test_X_i = X_test[np.where(y_test_snr==snr)] test_Y_i = y_test[np.where(y_test_snr==snr)] test_Y_i_hat = model.predict(test_X_i) conf = np.zeros([len(classes),len(classes)]) confnorm = np.zeros([len(classes),len(classes)]) for i in range(0,test_X_i.shape[0]): j = list(test_Y_i[i,:]).index(1) k = int(np.argmax(test_Y_i_hat[i,:])) conf[j,k] = conf[j,k] + 1 for i in range(0,len(classes)): confnorm[i,:] = conf[i,:] / np.sum(conf[i,:]) #plt.figure() #plot_confusion_matrix(confnorm, labels=classes, title="ConvNet Confusion Matrix (SNR=%d)"%(snr)) cor = np.sum(np.diag(conf)) ncor = np.sum(conf) - cor print("Overall Accuracy: ", cor / (cor+ncor),"for SNR",snr) acc[snr] = 1.0*cor/(cor+ncor) acc_array.append(1.0*cor/(cor+ncor)) 
print("Random Guess Accuracy:",1/11) # Show loss curves plt.figure() plt.title('Accuracy vs SNRs') plt.plot(np.arange(-20,20,2), acc_array) ```
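As a small follow-up, the per-SNR accuracies collected above can be put into a sorted table, which is easier to scan than the printout inside the loop. This is a hedged sketch of ours; it assumes the entries of ``snr_label_array`` can be parsed as integers.

```
# Hedged sketch: tabulate classification accuracy by SNR.
import pandas as pd

acc_table = pd.Series(
    {int(snr): a for snr, a in zip(snr_label_array, acc_array)}, name='accuracy')
print(acc_table.sort_index())
```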
### Fairness ### ##### In this exercise we explore concepts and techniques for fairness in machine learning ##### <b> Through this exercise one can * Increase awareness of different types of biases that can occur * Explore feature data to identify potential sources of biases before training the model. * Evaluate model performance by subgroup rather than in aggregate Dataset: We use the Adult Census Income dataset, commonly used in machine learning. The task is to predict whether a person makes over $50,000 a year while applying different methodologies to ensure fairness </b> ``` ### setup %tensorflow_version 2.x from __future__ import absolute_import, division, print_function, unicode_literals ## title Import relevant modules and install Facets import numpy as np import pandas as pd import tensorflow as tf from tensorflow.keras import layers from matplotlib import pyplot as plt from matplotlib import rcParams import seaborn as sns # adjust the granularity of reporting. pd.options.display.max_rows = 10 pd.options.display.float_format = "{:.1f}".format from google.colab import widgets # code for facets from IPython.core.display import display, HTML import base64 !pip install facets-overview==1.0.0 from facets_overview.feature_statistics_generator import FeatureStatisticsGenerator ## load the adult data set. COLUMNS = ["age", "workclass", "fnlwgt", "education", "education_num", "marital_status", "occupation", "relationship", "race", "gender", "capital_gain", "capital_loss", "hours_per_week", "native_country", "income_bracket"] train_csv = tf.keras.utils.get_file('adult.data', 'https://download.mlcc.google.com/mledu-datasets/adult_census_train.csv') test_csv = tf.keras.utils.get_file('adult.data', 'https://download.mlcc.google.com/mledu-datasets/adult_census_test.csv') train_df = pd.read_csv(train_csv, names=COLUMNS, sep=r'\s*,\s*', engine='python', na_values="?") test_df = pd.read_csv(test_csv, names=COLUMNS, sep=r'\s*,\s*', skiprows=[0], engine='python', na_values="?") ``` <b> Analysing the dataset with Facets. We analyse the dataset to identify any peculiarities before we train the model. Here are some of the questions to ask before we go ahead with the training: * Are there missing feature values for a large number of observations? * Are there features that are missing that might affect other features? * Are there any unexpected feature values? * What signs of data skew do you see? </b> <b> We use the Facets overview to analyze the distribution of values across the Adult dataset. </b> ``` ## title Visualize the Data in Facets fsg = FeatureStatisticsGenerator() dataframes = [{'table': train_df, 'name': 'trainData'}] censusProto = fsg.ProtoFromDataFrames(dataframes) protostr = base64.b64encode(censusProto.SerializeToString()).decode("utf-8") HTML_TEMPLATE = """<script src="https://cdnjs.cloudflare.com/ajax/libs/webcomponentsjs/1.3.3/webcomponents-lite.js"></script> <link rel="import" href="https://raw.githubusercontent.com/PAIR-code/facets/1.0.0/facets-dist/facets-jupyter.html"> <facets-overview id="elem"></facets-overview> <script> document.querySelector("#elem").protoInput = "{protostr}"; </script>""" html = HTML_TEMPLATE.format(protostr=protostr) display(HTML(html)) ``` <b> Task #1: Perform the fairness analysis on the Facets visualization above: click on the Show Raw Data button on the histograms and categorical features to see the distribution of values, and from that try to find out: are there any missing features? are there features missing that can affect other features?
are there any unexpected feature values? are there any skews in the dataset? </b> <b> Going further, using the knowledge of the Adult datset we can now construct a neural network to predict income by using the Tensor flow's Keras API.</b> ``` ## first convert the pandas data frame of the adult datset to tensor flow arrays. def pandas_to_numpy(data): # Drop empty rows. data = data.dropna(how="any", axis=0) # Separate DataFrame into two Numpy arrays labels = np.array(data['income_bracket'] == ">50K") features = data.drop('income_bracket', axis=1) features = {name:np.array(value) for name, value in features.items()} return features, labels ## map the data to columns that maps to the tensor flow using tf.feature_columns ##title Create categorical feature columns # we use categorical_column_with_hash_bucket() for the occupation and native_country columns to help map # each feature string into an integer ID. # since we dont know the full range of values for this columns. occupation = tf.feature_column.categorical_column_with_hash_bucket( "occupation", hash_bucket_size=1000) native_country = tf.feature_column.categorical_column_with_hash_bucket( "native_country", hash_bucket_size=1000) # since we know what the possible values for the other columns # we can be more explicit and use categorical_column_with_vocabulary_list() gender = tf.feature_column.categorical_column_with_vocabulary_list( "gender", ["Female", "Male"]) race = tf.feature_column.categorical_column_with_vocabulary_list( "race", [ "White", "Asian-Pac-Islander", "Amer-Indian-Eskimo", "Other", "Black" ]) education = tf.feature_column.categorical_column_with_vocabulary_list( "education", [ "Bachelors", "HS-grad", "11th", "Masters", "9th", "Some-college", "Assoc-acdm", "Assoc-voc", "7th-8th", "Doctorate", "Prof-school", "5th-6th", "10th", "1st-4th", "Preschool", "12th" ]) marital_status = tf.feature_column.categorical_column_with_vocabulary_list( "marital_status", [ "Married-civ-spouse", "Divorced", "Married-spouse-absent", "Never-married", "Separated", "Married-AF-spouse", "Widowed" ]) relationship = tf.feature_column.categorical_column_with_vocabulary_list( "relationship", [ "Husband", "Not-in-family", "Wife", "Own-child", "Unmarried", "Other-relative" ]) workclass = tf.feature_column.categorical_column_with_vocabulary_list( "workclass", [ "Self-emp-not-inc", "Private", "State-gov", "Federal-gov", "Local-gov", "?", "Self-emp-inc", "Without-pay", "Never-worked" ]) # title Create numeric feature columns # For Numeric features, we can just call on feature_column.numeric_column() # to use its raw value instead of having to create a map between value and ID. age = tf.feature_column.numeric_column("age") fnlwgt = tf.feature_column.numeric_column("fnlwgt") education_num = tf.feature_column.numeric_column("education_num") capital_gain = tf.feature_column.numeric_column("capital_gain") capital_loss = tf.feature_column.numeric_column("capital_loss") hours_per_week = tf.feature_column.numeric_column("hours_per_week") ## make age a categorical feature age_buckets = tf.feature_column.bucketized_column( age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65]) # Define the model features. # we define the gender as a subgroup and can be used later for special handling. # subgroup is a group of individuals who share a common set of characteristics. # List of variables, with special handling for gender subgroup. 
variables = [native_country, education, occupation, workclass, relationship, age_buckets] subgroup_variables = [gender] feature_columns = variables + subgroup_variables ``` <b> We can now train a neural network based on the features we derived earlier. We use a feed-forward neural network with two hidden layers. We first convert our high-dimensional categorical features into a real-valued vector, which we call an embedding vector. We use 'gender' for filtering the test set for subgroup evaluations. </b> ``` deep_columns = [ tf.feature_column.indicator_column(workclass), tf.feature_column.indicator_column(education), tf.feature_column.indicator_column(age_buckets), tf.feature_column.indicator_column(relationship), tf.feature_column.embedding_column(native_country, dimension=8), tf.feature_column.embedding_column(occupation, dimension=8), ] ## define Deep Neural Net Model # Parameters from form fill-ins HIDDEN_UNITS_LAYER_01 = 128 #@param HIDDEN_UNITS_LAYER_02 = 64 #@param LEARNING_RATE = 0.1 #@param L1_REGULARIZATION_STRENGTH = 0.001 #@param L2_REGULARIZATION_STRENGTH = 0.001 #@param RANDOM_SEED = 512 tf.random.set_seed(RANDOM_SEED) # List of built-in metrics that we'll need to evaluate performance. METRICS = [ tf.keras.metrics.TruePositives(name='tp'), tf.keras.metrics.FalsePositives(name='fp'), tf.keras.metrics.TrueNegatives(name='tn'), tf.keras.metrics.FalseNegatives(name='fn'), tf.keras.metrics.BinaryAccuracy(name='accuracy'), tf.keras.metrics.Precision(name='precision'), tf.keras.metrics.Recall(name='recall'), tf.keras.metrics.AUC(name='auc'), ] regularizer = tf.keras.regularizers.l1_l2( l1=L1_REGULARIZATION_STRENGTH, l2=L2_REGULARIZATION_STRENGTH) model = tf.keras.Sequential([ layers.DenseFeatures(deep_columns), layers.Dense( HIDDEN_UNITS_LAYER_01, activation='relu', kernel_regularizer=regularizer), layers.Dense( HIDDEN_UNITS_LAYER_02, activation='relu', kernel_regularizer=regularizer), layers.Dense( 1, activation='sigmoid', kernel_regularizer=regularizer) ]) model.compile(optimizer=tf.keras.optimizers.Adagrad(LEARNING_RATE), loss=tf.keras.losses.BinaryCrossentropy(), metrics=METRICS) ## title Fit Deep Neural Net Model to the Adult Training Dataset EPOCHS = 10 BATCH_SIZE = 1000 features, labels = pandas_to_numpy(train_df) model.fit(x=features, y=labels, epochs=EPOCHS, batch_size=BATCH_SIZE) ## Evaluate Deep Neural Net Performance features, labels = pandas_to_numpy(test_df) model.evaluate(x=features, y=labels); ``` #### Confusion Matrix #### <b> A confusion matrix is a grid which evaluates a model's performance by comparing its predictions against the ground truth, and summarizes how often the model made the correct prediction and how often it made the wrong prediction. Let's start by creating a binary confusion matrix for our income-prediction model—binary because our label (income_bracket) has only two possible values (<50K or >50K). We'll define an income of >50K as our positive label, and an income of <50K as our negative label. The matrix represents four possible states (a sketch of computing these counts on the test set follows the list): * true positive: Model predicts >50K, and that is the ground truth. * true negative: Model predicts <50K, and that is the ground truth. * false positive: Model predicts >50K, and that contradicts reality. * false negative: Model predicts <50K, and that contradicts reality.
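Here is a hedged sketch of ours (reusing ``model``, ``pandas_to_numpy``, and ``test_df`` from the cells above) that computes those four counts on the test set by thresholding the predicted probability of >50K at 0.5.

```
# Hedged sketch: compute TP / FN / FP / TN on the test set at a 0.5 threshold.
import numpy as np

test_features, test_labels = pandas_to_numpy(test_df)
predicted_positive = model.predict(test_features).flatten() > 0.5

tp = np.sum(predicted_positive & test_labels)
fn = np.sum(~predicted_positive & test_labels)
fp = np.sum(predicted_positive & ~test_labels)
tn = np.sum(~predicted_positive & ~test_labels)

# Arranged to match the layout expected by the plotting helper defined next:
# [[True Positives, False Negatives], [False Positives, True Negatives]]
confusion_matrix = np.array([[tp, fn], [fp, tn]])
print(confusion_matrix)
```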
``` ## Function to Visualize and plot the Binary Confusion Matrix def plot_confusion_matrix( confusion_matrix, class_names, subgroup, figsize = (8,6)): df_cm = pd.DataFrame( confusion_matrix, index=class_names, columns=class_names, ) rcParams.update({ 'font.family':'sans-serif', 'font.sans-serif':['Liberation Sans'], }) sns.set_context("notebook", font_scale=1.25) fig = plt.figure(figsize=figsize) plt.title('Confusion Matrix for Performance Across ' + subgroup) # Combine the instance (numercial value) with its description strings = np.asarray([['True Positives', 'False Negatives'], ['False Positives', 'True Negatives']]) labels = (np.asarray( ["{0:g}\n{1}".format(value, string) for string, value in zip( strings.flatten(), confusion_matrix.flatten())])).reshape(2, 2) heatmap = sns.heatmap(df_cm, annot=labels, fmt="", linewidths=2.0, cmap=sns.color_palette("GnBu_d")); heatmap.yaxis.set_ticklabels( heatmap.yaxis.get_ticklabels(), rotation=0, ha='right') heatmap.xaxis.set_ticklabels( heatmap.xaxis.get_ticklabels(), rotation=45, ha='right') plt.ylabel('References') plt.xlabel('Predictions') return fig ```
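A quick usage sketch of ours for the helper above, feeding in the matrix from the previous sketch; the class names are ordered so that >50K is treated as the positive class, matching the hard-coded TP/FN/FP/TN labels inside the function.

```
# Hedged usage sketch, reusing `confusion_matrix` from the earlier sketch.
fig = plot_confusion_matrix(confusion_matrix, ['>50K', '<50K'], 'the Test Set')
plt.show()
```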
<a href="https://colab.research.google.com/github/lvisdd/object_detection_tutorial/blob/master/object_detection_face_detector.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` # restart (or reset) your virtual machine #!kill -9 -1 ``` # [Tensorflow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) ``` !git clone https://github.com/tensorflow/models.git ``` # COCO API installation ``` !git clone https://github.com/cocodataset/cocoapi.git %cd cocoapi/PythonAPI !make !cp -r pycocotools /content/models/research/ ``` # Protobuf Compilation ``` %cd /content/models/research/ !protoc object_detection/protos/*.proto --python_out=. ``` # Add Libraries to PYTHONPATH ``` %cd /content/models/research/ %env PYTHONPATH=/env/python:/content/models/research:/content/models/research/slim:/content/models/research/object_detection %env ``` # Testing the Installation ``` !python object_detection/builders/model_builder_test.py %cd /content/models/research/object_detection ``` ## [Tensorflow Face Detector](https://github.com/yeephycho/tensorflow-face-detection) ``` %cd /content !git clone https://github.com/yeephycho/tensorflow-face-detection.git %cd tensorflow-face-detection !wget https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg filename = 'grace_hopper.jpg' #!python inference_usbCam_face.py grace_hopper.jpg import sys import time import numpy as np import tensorflow as tf import cv2 from utils import label_map_util from utils import visualization_utils_color as vis_util # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_CKPT = './model/frozen_inference_graph_face.pb' # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = './protos/face_label_map.pbtxt' NUM_CLASSES = 2 label_map = label_map_util.load_labelmap(PATH_TO_LABELS) categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True) category_index = label_map_util.create_category_index(categories) class TensoflowFaceDector(object): def __init__(self, PATH_TO_CKPT): """Tensorflow detector """ self.detection_graph = tf.Graph() with self.detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') with self.detection_graph.as_default(): config = tf.ConfigProto() config.gpu_options.allow_growth = True self.sess = tf.Session(graph=self.detection_graph, config=config) self.windowNotSet = True def run(self, image): """image: bgr image return (boxes, scores, classes, num_detections) """ image_np = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) # the array based representation of the image will be used later in order to prepare the # result image with boxes and labels on it. # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0') # Each box represents a part of the image where a particular object was detected. boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0') # Each score represent how level of confidence for each of the objects. # Score is shown on the result image, together with the class label. 
scores = self.detection_graph.get_tensor_by_name('detection_scores:0') classes = self.detection_graph.get_tensor_by_name('detection_classes:0') num_detections = self.detection_graph.get_tensor_by_name('num_detections:0') # Actual detection. start_time = time.time() (boxes, scores, classes, num_detections) = self.sess.run( [boxes, scores, classes, num_detections], feed_dict={image_tensor: image_np_expanded}) elapsed_time = time.time() - start_time print('inference time cost: {}'.format(elapsed_time)) return (boxes, scores, classes, num_detections) # This is needed to display the images. %matplotlib inline tDetector = TensoflowFaceDector(PATH_TO_CKPT) original = cv2.imread(filename) image = cv2.cvtColor(original, cv2.COLOR_BGR2RGB) (boxes, scores, classes, num_detections) = tDetector.run(image) vis_util.visualize_boxes_and_labels_on_image_array( image, np.squeeze(boxes), np.squeeze(classes).astype(np.int32), np.squeeze(scores), category_index, use_normalized_coordinates=True, line_thickness=4) from matplotlib import pyplot as plt plt.imshow(image) ```
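As a small follow-up, the raw arrays returned by ``tDetector.run`` can be inspected directly. This is a hedged sketch of ours; the 0.5 confidence threshold is an arbitrary choice, and the boxes are ``[ymin, xmin, ymax, xmax]`` in normalized coordinates, as implied by ``use_normalized_coordinates=True`` above.

```
# Hedged sketch: list detections above a confidence threshold.
import numpy as np

threshold = 0.5
scores_np = np.squeeze(scores)
boxes_np = np.squeeze(boxes)
keep = scores_np > threshold

print('faces above threshold:', int(keep.sum()))
for box, score in zip(boxes_np[keep], scores_np[keep]):
    print('score={:.2f} box={}'.format(score, np.round(box, 3)))
```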
# Writing a Molecular Monte Carlo Simulation Starting today, make sure you have the functions 1. `calculate_LJ` - written in class 1. `read_xyz` - provided in class 1. `calculate_total_energy` - modified version provided in this notebook written for homework which has cutoff 1. `calculate_distance` - should be the version written for homework which accounts for periodic boundaries. 1. `calculate_tail_correction` - written for homework ``` # add imports here import math import random def calculate_total_energy(coordinates, box_length, cutoff): """ Calculate the total energy of a set of particles using the Lennard Jones potential. Parameters ---------- coordinates : list A nested list containing the x, y,z coordinate for each particle box_length : float The length of the box. Assumes cubic box. cutoff : float The cutoff length Returns ------- total_energy : float The total energy of the set of coordinates. """ total_energy = 0 num_atoms = len(coordinates) for i in range(num_atoms): for j in range(i+1, num_atoms): # Calculate the distance between the particles - exercise. dist_ij = calculate_distance(coordinates[i], coordinates[j], box_length) if dist_ij < cutoff: # Calculate the pairwise LJ energy LJ_ij = calculate_LJ(dist_ij) # Add to total energy. total_energy += LJ_ij return total_energy def read_xyz(filepath): """ Reads coordinates from an xyz file. Parameters ---------- filepath : str The path to the xyz file to be processed. Returns ------- atomic_coordinates : list A two dimensional list containing atomic coordinates """ with open(filepath) as f: box_length = float(f.readline().split()[0]) num_atoms = float(f.readline()) coordinates = f.readlines() atomic_coordinates = [] for atom in coordinates: split_atoms = atom.split() float_coords = [] # We split this way to get rid of the atom label. for coord in split_atoms[1:]: float_coords.append(float(coord)) atomic_coordinates.append(float_coords) return atomic_coordinates, box_length def calculate_LJ(r_ij): """ The LJ interaction energy between two particles. Computes the pairwise Lennard Jones interaction energy based on the separation distance in reduced units. Parameters ---------- r_ij : float The distance between the particles in reduced units. Returns ------- pairwise_energy : float The pairwise Lennard Jones interaction energy in reduced units. Examples -------- >>> calculate_LJ(1) 0 """ r6_term = math.pow(1/r_ij, 6) r12_term = math.pow(r6_term, 2) pairwise_energy = 4 * (r12_term - r6_term) return pairwise_energy def calculate_distance(coord1, coord2, box_length=None): """ Calculate the distance between two points. When box_length is set, the minimum image convention is used to calculate the distance between the points. Parameters ---------- coord1, coord2 : list The coordinates of the points, [x, y, z] box_length : float, optional The box length Returns ------- distance : float The distance between the two points accounting for periodic boundaries """ distance = 0 for i in range(3): hold_dist = abs(coord2[i] - coord1[i]) if (box_length): if hold_dist > box_length/2: hold_dist = hold_dist - (box_length * round(hold_dist/box_length)) distance += math.pow(hold_dist, 2) return math.sqrt(distance) ## Add your group's tail correction function def calculate_tail_correction(num_particles, box_length, cutoff): """ The tail correction associated with using a cutoff radius. Computes the tail correction based on a cutoff radius used in the LJ energy calculation in reduced units. 
Parameters ---------- num_particles : int The number of particles in the system. box_length : int Size of the box length of the system, used to calculate volume. cutoff : int Cutoff distance. Returns ------- tail_correction : float The tail correction associated with using the cutoff. """ brackets = (1/3*math.pow(1/cutoff,9)) - math.pow(1/cutoff,3) volume = box_length**3 constant = ((8*math.pi*(num_particles**2))/(3*volume)) tail_correction = constant * brackets return tail_correction ``` The Metropolis Criterion $$ P_{acc}(m \rightarrow n) = \text{min} \left[ 1,e^{-\beta \Delta U} \right] $$ ``` def accept_or_reject(delta_U, beta): """ Accept or reject a move based on the Metropolis criterion. Parameters ---------- detlta_U : float The change in energy for moving system from state m to n. beta : float 1/temperature Returns ------- boolean Whether the move is accepted. """ if delta_U <= 0.0: accept = True else: #Generate a random number on (0,1) random_number = random.random() p_acc = math.exp(-beta*delta_U) if random_number < p_acc: accept = True else: accept = False return accept # Sanity checks - test cases delta_energy = -1 beta = 1 accepted = accept_or_reject(delta_energy, beta) assert accepted # Sanity checks - test cases delta_energy = 0 beta = 1 accepted = accept_or_reject(delta_energy, beta) assert accepted # To test function with random numbers # can set random seed #To set seed random.seed(0) random.random() delta_energy = 1 beta = 1 random.seed(0) accepted = accept_or_reject(delta_energy, beta) assert accepted is False #Clear seed random.seed() def calculate_pair_energy(coordinates, i_particle, box_length, cutoff): """ Calculate the interaction energy of a particle with its environment (all other particles in the system) Parameters ---------- coordinates : list The coordinates for all the particles in the system. i_particle : int The particle number for which to calculate the energy. cutoff : float The simulation cutoff. Beyond this distance, interactions are not calculated. box_length : float The length of the box for periodic bounds Returns ------- e_total : float The pairwise interaction energy of the ith particles with all other particles in the system """ e_total = 0.0 #creates a list of the coordinates for the i_particle i_position = coordinates[i_particle] num_atoms = len(coordinates) for j_particle in range(num_atoms): if i_particle != j_particle: #creates a list of coordinates for the j_particle j_position = coordinates[j_particle] rij = calculate_distance(i_position, j_position, box_length) if rij < cutoff: e_pair = calculate_LJ(rij) e_total += e_pair return e_total ## Sanity checks test_coords = [[0, 0, 0], [0, 0, 2**(1/6)], [0, 0, 2*2**(1/6)]] # What do you expect the result to be for particle index 1 (use cutoff of 3)? assert calculate_pair_energy(test_coords, 1, 10, 3) == -2 # What do you expect the result to be for particle index 0 (use cutoff of 2)? 
assert calculate_pair_energy(test_coords, 0, 10, 2) == -1 assert calculate_pair_energy(test_coords, 0, 10, 3) == calculate_pair_energy(test_coords, 2, 10, 3) ``` # Monte Carlo Loop ``` # Read or generate initial coordinates coordinates, box_length = read_xyz('lj_sample_configurations/lj_sample_config_periodic1.txt') # Set simulation parameters reduced_temperature = 0.9 num_steps = 5000 max_displacement = 0.1 cutoff = 3 #how often to print an update freq = 1000 # Calculated quantities beta = 1 / reduced_temperature num_particles = len(coordinates) # Energy calculations total_energy = calculate_total_energy(coordinates, box_length, cutoff) print(total_energy) total_correction = calculate_tail_correction(num_particles, box_length, cutoff) print(total_correction) total_energy += total_correction for step in range(num_steps): # 1. Randomly pick one of the particles. random_particle = random.randrange(num_particles) # 2. Calculate the interaction energy of the selected particle with the system. current_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff) # 3. Generate a random x, y, z displacement. x_rand = random.uniform(-max_displacement, max_displacement) y_rand = random.uniform(-max_displacement, max_displacement) z_rand = random.uniform(-max_displacement, max_displacement) # 4. Modify the coordinate of Nth particle by generated displacements. coordinates[random_particle][0] += x_rand coordinates[random_particle][1] += y_rand coordinates[random_particle][2] += z_rand # 5. Calculate the interaction energy of the moved particle with the system and store this value. proposed_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff) delta_energy = proposed_energy - current_energy # 6. Calculate if we accept the move based on energy difference. accept = accept_or_reject(delta_energy, beta) # 7. If accepted, move the particle. if accept: total_energy += delta_energy else: #Move not accepted, roll back coordinates coordinates[random_particle][0] -= x_rand coordinates[random_particle][1] -= y_rand coordinates[random_particle][2] -= z_rand # 8. Print the energy if step is a multiple of freq. if step % freq == 0: print(step, total_energy/num_particles) ```
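A useful diagnostic to add is the move acceptance rate, which for displacement moves like these is often tuned (via ``max_displacement``) toward very roughly 30-50%. The sketch below is ours and assumes the two counters are incremented inside the Monte Carlo loop above, as indicated in the comments.

```
# Hedged sketch: track the acceptance rate of trial moves.
n_trials = 0
n_accepted = 0

# Inside the Monte Carlo loop above, one would add:
#     n_trials += 1        # after each proposed move
#     if accept:
#         n_accepted += 1  # inside the `if accept:` branch

if n_trials > 0:
    print(f'Acceptance rate: {n_accepted / n_trials:.2f}')
```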
``` %load_ext autoreload %autoreload 2 import os import re from glob import glob import json import numpy as np import pandas as pd from difflib import SequenceMatcher import matplotlib.pyplot as plt import seaborn as sns ``` ## Data Acquisition ``` arxiv_files = sorted(glob('../data/arxiv/*')) scirate_files = sorted(glob('../data/scirate/*')) arxiv_data = [] for file in arxiv_files: with open(file, 'r') as f: arxiv_data.append(json.load(f)) print(len(arxiv_data)) scirate_data = [] for file in scirate_files: with open(file, 'r') as f: scirate_data.append(json.load(f)) print(len(scirate_data)) arxiv_data[-1]['date'] # 2018-03-30 Arxiv top arxiv_data[-1]['papers'][0] # 2018-03-30 Scirate top scirate_data[-1]['papers'][0] ``` ## EDA Entry ID: paper name (DOI?) We can create an arbitrary paper id that corresponds to each paper title, authors, and DOI. Possible features: - Arxiv order - Scirate order - Paper length (pages) - Title length (words) - Number of authors - Total # of citations of the authors (or first author? last author?) - Bag of Words of title - Bag of Words of abstract ``` # obtain features from both Arxiv and Scirate paper lists index = [] title = [] authors = [] num_authors = [] title_length = [] arxiv_order = [] submit_time = [] submit_weekday = [] paper_size = [] num_versions = [] for res in arxiv_data: date = res['date'] papers = res['papers'] for paper in papers: # create arbitrary paper id - currently, it is "date + Arxiv order" if paper['order'] < 10: idx = '_000' + str(paper['order']) elif 10 <= paper['order'] < 100: idx = '_00' + str(paper['order']) elif 100 <= paper['order'] < 1000: idx = '_0' + str(paper['order']) else: idx = '_' + str(paper['order']) index.append(date + idx) title.append(paper['title']) authors.append(paper['authors']) num_authors.append(len(paper['authors'])) title_length.append(len(paper['title'])) arxiv_order.append(paper['order']) submit_time.append(paper['submit_time']) submit_weekday.append(paper['submit_weekday']) paper_size.append(int(re.findall('\d+', paper['size'])[0])) num_versions.append(paper['num_versions']) len(index) # Scirate rank - string matching to find index of each paper in Arxiv list ### This process is pretty slow - needs to be refactored ### scirate_rank = [-1 for _ in range(len(index))] scite_score = [-1 for _ in range(len(index))] for res in scirate_data: papers = res['papers'] for paper in papers: title_sci = paper['title'] try: idx = title.index(title_sci) except: # if there is no just match, use difflib SequenceMatcher for title matching str_match = np.array([SequenceMatcher(a=title_sci, b=title_arx).ratio() for title_arx in title]) idx = np.argmax(str_match) scirate_rank[idx] = paper['rank'] scite_score[idx] = paper['scite_count'] # columns for pandas DataFrame columns = ['title', 'authors', 'num_authors', 'title_length', 'arxiv_order', 'submit_time', 'submit_weekday', 'paper_size', 'num_versions', 'scirate_rank', 'scite_score'] # this is too dirty... 
title = np.array(title).reshape(-1, 1) authors = np.array(authors).reshape(-1, 1) num_authors = np.array(num_authors).reshape(-1, 1) title_length = np.array(title_length).reshape(-1, 1) arxiv_order = np.array(arxiv_order).reshape(-1, 1) submit_time = np.array(submit_time).reshape(-1, 1) submit_weekday = np.array(submit_weekday).reshape(-1, 1) paper_size = np.array(paper_size).reshape(-1, 1) num_versions = np.array(num_versions).reshape(-1, 1) scirate_rank = np.array(scirate_rank).reshape(-1, 1) scite_score = np.array(scite_score).reshape(-1, 1) data = np.concatenate([ title, authors, num_authors, title_length, arxiv_order, submit_time, submit_weekday, paper_size, num_versions, scirate_rank, scite_score ], axis=1) df = pd.DataFrame(data=data, columns=columns, index=index) len(df) df.head() df[['arxiv_order', 'scite_score', 'scirate_rank']].astype(float).corr(method='spearman') ```
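As a quick visual complement to the rank correlation above, here is a hedged sketch of ours that scatter-plots the Scite score against the Arxiv listing order, after casting those columns back to floats (they were stored as objects when the frame was assembled).

```
# Hedged sketch: Scite score vs. Arxiv listing order.
# Note: papers not found on Scirate carry the placeholder score of -1 set above.
plot_df = df[['arxiv_order', 'scite_score']].astype(float)
ax = plot_df.plot(x='arxiv_order', y='scite_score', kind='scatter',
                  alpha=0.3, figsize=(10, 5))
ax.set_xlabel('Arxiv order')
ax.set_ylabel('Scite score')
plt.show()
```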