| Column | Type | Range / values |
| --- | --- | --- |
| code | string | lengths 2.5k – 6.36M |
| kind | string | 2 classes |
| parsed_code | string | lengths 0 – 404k |
| quality_prob | float64 | 0 – 0.98 |
| learning_prob | float64 | 0.03 – 1 |
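If you want to subset these records programmatically, the two probability columns make that straightforward. Below is a minimal sketch assuming the rows have been exported locally to a Parquet file; the file name `notebooks.parquet` and the 0.9 thresholds are illustrative assumptions, not part of the schema above.

```
import pandas as pd

# Load a local export of the records (hypothetical file name)
df = pd.read_parquet("notebooks.parquet")

# Keep only rows judged both high quality and high learning value
keep = df[(df["quality_prob"] > 0.9) & (df["learning_prob"] > 0.9)]
print(f"kept {len(keep)} of {len(df)} rows")
```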
--- _You are currently looking at **version 1.2** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._ ---

# Assignment 2 - Pandas Introduction

All questions are weighted the same in this assignment.

## Part 1

The following code loads the olympics dataset (olympics.csv), which was derived from the Wikipedia entry on [All Time Olympic Games Medals](https://en.wikipedia.org/wiki/All-time_Olympic_Games_medal_table), and does some basic data cleaning. The columns are organized as # of Summer games, Summer medals, # of Winter games, Winter medals, total # of games, total # of medals. Use this dataset to answer the questions below.

```
import pandas as pd

df = pd.read_csv('olympics.csv', index_col=0, skiprows=1)

for col in df.columns:
    if col[:2]=='01':
        df.rename(columns={col:'Gold'+col[4:]}, inplace=True)
    if col[:2]=='02':
        df.rename(columns={col:'Silver'+col[4:]}, inplace=True)
    if col[:2]=='03':
        df.rename(columns={col:'Bronze'+col[4:]}, inplace=True)
    if col[:1]=='№':
        df.rename(columns={col:'#'+col[1:]}, inplace=True)

names_ids = df.index.str.split(r'\s\(') # split the index by '('

df.index = names_ids.str[0] # the [0] element is the country name (new index)
df['ID'] = names_ids.str[1].str[:3] # the [1] element is the abbreviation or ID (take first 3 characters from that)

df = df.drop('Totals')
df.head()
```

### Question 0 (Example)

What is the first country in df?

*This function should return a Series.*

```
# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
    # This function returns the row for Afghanistan, which is a Series object. The assignment
    # question description will tell you the general format the autograder is expecting
    return df.iloc[0]

# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
answer_zero()
```

### Question 1

Which country has won the most gold medals in summer games?

*This function should return a single string value.*

```
def answer_one():
    # idxmax() returns the index label (the country name); argmax() would return an integer position
    return df['Gold'].idxmax()
```

### Question 2

Which country had the biggest difference between their summer and winter gold medal counts?

*This function should return a single string value.*

```
def answer_two():
    return abs(df['Gold'] - df['Gold.1']).idxmax()
```

### Question 3

Which country has the biggest difference between their summer gold medal counts and winter gold medal counts relative to their total gold medal count?

$$\frac{Summer~Gold - Winter~Gold}{Total~Gold}$$

Only include countries that have won at least 1 gold in both summer and winter.

*This function should return a single string value.*

```
def answer_three():
    dfc = df[(df['Gold'] > 0) & (df['Gold.1'] > 0)]
    dfc = abs(dfc['Gold'] - dfc['Gold.1']) / dfc['Gold.2']
    return dfc.idxmax()
```

### Question 4

Write a function that creates a Series called "Points" which is a weighted value where each gold medal (`Gold.2`) counts for 3 points, silver medals (`Silver.2`) for 2 points, and bronze medals (`Bronze.2`) for 1 point. The function should return only the column (a Series object) which you created, with the country names as indices.
*This function should return a Series named `Points` of length 146*

```
def answer_four():
    df['Points'] = df['Gold.2']*3 + df['Silver.2']*2 + df['Bronze.2']
    return df['Points']
```

## Part 2

For the next set of questions, we will be using census data from the [United States Census Bureau](http://www.census.gov). Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. [See this document](https://www2.census.gov/programs-surveys/popest/technical-documentation/file-layouts/2010-2015/co-est2015-alldata.pdf) for a description of the variable names.

The census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate.

### Question 5

Which state has the most counties in it? (hint: consider the sumlevel key carefully! You'll need this for future questions too...)

*This function should return a single string value.*

```
census_df = pd.read_csv('census.csv')
census_df.head()

def answer_five():
    # SUMLEV == 50 selects the county-level rows (state-level summary rows use a different SUMLEV)
    return census_df[census_df['SUMLEV'] == 50].groupby(['STNAME'])['CTYNAME'].count().idxmax()
```

### Question 6

**Only looking at the three most populous counties for each state**, what are the three most populous states (in order of highest population to lowest population)? Use `CENSUS2010POP`.

*This function should return a list of string values.*

```
def answer_six():
    dfc = (census_df[census_df['SUMLEV'] == 50]
           .sort_values(['STNAME', 'CENSUS2010POP'], ascending=[True, False])
           .groupby(['STNAME'])[['STNAME', 'CTYNAME', 'CENSUS2010POP']].head(3)
           .groupby(['STNAME'])['CENSUS2010POP'].sum()
           .sort_values(ascending=False).head(3))
    return list(dfc.index)
```

### Question 7

Which county has had the largest absolute change in population within the period 2010-2015? (Hint: population values are stored in columns POPESTIMATE2010 through POPESTIMATE2015; you need to consider all six columns.)

e.g. If County Population in the 5 year period is 100, 120, 80, 105, 100, 130, then its largest change in the period would be |130-80| = 50.

*This function should return a single string value.*

```
def answer_seven():
    # Work on county-level rows only, then take the row-wise max minus the row-wise min
    # across the six POPESTIMATE columns
    counties = census_df[census_df['SUMLEV'] == 50].copy()
    pop_cols = ['POPESTIMATE2010', 'POPESTIMATE2011', 'POPESTIMATE2012',
                'POPESTIMATE2013', 'POPESTIMATE2014', 'POPESTIMATE2015']
    change = counties[pop_cols].max(axis=1) - counties[pop_cols].min(axis=1)
    # Return the county name for the row with the largest absolute change
    return counties.loc[change.idxmax(), 'CTYNAME']

answer_seven()
```

### Question 8

In this datafile, the United States is broken up into four regions using the "REGION" column.

Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE2014.

*This function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index).*

```
def answer_eight():
    dfc = census_df[(census_df['SUMLEV'] == 50)
                    & (census_df['REGION'].isin([1, 2]))
                    & (census_df['CTYNAME'].str.startswith('Washington'))
                    & (census_df['POPESTIMATE2015'] > census_df['POPESTIMATE2014'])]
    return dfc.loc[:, ['STNAME', 'CTYNAME']]
```
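Since several of the answers above hinge on returning a country *name* rather than a row position, here is a tiny self-contained illustration (toy data, not the olympics dataset) of why `idxmax()` is used instead of `argmax()`:

```
import pandas as pd

s = pd.Series([10, 30, 20], index=['Algeria', 'Brazil', 'Canada'])

print(s.idxmax())  # 'Brazil' -> the index label, which is what the autograder expects
print(s.argmax())  # 1       -> the integer position of the maximum, not a label
```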
# MultiGroupDirectLiNGAM

## Import and settings

In this example, we need to import `numpy`, `pandas`, and `graphviz` in addition to `lingam`.

```
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import print_causal_directions, print_dagc, make_dot

print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])

np.set_printoptions(precision=3, suppress=True)
np.random.seed(0)
```

## Test data

We generate two datasets, each consisting of 6 variables.

```
x3 = np.random.uniform(size=10000)
x0 = 3.0*x3 + np.random.uniform(size=10000)
x2 = 6.0*x3 + np.random.uniform(size=10000)
x1 = 3.0*x0 + 2.0*x2 + np.random.uniform(size=10000)
x5 = 4.0*x0 + np.random.uniform(size=10000)
x4 = 8.0*x0 - 1.0*x2 + np.random.uniform(size=10000)
X1 = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T,
                  columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5'])
X1.head()

m = np.array([[0.0, 0.0, 0.0, 3.0, 0.0, 0.0],
              [3.0, 0.0, 2.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 6.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
              [8.0, 0.0,-1.0, 0.0, 0.0, 0.0],
              [4.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
make_dot(m)

x3 = np.random.uniform(size=1000)
x0 = 3.5*x3 + np.random.uniform(size=1000)
x2 = 6.5*x3 + np.random.uniform(size=1000)
x1 = 3.5*x0 + 2.5*x2 + np.random.uniform(size=1000)
x5 = 4.5*x0 + np.random.uniform(size=1000)
x4 = 8.5*x0 - 1.5*x2 + np.random.uniform(size=1000)
X2 = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T,
                  columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5'])
X2.head()

m = np.array([[0.0, 0.0, 0.0, 3.5, 0.0, 0.0],
              [3.5, 0.0, 2.5, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 6.5, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
              [8.5, 0.0,-1.5, 0.0, 0.0, 0.0],
              [4.5, 0.0, 0.0, 0.0, 0.0, 0.0]])
make_dot(m)
```

We create a list variable that contains the two datasets.

```
X_list = [X1, X2]
```

## Causal Discovery

To run causal discovery on multiple datasets, we create a `MultiGroupDirectLiNGAM` object and call the `fit` method.

```
model = lingam.MultiGroupDirectLiNGAM()
model.fit(X_list)
```

Using the `causal_order_` property, we can see the causal ordering found by the causal discovery.

```
model.causal_order_
```

Also, using the `adjacency_matrices_` property, we can see the adjacency matrix estimated for each dataset. As you can see from the following, the DAG in each dataset is correctly estimated.

```
print(model.adjacency_matrices_[0])
make_dot(model.adjacency_matrices_[0])
print(model.adjacency_matrices_[1])
make_dot(model.adjacency_matrices_[1])
```

For comparison, we run DirectLiNGAM on a single dataset formed by concatenating the two datasets.

```
X_all = pd.concat([X1, X2])
print(X_all.shape)

model_all = lingam.DirectLiNGAM()
model_all.fit(X_all)

model_all.causal_order_
```

You can see that the causal structure is not estimated correctly when the two datasets are simply concatenated.

```
make_dot(model_all.adjacency_matrix_)
```

## Bootstrapping

In `MultiGroupDirectLiNGAM`, bootstrapping can be executed in the same way as with the standard `DirectLiNGAM`.

```
results = model.bootstrap(X_list, 100)
```

The `bootstrap` method returns a list of `BootstrapResult` objects, one per dataset, so you can get the bootstrapping result for each dataset from the list.
``` cdc = results[0].get_causal_direction_counts(n_directions=8, min_causal_effect=0.01) print_causal_directions(cdc, 100) cdc = results[1].get_causal_direction_counts(n_directions=8, min_causal_effect=0.01) print_causal_directions(cdc, 100) dagc = results[0].get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01) print_dagc(dagc, 100) dagc = results[1].get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01) print_dagc(dagc, 100) ```
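As a quick numeric sanity check (a sketch, not part of the original example), we can compare each estimated adjacency matrix against the coefficient matrix used to generate the corresponding dataset. `B1_true` and `B2_true` below are simply the two arrays that were both assigned to `m` above, renamed here for clarity:

```
# True coefficient matrices used to generate X1 and X2 (copied from the cells above)
B1_true = np.array([[0.0, 0.0, 0.0, 3.0, 0.0, 0.0],
                    [3.0, 0.0, 2.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, 6.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                    [8.0, 0.0, -1.0, 0.0, 0.0, 0.0],
                    [4.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
B2_true = np.array([[0.0, 0.0, 0.0, 3.5, 0.0, 0.0],
                    [3.5, 0.0, 2.5, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, 6.5, 0.0, 0.0],
                    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                    [8.5, 0.0, -1.5, 0.0, 0.0, 0.0],
                    [4.5, 0.0, 0.0, 0.0, 0.0, 0.0]])

# Largest absolute deviation between each estimated matrix and the true coefficients
for i, B_true in enumerate([B1_true, B2_true]):
    err = np.abs(model.adjacency_matrices_[i] - B_true).max()
    print(f"dataset {i}: max |estimated - true| = {err:.3f}")
```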
# ODM2 API: Retrieve, manipulate and visualize ODM2 water quality measurement-type data This example shows how to use the ODM2 Python API (`odm2api`) to connect to an ODM2 database, retrieve data, and analyze and visualize the data. The [database (iUTAHGAMUT_waterquality_measurementresults_ODM2.sqlite)](https://github.com/ODM2/ODM2PythonAPI/blob/master/Examples/data/iUTAHGAMUT_waterquality_measurementresults_ODM2.sqlite) contains ["measurement"-type results](http://vocabulary.odm2.org/resulttype/measurement/). This example uses SQLite for the database because it doesn't require a server. However, the ODM2 Python API demonstrated here can alse be used with ODM2 databases implemented in MySQL, PostgreSQL or Microsoft SQL Server. More details on the ODM2 Python API and its source code and latest development can be found at https://github.com/ODM2/ODM2PythonAPI Adapted from notebook https://github.com/BiG-CZ/wshp2017_tutorial_content/blob/master/notebooks/ODM2_Example3.ipynb, based in part on earlier code and an ODM2 database from [Jeff Horsburgh's group](http://jeffh.usu.edu) at Utah State University. [Emilio Mayorga](https://github.com/emiliom/) ``` import os import datetime import matplotlib.pyplot as plt %matplotlib inline from shapely.geometry import Point import pandas as pd import geopandas as gpd import folium from folium.plugins import MarkerCluster import odm2api from odm2api.ODMconnection import dbconnection import odm2api.services.readService as odm2rs from odm2api.models import SamplingFeatures "{} UTC".format(datetime.datetime.utcnow()) pd.__version__, gpd.__version__, folium.__version__ ``` **odm2api version used** to run this notebook: ``` odm2api.__version__ ``` ## Connect to the ODM2 SQLite Database This example uses an ODM2 SQLite database file loaded with water quality sample data from multiple monitoring sites in the [iUTAH](https://iutahepscor.org/) Gradients Along Mountain to Urban Transitions ([GAMUT](http://data.iutahepscor.org/mdf/Data/Gamut_Network/)) water quality monitoring network. Water quality samples have been collected and analyzed for nitrogen, phosphorus, total coliform, E-coli, and some water isotopes. The [database (iUTAHGAMUT_waterquality_measurementresults_ODM2.sqlite)](https://github.com/ODM2/ODM2PythonAPI/blob/master/Examples/data/iUTAHGAMUT_waterquality_measurementresults_ODM2.sqlite) contains ["measurement"-type results](http://vocabulary.odm2.org/resulttype/measurement/). The example database is located in the `data` sub-directory. ``` # Assign directory paths and SQLite file name dbname_sqlite = "iUTAHGAMUT_waterquality_measurementresults_ODM2.sqlite" sqlite_pth = os.path.join("data", dbname_sqlite) try: session_factory = dbconnection.createConnection('sqlite', sqlite_pth) read = odm2rs.ReadODM2(session_factory) print("Database connection successful!") except Exception as e: print("Unable to establish connection to the database: ", e) ``` ## Run Some Basic Queries on the ODM2 Database This section shows some examples of how to use the API to run both simple and more advanced queries on the ODM2 database, as well as how to examine the query output in convenient ways thanks to Python tools. Simple query functions like **getVariables( )** return objects similar to the entities in ODM2, and individual attributes can then be retrieved from the objects returned. ### Get all Variables A simple query with simple output. 
``` # Get all of the Variables from the ODM2 database then read the records # into a Pandas DataFrame to make it easy to view and manipulate allVars = read.getVariables() variables_df = pd.DataFrame.from_records([vars(variable) for variable in allVars], index='VariableID') variables_df.head(10) ``` ### Get all People Another simple query. ``` allPeople = read.getPeople() pd.DataFrame.from_records([vars(person) for person in allPeople]).head() ``` ### Site Sampling Features: pass arguments to the API query Some of the API functions accept arguments that let you subset what is returned. For example, I can query the database using the **getSamplingFeatures( )** function and pass it a SamplingFeatureType of "Site" to return a list of those SamplingFeatures that are Sites. ``` # Get all of the SamplingFeatures from the ODM2 database that are Sites siteFeatures = read.getSamplingFeatures(sftype='Site') # Read Sites records into a Pandas DataFrame # "if sf.Latitude" is used only to instantiate/read Site attributes) df = pd.DataFrame.from_records([vars(sf) for sf in siteFeatures if sf.Latitude]) ``` Since we know this is a *geospatial* dataset (Sites, which have latitude and longitude), we can use more specialized Python tools like `GeoPandas` (geospatially enabled Pandas) and `Folium` interactive maps. ``` # Create a GeoPandas GeoDataFrame from Sites DataFrame ptgeom = [Point(xy) for xy in zip(df['Longitude'], df['Latitude'])] gdf = gpd.GeoDataFrame(df, geometry=ptgeom, crs={'init': 'epsg:4326'}) gdf.head(5) # Number of records (features) in GeoDataFrame len(gdf) # A trivial but easy-to-generate GeoPandas plot gdf.plot(); ``` A site has a `SiteTypeCV`. Let's examine the site type distribution, and use that information to create a new GeoDataFrame column to specify a map marker color by `SiteTypeCV`. ``` gdf['SiteTypeCV'].value_counts() gdf["color"] = gdf.apply(lambda feat: 'green' if feat['SiteTypeCV'] == 'Stream' else 'red', axis=1) ``` Note: While the database holds a copy of the **ODM2 Controlled Vocabularies**, the complete description of each CV term is available from a web request to the CV API at http://vocabulary.odm2.org. Want to know more about how a "spring" is defined? Here's one simple way, using `Pandas` to access and parse the CSV web service response. ``` sitetype = 'spring' pd.read_csv("http://vocabulary.odm2.org/api/v1/sitetype/{}/?format=csv".format(sitetype)) ``` **Now we'll create an interactive and helpful `Folium` map of the sites.** This map features: - Automatic panning to the location of the sites (no hard wiring, except for the zoom scale), based on GeoPandas functionality and information from the ODM2 Site Sampling Features - Color coding by `SiteTypeCV` - Marker clustering - Simple marker pop ups with content from the ODM2 Site Sampling Features ``` c = gdf.unary_union.centroid m = folium.Map(location=[c.y, c.x], tiles='CartoDB positron', zoom_start=11) marker_cluster = MarkerCluster().add_to(m) for idx, feature in gdf.iterrows(): folium.Marker(location=[feature.geometry.y, feature.geometry.x], icon=folium.Icon(color=feature['color']), popup="{0} ({1}): {2}".format( feature['SamplingFeatureCode'], feature['SiteTypeCV'], feature['SamplingFeatureName']) ).add_to(marker_cluster) # Done with setup. Now render the map m ``` ### Add a new Sampling Feature Just to llustrate how to add a new entry. We won't "commit" (save) the sampling feature to the database. 
``` sitesf0 = siteFeatures[0] try: newsf = SamplingFeatures() session = session_factory.getSession() newsf.FeatureGeometryWKT = "POINT(-111.946 41.718)" newsf.Elevation_m = 100 newsf.ElevationDatumCV = sitesf0.ElevationDatumCV newsf.SamplingFeatureCode = "TestSF" newsf.SamplingFeatureDescription = "this is a test to add a sampling feature" newsf.SamplingFeatureGeotypeCV = "Point" newsf.SamplingFeatureTypeCV = sitesf0.SamplingFeatureTypeCV newsf.SamplingFeatureUUID = sitesf0.SamplingFeatureUUID+"2" session.add(newsf) # To save the new sampling feature, do session.commit() print("New sampling feature created, but not saved to database.\n") print(newsf) except Exception as e : print("error adding a sampling feature: {}".format(e)) ``` ### Get Objects and Related Objects from the Database (SamplingFeatures example) This code shows some examples of how objects and related objects can be retrieved using the API. In the following, we use the **getSamplingFeatures( )** function to return a particular sampling feature by passing in its SamplingFeatureCode. This function returns a list of SamplingFeature objects, so just get the first one in the returned list. ``` # Get the SamplingFeature object for a particular SamplingFeature by passing its SamplingFeatureCode sf = read.getSamplingFeatures(codes=['RB_1300E'])[0] type(sf) # Simple way to examine the content (properties) of a Python object, as if it were a dictionary vars(sf) ``` You can also drill down and get objects linked by foreign keys. The API returns related objects in a nested hierarchy so they can be interrogated in an object oriented way. So, if I use the **getResults( )** function to return a Result from the database (e.g., a "Measurement" Result), I also get the associated Action that created that Result (e.g., a "Specimen analysis" Action). ``` try: # Call getResults, but return only the first Result firstResult = read.getResults()[0] frfa = firstResult.FeatureActionObj frfaa = firstResult.FeatureActionObj.ActionObj print("The FeatureAction object for the Result is: ", frfa) print("The Action object for the Result is: ", frfaa) # Print some Action attributes in a more human readable form print("\nThe following are some of the attributes for the Action that created the Result: ") print("ActionTypeCV: {}".format(frfaa.ActionTypeCV)) print("ActionDescription: {}".format(frfaa.ActionDescription)) print("BeginDateTime: {}".format(frfaa.BeginDateTime)) print("EndDateTime: {}".format(frfaa.EndDateTime)) print("MethodName: {}".format(frfaa.MethodObj.MethodName)) print("MethodDescription: {}".format(frfaa.MethodObj.MethodDescription)) except Exception as e: print("Unable to demo Foreign Key Example: {}".format(e)) ``` ### Get a Result and its Attributes Because all of the objects are returned in a nested form, if you retrieve a result, you can interrogate it to get all of its related attributes. When a Result object is returned, it includes objects that contain information about Variable, Units, ProcessingLevel, and the related Action that created that Result. 
``` print("------- Example of Retrieving Attributes of a Result -------") try: firstResult = read.getResults()[0] frfa = firstResult.FeatureActionObj print("The following are some of the attributes for the Result retrieved: ") print("ResultID: {}".format(firstResult.ResultID)) print("ResultTypeCV: {}".format(firstResult.ResultTypeCV)) print("ValueCount: {}".format(firstResult.ValueCount)) print("ProcessingLevel: {}".format(firstResult.ProcessingLevelObj.Definition)) print("SampledMedium: {}".format(firstResult.SampledMediumCV)) print("Variable: {}: {}".format(firstResult.VariableObj.VariableCode, firstResult.VariableObj.VariableNameCV)) print("Units: {}".format(firstResult.UnitsObj.UnitsName)) print("SamplingFeatureID: {}".format(frfa.SamplingFeatureObj.SamplingFeatureID)) print("SamplingFeatureCode: {}".format(frfa.SamplingFeatureObj.SamplingFeatureCode)) except Exception as e: print("Unable to demo example of retrieving Attributes of a Result: {}".format(e)) ``` The last block of code returns a particular Measurement Result. From that I can get the SamplingFeaureID (in this case 26) for the Specimen from which the Result was generated. But, if I want to figure out which Site the Specimen was collected at, I need to query the database to get the related Site SamplingFeature. I can use **getRelatedSamplingFeatures( )** for this. Once I've got the SamplingFeature for the Site, I could get the rest of the SamplingFeature attributes. ### Retrieve the "Related" Site at which a Specimen was collected ``` # Pass the Sampling Feature ID of the specimen, and the relationship type relatedSite = read.getRelatedSamplingFeatures(sfid=26, relationshiptype='Was Collected at')[0] vars(relatedSite) ``` ----------------------------------------- ## Return Results and Data Values for a Particular Site/Variable From the list of Variables returned above and the information about the SamplingFeature I queried above, I know that VariableID = 2 for Total Phosphorus and SiteID = 1 for the Red Butte Creek site at 1300E. I can use the **getResults( )** function to get all of the Total Phosphorus results for this site by passing in the VariableID and the SiteID. ``` siteID = 1 # Red Butte Creek at 1300 E (obtained from the getRelatedSamplingFeatures query) v = variables_df[variables_df['VariableCode'] == 'TP'] variableID = v.index[0] results = read.getResults(siteid=siteID, variableid=variableID, restype="Measurement") # Get the list of ResultIDs so I can retrieve the data values associated with all of the results resultIDList = [x.ResultID for x in results] len(resultIDList) ``` ### Retrieve the Result (Data) Values, Then Create a Quick Time Series Plot of the Data Now I can retrieve all of the data values associated with the list of Results I just retrieved. In ODM2, water chemistry measurements are stored as "Measurement" results. Each "Measurement" Result has a single data value associated with it. So, for convenience, the **getResultValues( )** function allows you to pass in a list of ResultIDs so you can get the data values for all of them back in a Pandas data frame object, which is easier to work with. Once I've got the data in a Pandas data frame object, I can use the **plot( )** function directly on the data frame to create a quick visualization. 
``` # Get all of the data values for the Results in the list created above # Call getResultValues, which returns a Pandas Data Frame with the data resultValues = read.getResultValues(resultids=resultIDList, lowercols=False) resultValues.head() # Plot the time sequence of Measurement Result Values ax = resultValues.plot(x='ValueDateTime', y='DataValue', title=relatedSite.SamplingFeatureName, kind='line', use_index=True, linestyle='solid', style='o') ax.set_ylabel("{0} ({1})".format(results[0].VariableObj.VariableNameCV, results[0].UnitsObj.UnitsAbbreviation)) ax.set_xlabel('Date/Time') ax.legend().set_visible(False) ``` ### End with a fancier plot, facilitated via a function If I'm going to reuse a series of steps, it's always helpful to write little generic functions that can be called to quickly and consistently get what we need. To conclude this demo, here's one such function that encapsulates the `VariableID`, `getResults` and `getResultValues` queries we showed above. Then we leverage it to create a nice 2-variable (2-axis) plot of TP and TN vs time, and conclude with a reminder that we have ready access to related metadata about analytical lab methods and such. ``` def get_results_and_values(siteid, variablecode): v = variables_df[variables_df['VariableCode'] == variablecode] variableID = v.index[0] results = read.getResults(siteid=siteid, variableid=variableID, restype="Measurement") resultIDList = [x.ResultID for x in results] resultValues = read.getResultValues(resultids=resultIDList, lowercols=False) return resultValues, results ``` Fancy plotting, leveraging the `Pandas` plot method and `matplotlib`. ``` # Plot figure and axis set up f, ax = plt.subplots(1, figsize=(13, 6)) # First plot (left axis) VariableCode = 'TP' resultValues_TP, results_TP = get_results_and_values(siteID, VariableCode) resultValues_TP.plot(x='ValueDateTime', y='DataValue', label=VariableCode, style='o-', kind='line', ax=ax) ax.set_ylabel("{0}: {1} ({2})".format(VariableCode, results_TP[0].VariableObj.VariableNameCV, results_TP[0].UnitsObj.UnitsAbbreviation)) # Second plot (right axis) VariableCode = 'TN' resultValues_TN, results_TN = get_results_and_values(siteID, VariableCode) resultValues_TN.plot(x='ValueDateTime', y='DataValue', label=VariableCode, style='^-', kind='line', ax=ax, secondary_y=True) ax.right_ax.set_ylabel("{0}: {1} ({2})".format(VariableCode, results_TN[0].VariableObj.VariableNameCV, results_TN[0].UnitsObj.UnitsAbbreviation)) # Tweak the figure ax.legend(loc='upper left') ax.right_ax.legend(loc='upper right') ax.grid(True) ax.set_xlabel('') ax.set_title(relatedSite.SamplingFeatureName); ``` Finally, let's show some useful metadata. Use the `Results` records and their relationship to `Actions` (via `FeatureActions`) to **extract and print out the Specimen Analysis methods used for TN and TP**. Or at least for the *first* result for each of the two variables; methods may have varied over time, but the specific method associated with each result is stored in ODM2 and available. ``` results_faam = lambda results, i: results[i].FeatureActionObj.ActionObj.MethodObj print("TP METHOD: {0} ({1})".format(results_faam(results_TP, 0).MethodName, results_faam(results_TP, 0).MethodDescription)) print("TN METHOD: {0} ({1})".format(results_faam(results_TN, 0).MethodName, results_faam(results_TN, 0).MethodDescription)) ```
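If the sample-by-sample scatter is too busy for a particular use, the same data frames can be aggregated with plain pandas before plotting. Here is a minimal sketch, assuming the `resultValues_TP` and `results_TP` objects from the cell above; monthly averaging is just an illustrative choice:

```
# Aggregate the TP measurement values to monthly means before plotting
tp = resultValues_TP.copy()
tp['ValueDateTime'] = pd.to_datetime(tp['ValueDateTime'])
monthly_tp = tp.set_index('ValueDateTime')['DataValue'].resample('M').mean()

ax = monthly_tp.plot(style='o-', title=relatedSite.SamplingFeatureName)
ax.set_ylabel("Monthly mean {0} ({1})".format(results_TP[0].VariableObj.VariableNameCV,
                                              results_TP[0].UnitsObj.UnitsAbbreviation))
ax.set_xlabel('Date')
```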
# <center>Solution: Choropleth Maps.</center> **<center>UFRN-DATA SCIENCE</center> ** **<center>Luis Ortiz</center> ** **<center>Elizabeth Cabrera</center> ** ### <span style="background-color: #000000; color:#FDFEFE">Step i. Load population data.</span> ``` import os import folium import json import pandas as pd from branca.colormap import linear import numpy as np from shapely.geometry import Polygon from shapely.geometry import Point from numpy import random # dataset name dataset_pop_2017 = os.path.join('data', 'population_2017.csv') # read the data to a dataframe data2017 = pd.read_csv(dataset_pop_2017) # eliminate spaces in name of columns data2017.columns = [cols.replace(' ', '_') for cols in data2017.columns] # filtering data about Northeast Region states estados = ['MA', 'PI', 'CE', 'RN', 'PB', 'PE', 'AL', 'SE', 'BA' ] append_data = [] for e in estados: dataND = data2017[data2017['UF'] == e] # print(len(dataND)) append_data.append(dataND) dataND = pd.concat(append_data, axis=0).reset_index(drop=True) print(dataND.head(10)) print(dataND['NOME_DO_MUNICÍPIO']) # dataND = dataND.sort_values('NOME_DO_MUNICÍPIO') # print(len(dataND)) # print(dataND.head(5)) ``` ### <span style="background-color: #000000; color:#FDFEFE">Step ii. Load GEOJSON data.</span> ``` # searching and loading the geojs-xx-mun.json files (xx=21-29) for i in range(21,30): endND = os.path.join('geojson', 'geojs-'+str(i)+'-mun.json') geo_json_ND = json.load(open(endND,encoding='latin-1')) if i == 21: geo_json_NDx = geo_json_ND # print(len(geo_json_NDx['features'])) else: for city in geo_json_ND['features']: geo_json_NDx['features'].append(city) # print(len(geo_json_NDx['features'])) # list all cities in the state of Northeast cities = [] for city in geo_json_NDx['features']: cities.append(city['properties']['description']) cities # print(cities.index('Natal')) # print(geo_json_NDx['features'][1031]['properties']['description']) # Cleaning data of Geojson data # CE geo_json_NDx['features'][526]['properties']['description'] = 'Itapajé' geo_json_NDx['features'][526]['properties']['name'] = 'Itapajé' # RN geo_json_NDx['features'][736]['properties']['description'] = 'Serra Caiada' geo_json_NDx['features'][736]['properties']['name'] = 'Serra Caiada' # PB geo_json_NDx['features'][946]['properties']['description'] = 'Quixaba' geo_json_NDx['features'][946]['properties']['name'] = 'Quixaba' geo_json_NDx['features'][964]['properties']['description'] = 'Joca Claudino' geo_json_NDx['features'][964]['properties']['name'] = 'Joca Claudino' geo_json_NDx['features'][990]['properties']['description'] = 'São Vicente do Seridó' geo_json_NDx['features'][990]['properties']['name'] = 'São Vicente do Seridó' geo_json_NDx['features'][1003]['properties']['description'] = 'Tacima' geo_json_NDx['features'][1003]['properties']['name'] = 'Tacima' #PE geo_json_NDx['features'][1031]['properties']['description'] = 'Belém do São Francisco' geo_json_NDx['features'][1031]['properties']['name'] = 'Belém do São Francisco' geo_json_NDx['features'][1089]['properties']['description'] = 'Iguaracy' geo_json_NDx['features'][1089]['properties']['name'] = 'Iguaracy' geo_json_NDx['features'][1111]['properties']['description'] = 'Lagoa de Itaenga' geo_json_NDx['features'][1111]['properties']['name'] = 'Lagoa de Itaenga' # SE geo_json_NDx['features'][1324]['properties']['description'] = 'Graccho Cardoso' geo_json_NDx['features'][1324]['properties']['name'] = 'Graccho Cardoso' ``` ### <span style="background-color: #000000; color:#FDFEFE">Step iii. 
Create a choropleth map.</span>

```
m = folium.Map(
    location=[-5.826592, -35.212558],
    zoom_start=7,
    tiles='Stamen Terrain'
)

# Create a threshold scale for the legend
threshold_scaleND = np.linspace(dataND['POPULAÇÃO_ESTIMADA'].min(),
                                dataND['POPULAÇÃO_ESTIMADA'].max(),
                                6, dtype=int).tolist()

m.choropleth(
    geo_data=geo_json_NDx,
    data=dataND,
    columns=['NOME_DO_MUNICÍPIO', 'POPULAÇÃO_ESTIMADA'],
    key_on='feature.properties.description',
    fill_color='YlOrBr',
    legend_name='Estimated Population of the Northeast Region (2017)',
    highlight=True,
    threshold_scale=threshold_scaleND)

# Saving the choropleth map
path = 'map.html'
m.save(path)
```
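In newer folium releases the `Map.choropleth` method is deprecated in favour of the `folium.Choropleth` class. A roughly equivalent call is sketched below, using the same data and styling as above; note that custom scale values are passed via `bins` rather than `threshold_scale`, which is worth double-checking against the folium version you have installed:

```
# Equivalent choropleth using the folium.Choropleth class (sketch)
folium.Choropleth(
    geo_data=geo_json_NDx,
    data=dataND,
    columns=['NOME_DO_MUNICÍPIO', 'POPULAÇÃO_ESTIMADA'],
    key_on='feature.properties.description',
    fill_color='YlOrBr',
    legend_name='Estimated Population of the Northeast Region (2017)',
    highlight=True,
    bins=threshold_scaleND
).add_to(m)

m.save('map.html')
```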
___ <a href='https://github.com/ai-vithink'> <img src='https://avatars1.githubusercontent.com/u/41588940?s=200&v=4' /></a> ___ # Regression Plots Seaborn has many built-in capabilities for regression plots, however we won't really discuss regression until the machine learning section of the course, so we will only cover the **lmplot()** function for now. **lmplot** allows you to display linear models, but it also conveniently allows you to split up those plots based off of features, as well as coloring the hue based off of features. Let's explore how this works: ``` import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline from IPython.display import HTML HTML('''<script> code_show_err=false; function code_toggle_err() { if (code_show_err){ $('div.output_stderr').hide(); } else { $('div.output_stderr').show(); } code_show_err = !code_show_err } $( document ).ready(code_toggle_err); </script> To toggle on/off output_stderr, click <a href="javascript:code_toggle_err()">here</a>.''') # To hide warnings, which won't change the desired outcome. %%HTML <style type="text/css"> table.dataframe td, table.dataframe th { border: 3px black solid !important; color: black !important; } # For having gridlines import warnings warnings.filterwarnings("ignore") sns.set_style('darkgrid') tips = sns.load_dataset('tips') tips.head() # Simple linear plot using lmplot # Feature you want on x axis vs the feature on y axis sns.lmplot(x='total_bill', y='tip', data=tips) # Gives us a scatter plot with linear fit on top of it. sns.lmplot(x = 'total_bill', y = 'tip',data = tips,hue ="sex") ``` * Hue to have some separation based off of a categorical label/feature/column. * Gives us 2 scatter plots and 2 linear fits. * It tells us male and females have almost same linear fit on basis of total_bill vs tip given. * Also we can pass in matplotlib style markers and marker types ### Working with Markers lmplot kwargs get passed through to **regplot** which is a more general form of lmplot(). regplot has a scatter_kws parameter that gets passed to plt.scatter. So you want to set the s parameter in that dictionary, which corresponds (a bit confusingly) to the squared markersize. In other words you end up passing a dictionary with the base matplotlib arguments, in this case, s for size of a scatter plot. In general, you probably won't remember this off the top of your head, but instead reference the documentation. ``` sns.lmplot(x = 'total_bill', y = 'tip',data = tips,hue ="sex",markers=['o','v']) # List of markers = [] passed as hue has 2 elements. # If the plot is small for you then we can pass in a scatter_kws parameter to sns, which makes use of matplotlib internally. # Seaborn calls matplotlib under the hood, and we can affect matplotlib from seaborn by passing parameter as dict. sns.lmplot(x = 'total_bill', y = 'tip',data = tips,hue ="sex",markers=['o','v'],scatter_kws={"s":100}) # See how marker size increases. s stands for size. # Reference these in documentation, though this degree of modification is not needed everyday. ``` ## Using a Grid We can add more variable separation through columns and rows with the use of a grid. Just indicate this with the col or row arguments: ``` # Instead of separating by hue we can also use a grid. sns.lmplot(x = 'total_bill', y = 'tip',data = tips,col='sex') # col gives us 2 separate columns separated by sex category instead of separation by colour as we do using hue. 
# Similarly, we can build grids of row and col simultaneously in the following manner
sns.lmplot(x='total_bill', y='tip', data=tips, col='sex', row='time')

# If you want to plot even more labels, we can also use hue together with row and col, resulting in:
sns.lmplot(x='total_bill', y='tip', data=tips, col='day', row='time', hue='sex')
# Too much info, try eliminating something.

sns.lmplot(x='total_bill', y='tip', data=tips, col='day', hue='sex')
# Better now, but the size and aspect look odd and are hard to read.
```

## Aspect and Size

Seaborn figures can have their size and aspect ratio adjusted with the **size** and **aspect** parameters:

```
# We can change the ratio of height to width, called aspect
sns.lmplot(x='total_bill', y='tip', data=tips, col='day', hue='sex', aspect=0.6, size=8)
# Much better, but the font size still looks kind of small, right?
```

* NOTE: For more advanced features like setting the marker size or changing the marker type, please refer to the documentation.
* [Documentation Regression Plot - Seaborn](https://seaborn.pydata.org/generated/seaborn.regplot.html)

# Up Next : Font Size, Styling, Colour etc.
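One version-related note: in seaborn 0.9 and later, the `size` parameter of `lmplot` was renamed to `height`, so on a recent installation the last call above would be written roughly as:

```
# 'height' replaces the older 'size' argument in newer seaborn versions
sns.lmplot(x='total_bill', y='tip', data=tips, col='day', hue='sex',
           aspect=0.6, height=8)
```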
``` from traitlets.config.manager import BaseJSONConfigManager from pathlib import Path path = Path.home() / ".jupyter" / "nbconfig" cm = BaseJSONConfigManager(config_dir=str(path)) tmp = cm.update( "rise", { "theme": "black", "transition": "zoom", "start_slideshow_at": "selected", #"autolaunch": True, "width": "100%", "height": "100%", "header": "", "footer":"", "scroll": True, "enable_chalkboard": True, "slideNumber": True, "center": False, "controlsLayout": "edges", "slideNumber": True, "hash": True, } ) ``` # ALICE IN PYTHONLAND: OOP IN LEWIS CARROLL GAMES ## Maria Teresa Grifa ### December, 12, 2020 ## About me ### PhD Canditate in Applied Mathematics at University of L'Aquila, Italy ### ... within a month, I'll become a Machine Learning Specialist 🤓 😎 🤓 😎 ![reallyurl](https://media.giphy.com/media/dSetRSJcR3PGqkvjRg/giphy.gif "really") ### Lewis Carroll's Games and Riddles Lewis Carroll is the doppelgänger of Charles Dodgson. He was a mathematician and logician, he was a devout Euclidean, believing that planes are flat and parallel lines never meet.... ![reallyurl](https://media.giphy.com/media/L20E2bh3ntSCc/giphy.gif "really") Charles Dodgson was also a photographer, a priest and a Gamer ...Lewis Carroll was a Victorian nerd, we can easily think of him as a Python user <img src=https://www.python.org/static/community_logos/python-logo-master-v3-TM.png width="100" align="center"> ### Alice's Adventures in Wonderland Alice's Adventures in Wonderland was published in 1865 It contains many parodies of Victorian popular culture and many maths paradoxes and puzzles. Or we can consider the novel as a creepy march through a world of characters who seem to be set on making life as frustrating and manically as possible they can. ### Alice in Pop Culture Salvador Dalí produced 12 illustrations based on Alice's Adventures in Wonderland. Alice in Cyberspace is a radio drama series presented by the Lewis Carroll Society of Canada Jefferson Airplane's song White Rabbit mentions Alice, the Dormouse, the hookah-smoking caterpillar, it shows parallels between the story and the '70 music culture. ## 42 Obsession in the Nerd Subculture 🎲 random.seed(42) 🚀 In The Hitchhiker's Guide to the Galaxy, the great question of life, the universe and everything is 42 🍄 Alice's Adventures in Wonderland has 42 illustrations<br> 🍄 Alice's Adventures in Wonderland contains a Rule 42 ⏳ ...more ## RoadMap: * Class for defining Wonderland characters * Class method for defining the main character * Class inheritance for differentiate Characters * Get the book from Project Gutenberg * Find the frequency of the word 42 in the book * Hidden riddle: Alice's 42 nightmare with multiplication ## Class ``` class WonderlandMember: """ Creates a character of Wonderland """ def __init__(self, name: str, species: str, fantastic: bool = False): """Dunder init method, this method is called under the wood.... """ self.name = name self.species = species self.fantastic = fantastic def __repr__(self) -> str: """Representation function for the instances of the class Returns: str: class intances """ return ( f"{self.__class__.__name__}(name='{self.name}', " f"species='{self.species}', fanstatic='{self.fantastic}')" ) mad_hat = WonderlandMember('Mad Hat', 'human', True) print(mad_hat) ``` ## Class method decorator ``` class WonderlandMember: """ Creates a character of Wonderland """ def __init__(self, name: str, species: str, fantastic: bool = False): """Dunder init method, this method is called under the wood.... 
""" self.name = name self.species = species self.fantastic = fantastic @classmethod def hero(cls) -> "WonderlandMember": """Modify class state Returns: class: class object """ return cls("Alice", "human", True) def __repr__(self) -> str: """Representation function for the instances of the class Returns: str: class intances """ return ( f"{self.__class__.__name__}(name='{self.name}', " f"species={self.species}, fanstatic='{self.fantastic}')" ) alice = WonderlandMember.hero() cat = WonderlandMember("Cheshire Cat", "animal") queen = WonderlandMember("Queen of Hearth", "human") print(alice) print() print(cat) print() print(queen) ``` ## Property decorator ``` class WonderlandMember: """ Creates a character of Wonderland """ def __init__(self, name: str, species: str, fantastic: bool = False): """Dunder init method, this method is call under the wood.... """ self.name = name self.species = species self.fantastic = fantastic @classmethod def hero(cls) -> "WonderlandMember": """Modify class state Returns: class: class object """ return cls("Alice", "human", True) @property def mood(self) -> str: if (self.species == "human") and (self.fantastic == True): return "angry" elif (self.species == "human") and (self.fantastic == False): return "whimsical" else: return "mysterious" def __repr__(self) -> str: """Representation function for the intances of the class Returns: str: class intances """ return ( f"{self.__class__.__name__}(name='{self.name}', " f"species='{self.species}', fanstatic='{self.fantastic}')" ) alice = WonderlandMember.hero() cat = WonderlandMember("Cheshire Cat", "animal") queen = WonderlandMember("Queen of Hearth", "human") print(f"{alice.name} is {alice.mood}") print() print(f"{cat.name} is {cat.mood}") print() print(f"{queen.name} is {queen.mood}") ``` ## Getter and Setter ``` class WonderlandMember: """ Creates a character of Wonderland """ def __init__(self, name: str, species: str, interpretation: str): """Dunder init method, this method is called under the wood.... 
""" self.name = name self.species = species self._interpretation = interpretation @property def interpretation(self): print("@property class method called") return self._interpretation @interpretation.setter def interpretation(self,value): print("@interpretation.setter class method called") self._interpretation = value def __repr__(self) -> str: """Representation function for the instances of the class Returns: str: class intances """ return ( f"{self.__class__.__name__}(name='{self.name}', " f"species='{self.species}', fanstatic='{self.fantastic}')" ) caterpillar = WonderlandMember("Caterpillar", "animal", "wise adult") print(f'The {caterpillar.name} is an {caterpillar.species} that represents a {caterpillar.interpretation}') print("="*70) caterpillar.interpretation = 'Hippy hookah smoker' print(f"Caterpillar interpretation is: {caterpillar.interpretation}") print("="*70) WonderlandMember.interpretation = 'teacher' print(f"Caterpillar interpretation is: a {WonderlandMember.interpretation}") ``` ## Class inheritance ``` class WonderlandMember: """ Creates a character of Wonderland """ def __init__(self, name: str, species: str, fantastic: bool = False): self.name = name self.species = species self.fantastic = fantastic @classmethod def hero(cls) -> "WonderlandMember": """Modify class state Returns: class: class object """ return cls("Alice", "human", True) @property def mood(self) -> str: if (self.species == "human") and (self.fantastic == True): return "angry" elif (self.species == "human") and (self.fantastic == False): return "whimsical" else: return "mysterious" def __repr__(self) -> str: """Representation function for the instances of the class Returns: str: class intances """ return ( f"{self.__class__.__name__}(name='{self.name}', " f"species='{self.species}', fanstatic='{self.fantastic}')" ) class StrangeAnimal(WonderlandMember): """Creates fantastic animals in Wonderland Args: WonderlandMember (class): creates generic Wonderland member """ def __init__( self, name: str, species: str, artefact: str, fantastic: bool = True ): super().__init__(name, species, fantastic) self.artefact = artefact @classmethod def white_rabbit(cls) -> "StrangeAnimal": return cls("White Rabbit", "animal", "clock") def __repr__(self) -> str: return ( f"{self.__class__.__name__}(name='{self.name}', " f"species='{self.species}', artefact='{self.artefact}'" ) rabbit = StrangeAnimal.white_rabbit() print(rabbit) print(f"The {rabbit.name}'s is {rabbit.mood}") ``` ## How many times does 42 apper in the book? ![read](https://media.giphy.com/media/SiMcadhDEZDm93GmTL/giphy.gif) ``` import requests def get_raw_book(url): response = requests.get(url) raw = response.text return raw url_book = 'https://gist.githubusercontent.com/phillipj/4944029/raw/75ba2243dd5ec2875f629bf5d79f6c1e4b5a8b46/alice_in_wonderland.txt' book = get_raw_book(url_book) ``` ## Let's download the book from Project Gutenberg ``` print(book) import nltk import matplotlib.pyplot as plt tokens = nltk.word_tokenize(book) tokens = [token.lower() for token in tokens] text = nltk.Text(tokens) text.dispersion_plot(['forty-two']) ``` ## Unfortunately 42 seems to appear just once 😓 😭 😓 😭 ## Using a prior knowledge... ## ... 
aka Wikipedia

## One of the 42 games appears in Chapter II: The Pool of Tears

## (Quite a sad title... our hero is not having a good time during her trip)

## Data Class

```
from dataclasses import dataclass


@dataclass
class Player:
    """Create Player Dataclass"""

    name: str


player_1 = Player(alice.name)
print(player_1)


from typing import ClassVar
from dataclasses import dataclass, field


@dataclass
class Player:
    """Create Player Dataclass"""

    name: str
    count_player: ClassVar[int] = 0
    player_number: int = field(init=False)

    def __post_init__(self):
        self.player_number = Player.update_counter()

    @classmethod
    def update_counter(cls):
        cls.count_player += 1
        return cls.count_player


player_1 = Player(rabbit.name)
player_2 = Player(alice.name)
print(player_1)
print(player_2)


from typing import ClassVar
from dataclasses import dataclass, field


@dataclass
class Player:
    """Create Player Dataclass"""

    name: str
    count_player: ClassVar[int] = 0
    player_number: int = field(init=False)

    def __post_init__(self):
        self.player_number = Player.update_counter()

    @classmethod
    def update_counter(cls):
        cls.count_player += 1
        return cls.count_player

    def check_who_plays(self):
        if (self.player_number == 1) and (self.name == "Alice"):
            raise ValueError("The first player cannot be Alice")
        elif (self.player_number == 2) and (self.name != "Alice"):
            raise ValueError("The second player has to be Alice")
        else:
            print("Players are in lexicographic order!...Let's start to play!")


player_1 = Player(alice.name)
print(player_1.check_who_plays())

player_1 = Player(rabbit.name)
player_2 = Player(alice.name)

player_1 = Player(rabbit.name)
player_2 = Player(alice.name)
player_1.check_who_plays()
```

## Riddle 42: Alice's nightmare with the Multiplication Table

![multiply](https://media.giphy.com/media/3o6ZtgnSHub0k9lbgc/giphy.gif)

### Riddle 42: Alice's nightmare with the Multiplication Table

In Chapter II: The Pool of Tears, Alice attempts some simple multiplications, but the odd results leave her confused.

In our world, we work in base ten: we have the digits zero through nine with which we perform operations. When Alice goes down the White Rabbit hole, she tries to compute multiplications in base ten. But in Wonderland, her answers slip into higher base systems... and she can't get to 20 because of 42!

### Here the multiplication rolls!

4 x 5 = 20 = (1 x 18) + 2 --> '12' in base 18

4 x 6 = 24 = (1 x 21) + 3 --> '13' in base 21

4 x 7 = 28 = (1 x 24) + 4 --> '14' in base 24

4 x 13 = 52 = (1 x 42) + 10 --> '1X' in base 42! where 'X' is the symbol for 10 in base 42!

Here an explanation of the Rolling Multiplication Table: F.
Abeles, Multiplication in changing bases: a note on Lewis Carroll, Historia Mathematica, 1976 ``` import random import string class RollingBaseMultiplicationTable: def __init__(self, size): pass @staticmethod def rolling_base(row: int = None, column: int = None, default: int = 6) -> int: pass @staticmethod def custom_base_repr(number, base=2, padding=0): pass def change_base_table(self): pass class RollingBaseMultiplicationTable: def __init__(self, size): self.size = size @staticmethod def rolling_base(row: int = None, column: int = None, default: int = 6) -> int: return default + (column - 1) * (row - 1) @staticmethod def custom_base_repr(number, base=2, padding=0): digits_till_36 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ" random.seed(42) digits_from_36_till_127 = "".join( random.choices(string.ascii_lowercase + string.punctuation, k=127 - 36) ) digits = digits_till_36 + digits_from_36_till_127 if base > len(digits): raise ValueError( "Bases greater than 126 not handled in custom_base_repr.") elif base < 2: raise ValueError("Bases less than 2 not handled in base_repr.") num = abs(number) res = [] while num: res.append(digits[num % base]) num //= base if padding: res.append("0" * padding) if number < 0: res.append("-") return "".join(reversed(res or "0")) def change_base_table(self): M = [[0] * self.size for i in range(self.size)] for i in range(self.size): for j in range(i + 1): M[i][j] = RollingBaseMultiplicationTable.custom_base_repr( (i + 1) * (j + 1), RollingBaseMultiplicationTable.rolling_base(i + 1, j + 1), ) return M import pandas as pd M = RollingBaseMultiplicationTable(12) pd.DataFrame(M.change_base_table()) ``` ## Let the players play ``` class WriteTxt: def __init__(self, paragraph_name): self.paragraph_name = paragraph_name def __enter__(self): self.paragraph = open(self.paragraph_name, "w") return self.paragraph def __exit__(self, exc_type, exc_value, traceback): if self.paragraph: self.paragraph.close() class ReadTxt: def __init__(self, file_name): self.file_name = file_name def __enter__(self): self.paragraph = open(self.file_name, "r") return self.paragraph def __exit__(self, exc_type, exc_value, traceback): if self.paragraph: self.paragraph.close() class Game(RollingBaseMultiplicationTable): def __init__(self, player_1: Player, player_2: Player, size=12): super().__init__(size) self.player_1 = player_1 self.player_2 = player_2 def write_paragraph(self, chapter, content): paragraph_name = f"{chapter}.txt" with WriteTxt(paragraph_name) as w: w.write(content) def read_withclass(self, file): with ReadTxt(file) as r: paragraph = r.read() par_list = paragraph.split("\n\n") return par_list[0], par_list[1] def talk_and_play(self, chapter_name, paragraph_content): paragraph = self.write_paragraph(chapter_name, paragraph_content) first_sentence, second_sentence = self.read_withclass( chapter_name + ".txt") M = self.change_base_table() print(f"{self.player_2.name} says:You are in the Rabbit Hole!") print(f"{self.player_1.name} says:{first_sentence}") print(f"{self.player_2.name} says:Your world is changed!") print(f"{self.player_1.name} says:{second_sentence}") print(f"{self.player_2.name} says:Let us stat play!How much 4 x 5 is?") print(f"{self.player_1.name} says:{M[4][3]}") print(f"{self.player_2.name} says: And 4 x 6 is?") print(f"{self.player_1.name} says:{M[5][3]}") print(f"{self.player_2.name} says:...4 x 7 is?") print(f"{self.player_1.name} says:{M[6][3]}") print(f"{self.player_2.name} says:...4 x 9 is?") print(f"{self.player_1.name} says:{M[8][3]}") 
print(f"{self.player_2.name} says:...4 x 11 is?") print(f"{self.player_1.name} says:{M[10][3]}") print(f"{self.player_2.name} says:...4 x 12 is?") print(f"{self.player_1.name} says:{M[11][3]}") print( f"{self.player_1.name} says:Oh dear!I shall never get to twenty at that rate!" ) print(f"{self.player_2.name} says:You are right!4 x 13 is not equal to 20!") print( f"{self.player_2.name} says:Ahahah! See, Wonderland is not a decimal-based universe." ) print(f"{self.player_2.name} says:The only way to get 20 is multiplying 4 x 13 = 1X") paragraph_content = "I wonder if I've been changed in the night?\nLet me think: was I the same when I got up this morning?\nI almost think I can remember feeling a little different.\nBut if I am not the same, the next question is, Who inthe world am I?\nAh, that is the great puzzle!\n\nI will try if I know all the things I used to know." chapter_name = "CHAPTER II: The Pool of Tears" game = Game(player_1, player_2) game.talk_and_play(chapter_name, paragraph_content) ``` # Thank you for your attention! ![pingu](https://media.giphy.com/media/QBC5foQmcOkdq/giphy.gif)
# ***Introduction to Radar Using Python and MATLAB*** ## Andy Harrison - Copyright (C) 2019 Artech House <br/> # Low Pass Filters *** Referring to Section 5.6, filtering may be performed at various locations along the receiver chain. The filter responses are quite different. The type of filter chosen depends greatly on the application. For example, Butterworth filters have quite flat responses in the passband. This type of filter should be used for cases where minimal distortion of the signal is required, such as filtering a signal prior to analog-to-digital conversion. Chebyshev filters, on the other hand, should be chosen when the frequency content of the signal is more important than passband flatness. An example of this would be trying to separate signals closely spaced in frequency. Elliptic filters are more difficult to design but do have the advantage of providing the fastest roll-off for a given number of poles. *** Set the filter order, critical frequency (Hz), maximum ripple (dB) and minimum attenuation (dB) ``` filter_order = 4 critical_frequency = 100 maximum_ripple = 1 minimum_attenuation = 40 ``` Perform low pass filtering for Butterworth, Chebyshev, Bessel and Elliptic type filters ``` from scipy.signal import butter, cheby1, bessel, ellip, freqs b, a = butter(filter_order, critical_frequency, 'low', analog=True) w_butter, h_butter = freqs(b, a) b, a = cheby1(filter_order, maximum_ripple, critical_frequency, 'low', analog=True) w_cheby, h_cheby = freqs(b, a) b, a = bessel(filter_order, critical_frequency, 'low', analog=True, norm='phase') w_bessel, h_bessel = freqs(b, a) b, a = ellip(filter_order, maximum_ripple, minimum_attenuation, critical_frequency, 'low', analog=True) w_ellip, h_ellip = freqs(b, a) ``` Use the `matplotlib` routines to display the results for each filter type ``` from matplotlib import pyplot as plt from numpy import log10 # Set the figure size plt.rcParams["figure.figsize"] = (15, 10) # Create the line plot plt.semilogx(w_butter, 20 * log10(abs(h_butter)), label='Butterworth') plt.semilogx(w_cheby, 20 * log10(abs(h_cheby)), '--', label='Chebyshev') plt.semilogx(w_bessel, 20 * log10(abs(h_bessel)), '-.', label='Bessel') plt.semilogx(w_ellip, 20 * log10(abs(h_ellip)), '-', label='Elliptic') # Set the y axis limit plt.ylim(-80, 5) # Set the x and y axis labels plt.xlabel("Frequency (Hz)", size=12) plt.ylabel("Amplitude (dB)", size=12) # Turn on the grid plt.grid(linestyle=':', linewidth=0.5) # Set the plot title and labels plt.title('Filter Response', size=14) # Set the tick label size plt.tick_params(labelsize=12) # Show the legend plt.legend(loc='upper right', prop={'size': 10}) ```
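The designs above are analog prototypes evaluated with `freqs`. If the same low-pass response is needed on sampled data, a digital design plus zero-phase filtering is a common next step. A minimal sketch under assumed conditions (a 1 kHz sample rate, a synthetic two-tone test signal, and SciPy 1.2+ for the `fs` argument), reusing `filter_order` and `critical_frequency` from above:

```
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0                      # assumed sample rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)

# Synthetic test signal: a 30 Hz tone in the passband plus a 250 Hz tone in the stopband
x = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 250 * t)

# Digital Butterworth low-pass with the same order and cutoff as above,
# returned as second-order sections for numerical stability
sos = butter(filter_order, critical_frequency, btype='low', fs=fs, output='sos')

# Zero-phase (forward-backward) filtering, so the filtered signal is not delayed
y = sosfiltfilt(sos, x)
```

Forward-backward filtering doubles the effective attenuation and removes group delay, which is convenient offline but is not how a causal receiver chain would apply the filter.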
``` !pip install anvil-uplink import anvil.server import anvil.media anvil.server.connect("3Q3GVRPPVXBL77VRBID5YJV6-JV7P4UGO7KJ72DNA") import numpy as np import pandas as pd import matplotlib.pyplot as plt from keras.preprocessing.image import ImageDataGenerator from keras.preprocessing import image import keras from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense, Activation, BatchNormalization import os import pickle import wandb model = Sequential() model.add(Conv2D(32, (3, 3), input_shape = (32,32,3), activation = 'relu')) model.add(MaxPooling2D(pool_size = (2, 2))) model.add(Conv2D(32, (3, 3), activation = 'relu')) model.add(MaxPooling2D(pool_size = (2, 2))) model.add(Flatten()) model.add(Dense(units = 256, activation = 'relu')) model.add(Dropout(0.2)) model.add(Dense(units = 128, activation = 'relu')) model.add(Dropout(0.2)) model.add(Dense(10)) model.add(Activation('softmax')) model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy']) model.summary() train_datagen = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2, brightness_range=[0.6,1.0], width_shift_range=[-0.1,0.1]) test_datagen = ImageDataGenerator(rescale = 1./255) # shear_range = 0.2, # zoom_range = 0.2, # brightness_range=[0.6,1.0] train_generator = train_datagen.flow_from_directory( directory = '../input/digit-test-18/Digits/Training', target_size = (32,32), batch_size = 16, class_mode = 'categorical' ) test_generator = test_datagen.flow_from_directory( directory = '../input/digit-test-18/Digits/Validation', target_size = (32,32), batch_size = 16, class_mode = 'categorical', shuffle=False ) train_steps_per_epoch = np.math.ceil(train_generator.samples / train_generator.batch_size) print (train_steps_per_epoch) test_steps_per_epoch = np.math.ceil(test_generator.samples / test_generator.batch_size) print (test_steps_per_epoch) history = model.fit(train_generator, steps_per_epoch =train_steps_per_epoch, epochs = 25, shuffle = True, validation_data =test_generator, validation_steps = test_steps_per_epoch) plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('training and validation accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('training and validation loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() predictions = model.predict(test_generator, steps=test_steps_per_epoch) # Get most likely class predicted_classes = np.argmax(predictions, axis=1) print(predicted_classes) true_classes = test_generator.classes print(true_classes) class_labels = list(test_generator.class_indices.keys()) print(class_labels) from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay from sklearn.metrics import classification_report conf_mat = confusion_matrix(true_classes, predicted_classes) disp = ConfusionMatrixDisplay(confusion_matrix=conf_mat, display_labels=class_labels) disp.plot() report=classification_report(true_classes, predicted_classes, target_names=class_labels,output_dict=True) df = pd.DataFrame(report).transpose() df from IPython.display import HTML html = df.to_html() # write html to file text_file = open("index.html", "w") text_file.write(html) text_file.close() with wandb.init(project="img_digit_classifier",save_code=True) as run: plt.plot(history.history['accuracy']) 
plt.plot(history.history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') #plt.show() wandb.log({"Accuracy-metric": plt}) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') #plt.show() wandb.log({"Loss-metric": plt}) model.save(os.path.join(wandb.run.dir, "model.h5")) disp = ConfusionMatrixDisplay(confusion_matrix=conf_mat,display_labels=class_labels) disp=disp.plot() wandb.log({"conf_mat" : plt}) wandb.log({"Classification_report": wandb.Html(open("./index.html"))}) wandb.finish() model.save("cnn_model.h5") model = keras.models.load_model("cnn_model.h5") def get_result(result): if result[0][0] == 1: return('0') elif result[0][1] == 1: return ('1') elif result[0][2] == 1: return ('2') elif result[0][3] == 1: return ('3') elif result[0][4] == 1: return ('4') elif result[0][5] == 1: return ('5') elif result[0][6] == 1: return ('6') elif result[0][7] == 1: return ('7') elif result[0][8] == 1: return ('8') elif result[0][9] == 1: return ('9') filename = r'../input/digit-data-final/Dataset_2/Testing_2/8/download (1).jpeg' test_image = image.load_img(filename, target_size = (32,32)) plt.imshow(test_image) test_image = image.img_to_array(test_image) test_image = np.expand_dims(test_image, axis = 0) result = model.predict(test_image) result = get_result(result) print ('Predicted Alphabet is: {}'.format(result)) @anvil.server.callable def model_run_digit(path): with anvil.media.TempFile(path) as filename: test_image = image.load_img(filename, target_size = (32,32)) test_image = image.img_to_array(test_image) test_image = np.expand_dims(test_image, axis = 0) result = model.predict(test_image) result = get_result(result) return ('Predicted Alphabet is: {}'.format(result)) ```
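One fragile spot in the notebook above is `get_result`, which only returns a digit when one softmax output is exactly 1; in practice the probabilities are rarely exact, and the single-image path also skips the 1/255 rescaling applied by the generators. The sketch below is an assumed alternative helper (the function name and example path are hypothetical), using `np.argmax` and the same preprocessing as the generators:

```
import numpy as np
from keras.preprocessing import image

def predict_digit(filename, model, target_size=(32, 32)):
    # Load and preprocess a single image the same way the generators do
    img = image.load_img(filename, target_size=target_size)
    arr = image.img_to_array(img) / 255.0          # match the generators' rescale=1./255
    arr = np.expand_dims(arr, axis=0)

    probs = model.predict(arr)[0]
    predicted_class = int(np.argmax(probs))        # index of the highest softmax score
    return predicted_class, float(probs[predicted_class])

# Example call (hypothetical path):
# digit, confidence = predict_digit('some_digit.jpeg', model)
# print(f"Predicted digit: {digit} (p={confidence:.2f})")
```

This assumes the class directories are named '0' through '9', so the argmax index maps directly onto the digit.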
# Dataset ``` import sys sys.path.append('../../datasets/') from prepare_sequences import prepare, germanBats import matplotlib.pyplot as plt import pickle import numpy as np classes = germanBats num_bands = 257 patch_len = 44 # = 250ms ~ 25ms patch_skip = patch_len / 2 # = 150ms ~ 15ms resize = None mode = 'slide' options = { 'seq_len': 60, # = 500ms with ~ 5 calls 'seq_skip': 15, } X_test, Y_test = prepare("../../datasets/prepared.h5", classes, patch_len, patch_skip, options, mode, resize, only_test=True) print("Total sequences:", len(X_test)) print(X_test.shape, Y_test.shape) ``` # Model ``` from torch.utils.data import TensorDataset, DataLoader import torch batch_size = 1 test_data = TensorDataset(torch.Tensor(X_test), torch.from_numpy(Y_test)) test_loader = DataLoader(test_data, batch_size=batch_size) model = torch.jit.load("baseline.pt") device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") if torch.cuda.device_count() > 1: print("Let's use", torch.cuda.device_count(), "GPUs!") model = nn.DataParallel(model, device_ids=[0, 1]) model.to(device) print(device) call_nocall_model = torch.jit.load('../call_nocall/call_nocall.pt') call_nocall_model.to(device) from sklearn.metrics import confusion_matrix import seaborn as sn import pandas as pd import tqdm Y_pred = [] Y_true = [] corrects = 0 classes["unknown"] = 18 model.eval() # iterate over test data for inputs, labels in tqdm.tqdm(test_loader): inputs, labels = inputs[0].to(device).unsqueeze(1), labels[0].to(device) cnc_outputs = call_nocall_model(inputs) _, cnc_pred = torch.max(cnc_outputs, 1) # call indices n_inputs = inputs[cnc_pred.nonzero().squeeze()] if n_inputs.shape[0] > 1: output = model(n_inputs) pred = torch.max(output, 1)[1] pred = torch.mode(pred)[0].item() Y_pred.append(pred) # Save Prediction Y_true.append(labels.item()) # Save Truth else: Y_pred.append(18) # Save Prediction Y_true.append(labels.item()) # Save Truth import numpy as np # Build confusion matrix cf_matrix = confusion_matrix(Y_true, Y_pred) df_cm = pd.DataFrame(cf_matrix / np.sum(cf_matrix, axis=0), index = [i for i in classes], columns = [i for i in classes]) plt.figure(figsize = (12,7)) sn.heatmap(df_cm, annot=True) plt.savefig('seq_test_cf.png') from sklearn.metrics import f1_score corrects = np.equal(Y_pred, Y_true).sum() print("Test accuracy:", corrects/len(Y_pred)) print("F1-score:", f1_score(Y_true, Y_pred, average=None).mean()) pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad) print(pytorch_total_params) ```
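A small note on the confusion matrix above: dividing by `np.sum(cf_matrix, axis=0)` normalizes each column by the number of predictions for that class, so the cells read as precision-like fractions. If the intended view is the more common per-true-class (recall) normalization, the rows should be normalized instead. A minimal sketch, assuming the same `Y_true`, `Y_pred`, and `classes` objects and that every class appears at least once, as in the original cell; scikit-learn 0.22+ can also do this directly with `normalize='true'`:

```
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix

cf_matrix = confusion_matrix(Y_true, Y_pred)

# Row-normalized: cell (i, j) is the fraction of true class i that was predicted as class j
cf_recall = cf_matrix / cf_matrix.sum(axis=1, keepdims=True)

# Equivalent one-liner on scikit-learn >= 0.22
cf_recall_sklearn = confusion_matrix(Y_true, Y_pred, normalize='true')

df_cm_recall = pd.DataFrame(cf_recall, index=list(classes), columns=list(classes))
```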
## Assignment 2 version 2

Note: Student name removed. Submitted, Fall 2019.

```
import warnings
warnings.filterwarnings('ignore')
```

## Visualization Technique (20%)

### A narrative description of the visualization you are planning to use, describing how it works (10%)

The visualization I am planning to use is a scatter plot via seaborn. A scatter plot is a type of plot that shows the data as a collection of points. The position of a point depends on its two-dimensional value, where each value is a position on either the horizontal or vertical dimension. Scatter plots can be used to compare the distributions of two variables and to see whether there is any correlation between them. If there are distinct clusters/segments within the data, they will be clear in the scatter plot.

### A discussion of in which circumstances this visualization should and should not be used (what is it close to? What else could you consider? How does it relate to specific aspects of data?) (10%)

Scatter plots' primary use is to identify and present correlational relationships between two numeric variables. The dots in a scatter plot show individual data point values and, taken together, reveal patterns within the whole dataset. A scatter plot can be used either when one continuous variable is dependent on the other, or when both continuous variables are independent.

Sometimes the data points in a scatter plot form distinct groups. These groups are called clusters. Cluster analysis, or clustering, is the task of grouping a set of objects in such a way that objects in the same group (cluster) are more similar to each other than to those in other groups.

Some issues when using scatter plots:

1. Overplotting. When there are too many data points to plot, overplotting may happen: points overlap to such a degree that relationships between points and variables become too difficult to observe, because the data points are too densely packed. Workarounds include: 1) sampling a subset of data points: a random selection of points should still give the general idea of the patterns in the full data; 2) changing the form of the dots, adding transparency so that overlaps are visible, or reducing the point size so that fewer overlaps occur; 3) switching to a different chart type that uses coloring to indicate the number of points in each group.
2. Interpreting correlation as causation. Correlation does not imply causation. It is possible that the observed relationship is driven by some third variable that affects both of the plotted variables, that the causal link is reversed, or that the pattern is simply coincidental.

## Visualization Library (20%)

### The library you are going to use, and a background on why the library is good for this visualization. Who created it? Is it open source? How do you install it? (10%)

The library I am going to use is Seaborn. Seaborn works with the dataset as a whole and is much more intuitive than Matplotlib. "If Matplotlib 'tries to make easy things easy and hard things possible', seaborn tries to make a well-defined set of hard things easy too" – Michael Waskom (creator of Seaborn).

Michael Waskom is a postdoctoral researcher in the Center for Neural Science at New York University and a Junior Fellow of the Simons Society of Fellows. He is the creator of Seaborn, an open-source Python data visualization library. Seaborn was developed on top of the Matplotlib library and is used to create more attractive and informative statistical graphics.
While seaborn is a separate package, it can also be used to improve the look of matplotlib graphics.

Seaborn installation:

- Pip installation: `pip install seaborn`
- Conda installation: `conda install seaborn`
- Alternatively, you can use pip to install the development version directly from github: `pip install git+https://github.com/mwaskom/seaborn.git`
- Another option: clone the github repository and install from your local copy: `pip install .`

### A discussion of the general approach and limitations of this library. Is it declarative or procedural? Does it integrate with Jupyter? Why you decided to use this library (especially if there are other options)? (10%)

The Seaborn library, like matplotlib's pyplot, is procedural at its scripting layer: we tell the underlying software which drawing actions to take in order to render our data. Seaborn overcomes some of Matplotlib's shortcomings:

- Matplotlib's customization level is limited, and it is hard to find out which Matplotlib settings are required to make plots more appealing. Seaborn offers more theme options and high-level interfaces to solve this issue.
- Matplotlib doesn't work as well with Pandas DataFrames as Seaborn does.

Specifically, the reasons I chose Seaborn also involve the following strengths of Seaborn:

- Using default themes that are aesthetically pleasing.
- Setting custom color palettes.
- Making attractive statistical plots.
- Easily and flexibly displaying distributions.
- Visualizing information from matrices and DataFrames.

Limitation: Seaborn depends on numpy, scipy, pandas, and Matplotlib, and Seaborn alone (without dropping down to Matplotlib) cannot produce 3D scatter plots. Therefore, I can only pick the two variables I am most curious about as the axes of a two-dimensional plot.

## Demonstration (60%)

### The dataset you picked and instructions for cleaning the dataset. You should pick a suitable dataset to demonstrate the technique, toolkit, and problem you are facing. (10%)

For this shopping mall customers dataset, I am interested in observing any potential correlational relationship between customers' income and spending. I plan to use two of the continuous variables in the dataset, `Annual Income (k$)` and `Spending Score (1-100)`, to form the scatter plot axes and to uncover distinct groups of customers (clusters).

Data source: https://www.kaggle.com/shwetabh123/mall-customers

I have downloaded the dataset to a csv file submitted along with the notebook.

```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

# reading and briefly observing data
df = pd.read_csv("Mall_Customers.csv")
df.head()

# Examining the df['Annual Income (k$)'] series
df['Annual Income (k$)'].describe()

sns.distplot(df['Annual Income (k$)'])
```

It appears that the majority of customers have incomes within the range of approximately $20k-$90k.

```
# Examining the df['Spending Score (1-100)'] series
df['Spending Score (1-100)'].describe()

sns.distplot(df['Spending Score (1-100)'])
```

The most centralized group of customers has spending scores within approximately the 40-60 range.

### The quality of your demonstration. First demonstrate the basics of this approach, then show a few of the edges of how the library might be used for other cases. This is the "meat" of the assignment. (40%)
One of Seaborn's greatest strengths is its diversity of plotting functions. For instance, drawing a scatter plot with the possibility of several semantic groupings takes just one line of code using the `scatterplot()` function.

```
sns.scatterplot(x='Annual Income (k$)',y='Spending Score (1-100)',data=df)
```

As shown in the scatter plot above, there are 5 relatively distinct, identifiable groups within the dataset:

1. lower income (<40k) + lower spend (<40)
2. lower income (<40k) + higher spend (>60)
3. mid income (40k-70k) + mid spend (40-60)
4. higher income (>70k) + lower spend (<40)
5. higher income (>70k) + higher spend (>60)

To make the groups more visually distinctive, I can color code them after adding a categorical variable to the original dataframe.

```
# income tiers: lower income (li), mid income (mi), higher income (hi)
li = (df['Annual Income (k$)'] < 40)
mi = (df['Annual Income (k$)'] >= 40) & (df['Annual Income (k$)'] < 70)
hi = (df['Annual Income (k$)'] >= 70)

# spending score tiers: lower spend (ls), mid spend (ms), higher spend (hs)
ls = (df['Spending Score (1-100)'] <= 40)
ms = (df['Spending Score (1-100)'] > 40) & (df['Spending Score (1-100)'] <= 60)
hs = (df['Spending Score (1-100)'] > 60)

# creating a new categorical variable 'group'
df['group'] = ''

# Assigning group numbers based on the observed categories above.
df['group'][li & ls] = 1
df['group'][li & hs] = 2
df['group'][mi & ms] = 3
df['group'][hi & ls] = 4
df['group'][hi & hs] = 5

# checking to see if there are any rows left ungrouped
df.loc[df['group'] == '']

# assigning the 3 remaining rows to an 'other' group
df['group'].loc[df['group'] == ''] = 'other'
df['group'].unique()

# Now we can plot all groups color coded using the hue argument, changing the color palette to a prettier one.
sns.scatterplot(x='Annual Income (k$)',y='Spending Score (1-100)',hue='group',data=df,palette='pastel')
```

Additional exploration aspects: `lmplot()` combines scatter points and a fitted regression line in one step on the same chart. It is intended as a convenient interface to fit regression models across conditional subsets of a dataset.

```
sns.lmplot(x='Annual Income (k$)',y='Spending Score (1-100)',data=df,palette='pastel')
```

Additional discussion around limitations of this method: although seaborn scatter plots offer many marker variations, bubble plots and 3D axes are easier to realize in matplotlib.

### Adherence to some of Rule et al's rules for computational analyses. You must explicitly describe the rules (aim for 4) you have adhered to in this assignment and provide 2-3 sentences about how you have adhered to these rules. (10%)

#### Rule 1: Tell a story for an audience

This rule suggests using the Jupyter Notebook to interleave explanatory text with code and results to create a computational narrative, one that not only contains bare-bones code but also tells the story of my mission and thought process as the storyteller. As outlined in the article, how I tell the story in my notebook should depend on my goal and audience. This notebook will likely only be shared and viewed among this course's teaching team. Therefore my explanation, in addition to my code, needs to be on point and succinct, focusing on the outlines required in the assignment instructions rather than expanding too much into real-world applications of cluster analysis, such as customer segmentation.
#### Rule 2: Document the process, not just the results

This rule refers to cleaning, organizing, and annotating my notebook consistently. To best utilize the interactivity of the Jupyter notebook, I have noted throughout the notebook the techniques I utilized and the thought process behind my data visualization and analysis. The grading team will be able to understand my thought process and the iterations I took to demonstrate the edges of the library as far as I explored its features.

#### Rule 3: Use cell divisions to make steps clear

This rule reminds us to avoid overly long cells, and suggests that we put low-level documentation in code comments, use descriptive markdown headers to organize the notebook into sections that are easier for people to navigate, and add a table of contents or split the work into a series of indexed notebooks if needed. As shown above, I have sectioned each main part as required in the assignment, while also commenting or using separate markdown cells as descriptions throughout my assignment.

#### Rule 4: Modularize code

This rule encourages us to avoid duplicate code by modularizing the sections. This avoids cells that are too messy to read or too difficult to debug. I have modularized the income and spending tiering so that when I later created the cluster groups, those modules could be reused, which shortened the code.

Additional thoughts on spending scores: I was not able to find a clear explanation of the Spending Score (1-100) grading system or its rationale. It could be graded based on customers' self-reported spending willingness, a comparison of transactional data among existing mall customers, or spending behavior compared against an even larger sample independent of shopping behavior at the mall.

Additional thoughts on the clustering approach: there are more advanced approaches that help guide researchers toward the optimal number of clusters within a dataset, such as k-means clustering with the elbow method (a brief sketch follows below). Looking forward to exploring them in future tasks and assignments.
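Since the closing note mentions k-means as a more principled way to find the number of clusters, here is a minimal sketch of the elbow method on the same two features. It is an illustration only, not part of the original submission, and assumes `df` is the mall-customers DataFrame loaded above.

```
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

X = df[['Annual Income (k$)', 'Spending Score (1-100)']].values

# Fit k-means for a range of k and record the inertia (within-cluster sum of squares)
inertias = []
for k in range(1, 11):
    km = KMeans(n_clusters=k, random_state=42, n_init=10)
    km.fit(X)
    inertias.append(km.inertia_)

# The bend ("elbow") in this curve suggests a reasonable number of clusters
plt.plot(range(1, 11), inertias, marker='o')
plt.xlabel('Number of clusters (k)')
plt.ylabel('Inertia')
plt.title('Elbow method')
plt.show()
```

The bend in the inertia curve typically lands around five clusters for this dataset, which matches the five groups identified by eye above.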
``` %matplotlib inline import seaborn as sns import numpy as np import pandas as pd import json import os from datetime import datetime from os import path from matplotlib import pyplot as plt fname = '../station_information_100319.json' with open(fname) as sf: station_data = json.load(sf) # Convert the station metadata into a dataframe and save it. station_df = pd.DataFrame(station_data["data"]["stations"]) # Add region information to the spreadsheet short_names = station_df.short_name.values prefixes = [sn[0:2] for sn in short_names] city_map = {'BK': 'Berkeley', 'SF': 'San Francisco', 'EM': 'Emeryville', 'SJ': 'San Jose', 'OK': 'Oakland'} region_map = {'BK': 'East Bay', 'SF': 'San Francisco', 'EM': 'East Bay', 'SJ': 'South Bay', 'OK': 'East Bay'} station_df['city'] = [city_map[p] for p in prefixes] station_df['region'] = [region_map[p] for p in prefixes] station_df.to_csv('../data/station_info.csv') # Reading all the JSON files we downloaded. json_fnames = sorted(os.listdir('../downloads')) # Dict maps keys to lists of data from the series of downloads. station_timeseries_dict = {} keys2use = set(['num_bikes_available', 'num_docks_available', 'num_docks_disabled', 'station_id', 'is_installed', 'is_returning', 'num_ebikes_available', 'num_bikes_disabled', 'is_renting', 'last_reported',]) for i, fname in enumerate(json_fnames): # timestamp was saved in the filename - pull it out. query_ts = int(fname.split('_')[-1].split('.')[0]) with open(path.join('../downloads/', fname)) as f: try: json_data = json.load(f) except Exception as e: print (e) print('Skipping file', i, fname) continue stations_info = json_data["data"]["stations"] for station_dict in stations_info: station_timeseries_dict.setdefault("query_ts", []).append(query_ts) for key in keys2use: station_timeseries_dict.setdefault(key, []).append( station_dict.get(key)) # Convert to DataFrame - this will be slow station_timeseries_df = pd.DataFrame(station_timeseries_dict) # Calculate fraction full and various other derived params total_docks = station_timeseries_df.num_bikes_available + station_timeseries_df.num_docks_available station_timeseries_df['fraction_full'] = station_timeseries_df.num_bikes_available / total_docks full = station_timeseries_df.num_bikes_available == total_docks empty = station_timeseries_df.num_bikes_available == 0 full_or_empty = np.logical_or(full, empty) station_timeseries_df['full'] = full.astype('int32') station_timeseries_df['empty'] = empty.astype('int32') station_timeseries_df['full_or_empty'] = full_or_empty.astype('int32') station_timeseries_df['half_full_dev'] = np.abs(station_timeseries_df.fraction_full - 0.5) # Save - may also take a while. station_timeseries_df.to_csv('../data/stations_timeseries.csv') ```
# An Introduction to WISER, Part 2: Generative Models

In this part of the tutorial, we will take the results of the labeling functions from part 1 and learn a generative model that combines them. We will start by reloading the data with the labeling function outputs from part 1.

## Reloading Data

```
import pickle

with open('output/tmp/train_data.p', 'rb') as f:
    train_data = pickle.load(f)

with open('output/tmp/dev_data.p', 'rb') as f:
    dev_data = pickle.load(f)

with open('output/tmp/test_data.p', 'rb') as f:
    test_data = pickle.load(f)
```

## Reinspecting Data

We can now browse the data with all of the tagging rule annotations. Browse the different tagging rules and their votes on the dev data.

```
from wiser.viewer import Viewer

Viewer(dev_data, height=120)
```

We can inspect the raw precision, recall, and F1 score using an unweighted combination of tagging rules with ``score_labels_majority_vote``.

```
from wiser.eval import score_labels_majority_vote

score_labels_majority_vote(dev_data)
```

# Generative Model

To weight the tagging and linking rules according to their estimated accuracies, we need to train a generative model.

## Defining a Generative Model

We now need to declare a generative model. In this tutorial, we will be using the *linked HMM*, a model that makes use of linking rules to model dependencies between adjacent tokens. You may find other generative models in `labelmodels`.

Generative models have the following hyperparameters:

* Initial Accuracy (init_acc) is the initial estimated tagging and linking rule accuracy, also used as the mean of the prior distribution of the model parameters.
* Strength of Regularization (acc_prior) is the weight of the regularizer pulling tagging and linking rule accuracies toward their initial values.
* Balance Prior (balance_prior) is used to regularize the class prior in Naive Bayes or the initial class distribution for HMM and Linked HMM, as well as the transition matrix in those methods, towards a more uniform distribution.

We generally recommend running a grid search on the generative model hyperparameters to obtain the best performance. For more details on generative models and the *linked HMM*, please refer to our paper.

```
from labelmodels import LinkedHMM
from wiser.generative import Model

model = Model(LinkedHMM, init_acc=0.95, acc_prior=50, balance_prior=100)
```

## Training a Generative Model

Once we're done creating our generative model, we're ready to begin training! We first need to create a ``LearningConfig`` to specify the training configuration for the model.

```
from labelmodels import LearningConfig

config = LearningConfig()
config.epochs = 5
```

Then, we must pass the config object to the ``train`` method, alongside the training and development data.

```
# Outputs the best development score
model.train(config, train_data=train_data, dev_data=dev_data)
```

## Evaluating a Generative Model

We can evaluate the performance of any generative model using the ``evaluate`` function. Here, we'll evaluate our *linked HMM* on the test set.

```
model.evaluate(test_data)
```

If you've been following this tutorial, test precision should be around 75.6%, and test F1 should be around 64%.

## Saving the Output of the Generative Model

After training your generative model, you need to save its probabilistic training labels. The ``save_output`` method will save the probabilistic tags to the specified directory. We will later use these labels in the next part of the tutorial to train a recurrent neural network.
```
model.save_output(data=train_data, path='output/generative/link_hmm/train_data.p', save_distribution=True)

model.save_output(data=dev_data, path='output/generative/link_hmm/dev_data.p', save_distribution=True, save_tags=True)

model.save_output(data=test_data, path='output/generative/link_hmm/test_data.p', save_distribution=True, save_tags=True)
```
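The tutorial recommends a grid search over the generative model hyperparameters. The following is a minimal sketch of such a search, with two assumptions that go beyond the text above: the candidate values in the grid are purely illustrative, and ``train`` is assumed to return the best development score it reports (if it only prints the score, the selection step would need to be adapted).

```
import itertools

from labelmodels import LearningConfig, LinkedHMM
from wiser.generative import Model

best_score, best_params = None, None

# Illustrative grid; these values are assumptions, not recommendations from the tutorial.
for init_acc, acc_prior, balance_prior in itertools.product(
        [0.85, 0.95], [5, 50, 500], [10, 100, 500]):
    candidate = Model(LinkedHMM, init_acc=init_acc,
                      acc_prior=acc_prior, balance_prior=balance_prior)
    config = LearningConfig()
    config.epochs = 5

    # Assumption: train returns the best development score for this configuration.
    dev_score = candidate.train(config, train_data=train_data, dev_data=dev_data)

    if best_score is None or dev_score > best_score:
        best_score, best_params = dev_score, (init_acc, acc_prior, balance_prior)

print(best_params, best_score)
```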
``` import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import torch.distributions as distributions import matplotlib.pyplot as plt import numpy as np import gym train_env = gym.make('CartPole-v1') test_env = gym.make('CartPole-v1') SEED = 1234 train_env.seed(SEED); test_env.seed(SEED+1); np.random.seed(SEED); torch.manual_seed(SEED); class MLP(nn.Module): def __init__(self, input_dim, hidden_dim, output_dim, dropout = 0.5): super().__init__() self.fc_1 = nn.Linear(input_dim, hidden_dim) self.fc_2 = nn.Linear(hidden_dim, output_dim) self.dropout = nn.Dropout(dropout) def forward(self, x): x = self.fc_1(x) x = self.dropout(x) x = F.relu(x) x = self.fc_2(x) return x class ActorCritic(nn.Module): def __init__(self, actor, critic): super().__init__() self.actor = actor self.critic = critic def forward(self, state): action_pred = self.actor(state) value_pred = self.critic(state) return action_pred, value_pred INPUT_DIM = train_env.observation_space.shape[0] HIDDEN_DIM = 128 OUTPUT_DIM = train_env.action_space.n actor = MLP(INPUT_DIM, HIDDEN_DIM, OUTPUT_DIM) critic = MLP(INPUT_DIM, HIDDEN_DIM, 1) policy = ActorCritic(actor, critic) def init_weights(m): if type(m) == nn.Linear: torch.nn.init.xavier_normal_(m.weight) m.bias.data.fill_(0) policy.apply(init_weights) LEARNING_RATE = 0.01 optimizer = optim.Adam(policy.parameters(), lr = LEARNING_RATE) def train(env, policy, optimizer, discount_factor): policy.train() log_prob_actions = [] values = [] rewards = [] done = False episode_reward = 0 state = env.reset() while not done: state = torch.FloatTensor(state).unsqueeze(0) action_pred, value_pred = policy(state) action_prob = F.softmax(action_pred, dim = -1) dist = distributions.Categorical(action_prob) action = dist.sample() log_prob_action = dist.log_prob(action) state, reward, done, _ = env.step(action.item()) log_prob_actions.append(log_prob_action) values.append(value_pred) rewards.append(reward) episode_reward += reward log_prob_actions = torch.cat(log_prob_actions) values = torch.cat(values).squeeze(-1) returns = calculate_returns(rewards, discount_factor) policy_loss, value_loss = update_policy(returns, log_prob_actions, values, optimizer) return policy_loss, value_loss, episode_reward def calculate_returns(rewards, discount_factor, normalize = True): returns = [] R = 0 for r in reversed(rewards): R = r + R * discount_factor returns.insert(0, R) returns = torch.tensor(returns) if normalize: returns = (returns - returns.mean()) / returns.std() return returns def update_policy(returns, log_prob_actions, values, optimizer): returns = returns.detach() policy_loss = - (returns * log_prob_actions).sum() value_loss = F.smooth_l1_loss(returns, values).sum() optimizer.zero_grad() policy_loss.backward() value_loss.backward() optimizer.step() return policy_loss.item(), value_loss.item() def evaluate(env, policy): policy.eval() rewards = [] done = False episode_reward = 0 state = env.reset() while not done: state = torch.FloatTensor(state).unsqueeze(0) with torch.no_grad(): action_pred, _ = policy(state) action_prob = F.softmax(action_pred, dim = -1) action = torch.argmax(action_prob, dim = -1) state, reward, done, _ = env.step(action.item()) episode_reward += reward return episode_reward MAX_EPISODES = 500 DISCOUNT_FACTOR = 0.99 N_TRIALS = 25 REWARD_THRESHOLD = 475 PRINT_EVERY = 10 train_rewards = [] test_rewards = [] for episode in range(1, MAX_EPISODES+1): policy_loss, critic_loss, train_reward = train(train_env, policy, optimizer, DISCOUNT_FACTOR) 
test_reward = evaluate(test_env, policy) train_rewards.append(train_reward) test_rewards.append(test_reward) mean_train_rewards = np.mean(train_rewards[-N_TRIALS:]) mean_test_rewards = np.mean(test_rewards[-N_TRIALS:]) if episode % PRINT_EVERY == 0: print(f'| Episode: {episode:3} | Mean Train Rewards: {mean_train_rewards:5.1f} | Mean Test Rewards: {mean_test_rewards:5.1f} |') if mean_test_rewards >= REWARD_THRESHOLD: print(f'Reached reward threshold in {episode} episodes') break plt.figure(figsize=(12,8)) plt.plot(test_rewards, label='Test Reward') plt.plot(train_rewards, label='Train Reward') plt.xlabel('Episode', fontsize=20) plt.ylabel('Reward', fontsize=20) plt.hlines(REWARD_THRESHOLD, 0, len(test_rewards), color='r') plt.legend(loc='lower right') plt.grid() ```
# Stock Prices

You are given access to yesterday's stock prices for a single stock. The data is in the form of an array with the stock price in 30 minute intervals from the 9:30 a.m. EST opening to the 4:00 p.m. EST closing time. With this data, write a function that returns the maximum profit obtainable. You will need to buy before you can sell.

For example, suppose you have the following prices:

`prices = [3, 4, 7, 8, 6]`

>Note: This is a shortened array, just for the sake of example; a full set of prices for the day would have 13 elements (one price for each 30 minute interval between 9:30 and 4:00), as seen in the test cases further down in this notebook.

In order to get the maximum profit in this example, you would want to buy at a price of 3 and sell at a price of 8 to yield a maximum profit of 5. In other words, you are looking for the greatest possible difference between two numbers in the array.

### The Idea

The given array has the prices of a single stock at 13 different timestamps. The idea is to pick two timestamps, "buy_at_min" and "sell_at_max", such that the buy is made before the sell. We will use two pairs of indices while traversing the array:

* **Pair 1** - This pair keeps track of our maximum profit so far while iterating over the list. It does so by storing a pair of indices, `min_price_index` and `max_price_index`.
* **Pair 2** - This pair keeps track of the profit between the lowest price seen so far and the current price while traversing the array. The lowest price seen so far is maintained with `current_min_price_index`.

At each step we make the greedy choice by choosing prices such that our profit is maximum. We store the **maximum of either of the two profits mentioned above**.

### Exercise - Write the function definition here

Fill out the function below and run it against the test cases. Take into consideration the time complexity of your solution.

```
def max_returns(prices):
    """
    Calculate maximum possible return

    Args:
       prices(array): array of prices
    Returns:
       int: The maximum profit possible
    """
    return prices
```

### Test - Let's test your function

```
# Test Cases
def test_function(test_case):
    prices = test_case[0]
    solution = test_case[1]
    output = max_returns(prices)
    if output == solution:
        print("Pass")
    else:
        print("Fail")

# Solution
def max_returns(arr):
    min_price_index = 0
    max_price_index = 1
    current_min_price_index = 0

    if len(arr) < 2:
        return

    for index in range(1, len(arr)):
        # current minimum price
        if arr[index] < arr[current_min_price_index]:
            current_min_price_index = index

        # current max profit
        if arr[max_price_index] - arr[min_price_index] < arr[index] - arr[current_min_price_index]:
            max_price_index = index
            min_price_index = current_min_price_index

    max_profit = arr[max_price_index] - arr[min_price_index]
    return max_profit

prices = [2, 2, 7, 9, 9, 12, 18, 23, 34, 37, 45, 54, 78]
solution = 76
test_case = [prices, solution]
test_function(test_case)

prices = [54, 18, 37, 9, 11, 48, 23, 1, 7, 34, 2, 45, 67]
solution = 66
test_case = [prices, solution]
test_function(test_case)

prices = [78, 54, 45, 37, 34, 23, 18, 12, 9, 9, 7, 2, 2]
solution = 0
test_case = [prices, solution]
test_function(test_case)
```
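As a quick usage example, the worked example from the prompt can be checked directly against the solution above (this assumes the solution cell has been run, so `max_returns` refers to the solution rather than the exercise stub):

```
prices = [3, 4, 7, 8, 6]
# Buy at 3, sell at 8: expected maximum profit is 5.
assert max_returns(prices) == 5
print(max_returns(prices))
```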
```
import keras
import numpy as np
import keras.backend as K
from keras.datasets import mnist
from keras.utils import np_utils

K.set_learning_phase(False)

import matplotlib.pyplot as plt
plt.rcParams['image.cmap'] = 'gray'
%matplotlib inline

model = keras.models.load_model('example_keras_mnist_model.h5')
model.summary()

dataset = mnist.load_data()
train_data = dataset[0][0] / 255
train_data = train_data[..., np.newaxis].astype('float32')
train_labels = np_utils.to_categorical(dataset[0][1]).astype('float32')
test_data = dataset[1][0] / 255
test_data = test_data[..., np.newaxis].astype('float32')
test_labels = np_utils.to_categorical(dataset[1][1]).astype('float32')

plt.imshow(train_data[0, ..., 0])
```

Keras models are serialized in a JSON format.

```
model.get_config()
```

### Getting the weights

Weights can be retrieved either directly from the model or from each individual layer.

```
# Weights and biases of the entire model.
model.get_weights()

# Weights and bias for a single layer.
conv_layer = model.get_layer('conv2d_1')
conv_layer.get_weights()
```

Moreover, the respective backend variables that store the weights can be retrieved.

```
conv_layer.weights
```

### Getting the activations and net inputs

Intermediary computation results, i.e. results that are not part of the prediction, cannot be retrieved directly from Keras. It is possible to build a new model for which the intermediary result is the prediction, but this approach makes computation rather inefficient when several intermediary results are to be retrieved. Instead, it is better to reach directly into the backend for this purpose.

Activations are still fairly straightforward, as the relevant tensors can be retrieved as the output of the layer.

```
# Getting the Tensorflow session and the input tensor.
sess = keras.backend.get_session()
network_input_tensor = model.layers[0].input
network_input_tensor

# Getting the tensor that holds the activations as the output of a layer.
activation_tensor = conv_layer.output
activation_tensor

activations = sess.run(activation_tensor, feed_dict={network_input_tensor: test_data[0:1]})
activations.shape

for i in range(32):
    plt.imshow(activations[0, ..., i])
    plt.show()
```

Net input is a little more complicated, as we have to reach heuristically into the TensorFlow graph to find the relevant tensors. However, it can safely be assumed most of the time that the net input tensor is the input to the activation op.

```
net_input_tensor = activation_tensor.op.inputs[0]
net_input_tensor

net_inputs = sess.run(net_input_tensor, feed_dict={network_input_tensor: test_data[0:1]})
net_inputs.shape

for i in range(32):
    plt.imshow(net_inputs[0, ..., i])
    plt.show()
```

### Getting layer properties

Each Keras layer object provides the relevant properties as attributes.

```
conv_layer = model.get_layer('conv2d_1')
conv_layer
conv_layer.input_shape
conv_layer.output_shape
conv_layer.kernel_size
conv_layer.strides

max_pool_layer = model.get_layer('max_pooling2d_1')
max_pool_layer
max_pool_layer.strides
max_pool_layer.pool_size
```

Layer type information can only be retrieved through the class name.

```
conv_layer.__class__.__name__
```
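As an alternative to calling `sess.run` directly, the same intermediate tensors can be fetched through `keras.backend.function`, which builds a callable from input tensors to output tensors. A minimal sketch reusing the tensors defined above (this assumes the TF1-style Keras backend used in this notebook):

```
# Build a backend function mapping the network input to the
# activation and net input tensors retrieved above.
fetch_intermediates = K.function([network_input_tensor],
                                 [activation_tensor, net_input_tensor])

activations, net_inputs = fetch_intermediates([test_data[0:1]])
print(activations.shape, net_inputs.shape)
```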
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_DIAG_PROC.ipynb)

# **Detect Diagnoses and Procedures in Spanish**

To run this yourself, you will need to upload your license keys to the notebook. Just run the cell below in order to do that. Alternatively, you can open the file explorer on the left side of the screen and upload `license_keys.json` to the folder that opens. Otherwise, you can look at the example outputs at the bottom of the notebook.

## 1. Colab Setup

Import license keys

```
import os
import json
from google.colab import files

license_keys = files.upload()

with open(list(license_keys.keys())[0]) as f:
    license_keys = json.load(f)

sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]

print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
```

Install dependencies

```
%%capture
for k,v in license_keys.items():
    %set_env $k=$v

!wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jsl_colab_setup.sh
!bash jsl_colab_setup.sh

# Install Spark NLP Display for visualization
!pip install --ignore-installed spark-nlp-display
```

Import dependencies into Python and start the Spark session

```
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl

spark = sparknlp_jsl.start(license_keys['SECRET'])

# manually start session
# params = {"spark.driver.memory" : "16G",
#           "spark.kryoserializer.buffer.max" : "2000M",
#           "spark.driver.maxResultSize" : "2000M"}
# spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
```

## 2. Construct the pipeline

Create the pipeline

```
document_assembler = DocumentAssembler() \
    .setInputCol('text')\
    .setOutputCol('document')

sentence_detector = SentenceDetector() \
    .setInputCols(['document'])\
    .setOutputCol('sentence')

tokenizer = Tokenizer()\
    .setInputCols(['sentence']) \
    .setOutputCol('token')

word_embeddings = WordEmbeddingsModel.pretrained("embeddings_scielowiki_300d","es","clinical/models")\
    .setInputCols(["document","token"])\
    .setOutputCol("word_embeddings")

clinical_ner = MedicalNerModel.pretrained("ner_diag_proc","es","clinical/models")\
    .setInputCols("sentence","token","word_embeddings")\
    .setOutputCol("ner")

ner_converter = NerConverter()\
    .setInputCols(['sentence', 'token', 'ner']) \
    .setOutputCol('ner_chunk')

nlp_pipeline = Pipeline(stages=[
    document_assembler,
    sentence_detector,
    tokenizer,
    word_embeddings,
    clinical_ner,
    ner_converter])
```

## 3. Create example inputs

```
# Enter examples as strings in this array
input_list = [
    """En el último año, el paciente ha sido sometido a una apendicectomía por apendicitis aguda , una artroplastia total de cadera izquierda por artrosis, un cambio de lente refractiva por catarata del ojo izquierdo y actualmente está programada una tomografía computarizada de abdomen y pelvis con contraste intravenoso para descartar la sospecha de cáncer de colon. Tiene antecedentes familiares de cáncer colorrectal, su padre tuvo cáncer de colon ascendente (hemicolectomía derecha)."""
]
```

## 4. Use the pipeline to create outputs
```
empty_df = spark.createDataFrame([['']]).toDF('text')

pipeline_model = nlp_pipeline.fit(empty_df)

df = spark.createDataFrame(pd.DataFrame({'text': input_list}))

result = pipeline_model.transform(df)
```

## 5. Visualize results

```
from sparknlp_display import NerVisualizer

NerVisualizer().display(
    result = result.collect()[0],
    label_col = 'ner_chunk',
    document_col = 'document'
)
```

Visualize outputs as a data frame

```
exploded = F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata'))
select_expression_0 = F.expr("cols['0']").alias("chunk")
select_expression_1 = F.expr("cols['1']['entity']").alias("ner_label")
result.select(exploded.alias("cols")) \
    .select(select_expression_0, select_expression_1).show(truncate=False)
result = result.toPandas()
```
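For quick experiments on individual strings, Spark NLP also provides `LightPipeline`, which wraps a fitted pipeline model and annotates plain Python strings without building a Spark DataFrame. A minimal sketch reusing the `pipeline_model` and `input_list` defined above (shown as an optional convenience, not a step from the original notebook):

```
from sparknlp.base import LightPipeline

# Annotate a single string without going through a Spark DataFrame.
light_model = LightPipeline(pipeline_model)
light_result = light_model.fullAnnotate(input_list[0])[0]

for chunk in light_result['ner_chunk']:
    print(chunk.result, chunk.metadata['entity'])
```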
``` import os import logging from web3 import Web3 import sys import time import hero.hero as heroes import hero.utils.utils as hero_utils import genes.gene_science as genes import auctions.sale.sale_auctions as sale_auctions import pandas as pd import web3 import numpy as np from tqdm import tqdm import time GWEI_MULTIPLIER = 1e+18 ABI_GENES = ''' [ {"constant":false,"inputs":[{"name":"_genes1","type":"uint256"},{"name":"_genes2","type":"uint256"},{"name":"_targetBlock","type":"uint256"}],"name":"mixGenes","outputs":[{"name":"","type":"uint256"}],"payable":false,"stateMutability":"nonpayable","type":"function"}, {"constant":true,"inputs":[{"name":"_traits","type":"uint8[]"}],"name":"encode","outputs":[{"name":"_genes","type":"uint256"}],"payable":false,"stateMutability":"pure","type":"function"}, {"constant":true,"inputs":[{"name":"_genes","type":"uint256"}],"name":"decode","outputs":[{"name":"","type":"uint8[]"}],"payable":false,"stateMutability":"pure","type":"function"}, {"constant":true,"inputs":[{"name":"_genes","type":"uint256"}],"name":"expressingTraits","outputs":[{"name":"","type":"uint8[12]"}],"payable":false,"stateMutability":"pure","type":"function"}, {"constant":true,"inputs":[],"name":"isGeneScience","outputs":[{"name":"","type":"bool"}],"payable":false,"stateMutability":"view","type":"function"}, {"inputs":[],"payable":false,"stateMutability":"nonpayable","type":"constructor"} ] ''' ABI_AUCTIONS = """ [ {"inputs":[{"internalType":"address","name":"_heroCoreAddress","type":"address"},{"internalType":"address","name":"_geneScienceAddress","type":"address"},{"internalType":"address","name":"_jewelTokenAddress","type":"address"},{"internalType":"address","name":"_gaiaTearsAddress","type":"address"},{"internalType":"address","name":"_statScienceAddress","type":"address"},{"internalType":"uint256","name":"_cut","type":"uint256"}],"stateMutability":"nonpayable","type":"constructor"}, {"anonymous":false,"inputs":[{"indexed":false,"internalType":"uint256","name":"auctionId","type":"uint256"},{"indexed":true,"internalType":"uint256","name":"tokenId","type":"uint256"}],"name":"AuctionCancelled","type":"event"}, {"anonymous":false,"inputs":[{"indexed":false,"internalType":"uint256","name":"auctionId","type":"uint256"},{"indexed":true,"internalType":"address","name":"owner","type":"address"},{"indexed":true,"internalType":"uint256","name":"tokenId","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"startingPrice","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"endingPrice","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"duration","type":"uint256"},{"indexed":false,"internalType":"address","name":"winner","type":"address"}],"name":"AuctionCreated","type":"event"}, {"anonymous":false,"inputs":[{"indexed":false,"internalType":"uint256","name":"auctionId","type":"uint256"},{"indexed":true,"internalType":"uint256","name":"tokenId","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"totalPrice","type":"uint256"},{"indexed":false,"internalType":"address","name":"winner","type":"address"}],"name":"AuctionSuccessful","type":"event"}, {"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"owner","type":"address"},{"indexed":false,"internalType":"uint256","name":"crystalId","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"heroId","type":"uint256"}],"name":"CrystalOpen","type":"event"}, 
{"anonymous":false,"inputs":[{"indexed":false,"internalType":"uint256","name":"crystalId","type":"uint256"},{"indexed":true,"internalType":"address","name":"owner","type":"address"},{"indexed":false,"internalType":"uint256","name":"summonerId","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"assistantId","type":"uint256"},{"indexed":false,"internalType":"uint16","name":"generation","type":"uint16"},{"indexed":false,"internalType":"uint256","name":"createdBlock","type":"uint256"},{"indexed":false,"internalType":"uint8","name":"summonerTears","type":"uint8"},{"indexed":false,"internalType":"uint8","name":"assistantTears","type":"uint8"},{"indexed":false,"internalType":"address","name":"bonusItem","type":"address"}],"name":"CrystalSummoned","type":"event"}, {"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"previousOwner","type":"address"},{"indexed":true,"internalType":"address","name":"newOwner","type":"address"}],"name":"OwnershipTransferred","type":"event"}, {"anonymous":false,"inputs":[{"indexed":false,"internalType":"address","name":"account","type":"address"}],"name":"Paused","type":"event"}, {"anonymous":false,"inputs":[{"indexed":true,"internalType":"bytes32","name":"role","type":"bytes32"},{"indexed":true,"internalType":"bytes32","name":"previousAdminRole","type":"bytes32"},{"indexed":true,"internalType":"bytes32","name":"newAdminRole","type":"bytes32"}],"name":"RoleAdminChanged","type":"event"}, {"anonymous":false,"inputs":[{"indexed":true,"internalType":"bytes32","name":"role","type":"bytes32"},{"indexed":true,"internalType":"address","name":"account","type":"address"},{"indexed":true,"internalType":"address","name":"sender","type":"address"}],"name":"RoleGranted","type":"event"}, {"anonymous":false,"inputs":[{"indexed":true,"internalType":"bytes32","name":"role","type":"bytes32"},{"indexed":true,"internalType":"address","name":"account","type":"address"},{"indexed":true,"internalType":"address","name":"sender","type":"address"}],"name":"RoleRevoked","type":"event"}, {"anonymous":false,"inputs":[{"indexed":false,"internalType":"address","name":"account","type":"address"}],"name":"Unpaused","type":"event"}, {"inputs":[],"name":"DEFAULT_ADMIN_ROLE","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"MODERATOR_ROLE","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"auctionHeroCore","outputs":[{"internalType":"contract IHeroCore","name":"","type":"address"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"baseCooldown","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"baseSummonFee","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"_tokenId","type":"uint256"},{"internalType":"uint256","name":"_bidAmount","type":"uint256"}],"name":"bid","outputs":[],"stateMutability":"nonpayable","type":"function"}, 
{"inputs":[{"components":[{"internalType":"uint256","name":"id","type":"uint256"},{"components":[{"internalType":"uint256","name":"summonedTime","type":"uint256"},{"internalType":"uint256","name":"nextSummonTime","type":"uint256"},{"internalType":"uint256","name":"summonerId","type":"uint256"},{"internalType":"uint256","name":"assistantId","type":"uint256"},{"internalType":"uint32","name":"summons","type":"uint32"},{"internalType":"uint32","name":"maxSummons","type":"uint32"}],"internalType":"struct IHeroTypes.SummoningInfo","name":"summoningInfo","type":"tuple"},{"components":[{"internalType":"uint256","name":"statGenes","type":"uint256"},{"internalType":"uint256","name":"visualGenes","type":"uint256"},{"internalType":"enum IHeroTypes.Rarity","name":"rarity","type":"uint8"},{"internalType":"bool","name":"shiny","type":"bool"},{"internalType":"uint16","name":"generation","type":"uint16"},{"internalType":"uint32","name":"firstName","type":"uint32"},{"internalType":"uint32","name":"lastName","type":"uint32"},{"internalType":"uint8","name":"shinyStyle","type":"uint8"},{"internalType":"uint8","name":"class","type":"uint8"},{"internalType":"uint8","name":"subClass","type":"uint8"}],"internalType":"struct IHeroTypes.HeroInfo","name":"info","type":"tuple"},{"components":[{"internalType":"uint256","name":"staminaFullAt","type":"uint256"},{"internalType":"uint256","name":"hpFullAt","type":"uint256"},{"internalType":"uint256","name":"mpFullAt","type":"uint256"},{"internalType":"uint16","name":"level","type":"uint16"},{"internalType":"uint64","name":"xp","type":"uint64"},{"internalType":"address","name":"currentQuest","type":"address"},{"internalType":"uint8","name":"sp","type":"uint8"},{"internalType":"enum IHeroTypes.HeroStatus","name":"status","type":"uint8"}],"internalType":"struct IHeroTypes.HeroState","name":"state","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hp","type":"uint16"},{"internalType":"uint16","name":"mp","type":"uint16"},{"internalType":"uint16","name":"stamina","type":"uint16"}],"internalType":"struct IHeroTypes.HeroStats","name":"stats","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hpSm","type":"uint16"},{"internalType":"uint16","name":"hpRg","type":"uint16"},{"internalType":"uint16","name":"hpLg","type":"uint16"},{"internalType":"uint16","name":"mpSm","type":"uint16"},{"internalType":"uint16","name":"mpRg","type":"uint16"},{"internalType":"uint16","name":"mpLg","type":"uint16"}],"internalType":"struct 
IHeroTypes.HeroStatGrowth","name":"primaryStatGrowth","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hpSm","type":"uint16"},{"internalType":"uint16","name":"hpRg","type":"uint16"},{"internalType":"uint16","name":"hpLg","type":"uint16"},{"internalType":"uint16","name":"mpSm","type":"uint16"},{"internalType":"uint16","name":"mpRg","type":"uint16"},{"internalType":"uint16","name":"mpLg","type":"uint16"}],"internalType":"struct IHeroTypes.HeroStatGrowth","name":"secondaryStatGrowth","type":"tuple"},{"components":[{"internalType":"uint16","name":"mining","type":"uint16"},{"internalType":"uint16","name":"gardening","type":"uint16"},{"internalType":"uint16","name":"foraging","type":"uint16"},{"internalType":"uint16","name":"fishing","type":"uint16"}],"internalType":"struct IHeroTypes.HeroProfessions","name":"professions","type":"tuple"}],"internalType":"struct IHeroTypes.Hero","name":"_hero","type":"tuple"}],"name":"calculateSummoningCost","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"_tokenId","type":"uint256"}],"name":"cancelAuction","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"uint256","name":"_tokenId","type":"uint256"}],"name":"cancelAuctionWhenPaused","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[],"name":"cooldownPerGen","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"_tokenId","type":"uint256"},{"internalType":"uint128","name":"_startingPrice","type":"uint128"},{"internalType":"uint128","name":"_endingPrice","type":"uint128"},{"internalType":"uint64","name":"_duration","type":"uint64"},{"internalType":"address","name":"_winner","type":"address"}],"name":"createAuction","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"uint256","name":"","type":"uint256"}],"name":"crystals","outputs":[{"internalType":"address","name":"owner","type":"address"},{"internalType":"uint256","name":"summonerId","type":"uint256"},{"internalType":"uint256","name":"assistantId","type":"uint256"},{"internalType":"uint16","name":"generation","type":"uint16"},{"internalType":"uint256","name":"createdBlock","type":"uint256"},{"internalType":"uint256","name":"heroId","type":"uint256"},{"internalType":"uint8","name":"summonerTears","type":"uint8"},{"internalType":"uint8","name":"assistantTears","type":"uint8"},{"internalType":"address","name":"bonusItem","type":"address"},{"internalType":"uint32","name":"maxSummons","type":"uint32"},{"internalType":"uint32","name":"firstName","type":"uint32"},{"internalType":"uint32","name":"lastName","type":"uint32"},{"internalType":"uint8","name":"shinyStyle","type":"uint8"}],"stateMutability":"view","type":"function"}, 
{"inputs":[{"internalType":"uint256","name":"_rarityRoll","type":"uint256"},{"internalType":"uint256","name":"_rarityMod","type":"uint256"}],"name":"determineRarity","outputs":[{"internalType":"enum IHeroTypes.Rarity","name":"","type":"uint8"}],"stateMutability":"pure","type":"function"}, {"inputs":[],"name":"enabled","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"randomNumber","type":"uint256"},{"internalType":"uint256","name":"digits","type":"uint256"},{"internalType":"uint256","name":"offset","type":"uint256"}],"name":"extractNumber","outputs":[{"internalType":"uint256","name":"result","type":"uint256"}],"stateMutability":"pure","type":"function"}, {"inputs":[{"internalType":"uint256","name":"_tokenId","type":"uint256"}],"name":"getAuction","outputs":[{"internalType":"uint256","name":"auctionId","type":"uint256"},{"internalType":"address","name":"seller","type":"address"},{"internalType":"uint256","name":"startingPrice","type":"uint256"},{"internalType":"uint256","name":"endingPrice","type":"uint256"},{"internalType":"uint256","name":"duration","type":"uint256"},{"internalType":"uint256","name":"startedAt","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"_crystalId","type":"uint256"}],"name":"getCrystal","outputs":[{"components":[{"internalType":"address","name":"owner","type":"address"},{"internalType":"uint256","name":"summonerId","type":"uint256"},{"internalType":"uint256","name":"assistantId","type":"uint256"},{"internalType":"uint16","name":"generation","type":"uint16"},{"internalType":"uint256","name":"createdBlock","type":"uint256"},{"internalType":"uint256","name":"heroId","type":"uint256"},{"internalType":"uint8","name":"summonerTears","type":"uint8"},{"internalType":"uint8","name":"assistantTears","type":"uint8"},{"internalType":"address","name":"bonusItem","type":"address"},{"internalType":"uint32","name":"maxSummons","type":"uint32"},{"internalType":"uint32","name":"firstName","type":"uint32"},{"internalType":"uint32","name":"lastName","type":"uint32"},{"internalType":"uint8","name":"shinyStyle","type":"uint8"}],"internalType":"struct ICrystalTypes.HeroCrystal","name":"","type":"tuple"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"_tokenId","type":"uint256"}],"name":"getCurrentPrice","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"}],"name":"getRoleAdmin","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"address","name":"_address","type":"address"}],"name":"getUserAuctions","outputs":[{"internalType":"uint256[]","name":"","type":"uint256[]"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"address","name":"_address","type":"address"}],"name":"getUserCrystals","outputs":[{"internalType":"uint256[]","name":"","type":"uint256[]"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"grantRole","outputs":[],"stateMutability":"nonpayable","type":"function"}, 
{"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"hasRole","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"increasePerGen","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"increasePerSummon","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"_tokenId","type":"uint256"}],"name":"isOnAuction","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"nonpayable","type":"function"}, {"inputs":[],"name":"jewelToken","outputs":[{"internalType":"contract IJewelToken","name":"","type":"address"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"newSummonCooldown","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"_crystalId","type":"uint256"}],"name":"open","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"nonpayable","type":"function"}, {"inputs":[],"name":"owner","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"ownerCut","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"paused","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"_crystalId","type":"uint256"}],"name":"rechargeCrystal","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[],"name":"renounceOwnership","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"renounceRole","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"revokeRole","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address[]","name":"_feeAddresses","type":"address[]"},{"internalType":"uint256[]","name":"_feePercents","type":"uint256[]"}],"name":"setFees","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address","name":"_geneScienceAddress","type":"address"}],"name":"setGeneScienceAddress","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address","name":"_statScienceAddress","type":"address"}],"name":"setStatScienceAddress","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"uint256","name":"_newSummonCooldown","type":"uint256"},{"internalType":"uint256","name":"_baseCooldown","type":"uint256"},{"internalType":"uint256","name":"_cooldownPerGen","type":"uint256"}],"name":"setSummonCooldowns","outputs":[],"stateMutability":"nonpayable","type":"function"}, 
{"inputs":[{"internalType":"uint256","name":"_baseSummonFee","type":"uint256"},{"internalType":"uint256","name":"_increasePerSummon","type":"uint256"},{"internalType":"uint256","name":"_increasePerGen","type":"uint256"}],"name":"setSummonFees","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[],"name":"statScience","outputs":[{"internalType":"contract IStatScience","name":"","type":"address"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"_summonerId","type":"uint256"},{"internalType":"uint256","name":"_assistantId","type":"uint256"},{"internalType":"uint16","name":"_summonerTears","type":"uint16"},{"internalType":"uint16","name":"_assistantTears","type":"uint16"},{"internalType":"address","name":"_bonusItem","type":"address"}],"name":"summonCrystal","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"bytes4","name":"interfaceId","type":"bytes4"}],"name":"supportsInterface","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"toggleEnabled","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address","name":"newOwner","type":"address"}],"name":"transferOwnership","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address","name":"","type":"address"},{"internalType":"uint256","name":"","type":"uint256"}],"name":"userAuctions","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"address","name":"","type":"address"},{"internalType":"uint256","name":"","type":"uint256"}],"name":"userCrystals","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"blockNumber","type":"uint256"}],"name":"vrf","outputs":[{"internalType":"bytes32","name":"result","type":"bytes32"}],"stateMutability":"view","type":"function"} ] """ ABI_HEROES = """ [ {"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"owner","type":"address"},{"indexed":true,"internalType":"address","name":"approved","type":"address"},{"indexed":true,"internalType":"uint256","name":"tokenId","type":"uint256"}],"name":"Approval","type":"event"}, {"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"owner","type":"address"},{"indexed":true,"internalType":"address","name":"operator","type":"address"},{"indexed":false,"internalType":"bool","name":"approved","type":"bool"}],"name":"ApprovalForAll","type":"event"}, {"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"owner","type":"address"},{"indexed":false,"internalType":"uint256","name":"heroId","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"summonerId","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"assistantId","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"statGenes","type":"uint256"},{"indexed":false,"internalType":"uint256","name":"visualGenes","type":"uint256"}],"name":"HeroSummoned","type":"event"}, {"anonymous":false,"inputs":[{"indexed":false,"internalType":"address","name":"account","type":"address"}],"name":"Paused","type":"event"}, 
{"anonymous":false,"inputs":[{"indexed":true,"internalType":"bytes32","name":"role","type":"bytes32"},{"indexed":true,"internalType":"bytes32","name":"previousAdminRole","type":"bytes32"},{"indexed":true,"internalType":"bytes32","name":"newAdminRole","type":"bytes32"}],"name":"RoleAdminChanged","type":"event"}, {"anonymous":false,"inputs":[{"indexed":true,"internalType":"bytes32","name":"role","type":"bytes32"},{"indexed":true,"internalType":"address","name":"account","type":"address"},{"indexed":true,"internalType":"address","name":"sender","type":"address"}],"name":"RoleGranted","type":"event"}, {"anonymous":false,"inputs":[{"indexed":true,"internalType":"bytes32","name":"role","type":"bytes32"},{"indexed":true,"internalType":"address","name":"account","type":"address"},{"indexed":true,"internalType":"address","name":"sender","type":"address"}],"name":"RoleRevoked","type":"event"}, {"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"from","type":"address"},{"indexed":true,"internalType":"address","name":"to","type":"address"},{"indexed":true,"internalType":"uint256","name":"tokenId","type":"uint256"}],"name":"Transfer","type":"event"}, {"anonymous":false,"inputs":[{"indexed":false,"internalType":"address","name":"account","type":"address"}],"name":"Unpaused","type":"event"}, {"inputs":[],"name":"DEFAULT_ADMIN_ROLE","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"HERO_MODERATOR_ROLE","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"MINTER_ROLE","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"MODERATOR_ROLE","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"PAUSER_ROLE","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"address","name":"to","type":"address"},{"internalType":"uint256","name":"tokenId","type":"uint256"}],"name":"approve","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address","name":"owner","type":"address"}],"name":"balanceOf","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"tokenId","type":"uint256"}],"name":"burn","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"uint256","name":"_statGenes","type":"uint256"},{"internalType":"uint256","name":"_visualGenes","type":"uint256"}, {"internalType":"enum IHeroTypes.Rarity","name":"_rarity","type":"uint8"}, 
{"internalType":"bool","name":"_shiny","type":"bool"},{"components":[{"internalType":"address","name":"owner","type":"address"},{"internalType":"uint256","name":"summonerId","type":"uint256"},{"internalType":"uint256","name":"assistantId","type":"uint256"},{"internalType":"uint16","name":"generation","type":"uint16"},{"internalType":"uint256","name":"createdBlock","type":"uint256"},{"internalType":"uint256","name":"heroId","type":"uint256"},{"internalType":"uint8","name":"summonerTears","type":"uint8"},{"internalType":"uint8","name":"assistantTears","type":"uint8"},{"internalType":"address","name":"bonusItem","type":"address"},{"internalType":"uint32","name":"maxSummons","type":"uint32"},{"internalType":"uint32","name":"firstName","type":"uint32"},{"internalType":"uint32","name":"lastName","type":"uint32"},{"internalType":"uint8","name":"shinyStyle","type":"uint8"}],"internalType":"struct ICrystalTypes.HeroCrystal","name":"_crystal","type":"tuple"}],"name":"createHero","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"uint256","name":"tokenId","type":"uint256"}],"name":"getApproved","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"_id","type":"uint256"}],"name":"getHero","outputs":[{"components":[{"internalType":"uint256","name":"id","type":"uint256"},{"components":[{"internalType":"uint256","name":"summonedTime","type":"uint256"},{"internalType":"uint256","name":"nextSummonTime","type":"uint256"},{"internalType":"uint256","name":"summonerId","type":"uint256"},{"internalType":"uint256","name":"assistantId","type":"uint256"},{"internalType":"uint32","name":"summons","type":"uint32"},{"internalType":"uint32","name":"maxSummons","type":"uint32"}],"internalType":"struct IHeroTypes.SummoningInfo","name":"summoningInfo","type":"tuple"},{"components":[{"internalType":"uint256","name":"statGenes","type":"uint256"},{"internalType":"uint256","name":"visualGenes","type":"uint256"},{"internalType":"enum IHeroTypes.Rarity","name":"rarity","type":"uint8"},{"internalType":"bool","name":"shiny","type":"bool"},{"internalType":"uint16","name":"generation","type":"uint16"},{"internalType":"uint32","name":"firstName","type":"uint32"},{"internalType":"uint32","name":"lastName","type":"uint32"},{"internalType":"uint8","name":"shinyStyle","type":"uint8"},{"internalType":"uint8","name":"class","type":"uint8"},{"internalType":"uint8","name":"subClass","type":"uint8"}],"internalType":"struct IHeroTypes.HeroInfo","name":"info","type":"tuple"},{"components":[{"internalType":"uint256","name":"staminaFullAt","type":"uint256"},{"internalType":"uint256","name":"hpFullAt","type":"uint256"},{"internalType":"uint256","name":"mpFullAt","type":"uint256"},{"internalType":"uint16","name":"level","type":"uint16"},{"internalType":"uint64","name":"xp","type":"uint64"},{"internalType":"address","name":"currentQuest","type":"address"},{"internalType":"uint8","name":"sp","type":"uint8"},{"internalType":"enum IHeroTypes.HeroStatus","name":"status","type":"uint8"}],"internalType":"struct 
IHeroTypes.HeroState","name":"state","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hp","type":"uint16"},{"internalType":"uint16","name":"mp","type":"uint16"},{"internalType":"uint16","name":"stamina","type":"uint16"}],"internalType":"struct IHeroTypes.HeroStats","name":"stats","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hpSm","type":"uint16"},{"internalType":"uint16","name":"hpRg","type":"uint16"},{"internalType":"uint16","name":"hpLg","type":"uint16"},{"internalType":"uint16","name":"mpSm","type":"uint16"},{"internalType":"uint16","name":"mpRg","type":"uint16"},{"internalType":"uint16","name":"mpLg","type":"uint16"}],"internalType":"struct IHeroTypes.HeroStatGrowth","name":"primaryStatGrowth","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hpSm","type":"uint16"},{"internalType":"uint16","name":"hpRg","type":"uint16"},{"internalType":"uint16","name":"hpLg","type":"uint16"},{"internalType":"uint16","name":"mpSm","type":"uint16"},{"internalType":"uint16","name":"mpRg","type":"uint16"},{"internalType":"uint16","name":"mpLg","type":"uint16"}],"internalType":"struct IHeroTypes.HeroStatGrowth","name":"secondaryStatGrowth","type":"tuple"},{"components":[{"internalType":"uint16","name":"mining","type":"uint16"},{"internalType":"uint16","name":"gardening","type":"uint16"},{"internalType":"uint16","name":"foraging","type":"uint16"},{"internalType":"uint16","name":"fishing","type":"uint16"}],"internalType":"struct IHeroTypes.HeroProfessions","name":"professions","type":"tuple"}],"internalType":"struct IHeroTypes.Hero","name":"","type":"tuple"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"}],"name":"getRoleAdmin","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"}, 
{"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"uint256","name":"index","type":"uint256"}],"name":"getRoleMember","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"}],"name":"getRoleMemberCount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"address","name":"_address","type":"address"}],"name":"getUserHeroes","outputs":[{"internalType":"uint256[]","name":"","type":"uint256[]"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"grantRole","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"hasRole","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"string","name":"_name","type":"string"},{"internalType":"string","name":"_symbol","type":"string"},{"internalType":"string","name":"_url","type":"string"},{"internalType":"address","name":"_statScienceAddress","type":"address"}],"name":"initialize","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"string","name":"name","type":"string"},{"internalType":"string","name":"symbol","type":"string"},{"internalType":"string","name":"baseTokenURI","type":"string"}],"name":"initialize","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address","name":"owner","type":"address"},{"internalType":"address","name":"operator","type":"address"}],"name":"isApprovedForAll","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"address","name":"to","type":"address"}],"name":"mint","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[],"name":"name","outputs":[{"internalType":"string","name":"","type":"string"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"tokenId","type":"uint256"}],"name":"ownerOf","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"pause","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[],"name":"paused","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"renounceRole","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"revokeRole","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address","name":"from","type":"address"},{"internalType":"address","name":"to","type":"address"},{"internalType":"uint256","name":"tokenId","type":"uint256"}],"name":"safeTransferFrom","outputs":[],"stateMutability":"nonpayable","type":"function"}, 
{"inputs":[{"internalType":"address","name":"from","type":"address"},{"internalType":"address","name":"to","type":"address"},{"internalType":"uint256","name":"tokenId","type":"uint256"},{"internalType":"bytes","name":"_data","type":"bytes"}],"name":"safeTransferFrom","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address","name":"operator","type":"address"},{"internalType":"bool","name":"approved","type":"bool"}],"name":"setApprovalForAll","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address","name":"_statScienceAddress","type":"address"}],"name":"setStatScienceAddress","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"bytes4","name":"interfaceId","type":"bytes4"}],"name":"supportsInterface","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"symbol","outputs":[{"internalType":"string","name":"","type":"string"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"index","type":"uint256"}],"name":"tokenByIndex","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"owner","type":"address"},{"internalType":"uint256","name":"index","type":"uint256"}],"name":"tokenOfOwnerByIndex","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"tokenId","type":"uint256"}],"name":"tokenURI","outputs":[{"internalType":"string","name":"","type":"string"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"totalSupply","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"address","name":"from","type":"address"},{"internalType":"address","name":"to","type":"address"},{"internalType":"uint256","name":"tokenId","type":"uint256"}],"name":"transferFrom","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[],"name":"unpause","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"components":[{"internalType":"uint256","name":"id","type":"uint256"},{"components":[{"internalType":"uint256","name":"summonedTime","type":"uint256"},{"internalType":"uint256","name":"nextSummonTime","type":"uint256"},{"internalType":"uint256","name":"summonerId","type":"uint256"},{"internalType":"uint256","name":"assistantId","type":"uint256"},{"internalType":"uint32","name":"summons","type":"uint32"},{"internalType":"uint32","name":"maxSummons","type":"uint32"}],"internalType":"struct IHeroTypes.SummoningInfo","name":"summoningInfo","type":"tuple"},{"components":[{"internalType":"uint256","name":"statGenes","type":"uint256"},{"internalType":"uint256","name":"visualGenes","type":"uint256"},{"internalType":"enum IHeroTypes.Rarity","name":"rarity","type":"uint8"},{"internalType":"bool","name":"shiny","type":"bool"},{"internalType":"uint16","name":"generation","type":"uint16"},{"internalType":"uint32","name":"firstName","type":"uint32"},{"internalType":"uint32","name":"lastName","type":"uint32"},{"internalType":"uint8","name":"shinyStyle","type":"uint8"},{"internalType":"uint8","name":"class","type":"uint8"},{"internalType":"uint8","name":"subClass","type":"uint8"}],"internalType":"struct 
IHeroTypes.HeroInfo","name":"info","type":"tuple"},{"components":[{"internalType":"uint256","name":"staminaFullAt","type":"uint256"},{"internalType":"uint256","name":"hpFullAt","type":"uint256"},{"internalType":"uint256","name":"mpFullAt","type":"uint256"},{"internalType":"uint16","name":"level","type":"uint16"},{"internalType":"uint64","name":"xp","type":"uint64"},{"internalType":"address","name":"currentQuest","type":"address"},{"internalType":"uint8","name":"sp","type":"uint8"},{"internalType":"enum IHeroTypes.HeroStatus","name":"status","type":"uint8"}],"internalType":"struct IHeroTypes.HeroState","name":"state","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hp","type":"uint16"},{"internalType":"uint16","name":"mp","type":"uint16"},{"internalType":"uint16","name":"stamina","type":"uint16"}],"internalType":"struct IHeroTypes.HeroStats","name":"stats","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hpSm","type":"uint16"},{"internalType":"uint16","name":"hpRg","type":"uint16"},{"internalType":"uint16","name":"hpLg","type":"uint16"},{"internalType":"uint16","name":"mpSm","type":"uint16"},{"internalType":"uint16","name":"mpRg","type":"uint16"},{"internalType":"uint16","name":"mpLg","type":"uint16"}],"internalType":"struct IHeroTypes.HeroStatGrowth","name":"primaryStatGrowth","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hpSm","type":"uint16"},{"internalType":"uint16","name":"hpRg","type":"uint16"},{"internalType":"uint16","name":"hpLg","type":"uint16"},{"internalType":"uint16","name":"mpSm","type":"uint16"},{"internalType":"uint16","name":"mpRg","type":"uint16"},{"internalType":"uint16","name":"mpLg","type":"uint16"}],"internalType":"struct IHeroTypes.HeroStatGrowth","name":"secondaryStatGrowth","type":"tuple"},{"components":[{"internalType":"uint16","name":"mining","type":"uint16"},{"internalType":"uint16","name":"gardening","type":"uint16"},{"internalType":"uint16","name":"foraging","type":"uint16"},{"internalType":"uint16","name":"fishing","type":"uint16"}],"internalType":"struct 
IHeroTypes.HeroProfessions","name":"professions","type":"tuple"}],"internalType":"struct IHeroTypes.Hero","name":"_hero","type":"tuple"}],"name":"updateHero","outputs":[],"stateMutability":"nonpayable","type":"function"},
{"inputs":[{"internalType":"address","name":"","type":"address"},{"internalType":"uint256","name":"","type":"uint256"}],"name":"userHeroes","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}
]
"""

CONTRACT_ADDRESS_GENES = '0x6b696520997d3eaee602d348f380ca1a0f1252d5'
CONTRACT_ADDRESS_HERO = '0x5F753dcDf9b1AD9AabC1346614D1f4746fd6Ce5C'
CONTRACT_ADDRESS_RECESSIVES = '0x5100Bd31b822371108A0f63DCFb6594b9919Eaf4'
CONTRACT_AUCTIONS = '0x13a65B9F8039E2c032Bc022171Dc05B30c3f2892'

DF_COLUMNS = ['hero_id', 'gen', 'left_summons', 'max_summons', 'rarity', 'level', 'xp',
              'R0_main', 'R1_main', 'R2_main', 'R3_main',
              'R0_sub', 'R1_sub', 'R2_sub', 'R3_sub',
              'R0_prof', 'R1_prof', 'R2_prof', 'R3_prof',
              'R0_blue', 'R1_blue', 'R2_blue', 'R3_blue',
              'R0_green', 'R1_green', 'R2_green', 'R3_green']

RPC_SERVER = 'https://api.fuzz.fi'
POKT_SERVER = 'https://harmony-0-rpc.gateway.pokt.network'
GRAPHQL = 'http://graph3.defikingdoms.com/subgraphs/name/defikingdoms/apiv5'
MULTIPLIER = 6500000000000000000000 / 6500

hero_utils.rarity

def get_hero(hero_id, contract_heroes):
    offspring = contract_heroes.functions.getHero(hero_id).call()
    return offspring

def decode_genes(hero_data):
    w3 = Web3(Web3.HTTPProvider(RPC_SERVER))
    contract_address = Web3.toChecksumAddress(CONTRACT_ADDRESS_GENES)
    contract = w3.eth.contract(contract_address, abi=ABI_GENES)
    try:
        offspring_stat_genes = contract.functions.decode(hero_data[2][0]).call()
    except:
        time.sleep(5)
        offspring_stat_genes = contract.functions.decode(hero_data[2][0]).call()
    return offspring_stat_genes

def hero_genes_human_readable(decoded_genes, hero_other_data):
    main_classes = [hero_utils._class[el] for el in decoded_genes[-4:]]
    sub_classes = [hero_utils._class[el] for el in decoded_genes[-8:-4]]
    professions = [hero_utils.professions[el] for el in decoded_genes[-12:-8]]
    blue_boost = [hero_utils.stats[el] for el in decoded_genes[-36:-32]]
    green_boost = [hero_utils.stats[el] for el in decoded_genes[-32:-28]]
    hero_data = [*hero_other_data, *main_classes, *sub_classes, *professions, *blue_boost, *green_boost]
    return hero_data

def get_hero_generation(hero_data):
    gen = hero_data[2][4]
    return gen

def get_hero_max_summons(hero_data):
    max_summons = hero_data[1][-1]
    return max_summons if max_summons < 11 else np.nan

def get_hero_summoned_number(hero_data):
    summoned_number = hero_data[1][-2]
    return summoned_number

def get_hero_level(hero_data):
    summoned_number = hero_data[3][3]
    return summoned_number

def get_hero_xp(hero_data):
    summoned_number = hero_data[3][4]
    return summoned_number

def get_hero_left_summons(hero_data):
    max_summons = get_hero_max_summons(hero_data)
    if max_summons:
        summoned = get_hero_summoned_number(hero_data)
        return max_summons - summoned
    else:
        return np.nan

def get_hero_rarity(hero_data):
    rarity = hero_utils.rarity[hero_data[2][2]]
    return rarity

def rarity_df(hero_id, contract_heroes):
    hero_data = get_hero(hero_id, contract_heroes)
    return get_hero_rarity(hero_data)

def set_up_contract(contract, abi):
    w3 = Web3(Web3.HTTPProvider(RPC_SERVER))
    contract_address = Web3.toChecksumAddress(contract)
    contract = w3.eth.contract(contract_address, abi=abi)
    return contract

def check_hero_on_sale(hero_id, contract):
    offspring = contract.functions.isOnAuction(hero_id).call()
    return offspring
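# Reference: the positional indices used by the helpers above follow the tuple layout
# returned by getHero() (see ABI_HEROES):
#   hero_data[1] -> summoningInfo: (summonedTime, nextSummonTime, summonerId, assistantId, summons, maxSummons)
#   hero_data[2] -> info:  (statGenes, visualGenes, rarity, shiny, generation, firstName, lastName, shinyStyle, class, subClass)
#   hero_data[3] -> state: (staminaFullAt, hpFullAt, mpFullAt, level, xp, currentQuest, sp, status)
# e.g. hero_data[2][4] is the generation, hero_data[2][2] the rarity,
# hero_data[1][-1] the max summons, and hero_data[3][3] / hero_data[3][4] the level and xp.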
def get_current_price(hero_id, contract):
    try:
        offspring = contract.functions.getCurrentPrice(hero_id).call()
        return offspring / GWEI_MULTIPLIER
    except Exception as e:
        return None

def get_heroes_genes(id_list, contract_heroes, save=True):
    hero_list = []
    for id_ in tqdm(id_list):
        try:
            hero_data = get_hero(id_, contract_heroes)
        except:
            time.sleep(10)
            contract_heroes = set_up_contract(CONTRACT_ADDRESS_HERO, ABI_HEROES)
            hero_data = get_hero(id_, contract_heroes)
        hero_other = [id_,
                      get_hero_generation(hero_data),
                      get_hero_left_summons(hero_data),
                      get_hero_max_summons(hero_data),
                      get_hero_rarity(hero_data),
                      get_hero_level(hero_data),
                      get_hero_xp(hero_data)
                      ]
        hero_genes = decode_genes(hero_data)
        readable_genes = hero_genes_human_readable(hero_genes, hero_other)
        hero_list.append(readable_genes)
        if id_ % 100 == 0 and save:
            df = pd.DataFrame(hero_list, columns=DF_COLUMNS)
            df = add_purity_score(df)
            df.to_csv(os.path.join('data', 'hero_genes.csv'), index=False)
    df = pd.DataFrame(hero_list, columns=DF_COLUMNS)
    df = add_purity_score(df)
    if save:
        df.to_csv(os.path.join('data', 'hero_genes.csv'), index=False)
    return df

def add_purity_score(df):
    # purity_score reaches 7 when R1, R2 and R3 all match the dominant R0 main class
    df['purity_score'] = 4*df.loc[:, 'R1_main'].eq(df.loc[:, 'R0_main'], axis=0) + \
                         2*df.loc[:, 'R2_main'].eq(df.loc[:, 'R0_main'], axis=0) + \
                         1*df.loc[:, 'R3_main'].eq(df.loc[:, 'R0_main'], axis=0)
    return df

def get_latest_block_number(contract_filter):
    current_entry = []
    while not current_entry:
        current_entry = contract_filter.get_all_entries()
    return current_entry[0].blockNumber

def get_latest_auctions_created(contract_auction, block_start, block_end='latest'):
    _filter = contract_auction.events.AuctionCreated.createFilter(fromBlock=block_start, toBlock=block_end)
    entries = _filter.get_all_entries()
    return entries

def get_latest_hero_id(contract_heroes, starting_block):
    current_entry = []
    filter_hero = contract_heroes.events.HeroSummoned.createFilter(fromBlock=starting_block-100, toBlock='latest')
    while not current_entry:
        current_entry = filter_hero.get_all_entries()
    return current_entry[-1].args.heroId
```

## 1. Update heroes (list of all available heroes)

```
def update_hero_genes(current_block, contract_heroes):
    df_prev = pd.read_csv(os.path.join('data', 'hero_genes_all.csv'))
    max_id_prev = max(df_prev.hero_id)
    max_id_cur = get_latest_hero_id(contract_heroes, current_block)
    for i in range(max_id_prev+1, max_id_cur+1, 100):
        df_prev = pd.read_csv(os.path.join('data', 'hero_genes_all.csv'))
        df_genes = get_heroes_genes(range(i+1, i+101), contract_heroes, save=False)
        df_all = df_prev.append(df_genes).reset_index(drop=True)
        df_all.to_csv(os.path.join('data', 'hero_genes_all.csv'), index=False)

def update_pure_heroes(threshold=4):
    df = pd.read_csv(os.path.join('data', 'hero_genes_all.csv'))
    df = df[(df['purity_score'] >= threshold)]
    df.to_csv('pure_all.csv', index=False)

def get_listed_pure_heroes(contract_heroes, classes_to_get: list, rarities_to_get: list, min_summons: int, professions=None):
    '''
    classes_to_get -> list of classes you want to analyze
    rarities_to_get -> list of rarities you want to get
    min_summons -> minimal number of summons left
    professions -> list of professions you want to filter on (optional)
    '''
    df = pd.read_csv('pure_all.csv')
    df = df[df['R0_main'].isin(classes_to_get)]
    df = df[df['rarity'].isin(rarities_to_get)]
    df = df[df['left_summons']>=min_summons]
    df['currentPrice'] = df['hero_id'].apply(lambda x: get_current_price(x, contract_heroes))
    df = df.dropna(subset=['currentPrice']).sort_values('left_summons', ascending=False)
    if professions:
        df = df[df['R0_prof'].isin(professions)]
    return df

def set_up_listed_pure_heroes(contract_heroes, min_summons=4):
    df = pd.read_csv('pure_all.csv')
    df = df[df['left_summons']>=min_summons]
    print(df.shape)
    df['currentPrice'] = df['hero_id'].apply(lambda x: get_current_price(x, contract_heroes))
    df = df.dropna(subset=['currentPrice']).sort_values('left_summons', ascending=False)
    print(df)
    df.to_csv('pure_listed.csv', index=False)

def update_listed_pure_heroes():
    # placeholder pipeline: these helpers are not defined in this notebook yet
    remove_sold_heroes()
    remove_canceled_heroes()
    add_new_listings_with_alert()
    pass

def get_hero(hero_id, contract_heroes):
    # redefined with a retry: rebuild the contract and call once more on RPC errors
    try:
        offspring = contract_heroes.functions.getHero(hero_id).call()
        return offspring
    except:
        time.sleep(10)
        contract_heroes = set_up_contract(CONTRACT_ADDRESS_HERO, ABI_HEROES)
        offspring = contract_heroes.functions.getHero(hero_id).call()
        return offspring

contract_heroes

contract_auction = set_up_contract(CONTRACT_AUCTIONS, ABI_AUCTIONS)
contract_heroes = set_up_contract(CONTRACT_ADDRESS_HERO, ABI_HEROES)

# _filter_latest = contract_auction.events.AuctionCreated.createFilter(fromBlock='latest', toBlock='latest')
# current_block = get_latest_block_number(_filter_latest)
# max_id_cur = get_latest_hero_id(contract_heroes, current_block)
# update_hero_genes(current_block, contract_heroes)
# update_pure_heroes()
# set_up_listed_pure_heroes()

df = get_heroes_genes(range(1, 2000), contract_heroes, save=False)

hero_ids = df['hero_id']
prices = []
for hero_id in tqdm(hero_ids):
    prices.append(get_current_price(hero_id, contract_auction))
df['price'] = prices
df

df = df.dropna(subset=['price'])
df = df.drop(['rarity'], axis=1)
df.to_csv('gen0_sales.csv', index=False)

pd.set_option("display.max_columns", 999)
df.sort_values('price', ascending=True)

df_genes = get_heroes_genes(range(1, max_id_cur), contract_heroes, save=True)  # requires max_id_cur from the commented-out block above
update_pure_heroes()

%%time
set_up_listed_pure_heroes(contract_auction)

df = pd.read_csv('pure_listed.csv')
df

df_prev = pd.read_csv(os.path.join('data', 'hero_genes_all.csv'))
df_all = df_prev.append(df_genes).reset_index(drop=True)
df_all
df_all.to_csv(os.path.join('data', 'hero_genes_all.csv'), index=False)
```

## 2. Update pure heroes

```
df = pd.read_csv(os.path.join('data', 'hero_genes_all.csv'))

def filter_pure_heroes(df, threshold=4, get_rarity=True):
    '''
    threshold -> minimal purity score (default 4, i.e. at least R0_main==R1_main)
    get_rarity -> add rarity of heroes to the dataframe (True / False, can take a lot of time)
    '''
    df = df[(df['purity_score'] >= threshold)]
    df.to_csv('pure_all.csv', index=False)
    return df

df = filter_pure_heroes(df)
```

## 3. Check listed pure heroes

This part takes the least amount of time to run, so if you don't want to update the list of heroes you can use only this section. It runs a current-price check (which takes some time), so it is better not to run it on the whole dataset (specify the classes you want to get).
TODO: use the ABI instead of GraphQL during the price check

```
def get_listed_pure_heroes(contract_heroes, classes_to_get: list, rarities_to_get: list, min_summons: int, professions=None):
    '''
    classes_to_get -> list of classes you want to analyze
    rarities_to_get -> list of rarities you want to get
    min_summons -> minimal number of summons left
    professions -> list of professions you want to filter on (optional)
    '''
    df = pd.read_csv('pure_all.csv')
    df = df[df['R0_main'].isin(classes_to_get)]
    df = df[df['rarity'].isin(rarities_to_get)]
    df = df[df['left_summons']>=min_summons]
    df['currentPrice'] = df['hero_id'].apply(lambda x: get_current_price(x, contract_heroes))
    df = df.dropna(subset=['currentPrice']).sort_values('left_summons', ascending=False)
    if professions:
        df = df[df['R0_prof'].isin(professions)]
    return df

df = pd.read_csv('pure_all.csv')

# df = get_listed_pure_heroes(df, ['knight', 'warrior', 'thief', 'archer', 'wizard', 'priest', 'monk', 'pirate', 'paladin', 'darkKnight', 'ninja', 'summoner', 'sage'],
#                             ['common', 'uncommon', 'rare', 'legendary', 'mythic'], 4)

df[(df['R0_prof']=='gardening') & ((df['R0_main']=='wizard') | (df['R0_main']=='priest'))]
df[(df['R0_prof']=='gardening')].head(50)

df.to_csv('pure_on_sales.csv', index=False)
```
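For reference, a minimal usage sketch of the section-3 helper. The class, rarity and profession strings follow the `hero_utils` mappings used elsewhere in this notebook, and `contract_auction` is assumed to have been created with `set_up_contract(CONTRACT_AUCTIONS, ABI_AUCTIONS)` as in section 1 (it is the contract whose `getCurrentPrice` the price check calls):

```
# Hypothetical example run; adjust classes/rarities/professions to taste.
listed = get_listed_pure_heroes(
    contract_auction,                                # used by get_current_price()
    classes_to_get=['wizard', 'priest'],             # pure R0 main classes to keep
    rarities_to_get=['rare', 'legendary', 'mythic'],
    min_summons=4,                                   # at least 4 summons left
    professions=['gardening'],                       # optional R0 profession filter
)
listed.head(20)
```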
IHeroTypes.HeroState","name":"state","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hp","type":"uint16"},{"internalType":"uint16","name":"mp","type":"uint16"},{"internalType":"uint16","name":"stamina","type":"uint16"}],"internalType":"struct IHeroTypes.HeroStats","name":"stats","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hpSm","type":"uint16"},{"internalType":"uint16","name":"hpRg","type":"uint16"},{"internalType":"uint16","name":"hpLg","type":"uint16"},{"internalType":"uint16","name":"mpSm","type":"uint16"},{"internalType":"uint16","name":"mpRg","type":"uint16"},{"internalType":"uint16","name":"mpLg","type":"uint16"}],"internalType":"struct IHeroTypes.HeroStatGrowth","name":"primaryStatGrowth","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hpSm","type":"uint16"},{"internalType":"uint16","name":"hpRg","type":"uint16"},{"internalType":"uint16","name":"hpLg","type":"uint16"},{"internalType":"uint16","name":"mpSm","type":"uint16"},{"internalType":"uint16","name":"mpRg","type":"uint16"},{"internalType":"uint16","name":"mpLg","type":"uint16"}],"internalType":"struct IHeroTypes.HeroStatGrowth","name":"secondaryStatGrowth","type":"tuple"},{"components":[{"internalType":"uint16","name":"mining","type":"uint16"},{"internalType":"uint16","name":"gardening","type":"uint16"},{"internalType":"uint16","name":"foraging","type":"uint16"},{"internalType":"uint16","name":"fishing","type":"uint16"}],"internalType":"struct IHeroTypes.HeroProfessions","name":"professions","type":"tuple"}],"internalType":"struct IHeroTypes.Hero","name":"","type":"tuple"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"}],"name":"getRoleAdmin","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"}, 
{"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"uint256","name":"index","type":"uint256"}],"name":"getRoleMember","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"}],"name":"getRoleMemberCount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"address","name":"_address","type":"address"}],"name":"getUserHeroes","outputs":[{"internalType":"uint256[]","name":"","type":"uint256[]"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"grantRole","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"hasRole","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"string","name":"_name","type":"string"},{"internalType":"string","name":"_symbol","type":"string"},{"internalType":"string","name":"_url","type":"string"},{"internalType":"address","name":"_statScienceAddress","type":"address"}],"name":"initialize","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"string","name":"name","type":"string"},{"internalType":"string","name":"symbol","type":"string"},{"internalType":"string","name":"baseTokenURI","type":"string"}],"name":"initialize","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address","name":"owner","type":"address"},{"internalType":"address","name":"operator","type":"address"}],"name":"isApprovedForAll","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"address","name":"to","type":"address"}],"name":"mint","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[],"name":"name","outputs":[{"internalType":"string","name":"","type":"string"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"uint256","name":"tokenId","type":"uint256"}],"name":"ownerOf","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"pause","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[],"name":"paused","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"renounceRole","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"revokeRole","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address","name":"from","type":"address"},{"internalType":"address","name":"to","type":"address"},{"internalType":"uint256","name":"tokenId","type":"uint256"}],"name":"safeTransferFrom","outputs":[],"stateMutability":"nonpayable","type":"function"}, 
{"inputs":[{"internalType":"address","name":"from","type":"address"},{"internalType":"address","name":"to","type":"address"},{"internalType":"uint256","name":"tokenId","type":"uint256"},{"internalType":"bytes","name":"_data","type":"bytes"}],"name":"safeTransferFrom","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address","name":"operator","type":"address"},{"internalType":"bool","name":"approved","type":"bool"}],"name":"setApprovalForAll","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address","name":"_statScienceAddress","type":"address"}],"name":"setStatScienceAddress","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"bytes4","name":"interfaceId","type":"bytes4"}],"name":"supportsInterface","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"symbol","outputs":[{"internalType":"string","name":"","type":"string"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"index","type":"uint256"}],"name":"tokenByIndex","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"owner","type":"address"},{"internalType":"uint256","name":"index","type":"uint256"}],"name":"tokenOfOwnerByIndex","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"tokenId","type":"uint256"}],"name":"tokenURI","outputs":[{"internalType":"string","name":"","type":"string"}],"stateMutability":"view","type":"function"}, {"inputs":[],"name":"totalSupply","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"}, {"inputs":[{"internalType":"address","name":"from","type":"address"},{"internalType":"address","name":"to","type":"address"},{"internalType":"uint256","name":"tokenId","type":"uint256"}],"name":"transferFrom","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[],"name":"unpause","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"components":[{"internalType":"uint256","name":"id","type":"uint256"},{"components":[{"internalType":"uint256","name":"summonedTime","type":"uint256"},{"internalType":"uint256","name":"nextSummonTime","type":"uint256"},{"internalType":"uint256","name":"summonerId","type":"uint256"},{"internalType":"uint256","name":"assistantId","type":"uint256"},{"internalType":"uint32","name":"summons","type":"uint32"},{"internalType":"uint32","name":"maxSummons","type":"uint32"}],"internalType":"struct IHeroTypes.SummoningInfo","name":"summoningInfo","type":"tuple"},{"components":[{"internalType":"uint256","name":"statGenes","type":"uint256"},{"internalType":"uint256","name":"visualGenes","type":"uint256"},{"internalType":"enum IHeroTypes.Rarity","name":"rarity","type":"uint8"},{"internalType":"bool","name":"shiny","type":"bool"},{"internalType":"uint16","name":"generation","type":"uint16"},{"internalType":"uint32","name":"firstName","type":"uint32"},{"internalType":"uint32","name":"lastName","type":"uint32"},{"internalType":"uint8","name":"shinyStyle","type":"uint8"},{"internalType":"uint8","name":"class","type":"uint8"},{"internalType":"uint8","name":"subClass","type":"uint8"}],"internalType":"struct 
IHeroTypes.HeroInfo","name":"info","type":"tuple"},{"components":[{"internalType":"uint256","name":"staminaFullAt","type":"uint256"},{"internalType":"uint256","name":"hpFullAt","type":"uint256"},{"internalType":"uint256","name":"mpFullAt","type":"uint256"},{"internalType":"uint16","name":"level","type":"uint16"},{"internalType":"uint64","name":"xp","type":"uint64"},{"internalType":"address","name":"currentQuest","type":"address"},{"internalType":"uint8","name":"sp","type":"uint8"},{"internalType":"enum IHeroTypes.HeroStatus","name":"status","type":"uint8"}],"internalType":"struct IHeroTypes.HeroState","name":"state","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hp","type":"uint16"},{"internalType":"uint16","name":"mp","type":"uint16"},{"internalType":"uint16","name":"stamina","type":"uint16"}],"internalType":"struct IHeroTypes.HeroStats","name":"stats","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hpSm","type":"uint16"},{"internalType":"uint16","name":"hpRg","type":"uint16"},{"internalType":"uint16","name":"hpLg","type":"uint16"},{"internalType":"uint16","name":"mpSm","type":"uint16"},{"internalType":"uint16","name":"mpRg","type":"uint16"},{"internalType":"uint16","name":"mpLg","type":"uint16"}],"internalType":"struct IHeroTypes.HeroStatGrowth","name":"primaryStatGrowth","type":"tuple"},{"components":[{"internalType":"uint16","name":"strength","type":"uint16"},{"internalType":"uint16","name":"intelligence","type":"uint16"},{"internalType":"uint16","name":"wisdom","type":"uint16"},{"internalType":"uint16","name":"luck","type":"uint16"},{"internalType":"uint16","name":"agility","type":"uint16"},{"internalType":"uint16","name":"vitality","type":"uint16"},{"internalType":"uint16","name":"endurance","type":"uint16"},{"internalType":"uint16","name":"dexterity","type":"uint16"},{"internalType":"uint16","name":"hpSm","type":"uint16"},{"internalType":"uint16","name":"hpRg","type":"uint16"},{"internalType":"uint16","name":"hpLg","type":"uint16"},{"internalType":"uint16","name":"mpSm","type":"uint16"},{"internalType":"uint16","name":"mpRg","type":"uint16"},{"internalType":"uint16","name":"mpLg","type":"uint16"}],"internalType":"struct IHeroTypes.HeroStatGrowth","name":"secondaryStatGrowth","type":"tuple"},{"components":[{"internalType":"uint16","name":"mining","type":"uint16"},{"internalType":"uint16","name":"gardening","type":"uint16"},{"internalType":"uint16","name":"foraging","type":"uint16"},{"internalType":"uint16","name":"fishing","type":"uint16"}],"internalType":"struct 
IHeroTypes.HeroProfessions","name":"professions","type":"tuple"}],"internalType":"struct IHeroTypes.Hero","name":"_hero","type":"tuple"}],"name":"updateHero","outputs":[],"stateMutability":"nonpayable","type":"function"}, {"inputs":[{"internalType":"address","name":"","type":"address"},{"internalType":"uint256","name":"","type":"uint256"}],"name":"userHeroes","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"} ] """

# --- Contract addresses and constants ---
CONTRACT_ADDRESS_GENES = '0x6b696520997d3eaee602d348f380ca1a0f1252d5'
CONTRACT_ADDRESS_HERO = '0x5F753dcDf9b1AD9AabC1346614D1f4746fd6Ce5C'
CONTRACT_ADDRESS_RECESSIVES = '0x5100Bd31b822371108A0f63DCFb6594b9919Eaf4'
CONTRACT_AUCTIONS = '0x13a65B9F8039E2c032Bc022171Dc05B30c3f2892'

DF_COLUMNS = ['hero_id', 'gen', 'left_summons', 'max_summons', 'rarity', 'level', 'xp',
              'R0_main', 'R1_main', 'R2_main', 'R3_main',
              'R0_sub', 'R1_sub', 'R2_sub', 'R3_sub',
              'R0_prof', 'R1_prof', 'R2_prof', 'R3_prof',
              'R0_blue', 'R1_blue', 'R2_blue', 'R3_blue',
              'R0_green', 'R1_green', 'R2_green', 'R3_green']

RPC_SERVER = 'https://api.fuzz.fi'
POKT_SERVER = 'https://harmony-0-rpc.gateway.pokt.network'
GRAPHQL = 'http://graph3.defikingdoms.com/subgraphs/name/defikingdoms/apiv5'
MULTIPLIER = 6500000000000000000000 / 6500  # 1e18, wei -> whole-token conversion

# notebook cell: inspect the rarity lookup table
hero_utils.rarity


# --- Hero lookups and gene decoding ---
def get_hero(hero_id, contract_heroes):
    offspring = contract_heroes.functions.getHero(hero_id).call()
    return offspring


def decode_genes(hero_data):
    w3 = Web3(Web3.HTTPProvider(RPC_SERVER))
    contract_address = Web3.toChecksumAddress(CONTRACT_ADDRESS_GENES)
    contract = w3.eth.contract(contract_address, abi=ABI_GENES)
    try:
        offspring_stat_genes = contract.functions.decode(hero_data[2][0]).call()
    except:
        time.sleep(5)
        offspring_stat_genes = contract.functions.decode(hero_data[2][0]).call()
    return offspring_stat_genes


def hero_genes_human_readable(decoded_genes, hero_other_data):
    main_classes = [hero_utils._class[el] for el in decoded_genes[-4:]]
    sub_classes = [hero_utils._class[el] for el in decoded_genes[-8:-4]]
    professions = [hero_utils.professions[el] for el in decoded_genes[-12:-8]]
    blue_boost = [hero_utils.stats[el] for el in decoded_genes[-36:-32]]
    green_boost = [hero_utils.stats[el] for el in decoded_genes[-32:-28]]
    hero_data = [*hero_other_data, *main_classes, *sub_classes, *professions, *blue_boost, *green_boost]
    return hero_data


def get_hero_generation(hero_data):
    gen = hero_data[2][4]
    return gen


def get_hero_max_summons(hero_data):
    max_summons = hero_data[1][-1]
    return max_summons if max_summons < 11 else np.nan


def get_hero_summoned_number(hero_data):
    summoned_number = hero_data[1][-2]
    return summoned_number


def get_hero_level(hero_data):
    level = hero_data[3][3]
    return level


def get_hero_xp(hero_data):
    xp = hero_data[3][4]
    return xp


def get_hero_left_summons(hero_data):
    max_summons = get_hero_max_summons(hero_data)
    if max_summons:
        summoned = get_hero_summoned_number(hero_data)
        return max_summons - summoned
    else:
        return np.nan


def get_hero_rarity(hero_data):
    rarity = hero_utils.rarity[hero_data[2][2]]
    return rarity


def rarity_df(hero_id, contract_heroes):
    hero_data = get_hero(hero_id, contract_heroes)
    return get_hero_rarity(hero_data)


def set_up_contract(contract, abi):
    w3 = Web3(Web3.HTTPProvider(RPC_SERVER))
    contract_address = Web3.toChecksumAddress(contract)
    contract = w3.eth.contract(contract_address, abi=abi)
    return contract


def check_hero_on_sale(hero_id, contract):
    offspring = contract.functions.isOnAuction(hero_id).call()
    return offspring


def get_current_price(hero_id, contract):
    try:
        offspring = contract.functions.getCurrentPrice(hero_id).call()
        return offspring / GWEI_MULTIPLIER
    except Exception as e:
        return None


def get_heroes_genes(id_list, contract_heroes, save=True):
    hero_list = []
    for id_ in tqdm(id_list):
        try:
            hero_data = get_hero(id_, contract_heroes)
        except:
            time.sleep(10)
            contract_heroes = set_up_contract(CONTRACT_ADDRESS_HERO, ABI_HEROES)
            hero_data = get_hero(id_, contract_heroes)
        hero_other = [id_,
                      get_hero_generation(hero_data),
                      get_hero_left_summons(hero_data),
                      get_hero_max_summons(hero_data),
                      get_hero_rarity(hero_data),
                      get_hero_level(hero_data),
                      get_hero_xp(hero_data)]
        hero_genes = decode_genes(hero_data)
        readable_genes = hero_genes_human_readable(hero_genes, hero_other)
        hero_list.append(readable_genes)
        if id_ % 100 == 0 and save:
            df = pd.DataFrame(hero_list, columns=DF_COLUMNS)
            df = add_purity_score(df)
            df.to_csv(os.path.join('data', 'hero_genes.csv'), index=False)
    df = pd.DataFrame(hero_list, columns=DF_COLUMNS)
    df = add_purity_score(df)
    if save:
        df.to_csv(os.path.join('data', 'hero_genes.csv'), index=False)
    return df


def add_purity_score(df):
    # 4/2/1 weights for the recessive main-class genes matching the dominant one
    df['purity_score'] = 4*df.loc[:, 'R1_main'].eq(df.loc[:, 'R0_main'], axis=0) + \
                         2*df.loc[:, 'R2_main'].eq(df.loc[:, 'R0_main'], axis=0) + \
                         1*df.loc[:, 'R3_main'].eq(df.loc[:, 'R0_main'], axis=0)
    return df


def get_latest_block_number(contract_filter):
    current_entry = []
    while not current_entry:
        current_entry = contract_filter.get_all_entries()
    return current_entry[0].blockNumber


def get_latest_auctions_created(contract_auction, block_start, block_end='latest'):
    _filter = contract_auction.events.AuctionCreated.createFilter(fromBlock=block_start, toBlock=block_end)
    entries = _filter.get_all_entries()
    return entries


def get_latest_hero_id(contract_heroes, starting_block):
    current_entry = []
    filter_hero = contract_heroes.events.HeroSummoned.createFilter(fromBlock=starting_block-100, toBlock='latest')
    while not current_entry:
        current_entry = filter_hero.get_all_entries()
    return current_entry[-1].args.heroId


def update_hero_genes(current_block, contract_heroes):
    df_prev = pd.read_csv(os.path.join('data', 'hero_genes_all.csv'))
    max_id_prev = max(df_prev.hero_id)
    max_id_cur = get_latest_hero_id(contract_heroes, current_block)
    for i in range(max_id_prev+1, max_id_cur+1, 100):
        df_prev = pd.read_csv(os.path.join('data', 'hero_genes_all.csv'))
        df_genes = get_heroes_genes(range(i+1, i+101), contract_heroes, save=False)
        df_all = df_prev.append(df_genes).reset_index(drop=True)
        df_all.to_csv(os.path.join('data', 'hero_genes_all.csv'), index=False)


def update_pure_heroes(threshold=4):
    df = pd.read_csv(os.path.join('data', 'hero_genes_all.csv'))
    df = df[(df['purity_score'] >= threshold)]
    df.to_csv('pure_all.csv', index=False)


def get_listed_pure_heroes(contract_heroes, classes_to_get: list, rarities_to_get: list, min_summons: int, professions=None):
    '''
    classes_to_get -> list of classes you want to analyze
    rarities_to_get -> list of rarities that you want to get
    min_summons -> minimal number of summons left
    professions -> list of professions to filter on (optional)
    '''
    df = pd.read_csv('pure_all.csv')
    df = df[df['R0_main'].isin(classes_to_get)]
    df = df[df['rarity'].isin(rarities_to_get)]
    df = df[df['left_summons'] >= min_summons]
    df['currentPrice'] = df['hero_id'].apply(lambda x: get_current_price(x, contract_heroes))
    df = df.dropna(subset=['currentPrice']).sort_values('left_summons', ascending=False)
    if professions:
        df = df[df['R0_prof'].isin(professions)]
    return df


def set_up_listed_pure_heroes(contract_heroes, min_summons=4):
    df = pd.read_csv('pure_all.csv')
    df = df[df['left_summons'] >= min_summons]
    print(df.shape)
    df['currentPrice'] = df['hero_id'].apply(lambda x: get_current_price(x, contract_heroes))
    df = df.dropna(subset=['currentPrice']).sort_values('left_summons', ascending=False)
    print(df)
    df.to_csv('pure_listed.csv', index=False)


def update_listed_pure_heroes():
    # placeholder pipeline; the three helpers are not defined in this notebook
    remove_sold_heroes()
    remove_canceled_heroes()
    add_new_listings_with_alert()
    pass


def get_hero(hero_id, contract_heroes):
    # redefinition with a retry that rebuilds the contract handle on failure
    try:
        offspring = contract_heroes.functions.getHero(hero_id).call()
        return offspring
    except:
        time.sleep(10)
        contract_heroes = set_up_contract(CONTRACT_ADDRESS_HERO, ABI_HEROES)
        offspring = contract_heroes.functions.getHero(hero_id).call()
        return offspring


# --- Notebook driver cells ---
# inspect the current contract handle
contract_heroes

contract_auction = set_up_contract(CONTRACT_AUCTIONS, ABI_AUCTIONS)
contract_heroes = set_up_contract(CONTRACT_ADDRESS_HERO, ABI_HEROES)

# _filter_latest = contract_auction.events.AuctionCreated.createFilter(fromBlock='latest', toBlock='latest')
# current_block = get_latest_block_number(_filter_latest)
# max_id_cur = get_latest_hero_id(contract_heroes, current_block)
# update_hero_genes(current_block, contract_heroes)
# update_pure_heroes()
# set_up_listed_pure_heroes()

df = get_heroes_genes(range(1, 2000), contract_heroes, save=False)

hero_ids = df['hero_id']
prices = []
for hero_id in tqdm(hero_ids):
    prices.append(get_current_price(hero_id, contract_auction))
df['price'] = prices
df

df = df.dropna(subset=['price'])
df = df.drop(['rarity'], axis=1)
df.to_csv('gen0_sales.csv', index=False)

pd.set_option("display.max_columns", 999)
df.sort_values('price', ascending=True)

df_genes = get_heroes_genes(range(1, max_id_cur), contract_heroes, save=True)
update_pure_heroes()

%%time
set_up_listed_pure_heroes(contract_auction)

df = pd.read_csv('pure_listed.csv')
df

df_prev = pd.read_csv(os.path.join('data', 'hero_genes_all.csv'))
df_all = df_prev.append(df_genes).reset_index(drop=True)
df_all
df_all.to_csv(os.path.join('data', 'hero_genes_all.csv'), index=False)

df = pd.read_csv(os.path.join('data', 'hero_genes_all.csv'))


def filter_pure_heroes(df, threshold=4, get_rarity=True):
    '''
    threshold -> minimal purity threshold (default 4, i.e. R0_main == R1_main)
    get_rarity -> add rarity of heroes to the dataframe (True / False, can take a lot of time)
    '''
    df = df[(df['purity_score'] >= threshold)]
    df.to_csv('pure_all.csv', index=False)
    return df


df = filter_pure_heroes(df)


def get_listed_pure_heroes(contract_heroes, classes_to_get: list, rarities_to_get: list, min_summons: int, professions=None):
    '''
    classes_to_get -> list of classes you want to analyze
    rarities_to_get -> list of rarities that you want to get
    min_summons -> minimal number of summons left
    professions -> list of professions to filter on (optional)
    '''
    df = pd.read_csv('pure_all.csv')
    df = df[df['R0_main'].isin(classes_to_get)]
    df = df[df['rarity'].isin(rarities_to_get)]
    df = df[df['left_summons'] >= min_summons]
    df['currentPrice'] = df['hero_id'].apply(lambda x: get_current_price(x, contract_heroes))
    df = df.dropna(subset=['currentPrice']).sort_values('left_summons', ascending=False)
    if professions:
        df = df[df['R0_prof'].isin(professions)]
    return df


df = pd.read_csv('pure_all.csv')
# df = get_listed_pure_heroes(df, ['knight', 'warrior', 'thief', 'archer', 'wizard', 'priest', 'monk', 'pirate', 'paladin', 'darkKnight', 'ninja', 'summoner', 'sage'],
#                             ['common', 'uncommon', 'rare', 'legendary', 'mythic'], 4)
df[((df['R0_prof'] == 'gardening') | (df['R0_prof'] == 'gardening')) & ((df['R0_main'] == 'wizard') | (df['R0_main'] == 'priest'))]
df[(df['R0_prof'] == 'gardening')].head(50)
df.to_csv('pure_on_sales.csv', index=False)
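For a quick sanity check of the 4/2/1 weighting used by `add_purity_score`, the same expression can be run on a tiny hand-made frame. This is only an illustrative sketch: the hero IDs and classes below are invented, not real on-chain data.

```
# Toy check of the purity weighting: +4 if R1 matches R0, +2 for R2, +1 for R3.
import pandas as pd

toy = pd.DataFrame({
    'hero_id': [1, 2, 3],                        # hypothetical heroes
    'R0_main': ['wizard', 'priest', 'knight'],   # dominant class
    'R1_main': ['wizard', 'priest', 'archer'],
    'R2_main': ['wizard', 'monk', 'knight'],
    'R3_main': ['pirate', 'priest', 'knight'],
})
toy['purity_score'] = (4 * toy['R1_main'].eq(toy['R0_main'])
                       + 2 * toy['R2_main'].eq(toy['R0_main'])
                       + 1 * toy['R3_main'].eq(toy['R0_main']))
print(toy[['hero_id', 'purity_score']])  # scores 6, 5 and 3 for these rows
```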
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import plotly.express as px
import plotly.graph_objects as go
import plotly.figure_factory as ff

pd.set_option('display.max_columns', None)
pd.set_option('float_format', '{:f}'.format)

customer_df = pd.read_excel('data/Customer_Data.xlsx')
customer_df.head()
customer_df.info()
customer_df.isnull().sum() * 100 / len(customer_df)
customer_df.drop(columns=['Business Partner', 'Marital Status', 'Occupation', 'Date of Birth', 'Death date'], axis=1, inplace=True)
customer_df.shape
customer_df.dropna(inplace=True)
customer_df.isnull().sum()
customer_df['Customer No.'].nunique()

# map coded column values to readable descriptions
def map_description(dataset, col, map_dict):
    dataset[col] = dataset[col].astype(str).map(map_dict)
    return dataset

# assumed - 'Z006':'Ref-Employee'
data_origin_dict = {
    'Z001': 'Camp-Outdoor', 'Z002': 'Camp-Workshop', 'Z003': 'Emailers',
    'Z004': 'Fleet', 'Z005': 'Ref-Customer', 'Z006': 'Ref-Employee',
    'Z007': 'Used Car Dealer', 'Z008': 'Just Dial/Other',
    'Z009': 'Snapdeal/Web', 'Z010': 'Company website',
    'Z011': 'Float activity', 'Z012': 'Petrol pump',
    'Z013': 'Hoardings/ADVT', 'Z014': 'Insurance Co',
    'Z015': 'Television AD', 'Z016': 'Newspaper AD',
    'Z017': 'Newsppr leaflet', 'Z018': 'Sales Activity',
    'Z019': 'Spotted outlet', 'Z020': 'M & M Employee',
    'Z021': 'Outdoor Activty', 'Z022': 'Radio'
}
# assumed - '9002.0':'Other'
partner_type_dict = {
    '1.0': 'Retail', '2.0': 'Corporate', '3.0': 'Fleet',
    '4.0': 'Employee', '9001.0': 'Insurance Company',
    '9002.0': 'Other', '9003.0': 'Contact Person'
}
title_type_dict = {'1.0': 'Female', '2.0': 'Male'}

customer_df = map_description(customer_df, 'Data Origin', data_origin_dict)
customer_df = map_description(customer_df, 'Partner Type', partner_type_dict)
customer_df = map_description(customer_df, 'Title', title_type_dict)
customer_df.head()
customer_df.shape

plant_df = pd.read_excel('data/Plant Master.xlsx')
plant_df.head()
plant_df.drop(columns=['Valuation Area', 'Customer no. - plant', 'Vendor number plant', 'Factory calendar', 'Name 2', 'PO Box'], inplace=True)
plant_df.head()
plant_df.isnull().sum() * 100 / len(plant_df)
plant_df.dropna(inplace=True)
plant_df.isnull().sum() * 100 / len(plant_df)
plant_df.shape
plant_df['Plant'].nunique()
plant_df.head()
plant_df.info()

jtd_df = pd.read_csv('data/JTD.csv')
jtd_df.info()
jtd_df.shape
jtd_df.head()
jtd_df.isnull().sum() * 100 / len(jtd_df)
jtd_df.drop(columns=['Unnamed: 0', 'Labor Value Number'], axis=1, inplace=True)
jtd_df.dropna(inplace=True)  # dropping null values
jtd_df.shape
len(jtd_df['Order Quantity'].unique())

aggregation_jtd_dbno = {
    'Description': {'Description': lambda x: str(set(x)).strip('{}').replace("'", "")},
    'Item Category': {'Item Category': lambda x: str(set(x)).strip('{}').replace("'", "")},
    'Order Quantity': {'Order Quantity': 'sum'},
    'Net value': {'Net value': 'sum'}
}
jtd_grouped = jtd_df.groupby('DBM Order')['Description', 'Item Category', 'Order Quantity', 'Net value'].agg(aggregation_jtd_dbno).reset_index()
jtd_grouped.head()
jtd_grouped.shape

invoice_df = pd.read_csv('data/Final_invoice.csv', low_memory=False)
invoice_df.head()
invoice_df.isnull().sum() * 100 / len(invoice_df)
invoice_df['Technician Name'].fillna(invoice_df['User ID'], inplace=True)
invoice_df[invoice_df['Technician Name'].isnull()]
invoice_df['Technician Name'].unique()
invoice_df['Technician Name'].nunique()
invoice_df['Total Value'] = invoice_df.apply(
    lambda x: x['Total Amt Wtd Tax.'] if np.isnan(x['Total Value']) else x['Total Value'],
    axis=1
)
invoice_df.drop(columns=['Unnamed: 0', 'Amt Rcvd From Custom', 'Amt Rcvd From Ins Co', 'CGST(14%)', 'CGST(2.5%)', 'CGST(6%)',
                         'CGST(9%)', 'IGST(12%)', 'IGST(18%)', 'IGST(28%)', 'IGST(5%)', 'Outstanding Amt',
                         'SGST/UGST(14%)', 'SGST/UGST(2.5%)', 'SGST/UGST(6%)', 'SGST/UGST(9%)',
                         'TDS amount', 'Total CGST', 'Total GST', 'Total IGST', 'Total SGST/UGST',
                         'Service Advisor Name', 'Policy no.', 'ODN No.', 'Expiry Date', 'Gate Pass Date', 'Gate Pass Time',
                         'Cash /Cashless Type', 'Claim No.', 'Insurance Company', 'Print Status', 'Area / Locality',
                         'Plant Name1', 'User ID', 'Total Amt Wtd Tax.', 'Recovrbl Exp'], axis=1, inplace=True)
invoice_df.dropna(inplace=True)
invoice_df.isnull().sum()

invoice_df['year'] = pd.to_datetime(invoice_df['Invoice Date']).dt.year
invoice_df['month'] = pd.to_datetime(invoice_df['Invoice Date']).dt.month
invoice_df['invoice_datetime'] = invoice_df['Invoice Date'] + ' ' + invoice_df['Invoice Time']
invoice_df['jobcard_datetime'] = invoice_df['JobCard Date'] + ' ' + invoice_df['JobCard Time']
invoice_df["InvoiceDateTime"] = pd.to_datetime(invoice_df["invoice_datetime"], dayfirst=True)
invoice_df["JobCardDateTime"] = pd.to_datetime(invoice_df["jobcard_datetime"], dayfirst=True)
invoice_df['Service_Time'] = invoice_df['InvoiceDateTime'] - invoice_df['JobCardDateTime']
invoice_df['Service_Time_Hrs'] = invoice_df['Service_Time'] / np.timedelta64(1, 'h')
invoice_df.head()
invoice_df.drop(columns=['JobCard Date', 'invoice_datetime', 'jobcard_datetime', 'InvoiceDateTime', 'JobCardDateTime', 'Service_Time', 'Invoice Time', 'JobCard Time'], axis=1, inplace=True)
invoice_df.head()

invoice_df['CITY'] = invoice_df['CITY'].str.lower()
invoice_df['CITY'] = invoice_df['CITY'].map(lambda x: 'nashik' if x == 'nasik' else x)
invoice_df['CITY'] = invoice_df['CITY'].map(lambda x: 'thane' if x == 'thane(w)' else x)
invoice_df['CITY'] = invoice_df['CITY'].map(lambda x: 'thane' if x == 'thane[w]' else x)
invoice_df.head()
invoice_df.shape
invoice_df['Customer No.'].nunique()  # 253484
invoice_df['Customer No.'] = invoice_df['Customer No.'].str.lstrip('0')
customer_df['Customer No.'] = customer_df['Customer No.'].astype(str)

# merge customer and invoice
invoice_customer_df = pd.merge(invoice_df, customer_df, on='Customer No.')
# invoice_df.join(customer_df.set_index('Customer No.'), on='Customer No.')
invoice_customer_df.shape
invoice_customer_df.isnull().sum()
invoice_customer_df.head()

plant_df['Plant'] = plant_df['Plant'].astype(str)
# merge plant and invoice_customer
invoice_customer_plant_df = pd.merge(invoice_customer_df, plant_df, on='Plant')
invoice_customer_plant_df.shape
invoice_customer_plant_df.head()

# merge the aggregated job-card lines (JTD) with invoice_customer_plant
final_df = pd.merge(invoice_customer_plant_df, jtd_grouped, left_on='Job Card No', right_on='DBM Order')
final_df.shape
final_df.isnull().sum()
final_df.head()
final_df.to_csv('data/merged_data.csv', index=False)
```
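The pipeline above boils down to three inner joins: invoices to customers on `Customer No.`, the result to plants on `Plant`, and finally to the aggregated job-card lines on `Job Card No` / `DBM Order`. A minimal sketch with invented two-row frames shows the chaining; the column names mirror the real data, but all values are made up.

```
# Toy illustration of the merge chain used to build merged_data.csv (values invented).
import pandas as pd

invoice = pd.DataFrame({'Customer No.': ['1', '2'], 'Plant': ['P1', 'P2'],
                        'Job Card No': ['J1', 'J2'], 'Total Value': [1500.0, 2300.0]})
customer = pd.DataFrame({'Customer No.': ['1', '2'], 'Partner Type': ['Retail', 'Fleet']})
plant = pd.DataFrame({'Plant': ['P1', 'P2'], 'City': ['Mumbai', 'Pune']})
jtd_grouped = pd.DataFrame({'DBM Order': ['J1', 'J2'], 'Net value': [900.0, 2100.0]})

final = (invoice
         .merge(customer, on='Customer No.')
         .merge(plant, on='Plant')
         .merge(jtd_grouped, left_on='Job Card No', right_on='DBM Order'))
print(final.shape)  # (2, 8): only rows that survive all three inner joins remain
```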
```
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import os
import pickle
import numpy as np

CIFAR_DIR = "./cifar-10-batches-py"
print(os.listdir(CIFAR_DIR))

# tensorboard workflow:
# 1. pick the variables to show on the dashboard
# 2. compute them during training and write them to log files
# 3. parse the logs with: ./tensorboard --logdir=dir

def load_data(filename):
    """read data from data file."""
    with open(filename, 'rb') as f:
        data = pickle.load(f, encoding='iso-8859-1')
    return data['data'], data['labels']

# tensorflow.Dataset
class CifarData:
    def __init__(self, filenames, need_shuffle):
        all_data = []
        all_labels = []
        for filename in filenames:
            data, labels = load_data(filename)
            all_data.append(data)
            all_labels.append(labels)
        self._data = np.vstack(all_data)
        self._data = self._data
        self._labels = np.hstack(all_labels)
        print(self._data.shape)
        print(self._labels.shape)
        self._num_examples = self._data.shape[0]
        self._need_shuffle = need_shuffle
        self._indicator = 0
        if self._need_shuffle:
            self._shuffle_data()

    def _shuffle_data(self):
        p = np.random.permutation(self._num_examples)
        self._data = self._data[p]
        self._labels = self._labels[p]

    def next_batch(self, batch_size):
        """:return batch_size examples as a batch."""
        end_indicator = self._indicator + batch_size
        if end_indicator > self._num_examples:
            if self._need_shuffle:
                self._shuffle_data()
                self._indicator = 0
                end_indicator = batch_size
            else:
                raise Exception("have no more examples")
        if end_indicator > self._num_examples:
            raise Exception("batch size is larger than all examples")
        batch_data = self._data[self._indicator:end_indicator]
        batch_labels = self._labels[self._indicator:end_indicator]
        self._indicator = end_indicator
        return batch_data, batch_labels

train_filenames = [os.path.join(CIFAR_DIR, 'data_batch_%d' % i) for i in range(1, 6)]
test_filenames = [os.path.join(CIFAR_DIR, 'test_batch')]
train_data = CifarData(train_filenames, True)
test_data = CifarData(test_filenames, True)

batch_size = 20
X = tf.placeholder(tf.float32, [batch_size, 3072])
y = tf.placeholder(tf.int64, [batch_size])  # [None], eg: [0,5,6,3]
X_image = tf.reshape(X, [-1, 3, 32, 32])
# 32 * 32
X_image = tf.transpose(X_image, perm=[0, 2, 3, 1])
X_image_arr = tf.split(X_image, num_or_size_splits=batch_size, axis=0)
result_X_image_arr = []
for X_single_image in X_image_arr:
    # X_single_image: [1, 32, 32, 3] -> [32, 32, 3]
    X_single_image = tf.reshape(X_single_image, [32, 32, 3])
    data_aug_1 = tf.image.random_flip_left_right(X_single_image)
    data_aug_2 = tf.image.random_brightness(data_aug_1, max_delta=63)
    data_aug_3 = tf.image.random_contrast(data_aug_2, lower=0.2, upper=1.8)
    X_single_image = tf.reshape(data_aug_3, [1, 32, 32, 3])
    result_X_image_arr.append(X_single_image)
# merge the augmented images back into a mini-batch
result_X_images = tf.concat(result_X_image_arr, axis=0)
# normalize to roughly [-1, 1]
normal_result_X_images = result_X_images / 127.5 - 1

# conv1: neuron maps / feature maps, i.e. the layer's output images
with tf.variable_scope("encoder", reuse=tf.AUTO_REUSE) as scope:
    conv1_1 = tf.layers.conv2d(normal_result_X_images,
                               32,       # output channel number
                               (3, 3),   # kernel size
                               padding='same',
                               activation=tf.nn.relu,
                               reuse=tf.AUTO_REUSE,
                               name='conv1_1')
    conv1_2 = tf.layers.conv2d(conv1_1, 32, (3, 3), padding='same',
                               activation=tf.nn.relu, reuse=tf.AUTO_REUSE, name='conv1_2')
    # 16 * 16
    pooling1 = tf.layers.max_pooling2d(conv1_2,
                                       (2, 2),  # kernel size
                                       (2, 2),  # stride
                                       name='pool1')
    conv2_1 = tf.layers.conv2d(pooling1, 32, (3, 3), padding='same',
                               activation=tf.nn.relu, name='conv2_1')
    conv2_2 = tf.layers.conv2d(conv2_1, 32, (3, 3), padding='same',
                               activation=tf.nn.relu, name='conv2_2')
    # 8 * 8
    pooling2 = tf.layers.max_pooling2d(conv2_2, (2, 2), (2, 2), name='pool2')
    conv3_1 = tf.layers.conv2d(pooling2, 32, (3, 3), padding='same',
                               activation=tf.nn.relu, name='conv3_1')
    conv3_2 = tf.layers.conv2d(conv3_1, 32, (3, 3), padding='same',
                               activation=tf.nn.relu, name='conv3_2')
    # 4 * 4 * 32
    pooling3 = tf.layers.max_pooling2d(conv3_2, (2, 2), (2, 2), name='pool3')
    # [None, 4 * 4 * 32]
    flatten = tf.layers.flatten(pooling3)
    y_ = tf.layers.dense(flatten, 10)

loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)
# y_ -> softmax
# y -> one_hot
# loss = ylogy_

# indices
predict = tf.argmax(y_, 1)
# [1,0,1,1,1,0,0,0]
correct_prediction = tf.equal(predict, y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float64))

with tf.name_scope('train_op'):
    # backpropagation
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

def variable_summary(var, name):
    """
    Build summaries for several statistics of a variable.
    :param var:
    :param name:
    :return:
    """
    with tf.name_scope(name):
        mean = tf.reduce_mean(var)
        with tf.name_scope('stddev'):
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
        tf.summary.scalar('mean', mean)
        tf.summary.scalar('stddev', stddev)
        tf.summary.scalar('min', tf.reduce_min(var))
        tf.summary.scalar('max', tf.reduce_max(var))
        tf.summary.histogram('histogram', var)

with tf.name_scope('summary'):
    variable_summary(conv1_1, 'conv1_1')
    variable_summary(conv1_2, 'conv1_2')
    variable_summary(conv2_1, 'conv2_1')
    variable_summary(conv2_2, 'conv2_2')
    variable_summary(conv3_1, 'conv3_1')
    variable_summary(conv3_2, 'conv3_2')

loss_summary = tf.summary.scalar('loss', loss)
# 'loss' : <10,1.1>,<20,1.08>
accuracy_summary = tf.summary.scalar('accuracy', accuracy)
inputs_summary = tf.summary.histogram('inputs_image', normal_result_X_images)
# merge summaries
merged_summary = tf.summary.merge_all()
merged_summary_test = tf.summary.merge([loss_summary, accuracy_summary])

LOG_DIR = '.'
run_label = 'run_vgg_tensorboard'
run_dir = os.path.join(LOG_DIR, run_label)
if not os.path.exists(run_dir):
    os.mkdir(run_dir)
train_log_dir = os.path.join(run_dir, 'train')
test_log_dir = os.path.join(run_dir, 'test')
if not os.path.exists(train_log_dir):
    os.mkdir(train_log_dir)
if not os.path.exists(test_log_dir):
    os.mkdir(test_log_dir)

init = tf.global_variables_initializer()
train_steps = 100000
test_steps = 100
output_summary_every_steps = 100

with tf.Session() as sess:
    sess.run(init)
    train_writer = tf.summary.FileWriter(train_log_dir, sess.graph)
    test_writer = tf.summary.FileWriter(test_log_dir)
    fixed_test_batch_data, fixed_test_batch_labels = test_data.next_batch(batch_size)
    for i in range(train_steps):
        batch_data, batch_labels = train_data.next_batch(batch_size)
        eval_ops = [loss, accuracy, train_op]
        should_output_summary = ((i + 1) % output_summary_every_steps == 0)
        if should_output_summary:
            eval_ops.append(merged_summary)
        eval_ops_result = sess.run(eval_ops, feed_dict={X: batch_data, y: batch_labels})
        loss_val, acc_val = eval_ops_result[0:2]
        if should_output_summary:
            train_summary_str = eval_ops_result[-1]
            train_writer.add_summary(train_summary_str, i + 1)
            test_summary_str = sess.run([merged_summary_test],
                                        feed_dict={X: fixed_test_batch_data,
                                                   y: fixed_test_batch_labels})[0]
            test_writer.add_summary(test_summary_str, i + 1)
        if (i + 1) % 100 == 0:
            print('[Train] Step :%d, loss: %4.5f, acc: %4.5f' % (i + 1, loss_val, acc_val))
        if (i + 1) % 1000 == 0:
            test_data = CifarData(test_filenames, False)
            all_test_acc_val = []
            for j in range(test_steps):
                test_batch_data, test_batch_labels = test_data.next_batch(batch_size)
                test_acc_val = sess.run([accuracy],
                                        feed_dict={X: test_batch_data, y: test_batch_labels})
                all_test_acc_val.append(test_acc_val)
            test_acc = np.nanmean(all_test_acc_val)
            print('[Test ] Step :%d, acc: %4.5f' % (i + 1, test_acc))
```
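As a side note, the input scaling applied before the encoder (`result_X_images / 127.5 - 1`) maps raw pixel values from [0, 255] to roughly [-1, 1]. A standalone NumPy check, separate from the graph above, confirms the endpoints:

```
# Quick check of the pixel normalization used above (example values only).
import numpy as np

pixels = np.array([0.0, 127.5, 255.0])
print(pixels / 127.5 - 1)  # [-1.  0.  1.]
```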
<a href="https://colab.research.google.com/github/sayakpaul/Generalized-ODIN-TF/blob/main/Generalized_ODIN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Setup ``` # Grab the initial model weights !wget -q https://github.com/sayakpaul/Generalized-ODIN-TF/releases/download/v1.0.0/models.tar.gz !tar xf models.tar.gz !git clone https://github.com/sayakpaul/Generalized-ODIN-TF import sys sys.path.append("Generalized-ODIN-TF") from scripts import resnet20_odin, resnet20 from tensorflow.keras import layers import tensorflow as tf import matplotlib.pyplot as plt import numpy as np tf.random.set_seed(42) np.random.seed(42) ``` ## Load CIFAR10 ``` (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() print(f"Total training examples: {len(x_train)}") print(f"Total test examples: {len(x_test)}") ``` ## Define constants ``` BATCH_SIZE = 128 EPOCHS = 200 START_LR = 0.1 AUTO = tf.data.AUTOTUNE ``` ## Prepare data loaders ``` # Augmentation pipeline simple_aug = tf.keras.Sequential( [ layers.experimental.preprocessing.RandomFlip("horizontal"), layers.experimental.preprocessing.RandomRotation(factor=0.02), layers.experimental.preprocessing.RandomZoom( height_factor=0.2, width_factor=0.2 ), ] ) # Now, map the augmentation pipeline to our training dataset train_ds = ( tf.data.Dataset.from_tensor_slices((x_train, y_train)) .shuffle(BATCH_SIZE * 100) .batch(BATCH_SIZE) .map(lambda x, y: (simple_aug(x), y), num_parallel_calls=AUTO) .prefetch(AUTO) ) # Test dataset test_ds = ( tf.data.Dataset.from_tensor_slices((x_test, y_test)) .batch(BATCH_SIZE) .prefetch(AUTO) ) ``` ## Utility function for the model ``` def get_rn_model(arch, num_classes=10): n = 2 depth = n * 9 + 2 n_blocks = ((depth - 2) // 9) - 1 # The input tensor inputs = layers.Input(shape=(32, 32, 3)) x = layers.experimental.preprocessing.Rescaling(scale=1.0 / 127.5, offset=-1)( inputs ) # The Stem Convolution Group x = arch.stem(x) # The learner x = arch.learner(x, n_blocks) # The Classifier for 10 classes outputs = arch.classifier(x, num_classes) # Instantiate the Model model = tf.keras.Model(inputs, outputs) return model # First serialize an initial ResNet20 model for reproducibility # initial_model = get_rn_model(resnet20) # initial_model.save("initial_model") initial_model = tf.keras.models.load_model("initial_model") # Now set the initial model weights of our ODIN model odin_rn_model = get_rn_model(resnet20_odin) for rn20_layer, rn20_odin_layer in zip(initial_model.layers[:-2], odin_rn_model.layers[:-6]): rn20_odin_layer.set_weights(rn20_layer.get_weights()) ``` ## Define LR schedule, optimizer, and loss function ``` def lr_schedule(epoch): if epoch < int(EPOCHS * 0.25) - 1: return START_LR elif epoch < int(EPOCHS*0.5) -1: return float(START_LR * 0.1) elif epoch < int(EPOCHS*0.75) -1: return float(START_LR * 0.01) else: return float(START_LR * 0.001) lr_callback = tf.keras.callbacks.LearningRateScheduler(lambda epoch: lr_schedule(epoch), verbose=True) # Optimizer and loss function. 
optimizer = tf.keras.optimizers.SGD(learning_rate=START_LR, momentum=0.9) loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) ``` ## Model training with ResNet20 ``` odin_rn_model.compile(loss=loss_fn, optimizer=optimizer, metrics=["accuracy"]) history = odin_rn_model.fit(train_ds, validation_data=test_ds, epochs=EPOCHS, callbacks=[lr_callback]) plt.plot(history.history["loss"], label="train loss") plt.plot(history.history["val_loss"], label="test loss") plt.grid() plt.legend() plt.show() odin_rn_model.save("odin_rn_model") _, train_acc = odin_rn_model.evaluate(train_ds, verbose=0) _, test_acc = odin_rn_model.evaluate(test_ds, verbose=0) print("Train accuracy: {:.2f}%".format(train_acc * 100)) print("Test accuracy: {:.2f}%".format(test_acc * 100)) ```
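For reference, `lr_schedule` above is a plain step decay: with this notebook's `EPOCHS = 200` and `START_LR = 0.1`, the learning rate drops by a factor of 10 near the 25%, 50%, and 75% marks. A small standalone check (the function body is copied from the cell above purely for illustration; the probed epochs are boundary values):

```
# Standalone sketch of the piecewise-constant schedule defined above,
# assuming EPOCHS = 200 and START_LR = 0.1 as in this notebook.
EPOCHS = 200
START_LR = 0.1

def lr_schedule(epoch):
    if epoch < int(EPOCHS * 0.25) - 1:
        return START_LR
    elif epoch < int(EPOCHS * 0.5) - 1:
        return float(START_LR * 0.1)
    elif epoch < int(EPOCHS * 0.75) - 1:
        return float(START_LR * 0.01)
    else:
        return float(START_LR * 0.001)

# Epochs 0-48 train at 0.1, 49-98 at 0.01, 99-148 at 0.001, and 149+ at 0.0001.
for epoch in [0, 48, 49, 98, 99, 148, 149, 199]:
    print(epoch, lr_schedule(epoch))
```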
github_jupyter
# Grab the initial model weights !wget -q https://github.com/sayakpaul/Generalized-ODIN-TF/releases/download/v1.0.0/models.tar.gz !tar xf models.tar.gz !git clone https://github.com/sayakpaul/Generalized-ODIN-TF import sys sys.path.append("Generalized-ODIN-TF") from scripts import resnet20_odin, resnet20 from tensorflow.keras import layers import tensorflow as tf import matplotlib.pyplot as plt import numpy as np tf.random.set_seed(42) np.random.seed(42) (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() print(f"Total training examples: {len(x_train)}") print(f"Total test examples: {len(x_test)}") BATCH_SIZE = 128 EPOCHS = 200 START_LR = 0.1 AUTO = tf.data.AUTOTUNE # Augmentation pipeline simple_aug = tf.keras.Sequential( [ layers.experimental.preprocessing.RandomFlip("horizontal"), layers.experimental.preprocessing.RandomRotation(factor=0.02), layers.experimental.preprocessing.RandomZoom( height_factor=0.2, width_factor=0.2 ), ] ) # Now, map the augmentation pipeline to our training dataset train_ds = ( tf.data.Dataset.from_tensor_slices((x_train, y_train)) .shuffle(BATCH_SIZE * 100) .batch(BATCH_SIZE) .map(lambda x, y: (simple_aug(x), y), num_parallel_calls=AUTO) .prefetch(AUTO) ) # Test dataset test_ds = ( tf.data.Dataset.from_tensor_slices((x_test, y_test)) .batch(BATCH_SIZE) .prefetch(AUTO) ) def get_rn_model(arch, num_classes=10): n = 2 depth = n * 9 + 2 n_blocks = ((depth - 2) // 9) - 1 # The input tensor inputs = layers.Input(shape=(32, 32, 3)) x = layers.experimental.preprocessing.Rescaling(scale=1.0 / 127.5, offset=-1)( inputs ) # The Stem Convolution Group x = arch.stem(x) # The learner x = arch.learner(x, n_blocks) # The Classifier for 10 classes outputs = arch.classifier(x, num_classes) # Instantiate the Model model = tf.keras.Model(inputs, outputs) return model # First serialize an initial ResNet20 model for reproducibility # initial_model = get_rn_model(resnet20) # initial_model.save("initial_model") initial_model = tf.keras.models.load_model("initial_model") # Now set the initial model weights of our ODIN model odin_rn_model = get_rn_model(resnet20_odin) for rn20_layer, rn20_odin_layer in zip(initial_model.layers[:-2], odin_rn_model.layers[:-6]): rn20_odin_layer.set_weights(rn20_layer.get_weights()) def lr_schedule(epoch): if epoch < int(EPOCHS * 0.25) - 1: return START_LR elif epoch < int(EPOCHS*0.5) -1: return float(START_LR * 0.1) elif epoch < int(EPOCHS*0.75) -1: return float(START_LR * 0.01) else: return float(START_LR * 0.001) lr_callback = tf.keras.callbacks.LearningRateScheduler(lambda epoch: lr_schedule(epoch), verbose=True) # Optimizer and loss function. optimizer = tf.keras.optimizers.SGD(learning_rate=START_LR, momentum=0.9) loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) odin_rn_model.compile(loss=loss_fn, optimizer=optimizer, metrics=["accuracy"]) history = odin_rn_model.fit(train_ds, validation_data=test_ds, epochs=EPOCHS, callbacks=[lr_callback]) plt.plot(history.history["loss"], label="train loss") plt.plot(history.history["val_loss"], label="test loss") plt.grid() plt.legend() plt.show() odin_rn_model.save("odin_rn_model") _, train_acc = odin_rn_model.evaluate(train_ds, verbose=0) _, test_acc = odin_rn_model.evaluate(test_ds, verbose=0) print("Train accuracy: {:.2f}%".format(train_acc * 100)) print("Test accuracy: {:.2f}%".format(test_acc * 100))
0.705278
0.935051
<a href="https://colab.research.google.com/github/mintesin/Portofolio-Projects/blob/main/Image_classsification.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # 1. IMAGE CLASIFICATION USING CNN MODEL 1. Here we try to classify images from collected from google. 2. The image contains four famous football players(Edison cavani,marcus rashford,christiano ronaldo and lionel messi). 3. There are one hundred total images of each players. 4. Seventy percent of these image will be used as training and validation as the rest will be testing images. ``` #LET US DOWNLOAD OUR IMAGES FROM GOOGLE IMAGES from simple_image_download import simple_image_download as sim response=sim.simple_image_download #DOWNLOADING IMAGES OF FOUR PLAYERS im_names=['RONALDO','RASHFORD','MESSI','CAVANI'] for im in im_names: response().download(im,100) ``` # 2. CNN MODEL ``` #IMPORTING LIBRARIES import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras import layers from keras.layers import Conv2D,MaxPooling2D,Dropout from keras.preprocessing import image from tensorflow.keras.preprocessing import image_dataset_from_directory from PIL import Image import numpy as np import os #GENERATING OUR TRAINING AND VALIDATION IMAGES FROM THE DATASET #TRAINING DATASET data_train=image_dataset_from_directory( '/content/drive/MyDrive/train', labels="inferred", color_mode="rgb", batch_size=32, image_size=(32, 32), shuffle=True, seed=0, validation_split=0.3, subset='training', interpolation="bilinear", crop_to_aspect_ratio=True ) #VALIDATION DATASET data_validation=image_dataset_from_directory( '/content/drive/MyDrive/train', labels="inferred", color_mode="rgb", batch_size=32, image_size=(32, 32), shuffle=True, seed=0, validation_split=0.3, subset='validation', interpolation="bilinear", crop_to_aspect_ratio=True ) #CREATIONO OF OUR DATASET def CreateModel(): model=Sequential() model.add(Conv2D(32,(3,3),activation='relu',input_shape=(32,32,3))) model.add(Conv2D(64,(3,3),activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.3)) model.add(layers.Flatten()) model.add(layers.Dense(100, activation='relu')) model.add(Dropout(0.5)) model.add(layers.Dense(4, activation='softmax')) return model #COMPILE OUR MOPDEL model=CreateModel() model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.3), loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=tf.keras.metrics.sparse_categorical_crossentropy ) model.summary() model.fit(data_train,validation_data=data_validation,epochs=10) model.save('/content/drive/MyDrive/model') ```
github_jupyter
#LET US DOWNLOAD OUR IMAGES FROM GOOGLE IMAGES from simple_image_download import simple_image_download as sim response=sim.simple_image_download #DOWNLOADING IMAGES OF FOUR PLAYERS im_names=['RONALDO','RASHFORD','MESSI','CAVANI'] for im in im_names: response().download(im,100) #IMPORTING LIBRARIES import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras import layers from keras.layers import Conv2D,MaxPooling2D,Dropout from keras.preprocessing import image from tensorflow.keras.preprocessing import image_dataset_from_directory from PIL import Image import numpy as np import os #GENERATING OUR TRAINING AND VALIDATION IMAGES FROM THE DATASET #TRAINING DATASET data_train=image_dataset_from_directory( '/content/drive/MyDrive/train', labels="inferred", color_mode="rgb", batch_size=32, image_size=(32, 32), shuffle=True, seed=0, validation_split=0.3, subset='training', interpolation="bilinear", crop_to_aspect_ratio=True ) #VALIDATION DATASET data_validation=image_dataset_from_directory( '/content/drive/MyDrive/train', labels="inferred", color_mode="rgb", batch_size=32, image_size=(32, 32), shuffle=True, seed=0, validation_split=0.3, subset='validation', interpolation="bilinear", crop_to_aspect_ratio=True ) #CREATIONO OF OUR DATASET def CreateModel(): model=Sequential() model.add(Conv2D(32,(3,3),activation='relu',input_shape=(32,32,3))) model.add(Conv2D(64,(3,3),activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.3)) model.add(layers.Flatten()) model.add(layers.Dense(100, activation='relu')) model.add(Dropout(0.5)) model.add(layers.Dense(4, activation='softmax')) return model #COMPILE OUR MOPDEL model=CreateModel() model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.3), loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=tf.keras.metrics.sparse_categorical_crossentropy ) model.summary() model.fit(data_train,validation_data=data_validation,epochs=10) model.save('/content/drive/MyDrive/model')
0.535098
0.897156
``` import torch import torchvision import torchvision.transforms as transforms import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import PIL import numpy as np import matplotlib.pylab as plt import os %matplotlib inline from modules.utils import load_cifar10, load_cifar100 images, labels = load_cifar10(get_test_data=False) plt.imshow(images[4]) images[4].shape trainset = torchvision.datasets.CIFAR10( root='./image_files', train=True, download=False, transform=transforms.ToTensor() ) trainloader = torch.utils.data.DataLoader(trainset, batch_size=16, shuffle=False, num_workers=0) gen = iter(trainloader) im_cpu, l_cpu = next(gen) im_cpu_matplotlib = torch.transpose(torch.transpose(im_cpu, -3, -1), -3, -2) plt.imshow(im_cpu_matplotlib[4]) im_rfft = torch.rfft(im_cpu, 2, onesided=False) im_rfft.shape lowpass = torch.ones(32, 32, 2) lowpass[16:-16,:,:] = 0 lowpass[:,16:-8,:] = 0 lowpass.shape im_lowpassed_fft = im_rfft * lowpass im_lowpassed = torch.irfft(im_lowpassed_fft, 2, onesided=False) im_lowpassed_matplotlib = torch.transpose(torch.transpose(im_lowpassed, -3, -1), -3, -2) plt.imshow(im_lowpassed_matplotlib[4]) im_lowpassed_matplotlib.dtype im_cpu_matplotlib.dtype torch.min(im_lowpassed_matplotlib[4]) im_cpu_matplotlib[4] torch.set_printoptions(profile="full") plt.imshow(im_rfft[4][0][:,:4,0]) plt.show() plt.imshow(im_rfft[4][0][:,-4:,0]) plt.show() plt.imshow(im_rfft[4][0][:,:4,1]) plt.show() plt.imshow(im_rfft[4][0][:,-4:,1]) plt.show() torch.set_printoptions(profile="default") torch.set_printoptions(profile="full") print(im_rfft[4][0][:4,:4,0]) print(im_rfft[4][0][-3:,:4,0]) print(im_rfft[4][0][:4,:4,1]) print(im_rfft[4][0][-3:,:4,1]) torch.set_printoptions(profile="default") im_rfft_small = torch.rfft(im_cpu[:,:,:16,:16], 2, onesided=False) im_rfft_small.shape torch.set_printoptions(precision=2, profile="full", linewidth=250) print(im_rfft_small[4][0][:,:,0]) print() print(im_rfft_small[4][0][:,:,1]) torch.set_printoptions(profile="default") n = im_rfft_small.shape[2] // 2 im_rfft_small_shifted = torch.cat((im_rfft_small[:,:,n:,:,:], im_rfft_small[:,:,:n,:,:]), dim=-3) im_rfft_small_shifted = torch.cat((im_rfft_small_shifted[:,:,:,n:,:], im_rfft_small_shifted[:,:,:,:n,:]), dim=-2) torch.set_printoptions(precision=2, profile="full", linewidth=250) print(im_rfft_small_shifted[4][0][:,:,0]) plt.imshow(im_rfft_small_shifted[4][0][:,:,0]) plt.show() print(im_rfft_small_shifted[4][0][:,:,1]) plt.imshow(im_rfft_small_shifted[4][0][:,:,1]) plt.show() torch.set_printoptions(profile="default") plt.imshow(transforms.ToPILImage()(im_cpu[4,:,:16,:16])) im = transforms.ToPILImage()(im_cpu[4,:,:16,:16]) im_smaller = im.resize((15, 15), resample=PIL.Image.LANCZOS) im_rfft_smaller = torch.rfft(transforms.ToTensor()(im_smaller), 2, onesided=False) n = im_rfft_smaller.shape[2] // 2 + 1 im_rfft_smaller_shifted = torch.cat((im_rfft_smaller[:,n:,:,:], im_rfft_smaller[:,:n,:,:]), dim=-3) im_rfft_smaller_shifted = torch.cat((im_rfft_smaller_shifted[:,:,n:,:], im_rfft_smaller_shifted[:,:,:n,:]), dim=-2) torch.set_printoptions(precision=2, profile="full", linewidth=250) print(im_rfft_smaller_shifted[0][:,:,0]) plt.imshow(im_rfft_smaller_shifted[0][:,:,0]) plt.show() print(im_rfft_smaller_shifted[0][:,:,1]) plt.imshow(im_rfft_smaller_shifted[0][:,:,1]) plt.show() torch.set_printoptions(profile="default") plt.imshow(im_smaller) plt.show() plt.imshow(im) plt.show() torch.set_printoptions(precision=2, profile="full", linewidth=250) print(im_rfft_small_shifted[4][0][:,:,1]) print() 
print(im_rfft_smaller_shifted[0][:,:,1]) plt.imshow(im_rfft_small_shifted[4][0][:,:,1]) plt.show() plt.imshow(im_rfft_smaller_shifted[0][:,:,1]) plt.show() torch.set_printoptions(profile="default") im_rfft_smaller_shifted[0][:,:,1].argmin() print(im_rfft_smaller_shifted[0][:,:,1].min()) xs = torch.tensor(range(3)).type(torch.FloatTensor) bases = [ torch.cos(np.pi * p * (2. * xs + 1) / (2 * 3)) for p in range(3) ] def mesh_bases(b1, b2): rr, cc = torch.meshgrid([b1, b2]) return rr * cc full_bases = torch.stack([ mesh_bases(b1, b2) for b1 in bases for b2 in bases ]) full_bases.shape convo = torch.nn.Conv2d(8, 16, 3) convo.weight.shape def make_bases(length, num_bases): xs = torch.tensor(range(length)).type(torch.FloatTensor) bases = [ torch.cos(np.pi * p * (2. * xs + 1) / (2 * length)) for p in range(num_bases) ] def mesh_bases(b1, b2): rr, cc = torch.meshgrid([b1, b2]) return rr * cc full_bases = torch.stack([ mesh_bases(b1, b2) for b1 in bases for b2 in bases ]) return full_bases num_bases = 3 for length in [3, 4, 6, 8, 11, 16, 22, 32]: fig, axes = plt.subplots( num_bases, num_bases, subplot_kw={'xticks': [], 'yticks': []}, figsize=(6, 6) ) bases = make_bases(length, num_bases) for i, ax in enumerate(axes.flat): ax.imshow(bases[i]) plt.tight_layout() plt.show() print('-' * 60) ```
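The separable cosine products built by `make_bases` above are, up to normalization, the 2-D DCT-II basis functions, with `length` playing the role of L and `num_bases` bounding the frequency indices p and q:

$$B_{p,q}(r,c)=\cos\!\left(\frac{\pi p\,(2r+1)}{2L}\right)\cos\!\left(\frac{\pi q\,(2c+1)}{2L}\right),\qquad 0 \le r,c < L,\quad 0 \le p,q < P$$

Each panel in the plots above is one such basis image; increasing `length` only resamples the same low-frequency cosine patterns on a finer grid, which is why the grids look alike across the different sizes.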
github_jupyter
import torch import torchvision import torchvision.transforms as transforms import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import PIL import numpy as np import matplotlib.pylab as plt import os %matplotlib inline from modules.utils import load_cifar10, load_cifar100 images, labels = load_cifar10(get_test_data=False) plt.imshow(images[4]) images[4].shape trainset = torchvision.datasets.CIFAR10( root='./image_files', train=True, download=False, transform=transforms.ToTensor() ) trainloader = torch.utils.data.DataLoader(trainset, batch_size=16, shuffle=False, num_workers=0) gen = iter(trainloader) im_cpu, l_cpu = next(gen) im_cpu_matplotlib = torch.transpose(torch.transpose(im_cpu, -3, -1), -3, -2) plt.imshow(im_cpu_matplotlib[4]) im_rfft = torch.rfft(im_cpu, 2, onesided=False) im_rfft.shape lowpass = torch.ones(32, 32, 2) lowpass[16:-16,:,:] = 0 lowpass[:,16:-8,:] = 0 lowpass.shape im_lowpassed_fft = im_rfft * lowpass im_lowpassed = torch.irfft(im_lowpassed_fft, 2, onesided=False) im_lowpassed_matplotlib = torch.transpose(torch.transpose(im_lowpassed, -3, -1), -3, -2) plt.imshow(im_lowpassed_matplotlib[4]) im_lowpassed_matplotlib.dtype im_cpu_matplotlib.dtype torch.min(im_lowpassed_matplotlib[4]) im_cpu_matplotlib[4] torch.set_printoptions(profile="full") plt.imshow(im_rfft[4][0][:,:4,0]) plt.show() plt.imshow(im_rfft[4][0][:,-4:,0]) plt.show() plt.imshow(im_rfft[4][0][:,:4,1]) plt.show() plt.imshow(im_rfft[4][0][:,-4:,1]) plt.show() torch.set_printoptions(profile="default") torch.set_printoptions(profile="full") print(im_rfft[4][0][:4,:4,0]) print(im_rfft[4][0][-3:,:4,0]) print(im_rfft[4][0][:4,:4,1]) print(im_rfft[4][0][-3:,:4,1]) torch.set_printoptions(profile="default") im_rfft_small = torch.rfft(im_cpu[:,:,:16,:16], 2, onesided=False) im_rfft_small.shape torch.set_printoptions(precision=2, profile="full", linewidth=250) print(im_rfft_small[4][0][:,:,0]) print() print(im_rfft_small[4][0][:,:,1]) torch.set_printoptions(profile="default") n = im_rfft_small.shape[2] // 2 im_rfft_small_shifted = torch.cat((im_rfft_small[:,:,n:,:,:], im_rfft_small[:,:,:n,:,:]), dim=-3) im_rfft_small_shifted = torch.cat((im_rfft_small_shifted[:,:,:,n:,:], im_rfft_small_shifted[:,:,:,:n,:]), dim=-2) torch.set_printoptions(precision=2, profile="full", linewidth=250) print(im_rfft_small_shifted[4][0][:,:,0]) plt.imshow(im_rfft_small_shifted[4][0][:,:,0]) plt.show() print(im_rfft_small_shifted[4][0][:,:,1]) plt.imshow(im_rfft_small_shifted[4][0][:,:,1]) plt.show() torch.set_printoptions(profile="default") plt.imshow(transforms.ToPILImage()(im_cpu[4,:,:16,:16])) im = transforms.ToPILImage()(im_cpu[4,:,:16,:16]) im_smaller = im.resize((15, 15), resample=PIL.Image.LANCZOS) im_rfft_smaller = torch.rfft(transforms.ToTensor()(im_smaller), 2, onesided=False) n = im_rfft_smaller.shape[2] // 2 + 1 im_rfft_smaller_shifted = torch.cat((im_rfft_smaller[:,n:,:,:], im_rfft_smaller[:,:n,:,:]), dim=-3) im_rfft_smaller_shifted = torch.cat((im_rfft_smaller_shifted[:,:,n:,:], im_rfft_smaller_shifted[:,:,:n,:]), dim=-2) torch.set_printoptions(precision=2, profile="full", linewidth=250) print(im_rfft_smaller_shifted[0][:,:,0]) plt.imshow(im_rfft_smaller_shifted[0][:,:,0]) plt.show() print(im_rfft_smaller_shifted[0][:,:,1]) plt.imshow(im_rfft_smaller_shifted[0][:,:,1]) plt.show() torch.set_printoptions(profile="default") plt.imshow(im_smaller) plt.show() plt.imshow(im) plt.show() torch.set_printoptions(precision=2, profile="full", linewidth=250) print(im_rfft_small_shifted[4][0][:,:,1]) print() 
print(im_rfft_smaller_shifted[0][:,:,1]) plt.imshow(im_rfft_small_shifted[4][0][:,:,1]) plt.show() plt.imshow(im_rfft_smaller_shifted[0][:,:,1]) plt.show() torch.set_printoptions(profile="default") im_rfft_smaller_shifted[0][:,:,1].argmin() print(im_rfft_smaller_shifted[0][:,:,1].min()) xs = torch.tensor(range(3)).type(torch.FloatTensor) bases = [ torch.cos(np.pi * p * (2. * xs + 1) / (2 * 3)) for p in range(3) ] def mesh_bases(b1, b2): rr, cc = torch.meshgrid([b1, b2]) return rr * cc full_bases = torch.stack([ mesh_bases(b1, b2) for b1 in bases for b2 in bases ]) full_bases.shape convo = torch.nn.Conv2d(8, 16, 3) convo.weight.shape def make_bases(length, num_bases): xs = torch.tensor(range(length)).type(torch.FloatTensor) bases = [ torch.cos(np.pi * p * (2. * xs + 1) / (2 * length)) for p in range(num_bases) ] def mesh_bases(b1, b2): rr, cc = torch.meshgrid([b1, b2]) return rr * cc full_bases = torch.stack([ mesh_bases(b1, b2) for b1 in bases for b2 in bases ]) return full_bases num_bases = 3 for length in [3, 4, 6, 8, 11, 16, 22, 32]: fig, axes = plt.subplots( num_bases, num_bases, subplot_kw={'xticks': [], 'yticks': []}, figsize=(6, 6) ) bases = make_bases(length, num_bases) for i, ax in enumerate(axes.flat): ax.imshow(bases[i]) plt.tight_layout() plt.show() print('-' * 60)
0.68742
0.621527
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import sklearn dataset = pd.read_csv('/content/drive/MyDrive/Machine Learning A-Z/heart.csv') df = dataset.copy() df.head() ``` **Exploratory data analysis** ``` pd.set_option("display.float","{:.2f}".format) df.describe() df.cp.value_counts().plot(kind = "bar", color=['salmon','lightblue']) df.target.value_counts().plot(kind = "bar", color=['salmon','lightblue']) df.slope.value_counts().plot(kind = "bar", color=['salmon','lightblue']) df.isna().sum() categorical_val = [] continuous_val = [] for column in df.columns: print("======================================================") print(f"{column} : {df[column].unique()}") if len(df[column].unique())<=10: categorical_val.append(column) else: continuous_val.append(column) plt.figure(figsize=(15,15)) for i, column in enumerate(categorical_val, 1): plt.subplot= (3, 3, i) df[df['target']==0][column].hist(bins=35, color='blue', label='Have Heart Disease=No', alpha=0.6) df[df['target']==1][column].hist(bins=35, color='red', label='Have Heart Disease=Yes', alpha=0.6) plt.legend() plt.xlabel(column) corr_matrix = df.corr() fig, ax = plt.subplots(figsize=(15, 15)) ax = sns.heatmap(corr_matrix, annot=True, linewidths=0.5, fmt=".2f", cmap="YlGnBu"); bottom, top = ax.get_ylim() ax.set_ylim(bottom + 0.5, top - 0.5) from sklearn.model_selection import train_test_split X = dataset.drop('target', axis=1) y = dataset.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) from sklearn.linear_model import LogisticRegression lr_clf = LogisticRegression() lr_clf.fit(X_train, y_train) from sklearn.metrics import accuracy_score, confusion_matrix, classification_report def print_score(clf, X_train, y_train, X_test, y_test, train=True): if train: pred = clf.predict(X_train) clf_report = pd.DataFrame(classification_report(y_train, pred, output_dict=True)) print("Train Result:\n================================================") print(f"Accuracy Score: {accuracy_score(y_train, pred) * 100:.2f}%") print("_______________________________________________") print(f"CLASSIFICATION REPORT:\n{clf_report}") print("_______________________________________________") print(f"Confusion Matrix: \n {confusion_matrix(y_train, pred)}\n") elif train==False: pred = clf.predict(X_test) clf_report = pd.DataFrame(classification_report(y_test, pred, output_dict=True)) print("Test Result:\n================================================") print(f"Accuracy Score: {accuracy_score(y_test, pred) * 100:.2f}%") print("_______________________________________________") print(f"CLASSIFICATION REPORT:\n{clf_report}") print("_______________________________________________") print(f"Confusion Matrix: \n {confusion_matrix(y_test, pred)}\n") from sklearn.linear_model import LogisticRegression lr_clf = LogisticRegression(solver='liblinear') lr_clf.fit(X_train, y_train) print_score(lr_clf, X_train, y_train, X_test, y_test, train=True) print_score(lr_clf, X_train, y_train, X_test, y_test, train=False) from sklearn.svm import SVC model = SVC() model.fit(X_train, y_train) from sklearn.metrics import accuracy_score, classification_report accuracy = model.score(X_test,y_test) accuracy ```
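The final `SVC` above is fit on unscaled features, and RBF-kernel SVMs are sensitive to feature scale. A hedged follow-up worth comparing (not part of the original notebook) is to standardize the features in a pipeline and reuse the `print_score` helper defined above:

```
# Sketch: standardize features before the SVC, then report test metrics.
# Assumes X_train, X_test, y_train, y_test and print_score from the cells above.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

scaled_svc = make_pipeline(StandardScaler(), SVC())
scaled_svc.fit(X_train, y_train)

print_score(scaled_svc, X_train, y_train, X_test, y_test, train=False)
```

Distance-based models such as the SVC usually benefit from this step, whereas tree-based models would not need it.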
github_jupyter
import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import sklearn dataset = pd.read_csv('/content/drive/MyDrive/Machine Learning A-Z/heart.csv') df = dataset.copy() df.head() pd.set_option("display.float","{:.2f}".format) df.describe() df.cp.value_counts().plot(kind = "bar", color=['salmon','lightblue']) df.target.value_counts().plot(kind = "bar", color=['salmon','lightblue']) df.slope.value_counts().plot(kind = "bar", color=['salmon','lightblue']) df.isna().sum() categorical_val = [] continuous_val = [] for column in df.columns: print("======================================================") print(f"{column} : {df[column].unique()}") if len(df[column].unique())<=10: categorical_val.append(column) else: continuous_val.append(column) plt.figure(figsize=(15,15)) for i, column in enumerate(categorical_val, 1): plt.subplot= (3, 3, i) df[df['target']==0][column].hist(bins=35, color='blue', label='Have Heart Disease=No', alpha=0.6) df[df['target']==1][column].hist(bins=35, color='red', label='Have Heart Disease=Yes', alpha=0.6) plt.legend() plt.xlabel(column) corr_matrix = df.corr() fig, ax = plt.subplots(figsize=(15, 15)) ax = sns.heatmap(corr_matrix, annot=True, linewidths=0.5, fmt=".2f", cmap="YlGnBu"); bottom, top = ax.get_ylim() ax.set_ylim(bottom + 0.5, top - 0.5) from sklearn.model_selection import train_test_split X = dataset.drop('target', axis=1) y = dataset.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) from sklearn.linear_model import LogisticRegression lr_clf = LogisticRegression() lr_clf.fit(X_train, y_train) from sklearn.metrics import accuracy_score, confusion_matrix, classification_report def print_score(clf, X_train, y_train, X_test, y_test, train=True): if train: pred = clf.predict(X_train) clf_report = pd.DataFrame(classification_report(y_train, pred, output_dict=True)) print("Train Result:\n================================================") print(f"Accuracy Score: {accuracy_score(y_train, pred) * 100:.2f}%") print("_______________________________________________") print(f"CLASSIFICATION REPORT:\n{clf_report}") print("_______________________________________________") print(f"Confusion Matrix: \n {confusion_matrix(y_train, pred)}\n") elif train==False: pred = clf.predict(X_test) clf_report = pd.DataFrame(classification_report(y_test, pred, output_dict=True)) print("Test Result:\n================================================") print(f"Accuracy Score: {accuracy_score(y_test, pred) * 100:.2f}%") print("_______________________________________________") print(f"CLASSIFICATION REPORT:\n{clf_report}") print("_______________________________________________") print(f"Confusion Matrix: \n {confusion_matrix(y_test, pred)}\n") from sklearn.linear_model import LogisticRegression lr_clf = LogisticRegression(solver='liblinear') lr_clf.fit(X_train, y_train) print_score(lr_clf, X_train, y_train, X_test, y_test, train=True) print_score(lr_clf, X_train, y_train, X_test, y_test, train=False) from sklearn.svm import SVC model = SVC() model.fit(X_train, y_train) from sklearn.metrics import accuracy_score, classification_report accuracy = model.score(X_test,y_test) accuracy
0.5564
0.680794
## Dependencies ``` from openvaccine_scripts import * import warnings, json from sklearn.model_selection import KFold, StratifiedKFold, GroupKFold import tensorflow.keras.layers as L import tensorflow.keras.backend as K from tensorflow.keras import optimizers, losses, Model from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau SEED = 0 seed_everything(SEED) warnings.filterwarnings('ignore') ``` # Model parameters ``` config = { "BATCH_SIZE": 32, "EPOCHS": 70, "LEARNING_RATE": 1e-3, "ES_PATIENCE": 10, "N_FOLDS": 5, "N_USED_FOLDS": 5, "PB_SEQ_LEN": 107, "PV_SEQ_LEN": 130, } with open('config.json', 'w') as json_file: json.dump(json.loads(json.dumps(config)), json_file) config ``` # Load data ``` database_base_path = '/kaggle/input/stanford-covid-vaccine/' train = pd.read_json(database_base_path + 'train.json', lines=True) test = pd.read_json(database_base_path + 'test.json', lines=True) print('Train samples: %d' % len(train)) display(train.head()) print(f'Test samples: {len(test)}') display(test.head()) ``` ## Data augmentation ``` def aug_data(df): target_df = df.copy() new_df = aug_df[aug_df['id'].isin(target_df['id'])] del target_df['structure'] del target_df['predicted_loop_type'] new_df = new_df.merge(target_df, on=['id','sequence'], how='left') df['cnt'] = df['id'].map(new_df[['id','cnt']].set_index('id').to_dict()['cnt']) df['log_gamma'] = 100 df['score'] = 1.0 new_df['augmented'] = True df['augmented'] = False df = df.append(new_df[df.columns]) return df # Augmented data aug_df = pd.read_csv('/kaggle/input/augmented-data-for-stanford-covid-vaccine/48k_augment.csv') print(f'Augmented samples: {len(aug_df)}') display(aug_df.head()) print(f"Samples in train before augmentation: {len(train)}") print(f"Samples in test before augmentation: {len(test)}") train = aug_data(train) train.drop('index', axis=1, inplace=True) train = train.reset_index() test = aug_data(test) test.drop('index', axis=1, inplace=True) test = test.reset_index() print(f"Samples in train after augmentation: {len(train)}") print(f"Samples in test after augmentation: {len(test)}") print(f"Unique id in train: {len(train['id'].unique())}") print(f"Unique sequences in train: {len(train['sequence'].unique())}") print(f"Unique structure in train: {len(train['structure'].unique())}") print(f"Unique predicted_loop_type in train: {len(train['predicted_loop_type'].unique())}") print(f"Unique id in test: {len(test['id'].unique())}") print(f"Unique sequences in test: {len(test['sequence'].unique())}") print(f"Unique structure in test: {len(test['structure'].unique())}") print(f"Unique predicted_loop_type in test: {len(test['predicted_loop_type'].unique())}") ``` ## Auxiliary functions ``` def get_dataset(x, y=None, sample_weights=None, labeled=True, shuffled=True, repeated=False, batch_size=32, buffer_size=-1, seed=0): input_map = {'inputs_seq': x['sequence'], 'inputs_struct': x['structure'], 'inputs_loop': x['predicted_loop_type'], 'inputs_bpps_max': x['bpps_max'], 'inputs_bpps_sum': x['bpps_sum'], 'inputs_bpps_scaled': x['bpps_scaled']} if labeled: output_map = {'output_react': y['reactivity'], 'output_mg_ph': y['deg_Mg_pH10'], 'output_ph': y['deg_pH10'], 'output_mg_c': y['deg_Mg_50C'], 'output_c': y['deg_50C']} if sample_weights is not None: dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map, sample_weights)) else: dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map)) else: dataset = tf.data.Dataset.from_tensor_slices((input_map)) if repeated: dataset = dataset.repeat() if 
shuffled: dataset = dataset.shuffle(2048, seed=seed) dataset = dataset.batch(batch_size) dataset = dataset.prefetch(buffer_size) return dataset def get_dataset_sampling(x, y=None, sample_weights=None, labeled=True, shuffled=True, repeated=False, batch_size=32, buffer_size=-1, seed=0): input_map = {'inputs_seq': x['sequence'], 'inputs_struct': x['structure'], 'inputs_loop': x['predicted_loop_type'], 'inputs_bpps_max': x['bpps_max'], 'inputs_bpps_sum': x['bpps_sum'], 'inputs_bpps_scaled': x['bpps_scaled']} if labeled: output_map = {'output_react': y['reactivity'], 'output_mg_ph': y['deg_Mg_pH10'], 'output_ph': y['deg_pH10'], 'output_mg_c': y['deg_Mg_50C'], 'output_c': y['deg_50C']} if sample_weights is not None: dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map, sample_weights)) else: dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map)) else: dataset = tf.data.Dataset.from_tensor_slices((input_map)) if repeated: dataset = dataset.repeat() if shuffled: dataset = dataset.shuffle(2048, seed=seed) return dataset ``` # Pre-process ``` # Add bpps as features train = add_bpps_features(train, database_base_path) test = add_bpps_features(test, database_base_path) feature_cols = ['sequence', 'structure', 'predicted_loop_type', 'bpps_max', 'bpps_sum', 'bpps_scaled'] pred_cols = ['reactivity', 'deg_Mg_pH10', 'deg_pH10', 'deg_Mg_50C', 'deg_50C'] encoder_list = [token2int_seq, token2int_struct, token2int_loop, None, None, None] public_test = test.query("seq_length == 107").copy() private_test = test.query("seq_length == 130").copy() x_test_public = get_features_dict(public_test, feature_cols, encoder_list, public_test.index) x_test_private = get_features_dict(private_test, feature_cols, encoder_list, private_test.index) # To use as stratified col train['signal_to_noise_int'] = train['signal_to_noise'].astype(int) ``` # Model ``` def model_fn(hidden_dim=384, dropout=.5, pred_len=68, n_outputs=5): inputs_seq = L.Input(shape=(None, 1), name='inputs_seq') inputs_struct = L.Input(shape=(None, 1), name='inputs_struct') inputs_loop = L.Input(shape=(None, 1), name='inputs_loop') inputs_bpps_max = L.Input(shape=(None, 1), name='inputs_bpps_max') inputs_bpps_sum = L.Input(shape=(None, 1), name='inputs_bpps_sum') inputs_bpps_scaled = L.Input(shape=(None, 1), name='inputs_bpps_scaled') def _one_hot(x, num_classes): return K.squeeze(K.one_hot(K.cast(x, 'uint8'), num_classes=num_classes), axis=2) ohe_seq = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_seq)}, input_shape=(None, 1))(inputs_seq) ohe_struct = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_struct)}, input_shape=(None, 1))(inputs_struct) ohe_loop = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_loop)}, input_shape=(None, 1))(inputs_loop) ### Encoder block # Conv block conv_seq = L.Conv1D(filters=64, kernel_size=3, padding='same')(ohe_seq) conv_struct = L.Conv1D(filters=64, kernel_size=3, padding='same')(ohe_struct) conv_loop = L.Conv1D(filters=64, kernel_size=3, padding='same')(ohe_loop) conv_bpps_max = L.Conv1D(filters=64, kernel_size=3, padding='same')(inputs_bpps_max) conv_bpps_sum = L.Conv1D(filters=64, kernel_size=3, padding='same')(inputs_bpps_sum) conv_bpps_scaled = L.Conv1D(filters=64, kernel_size=3, padding='same')(inputs_bpps_scaled) x_concat = L.concatenate([conv_seq, conv_struct, conv_loop, conv_bpps_max, conv_bpps_sum, conv_bpps_scaled], axis=-1, name='conv_concatenate') # Recurrent block encoder, encoder_state_f, encoder_state_b = L.Bidirectional(L.GRU(hidden_dim, 
dropout=dropout, return_sequences=True, return_state=True, kernel_initializer='orthogonal'), name='Encoder_RNN')(x_concat) ### Decoder block decoder = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True, kernel_initializer='orthogonal'), name='Decoder')(encoder, initial_state=[encoder_state_f, encoder_state_b]) # Since we are only making predictions on the first part of each sequence, we have to truncate it decoder_truncated = decoder[:, :pred_len] output_react = L.Dense(1, name='output_react')(decoder_truncated) output_mg_ph = L.Dense(1, name='output_mg_ph')(decoder_truncated) output_ph = L.Dense(1, name='output_ph')(decoder_truncated) output_mg_c = L.Dense(1, name='output_mg_c')(decoder_truncated) output_c = L.Dense(1, name='output_c')(decoder_truncated) model = Model(inputs=[inputs_seq, inputs_struct, inputs_loop, inputs_bpps_max, inputs_bpps_sum, inputs_bpps_scaled], outputs=[output_react, output_mg_ph, output_ph, output_mg_c, output_c]) opt = optimizers.Adam(learning_rate=config['LEARNING_RATE']) model.compile(optimizer=opt, loss={'output_react': MCRMSE, 'output_mg_ph': MCRMSE, 'output_ph': MCRMSE, 'output_mg_c': MCRMSE, 'output_c': MCRMSE}, loss_weights={'output_react': 1., 'output_mg_ph': 1., 'output_ph': 1., 'output_mg_c': 1., 'output_c': 1.}) return model model = model_fn() model.summary() ``` # Training ``` AUTO = tf.data.experimental.AUTOTUNE skf = GroupKFold(n_splits=config['N_FOLDS']) history_list = [] oof = train[['id', 'SN_filter', 'signal_to_noise'] + pred_cols].copy() oof_preds = np.zeros((len(train), 68, len(pred_cols))) test_public_preds = np.zeros((len(public_test), config['PB_SEQ_LEN'], len(pred_cols))) test_private_preds = np.zeros((len(private_test), config['PV_SEQ_LEN'], len(pred_cols))) for fold,(train_idx, valid_idx) in enumerate(skf.split(train, train['signal_to_noise_int'], train['id'])): if fold >= config['N_USED_FOLDS']: break print(f'\nFOLD: {fold+1}') # Create clean and noisy datasets valid_clean_idxs = np.intersect1d(train[(train['SN_filter'] == 1) & (train['augmented'] == False)].index, valid_idx) ### Create datasets # x_train = get_features_dict(train, feature_cols, encoder_list, train_idx) # y_train = get_targets_dict(train, pred_cols, train_idx) # w_train = np.log(train.iloc[train_idx]['signal_to_noise'].values+1.2)+1 x_valid = get_features_dict(train, feature_cols, encoder_list, valid_clean_idxs) y_valid = get_targets_dict(train, pred_cols, valid_clean_idxs) w_valid = np.log(train.iloc[valid_clean_idxs]['signal_to_noise'].values+1.2)+1 # train_ds = get_dataset(x_train, y_train, w_train, labeled=True, shuffled=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) valid_ds = get_dataset(x_valid, y_valid, w_valid, labeled=True, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) oof_ds = get_dataset(get_features_dict(train, feature_cols, encoder_list, valid_idx), labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) test_public_ds = get_dataset(x_test_public, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) test_private_ds = get_dataset(x_test_private, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) # Create clean and noisy datasets normal_idxs = np.intersect1d(train[train['augmented'] == False].index, train_idx) x_train_normal = get_features_dict(train, feature_cols, encoder_list, normal_idxs) y_train_normal = get_targets_dict(train, pred_cols, normal_idxs) 
w_train_normal = np.log(train.iloc[normal_idxs]['signal_to_noise'].values+1.2)+1 normal_ds = get_dataset_sampling(x_train_normal, y_train_normal, w_train_normal, labeled=True, shuffled=True, repeated=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) augmented_idxs = np.intersect1d(train[train['augmented'] == True].index, train_idx) x_train_augmented = get_features_dict(train, feature_cols, encoder_list, augmented_idxs) y_train_augmented = get_targets_dict(train, pred_cols, augmented_idxs) w_train_augmented = np.log(train.iloc[augmented_idxs]['signal_to_noise'].values+1.2)+1 augmented_ds = get_dataset_sampling(x_train_augmented, y_train_augmented, w_train_augmented, labeled=True, shuffled=True, repeated=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) # Resampled TF Dataset resampled_ds = tf.data.experimental.sample_from_datasets([normal_ds, augmented_ds], weights=[.5, .5]) resampled_ds = resampled_ds.batch(config['BATCH_SIZE']).prefetch(AUTO) ### Model K.clear_session() model = model_fn() model_path = f'model_{fold}.h5' es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'], restore_best_weights=True, verbose=1) rlrp = ReduceLROnPlateau(monitor='val_loss', mode='min', factor=0.1, patience=5, verbose=1) ### Train history = model.fit(resampled_ds, validation_data=valid_ds, callbacks=[es, rlrp], epochs=config['EPOCHS'], batch_size=config['BATCH_SIZE'], steps_per_epoch=int(len(normal_idxs)//(config['BATCH_SIZE']* .5)), verbose=2).history history_list.append(history) # Save last model weights model.save_weights(model_path) ### Inference oof_ds_preds = np.array(model.predict(oof_ds)).reshape((len(pred_cols), len(valid_idx), 68)).transpose((1, 2, 0)) oof_preds[valid_idx] = oof_ds_preds # Short sequence (public test) model = model_fn(pred_len=config['PB_SEQ_LEN']) model.load_weights(model_path) test_public_ds_preds = np.array(model.predict(test_public_ds)).reshape((len(pred_cols), len(public_test), config['PB_SEQ_LEN'])).transpose((1, 2, 0)) test_public_preds += test_public_ds_preds * (1 / config['N_USED_FOLDS']) # Long sequence (private test) model = model_fn(pred_len=config['PV_SEQ_LEN']) model.load_weights(model_path) test_private_ds_preds = np.array(model.predict(test_private_ds)).reshape((len(pred_cols), len(private_test), config['PV_SEQ_LEN'])).transpose((1, 2, 0)) test_private_preds += test_private_ds_preds * (1 / config['N_USED_FOLDS']) ``` ## Model loss graph ``` for fold, history in enumerate(history_list): print(f'\nFOLD: {fold+1}') min_valid_idx = np.array(history['val_loss']).argmin() print(f"Train {np.array(history['loss'])[min_valid_idx]:.5f} Validation {np.array(history['val_loss'])[min_valid_idx]:.5f}") plot_metrics_agg(history_list) ``` # Post-processing ``` # Assign preds to OOF set for idx, col in enumerate(pred_cols): val = oof_preds[:, :, idx] oof = oof.assign(**{f'{col}_pred': list(val)}) oof.to_csv('oof.csv', index=False) oof_preds_dict = {} for col in pred_cols: oof_preds_dict[col] = oof_preds[:, :, idx] # Assign values to test set preds_ls = [] for df, preds in [(public_test, test_public_preds), (private_test, test_private_preds)]: for i, uid in enumerate(df.id): single_pred = preds[i] single_df = pd.DataFrame(single_pred, columns=pred_cols) single_df['id_seqpos'] = [f'{uid}_{x}' for x in range(single_df.shape[0])] preds_ls.append(single_df) preds_df = pd.concat(preds_ls) # Averaging over augmented predictions preds_df = pd.concat(preds_ls).groupby('id_seqpos').mean().reset_index() ``` # Model evaluation ``` 
y_true_dict = get_targets_dict(train, pred_cols, train.index) y_true = np.array([y_true_dict[col] for col in pred_cols]).transpose((1, 2, 0, 3)).reshape(oof_preds.shape) display(evaluate_model(train, y_true, oof_preds, pred_cols)) display(evaluate_model(train, y_true, oof_preds, pred_cols, use_cols=['reactivity', 'deg_Mg_pH10', 'deg_Mg_50C'])) ``` # Visualize test predictions ``` submission = pd.read_csv(database_base_path + 'sample_submission.csv') submission = submission[['id_seqpos']].merge(preds_df, on=['id_seqpos']) ``` # Test set predictions ``` display(submission.head(10)) display(submission.describe()) submission.to_csv('submission.csv', index=False) ```
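`MCRMSE`, imported from `openvaccine_scripts` and used as the loss for every output head above, is presumably the competition's mean column-wise root mean squared error over the scored targets:

$$\mathrm{MCRMSE}=\frac{1}{N_t}\sum_{j=1}^{N_t}\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_{ij}-\hat{y}_{ij}\right)^2}$$

where N_t is the number of scored target columns (all five `pred_cols`, or only `reactivity`, `deg_Mg_pH10`, and `deg_Mg_50C` in the second `evaluate_model` call) and n is the number of scored positions per sequence.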
github_jupyter
from openvaccine_scripts import * import warnings, json from sklearn.model_selection import KFold, StratifiedKFold, GroupKFold import tensorflow.keras.layers as L import tensorflow.keras.backend as K from tensorflow.keras import optimizers, losses, Model from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau SEED = 0 seed_everything(SEED) warnings.filterwarnings('ignore') config = { "BATCH_SIZE": 32, "EPOCHS": 70, "LEARNING_RATE": 1e-3, "ES_PATIENCE": 10, "N_FOLDS": 5, "N_USED_FOLDS": 5, "PB_SEQ_LEN": 107, "PV_SEQ_LEN": 130, } with open('config.json', 'w') as json_file: json.dump(json.loads(json.dumps(config)), json_file) config database_base_path = '/kaggle/input/stanford-covid-vaccine/' train = pd.read_json(database_base_path + 'train.json', lines=True) test = pd.read_json(database_base_path + 'test.json', lines=True) print('Train samples: %d' % len(train)) display(train.head()) print(f'Test samples: {len(test)}') display(test.head()) def aug_data(df): target_df = df.copy() new_df = aug_df[aug_df['id'].isin(target_df['id'])] del target_df['structure'] del target_df['predicted_loop_type'] new_df = new_df.merge(target_df, on=['id','sequence'], how='left') df['cnt'] = df['id'].map(new_df[['id','cnt']].set_index('id').to_dict()['cnt']) df['log_gamma'] = 100 df['score'] = 1.0 new_df['augmented'] = True df['augmented'] = False df = df.append(new_df[df.columns]) return df # Augmented data aug_df = pd.read_csv('/kaggle/input/augmented-data-for-stanford-covid-vaccine/48k_augment.csv') print(f'Augmented samples: {len(aug_df)}') display(aug_df.head()) print(f"Samples in train before augmentation: {len(train)}") print(f"Samples in test before augmentation: {len(test)}") train = aug_data(train) train.drop('index', axis=1, inplace=True) train = train.reset_index() test = aug_data(test) test.drop('index', axis=1, inplace=True) test = test.reset_index() print(f"Samples in train after augmentation: {len(train)}") print(f"Samples in test after augmentation: {len(test)}") print(f"Unique id in train: {len(train['id'].unique())}") print(f"Unique sequences in train: {len(train['sequence'].unique())}") print(f"Unique structure in train: {len(train['structure'].unique())}") print(f"Unique predicted_loop_type in train: {len(train['predicted_loop_type'].unique())}") print(f"Unique id in test: {len(test['id'].unique())}") print(f"Unique sequences in test: {len(test['sequence'].unique())}") print(f"Unique structure in test: {len(test['structure'].unique())}") print(f"Unique predicted_loop_type in test: {len(test['predicted_loop_type'].unique())}") def get_dataset(x, y=None, sample_weights=None, labeled=True, shuffled=True, repeated=False, batch_size=32, buffer_size=-1, seed=0): input_map = {'inputs_seq': x['sequence'], 'inputs_struct': x['structure'], 'inputs_loop': x['predicted_loop_type'], 'inputs_bpps_max': x['bpps_max'], 'inputs_bpps_sum': x['bpps_sum'], 'inputs_bpps_scaled': x['bpps_scaled']} if labeled: output_map = {'output_react': y['reactivity'], 'output_mg_ph': y['deg_Mg_pH10'], 'output_ph': y['deg_pH10'], 'output_mg_c': y['deg_Mg_50C'], 'output_c': y['deg_50C']} if sample_weights is not None: dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map, sample_weights)) else: dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map)) else: dataset = tf.data.Dataset.from_tensor_slices((input_map)) if repeated: dataset = dataset.repeat() if shuffled: dataset = dataset.shuffle(2048, seed=seed) dataset = dataset.batch(batch_size) dataset = dataset.prefetch(buffer_size) 
return dataset def get_dataset_sampling(x, y=None, sample_weights=None, labeled=True, shuffled=True, repeated=False, batch_size=32, buffer_size=-1, seed=0): input_map = {'inputs_seq': x['sequence'], 'inputs_struct': x['structure'], 'inputs_loop': x['predicted_loop_type'], 'inputs_bpps_max': x['bpps_max'], 'inputs_bpps_sum': x['bpps_sum'], 'inputs_bpps_scaled': x['bpps_scaled']} if labeled: output_map = {'output_react': y['reactivity'], 'output_mg_ph': y['deg_Mg_pH10'], 'output_ph': y['deg_pH10'], 'output_mg_c': y['deg_Mg_50C'], 'output_c': y['deg_50C']} if sample_weights is not None: dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map, sample_weights)) else: dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map)) else: dataset = tf.data.Dataset.from_tensor_slices((input_map)) if repeated: dataset = dataset.repeat() if shuffled: dataset = dataset.shuffle(2048, seed=seed) return dataset # Add bpps as features train = add_bpps_features(train, database_base_path) test = add_bpps_features(test, database_base_path) feature_cols = ['sequence', 'structure', 'predicted_loop_type', 'bpps_max', 'bpps_sum', 'bpps_scaled'] pred_cols = ['reactivity', 'deg_Mg_pH10', 'deg_pH10', 'deg_Mg_50C', 'deg_50C'] encoder_list = [token2int_seq, token2int_struct, token2int_loop, None, None, None] public_test = test.query("seq_length == 107").copy() private_test = test.query("seq_length == 130").copy() x_test_public = get_features_dict(public_test, feature_cols, encoder_list, public_test.index) x_test_private = get_features_dict(private_test, feature_cols, encoder_list, private_test.index) # To use as stratified col train['signal_to_noise_int'] = train['signal_to_noise'].astype(int) def model_fn(hidden_dim=384, dropout=.5, pred_len=68, n_outputs=5): inputs_seq = L.Input(shape=(None, 1), name='inputs_seq') inputs_struct = L.Input(shape=(None, 1), name='inputs_struct') inputs_loop = L.Input(shape=(None, 1), name='inputs_loop') inputs_bpps_max = L.Input(shape=(None, 1), name='inputs_bpps_max') inputs_bpps_sum = L.Input(shape=(None, 1), name='inputs_bpps_sum') inputs_bpps_scaled = L.Input(shape=(None, 1), name='inputs_bpps_scaled') def _one_hot(x, num_classes): return K.squeeze(K.one_hot(K.cast(x, 'uint8'), num_classes=num_classes), axis=2) ohe_seq = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_seq)}, input_shape=(None, 1))(inputs_seq) ohe_struct = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_struct)}, input_shape=(None, 1))(inputs_struct) ohe_loop = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_loop)}, input_shape=(None, 1))(inputs_loop) ### Encoder block # Conv block conv_seq = L.Conv1D(filters=64, kernel_size=3, padding='same')(ohe_seq) conv_struct = L.Conv1D(filters=64, kernel_size=3, padding='same')(ohe_struct) conv_loop = L.Conv1D(filters=64, kernel_size=3, padding='same')(ohe_loop) conv_bpps_max = L.Conv1D(filters=64, kernel_size=3, padding='same')(inputs_bpps_max) conv_bpps_sum = L.Conv1D(filters=64, kernel_size=3, padding='same')(inputs_bpps_sum) conv_bpps_scaled = L.Conv1D(filters=64, kernel_size=3, padding='same')(inputs_bpps_scaled) x_concat = L.concatenate([conv_seq, conv_struct, conv_loop, conv_bpps_max, conv_bpps_sum, conv_bpps_scaled], axis=-1, name='conv_concatenate') # Recurrent block encoder, encoder_state_f, encoder_state_b = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True, return_state=True, kernel_initializer='orthogonal'), name='Encoder_RNN')(x_concat) ### Decoder block decoder = 
L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True, kernel_initializer='orthogonal'), name='Decoder')(encoder, initial_state=[encoder_state_f, encoder_state_b]) # Since we are only making predictions on the first part of each sequence, we have to truncate it decoder_truncated = decoder[:, :pred_len] output_react = L.Dense(1, name='output_react')(decoder_truncated) output_mg_ph = L.Dense(1, name='output_mg_ph')(decoder_truncated) output_ph = L.Dense(1, name='output_ph')(decoder_truncated) output_mg_c = L.Dense(1, name='output_mg_c')(decoder_truncated) output_c = L.Dense(1, name='output_c')(decoder_truncated) model = Model(inputs=[inputs_seq, inputs_struct, inputs_loop, inputs_bpps_max, inputs_bpps_sum, inputs_bpps_scaled], outputs=[output_react, output_mg_ph, output_ph, output_mg_c, output_c]) opt = optimizers.Adam(learning_rate=config['LEARNING_RATE']) model.compile(optimizer=opt, loss={'output_react': MCRMSE, 'output_mg_ph': MCRMSE, 'output_ph': MCRMSE, 'output_mg_c': MCRMSE, 'output_c': MCRMSE}, loss_weights={'output_react': 1., 'output_mg_ph': 1., 'output_ph': 1., 'output_mg_c': 1., 'output_c': 1.}) return model model = model_fn() model.summary() AUTO = tf.data.experimental.AUTOTUNE skf = GroupKFold(n_splits=config['N_FOLDS']) history_list = [] oof = train[['id', 'SN_filter', 'signal_to_noise'] + pred_cols].copy() oof_preds = np.zeros((len(train), 68, len(pred_cols))) test_public_preds = np.zeros((len(public_test), config['PB_SEQ_LEN'], len(pred_cols))) test_private_preds = np.zeros((len(private_test), config['PV_SEQ_LEN'], len(pred_cols))) for fold,(train_idx, valid_idx) in enumerate(skf.split(train, train['signal_to_noise_int'], train['id'])): if fold >= config['N_USED_FOLDS']: break print(f'\nFOLD: {fold+1}') # Create clean and noisy datasets valid_clean_idxs = np.intersect1d(train[(train['SN_filter'] == 1) & (train['augmented'] == False)].index, valid_idx) ### Create datasets # x_train = get_features_dict(train, feature_cols, encoder_list, train_idx) # y_train = get_targets_dict(train, pred_cols, train_idx) # w_train = np.log(train.iloc[train_idx]['signal_to_noise'].values+1.2)+1 x_valid = get_features_dict(train, feature_cols, encoder_list, valid_clean_idxs) y_valid = get_targets_dict(train, pred_cols, valid_clean_idxs) w_valid = np.log(train.iloc[valid_clean_idxs]['signal_to_noise'].values+1.2)+1 # train_ds = get_dataset(x_train, y_train, w_train, labeled=True, shuffled=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) valid_ds = get_dataset(x_valid, y_valid, w_valid, labeled=True, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) oof_ds = get_dataset(get_features_dict(train, feature_cols, encoder_list, valid_idx), labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) test_public_ds = get_dataset(x_test_public, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) test_private_ds = get_dataset(x_test_private, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) # Create clean and noisy datasets normal_idxs = np.intersect1d(train[train['augmented'] == False].index, train_idx) x_train_normal = get_features_dict(train, feature_cols, encoder_list, normal_idxs) y_train_normal = get_targets_dict(train, pred_cols, normal_idxs) w_train_normal = np.log(train.iloc[normal_idxs]['signal_to_noise'].values+1.2)+1 normal_ds = get_dataset_sampling(x_train_normal, y_train_normal, w_train_normal, labeled=True, 
shuffled=True, repeated=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) augmented_idxs = np.intersect1d(train[train['augmented'] == True].index, train_idx) x_train_augmented = get_features_dict(train, feature_cols, encoder_list, augmented_idxs) y_train_augmented = get_targets_dict(train, pred_cols, augmented_idxs) w_train_augmented = np.log(train.iloc[augmented_idxs]['signal_to_noise'].values+1.2)+1 augmented_ds = get_dataset_sampling(x_train_augmented, y_train_augmented, w_train_augmented, labeled=True, shuffled=True, repeated=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) # Resampled TF Dataset resampled_ds = tf.data.experimental.sample_from_datasets([normal_ds, augmented_ds], weights=[.5, .5]) resampled_ds = resampled_ds.batch(config['BATCH_SIZE']).prefetch(AUTO) ### Model K.clear_session() model = model_fn() model_path = f'model_{fold}.h5' es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'], restore_best_weights=True, verbose=1) rlrp = ReduceLROnPlateau(monitor='val_loss', mode='min', factor=0.1, patience=5, verbose=1) ### Train history = model.fit(resampled_ds, validation_data=valid_ds, callbacks=[es, rlrp], epochs=config['EPOCHS'], batch_size=config['BATCH_SIZE'], steps_per_epoch=int(len(normal_idxs)//(config['BATCH_SIZE']* .5)), verbose=2).history history_list.append(history) # Save last model weights model.save_weights(model_path) ### Inference oof_ds_preds = np.array(model.predict(oof_ds)).reshape((len(pred_cols), len(valid_idx), 68)).transpose((1, 2, 0)) oof_preds[valid_idx] = oof_ds_preds # Short sequence (public test) model = model_fn(pred_len=config['PB_SEQ_LEN']) model.load_weights(model_path) test_public_ds_preds = np.array(model.predict(test_public_ds)).reshape((len(pred_cols), len(public_test), config['PB_SEQ_LEN'])).transpose((1, 2, 0)) test_public_preds += test_public_ds_preds * (1 / config['N_USED_FOLDS']) # Long sequence (private test) model = model_fn(pred_len=config['PV_SEQ_LEN']) model.load_weights(model_path) test_private_ds_preds = np.array(model.predict(test_private_ds)).reshape((len(pred_cols), len(private_test), config['PV_SEQ_LEN'])).transpose((1, 2, 0)) test_private_preds += test_private_ds_preds * (1 / config['N_USED_FOLDS']) for fold, history in enumerate(history_list): print(f'\nFOLD: {fold+1}') min_valid_idx = np.array(history['val_loss']).argmin() print(f"Train {np.array(history['loss'])[min_valid_idx]:.5f} Validation {np.array(history['val_loss'])[min_valid_idx]:.5f}") plot_metrics_agg(history_list) # Assign preds to OOF set for idx, col in enumerate(pred_cols): val = oof_preds[:, :, idx] oof = oof.assign(**{f'{col}_pred': list(val)}) oof.to_csv('oof.csv', index=False) oof_preds_dict = {} for col in pred_cols: oof_preds_dict[col] = oof_preds[:, :, idx] # Assign values to test set preds_ls = [] for df, preds in [(public_test, test_public_preds), (private_test, test_private_preds)]: for i, uid in enumerate(df.id): single_pred = preds[i] single_df = pd.DataFrame(single_pred, columns=pred_cols) single_df['id_seqpos'] = [f'{uid}_{x}' for x in range(single_df.shape[0])] preds_ls.append(single_df) preds_df = pd.concat(preds_ls) # Averaging over augmented predictions preds_df = pd.concat(preds_ls).groupby('id_seqpos').mean().reset_index() y_true_dict = get_targets_dict(train, pred_cols, train.index) y_true = np.array([y_true_dict[col] for col in pred_cols]).transpose((1, 2, 0, 3)).reshape(oof_preds.shape) display(evaluate_model(train, y_true, oof_preds, pred_cols)) 
display(evaluate_model(train, y_true, oof_preds, pred_cols, use_cols=['reactivity', 'deg_Mg_pH10', 'deg_Mg_50C'])) submission = pd.read_csv(database_base_path + 'sample_submission.csv') submission = submission[['id_seqpos']].merge(preds_df, on=['id_seqpos']) display(submission.head(10)) display(submission.describe()) submission.to_csv('submission.csv', index=False)
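The loss passed to `model.compile` above is MCRMSE (mean column-wise root mean squared error), whose definition is referenced but not shown in this excerpt. A minimal TensorFlow sketch of such a loss, assuming each output head receives `(batch, seq_len, 1)` tensors, could look like the following; it is an illustrative assumption, not necessarily this notebook's exact implementation.

```python
import tensorflow as tf

# Illustrative sketch of a column-wise RMSE loss for one output head.
# Assumes y_true / y_pred have shape (batch, seq_len, 1).
def mcrmse_sketch(y_true, y_pred):
    colwise_mse = tf.reduce_mean(tf.square(y_true - y_pred), axis=1)  # mean over sequence positions
    return tf.reduce_mean(tf.sqrt(colwise_mse + 1e-9))                # mean over columns / batch
```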
# Doc2Vec -- UMAP ``` import sys, os, string, glob, gensim, umap import pandas as pd import numpy as np import gensim.models.doc2vec assert gensim.models.doc2vec.FAST_VERSION > -1 # This will be painfully slow otherwise from gensim.models.doc2vec import Doc2Vec, TaggedDocument from gensim.utils import simple_preprocess import plotly.express as px import plotly.graph_objs as go from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot init_notebook_mode(connected = True) # Import parser module. module_path = os.path.abspath(os.path.join('..')) if module_path not in sys.path: sys.path.append(module_path + '//Scripts') from functions_xml_ET_parse import * # Declare absolute path. abs_dir = "/Users/quinn.wi/Documents/" ``` ## Build Dataframe from XML ``` %%time """ Declare variables. """ # Declare regex to simplify file paths below regex = re.compile(r'.*/.*/(.*.xml)') # Declare document level of file. Requires root starting point ('.'). doc_as_xpath = './/ns:div/[@type="entry"]' # Declare date element of each document. date_path = './ns:bibl/ns:date/[@when]' # Declare person elements in each document. person_path = './/ns:p/ns:persRef/[@ref]' # Declare subject elements in each document. subject_path = './/ns:bibl//ns:subject' # Declare text level within each document. text_path = './ns:div/[@type="docbody"]/ns:p' """ Build dataframe. """ dataframe = [] for file in glob.glob(abs_dir + 'Data/PSC/JQA/*/*.xml'): reFile = str(regex.search(file).group(1)) # Call functions to create necessary variables and grab content. root = get_root(file) ns = get_namespace(root) for eachDoc in root.findall(doc_as_xpath, ns): # Call functions. entry = get_document_id(eachDoc, '{http://www.w3.org/XML/1998/namespace}id') date = get_date_from_attrValue(eachDoc, date_path, 'when', ns) people = get_peopleList_from_attrValue(eachDoc, person_path, 'ref', ns) subject = get_subject(eachDoc, subject_path, ns) text = get_textContent(eachDoc, text_path, ns) dataframe.append([reFile, entry, date, people, subject, text]) dataframe = pd.DataFrame(dataframe, columns = ['file', 'entry', 'date', 'people', 'subject', 'text']) # Split subject list and return "Multiple-Subject" or lone subject. dataframe['subject'] = dataframe['subject'].str.split(r',') def handle_subjects(subj_list): if len(subj_list) > 1: return 'Multiple-Subjects' else: return subj_list[0] dataframe['subject'] = dataframe['subject'].apply(handle_subjects) dataframe.head(4) ``` ## UMAP ``` %%time model = Doc2Vec.load(abs_dir + 'Data/Output/WordVectors/jqa-d2v.txt') docs = list(model.dv.index_to_key) data = np.array(model[docs]) reducer = umap.UMAP() embedding = reducer.fit_transform(data) x = [] y = [] for e in embedding: x.append(e[0]) y.append(e[1]) data_umap = pd.DataFrame({'entry': dataframe['entry'], 'date': dataframe['date'], 'subject': dataframe['subject'], 'x': x, 'y': y}) data_umap.head(3) %%time data_umap.to_csv(abs_dir + 'Data/Output/WordVectors/jqa-d2v-umap.txt', sep = ',', index = False) ``` ## Visualize ``` # %%time # # Visualize # fig = px.scatter(data_umap, x = 'x', y = 'y', # render_mode = 'webgl') # fig.show() ```
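The visualization cell above is left commented out. As a working follow-up, the UMAP coordinates in `data_umap` can be plotted with Plotly Express and colored by the `subject` field built earlier; this sketch simply reuses the dataframe and imports from this notebook.

```python
import plotly.express as px

# Interactive scatter of the Doc2Vec/UMAP embedding, colored by entry subject.
# Assumes data_umap (columns: entry, date, subject, x, y) from the cell above.
fig = px.scatter(
    data_umap, x='x', y='y',
    color='subject',
    hover_data=['entry', 'date'],
    render_mode='webgl'
)
fig.show()
```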
``` import torchvision import torch from torch import nn import matplotlib.pyplot as plt import numpy as np from torch import optim device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") class To3Channels(object): """Convert ndarrays in sample to Tensors.""" def __call__(self, sample): if sample.shape[0] < 3: sample = torch.squeeze(sample) sample = torch.stack([sample, sample,sample], 0) return sample transformer = torchvision.transforms.Compose( [ torchvision.transforms.ToTensor()]) #To3Channels(), #torchvision.transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))]) CIFAR10_train = torchvision.datasets.CIFAR10("../datasets/CIFAR10/", train=True, transform=transformer, target_transform=None, download=True) CIFAR100_train = torchvision.datasets.CIFAR100("../datasets/CIFAR100/", train=True, transform=transformer, target_transform=None, download=True) FashionMNIST_train = torchvision.datasets.FashionMNIST("../datasets/FashionMNIST/", train=True, transform=transformer, target_transform=None, download=True) CIFAR10_test = torchvision.datasets.CIFAR10("../datasets/CIFAR10/", train=False, transform=transformer, target_transform=None, download=True) CIFAR100_test = torchvision.datasets.CIFAR100("../datasets/CIFAR100/", train=False, transform=transformer, target_transform=None, download=True) FashionMNIST_test = torchvision.datasets.FashionMNIST("../datasets/FashionMNIST/", train=False, transform=transformer, target_transform=None, download=True) def get_loaders(dataset = "CIFAR10"): train_loader = None test_loader = None labels_num = None if dataset == "CIFAR10": train_loader = torch.utils.data.DataLoader(CIFAR10_train, batch_size=64, shuffle=True, num_workers=2) test_loader = torch.utils.data.DataLoader(CIFAR10_test, batch_size=64, shuffle=True, num_workers=2) labels_num = 10#len(set(CIFAR10_train.train_labels)) elif dataset == "CIFAR100": train_loader = torch.utils.data.DataLoader(CIFAR100_train, batch_size=32, shuffle=True, num_workers=2) test_loader = torch.utils.data.DataLoader(CIFAR100_test, batch_size=2, shuffle=True, num_workers=2) labels_num = 100 elif dataset == "FASHION_MNIST": train_loader = torch.utils.data.DataLoader(FashionMNIST_train, batch_size=64, shuffle=True, num_workers=2) test_loader = torch.utils.data.DataLoader(FashionMNIST_test, batch_size=64, shuffle=True, num_workers=2) labels_num = len(set(FashionMNIST_train.train_labels)) return train_loader,test_loader,labels_num class VGG16(nn.Module): def __init__(self,num_classes): super(VGG16, self).__init__() self.conv1_1 = nn.Conv2d(in_channels=1,out_channels=16,kernel_size=3,stride=1,padding=1) nn.init.xavier_uniform_(self.conv1_1.weight) self.actv1_1 = nn.ReLU() self.conv1_2 = nn.Conv2d(in_channels=16,out_channels=16,kernel_size=3,stride=1,padding=1) nn.init.xavier_uniform_(self.conv1_2.weight) self.actv1_2 = nn.ReLU() self.pool1 = nn.MaxPool2d(stride=2,kernel_size=2) self.conv2_1 = nn.Conv2d(in_channels=16,out_channels=32,kernel_size=3,stride=1,padding=1) nn.init.xavier_uniform_(self.conv2_1.weight) self.actv2_1 = nn.ReLU() self.conv2_2 = nn.Conv2d(in_channels=32,out_channels=32,kernel_size=3,stride=1,padding=1) nn.init.xavier_uniform_(self.conv2_2.weight) self.actv2_2 = nn.ReLU() self.pool2 = nn.MaxPool2d(stride=2,kernel_size=2) self.conv3_1 = nn.Conv2d(in_channels=32,out_channels=64,kernel_size=3,stride=1,padding=1) nn.init.xavier_uniform_(self.conv3_1.weight) self.actv3_1 = nn.ReLU() self.conv3_2 = nn.Conv2d(in_channels=64,out_channels=64,kernel_size=3,stride=1,padding=1) 
nn.init.xavier_uniform_(self.conv3_2.weight) self.actv3_2 = nn.ReLU() self.conv3_3 = nn.Conv2d(in_channels=64,out_channels=64,kernel_size=3,stride=1,padding=1) nn.init.xavier_uniform_(self.conv3_3.weight) self.actv3_3 = nn.ReLU() self.pool3 = nn.MaxPool2d(stride=2,kernel_size=2) self.conv4_1 = nn.Conv2d(in_channels=256,out_channels=512,kernel_size=3,stride=1,padding=1) nn.init.xavier_uniform_(self.conv4_1.weight) self.actv4_1 = nn.ReLU() self.conv4_2 = nn.Conv2d(in_channels=512,out_channels=512,kernel_size=3,stride=1,padding=1) nn.init.xavier_uniform_(self.conv4_2.weight) self.actv4_2 = nn.ReLU() self.conv4_3 = nn.Conv2d(in_channels=512,out_channels=512,kernel_size=3,stride=1,padding=1) nn.init.xavier_uniform_(self.conv4_3.weight) self.actv4_3 = nn.ReLU() self.pool4 = nn.MaxPool2d(stride=2,kernel_size=2) self.conv5_1 = nn.Conv2d(in_channels=512,out_channels=512,kernel_size=3,stride=1,padding=1) nn.init.xavier_uniform_(self.conv5_1.weight) self.actv5_1 = nn.ReLU() self.conv5_2 = nn.Conv2d(in_channels=512,out_channels=512,kernel_size=3,stride=1,padding=1) nn.init.xavier_uniform_(self.conv5_2.weight) self.actv5_2 = nn.ReLU() self.conv5_3 = nn.Conv2d(in_channels=512,out_channels=512,kernel_size=3,stride=1,padding=1) nn.init.xavier_uniform_(self.conv5_3.weight) self.actv5_3 = nn.ReLU() self.pool5 = nn.MaxPool2d(stride=2,kernel_size=2) self.fc6 = nn.Linear(3*3*64,1000) nn.init.xavier_uniform_(self.fc6.weight) self.actv6 = nn.ReLU() self.dropout6 = nn.Dropout(0.5) self.fc7 = nn.Linear(1000,1000) nn.init.xavier_uniform_(self.fc7.weight) self.actv7 = nn.ReLU() self.dropout7 = nn.Dropout(0.5) self.fc8 = nn.Linear(7*7*32,num_classes) nn.init.xavier_uniform_(self.fc8.weight) def forward(self, x): x = self.actv1_1(self.conv1_1(x)) x = self.actv1_2(self.conv1_2(x)) x = self.pool1(x) x = self.actv2_1(self.conv2_1(x)) x = self.actv2_2(self.conv2_2(x)) x = self.pool2(x) x = self.actv3_1(self.conv3_1(x)) x = self.actv3_2(self.conv3_2(x)) x = self.actv3_3(self.conv3_3(x)) x = self.pool3(x) x = self.actv4_1(self.conv4_1(x)) x = self.actv4_2(self.conv4_2(x)) x = self.actv4_3(self.conv4_3(x)) x = self.pool4(x) x = self.actv5_1(self.conv5_1(x)) x = self.actv5_2(self.conv5_2(x)) x = self.actv5_3(self.conv5_3(x)) x = self.pool5(x) x = torch.flatten(x, start_dim=1) x = self.actv6(self.fc6(x)) x = self.actv7(self.fc7(x)) x = self.fc8(x) return x def compute_accuracy(net, testloader): net.eval() correct = 0 total = 0 with torch.no_grad(): for images, labels in testloader: images, labels = images.to(device), labels.to(device) outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() return correct / total def train(net,trainloader,testloader,optim_name = "adam",epochs = 30): optimizer = optim.Adam(net.parameters(),lr= 0.01,weight_decay=0.0005) if optim_name == "sgd": optimizer = optim.SGD(net.parameters(),0.01,0.9) criterion = torch.nn.CrossEntropyLoss() losses = [] accuracies = [] for epoch in range(epochs): running_loss = 0.0 net.train() for i,data in enumerate(trainloader,0): inputs, labels = data[0].to(device), data[1].to(device) # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() running_loss += loss.item() if i % 200 == 199: # print every 100 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 200)) losses.append(running_loss/200) running_loss = 0.0 accuracy = compute_accuracy(net,testloader) 
accuracies.append(accuracy) print('Accuracy of the network on the test images: %.3f' % accuracy) return accuracies,losses from google.colab import files def run(dataset = "CIFAR10",epochs = 30): trainloader, testloader, num_classes = get_loaders(dataset) net = VGG16(num_classes) net.to(device) accuracies, losses = train(net, trainloader, testloader,optim_name = "sgd",epochs=epochs) f = plt.figure(1) x = np.linspace(0, 1, len(losses)) plt.plot(x,losses) f.show() g = plt.figure(2) x = np.linspace(0, 1, len(accuracies)) plt.plot(x, accuracies, figure = g) g.show() #files.download( dataset + "_loss.png") plt.show() #files.download( dataset + "_accuracy.png") #run(epochs=15) #run("CIFAR100",30) run("FASHION_MNIST",15) ```
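For comparison with the hand-wired network above, torchvision ships a reference VGG-16 whose head can be sized to the dataset's class count. The sketch below assumes 3-channel 224×224 inputs (e.g. CIFAR images upscaled), unlike the single-channel layers defined in this notebook.

```python
import torch
import torchvision

# Reference VGG-16 from torchvision with a 10-class head (no pretrained weights).
ref_net = torchvision.models.vgg16(num_classes=10)

# Quick shape check with a dummy 3-channel 224x224 batch.
dummy = torch.zeros(1, 3, 224, 224)
print(ref_net(dummy).shape)  # torch.Size([1, 10])
```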
``` import pandas as pd import numpy as np import cv2 import os import re from PIL import Image import albumentations as A from albumentations.pytorch.transforms import ToTensorV2 import torch import torchvision from torchvision.models.detection.faster_rcnn import FastRCNNPredictor from torchvision.models.detection import FasterRCNN from torchvision.models.detection.rpn import AnchorGenerator from torch.utils.data import DataLoader, Dataset from torch.utils.data.sampler import SequentialSampler from matplotlib import pyplot as plt DIR_INPUT = '/home/hy/dataset/gwd' DIR_TRAIN = f'{DIR_INPUT}/train' DIR_TEST = f'{DIR_INPUT}/test' train_df = pd.read_csv(f'{DIR_INPUT}/train.csv') train_df.shape train_df train_df['x'] = -1 train_df['y'] = -1 train_df['w'] = -1 train_df['h'] = -1 train_df def expand_bbox(x): r = np.array(re.findall("([0-9]+[.]?[0-9]*)", x)) if len(r)==0: r=[-1,-1,-1,-1] return r train_df[['x','y','w','h']] = np.stack(train_df['bbox'].apply(lambda x: expand_bbox(x))) train_df.drop(columns=['bbox'], inplace=True) train_df['x'] = train_df['x'].astype(np.float) train_df['y'] = train_df['y'].astype(np.float) train_df['w'] = train_df['w'].astype(np.float) train_df['h'] = train_df['h'].astype(np.float) train_df image_ids = train_df['image_id'].unique() valid_ids = image_ids[-665:] train_ids = image_ids[:-665] valid_df = train_df[train_df['image_id'].isin(valid_ids)] train_df = train_df[train_df['image_id'].isin(train_ids)] valid_df.shape, train_df.shape train_df class WheatDataset(Dataset): def __init__(self, dataframe, image_dir, transform=None): super().__init__() self.image_ids= dataframe['image_id'].unique() self.ds = dataframe self.image_dir = image_dir self.transforms = transforms def __getitem__(self, index: int): records = self.image_ids[index] records = self.df[self.df['image_id'] == image_id] image = cv2.imread(f'{self.image_dir}/{image_id}.jpg', cv2.IMREAD_COLOR) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.float32) image /= 255.0 boxes = records[['x','y','w','h']].values train_df['image_id'].unique()[0] train_df[train_df['image_id'] == 'b6ab77fd7'] #records records = train_df[train_df['image_id'] == 'b6ab77fd7'] records[['x','y','w','h']].values boxes =records[['x','y','w','h']].values boxes[:, 2] = boxes[:, 0] + boxes[:, 2] boxes[:, 3] = boxes[:, 1] + boxes[:, 3] boxes area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0]) area = torch.as_tensor(area, dtype=torch.float32) area records.shape[0] # there is only one class labels = torch.ones((records.shape[0],), dtype=torch.int64) # suppose all instances are not crowd iscrowd = torch.zeros((records.shape[0],), dtype=torch.int64) target = {} target['boxes'] = boxes target['labels'] = labels # target['masks'] = None target['image_id'] = torch.tensor([0]) target['area'] = area target['iscrowd'] = iscrowd target ```
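The `__getitem__` above stops after the boxes are loaded, and the remaining cells assemble the `target` dict interactively for a single image. A sketch of how those pieces typically come together inside the dataset class is shown below; it reuses this notebook's column layout and imports, and the return convention (image tensor, target dict, image id) is an assumption rather than the author's final version.

```python
# Sketch: a completed __getitem__ for WheatDataset, assembled from the cells above.
def wheat_getitem(self, index: int):
    image_id = self.image_ids[index]
    records = self.ds[self.ds['image_id'] == image_id]

    image = cv2.imread(f'{self.image_dir}/{image_id}.jpg', cv2.IMREAD_COLOR)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.float32)
    image /= 255.0

    boxes = records[['x', 'y', 'w', 'h']].values
    boxes[:, 2] = boxes[:, 0] + boxes[:, 2]   # xmin + w -> xmax
    boxes[:, 3] = boxes[:, 1] + boxes[:, 3]   # ymin + h -> ymax
    boxes = torch.as_tensor(boxes, dtype=torch.float32)

    target = {
        'boxes': boxes,
        'labels': torch.ones((records.shape[0],), dtype=torch.int64),   # single wheat class
        'image_id': torch.tensor([index]),
        'area': (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0]),
        'iscrowd': torch.zeros((records.shape[0],), dtype=torch.int64),
    }

    image = torch.as_tensor(image).permute(2, 0, 1)  # HWC -> CHW for torchvision
    return image, target, image_id
```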
# Predicting the amount of win shares that a player could potentially bring to a team * Previously evaluated NBA generational data * Curious as to what individual players bring to a team * Built a library for interacting and visualizing my dataset within jupyter notebook ## But what is a win share? A win share is a individual player statistic that attempts to divvy up credit for team success to the indifviduals on the team There are different ways to measure win shares, but our stats have been normalized to never go above the maximum amount of games that can be possibly played (Either negative or positive win shares) ## The data we're using The dataset we're working with contains over 50+ years of nba data, however it is nowhere near perfect * Tons of NaN values for earlier and current generations (not so much for current) * The NBA has drastically changed it's ruleset over the course of it's extistence * There are lots of outliers (Average career length is 4 years). Superstars play for upwards of 20. ## Working with the code Accessing our custom helper functions ``` import sys sys.path.append('.') # Custom regression module located within the same directory as the notebook from regression import obtain_linear_reg ``` Just like that, we have our access to our custom helper functions and can start working obtaining data from our linear regression model that we need. Let's get to work ``` # Grab a standard linear regression with no preprocessing/scaling done linear_reg = obtain_linear_reg() ``` As you can see, we now have access directly to a scored linear regression model that has 23 features, a score of .92, and a mean squared error of .96 ``` # We load our player statistics that were used to calculate the model linear_reg.stats ``` ## How distributed are our win shares? As mentioned previously, there are lots of outliers within the nba but the amount does not compare to the overall amount of nba players. So what does our distribution of win shares across players look like? ``` import seaborn as sns sns.distplot(linear_reg.ws) ``` As you can tell, we have lot's of players with 0-5 winshares, but not too many from 5-20. This will later be problematic for our model for reasons to be explained ## Are our stats correlated? How well did we pick our stats? Well, let's find out! While there were more stats than listed within our dataframe, I decided to remove all *special* statistics, meaning stats that weren't generated by players themselves but abstract statistics built on the more primitive ones like (Games, Games played, Minutes played, shots taken, FG%, etc) ``` corr = linear_reg.stats.corr() sns.heatmap(corr) ``` ## What does the average player look like? The averages for this specific dataframe reflect players from 2010 to 2017, also known as the modern NBA era. I chose this timeframe specifically because the style of play is constantly changing and what could've attributed more win shares to a player in the 90s might not be relevant to today. ``` output_list = [] # get the mean and max value for column in linear_reg.stats.columns: mean = linear_reg.stats[column].mean() max_val = linear_reg.stats[column].max() output_list.append((column, mean, max_val)) # Output the results for data in output_list: print(f"{data[0]}\t|\tMean: {data[1]:.2f}\t|\tMax: {data[2]}") ``` ## So... How well does my linear regression perform? 
Let's use some testing metrics provided by sklearn to test how well this current model performs ### R2 score (measuring explained variance vs total variance) ``` from sklearn.metrics import mean_squared_error, r2_score prediction = linear_reg.regression.predict(linear_reg.features.testing) score = r2_score(prediction, linear_reg.target.testing) score ``` ### Mean squared error (measuring the mean squared error of our prediction and actual points) ``` mean_squared_error(prediction, linear_reg.target.testing) ``` ### Well, our model does pretty well... I think? ``` testing = linear_reg.target.testing.values print("Actual\t\t-\t Prediction") for i in range(len(prediction[:20])): print(f"{prediction[i]}\t|\t{testing[i]}") print() ``` ## What does our regression look like? For that, we're going to need to reduce the dimensionality of our dataframe, as wee keep track of 23 different features and we would not be able to visualize that. Let's... * Obtain the best model * Test the model * Plot the model ``` # Obtain a model type 4 (plain pca) linear regression pca_reg = obtain_linear_reg(model_type=4, pca_dimensions=1) prediction = pca_reg.regression.predict(pca_reg.features.testing) score = r2_score(prediction, pca_reg.target.testing) score mean_squared_error(prediction, pca_reg.target.testing) testing = pca_reg.target.testing.values print("Actual\t\t-\t Prediction") for i in range(len(prediction[:20])): print(f"{prediction[i]}\t|\t{testing[i]}") print() ``` As you can see, pca doesn't perform that well on this, but we will come back to pcas performance later when we discuss improving our overall models performance ### Visualizing ``` # Plot our 1 dimension reduced pca model vs our sns.regplot(x=pca_reg.stats['pca-1'], y=pca_reg.ws['WS']) ``` ## Can we improve scores? Of course we can. While applying pca is an effective way of reducing the dimensionality of our data and get rid of as much correlated information as possible, you can see that as we start to increase the total number of win shares, the data becomes less linear and a little bit more scattered, with the outliers not being represented that well and in a way, underestimated due to the weight of all the average player based samples. ``` # Obtain a model type 5 (standard scaled pca) and model type 6 (MinMax scaled pca) linear regressions standard_pca_reg = obtain_linear_reg(model_type=5, pca_dimensions=1) mm_pca_reg = obtain_linear_reg(model_type=6, pca_dimensions=1) ``` ### Testing standard scaled model ``` ## Evaluation of standard model prediction = standard_pca_reg.regression.predict(standard_pca_reg.features.testing) score = r2_score(prediction, standard_pca_reg.target.testing) score mean_squared_error(prediction, standard_pca_reg.target.testing) sns.regplot(x=standard_pca_reg.stats['pca-1'], y=standard_pca_reg.ws['WS']) ``` ### Testing our MinMax scaled model ``` ## Evaluation of standard model prediction = mm_pca_reg.regression.predict(mm_pca_reg.features.testing) score = r2_score(prediction, mm_pca_reg.target.testing) score mean_squared_error(prediction, mm_pca_reg.target.testing) sns.regplot(x=mm_pca_reg.stats['pca-1'], y=pca_reg.ws['WS']) ``` ### Information preservation At the cost of reducing the dimensionality of our data, we lose information. Luckily, our built in function allows us to specify the minimum amount of information preservation we want but will have to almost certaintly use more than one dimension. ``` # Find us a dimension that preserves the amount of information we're looking for. 
thresholds = [.95, .96, .97, .98, .99] results = [] for threshold in thresholds: model = obtain_linear_reg(model_type=4, pca_dimensions=0, pca_threshold=threshold) print(f"To preserve {threshold * 100:.2f}% information, we need: {len(model.stats.columns)} dimensions") ``` ### Standard scaling ``` # Find us a dimension that preserves the amount of information we're looking for. thresholds = [.95, .96, .97, .98, .99] results = [] for threshold in thresholds: model = obtain_linear_reg(model_type=5, pca_dimensions=0, pca_threshold=threshold) print(f"To preserve {threshold * 100:.2f}% information, we need: {len(model.stats.columns)} dimensions") ``` ### MinMax scaling ``` # Find us a dimension that preserves the amount of information we're looking for. thresholds = [.95, .96, .97, .98, .99] results = [] for threshold in thresholds: model = obtain_linear_reg(model_type=6, pca_dimensions=0, pca_threshold=threshold) print(f"To preserve {threshold * 100:.2f}% information, we need: {len(model.stats.columns)} dimensions") ``` ### How well does a two dimensional PCA model perform? ``` pca_model = obtain_linear_reg(model_type=4, pca_dimensions=2) ## Evaluation of standard model prediction = pca_model.regression.predict(pca_model.features.testing) score = r2_score(prediction, pca_model.target.testing) score mean_squared_error(prediction, pca_model.target.testing) ``` ### Hmmmm, still not good. How about with standard scaling? ``` std_pca_model = obtain_linear_reg(model_type=5, pca_dimensions=2) ## Evaluation of standard model prediction = std_pca_model.regression.predict(std_pca_model.features.testing) score = r2_score(prediction, std_pca_model.target.testing) score mean_squared_error(prediction, std_pca_model.target.testing) ``` ### It doesn't seem to be helping Looking back on the information preservation section, it seems like we cannot reduce to 2-3 dimensions if we're going to try and preserve information. Let's evaluate each model with a minimum of 95% information preservation ### Standard scaled PCA with 95% information preservation ``` std_pca_model = obtain_linear_reg(model_type=5, pca_dimensions=9) ## Evaluation of standard model prediction = std_pca_model.regression.predict(std_pca_model.features.testing) score = r2_score(prediction, std_pca_model.target.testing) score mean_squared_error(prediction, std_pca_model.target.testing) ``` ### MinMax scaled PCA with 95% information preservation ``` mm_pca_model = obtain_linear_reg(model_type=6, pca_dimensions=8) ## Evaluation of standard model prediction = mm_pca_model.regression.predict(mm_pca_model.features.testing) score = r2_score(prediction, mm_pca_model.target.testing) score mean_squared_error(prediction, mm_pca_model.target.testing) ``` # Overall conclusions, comments, and future additions * Building a library around my dataset was really fun * Extending the library to do a lot more with the dataset + be more flexible * Plan to utilize other kernels to better assist representation for higher winshare count players * Targeting other special statistics like value over replacement, OWS, DWS, etc ## Thank you!
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/10.T5_Workshop_with_Spark_NLP.ipynb) # **10. T5 Workshop with Spark NLP** --- # Overview of every task available with T5 [The T5 model](https://arxiv.org/pdf/1910.10683.pdf) is trained on various datasets for 17 different tasks which fall into 8 categories. 1. Text Summarization 2. Question Answering 3. Translation 4. Sentiment analysis 5. Natural Language Inference 6. Coreference Resolution 7. Sentence Completion 8. Word Sense Disambiguation # Every T5 Task with explanation: |Task Name | Explanation | |----------|--------------| |[1.CoLA](https://nyu-mll.github.io/CoLA/) | Classify if a sentence is gramaticaly correct| |[2.RTE](https://dl.acm.org/doi/10.1007/11736790_9) | Classify whether if a statement can be deducted from a sentence| |[3.MNLI](https://arxiv.org/abs/1704.05426) | Classify for a hypothesis and premise whether they contradict or contradict each other or neither of both (3 class).| |[4.MRPC](https://www.aclweb.org/anthology/I05-5002.pdf) | Classify whether a pair of sentences is a re-phrasing of each other (semantically equivalent)| |[5.QNLI](https://arxiv.org/pdf/1804.07461.pdf) | Classify whether the answer to a question can be deducted from an answer candidate.| |[6.QQP](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | Classify whether a pair of questions is a re-phrasing of each other (semantically equivalent)| |[7.SST2](https://www.aclweb.org/anthology/D13-1170.pdf) | Classify the sentiment of a sentence as positive or negative| |[8.STSB](https://www.aclweb.org/anthology/S17-2001/) | Classify the sentiment of a sentence on a scale from 1 to 5 (21 Sentiment classes)| |[9.CB](https://ojs.ub.uni-konstanz.de/sub/index.php/sub/article/view/601) | Classify for a premise and a hypothesis whether they contradict each other or not (binary).| |[10.COPA](https://www.aaai.org/ocs/index.php/SSS/SSS11/paper/view/2418/0) | Classify for a question, premise, and 2 choices which choice the correct choice is (binary).| |[11.MultiRc](https://www.aclweb.org/anthology/N18-1023.pdf) | Classify for a question, a paragraph of text, and an answer candidate, if the answer is correct (binary),| |[12.WiC](https://arxiv.org/abs/1808.09121) | Classify for a pair of sentences and a disambigous word if the word has the same meaning in both sentences.| |[13.WSC/DPR](https://www.aaai.org/ocs/index.php/KR/KR12/paper/view/4492/0) | Predict for an ambiguous pronoun in a sentence what it is referring to. | |[14.Summarization](https://arxiv.org/abs/1506.03340) | Summarize text into a shorter representation.| |[15.SQuAD](https://arxiv.org/abs/1606.05250) | Answer a question for a given context.| |[16.WMT1.](https://arxiv.org/abs/1706.03762) | Translate English to German| |[17.WMT2.](https://arxiv.org/abs/1706.03762) | Translate English to French| |[18.WMT3.](https://arxiv.org/abs/1706.03762) | Translate English to Romanian| # Information about pre-procession for T5 tasks ## Tasks that require no pre-processing The following tasks work fine without any additional pre-processing, only setting the `task parameter` on the T5 model is required: - CoLA - Summarization - SST2 - WMT1. - WMT2. - WMT3. 
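For the tasks listed above, a Spark NLP pipeline only needs the task prefix configured on the T5 annotator. The sketch below assumes the `t5_base` pretrained model and uses summarization as the task; swap the prefix (e.g. `cola sentence:` or `sst2 sentence:`) for the other tasks in this group.

```python
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import T5Transformer
from pyspark.ml import Pipeline

spark = sparknlp.start()

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("documents")

# Only the task prefix is needed for CoLA, summarization, SST2 and the WMT tasks.
t5 = T5Transformer.pretrained("t5_base") \
    .setTask("summarize:") \
    .setInputCols(["documents"]) \
    .setOutputCol("t5_output")

pipeline = Pipeline().setStages([document_assembler, t5])

data = spark.createDataFrame(
    [["the belgian duo took to the dance floor on monday night with some friends ."]]
).toDF("text")

result = pipeline.fit(data).transform(data)
result.select("t5_output.result").show(truncate=False)
```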
## Tasks that require pre-processing with 1 tag The following tasks require `exactly 1 additional tag` added by manual pre-processing. Set the `task parameter` and then join the sentences on the `tag` for these tasks. - RTE - MNLI - MRPC - QNLI - QQP - SST2 - STSB - CB ## Tasks that require pre-processing with multiple tags The following tasks require `more than 1 additional tag` added manual by pre-processing. Set the `task parameter` and then prefix sentences with their corresponding tags and join them for these tasks: - COPA - MultiRc - WiC ## WSC/DPR is a special case that requires `*` surrounding The task WSC/DPR requires highlighting a pronoun with `*` and configuring a `task parameter`. <br><br><br><br><br> The following sections describe each task in detail, with an example and also a pre-processed example. ***NOTE:*** Linebreaks are added to the `pre-processed examples` in the following section. The T5 model also works with linebreaks, but it can hinder the performance and it is not recommended to intentionally add them. # Task 1 [CoLA - Binary Grammatical Sentence acceptability classification](https://nyu-mll.github.io/CoLA/) Judges if a sentence is grammatically acceptable. This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf). ## Example |sentence | prediction| |------------|------------| | Anna and Mike is going skiing and they is liked is | unacceptable | | Anna and Mike like to dance | acceptable | ## How to configure T5 task for CoLA `.setTask(cola sentence:)` prefix. ### Example pre-processed input for T5 CoLA sentence acceptability judgement: ``` cola sentence: Anna and Mike is going skiing and they is liked is ``` # Task 2 [RTE - Natural language inference Deduction Classification](https://dl.acm.org/doi/10.1007/11736790_9) The RTE task is defined as recognizing, given two text fragments, whether the meaning of one text can be inferred (entailed) from the other or not. Classification of sentence pairs as entailed and not_entailed This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf) and [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf). ## Example |sentence 1 | sentence 2 | prediction| |------------|------------|----------| Kessler ’s team conducted 60,643 interviews with adults in 14 countries. | Kessler ’s team interviewed more than 60,000 adults in 14 countries | entailed Peter loves New York, it is his favorite city| Peter loves new York. | entailed Recent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years. |Johnny is a millionare | entailment| Recent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years. |Johnny is a poor man | not_entailment | | It was raining in England for the last 4 weeks | England was very dry yesterday | not_entailment| ## How to configure T5 task for RTE `.setTask('rte sentence1:)` and prefix second sentence with `sentence2:` ### Example pre-processed input for T5 RTE - 2 Class Natural language inference ``` rte sentence1: Recent report say Peter makes he alot of money, he earned 10 million USD each year for the last 5 years. sentence2: Peter is a millionare. ``` ### References - https://arxiv.org/abs/2010.03061 # Task 3 [MNLI - 3 Class Natural Language Inference 3-class contradiction classification](https://arxiv.org/abs/1704.05426) Classification of sentence pairs with the labels `entailment`, `contradiction`, and `neutral`. This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf). 
This classifier predicts for two sentences : - Whether the first sentence logically and semantically follows from the second sentence as entailment - Whether the first sentence is a contradiction to the second sentence as a contradiction - Whether the first sentence does not entail or contradict the first sentence as neutral | Hypothesis | Premise | prediction| |------------|------------|----------| | Recent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years. | Johnny is a poor man. | contradiction| |It rained in England the last 4 weeks.| It was snowing in New York last week| neutral | ## How to configure T5 task for MNLI `.setTask('mnli hypothesis:)` and prefix second sentence with `premise:` ### Example pre-processed input for T5 MNLI - 3 Class Natural Language Inference ``` mnli hypothesis: At 8:34, the Boston Center controller received a third, transmission from American 11. premise: The Boston Center controller got a third transmission from American 11. ``` # Task 4 [MRPC - Binary Paraphrasing/ sentence similarity classification ](https://www.aclweb.org/anthology/I05-5002.pdf) Detect whether one sentence is a re-phrasing or similar to another sentence This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf). | Sentence1 | Sentence2 | prediction| |------------|------------|----------| |We acted because we saw the existing evidence in a new light , through the prism of our experience on 11 September , " Rumsfeld said .| Rather , the US acted because the administration saw " existing evidence in a new light , through the prism of our experience on September 11 " . | equivalent | | I like to eat peanutbutter for breakfast| I like to play football | not_equivalent | ## How to configure T5 task for MRPC `.setTask('mrpc sentence1:)` and prefix second sentence with `sentence2:` ### Example pre-processed input for T5 MRPC - Binary Paraphrasing/ sentence similarity ``` mrpc sentence1: We acted because we saw the existing evidence in a new light , through the prism of our experience on 11 September , " Rumsfeld said . sentence2: Rather , the US acted because the administration saw " existing evidence in a new light , through the prism of our experience on September 11", ``` *ISSUE:* Can only get neutral and contradiction as prediction results for tested samples but no entailment predictions. # Task 5 [QNLI - Natural Language Inference question answered classification](https://arxiv.org/pdf/1804.07461.pdf) Classify whether a question is answered by a sentence (`entailed`). This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf). | Question | Answer | prediction| |------------|------------|----------| |Where did Jebe die?| Ghenkis Khan recalled Subtai back to Mongolia soon afterward, and Jebe died on the road back to Samarkand | entailment| |What does Steve like to eat? | Steve watches TV all day | not_netailment ## How to configure T5 task for QNLI - Natural Language Inference question answered classification `.setTask('QNLI sentence1:)` and prefix question with `question:` sentence with `sentence:`: ### Example pre-processed input for T5 QNLI - Natural Language Inference question answered classification ``` qnli question: Where did Jebe die? 
sentence: Ghenkis Khan recalled Subtai back to Mongolia soon afterwards, and Jebe died on the road back to Samarkand, ``` # Task 6 [QQP - Binary Question Similarity/Paraphrasing](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) Based on a quora dataset, determine whether a pair of questions are semantically equivalent. This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf). | Question1 | Question2 | prediction| |------------|------------|----------| |What attributes would have made you highly desirable in ancient Rome? | How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER? | not_duplicate | |What was it like in Ancient rome? | What was Ancient rome like?| duplicate | ## How to configure T5 task for QQP .setTask('qqp question1:) and prefix second sentence with question2: ### Example pre-processed input for T5 QQP - Binary Question Similarity/Paraphrasing ``` qqp question1: What attributes would have made you highly desirable in ancient Rome? question2: How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER?', ``` # Task 7 [SST2 - Binary Sentiment Analysis](https://www.aclweb.org/anthology/D13-1170.pdf) Binary sentiment classification. This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf). | Sentence1 | Prediction | |-----------|-----------| |it confirms fincher ’s status as a film maker who artfully bends technical know-how to the service of psychological insight | positive| |I really hated that movie | negative | ## How to configure T5 task for SST2 `.setTask('sst2 sentence: ')` ### Example pre-processed input for T5 SST2 - Binary Sentiment Analysis ``` sst2 sentence: I hated that movie ``` # Task8 [STSB - Regressive semantic sentence similarity](https://www.aclweb.org/anthology/S17-2001/) Measures how similar two sentences are on a scale from 0 to 5 with 21 classes representing a regressive label. This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf). | Question1 | Question2 | prediction| |------------|------------|----------| |What attributes would have made you highly desirable in ancient Rome? | How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER? | 0 | |What was it like in Ancient rome? | What was Ancient rome like?| 5.0 | |What was live like as a King in Ancient Rome?? | What is it like to live in Rome? | 3.2 | ## How to configure T5 task for STSB `.setTask('stsb sentence1:)` and prefix second sentence with `sentence2:` ### Example pre-processed input for T5 STSB - Regressive semantic sentence similarity ``` stsb sentence1: What attributes would have made you highly desirable in ancient Rome? sentence2: How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER?', ``` # Task 9[ CB - Natural language inference contradiction classification](https://ojs.ub.uni-konstanz.de/sub/index.php/sub/article/view/601) Classify whether a Premise contradicts a Hypothesis. Predicts entailment, neutral and contradiction This is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf). | Hypothesis | Premise | Prediction | |--------|-------------|----------| |Valence was helping | Valence the void-brain, Valence the virtuous valet. Why couldn’t the figger choose his own portion of titanic anatomy to shaft? 
Did he think he was helping'| Contradiction| ## How to configure T5 task for CB `.setTask('cb hypothesis:)` and prefix premise with `premise:` ### Example pre-processed input for T5 CB - Natural language inference contradiction classification ``` cb hypothesis: Valence was helping premise: Valence the void-brain, Valence the virtuous valet. Why couldn’t the figger choose his own portion of titanic anatomy to shaft? Did he think he was helping, ``` # Task 10 [COPA - Sentence Completion/ Binary choice selection](https://www.aaai.org/ocs/index.php/SSS/SSS11/paper/view/2418/0) The Choice of Plausible Alternatives (COPA) task by Roemmele et al. (2011) evaluates causal reasoning between events, which requires commonsense knowledge about what usually takes place in the world. Each example provides a premise and either asks for the correct cause or effect from two choices, thus testing either ``backward`` or `forward causal reasoning`. COPA data, which consists of 1,000 examples total, can be downloaded at https://people.ict.usc.e This is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf). This classifier selects from a choice of `2 options` which one the correct is based on a `premise`. ## forward causal reasoning Premise: The man lost his balance on the ladder. question: What happened as a result? Alternative 1: He fell off the ladder. Alternative 2: He climbed up the ladder. ## backwards causal reasoning Premise: The man fell unconscious. What was the cause of this? Alternative 1: The assailant struck the man in the head. Alternative 2: The assailant took the man’s wallet. | Question | Premise | Choice 1 | Choice 2 | Prediction | |--------|-------------|----------|---------|-------------| |effect | Politcal Violence broke out in the nation. | many citizens relocated to the capitol. | Many citizens took refuge in other territories | Choice 1 | |correct| The men fell unconscious | The assailant struckl the man in the head | he assailant s took the man's wallet. | choice1 | ## How to configure T5 task for COPA `.setTask('copa choice1:)`, prefix choice2 with `choice2:` , prefix premise with `premise:` and prefix the question with `question` ### Example pre-processed input for T5 COPA - Sentence Completion/ Binary choice selection ``` copa choice1: He fell off the ladder choice2: He climbed up the lader premise: The man lost his balance on the ladder question: effect ``` # Task 11 [MultiRc - Question Answering](https://www.aclweb.org/anthology/N18-1023.pdf) Evaluates an `answer` for a `question` as `true` or `false` based on an input `paragraph` The T5 model predicts for a `question` and a `paragraph` of `sentences` wether an `answer` is true or not, based on the semantic contents of the paragraph. This is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf). **Exceeds human performance by a large margin** | Question | Answer | Prediction | paragraph| |--------------------------------------------------------------|---------------------------------------------------------------------|------------|----------| | Why was Joey surprised the morning he woke up for breakfast? | There was only pie to eat, rather than traditional breakfast foods | True |Once upon a time, there was a squirrel named Joey. Joey loved to go outside and play with his cousin Jimmy. Joey and Jimmy played silly games together, and were always laughing. One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. 
Joey woke up early in the morning to eat some food before they left. He couldn’t find anything to eat except for pie! Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. After he ate, he and Jimmy went to the pond. On their way there they saw their friend Jack Rabbit. They dove into the water and swam for several hours. The sun was out, but the breeze was cold. Joey and Jimmy got out of the water and started walking home. Their fur was wet, and the breeze chilled them. When they got home, they dried off, and Jimmy put on his favorite purple shirt. Joey put on a blue shirt with red and green dots. The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed., | | Why was Joey surprised the morning he woke up for breakfast? | There was a T-Rex in his garden | False |Once upon a time, there was a squirrel named Joey. Joey loved to go outside and play with his cousin Jimmy. Joey and Jimmy played silly games together, and were always laughing. One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Joey woke up early in the morning to eat some food before they left. He couldn’t find anything to eat except for pie! Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. After he ate, he and Jimmy went to the pond. On their way there they saw their friend Jack Rabbit. They dove into the water and swam for several hours. The sun was out, but the breeze was cold. Joey and Jimmy got out of the water and started walking home. Their fur was wet, and the breeze chilled them. When they got home, they dried off, and Jimmy put on his favorite purple shirt. Joey put on a blue shirt with red and green dots. The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed., | ## How to configure T5 task for MultiRC `.setTask('multirc questions:)` followed by `answer:` prefix for the answer to evaluate, followed by `paragraph:` and then a series of sentences, where each sentence is prefixed with `Sent n:`prefix second sentence with sentence2: ### Example pre-processed input for T5 MultiRc task: ``` multirc questions: Why was Joey surprised the morning he woke up for breakfast? answer: There was a T-REX in his garden. paragraph: Sent 1: Once upon a time, there was a squirrel named Joey. Sent 2: Joey loved to go outside and play with his cousin Jimmy. Sent 3: Joey and Jimmy played silly games together, and were always laughing. Sent 4: One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Sent 5: Joey woke up early in the morning to eat some food before they left. Sent 6: He couldn’t find anything to eat except for pie! Sent 7: Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. Sent 8: After he ate, he and Jimmy went to the pond. Sent 9: On their way there they saw their friend Jack Rabbit. Sent 10: They dove into the water and swam for several hours. Sent 11: The sun was out, but the breeze was cold. Sent 12: Joey and Jimmy got out of the water and started walking home. Sent 13: Their fur was wet, and the breeze chilled them. Sent 14: When they got home, they dried off, and Jimmy put on his favorite purple shirt. Sent 15: Joey put on a blue shirt with red and green dots. Sent 16: The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed. 
```

# Task 12 [WiC - Word sense disambiguation](https://arxiv.org/abs/1808.09121)
Decide for two `sentences` with a shared `ambiguous word` whether the target word has the same `semantic meaning` in both sentences.
This is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf).

|Predicted | ambiguous word| Sentence 1 | Sentence 2 |
|----------|-----------------|------------|------------|
| False | kill | He totally killed that rock show! | The airplane crash killed his family |
| True | window | The expanded window will give us time to catch the thieves.|You have a two-hour window for turning in your homework. |
| False | window | He jumped out of the window.|You have a two-hour window for turning in your homework. |

## How to configure T5 task for WiC
`.setTask('wic pos:')` followed by the `sentence1:` prefix for the first sentence, followed by the `sentence2:` prefix for the second sentence.

### Example pre-processed input for T5 WiC task:
```
wic pos: sentence1: The expanded window will give us time to catch the thieves. sentence2: You have a two-hour window of turning in your homework. word : window
```

# Task 13 [WSC and DPR - Coreference resolution/ Pronoun ambiguity resolver](https://www.aaai.org/ocs/index.php/KR/KR12/paper/view/4492/0)
Predict for an `ambiguous pronoun` which `noun` it refers to.
This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf) and [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf).

|Prediction| Text |
|----------|-------|
| stable | The stable was very roomy, with four good stalls; a large swinging window opened into the yard , which made *it* pleasant and airy. |

## How to configure T5 task for WSC/DPR
`.setTask('wsc:')` and surround the ambiguous pronoun with asterisk symbols.

### Example pre-processed input for T5 WSC/DPR task:
The `ambiguous pronoun` should be surrounded with `*` symbols.

***Note*** Read [Appendix A.](https://arxiv.org/pdf/1910.10683.pdf#page=64&zoom=100,84,360) for more info
```
wsc: The stable was very roomy, with four good stalls; a large swinging window opened into the yard , which made *it* pleasant and airy.
```

# Task 14 [Text summarization](https://arxiv.org/abs/1506.03340)
`Summarizes` a paragraph into a shorter version with the same semantic meaning.

| Predicted summary| Text |
|------------------|-------|
| manchester united face newcastle in the premier league on wednesday . louis van gaal's side currently sit two points clear of liverpool in fourth . the belgian duo took to the dance floor on monday night with some friends . | the belgian duo took to the dance floor on monday night with some friends . manchester united face newcastle in the premier league on wednesday . red devils will be looking for just their second league away win in seven . louis van gaal’s side currently sit two points clear of liverpool in fourth . |

## How to configure T5 task for summarization
`.setTask('summarize:')`

### Example pre-processed input for T5 summarization task:
This task requires no pre-processing; setting the task to `summarize` is sufficient.
```
the belgian duo took to the dance floor on monday night with some friends . manchester united face newcastle in the premier league on wednesday . red devils will be looking for just their second league away win in seven . louis van gaal’s side currently sit two points clear of liverpool in fourth .
``` # Task 15 [SQuAD - Context based question answering](https://arxiv.org/abs/1606.05250) Predict an `answer` to a `question` based on input `context`. |Predicted Answer | Question | Context | |-----------------|----------|------| |carbon monoxide| What does increased oxygen concentrations in the patient’s lungs displace? | Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O 2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the ’bends’) are sometimes treated using these devices. Increased O 2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. Increasing the pressure of O 2 as soon as possible is part of the treatment. |pie| What did Joey eat for breakfast?| Once upon a time, there was a squirrel named Joey. Joey loved to go outside and play with his cousin Jimmy. Joey and Jimmy played silly games together, and were always laughing. One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Joey woke up early in the morning to eat some food before they left. Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. After he ate, he and Jimmy went to the pond. On their way there they saw their friend Jack Rabbit. They dove into the water and swam for several hours. The sun was out, but the breeze was cold. Joey and Jimmy got out of the water and started walking home. Their fur was wet, and the breeze chilled them. When they got home, they dried off, and Jimmy put on his favorite purple shirt. Joey put on a blue shirt with red and green dots. The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed,'| ## How to configure T5 task parameter for Squad Context based question answering `.setTask('question:)` and prefix the context which can be made up of multiple sentences with `context:` ## Example pre-processed input for T5 Squad Context based question answering: ``` question: What does increased oxygen concentrations in the patient’s lungs displace? context: Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O 2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the ’bends’) are sometimes treated using these devices. Increased O 2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. Increasing the pressure of O 2 as soon as possible is part of the treatment. 
``` # Task 16 [WMT1 Translate English to German](https://arxiv.org/abs/1706.03762) For translation tasks use the `marian` model ## How to configure T5 task parameter for WMT Translate English to German `.setTask('translate English to German:)` # Task 17 [WMT2 Translate English to French](https://arxiv.org/abs/1706.03762) For translation tasks use the `marian` model ## How to configure T5 task parameter for WMT Translate English to French `.setTask('translate English to French:)` # 18 [WMT3 - Translate English to Romanian](https://arxiv.org/abs/1706.03762) For translation tasks use the `marian` model ## How to configure T5 task parameter for English to Romanian `.setTask('translate English to Romanian:)` # Spark-NLP Example for every Task: # Install Spark NLP ``` # Install Spark NLP import os ! apt-get update -qq > /dev/null # Install java ! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"] ! pip install sparknlp pyspark==2.4.7 > /dev/null import os # Install java ! apt-get update -qq ! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"] ! java -version # Install pyspark ! pip install --ignore-installed -q pyspark==2.4.4 ! pip install --ignore-installed -q spark-nlp==2.7.1 import sparknlp spark = sparknlp.start() print("Spark NLP version", sparknlp.version()) print("Apache Spark version:", spark.version) ``` ## Define Document assembler and T5 model for running the tasks ``` import pandas as pd pd.set_option('display.width', 100000) pd.set_option('max_colwidth', 8000) pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) from sparknlp.annotator import * import sparknlp from sparknlp.common import * from sparknlp.base import * from pyspark.ml import Pipeline documentAssembler = DocumentAssembler() \ .setInputCol("text") \ .setOutputCol("document") # Can take in document or sentence columns t5 = T5Transformer.pretrained(name='t5_base',lang='en')\ .setInputCols('document')\ .setOutputCol("T5") ``` # Task 1 [CoLA - Binary Grammatical Sentence acceptability classification](https://nyu-mll.github.io/CoLA/) Judges if a sentence is grammatically acceptable. This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf). ## Example |sentence | prediction| |------------|------------| | Anna and Mike is going skiing and they is liked is | unacceptable | | Anna and Mike like to dance | acceptable | ## How to configure T5 task for CoLA `.setTask(cola sentence:)` prefix. 
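In general, the string passed to `.setTask(...)` is simply prepended to each input document before it reaches T5, so for every task in this notebook the model effectively sees `<task prefix> <your text>`. The following plain-Python sketch is shown purely for illustration (the `build_t5_input` helper is hypothetical and not part of the Spark NLP API; the annotator performs this concatenation for you):

```python
# Illustration only: shows the string T5 effectively receives once a task
# prefix has been configured via setTask. The helper name is hypothetical
# and not part of the Spark NLP API.
def build_t5_input(task_prefix: str, text: str) -> str:
    """Prepend a T5 task prefix to raw text, e.g. 'cola sentence: <text>'."""
    return f"{task_prefix.strip()} {text.strip()}"

print(build_t5_input("cola sentence:", "Anna and Mike is going skiing and they is liked is"))
# cola sentence: Anna and Mike is going skiing and they is liked is
```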
### Example pre-processed input for T5 CoLA sentence acceptability judgement: ``` cola sentence: Anna and Mike is going skiing and they is liked is ``` ``` # Set the task on T5 t5.setTask('cola sentence:') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data sentences = [['Anna and Mike is going skiing and they is liked is'],['Anna and Mike like to dance']] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) ``` # Task 2 [RTE - Natural language inference Deduction Classification](https://dl.acm.org/doi/10.1007/11736790_9) The RTE task is defined as recognizing, given two text fragments, whether the meaning of one text can be inferred (entailed) from the other or not. Classification of sentence pairs as entailment and not_entailment This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf) and [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf). ## Example |sentence 1 | sentence 2 | prediction| |------------|------------|----------| Kessler ’s team conducted 60,643 interviews with adults in 14 countries. | Kessler ’s team interviewed more than 60,000 adults in 14 countries | entailment Peter loves New York, it is his favorite city| Peter loves new York. | entailment Recent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years. |Johnny is a millionare | entailment| Recent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years. |Johnny is a poor man | not_entailment | | It was raining in England for the last 4 weeks | England was very dry yesterday | not_entailment| ## How to configure T5 task for RTE `.setTask('rte sentence1:)` and prefix second sentence with `sentence2:` ### Example pre-processed input for T5 RTE - 2 Class Natural language inference ``` rte sentence1: Recent report say Peter makes he alot of money, he earned 10 million USD each year for the last 5 years. sentence2: Peter is a millionare. ``` ### References - https://arxiv.org/abs/2010.03061 ``` # Set the task on T5 t5.setTask('rte sentence1:') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ ['Recent report say Peter makes he alot of money, he earned 10 million USD each year for the last 5 years. sentence2: Peter is a millionare'], ['Recent report say Peter makes he alot of money, he earned 10 million USD each year for the last 5 years. sentence2: Peter is a poor man'] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) ``` # Task 3 [MNLI - 3 Class Natural Language Inference 3-class contradiction classification](https://arxiv.org/abs/1704.05426) Classification of sentence pairs with the labels `entailment`, `contradiction`, and `neutral`. This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf). 
This classifier predicts for two sentences : - Whether the first sentence logically and semantically follows from the second sentence as entailment - Whether the first sentence is a contradiction to the second sentence as a contradiction - Whether the first sentence does not entail or contradict the first sentence as neutral | Hypothesis | Premise | prediction| |------------|------------|----------| | Recent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years. | Johnny is a poor man. | contradiction| |It rained in England the last 4 weeks.| It was snowing in New York last week| neutral | ## How to configure T5 task for MNLI `.setTask('mnli hypothesis:)` and prefix second sentence with `premise:` ### Example pre-processed input for T5 MNLI - 3 Class Natural Language Inference ``` mnli hypothesis: At 8:34, the Boston Center controller received a third, transmission from American 11. premise: The Boston Center controller got a third transmission from American 11. ``` ``` # Set the task on T5 t5.setTask('mnli ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' hypothesis: At 8:34, the Boston Center controller received a third, transmission from American 11. premise: The Boston Center controller got a third transmission from American 11. ''' ], [''' hypothesis: Recent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years. premise: Johnny is a poor man. '''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show()#.toPandas().head(5) <-- for better vis of result data frame ``` # Task 4 [MRPC - Binary Paraphrasing/ sentence similarity classification ](https://www.aclweb.org/anthology/I05-5002.pdf) Detect whether one sentence is a re-phrasing or similar to another sentence This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf). | Sentence1 | Sentence2 | prediction| |------------|------------|----------| |We acted because we saw the existing evidence in a new light , through the prism of our experience on 11 September , " Rumsfeld said .| Rather , the US acted because the administration saw " existing evidence in a new light , through the prism of our experience on September 11 " . | equivalent | | I like to eat peanutbutter for breakfast| I like to play football | not_equivalent | ## How to configure T5 task for MRPC `.setTask('mrpc sentence1:)` and prefix second sentence with `sentence2:` ### Example pre-processed input for T5 MRPC - Binary Paraphrasing/ sentence similarity ``` mrpc sentence1: We acted because we saw the existing evidence in a new light , through the prism of our experience on 11 September , " Rumsfeld said . sentence2: Rather , the US acted because the administration saw " existing evidence in a new light , through the prism of our experience on September 11", ``` ``` # Set the task on T5 t5.setTask('mrpc ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' sentence1: We acted because we saw the existing evidence in a new light , through the prism of our experience on 11 September , " Rumsfeld said . 
sentence2: Rather , the US acted because the administration saw " existing evidence in a new light , through the prism of our experience on September 11 " ''' ], [''' sentence1: I like to eat peanutbutter for breakfast sentence2: I like to play football. '''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).toPandas()#show() ``` # Task 5 [QNLI - Natural Language Inference question answered classification](https://arxiv.org/pdf/1804.07461.pdf) Classify whether a question is answered by a sentence (`entailed`). This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf). | Question | Answer | prediction| |------------|------------|----------| |Where did Jebe die?| Ghenkis Khan recalled Subtai back to Mongolia soon afterward, and Jebe died on the road back to Samarkand | entailment| |What does Steve like to eat? | Steve watches TV all day | not_netailment ## How to configure T5 task for QNLI - Natural Language Inference question answered classification `.setTask('QNLI sentence1:)` and prefix question with `question:` sentence with `sentence:`: ### Example pre-processed input for T5 QNLI - Natural Language Inference question answered classification ``` qnli question: Where did Jebe die? sentence: Ghenkis Khan recalled Subtai back to Mongolia soon afterwards, and Jebe died on the road back to Samarkand, ``` ``` # Set the task on T5 t5.setTask('QNLI ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' question: Where did Jebe die? sentence: Ghenkis Khan recalled Subtai back to Mongolia soon afterwards, and Jebe died on the road back to Samarkand, ''' ], [''' question: What does Steve like to eat? sentence: Steve watches TV all day '''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).toPandas()#.show() ``` # Task 6 [QQP - Binary Question Similarity/Paraphrasing](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) Based on a quora dataset, determine whether a pair of questions are semantically equivalent. This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf). | Question1 | Question2 | prediction| |------------|------------|----------| |What attributes would have made you highly desirable in ancient Rome? | How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER? | not_duplicate | |What was it like in Ancient rome? | What was Ancient rome like?| duplicate | ## How to configure T5 task for QQP .setTask('qqp question1:) and prefix second sentence with question2: ### Example pre-processed input for T5 QQP - Binary Question Similarity/Paraphrasing ``` qqp question1: What attributes would have made you highly desirable in ancient Rome? question2: How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER?', ``` ``` # Set the task on T5 t5.setTask('qqp ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' question1: What attributes would have made you highly desirable in ancient Rome? question2: How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER?' ''' ], [''' question1: What was it like in Ancient rome? 
question2: What was Ancient rome like? '''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).toPandas()#.show() ``` # Task 7 [SST2 - Binary Sentiment Analysis](https://www.aclweb.org/anthology/D13-1170.pdf) Binary sentiment classification. This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf). | Sentence1 | Prediction | |-----------|-----------| |it confirms fincher ’s status as a film maker who artfully bends technical know-how to the service of psychological insight | positive| |I really hated that movie | negative | ## How to configure T5 task for SST2 `.setTask('sst2 sentence: ')` ### Example pre-processed input for T5 SST2 - Binary Sentiment Analysis ``` sst2 sentence: I hated that movie ``` ``` # Set the task on T5 t5.setTask('sst2 sentence: ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' I really hated that movie'''], [''' it confirms fincher ’s status as a film maker who artfully bends technical know-how to the service of psychological insight''']] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).toPandas()#show() ``` # Task8 [STSB - Regressive semantic sentence similarity](https://www.aclweb.org/anthology/S17-2001/) Measures how similar two sentences are on a scale from 0 to 5 with 21 classes representing a regressive label. This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf). | Question1 | Question2 | prediction| |------------|------------|----------| |What attributes would have made you highly desirable in ancient Rome? | How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER? | 0 | |What was it like in Ancient rome? | What was Ancient rome like?| 5.0 | |What was live like as a King in Ancient Rome?? | What is it like to live in Rome? | 3.2 | ## How to configure T5 task for STSB `.setTask('stsb sentence1:)` and prefix second sentence with `sentence2:` ### Example pre-processed input for T5 STSB - Regressive semantic sentence similarity ``` stsb sentence1: What attributes would have made you highly desirable in ancient Rome? sentence2: How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER?', ``` ``` # Set the task on T5 t5.setTask('stsb ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' sentence1: What attributes would have made you highly desirable in ancient Rome? sentence2: How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER?' ''' ], [''' sentence1: What was it like in Ancient rome? sentence2: What was Ancient rome like? '''], [''' sentence1: What was live like as a King in Ancient Rome?? sentence2: What was Ancient rome like? '''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).toPandas()#show(truncate=False) ``` # Task 9[ CB - Natural language inference contradiction classification](https://ojs.ub.uni-konstanz.de/sub/index.php/sub/article/view/601) Classify whether a Premise contradicts a Hypothesis. 
Predicts entailment, neutral and contradiction This is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf). | Hypothesis | Premise | Prediction | |--------|-------------|----------| |Valence was helping | Valence the void-brain, Valence the virtuous valet. Why couldn’t the figger choose his own portion of titanic anatomy to shaft? Did he think he was helping'| Contradiction| ## How to configure T5 task for CB `.setTask('cb hypothesis:)` and prefix premise with `premise:` ### Example pre-processed input for T5 CB - Natural language inference contradiction classification ``` cb hypothesis: Valence was helping premise: Valence the void-brain, Valence the virtuous valet. Why couldn’t the figger choose his own portion of titanic anatomy to shaft? Did he think he was helping, ``` ``` # Set the task on T5 t5.setTask('cb ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' hypothesis: Recent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years. premise: Johnny is a poor man. '''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).toPandas()#show(truncate=False) ``` # Task 10 [COPA - Sentence Completion/ Binary choice selection](https://www.aaai.org/ocs/index.php/SSS/SSS11/paper/view/2418/0) The Choice of Plausible Alternatives (COPA) task by Roemmele et al. (2011) evaluates causal reasoning between events, which requires commonsense knowledge about what usually takes place in the world. Each example provides a premise and either asks for the correct cause or effect from two choices, thus testing either ``backward`` or `forward causal reasoning`. COPA data, which consists of 1,000 examples total, can be downloaded at https://people.ict.usc.e This is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf). This classifier selects from a choice of `2 options` which one the correct is based on a `premise`. ## forward causal reasoning Premise: The man lost his balance on the ladder. question: What happened as a result? Alternative 1: He fell off the ladder. Alternative 2: He climbed up the ladder. ## backwards causal reasoning Premise: The man fell unconscious. What was the cause of this? Alternative 1: The assailant struck the man in the head. Alternative 2: The assailant took the man’s wallet. | Question | Premise | Choice 1 | Choice 2 | Prediction | |--------|-------------|----------|---------|-------------| |effect | Politcal Violence broke out in the nation. | many citizens relocated to the capitol. | Many citizens took refuge in other territories | Choice 1 | |correct| The men fell unconscious | The assailant struckl the man in the head | he assailant s took the man's wallet. 
| choice1 | ## How to configure T5 task for COPA `.setTask('copa choice1:)`, prefix choice2 with `choice2:` , prefix premise with `premise:` and prefix the question with `question` ### Example pre-processed input for T5 COPA - Sentence Completion/ Binary choice selection ``` copa choice1: He fell off the ladder choice2: He climbed up the lader premise: The man lost his balance on the ladder question: effect ``` ``` # Set the task on T5 t5.setTask('copa ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' choice1: He fell off the ladder choice2: He climbed up the lader premise: The man lost his balance on the ladder question: effect '''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).toPandas()#show(truncate=False) ``` # Task 11 [MultiRc - Question Answering](https://www.aclweb.org/anthology/N18-1023.pdf) Evaluates an `answer` for a `question` as `true` or `false` based on an input `paragraph` The T5 model predicts for a `question` and a `paragraph` of `sentences` wether an `answer` is true or not, based on the semantic contents of the paragraph. This is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf). **Exceeds human performance by a large margin** | Question | Answer | Prediction | paragraph| |--------------------------------------------------------------|---------------------------------------------------------------------|------------|----------| | Why was Joey surprised the morning he woke up for breakfast? | There was only pie to eat, rather than traditional breakfast foods | True |Once upon a time, there was a squirrel named Joey. Joey loved to go outside and play with his cousin Jimmy. Joey and Jimmy played silly games together, and were always laughing. One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Joey woke up early in the morning to eat some food before they left. He couldn’t find anything to eat except for pie! Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. After he ate, he and Jimmy went to the pond. On their way there they saw their friend Jack Rabbit. They dove into the water and swam for several hours. The sun was out, but the breeze was cold. Joey and Jimmy got out of the water and started walking home. Their fur was wet, and the breeze chilled them. When they got home, they dried off, and Jimmy put on his favorite purple shirt. Joey put on a blue shirt with red and green dots. The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed., | | Why was Joey surprised the morning he woke up for breakfast? | There was a T-Rex in his garden | False |Once upon a time, there was a squirrel named Joey. Joey loved to go outside and play with his cousin Jimmy. Joey and Jimmy played silly games together, and were always laughing. One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Joey woke up early in the morning to eat some food before they left. He couldn’t find anything to eat except for pie! Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. After he ate, he and Jimmy went to the pond. On their way there they saw their friend Jack Rabbit. They dove into the water and swam for several hours. The sun was out, but the breeze was cold. 
Joey and Jimmy got out of the water and started walking home. Their fur was wet, and the breeze chilled them. When they got home, they dried off, and Jimmy put on his favorite purple shirt. Joey put on a blue shirt with red and green dots. The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed., | ## How to configure T5 task for MultiRC `.setTask('multirc questions:)` followed by `answer:` prefix for the answer to evaluate, followed by `paragraph:` and then a series of sentences, where each sentence is prefixed with `Sent n:`prefix second sentence with sentence2: ### Example pre-processed input for T5 MultiRc task: ``` multirc questions: Why was Joey surprised the morning he woke up for breakfast? answer: There was a T-REX in his garden. paragraph: Sent 1: Once upon a time, there was a squirrel named Joey. Sent 2: Joey loved to go outside and play with his cousin Jimmy. Sent 3: Joey and Jimmy played silly games together, and were always laughing. Sent 4: One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Sent 5: Joey woke up early in the morning to eat some food before they left. Sent 6: He couldn’t find anything to eat except for pie! Sent 7: Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. Sent 8: After he ate, he and Jimmy went to the pond. Sent 9: On their way there they saw their friend Jack Rabbit. Sent 10: They dove into the water and swam for several hours. Sent 11: The sun was out, but the breeze was cold. Sent 12: Joey and Jimmy got out of the water and started walking home. Sent 13: Their fur was wet, and the breeze chilled them. Sent 14: When they got home, they dried off, and Jimmy put on his favorite purple shirt. Sent 15: Joey put on a blue shirt with red and green dots. Sent 16: The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed. ``` ``` # Set the task on T5 t5.setTask('multirc ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' questions: Why was Joey surprised the morning he woke up for breakfast? answer: There was a T-REX in his garden. paragraph: Sent 1: Once upon a time, there was a squirrel named Joey. Sent 2: Joey loved to go outside and play with his cousin Jimmy. Sent 3: Joey and Jimmy played silly games together, and were always laughing. Sent 4: One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Sent 5: Joey woke up early in the morning to eat some food before they left. Sent 6: He couldn’t find anything to eat except for pie! Sent 7: Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. Sent 8: After he ate, he and Jimmy went to the pond. Sent 9: On their way there they saw their friend Jack Rabbit. Sent 10: They dove into the water and swam for several hours. Sent 11: The sun was out, but the breeze was cold. Sent 12: Joey and Jimmy got out of the water and started walking home. Sent 13: Their fur was wet, and the breeze chilled them. Sent 14: When they got home, they dried off, and Jimmy put on his favorite purple shirt. Sent 15: Joey put on a blue shirt with red and green dots. Sent 16: The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed. '''], [ ''' questions: Why was Joey surprised the morning he woke up for breakfast? answer: There was only pie for breakfast. paragraph: Sent 1: Once upon a time, there was a squirrel named Joey. 
Sent 2: Joey loved to go outside and play with his cousin Jimmy. Sent 3: Joey and Jimmy played silly games together, and were always laughing. Sent 4: One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Sent 5: Joey woke up early in the morning to eat some food before they left. Sent 6: He couldn’t find anything to eat except for pie! Sent 7: Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. Sent 8: After he ate, he and Jimmy went to the pond. Sent 9: On their way there they saw their friend Jack Rabbit. Sent 10: They dove into the water and swam for several hours. Sent 11: The sun was out, but the breeze was cold. Sent 12: Joey and Jimmy got out of the water and started walking home. Sent 13: Their fur was wet, and the breeze chilled them. Sent 14: When they got home, they dried off, and Jimmy put on his favorite purple shirt. Sent 15: Joey put on a blue shirt with red and green dots. Sent 16: The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed. '''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) ``` # Task 12 [WiC - Word sense disambiguation](https://arxiv.org/abs/1808.09121) Decide for `two sentence`s with a shared `disambigous word` wether they have the target word has the same `semantic meaning` in both sentences. This is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf). |Predicted | disambigous word| Sentence 1 | Sentence 2 | |----------|-----------------|------------|------------| | False | kill | He totally killed that rock show! | The airplane crash killed his family | | True | window | The expanded window will give us time to catch the thieves.|You have a two-hour window for turning in your homework. | | False | window | He jumped out of the window.|You have a two-hour window for turning in your homework. | ## How to configure T5 task for MultiRC `.setTask('wic pos:)` followed by `sentence1:` prefix for the first sentence, followed by `sentence2:` prefix for the second sentence. ### Example pre-processed input for T5 WiC task: ``` wic pos: sentence1: The expanded window will give us time to catch the thieves. sentence2: You have a two-hour window of turning in your homework. word : window ``` ``` # Set the task on T5 t5.setTask('wic ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' pos: sentence1: The expanded window will give us time to catch the thieves. sentence2: You have a two-hour window of turning in your homework. word : window '''],] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=180) ``` # Task 13 [WSC and DPR - Coreference resolution/ Pronoun ambiguity resolver ](https://www.aaai.org/ocs/index.php/KR/KR12/paper/view/4492/0) Predict for an `ambiguous pronoun` to which `noun` it is referring to. This is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf) and [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf). |Prediction| Text | |----------|-------| | stable | The stable was very roomy, with four good stalls; a large swinging window opened into the yard , which made *it* pleasant and airy. 
| ## How to configure T5 task for WSC/DPR `.setTask('wsc:)` and surround pronoun with asteriks symbols.. ### Example pre-processed input for T5 WSC/DPR task: The `ambiguous pronous` should be surrounded with `*` symbols. ***Note*** Read [Appendix A.](https://arxiv.org/pdf/1910.10683.pdf#page=64&zoom=100,84,360) for more info ``` wsc: The stable was very roomy, with four good stalls; a large swinging window opened into the yard , which made *it* pleasant and airy. ``` ``` # Does not work yet 100% correct # Set the task on T5 t5.setTask('wsc') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [['''The stable was very roomy, with four good stalls; a large swinging window opened into the yard , which made *it* pleasant and airy.'''],] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) ``` # Task 14 [Text summarization](https://arxiv.org/abs/1506.03340) `Summarizes` a paragraph into a shorter version with the same semantic meaning. | Predicted summary| Text | |------------------|-------| | manchester united face newcastle in the premier league on wednesday . louis van gaal's side currently sit two points clear of liverpool in fourth . the belgian duo took to the dance floor on monday night with some friends . | the belgian duo took to the dance floor on monday night with some friends . manchester united face newcastle in the premier league on wednesday . red devils will be looking for just their second league away win in seven . louis van gaal’s side currently sit two points clear of liverpool in fourth . | ## How to configure T5 task for summarization `.setTask('summarize:)` ### Example pre-processed input for T5 summarization task: This task requires no pre-processing, setting the task to `summarize` is sufficient. ``` the belgian duo took to the dance floor on monday night with some friends . manchester united face newcastle in the premier league on wednesday . red devils will be looking for just their second league away win in seven . louis van gaal’s side currently sit two points clear of liverpool in fourth . ``` ``` # Set the task on T5 t5.setTask('summarize ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' The belgian duo took to the dance floor on monday night with some friends . manchester united face newcastle in the premier league on wednesday . red devils will be looking for just their second league away win in seven . louis van gaal’s side currently sit two points clear of liverpool in fourth . '''], [''' Calculus, originally called infinitesimal calculus or "the calculus of infinitesimals", is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. It has two major branches, differential calculus and integral calculus; the former concerns instantaneous rates of change, and the slopes of curves, while integral calculus concerns accumulation of quantities, and areas under or between curves. 
These two branches are related to each other by the fundamental theorem of calculus, and they make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit.[1] Infinitesimal calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz.[2][3] Today, calculus has widespread uses in science, engineering, and economics.[4] In mathematics education, calculus denotes courses of elementary mathematical analysis, which are mainly devoted to the study of functions and limits. The word calculus (plural calculi) is a Latin word, meaning originally "small pebble" (this meaning is kept in medicine – see Calculus (medicine)). Because such pebbles were used for calculation, the meaning of the word has evolved and today usually means a method of computation. It is therefore used for naming specific methods of calculation and related theories, such as propositional calculus, Ricci calculus, calculus of variations, lambda calculus, and process calculus.'''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) ``` # Task 15 [SQuAD - Context based question answering](https://arxiv.org/abs/1606.05250) Predict an `answer` to a `question` based on input `context`. |Predicted Answer | Question | Context | |-----------------|----------|------| |carbon monoxide| What does increased oxygen concentrations in the patient’s lungs displace? | Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O 2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the ’bends’) are sometimes treated using these devices. Increased O 2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. Increasing the pressure of O 2 as soon as possible is part of the treatment. |pie| What did Joey eat for breakfast?| Once upon a time, there was a squirrel named Joey. Joey loved to go outside and play with his cousin Jimmy. Joey and Jimmy played silly games together, and were always laughing. One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Joey woke up early in the morning to eat some food before they left. Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. After he ate, he and Jimmy went to the pond. On their way there they saw their friend Jack Rabbit. They dove into the water and swam for several hours. The sun was out, but the breeze was cold. Joey and Jimmy got out of the water and started walking home. Their fur was wet, and the breeze chilled them. When they got home, they dried off, and Jimmy put on his favorite purple shirt. Joey put on a blue shirt with red and green dots. 
The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed,'| ## How to configure T5 task parameter for Squad Context based question answering `.setTask('question:)` and prefix the context which can be made up of multiple sentences with `context:` ## Example pre-processed input for T5 Squad Context based question answering: ``` question: What does increased oxygen concentrations in the patient’s lungs displace? context: Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O 2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the ’bends’) are sometimes treated using these devices. Increased O 2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. Increasing the pressure of O 2 as soon as possible is part of the treatment. ``` ``` # Set the task on T5 t5.setTask('question: ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' What does increased oxygen concentrations in the patient’s lungs displace? context: Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O 2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the ’bends’) are sometimes treated using these devices. Increased O 2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. Increasing the pressure of O 2 as soon as possible is part of the treatment. 
'''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) ``` # Task 16 [WMT1 Translate English to German](https://arxiv.org/abs/1706.03762) For translation tasks use the `marian` model ## How to configure T5 task parameter for WMT Translate English to German `.setTask('translate English to German:)` ``` # Set the task on T5 t5.setTask('translate English to German: ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ '''I like sausage and Tea for breakfast with potatoes'''],] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) ``` # Task 17 [WMT2 Translate English to French](https://arxiv.org/abs/1706.03762) For translation tasks use the `marian` model ## How to configure T5 task parameter for WMT Translate English to French `.setTask('translate English to French:)` ``` # Set the task on T5 t5.setTask('translate English to French: ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ '''I like sausage and Tea for breakfast with potatoes'''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) ``` # 18 [WMT3 - Translate English to Romanian](https://arxiv.org/abs/1706.03762) For translation tasks use the `marian` model ## How to configure T5 task parameter for English to Romanian `.setTask('translate English to Romanian:)` ``` # Set the task on T5 t5.setTask('translate English to Romanian: ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ ['''I like sausage and Tea for breakfast with potatoes'''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) ```
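Every task cell above repeats the same boilerplate: set the task prefix on `t5`, rebuild the pipeline, create a single-column DataFrame and transform it. If you run many tasks, a small wrapper can remove that repetition. The sketch below is only a convenience suggestion (the function `run_t5_task` is not part of Spark NLP); it reuses the `documentAssembler`, `t5` and `spark` objects defined earlier in this notebook:

```python
from pyspark.ml import Pipeline

def run_t5_task(task_prefix, texts):
    """Hypothetical helper: configure the T5 task prefix, rebuild the pipeline
    from the annotators defined above, and return a DataFrame of predictions."""
    t5.setTask(task_prefix)
    pipeline = Pipeline().setStages([documentAssembler, t5])
    data = spark.createDataFrame([[t] for t in texts]).toDF("text")
    return pipeline.fit(data).transform(data).select("text", "t5.result")

# Example: reuse the same helper for any of the tasks shown above
run_t5_task("sst2 sentence: ", ["I really hated that movie",
                                "Anna and Mike like to dance"]).show(truncate=False)
```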
'''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).toPandas()#show(truncate=False) copa choice1: He fell off the ladder choice2: He climbed up the lader premise: The man lost his balance on the ladder question: effect # Set the task on T5 t5.setTask('copa ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' choice1: He fell off the ladder choice2: He climbed up the lader premise: The man lost his balance on the ladder question: effect '''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).toPandas()#show(truncate=False) multirc questions: Why was Joey surprised the morning he woke up for breakfast? answer: There was a T-REX in his garden. paragraph: Sent 1: Once upon a time, there was a squirrel named Joey. Sent 2: Joey loved to go outside and play with his cousin Jimmy. Sent 3: Joey and Jimmy played silly games together, and were always laughing. Sent 4: One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Sent 5: Joey woke up early in the morning to eat some food before they left. Sent 6: He couldn’t find anything to eat except for pie! Sent 7: Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. Sent 8: After he ate, he and Jimmy went to the pond. Sent 9: On their way there they saw their friend Jack Rabbit. Sent 10: They dove into the water and swam for several hours. Sent 11: The sun was out, but the breeze was cold. Sent 12: Joey and Jimmy got out of the water and started walking home. Sent 13: Their fur was wet, and the breeze chilled them. Sent 14: When they got home, they dried off, and Jimmy put on his favorite purple shirt. Sent 15: Joey put on a blue shirt with red and green dots. Sent 16: The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed. # Set the task on T5 t5.setTask('multirc ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' questions: Why was Joey surprised the morning he woke up for breakfast? answer: There was a T-REX in his garden. paragraph: Sent 1: Once upon a time, there was a squirrel named Joey. Sent 2: Joey loved to go outside and play with his cousin Jimmy. Sent 3: Joey and Jimmy played silly games together, and were always laughing. Sent 4: One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Sent 5: Joey woke up early in the morning to eat some food before they left. Sent 6: He couldn’t find anything to eat except for pie! Sent 7: Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. Sent 8: After he ate, he and Jimmy went to the pond. Sent 9: On their way there they saw their friend Jack Rabbit. Sent 10: They dove into the water and swam for several hours. Sent 11: The sun was out, but the breeze was cold. Sent 12: Joey and Jimmy got out of the water and started walking home. Sent 13: Their fur was wet, and the breeze chilled them. Sent 14: When they got home, they dried off, and Jimmy put on his favorite purple shirt. Sent 15: Joey put on a blue shirt with red and green dots. 
Sent 16: The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed. '''], [ ''' questions: Why was Joey surprised the morning he woke up for breakfast? answer: There was only pie for breakfast. paragraph: Sent 1: Once upon a time, there was a squirrel named Joey. Sent 2: Joey loved to go outside and play with his cousin Jimmy. Sent 3: Joey and Jimmy played silly games together, and were always laughing. Sent 4: One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Sent 5: Joey woke up early in the morning to eat some food before they left. Sent 6: He couldn’t find anything to eat except for pie! Sent 7: Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. Sent 8: After he ate, he and Jimmy went to the pond. Sent 9: On their way there they saw their friend Jack Rabbit. Sent 10: They dove into the water and swam for several hours. Sent 11: The sun was out, but the breeze was cold. Sent 12: Joey and Jimmy got out of the water and started walking home. Sent 13: Their fur was wet, and the breeze chilled them. Sent 14: When they got home, they dried off, and Jimmy put on his favorite purple shirt. Sent 15: Joey put on a blue shirt with red and green dots. Sent 16: The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed. '''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) wic pos: sentence1: The expanded window will give us time to catch the thieves. sentence2: You have a two-hour window of turning in your homework. word : window # Set the task on T5 t5.setTask('wic ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' pos: sentence1: The expanded window will give us time to catch the thieves. sentence2: You have a two-hour window of turning in your homework. word : window '''],] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=180) wsc: The stable was very roomy, with four good stalls; a large swinging window opened into the yard , which made *it* pleasant and airy. # Does not work yet 100% correct # Set the task on T5 t5.setTask('wsc') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [['''The stable was very roomy, with four good stalls; a large swinging window opened into the yard , which made *it* pleasant and airy.'''],] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) the belgian duo took to the dance floor on monday night with some friends . manchester united face newcastle in the premier league on wednesday . red devils will be looking for just their second league away win in seven . louis van gaal’s side currently sit two points clear of liverpool in fourth . 
# Set the task on T5 t5.setTask('summarize ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' The belgian duo took to the dance floor on monday night with some friends . manchester united face newcastle in the premier league on wednesday . red devils will be looking for just their second league away win in seven . louis van gaal’s side currently sit two points clear of liverpool in fourth . '''], [''' Calculus, originally called infinitesimal calculus or "the calculus of infinitesimals", is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. It has two major branches, differential calculus and integral calculus; the former concerns instantaneous rates of change, and the slopes of curves, while integral calculus concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus, and they make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit.[1] Infinitesimal calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz.[2][3] Today, calculus has widespread uses in science, engineering, and economics.[4] In mathematics education, calculus denotes courses of elementary mathematical analysis, which are mainly devoted to the study of functions and limits. The word calculus (plural calculi) is a Latin word, meaning originally "small pebble" (this meaning is kept in medicine – see Calculus (medicine)). Because such pebbles were used for calculation, the meaning of the word has evolved and today usually means a method of computation. It is therefore used for naming specific methods of calculation and related theories, such as propositional calculus, Ricci calculus, calculus of variations, lambda calculus, and process calculus.'''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) question: What does increased oxygen concentrations in the patient’s lungs displace? context: Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O 2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the ’bends’) are sometimes treated using these devices. Increased O 2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. Increasing the pressure of O 2 as soon as possible is part of the treatment. # Set the task on T5 t5.setTask('question: ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ ''' What does increased oxygen concentrations in the patient’s lungs displace? 
context: Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O 2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the ’bends’) are sometimes treated using these devices. Increased O 2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. Increasing the pressure of O 2 as soon as possible is part of the treatment. '''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) # Set the task on T5 t5.setTask('translate English to German: ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ '''I like sausage and Tea for breakfast with potatoes'''],] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) # Set the task on T5 t5.setTask('translate English to French: ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ [ '''I like sausage and Tea for breakfast with potatoes'''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False) # Set the task on T5 t5.setTask('translate English to Romanian: ') # Build pipeline with T5 pipe_components = [documentAssembler,t5] pipeline = Pipeline().setStages( pipe_components) # define Data, add additional tags between sentences sentences = [ ['''I like sausage and Tea for breakfast with potatoes'''] ] df = spark.createDataFrame(sentences).toDF("text") #Predict on text data with T5 model = pipeline.fit(df) annotated_df = model.transform(df) annotated_df.select(['text','t5.result']).show(truncate=False)
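Each task above re-creates and re-fits the same two-stage pipeline. As a convenience sketch (not part of the original notebook), the helper below wraps that pattern and uses Spark NLP's `LightPipeline` for fast single-string inference; the helper name `t5_predict`, the empty-DataFrame fit, and the CoLA example call are illustrative assumptions.

```
# Hypothetical convenience helper: reuse the documentAssembler/t5 stages defined above,
# fit once on an empty DataFrame, and annotate plain strings with LightPipeline.
from pyspark.ml import Pipeline
from sparknlp.base import LightPipeline

def t5_predict(task_prefix, texts):
    t5.setTask(task_prefix)                                  # e.g. 'cola sentence:' or 'summarize '
    pipeline = Pipeline().setStages([documentAssembler, t5])
    empty_df = spark.createDataFrame([[""]]).toDF("text")    # nothing to learn, just builds the PipelineModel
    light = LightPipeline(pipeline.fit(empty_df))
    return [row["T5"] for row in light.annotate(texts)]      # "T5" is the output column set above

# Example: grammatical acceptability (CoLA) for two sentences
print(t5_predict("cola sentence:", [
    "Anna and Mike is going skiing and they is liked is",
    "Anna and Mike like to dance",
]))
```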
# Amazon SageMaker Multi-Model Endpoints using Scikit Learn

With [Amazon SageMaker multi-model endpoints](https://docs.aws.amazon.com/sagemaker/latest/dg/multi-model-endpoints.html), customers can create an endpoint that seamlessly hosts up to thousands of models. These endpoints are well suited to use cases where any one of a large number of models, which can be served from a common inference container to save inference costs, needs to be invokable on demand and where it is acceptable for infrequently invoked models to incur some additional latency. For applications that require consistently low inference latency, an endpoint deploying a single model is still the best choice.

At a high level, Amazon SageMaker manages the loading and unloading of models for a multi-model endpoint as they are needed. When an invocation request is made for a particular model, Amazon SageMaker routes the request to an instance assigned to that model, downloads the model artifacts from S3 onto that instance, and initiates loading of the model into the memory of the container. As soon as loading is complete, Amazon SageMaker performs the requested invocation and returns the result. If the model is already loaded in memory on the selected instance, the download and load steps are skipped and the invocation is performed immediately.

To demonstrate how multi-model endpoints are created and used, this notebook provides an example using a set of Scikit Learn models that each predict housing prices for a single location. This domain is used as a simple example to easily experiment with multi-model endpoints.

The Amazon SageMaker multi-model endpoint capability is designed to work with the MXNet, PyTorch, and Scikit-Learn machine learning frameworks (TensorFlow coming soon), as well as the SageMaker XGBoost, KNN, and Linear Learner algorithms. In addition, Amazon SageMaker multi-model endpoints are also designed to work with cases where you bring your own container that integrates with the multi-model server library. An example of this can be found [here](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/advanced_functionality/multi_model_bring_your_own) and documentation [here.](https://docs.aws.amazon.com/sagemaker/latest/dg/build-multi-model-build-container.html)

### Contents
1. [Generate synthetic data for housing models](#Generate-synthetic-data-for-housing-models)
1. [Train multiple house value prediction models](#Train-multiple-house-value-prediction-models)
1. [Create the Amazon SageMaker MultiDataModel entity](#Create-the-Amazon-SageMaker-MultiDataModel-entity)
1. [Create the Multi-Model Endpoint](#Create-the-multi-model-endpoint)
1. [Deploy the Multi-Model Endpoint](#deploy-the-multi-model-endpoint)
1. [Get Predictions from the endpoint](#Get-predictions-from-the-endpoint)
1. [Additional Information](#Additional-information)
1. [Clean up](#Clean-up)

## Generate synthetic data for housing models

The code below contains helper functions to generate synthetic data in the form of a `1x7` numpy array representing the features of a house. The first entry in the array is the randomly generated price of the house. The remaining entries are the features (i.e. number of bedrooms, square feet, number of bathrooms, etc.). These functions will be used to generate synthetic data for training, validation, and testing. They will also allow us to submit synthetic payloads for inference to test our multi-model endpoint.
``` import numpy as np import pandas as pd import time NUM_HOUSES_PER_LOCATION = 1000 LOCATIONS = [ "NewYork_NY", "LosAngeles_CA", "Chicago_IL", "Houston_TX", "Dallas_TX", "Phoenix_AZ", "Philadelphia_PA", "SanAntonio_TX", "SanDiego_CA", "SanFrancisco_CA", ] PARALLEL_TRAINING_JOBS = 4 # len(LOCATIONS) if your account limits can handle it MAX_YEAR = 2019 def gen_price(house): _base_price = int(house["SQUARE_FEET"] * 150) _price = int( _base_price + (10000 * house["NUM_BEDROOMS"]) + (15000 * house["NUM_BATHROOMS"]) + (15000 * house["LOT_ACRES"]) + (15000 * house["GARAGE_SPACES"]) - (5000 * (MAX_YEAR - house["YEAR_BUILT"])) ) return _price def gen_random_house(): _house = { "SQUARE_FEET": int(np.random.normal(3000, 750)), "NUM_BEDROOMS": np.random.randint(2, 7), "NUM_BATHROOMS": np.random.randint(2, 7) / 2, "LOT_ACRES": round(np.random.normal(1.0, 0.25), 2), "GARAGE_SPACES": np.random.randint(0, 4), "YEAR_BUILT": min(MAX_YEAR, int(np.random.normal(1995, 10))), } _price = gen_price(_house) return [ _price, _house["YEAR_BUILT"], _house["SQUARE_FEET"], _house["NUM_BEDROOMS"], _house["NUM_BATHROOMS"], _house["LOT_ACRES"], _house["GARAGE_SPACES"], ] def gen_houses(num_houses): _house_list = [] for i in range(num_houses): _house_list.append(gen_random_house()) _df = pd.DataFrame( _house_list, columns=[ "PRICE", "YEAR_BUILT", "SQUARE_FEET", "NUM_BEDROOMS", "NUM_BATHROOMS", "LOT_ACRES", "GARAGE_SPACES", ], ) return _df ``` ## Train multiple house value prediction models In the follow section, we are setting up the code to train a house price prediction model for each of 4 different cities. As such, we will launch multiple training jobs asynchronously, using the AWS Managed container for Scikit Learn via the Sagemaker SDK using the `SKLearn` estimator class. In this notebook, we will be using the AWS Managed Scikit Learn image for both training and inference - this image provides native support for launching multi-model endpoints. ``` import sagemaker from sagemaker import get_execution_role from sagemaker.inputs import TrainingInput import boto3 from time import gmtime, strftime s3 = boto3.resource("s3") sagemaker_session = sagemaker.Session() role = get_execution_role() BUCKET = sagemaker_session.default_bucket() TRAINING_FILE = "training.py" INFERENCE_FILE = "inference.py" SOURCE_DIR = "source_dir" DATA_PREFIX = "DEMO_MME_SCIKIT_V1" MULTI_MODEL_ARTIFACTS = "multi_model_artifacts" TRAIN_INSTANCE_TYPE = "ml.m4.xlarge" ENDPOINT_INSTANCE_TYPE = "ml.m4.xlarge" CUR = strftime("%Y-%m-%d-%H-%M-%S", gmtime()) ENDPOINT_NAME = "mme-sklearn-housing-V1" + "-" + CUR MODEL_NAME = ENDPOINT_NAME ``` ### Split a given dataset into train, validation, and test The code below will generate 3 sets of data. 1 set to train, 1 set for validation and 1 for testing. 
```
from sklearn.model_selection import train_test_split

SEED = 7
SPLIT_RATIOS = [0.6, 0.3, 0.1]


def split_data(df):
    # split data into train, validation, and test sets
    seed = SEED
    val_size = SPLIT_RATIOS[1]
    test_size = SPLIT_RATIOS[2]

    num_samples = df.shape[0]
    X1 = df.values[:num_samples, 1:]  # keep only the features, skip the target, all rows
    Y1 = df.values[:num_samples, :1]  # keep only the target, all rows

    # Use split ratios to divide up into train/val/test
    X_train, X_val, y_train, y_val = train_test_split(
        X1, Y1, test_size=(test_size + val_size), random_state=seed
    )
    # Of the remaining non-training samples, give proper ratio to validation and to test
    X_val, X_test, y_val, y_test = train_test_split(
        X_val, y_val, test_size=(test_size / (test_size + val_size)), random_state=seed
    )
    # reassemble the datasets with target in first column and features after that
    _train = np.concatenate([y_train, X_train], axis=1)
    _val = np.concatenate([y_val, X_val], axis=1)
    _test = np.concatenate([y_test, X_test], axis=1)

    return _train, _val, _test
```

### Prepare training and inference scripts

By using the Scikit Learn estimator via the SageMaker SDK, we can train and host models on Amazon SageMaker. For training, we do the following:

1. Prepare a training script - this script will execute the training logic within a SageMaker managed Scikit Learn container.
2. Create a `sagemaker.sklearn.estimator.SKLearn` estimator.
3. Call the estimator's `.fit()` method.

For more information on using Scikit Learn with the SageMaker SDK, see the docs [here.](https://sagemaker.readthedocs.io/en/stable/frameworks/sklearn/using_sklearn.html)

Below, we will create the training script called `training.py`, located at the root of a directory called `source_dir`. In this example, we will be training a [RandomForestRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) model that will later be used for inference in predicting house prices.

**NOTE:** You would modify the script below to implement your own training logic.

```
!mkdir $SOURCE_DIR %%writefile $SOURCE_DIR/$TRAINING_FILE import argparse import os import numpy as np import pandas as pd from sklearn.ensemble import RandomForestRegressor import joblib if __name__ == "__main__": print("extracting arguments") parser = argparse.ArgumentParser() # hyperparameters sent by the client are passed as command-line arguments to the script.
# to simplify the demo we don't use all sklearn RandomForest hyperparameters parser.add_argument("--n-estimators", type=int, default=10) parser.add_argument("--min-samples-leaf", type=int, default=3) # Data, model, and output directories parser.add_argument("--model-dir", type=str, default=os.environ.get("SM_MODEL_DIR")) parser.add_argument("--train", type=str, default=os.environ.get("SM_CHANNEL_TRAIN")) parser.add_argument("--validation", type=str, default=os.environ.get("SM_CHANNEL_VALIDATION")) parser.add_argument("--model-name", type=str) args, _ = parser.parse_known_args() print("reading data") print("model_name: {}".format(args.model_name)) train_file = os.path.join(args.train, args.model_name + "_train.csv") train_df = pd.read_csv(train_file) # read in the training data val_file = os.path.join(args.validation, args.model_name + "_val.csv") test_df = pd.read_csv(os.path.join(val_file)) # read in the test data # Matrix representation of the data print("building training and testing datasets") X_train = train_df[train_df.columns[1 : train_df.shape[1]]] X_test = test_df[test_df.columns[1 : test_df.shape[1]]] y_train = train_df[train_df.columns[0]] y_test = test_df[test_df.columns[0]] # fitting the model print("training model") model = RandomForestRegressor( n_estimators=args.n_estimators, min_samples_leaf=args.min_samples_leaf, n_jobs=-1 ) model.fit(X_train, y_train) # print abs error print("validating model") abs_err = np.abs(model.predict(X_test) - y_test) # print couple perf metrics for q in [10, 50, 90]: print("AE-at-" + str(q) + "th-percentile: " + str(np.percentile(a=abs_err, q=q))) # persist model path = os.path.join(args.model_dir, "model.joblib") joblib.dump(model, path) print("model persisted at " + path) ``` When using multi-model endpoints with the Sagemaker managed Scikit Learn container, we need to provide an entry point script for inference that will **at least** load the saved model. We will now create this script and call it `inference.py` and store it at the root of a directory called `source_dir`. This is the same directory which contains our `training.py` script. **Note:** You could place the below `model_fn` function within the `training.py` script (above the main guard) if you prefer to have a single script. **Note:** You would modify the script below to implement your own inferencing logic. Additional information on model loading and model serving for Scikit Learn on SageMaker can be found [here.](https://sagemaker.readthedocs.io/en/stable/frameworks/sklearn/using_sklearn.html#deploy-a-scikit-learn-model) ``` %%writefile $SOURCE_DIR/$INFERENCE_FILE import os import joblib def model_fn(model_dir): print("loading model.joblib from: {}".format(model_dir)) loaded_model = joblib.load(os.path.join(model_dir, "model.joblib")) return loaded_model ``` ### Launch a single training job for a given housing location There is nothing specific to multi-model endpoints in terms of the models it will host. They are trained in the same way as all other SageMaker models. Here we are using the Scikit Learn estimator and not waiting for the job to complete. 
``` from sagemaker.sklearn.estimator import SKLearn def launch_training_job(location): # clear out old versions of the data s3_bucket = s3.Bucket(BUCKET) full_input_prefix = f"{DATA_PREFIX}/model_prep/{location}" s3_bucket.objects.filter(Prefix=full_input_prefix + "/").delete() # upload the entire set of data for all three channels local_folder = f"data/{location}" inputs = sagemaker_session.upload_data(path=local_folder, key_prefix=full_input_prefix) print(f"Training data uploaded: {inputs}") _job = "skl-{}".format(location.replace("_", "-")) full_output_prefix = f"{DATA_PREFIX}/model_artifacts/{location}" s3_output_path = f"s3://{BUCKET}/{full_output_prefix}" code_location = f"s3://{BUCKET}/{full_input_prefix}/code" # Add code_location argument in order to ensure that code_artifacts are stored in the same place. estimator = SKLearn( entry_point=TRAINING_FILE, # script to use for training job role=role, source_dir=SOURCE_DIR, # Location of scripts instance_count=1, instance_type=TRAIN_INSTANCE_TYPE, framework_version="0.23-1", # 0.23-1 is the latest version output_path=s3_output_path, # Where to store model artifacts base_job_name=_job, code_location=code_location, # This is where the .tar.gz of the source_dir will be stored metric_definitions=[{"Name": "median-AE", "Regex": "AE-at-50th-percentile: ([0-9.]+).*$"}], hyperparameters={"n-estimators": 100, "min-samples-leaf": 3, "model-name": location}, ) DISTRIBUTION_MODE = "FullyReplicated" train_input = TrainingInput( s3_data=inputs + "/train", distribution=DISTRIBUTION_MODE, content_type="csv" ) val_input = TrainingInput( s3_data=inputs + "/val", distribution=DISTRIBUTION_MODE, content_type="csv" ) remote_inputs = {"train": train_input, "validation": val_input} estimator.fit(remote_inputs, wait=False) # Return the estimator object return estimator ``` ### Kick off a model training job for each housing location ``` def save_data_locally(location, train, val, test): # _header = ','.join(COLUMNS) os.makedirs(f"data/{location}/train") np.savetxt(f"data/{location}/train/{location}_train.csv", train, delimiter=",", fmt="%.2f") os.makedirs(f"data/{location}/val") np.savetxt(f"data/{location}/val/{location}_val.csv", val, delimiter=",", fmt="%.2f") os.makedirs(f"data/{location}/test") np.savetxt(f"data/{location}/test/{location}_test.csv", test, delimiter=",", fmt="%.2f") import shutil import os estimators = [] shutil.rmtree("data", ignore_errors=True) for loc in LOCATIONS[:PARALLEL_TRAINING_JOBS]: _houses = gen_houses(NUM_HOUSES_PER_LOCATION) _train, _val, _test = split_data(_houses) save_data_locally(loc, _train, _val, _test) estimator = launch_training_job(loc) estimators.append(estimator) time.sleep(2) # to avoid throttling the CreateTrainingJob API print() print( f"{len(estimators)} training jobs launched: {[x.latest_training_job.job_name for x in estimators]}" ) ``` ### Wait for all model training to finish ``` def wait_for_training_job_to_complete(estimator): job = estimator.latest_training_job.job_name print(f"Waiting for job: {job}") status = estimator.latest_training_job.describe()["TrainingJobStatus"] while status == "InProgress": time.sleep(45) status = estimator.latest_training_job.describe()["TrainingJobStatus"] if status == "InProgress": print(f"{job} job status: {status}") print(f"DONE. 
Status for {job} is {status}\n")


# wait for the jobs to finish
for est in estimators:
    wait_for_training_job_to_complete(est)
```

## Create the multi-model endpoint with the SageMaker SDK

### Create a SageMaker Model from one of the Estimators

```
estimator = estimators[0]

# inference.py is the entry_point for when we deploy the model
# Note how we do NOT specify source_dir again, this information is inherited from the estimator
model = estimator.create_model(role=role, entry_point="inference.py")
```

### Create the Amazon SageMaker MultiDataModel entity

We create the multi-model endpoint using the [```MultiDataModel```](https://sagemaker.readthedocs.io/en/stable/api/inference/multi_data_model.html) class. You can create a MultiDataModel by directly passing in a `sagemaker.model.Model` object - in which case, the Endpoint will inherit information about the image to use, as well as any environment variables, network isolation, etc., once the MultiDataModel is deployed. In addition, a MultiDataModel can also be created without explicitly passing a `sagemaker.model.Model` object. Please refer to the documentation for additional details.

```
from sagemaker.multidatamodel import MultiDataModel

# This is where our MME will read models from on S3.
model_data_prefix = f"s3://{BUCKET}/{DATA_PREFIX}/{MULTI_MODEL_ARTIFACTS}/"
print(model_data_prefix)

mme = MultiDataModel(
    name=MODEL_NAME,
    model_data_prefix=model_data_prefix,
    model=model,  # passing our model
    sagemaker_session=sagemaker_session,
)
```

## Deploy the Multi-Model Endpoint

You need to consider the appropriate instance type and number of instances for the projected prediction workload across all the models you plan to host behind your multi-model endpoint. The number and size of the individual models will also drive memory requirements.

```
predictor = mme.deploy(
    initial_instance_count=1, instance_type=ENDPOINT_INSTANCE_TYPE, endpoint_name=ENDPOINT_NAME
)
```

### Our endpoint has launched! Let's look at what models are available to the endpoint!

By 'available', we mean: what model artifacts are currently stored under the S3 prefix we defined when setting up the `MultiDataModel` above, i.e. `model_data_prefix`.

Currently, since we have no artifacts (i.e. `tar.gz` files) stored under our defined S3 prefix, our endpoint will have no models 'available' to serve inference requests. We will demonstrate how to make models 'available' to our endpoint below.

```
# No models visible!
list(mme.list_models())
```

### Let's deploy model artifacts to be found by the endpoint

We are now using the `.add_model()` method of the `MultiDataModel` to copy our model artifacts from where they were initially stored during training to where our endpoint will source model artifacts for inference requests.

`model_data_source` refers to the location of our model artifact (i.e. where it was deposited on S3 after training completed).

`model_data_path` is the **relative** path to the S3 prefix we specified above (i.e. `model_data_prefix`) where our endpoint will source models for inference requests. Since this is a **relative** path, we can simply pass the name of what we wish to call the model artifact at inference time (i.e. `Chicago_IL.tar.gz`).

### Dynamically deploying additional models

It is also important to note that we can always use the `.add_model()` method, as shown below, to dynamically deploy more models to the endpoint to serve inference requests as needed.
``` for est in estimators: artifact_path = est.latest_training_job.describe()["ModelArtifacts"]["S3ModelArtifacts"] model_name = artifact_path.split("/")[-4] + ".tar.gz" # This is copying over the model artifact to the S3 location for the MME. mme.add_model(model_data_source=artifact_path, model_data_path=model_name) ``` ## We have added the 4 model artifacts from our training jobs! We can see that the S3 prefix we specified when setting up `MultiDataModel` now has 4 model artifacts. As such, the endpoint can now serve up inference requests for these models. ``` list(mme.list_models()) ``` ## Get predictions from the endpoint Recall that ```mme.deploy()``` returns a [RealTimePredictor](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/predictor.py#L35) that we saved in a variable called ```predictor```. We will use ```predictor``` to submit requests to the endpoint. ### Invoking models on a multi-model endpoint Notice the higher latencies on the first invocation of any given model. This is due to the time it takes SageMaker to download the model to the Endpoint instance and then load the model into the inference container. Subsequent invocations of the same model take advantage of the model already being loaded into the inference container. ``` start_time = time.time() predicted_value = predictor.predict(data=gen_random_house()[1:], target_model="Chicago_IL.tar.gz") duration = time.time() - start_time print("${:,.2f}, took {:,d} ms\n".format(predicted_value[0], int(duration * 1000))) start_time = time.time() predicted_value = predictor.predict(data=gen_random_house()[1:], target_model="Chicago_IL.tar.gz") duration = time.time() - start_time print("${:,.2f}, took {:,d} ms\n".format(predicted_value[0], int(duration * 1000))) start_time = time.time() predicted_value = predictor.predict(data=gen_random_house()[1:], target_model="Houston_TX.tar.gz") duration = time.time() - start_time print("${:,.2f}, took {:,d} ms\n".format(predicted_value[0], int(duration * 1000))) start_time = time.time() predicted_value = predictor.predict(data=gen_random_house()[1:], target_model="Houston_TX.tar.gz") duration = time.time() - start_time print("${:,.2f}, took {:,d} ms\n".format(predicted_value[0], int(duration * 1000))) ``` ### Updating a model To update a model, you would follow the same approach as above and add it as a new model. For example, if you have retrained the `NewYork_NY.tar.gz` model and wanted to start invoking it, you would upload the updated model artifacts behind the S3 prefix with a new name such as `NewYork_NY_v2.tar.gz`, and then change the `target_model` field to invoke `NewYork_NY_v2.tar.gz` instead of `NewYork_NY.tar.gz`. You do not want to overwrite the model artifacts in Amazon S3, because the old version of the model might still be loaded in the containers or on the storage volume of the instances on the endpoint. Invocations to the new model could then invoke the old version of the model. Alternatively, you could stop the endpoint and re-deploy a fresh set of models. ## Using Boto APIs to invoke the endpoint While developing interactively within a Jupyter notebook, since `.deploy()` returns a `RealTimePredictor` it is a more seamless experience to start invoking your endpoint using the SageMaker SDK. You have more fine grained control over the serialization and deserialization protocols to shape your request and response payloads to/from the endpoint. This is the approach we demonstrated above where the `RealTimePredictor` was stored in the variable `predictor`. 
This is great for iterative experimentation within a notebook. Furthermore, should you have an application that has access to the SageMaker SDK, you can always import `RealTimePredictor` and attach it to an existing endpoint - this allows you to stick to using the high level SDK if preferable. Additional documentation on `RealTimePredictor` can be found [here.](https://sagemaker.readthedocs.io/en/stable/api/inference/predictors.html?highlight=RealTimePredictor#sagemaker.predictor.RealTimePredictor) The lower level Boto3 SDK may be preferable if you are attempting to invoke the endpoint as a part of a broader architecture. Imagine an API gateway frontend that uses a Lambda Proxy in order to transform request payloads before hitting a SageMaker Endpoint - in this example, Lambda does not have access to the SageMaker Python SDK, and as such, Boto3 can still allow you to interact with your endpoint and serve inference requests. Boto3 allows for quick injection of ML intelligence via SageMaker Endpoints into existing applications with minimal/no refactoring to existing code. Boto3 will submit your requests as a binary payload, while still allowing you to supply your desired `Content-Type` and `Accept` headers with serialization being handled by the inference container in the SageMaker Endpoint. Additional documentation on `.invoke_endpoint()` can be found [here.](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker-runtime.html) ``` import boto3 import json runtime_sm_client = boto3.client(service_name="sagemaker-runtime") def predict_one_house_value(features, model_name): print(f"Using model {model_name} to predict price of this house: {features}") float_features = [float(i) for i in features] body = ",".join(map(str, float_features)) + "\n" start_time = time.time() response = runtime_sm_client.invoke_endpoint( EndpointName=ENDPOINT_NAME, ContentType="text/csv", TargetModel=model_name, Body=body ) predicted_value = json.loads(response["Body"].read())[0] duration = time.time() - start_time print("${:,.2f}, took {:,d} ms\n".format(predicted_value, int(duration * 1000))) predict_one_house_value(gen_random_house()[1:], "Chicago_IL.tar.gz") ``` ## Clean up Here, to be sure we are not billed for endpoints we are no longer using, we clean up. ``` predictor.delete_endpoint() ```
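If you prefer the low-level Boto3 client used earlier, or also want to remove the endpoint configuration and the backing SageMaker model resource, a hedged sketch is shown below. It assumes the endpoint still exists (i.e. it would be run instead of `predictor.delete_endpoint()` above) and reuses the `ENDPOINT_NAME` and `MODEL_NAME` variables defined earlier.

```
# Alternative cleanup sketch using Boto3 (not part of the original notebook).
# Assumes the endpoint has not been deleted yet; run this instead of predictor.delete_endpoint().
import boto3

sm_client = boto3.client("sagemaker")

# Find the endpoint configuration attached to the endpoint before deleting anything.
endpoint_config_name = sm_client.describe_endpoint(EndpointName=ENDPOINT_NAME)["EndpointConfigName"]

sm_client.delete_endpoint(EndpointName=ENDPOINT_NAME)
sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
sm_client.delete_model(ModelName=MODEL_NAME)
```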
```
import pandas as pd
import sklearn
!pip install sklearn_crfsuite
import sklearn_crfsuite
from nltk.tokenize import word_tokenize
import re
from sklearn.model_selection import KFold
from sklearn_crfsuite import metrics
from sklearn_crfsuite import scorers
import nltk
nltk.download('punkt')

#read datasets
words = open("sample_data/NE.ma.txt", "r")
text = open("sample_data/NE.txt", "r")
words = words.read()
#text = text.read()

#Extract information
index = 1
labels = list()
for i in text.readlines():
    tuple_ = re.findall("<b_enamex TYPE=\"(\w*)\">([\w ]+)<e_enamex>", i)  #([\w ]+)
    if tuple_:
        for j in tuple_:
            labels.append((index, j[0], j[1]))
    index = index + 1
labels

##IO Tagging
#Separate entities ==> (row number, label, word)
"""
IOLabels = list()
for i in labels:
    for j in i[2].split(" "):
        if i[1] == "ORGANIZATION":
            IOLabels.append((i[0], "I-ORG", j))
        elif i[1] == "PERSON":
            IOLabels.append((i[0], "I-PER", j))
        elif i[1] == "LOCATION":
            IOLabels.append((i[0], "I-LOC", j))
"""

##IOB Tagging
#Separate entities ==> (row number, label, word)
IOLabels = list()
for i in labels:
    check = True
    for j in i[2].split(" "):
        if i[1] == "ORGANIZATION":
            if check is True:
                IOLabels.append((i[0], "B-ORG", j))
                check = False
            else:
                IOLabels.append((i[0], "I-ORG", j))
        elif i[1] == "PERSON":
            if check is True:
                IOLabels.append((i[0], "B-PER", j))
                check = False
            else:
                IOLabels.append((i[0], "I-PER", j))
        elif i[1] == "LOCATION":
            if check is True:
                IOLabels.append((i[0], "B-LOC", j))
                check = False
            else:
                IOLabels.append((i[0], "I-LOC", j))
IOLabels

# Extract information with regex ==> (row number, word, features)
features = re.findall("(\d+) ([\w+\']+) ([\w+\+]+)", words)  #(\d+) (\w+)
features
```

# Preprocessing

```
#Create dataframe
df_features = pd.DataFrame(features, columns=["Row", "Words", "Feature"])
df_features["Row"] = pd.to_numeric(df_features["Row"])

#Separate features
df_features["Features2"] = df_features["Feature"].str.split("+")
df_features

#Create dataframe
df_labels = pd.DataFrame(IOLabels, columns=["Row", "Label", "Words"])
df_labels

#Merge dataframes, with df_features as the base dataset
merged_df = df_features.merge(df_labels, on=["Row", "Words"], how="left")
merged_df["Label"].fillna("O", inplace=True)
merged_df

#Add features
merged_df["word.lower"] = merged_df["Words"].str.lower()
merged_df["postag"] = merged_df['Features2'].str[1]
merged_df["Root"] = merged_df['Features2'].str[0]
merged_df["word.istitle"] = merged_df["Words"].str.istitle()
merged_df["word.isupper"] = merged_df["Words"].str.isupper()
merged_df["word.isdigit"] = merged_df["Words"].str.isdigit()
merged_df["pnon"] = merged_df["Feature"].str.contains("Pnon")
merged_df["nom"] = merged_df["Feature"].str.contains("Nom")
merged_df["inf"] = merged_df["Features2"].str[2:]

#Gazetteer from 1st project
organizations = ["Türkiye Cumhuriyeti", "TBMM", "Türkiye Büyük Millet Meclisi", "Cumhurbaşkanlığı", "Bakanlık", "ABD",
                 "Amerika Birleşik Devletleri", "NATO", "Çin Halk Cumhuriyeti", "Merkez Bankası", "CHP", "MHP", "AKP",
                 "AK Parti", "HDP", "İYİ Parti", "DEVA Partisi", "Gelecek Partisi", "HADEP", "ANAP", "DSP", "Yargıtay",
                 "Sayıştay", "Danıştay", "YÖK", "Yüksek Öğretim Kurumu", "YSK", "Yüksek Seçim Kurumu",
                 "Hakimler ve Savcılar Yüksek Kurulu", "Hakimler ve Savcılar Kurulu", "HSK", "HSYK",
                 "Milli Güvenlik Kurulu", "Avrupa Birliği", "AB", "Mehmetçik Vakfı", "Pegasus Havayolları",
                 "Türk Hava Yolları", "THY", "Ülker", "Akbank", "İş Bankası", "Yapı Kredi Bankası", "Ziraat Bankası",
                 "Dünya Bankası", "Halk Bankası", "Tüpraş", "TPAO", "Türkiye Petrolleri Anonim Ortaklığı", "Vodafone",
                 "Migros", "Tofaş", "Mercedes", "Doğuş Otomotiv", "Enerjisa", "Ereğli Demir Çelik", "Turkcell",
                 "Arçelik", "Türk Telekom", "Shell", "Ford Otosan", "BİM", "OPET", "Apple", "Samsung", "Tesla", "IBM",
                 "Google", "Facebook", "Intel", "Microsoft", "Sony", "TRT", "Üniversitesi", "Holding", "Vakfı",
                 "Federasyonu", "Şirket", "Enstitüsü", "Kurumu", "Bankası", "Kurumu", "Bakanlığı", "Köyü", "Dağı",
                 "Mahallesi", "Sokağı", "Köprüsü", "Sarayı", "Mezarlığı", "Futbol Takımı", "Müzesi", "Partisi",
                 "Belediyesi", "Büyükşehir Belediyesi", "Marinası", "Kulübü", "Gazetesi", "Festivali", "Sitesi",
                 "Apartmanı", "Konağı", "Köşkü", "Külliyesi", "Radyo", "Barosu", "Karayolu"]

#Removed
#merged_df["is_organization"] = merged_df["Root"].apply(lambda x: True if x in organizations else False)

def wordshape(text):
    import re
    t1 = re.sub('[A-ZĞÜŞİÖÇ]', 'X', text)
    t2 = re.sub('[a-zığüşiöç]', 'x', t1)
    return re.sub('[0-9]', 'd', t2)

merged_df["word.shape"] = merged_df["Words"].apply(lambda x: wordshape(x))

merged_df["Prev_Label"] = merged_df['Label'].shift(1)
merged_df["Prev_Label"].fillna("O", inplace=True)
merged_df["Prev_istitle"] = merged_df['word.istitle'].shift(1)
merged_df["Prev_istitle"].fillna(False, inplace=True)
merged_df["Prev_isupper"] = merged_df['word.isupper'].shift(1)
merged_df["Prev_isupper"].fillna(False, inplace=True)
merged_df["Prev_lower"] = merged_df['word.lower'].shift(1)
merged_df["Prev_lower"].fillna("", inplace=True)
merged_df["Prev_postag"] = merged_df["postag"].shift(1)
merged_df["Prev_postag"].fillna("Noun", inplace=True)
merged_df

#Prepare train set ==> list of list of dictionaries
df_features = merged_df[["word.lower", "postag", "word.istitle", "word.isupper", "word.isdigit", "pnon", "nom", "inf", "word.shape", "Prev_Label", "Prev_istitle", "Prev_isupper", "Prev_lower", "Prev_postag", "Row"]].groupby(by=["Row"])
x_train = list()
for i in range(len(df_features.groups)):
    x_train.append(df_features.get_group(i+1)[["word.lower", "postag", "word.istitle", "word.isupper", "word.isdigit", "pnon", "nom", "inf", "word.shape", "Prev_Label", "Prev_istitle", "Prev_isupper", "Prev_lower", "Prev_postag"]].to_dict("records"))

#Prepare label set ==> list of list of labels
df_labels = merged_df[["Label", "Row"]].groupby(by=["Row"])
y_train = list()
for i in range(len(df_labels.groups)):
    y_train.append(df_labels.get_group(i+1)["Label"].to_list())
```

# 5-Fold Split

```
# 5 folds split
x_train_1 = list()
x_train_2 = list()
x_train_3 = list()
x_train_4 = list()
x_train_5 = list()
y_train_1 = list()
y_train_2 = list()
y_train_3 = list()
y_train_4 = list()
y_train_5 = list()

for i in range(len(x_train)):
    if i % 5 == 0:
        x_train_1.append(x_train[i])
        y_train_1.append(y_train[i])
    elif i % 5 == 1:
        x_train_2.append(x_train[i])
        y_train_2.append(y_train[i])
    elif i % 5 == 2:
        x_train_3.append(x_train[i])
        y_train_3.append(y_train[i])
    elif i % 5 == 3:
        x_train_4.append(x_train[i])
        y_train_4.append(y_train[i])
    elif i % 5 == 4:
        x_train_5.append(x_train[i])
        y_train_5.append(y_train[i])
```

# Training

```
#hyperparameters
c1 = 0.1
c2 = 0.1
max_iter = 150
all_poss_trans = True

#First fold is test
crf1_x = list()
crf1_y = list()
crf1_x.extend(x_train_2)
crf1_x.extend(x_train_3)
crf1_x.extend(x_train_4)
crf1_x.extend(x_train_5)
crf1_y.extend(y_train_2)
crf1_y.extend(y_train_3)
crf1_y.extend(y_train_4)
crf1_y.extend(y_train_5)

crf1 = sklearn_crfsuite.CRF(
    algorithm='lbfgs',
    c1=c1,
    c2=c2,
    max_iterations=max_iter,
    all_possible_transitions=all_poss_trans
)
crf1.fit(crf1_x, crf1_y)

#Second fold is test
crf2_x = list()
crf2_y = list()
crf2_x.extend(x_train_1)
crf2_x.extend(x_train_3)
crf2_x.extend(x_train_4)
crf2_x.extend(x_train_5)
crf2_y.extend(y_train_1)
crf2_y.extend(y_train_3)
crf2_y.extend(y_train_4)
crf2_y.extend(y_train_5)

crf2 = sklearn_crfsuite.CRF(
    algorithm='lbfgs',
    c1=c1,
    c2=c2,
    max_iterations=max_iter,
    all_possible_transitions=all_poss_trans
)
crf2.fit(crf2_x, crf2_y)

#Third fold is test
crf3_x = list()
crf3_y = list()
crf3_x.extend(x_train_1)
crf3_x.extend(x_train_2)
crf3_x.extend(x_train_4)
crf3_x.extend(x_train_5)
crf3_y.extend(y_train_1)
crf3_y.extend(y_train_2)
crf3_y.extend(y_train_4)
crf3_y.extend(y_train_5)

crf3 = sklearn_crfsuite.CRF(
    algorithm='lbfgs',
    c1=c1,
    c2=c2,
    max_iterations=max_iter,
    all_possible_transitions=all_poss_trans
)
crf3.fit(crf3_x, crf3_y)

#Fourth fold is test
crf4_x = list()
crf4_y = list()
crf4_x.extend(x_train_1)
crf4_x.extend(x_train_2)
crf4_x.extend(x_train_3)
crf4_x.extend(x_train_5)
crf4_y.extend(y_train_1)
crf4_y.extend(y_train_2)
crf4_y.extend(y_train_3)
crf4_y.extend(y_train_5)

crf4 = sklearn_crfsuite.CRF(
    algorithm='lbfgs',
    c1=c1,
    c2=c2,
    max_iterations=max_iter,
    all_possible_transitions=all_poss_trans
)
crf4.fit(crf4_x, crf4_y)

#Fifth fold is test
crf5_x = list()
crf5_y = list()
crf5_x.extend(x_train_1)
crf5_x.extend(x_train_2)
crf5_x.extend(x_train_3)
crf5_x.extend(x_train_4)
crf5_y.extend(y_train_1)
crf5_y.extend(y_train_2)
crf5_y.extend(y_train_3)
crf5_y.extend(y_train_4)

crf5 = sklearn_crfsuite.CRF(
    algorithm='lbfgs',
    c1=c1,
    c2=c2,
    max_iterations=max_iter,
    all_possible_transitions=all_poss_trans
)
crf5.fit(crf5_x, crf5_y)
```

# Evaluation

```
labels = list(crf1.classes_)
labels.remove('O')
labels

sorted_labels = sorted(
    labels,
    key=lambda name: (name[1:], name[0])
)
sorted_labels

pred1 = crf1.predict(x_train_1)
crf1_f1_score = metrics.flat_f1_score(y_train_1, pred1, average='weighted', labels=labels)
crf1_f1_score
crf1_acc = metrics.flat_accuracy_score(y_train_1, pred1)
crf1_acc
crf1_matrix = metrics.flat_classification_report(
    y_train_1, pred1, digits=3, labels=sorted_labels)
print(crf1_matrix)

pred2 = crf2.predict(x_train_2)
crf2_f1_score = metrics.flat_f1_score(y_train_2, pred2, average='weighted', labels=labels)
crf2_f1_score
crf2_acc = metrics.flat_accuracy_score(y_train_2, pred2)
crf2_acc
crf2_matrix = metrics.flat_classification_report(
    y_train_2, pred2, digits=3, labels=sorted_labels)
print(crf2_matrix)

pred3 = crf3.predict(x_train_3)
crf3_f1_score = metrics.flat_f1_score(y_train_3, pred3, average='weighted', labels=labels)
crf3_f1_score
crf3_acc = metrics.flat_accuracy_score(y_train_3, pred3)
crf3_acc
crf3_matrix = metrics.flat_classification_report(
    y_train_3, pred3, digits=3, labels=sorted_labels)
print(crf3_matrix)

pred4 = crf4.predict(x_train_4)
crf4_f1_score = metrics.flat_f1_score(y_train_4, pred4, average='weighted', labels=labels)
crf4_f1_score
crf4_acc = metrics.flat_accuracy_score(y_train_4, pred4)
crf4_acc
crf4_matrix = metrics.flat_classification_report(
    y_train_4, pred4, digits=3, labels=sorted_labels)
print(crf4_matrix)

pred5 = crf5.predict(x_train_5)
crf5_f1_score = metrics.flat_f1_score(y_train_5, pred5, average='weighted', labels=labels)
crf5_f1_score
crf5_acc = metrics.flat_accuracy_score(y_train_5, pred5)
crf5_acc
crf5_matrix = metrics.flat_classification_report(
    y_train_5, pred5, digits=3, labels=sorted_labels)
print(crf5_matrix)

from collections import Counter

def print_transitions(trans_features):
    for (label_from, label_to), weight in trans_features:
        print("%-6s -> %-7s %0.6f" % (label_from, label_to, weight))

print("Top likely transitions:")
print_transitions(Counter(crf1.transition_features_).most_common(20))

print("\nTop unlikely transitions:")
print_transitions(Counter(crf1.transition_features_).most_common()[-20:])
```

# Average Scores

```
avg_f1_score = (crf1_f1_score + crf2_f1_score + crf3_f1_score + crf4_f1_score + crf5_f1_score)/5
avg_f1_score

avg_acc = (crf1_acc + crf2_acc + crf3_acc + crf4_acc + crf5_acc)/5
avg_acc
```
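The five folds above are built and evaluated by hand. As a compact alternative, the `KFold` helper that is already imported at the top of this notebook can drive the same cross-validation loop. This is only a sketch, assuming the `x_train` and `y_train` lists built above; note that `KFold` uses contiguous blocks of sentences rather than the interleaved `i % 5` assignment used here, so the per-fold numbers may differ slightly.

```
# Sketch: 5-fold cross-validation over the sentence-level feature/label lists.
import numpy as np
from sklearn.model_selection import KFold
from sklearn_crfsuite import CRF, metrics

f1_scores, accuracies = [], []
for train_idx, test_idx in KFold(n_splits=5).split(x_train):
    X_tr = [x_train[i] for i in train_idx]
    y_tr = [y_train[i] for i in train_idx]
    X_te = [x_train[i] for i in test_idx]
    y_te = [y_train[i] for i in test_idx]

    crf = CRF(algorithm='lbfgs', c1=0.1, c2=0.1,
              max_iterations=150, all_possible_transitions=True)
    crf.fit(X_tr, y_tr)

    pred = crf.predict(X_te)
    fold_labels = [lab for lab in crf.classes_ if lab != 'O']
    f1_scores.append(metrics.flat_f1_score(y_te, pred, average='weighted', labels=fold_labels))
    accuracies.append(metrics.flat_accuracy_score(y_te, pred))

print("average F1:", np.mean(f1_scores))
print("average accuracy:", np.mean(accuracies))
```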
```
import numpy as np
```

# Schönhage–Strassen Algorithm

## Step 1: Converting an integer into a polynomial

```
#converts an integer x into a polynomial
def toPoly(x):
    digits = list(str(x))
    digits.reverse()
    poly = np.zeros(pow(2,len(digits)),dtype=int)
    for i in range(0,len(digits)):
        poly[i] = int(digits[i])
    return poly

#converts a polynomial poly back into an integer
def toInt(poly):
    x = 0
    for i in range(0,len(poly)):
        x += poly[i] * pow(10,i)
    return x

#tests
assert(167546 == toInt(toPoly(167546)))
assert(546 == toInt(toPoly(546)))
assert(12 == toInt(toPoly(12)))
assert(1 == toInt(toPoly(1)))
```

## Step 2: Computing the convolution product of two polynomials

```
#computes the convolution product of the polynomials polX and polY
def prodConv(polX,polY):
    #Step 2.1:
    #first compute the Fourier transform of both polynomials using the FFT
    fftX = np.fft.fft(polX)
    fftY = np.fft.fft(polY)
    #Step 2.2:
    #take the pointwise product of the two transforms
    prod = fftX * fftY
    #Step 2.3:
    #finally compute the inverse Fourier transform: the result is the desired convolution product
    ifft = np.fft.ifft(prod)
    return ifft

#test
np.testing.assert_array_equal(prodConv(toPoly(41),toPoly(37)),np.array([7.+0.j,31.+0.j,12.+0.j,0.+0.j]))
```

## Step 3: Converting back to a real numerical value

```
#converts the convolution product prod into a real numerical value
def toNumerical(prod):
    prod = (prod.real).astype(int)
    res = 0
    for i in range(0,len(prod)):
        res += prod[i] * pow(10,i)
    return res

#test
assert(toNumerical(np.array([7.+0.j,31.+0.j,12.+0.j,0.+0.j])) == 1517)
```

## Final implementation

```
#computes the product of x and y using the Schönhage–Strassen algorithm
def SchStr(x,y):
    polX = toPoly(x)
    polY = toPoly(y)
    prod = prodConv(polX,polY)
    return toNumerical(prod)

#tests
assert(SchStr(456,789) == 456*789)
assert(SchStr(1,1) == 1)
assert(SchStr(39405,39405) == 39405*39405)
```

# "Standard" multiplication algorithm

```
#computes the product of x and y using the standard (schoolbook) multiplication algorithm
def Stand(x,y):
    digX = list(str(x))
    digY = list(str(y))
    res = np.zeros(len(digX) + len(digY), dtype = int)
    for i in range(len(digX) - 1, -1, -1):
        for j in range(len(digY) - 1, -1, -1):
            k = int(digX[i]) * int(digY[j]) + res[i+j+1]
            res[i+j+1] = k % 10
            res[i+j] += k//10
    return int("".join([str(digit) for digit in res]))

#tests
assert(Stand(456,789) == 456*789)
assert(Stand(1,1) == 1)
assert(Stand(39405,39405) == 39405*39405)
```

# Karatsuba algorithm

```
#computes the product of x and y using the Karatsuba algorithm
def Kara(x,y):
    if x < 10 or y < 10:
        return x*y
    k = max(len(str(x)),len(str(y)))
    k = k // 2
    a = x // 10**(k)
    b = x % 10**(k)
    c = y // 10**(k)
    d = y % 10**(k)
    z0 = Kara(b,d)
    z1 = Kara((a+b),(c+d))
    z2 = Kara(a,c)
    return (z2 * 10**(2*k)) + ((z1 - z2 - z0) * 10**(k)) + (z0)

#tests
assert(Kara(456,789) == 456*789)
assert(Kara(1,1) == 1)
assert(Kara(39405,39405) == 39405*39405)
```
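As a rough, informal comparison of the three implementations above (plus Python's built-in multiplication as a baseline), we can time them on random inputs. This is only a sketch: the operand sizes are kept small because `toPoly` allocates an array of length `2**len(digits)`, and both operands are drawn with the same number of digits so that the two padded arrays have matching lengths. Note also that `toNumerical` truncates the real part of the FFT output, so for much larger inputs the FFT-based version may need explicit rounding (e.g. `np.rint`) to stay exact.

```
#informal timing of SchStr, Stand, Kara and the built-in operator on 12-digit numbers
import random
import time

def benchmark(fn, pairs):
    start = time.perf_counter()
    for a, b in pairs:
        fn(a, b)
    return time.perf_counter() - start

random.seed(0)
pairs = [(random.randrange(10**11, 10**12), random.randrange(10**11, 10**12)) for _ in range(200)]

for name, fn in [("Schönhage–Strassen (FFT)", SchStr),
                 ("Standard", Stand),
                 ("Karatsuba", Kara),
                 ("built-in *", lambda a, b: a * b)]:
    print("%-26s %.4f s" % (name, benchmark(fn, pairs)))
```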
```
from kndetect.utils import get_data_dir_path

data_dir = get_data_dir_path()
```

# PLOT PCS

```
import numpy as np
import matplotlib.pyplot as plt

num_components_to_plot = 3
```

## KN PCs

```
PCs_mixed = np.load(data_dir + "/mixed_pcs.npy", allow_pickle=True).item()
PCs_mixed = PCs_mixed['all'][0:num_components_to_plot]

var_mixed_dict = np.load(data_dir+"/pc_var_ratio_mixed_pcs.npy", allow_pickle=True).item()
var_mixed = var_mixed_dict['all']

markers = ['o','s','D','*','x']
fig = plt.figure(figsize=(10,5))
for i in range(num_components_to_plot):
    PC = PCs_mixed[i]
    x = np.arange(0,102,2)-50
    plt.plot(x, PC, marker=markers[i], label="PC "+str(i+1))
plt.xlabel("days since maximum", fontsize=25)
plt.ylabel("PCs", fontsize=25)
plt.rc('xtick', labelsize=17)
plt.rc('ytick', labelsize=17)
plt.rc('legend', fontsize=15)
plt.legend()
plt.tight_layout()
plt.show()

markers = ['o','s','D','*','x']
fig = plt.figure(figsize=(10,5))
colors = ['#F5622E', '#15284F', '#3C8DFF']
for i in range(num_components_to_plot):
    PC = PCs_mixed[i]
    x = np.arange(0,102,2)-50
    variance = "{:.2f}".format(var_mixed[i])
    plt.plot(x, PC, marker=markers[i], label="PC"+str(i+1)+", Var = "+variance, color=colors[i])
plt.xlabel("days since maximum", fontsize=25)
plt.ylabel("PCs", fontsize=25)
plt.rc('xtick', labelsize=17)
plt.rc('ytick', labelsize=17)
plt.rc('legend', fontsize=15)
plt.legend()
plt.tight_layout()
fig.savefig("results/"+"PCs_mixed.pdf")
plt.show()
```

# PC interpolation

```
pcs_interpolated = np.load(data_dir + "/interpolated_mixed_pcs.npy", allow_pickle=True)
np.shape(pcs_interpolated)

x_interpolated = np.linspace(-50, 50, num=401, endpoint=True)

import numpy as np

fig = plt.figure(figsize=(10,5))
pc_names = ["PC1", "PC2", "PC3"]
colors = ['#F5622E', '#15284F', '#3C8DFF']
markers = ['o','s','D','*','x']
num_pc_components = 3
for i in range(num_pc_components):
    max_val = np.amax(np.abs(pcs_interpolated[i]))
    pcs_interpolated[i] = pcs_interpolated[i]/max_val
    PC = pcs_interpolated[i]
    plt.plot(x_interpolated, PC, label=pc_names[i], marker=markers[i], ms=5, color=colors[i])
ax = plt.gca()
plt.xlabel("days since maximum", fontsize=25)
plt.ylabel("normalized PCs", fontsize=25)
leg = ax.legend()
plt.rc('xtick', labelsize=17)
plt.rc('ytick', labelsize=17)
plt.rc('legend', loc='lower right', fontsize=15)
plt.legend()
plt.tight_layout()
plt.savefig("results/PCs_mixed_interpolated.pdf")

from astropy.table import Table

pc_generation_data = np.load(data_dir + "/PC_generation_dataset_mixed.npy", allow_pickle=True).item()

import matplotlib.pyplot as plt

def get_extracted_region(object_id, band, title):
    pos = np.where(np.asarray(pc_generation_data['object_ids']) == object_id)
    print(pos)
    band_data = pc_generation_data[band][pos]
    print(np.shape(band_data))
    x = np.linspace(-50, 50, num=51, endpoint=True)
    fig = plt.figure(figsize=(10,5))
    ax = plt.gca()
    #plt.axvline(x=0, ls="--", label="Day0", color='#15284F')
    plt.scatter(x, band_data.flatten(), c='#F5622E', label="r band")
    plt.axvspan(-2, +2, facecolor='#D5D5D3', alpha=.25, lw=2, edgecolor='#15284F', ls="--", label="region of max")
    plt.xticks(fontsize=22)
    plt.yticks(fontsize=22)
    plt.xlabel("Days since anchor", fontsize=30)
    plt.ylabel("FLUXCAL", fontsize=30)
    ax = plt.gca()
    leg = ax.legend(fontsize=17, loc='upper right')
    leg.set_title(title, prop={'size':17})
    plt.tight_layout()
    plt.savefig("results/perfect_single_band_extracted.pdf")

fig = get_extracted_region(2311, 'r', title='Extracted perfect lightcurve')
```
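The interpolated PCs plotted above are loaded from a precomputed file. Purely as an illustration of the operation involved (the file may well have been generated with a different method), up-sampling the 51-point PCs onto the 401-point grid could be done with `scipy`:

```
# Sketch: cubic up-sampling of the coarse PCs onto the fine time grid.
import numpy as np
from scipy.interpolate import interp1d

x_coarse = np.arange(0, 102, 2) - 50      # 51 points, same grid as PCs_mixed
x_fine = np.linspace(-50, 50, num=401)    # 401 points, same grid as x_interpolated

pcs_interp_demo = np.array([interp1d(x_coarse, pc, kind="cubic")(x_fine) for pc in PCs_mixed])
print(pcs_interp_demo.shape)  # expected: (3, 401)
```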
# Get DrugBank Drug-Target Interactions

This notebook gets the drug-target interactions from DrugBank and formats them as a nice JSON.

## Installation

Bio2BEL DrugBank must be installed from GitHub first using the following command in the terminal:

```bash
python3 -m pip install git+https://github.com/bio2bel/drugbank.git@master
```

## Imports

```
import json
import sys
import time

import rdkit

import bio2bel
import bio2bel_drugbank
from bio2bel_drugbank.models import *
```

## Runtime Environment

```
print(sys.version)
print(time.asctime())
print('Bio2BEL version:', bio2bel.get_version())
print('Bio2BEL DrugBank version:', bio2bel_drugbank.get_version())

drugbank_manager = bio2bel_drugbank.Manager()
drugbank_manager
```

## Data Download

If you'd like to populate DrugBank yourself, you need to ensure that there's a folder called `~/.pybel/bio2bel/drugbank` in which the file contained at https://www.drugbank.ca/releases/5-1-0/downloads/all-full-database is downloaded.

```
if not drugbank_manager.is_populated():
    drugbank_manager.populate()

drugbank_manager.summarize()
```

## Data Processing

```
drugbank_manager.list_groups()

# this can be swapped for any of the other groups as well
approved = drugbank_manager.get_group_by_name('approved')
```

## Output

```
%%time
output_json = [
    {
        'drugbank_id': drug.drugbank_id,
        'name': drug.name,
        'cas_number': drug.cas_number,
        'inchi': drug.inchi,
        'inchikey': drug.inchikey,
        'targets': [
            {
                'uniprot_id': interaction.protein.uniprot_id,
                'uniprot_accession': interaction.protein.uniprot_accession,
                'name': interaction.protein.name,
                'hgnc_id': interaction.protein.hgnc_id,
                'articles': [
                    article.pubmed_id
                    for article in interaction.articles
                ]
            }
            for interaction in drug.protein_interactions
        ]
    }
    for drug in approved.drugs
]

with open('drugbank-targets.json', 'w') as f:
    json.dump(output_json, f)
```

```
%%time
with open('drugbank-interactions.tsv', 'w') as file:
    print(
        'drug_name',
        'drug_drugbank_id',
        'protein_name',
        'protein_uniprot_id',
        'protein_species',
        'pubmed_id',
        sep='\t',
        file=file
    )
    for drug_target_interaction in drugbank_manager.list_drug_protein_interactions():
        drug = drug_target_interaction.drug
        protein = drug_target_interaction.protein
        for article in drug_target_interaction.articles:
            print(
                drug.name,
                drug.drugbank_id,
                protein.name,
                protein.uniprot_id,
                protein.species.name,
                article.pubmed_id,
                sep='\t',
                file=file,
            )
```
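As a quick sanity check on the export (a sketch, assuming the `drugbank-targets.json` file written by the cell above), the JSON can be loaded back and summarized with `pandas`, for example to see how many protein targets each approved drug has:

```
import json

import pandas as pd

with open('drugbank-targets.json') as f:
    drugs = json.load(f)

# number of protein targets per approved drug, largest first
target_counts = pd.Series({drug['name']: len(drug['targets']) for drug in drugs}).sort_values(ascending=False)

print(len(drugs), 'approved drugs exported')
target_counts.head(10)
```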
<h1>Script-mode Custom Training Container (2)</h1>

This notebook demonstrates how to build and use a custom Docker container for training with Amazon SageMaker that leverages the <strong>Script Mode</strong> execution implemented by the sagemaker-training-toolkit library. Reference documentation is available at https://github.com/aws/sagemaker-training-toolkit.

The difference from the first example is that we do not copy the training code into the container during the Docker build; instead, it is loaded dynamically from Amazon S3 (this feature is implemented through the sagemaker-training-toolkit).

We start by defining some variables like the current execution role, the ECR repository that we are going to use for pushing the custom Docker container, and a default Amazon S3 bucket to be used by Amazon SageMaker.

```
import boto3
import sagemaker
from sagemaker import get_execution_role

ecr_namespace = 'sagemaker-training-containers/'
prefix = 'tf-script-mode-container-2'

ecr_repository_name = ecr_namespace + prefix
role = get_execution_role()
account_id = role.split(':')[4]
region = boto3.Session().region_name
sagemaker_session = sagemaker.session.Session()
bucket = sagemaker_session.default_bucket()

print(account_id)
print(region)
print(role)
print(bucket)
```

Let's take a look at the Dockerfile which defines the statements for building our script-mode custom training container:

```
! pygmentize ../docker/Dockerfile
```

At a high level, the Dockerfile specifies the following operations for building this container:
<ul>
    <li>Start from Ubuntu 16.04</li>
    <li>Define some variables to be used at build time to install Python 3</li>
    <li>A handful of useful libraries are installed with apt-get</li>
    <li>We then install Python 3 and create a symbolic link</li>
    <li>We install some Python libraries like numpy, pandas, ScikitLearn, etc.</li>
    <li>We set a few environment variables, including PYTHONUNBUFFERED which is used to avoid buffering Python standard output (useful for logging)</li>
    <li>We install the <strong>sagemaker-training-toolkit</strong> library</li>
</ul>

<h3>Build and push the container</h3>

We are now ready to build this container and push it to Amazon ECR. This task is executed using a shell script stored in the ../scripts/ folder. Let's take a look at this script and then execute it.

```
! pygmentize ../scripts/build_and_push.sh
```

<h3>--------------------------------------------------------------------------------------------------------------------</h3>

The script builds the Docker container, then creates the repository if it does not exist, and finally pushes the container to the ECR repository. The build task takes a few minutes the first time it is executed; afterwards, Docker caches the build outputs and reuses them for subsequent builds.

```
%%capture
! ../scripts/build_and_push.sh $account_id $region $ecr_repository_name
```

<h3>Training with Amazon SageMaker</h3>

Once we have correctly pushed our container to Amazon ECR, we are ready to start training with Amazon SageMaker, which requires the ECR path to the Docker container used for training as a parameter for starting a training job.

```
container_image_uri = '{0}.dkr.ecr.{1}.amazonaws.com/{2}:latest'.format(account_id, region, ecr_repository_name)
print(container_image_uri)
```

Since the purpose of this example is to explain how to build custom script-mode containers, we are not going to train a real model.
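Before looking at the actual script, it helps to know that a script-mode entry point is just a plain Python program: hyperparameters arrive as command-line arguments, while data channels and output locations are exposed through `SM_*` environment variables. The following is a minimal, purely illustrative sketch (it is not the `source_dir/train.py` used in this notebook); the hyperparameter names match the ones passed later on.

```
# Illustrative script-mode entry point (not the actual source_dir/train.py).
import argparse
import os

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # hyperparameters are forwarded to the script as command-line arguments
    parser.add_argument('--hp1', type=str, default='default')
    parser.add_argument('--hp2', type=int, default=100)
    parser.add_argument('--hp3', type=float, default=0.1)
    # data channels and the model output directory are exposed as environment variables
    parser.add_argument('--train', type=str, default=os.environ.get('SM_CHANNEL_TRAIN'))
    parser.add_argument('--validation', type=str, default=os.environ.get('SM_CHANNEL_VALIDATION'))
    parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR'))
    args, _ = parser.parse_known_args()

    print('Hyperparameters:', args.hp1, args.hp2, args.hp3)
    print('Training data location:', args.train)

    # ... a dummy training loop would go here ...

    # anything written to the model directory is packaged and uploaded to S3
    with open(os.path.join(args.model_dir, 'model.txt'), 'w') as f:
        f.write('dummy model artifact')
```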
The script that will be executed does not define any specific training logic; it just outputs the configuration injected by SageMaker and implements a dummy training loop. Training data is also dummy. Let's analyze the script first:

```
! pygmentize source_dir/train.py
```

As you can see, the training code is implemented as a standard Python script, which is invoked by the sagemaker-training-toolkit library with the hyperparameters passed as arguments. This way of invoking training scripts is called <strong>Script Mode</strong> for Amazon SageMaker containers.

Now, we upload some dummy data to Amazon S3, in order to define our S3-based training channels.

```
container_image_uri = '057716757052.dkr.ecr.ap-northeast-2.amazonaws.com/sagemaker-training-containers/tf-script-mode-container-2:latest'
%store container_image_uri

! echo "val1, val2, val3" > dummy.csv

print(sagemaker_session.upload_data('dummy.csv', bucket, prefix + '/train'))
print(sagemaker_session.upload_data('dummy.csv', bucket, prefix + '/val'))

! rm dummy.csv
```

We want to dynamically run user-provided code loaded from Amazon S3, so we need to:
<ul>
    <li>Package the <strong>source_dir</strong> folder in a tar.gz archive</li>
    <li>Upload the archive to Amazon S3</li>
    <li>Specify the path to the archive in Amazon S3 as one of the parameters of the training job</li>
</ul>

<strong>Note:</strong> these steps are executed automatically by the Amazon SageMaker Python SDK when using framework estimators for MXNet, Tensorflow, etc.

```
import os
import tarfile
import tempfile

def create_tar_file(source_files, target=None):
    if target:
        filename = target
    else:
        _, filename = tempfile.mkstemp()

    with tarfile.open(filename, mode="w:gz") as t:
        for sf in source_files:
            # Add all files from the directory into the root of the directory structure of the tar
            t.add(sf, arcname=os.path.basename(sf))
    return filename

create_tar_file(["source_dir/train.py", "source_dir/utils.py"], "sourcedir.tar.gz")

sources = sagemaker_session.upload_data('sourcedir.tar.gz', bucket, prefix + '/code')
print(sources)

! rm sourcedir.tar.gz
```

When starting the training job, we need to let the sagemaker-training-toolkit library know where the sources are stored in Amazon S3 and which module to invoke. These parameters are specified through the following reserved hyperparameters (they are injected automatically when using framework estimators of the Amazon SageMaker Python SDK):

<ul>
    <li>sagemaker_program</li>
    <li>sagemaker_submit_directory</li>
</ul>

Finally, we can execute the training job by calling the fit() method of the generic Estimator object defined in the Amazon SageMaker Python SDK (https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/estimator.py). This corresponds to calling the CreateTrainingJob() API (https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html).

```
import sagemaker
import json

# JSON encode hyperparameters.
def json_encode_hyperparameters(hyperparameters):
    return {str(k): json.dumps(v) for (k, v) in hyperparameters.items()}

hyperparameters = json_encode_hyperparameters({
    "sagemaker_program": "train.py",
    "sagemaker_submit_directory": sources,
    "hp1": "value1",
    "hp2": 300,
    "hp3": 0.001})

est = sagemaker.estimator.Estimator(container_image_uri,
                                    role,
                                    train_instance_count=1,
                                    train_instance_type='local',
                                    base_job_name=prefix,
                                    hyperparameters=hyperparameters)

train_config = sagemaker.session.s3_input('s3://{0}/{1}/train/'.format(bucket, prefix), content_type='text/csv')
val_config = sagemaker.session.s3_input('s3://{0}/{1}/val/'.format(bucket, prefix), content_type='text/csv')

est.fit({'train': train_config, 'validation': val_config})
```

<h3>Training with a custom SDK framework estimator</h3>

As you have seen, in the previous steps we had to upload our code to Amazon S3 and then inject reserved hyperparameters to execute training. To facilitate this task, you can also define a custom framework estimator using the Amazon SageMaker Python SDK and run training with that class, which takes care of these steps for you.

```
from sagemaker.estimator import Framework

class CustomFramework(Framework):
    def __init__(
        self,
        entry_point,
        source_dir=None,
        hyperparameters=None,
        py_version="py3",
        framework_version=None,
        image_name=None,
        distributions=None,
        **kwargs
    ):
        super(CustomFramework, self).__init__(
            entry_point, source_dir, hyperparameters, image_name=image_name, **kwargs
        )

    def _configure_distribution(self, distributions):
        return

    def create_model(
        self,
        model_server_workers=None,
        role=None,
        vpc_config_override=None,
        entry_point=None,
        source_dir=None,
        dependencies=None,
        image_name=None,
        **kwargs
    ):
        return None

import sagemaker

est = CustomFramework(image_name=container_image_uri,
                      role=role,
                      entry_point='train.py',
                      source_dir='source_dir/',
                      train_instance_count=1,
                      train_instance_type='local',  # we use local mode
                      #train_instance_type='ml.m5.xlarge',
                      base_job_name=prefix,
                      hyperparameters={
                          "hp1": "value1",
                          "hp2": "300",
                          "hp3": "0.001"
                      })

train_config = sagemaker.session.s3_input('s3://{0}/{1}/train/'.format(bucket, prefix), content_type='text/csv')
val_config = sagemaker.session.s3_input('s3://{0}/{1}/val/'.format(bucket, prefix), content_type='text/csv')

est.fit({'train': train_config, 'validation': val_config})
```

## Test in Cloud

```
from sagemaker.estimator import Framework

class CustomFramework(Framework):
    def __init__(
        self,
        entry_point,
        source_dir=None,
        hyperparameters=None,
        py_version="py3",
        framework_version=None,
        image_name=None,
        distributions=None,
        **kwargs
    ):
        super(CustomFramework, self).__init__(
            entry_point, source_dir, hyperparameters, image_name=image_name, **kwargs
        )

    def _configure_distribution(self, distributions):
        return

    def create_model(
        self,
        model_server_workers=None,
        role=None,
        vpc_config_override=None,
        entry_point=None,
        source_dir=None,
        dependencies=None,
        image_name=None,
        **kwargs
    ):
        return None

import sagemaker

est = CustomFramework(image_name=container_image_uri,
                      role=role,
                      entry_point='train.py',
                      source_dir='source_dir/',
                      train_instance_count=1,
                      # train_instance_type='local',  # we use local mode
                      train_instance_type='ml.m5.xlarge',
                      base_job_name=prefix,
                      hyperparameters={
                          "hp1": "value1",
                          "hp2": "300",
                          "hp3": "0.001"
                      })

train_config = sagemaker.session.s3_input('s3://{0}/{1}/train/'.format(bucket, prefix), content_type='text/csv')
val_config = sagemaker.session.s3_input('s3://{0}/{1}/val/'.format(bucket, prefix), content_type='text/csv')

est.fit({'train': train_config, 'validation': val_config})
```
```
%matplotlib inline
```

<div class="alert alert-danger">
<h3>Disclaimer</h3>
The package <code>nistats</code> will soon be merged into <code>nilearn</code> and all of its functionality will be available in the release of <code>nilearn</code> 0.7.0 in late 2020. Instead of using the retired version of <code>nistats</code>, we decided to already provide a brief spoiler of how things look in the new version (by installing <code>nilearn</code> from the current <code>main branch</code>). While drastic changes regarding functions and modules are not expected, please watch out for smaller differences like <code>arguments</code>, <code>function/argument names/defaults</code>, etc.
</div>

Nilearn GLM: statistical analyses of MRI in Python
=========================================================

[Nilearn]()'s [GLM/stats]() module allows fast and easy MRI statistical analysis. It leverages [Nibabel]() and other Python libraries from the Python scientific stack like [Scipy](), [Numpy]() and [Pandas]().

In this tutorial, we're going to explore `nilearn's GLM` functionality by analyzing 1) a single-subject, single-run example and 2) a three-subject group-level example using a General Linear Model (GLM). We're going to use the same example dataset (ds000114) as in the `nibabel` and `nilearn` tutorials. As this is a multi-run, multi-task dataset, we have to decide on a run and a task we want to analyze. Let's go with `ses-test` and `task-fingerfootlips`, starting with a single subject `sub-01`.

# Individual level analysis

Setting and inspecting the data
=========================

First, we have to indicate the data we want to analyze. As stated above, we're going to use the anatomical image and the preprocessed functional image of `sub-01` from `ses-test`. The preprocessing was conducted through [fmriprep](https://fmriprep.readthedocs.io/en/stable/index.html).

```
fmri_img = '/data/ds000114/derivatives/fmriprep/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_space-MNI152nlin2009casym_desc-preproc_bold.nii.gz'
anat_img = '/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz'
```

We can display the mean functional image and the subject's anatomy:

```
from nilearn.image import mean_img
mean_img = mean_img(fmri_img)

from nilearn.plotting import plot_stat_map, plot_anat, plot_img, show, plot_glass_brain
plot_img(mean_img)
plot_anat(anat_img)
```

Specifying the experimental paradigm
------------------------------------

We must now provide a description of the experiment, that is, define the timing of the task and rest periods. This is typically provided in an events.tsv file.

```
import pandas as pd
events = pd.read_table('/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_events.tsv')
print(events)
```

Performing the GLM analysis
---------------------------

It is now time to create and estimate a ``FirstLevelModel`` object, which will generate the *design matrix* using the information provided by the ``events`` object.

```
from nilearn.glm.first_level import FirstLevelModel
```

There are a lot of important parameters one needs to define within a `FirstLevelModel`, and the majority of them will have a prominent influence on your results. Thus, make sure to check them before running your model:

```
FirstLevelModel?
```

We need the TR of the functional images; luckily, we can extract that information using `nibabel`:

```
!nib-ls /data/ds000114/derivatives/fmriprep/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_space-MNI152nlin2009casym_desc-preproc_bold.nii.gz
```

As we can see, the `TR` is 2.5.

```
fmri_glm = FirstLevelModel(t_r=2.5,
                           noise_model='ar1',
                           hrf_model='spm',
                           drift_model='cosine',
                           high_pass=1./160,
                           signal_scaling=False,
                           minimize_memory=False)
```

Usually, we also want to include confounds computed during preprocessing (e.g., motion, global signal, etc.) as regressors of no interest. In our example, these were computed by `fmriprep` and can be found in `derivatives/fmriprep/sub-01/func/`. We can use `pandas` to inspect that file:

```
import pandas as pd
confounds = pd.read_csv('/data/ds000114/derivatives/fmriprep/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold_desc-confounds_timeseries.tsv', delimiter='\t')
confounds
```

Comparable to other neuroimaging software packages, we get a timepoint x confound dataframe. However, `fmriprep` computes many more confounds than most of you are used to, and they require a bit of reading to understand and therefore utilize properly. For the sake of simplicity, we therefore stick to the "classic" ones: `WhiteMatter`, `GlobalSignal`, `FramewiseDisplacement` and the `motion correction parameters` in translation and rotation:

```
import numpy as np
confounds_glm = confounds[['WhiteMatter', 'GlobalSignal', 'FramewiseDisplacement', 'X', 'Y', 'Z', 'RotX', 'RotY', 'RotZ']].replace(np.nan, 0)
confounds_glm
```

Now that we have specified the model, we can run it on the fMRI image:

```
fmri_glm = fmri_glm.fit(fmri_img, events, confounds_glm)
```

One can inspect the design matrix (rows represent time, and columns contain the predictors).

```
design_matrix = fmri_glm.design_matrices_[0]
```

Formally, we have taken the first design matrix, because the model is implicitly meant to handle multiple runs.

```
from nilearn.plotting import plot_design_matrix
plot_design_matrix(design_matrix)

import matplotlib.pyplot as plt
plt.show()
```

Save the design matrix image to disk, first creating a directory where you want to write the images:

```
import os
outdir = 'results'
if not os.path.exists(outdir):
    os.mkdir(outdir)

from os.path import join
plot_design_matrix(design_matrix, output_file=join(outdir, 'design_matrix.png'))
```

The first column contains the expected response profile of regions which are sensitive to the "Finger" task. Let's plot this first column:

```
plt.plot(design_matrix['Finger'])
plt.xlabel('scan')
plt.title('Expected Response for condition "Finger"')
plt.show()
```

Detecting voxels with significant effects
-----------------------------------------

To access the estimated coefficients (the betas of the GLM), we create contrasts with a single '1' in the column of the condition of interest. The role of a contrast is to select some columns of the model (and potentially weight them) to study the associated statistics. So, in a nutshell, a contrast is a weighted combination of the estimated effects. Here we define canonical contrasts that simply consider each condition in isolation; one could also define a contrast that takes the difference between two conditions.
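In formulas (a standard GLM result, not specific to this dataset): if $X$ is the design matrix, $\hat{\beta}$ the estimated coefficients, $\hat{\sigma}^2$ the estimated noise variance and $c$ the contrast vector, then the contrast effect and the corresponding t-statistic are

$$
\text{effect} = c^\top \hat{\beta},
\qquad
t = \frac{c^\top \hat{\beta}}{\sqrt{\hat{\sigma}^2 \, c^\top (X^\top X)^{-} c}}
$$

where $(X^\top X)^{-}$ denotes a (pseudo-)inverse. The z-maps computed below are obtained by converting these t-values to the standard normal scale.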
``` from numpy import array conditions = { 'active - Finger': array([1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]), 'active - Foot': array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]), 'active - Lips': array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]), } ``` Let's look at it: plot the coefficients of the contrast, indexed by the names of the columns of the design matrix. ``` from nilearn.plotting import plot_contrast_matrix plot_contrast_matrix(conditions['active - Finger'], design_matrix=design_matrix) ``` Below, we compute the estimated effect. It is in BOLD signal units, but has no statistical guarantees, because it does not take into account the associated variance. ``` eff_map = fmri_glm.compute_contrast(conditions['active - Finger'], output_type='effect_size') ``` In order to get statistical significance, we form a t-statistic and directly convert it into z-scale. The z-scale means that the values are scaled to match a standard Gaussian distribution (mean=0, variance=1), across voxels, if there were no effects in the data. ``` z_map = fmri_glm.compute_contrast(conditions['active - Finger'], output_type='z_score') ``` Plot the thresholded z-score map. We display it on top of the average functional image of the series (this could also be the anatomical image of the subject). We arbitrarily use a threshold of 3.0 in z-scale. We'll see later how to use corrected thresholds. We choose to display 3 axial views: display_mode='z', cut_coords=3 ``` plot_stat_map(z_map, bg_img=mean_img, threshold=3.0, display_mode='z', cut_coords=3, black_bg=True, title='active - Finger (Z>3)') plt.show() plot_glass_brain(z_map, threshold=3.0, black_bg=True, plot_abs=False, title='active - Finger (Z>3)') plt.show() ``` Statistical significance testing. One should worry about the statistical validity of the procedure: here we used an arbitrary threshold of 3.0, but the threshold should provide some guarantees on the risk of false detections (aka type-1 errors in statistics). One first suggestion is to control the false positive rate (fpr) at a certain level, e.g. 0.001: this means that there is a 0.1% chance of declaring an inactive voxel active. ``` from nilearn.glm.thresholding import threshold_stats_img _, threshold = threshold_stats_img(z_map, alpha=.001, height_control='fpr') print('Uncorrected p<0.001 threshold: %.3f' % threshold) plot_stat_map(z_map, bg_img=mean_img, threshold=threshold, display_mode='z', cut_coords=3, black_bg=True, title='active - Finger (p<0.001)') plt.show() plot_glass_brain(z_map, threshold=threshold, black_bg=True, plot_abs=False, title='active - Finger (p<0.001)') plt.show() ``` The problem is that with this threshold you expect about 0.001 * n_voxels voxels to show up even though they are not active --- tens to hundreds of voxels. A more conservative solution is to control the family-wise error rate, i.e. the probability of making one or more false detections, say at 5%. For that we use the so-called Bonferroni correction: ``` _, threshold = threshold_stats_img(z_map, alpha=.05, height_control='bonferroni') print('Bonferroni-corrected, p<0.05 threshold: %.3f' % threshold) plot_stat_map(z_map, bg_img=mean_img, threshold=threshold, display_mode='z', cut_coords=3, black_bg=True, title='active - Finger (p<0.05, corrected)') plt.show() plot_glass_brain(z_map, threshold=threshold, black_bg=True, plot_abs=False, title='active - Finger (p<0.05, corrected)') plt.show() ``` This is quite conservative indeed!
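To make the Bonferroni idea concrete, here is a hedged back-of-the-envelope sketch. It is not the exact `nilearn` implementation (which counts the voxels inside the analysis mask); it simply divides the desired alpha by a rough voxel count taken from the non-zero voxels of `z_map`.

```python
# Rough illustration of the Bonferroni logic: per-voxel alpha = alpha / n_voxels,
# converted back to a z threshold. The non-zero voxel count is only a crude
# proxy for the number of voxels actually tested inside the brain mask.
import numpy as np
from scipy.stats import norm

z_data = z_map.get_fdata()
n_voxels = int(np.count_nonzero(z_data))
z_bonferroni = norm.isf(0.05 / n_voxels)
print('approx. Bonferroni z threshold for %d voxels: %.3f' % (n_voxels, z_bonferroni))
```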
A popular alternative is to control the false discovery rate (FDR), i.e. the expected proportion of false discoveries among detections. ``` _, threshold = threshold_stats_img(z_map, alpha=.05, height_control='fdr') print('False Discovery rate = 0.05 threshold: %.3f' % threshold) plot_stat_map(z_map, bg_img=mean_img, threshold=threshold, display_mode='z', cut_coords=3, black_bg=True, title='active - Finger (fdr=0.05)') plt.show() plot_glass_brain(z_map, threshold=threshold, black_bg=True, plot_abs=False, title='active - Finger (fdr=0.05)') plt.show() ``` Finally, people like to discard isolated voxels (aka "small clusters") from these images. It is possible to generate a thresholded map with small clusters removed by providing a cluster_threshold argument. Here, clusters smaller than 10 voxels will be discarded. ``` clean_map, threshold = threshold_stats_img( z_map, alpha=.05, height_control='fdr', cluster_threshold=10) plot_stat_map(clean_map, bg_img=mean_img, threshold=threshold, display_mode='z', cut_coords=3, black_bg=True, colorbar=False, title='active - Finger (fdr=0.05), clusters > 10 voxels') plt.show() plot_glass_brain(z_map, threshold=threshold, black_bg=True, plot_abs=False, title='active - Finger (fdr=0.05), clusters > 10 voxels)') plt.show() ``` We can save the effect and z-score maps to disk: ``` z_map.to_filename(join(outdir, 'sub-01_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap.nii.gz')) eff_map.to_filename(join(outdir, 'sub-01_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_effmap.nii.gz')) ``` Report the found positions in a table: ``` from nilearn.reporting import get_clusters_table table = get_clusters_table(z_map, stat_threshold=threshold, cluster_threshold=20) print(table) ``` This table can be saved for future use: ``` table.to_csv(join(outdir, 'table.csv')) ``` Or use [atlasreader](https://github.com/miykael/atlasreader) to get even more information and informative figures: ``` from atlasreader import create_output from os.path import join z_map.to_filename(join(outdir, 'active_finger_z_map.nii.gz')) create_output(join(outdir, 'active_finger_z_map.nii.gz'), cluster_extent=5, voxel_thresh=threshold) ``` Let's have a look at the csv file containing relevant information about the peak of each cluster. This table contains the cluster association and location of each peak, its signal value at this location, the cluster extent (in mm, not in number of voxels), as well as the membership of each peak, given a particular atlas. ``` peak_info = pd.read_csv('results/active_finger_z_map_peaks.csv') peak_info ``` And the clusters: ``` cluster_info = pd.read_csv('results/active_finger_z_map_clusters.csv') cluster_info ``` For each cluster, we also get a corresponding visualization, saved as `.png`: ``` from IPython.display import Image Image("results/active_finger_z_map.png") Image("results/active_finger_z_map_cluster01.png") Image("results/active_finger_z_map_cluster02.png") Image("results/active_finger_z_map_cluster03.png") ``` But wait, there's more! There's even functionality to create entire `GLM reports` including information regarding the `model` and its `parameters`, `design matrix`, `contrasts`, etc. All we need is the `make_glm_report` function from `nilearn.reporting`, which we apply to our `fitted GLM`, specifying a `contrast of interest`.
``` from nilearn.reporting import make_glm_report report = make_glm_report(fmri_glm, contrasts='Finger', bg_img=mean_img ) ``` Once generated, we have several options to view the `GLM report`: directly in the `notebook`, in the `browser`, or saved as an `html` file: ``` report #report.open_in_browser() #report.save_as_html("GLM_report.html") ``` ### Performing an F-test "active vs rest" is a typical t test: condition versus baseline. Another popular type of test is an F test, in which one seeks whether a certain combination of conditions (possibly two-, three- or higher-dimensional) explains a significant proportion of the signal. Here one might for instance test which voxels are well explained by the combination of the "Finger" and "Lips" conditions. ``` import numpy as np effects_of_interest = np.vstack((conditions['active - Finger'], conditions['active - Lips'])) plot_contrast_matrix(effects_of_interest, design_matrix) plt.show() ``` Specify the contrast and compute the corresponding map. Actually, the contrast specification is done exactly the same way as for t contrasts. ``` z_map = fmri_glm.compute_contrast(effects_of_interest, output_type='z_score') ``` Note that the statistic has been converted to a z-variable, which makes it easier to represent. ``` clean_map, threshold = threshold_stats_img( z_map, alpha=.05, height_control='fdr', cluster_threshold=0) plot_stat_map(clean_map, bg_img=mean_img, threshold=threshold, display_mode='z', cut_coords=3, black_bg=True, title='Effects of interest (fdr=0.05), clusters > 10 voxels', cmap='magma') plt.show() ``` ### Evaluating models While not commonly done, it's a very good and important idea to actually evaluate your model in terms of its fit. We can do that comprehensively, yet easily, through `nilearn` functionality. In more detail, we're going to inspect the residuals and evaluate the predicted time series. Let's do this for the peak voxels. First, we have to extract them using `get_clusters_table`: ``` table = get_clusters_table(z_map, stat_threshold=1, cluster_threshold=20).set_index('Cluster ID', drop=True) table.head() ``` From this `dataframe`, we get the `largest clusters` and prepare a `masker` to extract their `time series`: ``` from nilearn import input_data # get the largest clusters' max x, y, and z coordinates coords = table.loc[range(1, 7), ['X', 'Y', 'Z']].values # extract time series from each coordinate masker = input_data.NiftiSpheresMasker(coords) ``` #### Get and check model residuals We can simply obtain the `residuals` of the peak voxels from our `fitted model` by applying the prepared `masker` (and thus the `peak voxel` selection) to the `residuals` of our model: ``` resid = masker.fit_transform(fmri_glm.residuals[0]) ``` And now, we can plot them and evaluate our `peak voxels` based on their `distribution` of `residuals`: ``` # colors for each of the clusters colors = ['blue', 'navy', 'purple', 'magenta', 'olive', 'teal'] fig2, axs2 = plt.subplots(2, 3) axs2 = axs2.flatten() for i in range(0, 6): axs2[i].set_title('Cluster peak {}\n'.format(coords[i])) axs2[i].hist(resid[:, i], color=colors[i]) print('Mean residuals: {}'.format(resid[:, i].mean())) fig2.set_size_inches(12, 7) fig2.tight_layout() ``` #### Get and check predicted time series In order to evaluate the `predicted time series`, we need to extract them, as well as the `actual time series`.
To do so, we can use the `masker` again: ``` real_timeseries = masker.fit_transform(fmri_img) predicted_timeseries = masker.fit_transform(fmri_glm.predicted[0]) ``` Having obtained both `time series`, we can plot them against each other. To make it more informative, we will also visualize the respective `peak voxels` on the `mean functional image`: ``` from nilearn import plotting # plot the time series and corresponding locations fig1, axs1 = plt.subplots(2, 6) for i in range(0, 6): # plotting time series axs1[0, i].set_title('Cluster peak {}\n'.format(coords[i])) axs1[0, i].plot(real_timeseries[:, i], c=colors[i], lw=2) axs1[0, i].plot(predicted_timeseries[:, i], c='r', ls='--', lw=2) axs1[0, i].set_xlabel('Time') axs1[0, i].set_ylabel('Signal intensity', labelpad=0) # plotting image below the time series roi_img = plotting.plot_stat_map( z_map, cut_coords=[coords[i][2]], threshold=3.1, figure=fig1, axes=axs1[1, i], display_mode='z', colorbar=False, bg_img=mean_img, cmap='magma') roi_img.add_markers([coords[i]], colors[i], 300) fig1.set_size_inches(24, 14) ``` #### Plot the R-squared Another option to evaluate our model is to plot the `R-squared`, that is, the total amount of variance explained by our `GLM`. While this plot is informative, its interpretation is limited, as we can't tell whether a voxel exhibits a large `R-squared` because of a response to a `condition` in our experiment or because of `noise`. For such questions, one should employ `F-Tests` as shown above. However, as expected, we see that the `R-squared` decreases the further away `voxels` are from the `receive coils` (e.g. deeper in the brain). ``` plotting.plot_stat_map(fmri_glm.r_square[0], bg_img=mean_img, threshold=.1, display_mode='z', cut_coords=7, cmap='magma') ``` ## Group level statistics Now that we've explored the individual level analysis quite a bit, one might ask: but what about `group level` statistics? No problem at all, `nilearn`'s `GLM` functionality of course supports this as well. As in other software packages, we need to repeat the `individual level analysis` for each subject to obtain the same contrast images, which we can then submit to a `group level analysis`. ### Run individual level analysis for multiple participants By now, we know how to do this easily. Let's use a simple `for loop` to repeat the analysis from above for `sub-02` and `sub-03`.
``` for subject in ['02', '03']: # set the fMRI image fmri_img = '/data/ds000114/derivatives/fmriprep/sub-%s/ses-test/func/sub-%s_ses-test_task-fingerfootlips_space-MNI152nlin2009casym_desc-preproc_bold.nii.gz' %(subject, subject) # read in the events events = pd.read_table('/data/ds000114/sub-%s/ses-test/func/sub-%s_ses-test_task-fingerfootlips_events.tsv' %(subject, subject)) # read in the confounds confounds = pd.read_table('/data/ds000114/derivatives/fmriprep/sub-%s/ses-test/func/sub-%s_ses-test_task-fingerfootlips_bold_desc-confounds_timeseries.tsv' %(subject, subject)) # restrict the to be included confounds to a subset confounds_glm = confounds[['WhiteMatter', 'GlobalSignal', 'FramewiseDisplacement', 'X', 'Y', 'Z', 'RotX', 'RotY', 'RotZ']].replace(np.nan, 0) # run the GLM fmri_glm = fmri_glm.fit(fmri_img, events, confounds_glm) # compute the contrast as a z-map z_map = fmri_glm.compute_contrast(conditions['active - Finger'], output_type='z_score') # save the z-map z_map.to_filename(join(outdir, 'sub-%s_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap.nii.gz' %subject)) ``` ### Define a group level model As we now have the same contrast from multiple `subjects`, we can define our `group level model`. First, we need to gather the `individual contrast maps`: ``` from glob import glob list_z_maps = glob(join(outdir, 'sub-*_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap.nii.gz')) list_z_maps ``` Great! The next step is the definition of a `design matrix`. As we want to run a simple `one-sample t-test`, we just need to indicate as many `1`s as we have `z-maps`: ``` design_matrix = pd.DataFrame([1] * len(list_z_maps), columns=['intercept']) ``` Believe it or not, that's all it takes. In the next step we can already set up and run our model. It's basically identical to the `FirstLevelModel`: we need to define the `images` and the `design matrix`: ``` from nilearn.glm.second_level import SecondLevelModel second_level_model = SecondLevelModel() second_level_model = second_level_model.fit(list_z_maps, design_matrix=design_matrix) ``` The same holds true for `contrast computation`: ``` z_map_group = second_level_model.compute_contrast(output_type='z_score') ``` What do we get? After defining a liberal threshold of `p<0.001 (uncorrected)`, we can plot our computed `group level contrast image`: ``` from scipy.stats import norm p001_unc = norm.isf(0.001) plotting.plot_glass_brain(z_map_group, colorbar=True, threshold=p001_unc, title='Group Finger tapping (unc p<0.001)', plot_abs=False, display_mode='x', cmap='magma') plotting.show() ``` Well, not much going on there... But please remember that we only included three participants. Besides this rather simple model, `nilearn`'s `GLM` functionality of course also allows you to run `paired t-tests`, `two-sample t-tests`, `F-tests`, etc. As shown above, you can also define different `thresholds` and `multiple comparison corrections`. There's yet another cool thing we didn't talk about: it's possible to run analyses in a rather automated way if your dataset is in `BIDS`. ## Performing statistical analyses on BIDS datasets Even though model specification and fitting were already comparably easy and straightforward, it can get even better. `Nilearn`'s `GLM` functionality actually enables you to define models for multiple participants through one function by leveraging the `BIDS` standard. More precisely, the function `first_level_from_bids` takes the same input arguments as `FirstLevelModel` (e.g.
`t_r`, `hrf_model`, `high_pass`, etc.), but by defining the `BIDS raw` and `derivatives folder`, as well as a `task` and `space` label, it automatically extracts all the information necessary to run `individual level models` and creates the `model` itself for all participants. ``` from nilearn.glm.first_level import first_level_from_bids data_dir = '/data/ds000114/' task_label = 'fingerfootlips' space_label = 'MNI152nlin2009casym' derivatives_folder = 'derivatives/fmriprep' models, models_run_imgs, models_events, models_confounds = \ first_level_from_bids(data_dir, task_label, space_label, smoothing_fwhm=5.0, derivatives_folder=derivatives_folder, t_r=2.5, noise_model='ar1', hrf_model='spm', drift_model='cosine', high_pass=1./160, signal_scaling=False, minimize_memory=False) ``` Done, let's check if things work as expected. As an example, we will have a look at the information for `sub-01`. We're going to start with the `images`. ``` import os print([os.path.basename(run) for run in models_run_imgs[0]]) ``` Looks good. How about the confounds? ``` print(models_confounds[0][0]) ``` Ah, the `NaN`s again. Let's fix those as we did last time, but now for all participants. ``` models_confounds_no_nan = [] for confounds in models_confounds: models_confounds_no_nan.append(confounds[0].fillna(0)[['WhiteMatter', 'GlobalSignal', 'FramewiseDisplacement', 'X', 'Y', 'Z', 'RotX', 'RotY', 'RotZ']]) ``` Last but not least: how do the `events` look? ``` print(models_events[0][0]['trial_type'].value_counts()) ``` Fantastic, now we're ready to run our models. With a little `zip` magic this is done without a problem. We're also going to compute `z-maps` as before and plot them side by side. ``` from nilearn import plotting import matplotlib.pyplot as plt models_fitted = [] fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(12, 8.5)) model_and_args = zip(models, models_run_imgs, models_events, models_confounds_no_nan) for midx, (model, imgs, events, confounds) in enumerate(model_and_args): # fit the GLM model.fit(imgs, events, confounds) models_fitted.append(model) # compute the contrast of interest zmap = model.compute_contrast('Finger') plotting.plot_glass_brain(zmap, colorbar=False, threshold=p001_unc, title=('sub-' + model.subject_label), axes=axes[int(midx)-1], plot_abs=False, display_mode='x', cmap='magma') fig.suptitle('subjects z_map finger tapping (unc p<0.001)') plotting.show() ``` That looks about right. However, let's also check the `design matrix` ``` from nilearn.plotting import plot_design_matrix plot_design_matrix(models_fitted[0].design_matrices_[0]) ``` and the `contrast matrix`. ``` plot_contrast_matrix('Finger', models_fitted[0].design_matrices_[0]) plt.show() ``` Nothing to complain about here, so we can move on to the `group level model`. Instead of assembling `contrast images` from each participant, we also have the option to simply provide the `fitted individual level models` as input. ``` from nilearn.glm.second_level import SecondLevelModel second_level_input = models_fitted ``` That's all it takes, and we can run our `group level model` again. ``` second_level_model = SecondLevelModel() second_level_model = second_level_model.fit(second_level_input) ``` And after computing the `contrast` ``` zmap = second_level_model.compute_contrast( first_level_contrast='Finger') ``` we can plot the results again. ``` plotting.plot_glass_brain(zmap, colorbar=True, threshold=p001_unc, title='Group Finger tapping (unc p<0.001)', plot_abs=False, display_mode='x', cmap='magma') plotting.show() ``` That's all for now.
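As an optional appendix to the walkthrough above (an addition, not part of the original tutorial), the corrected thresholds shown at the individual level can be applied to the group map in exactly the same way, e.g. an FDR correction at alpha=0.05. With only three participants nothing may survive, so the sketch only plots if the threshold is finite.

```python
# Hedged sketch: FDR-correct the group-level z-map instead of using the
# uncorrected p<0.001 threshold from above.
import numpy as np
from nilearn.glm.thresholding import threshold_stats_img

_, fdr_threshold = threshold_stats_img(zmap, alpha=.05, height_control='fdr')
print('FDR=0.05 threshold on the group map: %.3f' % fdr_threshold)

if np.isfinite(fdr_threshold):
    plotting.plot_glass_brain(zmap, colorbar=True, threshold=fdr_threshold,
                              title='Group Finger tapping (FDR<0.05)',
                              plot_abs=False, display_mode='x', cmap='magma')
    plotting.show()
```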
Please note that we only showed a small part of what's possible. Make sure to check the documentation and the examples it includes. We hope we could show you how powerful `nilearn` becomes by including `GLM` functionality starting with the new release. While there's already a lot you can do, there will be even more in the future.
github_jupyter
%matplotlib inline fmri_img = '/data/ds000114/derivatives/fmriprep/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_space-MNI152nlin2009casym_desc-preproc_bold.nii.gz' anat_img = '/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz' from nilearn.image import mean_img mean_img = mean_img(fmri_img) from nilearn.plotting import plot_stat_map, plot_anat, plot_img, show, plot_glass_brain plot_img(mean_img) plot_anat(anat_img) import pandas as pd events = pd.read_table('/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_events.tsv') print(events) from nilearn.glm.first_level import FirstLevelModel FirstLevelModel? !nib-ls /data/ds000114/derivatives/fmriprep/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_space-MNI152nlin2009casym_desc-preproc_bold.nii.gz fmri_glm = FirstLevelModel(t_r=2.5, noise_model='ar1', hrf_model='spm', drift_model='cosine', high_pass=1./160, signal_scaling=False, minimize_memory=False) import pandas as pd confounds = pd.read_csv('/data/ds000114/derivatives/fmriprep/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold_desc-confounds_timeseries.tsv', delimiter='\t') confounds import numpy as np confounds_glm = confounds[['WhiteMatter', 'GlobalSignal', 'FramewiseDisplacement', 'X', 'Y', 'Z', 'RotX', 'RotY', 'RotZ']].replace(np.nan, 0) confounds_glm fmri_glm = fmri_glm.fit(fmri_img, events, confounds_glm) design_matrix = fmri_glm.design_matrices_[0] from nilearn.plotting import plot_design_matrix plot_design_matrix(design_matrix) import matplotlib.pyplot as plt plt.show() import os outdir = 'results' if not os.path.exists(outdir): os.mkdir(outdir) from os.path import join plot_design_matrix(design_matrix, output_file=join(outdir, 'design_matrix.png')) plt.plot(design_matrix['Finger']) plt.xlabel('scan') plt.title('Expected Response for condition "Finger"') plt.show() from numpy import array conditions = { 'active - Finger': array([1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]), 'active - Foot': array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]), 'active - Lips': array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]), } from nilearn.plotting import plot_contrast_matrix plot_contrast_matrix(conditions['active - Finger'], design_matrix=design_matrix) eff_map = fmri_glm.compute_contrast(conditions['active - Finger'], output_type='effect_size') z_map = fmri_glm.compute_contrast(conditions['active - Finger'], output_type='z_score') plot_stat_map(z_map, bg_img=mean_img, threshold=3.0, display_mode='z', cut_coords=3, black_bg=True, title='active - Finger (Z>3)') plt.show() plot_glass_brain(z_map, threshold=3.0, black_bg=True, plot_abs=False, title='active - Finger (Z>3)') plt.show() from nilearn.glm.thresholding import threshold_stats_img _, threshold = threshold_stats_img(z_map, alpha=.001, height_control='fpr') print('Uncorrected p<0.001 threshold: %.3f' % threshold) plot_stat_map(z_map, bg_img=mean_img, threshold=threshold, display_mode='z', cut_coords=3, black_bg=True, title='active - Finger (p<0.001)') plt.show() plot_glass_brain(z_map, threshold=threshold, black_bg=True, plot_abs=False, title='active - Finger (p<0.001)') plt.show() _, threshold = threshold_stats_img(z_map, alpha=.05, height_control='bonferroni') print('Bonferroni-corrected, p<0.05 threshold: %.3f' % threshold) plot_stat_map(z_map, bg_img=mean_img, threshold=threshold, display_mode='z', cut_coords=3, black_bg=True, title='active - Finger (p<0.05, corrected)') plt.show() 
plot_glass_brain(z_map, threshold=threshold, black_bg=True, plot_abs=False, title='active - Finger (p<0.05, corrected)') plt.show() _, threshold = threshold_stats_img(z_map, alpha=.05, height_control='fdr') print('False Discovery rate = 0.05 threshold: %.3f' % threshold) plot_stat_map(z_map, bg_img=mean_img, threshold=threshold, display_mode='z', cut_coords=3, black_bg=True, title='active - Finger (fdr=0.05)') plt.show() plot_glass_brain(z_map, threshold=threshold, black_bg=True, plot_abs=False, title='active - Finger (fdr=0.05)') plt.show() clean_map, threshold = threshold_stats_img( z_map, alpha=.05, height_control='fdr', cluster_threshold=10) plot_stat_map(clean_map, bg_img=mean_img, threshold=threshold, display_mode='z', cut_coords=3, black_bg=True, colorbar=False, title='active - Finger (fdr=0.05), clusters > 10 voxels') plt.show() plot_glass_brain(z_map, threshold=threshold, black_bg=True, plot_abs=False, title='active - Finger (fdr=0.05), clusters > 10 voxels)') plt.show() z_map.to_filename(join(outdir, 'sub-01_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap.nii.gz')) eff_map.to_filename(join(outdir, 'sub-01_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_effmap.nii.gz')) from nilearn.reporting import get_clusters_table table = get_clusters_table(z_map, stat_threshold=threshold, cluster_threshold=20) print(table) table.to_csv(join(outdir, 'table.csv')) from atlasreader import create_output from os.path import join z_map.to_filename(join(outdir, 'active_finger_z_map.nii.gz')) create_output(join(outdir, 'active_finger_z_map.nii.gz'), cluster_extent=5, voxel_thresh=threshold) peak_info = pd.read_csv('results/active_finger_z_map_peaks.csv') peak_info cluster_info = pd.read_csv('results/active_finger_z_map_clusters.csv') cluster_info from IPython.display import Image Image("results/active_finger_z_map.png") Image("results/active_finger_z_map_cluster01.png") Image("results/active_finger_z_map_cluster02.png") Image("results/active_finger_z_map_cluster03.png") from nilearn.reporting import make_glm_report report = make_glm_report(fmri_glm, contrasts='Finger', bg_img=mean_img ) report #report.open_in_browser() #report.save_as_html("GLM_report.html") import numpy as np effects_of_interest = np.vstack((conditions['active - Finger'], conditions['active - Lips'])) plot_contrast_matrix(effects_of_interest, design_matrix) plt.show() z_map = fmri_glm.compute_contrast(effects_of_interest, output_type='z_score') clean_map, threshold = threshold_stats_img( z_map, alpha=.05, height_control='fdr', cluster_threshold=0) plot_stat_map(clean_map, bg_img=mean_img, threshold=threshold, display_mode='z', cut_coords=3, black_bg=True, title='Effects of interest (fdr=0.05), clusters > 10 voxels', cmap='magma') plt.show() table = get_clusters_table(z_map, stat_threshold=1, cluster_threshold=20).set_index('Cluster ID', drop=True) table.head() from nilearn import input_data # get the largest clusters' max x, y, and z coordinates coords = table.loc[range(1, 7), ['X', 'Y', 'Z']].values # extract time series from each coordinate masker = input_data.NiftiSpheresMasker(coords) resid = masker.fit_transform(fmri_glm.residuals[0]) # colors for each of the clusters colors = ['blue', 'navy', 'purple', 'magenta', 'olive', 'teal'] fig2, axs2 = plt.subplots(2, 3) axs2 = axs2.flatten() for i in range(0, 6): axs2[i].set_title('Cluster peak {}\n'.format(coords[i])) axs2[i].hist(resid[:, i], color=colors[i]) print('Mean residuals: {}'.format(resid[:, i].mean())) fig2.set_size_inches(12, 7) 
fig2.tight_layout() real_timeseries = masker.fit_transform(fmri_img) predicted_timeseries = masker.fit_transform(fmri_glm.predicted[0]) from nilearn import plotting # plot the time series and corresponding locations fig1, axs1 = plt.subplots(2, 6) for i in range(0, 6): # plotting time series axs1[0, i].set_title('Cluster peak {}\n'.format(coords[i])) axs1[0, i].plot(real_timeseries[:, i], c=colors[i], lw=2) axs1[0, i].plot(predicted_timeseries[:, i], c='r', ls='--', lw=2) axs1[0, i].set_xlabel('Time') axs1[0, i].set_ylabel('Signal intensity', labelpad=0) # plotting image below the time series roi_img = plotting.plot_stat_map( z_map, cut_coords=[coords[i][2]], threshold=3.1, figure=fig1, axes=axs1[1, i], display_mode='z', colorbar=False, bg_img=mean_img, cmap='magma') roi_img.add_markers([coords[i]], colors[i], 300) fig1.set_size_inches(24, 14) plotting.plot_stat_map(fmri_glm.r_square[0], bg_img=mean_img, threshold=.1, display_mode='z', cut_coords=7, cmap='magma') for subject in ['02', '03']: # set the fMRI image fmri_img = '/data/ds000114/derivatives/fmriprep/sub-%s/ses-test/func/sub-%s_ses-test_task-fingerfootlips_space-MNI152nlin2009casym_desc-preproc_bold.nii.gz' %(subject, subject) # read in the events events = pd.read_table('/data/ds000114/sub-%s/ses-test/func/sub-%s_ses-test_task-fingerfootlips_events.tsv' %(subject, subject)) # read in the confounds confounds = pd.read_table('/data/ds000114/derivatives/fmriprep/sub-%s/ses-test/func/sub-%s_ses-test_task-fingerfootlips_bold_desc-confounds_timeseries.tsv' %(subject, subject)) # restrict the to be included confounds to a subset confounds_glm = confounds[['WhiteMatter', 'GlobalSignal', 'FramewiseDisplacement', 'X', 'Y', 'Z', 'RotX', 'RotY', 'RotZ']].replace(np.nan, 0) # run the GLM fmri_glm = fmri_glm.fit(fmri_img, events, confounds_glm) # compute the contrast as a z-map z_map = fmri_glm.compute_contrast(conditions['active - Finger'], output_type='z_score') # save the z-map z_map.to_filename(join(outdir, 'sub-%s_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap.nii.gz' %subject)) from glob import glob list_z_maps = glob(join(outdir, 'sub-*_ses-test_task-footfingerlips_space-MNI152nlin2009casym_desc-finger_zmap.nii.gz')) list_z_maps design_matrix = pd.DataFrame([1] * len(list_z_maps), columns=['intercept']) from nilearn.glm.second_level import SecondLevelModel second_level_model = SecondLevelModel() second_level_model = second_level_model.fit(list_z_maps, design_matrix=design_matrix) z_map_group = second_level_model.compute_contrast(output_type='z_score') from scipy.stats import norm p001_unc = norm.isf(0.001) plotting.plot_glass_brain(z_map_group, colorbar=True, threshold=p001_unc, title='Group Finger tapping (unc p<0.001)', plot_abs=False, display_mode='x', cmap='magma') plotting.show() from nilearn.glm.first_level import first_level_from_bids data_dir = '/data/ds000114/' task_label = 'fingerfootlips' space_label = 'MNI152nlin2009casym' derivatives_folder = 'derivatives/fmriprep' models, models_run_imgs, models_events, models_confounds = \ first_level_from_bids(data_dir, task_label, space_label, smoothing_fwhm=5.0, derivatives_folder=derivatives_folder, t_r=2.5, noise_model='ar1', hrf_model='spm', drift_model='cosine', high_pass=1./160, signal_scaling=False, minimize_memory=False) import os print([os.path.basename(run) for run in models_run_imgs[0]]) print(models_confounds[0][0]) models_confounds_no_nan = [] for confounds in models_confounds: models_confounds_no_nan.append(confounds[0].fillna(0)[['WhiteMatter', 
'GlobalSignal', 'FramewiseDisplacement', 'X', 'Y', 'Z', 'RotX', 'RotY', 'RotZ']]) print(models_events[0][0]['trial_type'].value_counts()) from nilearn import plotting import matplotlib.pyplot as plt models_fitted = [] fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(12, 8.5)) model_and_args = zip(models, models_run_imgs, models_events, models_confounds_no_nan) for midx, (model, imgs, events, confounds) in enumerate(model_and_args): # fit the GLM model.fit(imgs, events, confounds) models_fitted.append(model) # compute the contrast of interest zmap = model.compute_contrast('Finger') plotting.plot_glass_brain(zmap, colorbar=False, threshold=p001_unc, title=('sub-' + model.subject_label), axes=axes[int(midx)-1], plot_abs=False, display_mode='x', cmap='magma') fig.suptitle('subjects z_map finger tapping (unc p<0.001)') plotting.show() from nilearn.plotting import plot_design_matrix plot_design_matrix(models_fitted[0].design_matrices_[0]) plot_contrast_matrix('Finger', models_fitted[0].design_matrices_[0]) plt.show() from nilearn.glm.second_level import SecondLevelModel second_level_input = models_fitted second_level_model = SecondLevelModel() second_level_model = second_level_model.fit(second_level_input) zmap = second_level_model.compute_contrast( first_level_contrast='Finger') plotting.plot_glass_brain(zmap, colorbar=True, threshold=p001_unc, title='Group Finger tapping (unc p<0.001)', plot_abs=False, display_mode='x', cmap='magma') plotting.show()
0.418459
0.973241
<table class="ee-notebook-buttons" align="left"> <td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/JavaScripts/Image/ZeroCrossing.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td> <td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/Image/ZeroCrossing.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/Image/ZeroCrossing.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td> </table> ## Install Earth Engine API and geemap Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet. **Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving). ``` # Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import geemap.eefolium as emap except: import geemap as emap # Authenticates and initializes Earth Engine import ee try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ``` ## Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function. 
``` Map = emap.Map(center=[40,-100], zoom=4) Map.add_basemap('ROADMAP') # Add Google Map Map ``` ## Add Earth Engine Python script ``` # Add Earth Engine dataset # Mark pixels where the elevation crosses 1000m value and compare # that to pixels that are exactly equal to 1000m. elev = ee.Image('CGIAR/SRTM90_V4') # A zero-crossing is defined as any pixel where the right, # bottom, or diagonal bottom-right pixel has the opposite sign. image = elev.subtract(1000).zeroCrossing() Map.setCenter(-121.68148, 37.50877, 13) Map.addLayer(image, {'min': 0, 'max': 1, 'opacity': 0.5}, 'Crossing 1000m') exact = elev.eq(1000) Map.addLayer(exact.updateMask(exact), {'palette': 'red'}, 'Exactly 1000m') ``` ## Display Earth Engine data layers ``` Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map ```
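To make the zero-crossing definition used above more tangible, here is a small, hedged NumPy illustration. It is plain Python, not Earth Engine; the actual `ee.Image.zeroCrossing()` implementation may handle edges and exact zeros differently.

```python
# Toy NumPy version of the documented rule: flag a pixel when the pixel to its
# right, below it, or diagonally below-right has the opposite sign.
import numpy as np

values = np.array([[ 5.,  3., -2.],
                   [ 1., -1., -4.],
                   [-2., -3., -6.]])   # e.g. elevation minus 1000 m

sign = np.sign(values)
crossing = np.zeros(values.shape, dtype=bool)
crossing[:, :-1] |= sign[:, :-1] != sign[:, 1:]       # right neighbour
crossing[:-1, :] |= sign[:-1, :] != sign[1:, :]       # bottom neighbour
crossing[:-1, :-1] |= sign[:-1, :-1] != sign[1:, 1:]  # diagonal bottom-right
print(crossing.astype(int))
```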
github_jupyter
# Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import geemap.eefolium as emap except: import geemap as emap # Authenticates and initializes Earth Engine import ee try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() Map = emap.Map(center=[40,-100], zoom=4) Map.add_basemap('ROADMAP') # Add Google Map Map # Add Earth Engine dataset # Mark pixels where the elevation crosses 1000m value and compare # that to pixels that are exactly equal to 1000m. elev = ee.Image('CGIAR/SRTM90_V4') # A zero-crossing is defined as any pixel where the right, # bottom, or diagonal bottom-right pixel has the opposite sign. image = elev.subtract(1000).zeroCrossing() Map.setCenter(-121.68148, 37.50877, 13) Map.addLayer(image, {'min': 0, 'max': 1, 'opacity': 0.5}, 'Crossing 1000m') exact = elev.eq(1000) Map.addLayer(exact.updateMask(exact), {'palette': 'red'}, 'Exactly 1000m') Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map
0.680454
0.965316
``` import os import glob import re import datetime from datetime import date, time, timedelta import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn.metrics.pairwise import cosine_similarity from itertools import chain from cv2 import VideoCapture, CAP_PROP_FRAME_COUNT, CAP_PROP_FPS, CAP_PROP_POS_FRAMES import cv2 from PIL import Image import ffmpeg from imutils.video import FileVideoStream import imutils import time import glob from bokeh.plotting import figure, show from bokeh.io import output_notebook from bokeh.models import Span, DatetimeTicker, DatetimeTickFormatter import pytesseract sns.set() output_notebook() def mask_rh_corner(frame, w, h): if not (isinstance(w, (float, int)) and isinstance(h, (float, int))): raise ValueError(f"w and h must both be float or int type, instead got w: {type(w)}, h: {type(h)}") if isinstance(w, float): w = int(frame.shape[1] * (1 - w)) if isinstance(h, float): h = int(frame.shape[0] * h) frame[:h, w:, :] = 0 return frame ``` # Set video variables and build paths ``` #meeting_id = 160320 #meeting_id = 83512718053 meeting_id = 220120 #meeting_id = 170127 video_path = glob.glob(f'zoom_data/{meeting_id}/*.mp4')[0] print(video_path) diff_path = f'diff_data/diffs_{meeting_id}_pct_masked_cossim.csv' sc_labels = f'slide_change_labels/{meeting_id}.csv' ``` # Load Video ``` vidcap = VideoCapture(video_path) fps = vidcap.get(CAP_PROP_FPS) fps ``` # View frame - Word count - Average reading speed - Table with one row per slide, flagging those where the instructor went too fast - Summary table - What's on the slides - How much time - etc. ``` ## 83512718053: 26343 (noise), 26353 (slide change), 26404 (noise), 18744 (noise) vidcap.set(1, 18744) success, f = vidcap.read() f = mask_rh_corner(f, 0.17, 0.16) color_coverted = cv2.cvtColor(f, cv2.COLOR_BGR2RGB) pil_image_resized = Image.fromarray(imutils.resize(color_coverted, width=1000)) display(pil_image_resized) ``` ## String output ``` print(pytesseract.image_to_string(color_coverted)) print(pytesseract.image_to_string(color_coverted, config=r'--psm 6')) ``` ## Data output ``` df = pytesseract.image_to_data(color_coverted, output_type='data.frame') df['text_area'] = df.width * df.height text_df = df.loc[(df.text.notna()) & (df.text.str.strip() != ''), :] text_df text_area = text_df.text_area.sum() text_area frame_area = f.shape[0] * f.shape[1] frame_area text_area / frame_area ``` ## Boxes output ``` print(pytesseract.image_to_boxes(color_coverted)) ```
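Building on the word count computed above, here is a hedged sketch of the planned "average reading speed" check. The `slide_duration_s` value and the 130 words-per-minute comfortable-rate figure are illustrative assumptions only; the real duration would come from the detected slide-change frames divided by `fps`.

```python
# Hypothetical reading-speed flag for a single slide, using the OCR tokens above.
word_count = int(text_df.text.str.strip().astype(bool).sum())  # non-empty OCR tokens
slide_duration_s = 45                                          # assumed; derive from slide-change frames / fps
required_wpm = word_count / (slide_duration_s / 60)
print(f"{word_count} words shown for {slide_duration_s}s -> {required_wpm:.0f} wpm needed")
if required_wpm > 130:                                         # assumed comfortable reading rate
    print("Flag: slide may have changed too fast to read comfortably.")
```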
github_jupyter
import os import glob import re import datetime from datetime import date, time, timedelta import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn.metrics.pairwise import cosine_similarity from itertools import chain from cv2 import VideoCapture, CAP_PROP_FRAME_COUNT, CAP_PROP_FPS, CAP_PROP_POS_FRAMES import cv2 from PIL import Image import ffmpeg from imutils.video import FileVideoStream import imutils import time import glob from bokeh.plotting import figure, show from bokeh.io import output_notebook from bokeh.models import Span, DatetimeTicker, DatetimeTickFormatter import pytesseract sns.set() output_notebook() def mask_rh_corner(frame, w, h): if not (isinstance(w, (float, int)) and isinstance(h, (float, int))): raise ValueError(f"w and h must both be float or int type, instead got w: {type(w)}, h: {type(h)}") if isinstance(w, float): w = int(frame.shape[1] * (1 - w)) if isinstance(h, float): h = int(frame.shape[0] * h) frame[:h, w:, :] = 0 return frame #meeting_id = 160320 #meeting_id = 83512718053 meeting_id = 220120 #meeting_id = 170127 video_path = glob.glob(f'zoom_data/{meeting_id}/*.mp4')[0] print(video_path) diff_path = f'diff_data/diffs_{meeting_id}_pct_masked_cossim.csv' sc_labels = f'slide_change_labels/{meeting_id}.csv' vidcap = VideoCapture(video_path) fps = vidcap.get(CAP_PROP_FPS) fps ## 83512718053: 26343 (noise), 26353 (slide change), 26404 (noise), 18744 (noise) vidcap.set(1, 18744) success, f = vidcap.read() f = mask_rh_corner(f, 0.17, 0.16) color_coverted = cv2.cvtColor(f, cv2.COLOR_BGR2RGB) pil_image_resized = Image.fromarray(imutils.resize(color_coverted, width=1000)) display(pil_image_resized) print(pytesseract.image_to_string(color_coverted)) print(pytesseract.image_to_string(color_coverted, config=r'--psm 6')) df = pytesseract.image_to_data(color_coverted, output_type='data.frame') df['text_area'] = df.width * df.height text_df = df.loc[(df.text.notna()) & (df.text.str.strip() != ''), :] text_df text_area = text_df.text_area.sum() text_area frame_area = f.shape[0] * f.shape[1] frame_area text_area / frame_area print(pytesseract.image_to_boxes(color_coverted))
0.334481
0.392541
# Stochastic Gradient Descent Regression with StandardScaler This code template is for regression analysis using SGDRegressor, which is based on the Stochastic Gradient Descent approach, combined with the feature rescaling technique StandardScaler in a pipeline. ### Required Packages ``` import warnings import numpy as np import pandas as pd import seaborn as se import matplotlib.pyplot as plt from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error from sklearn.linear_model import SGDRegressor warnings.filterwarnings('ignore') ``` ### Initialization Filepath of the CSV file: ``` #filepath file_path= "" ``` List of features which are required for model training: ``` #x_values features=[] ``` Target feature for prediction: ``` #y_value target='' ``` ### Data Fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file using its storage path, and the head function to display the initial rows. ``` df=pd.read_csv(file_path) df.head() ``` ### Feature Selection Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used to reduce the number of input variables both to lower the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y. ``` X=df[features] Y=df[target] ``` ### Data Preprocessing Since the majority of the machine learning models in the Sklearn library don't handle string category data and null values, we have to explicitly remove or replace them. The snippet below has functions which remove null values if any exist and convert string category data in the dataset by one-hot encoding it. ``` def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) ``` Calling preprocessing functions on the feature and target set. ``` x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=NullClearner(Y) X.head() ``` #### Correlation Map In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ``` f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show() ``` ### Data Splitting The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data. ``` x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123) ``` ### Model Stochastic Gradient Descent (SGD) is a simple yet very efficient approach to fitting linear classifiers and regressors under convex loss functions such as (linear) Support Vector Machines and Logistic Regression.
SGD is merely an optimization technique and does not correspond to a specific family of machine learning models. It is only a way to train a model. Often, an instance of SGDClassifier or SGDRegressor will have an equivalent estimator in the scikit-learn API, potentially using a different optimization technique. For example, SGDRegressor(loss='squared_loss', penalty='l2') and Ridge solve the same optimization problem, via different means. #### Model Tuning Parameters > - **loss** -> The loss function to be used. The possible values are ‘squared_loss’, ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’ > - **penalty** -> The penalty (aka regularization term) to be used. Defaults to ‘l2’, which is the standard regularizer for linear SVM models. ‘l1’ and ‘elasticnet’ might bring sparsity to the model (feature selection) not achievable with ‘l2’. > - **alpha** -> Constant that multiplies the regularization term. The higher the value, the stronger the regularization. Also used to compute the learning rate when learning_rate is set to ‘optimal’. > - **l1_ratio** -> The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1. l1_ratio=0 corresponds to L2 penalty, l1_ratio=1 to L1. Only used if penalty is ‘elasticnet’. > - **tol** -> The stopping criterion > - **learning_rate** -> The learning rate schedule, possible values {'optimal','constant','invscaling','adaptive'} > - **eta0** -> The initial learning rate for the ‘constant’, ‘invscaling’ or ‘adaptive’ schedules. > - **power_t** -> The exponent for inverse scaling learning rate. > - **epsilon** -> Epsilon in the epsilon-insensitive loss functions; only if loss is ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’. ``` model=make_pipeline(StandardScaler(),SGDRegressor(random_state=123)) model.fit(x_train,y_train) ``` #### Model Accuracy We will use the trained model to make predictions on the test set, and then use the predicted values to measure the accuracy of our model. score: The score function returns the coefficient of determination R2 of the prediction. ``` print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100)) ``` > **r2_score**: The **r2_score** function computes the proportion of variability in the target that is explained by our model. > **mae**: The **mean absolute error** function calculates the average absolute distance between the real data and the predicted data. > **mse**: The **mean squared error** function calculates the average squared error, penalizing the model more heavily for large errors. ``` y_pred=model.predict(x_test) print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100)) print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred))) print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred))) ``` #### Prediction Plot We plot the first 20 actual test-set values against their record number, and overlay the model's predictions for the same records, to visually compare the predicted and true values. ``` plt.figure(figsize=(14,10)) plt.plot(range(20),y_test[0:20], color = "green") plt.plot(range(20),model.predict(x_test[0:20]), color = "red") plt.legend(["Actual","prediction"]) plt.title("Predicted vs True Value") plt.xlabel("Record number") plt.ylabel(target) plt.show() ``` #### Creator: Thilakraj Devadiga, Github: [Profile](https://github.com/Thilakraj1998)
github_jupyter
import warnings import numpy as np import pandas as pd import seaborn as se import matplotlib.pyplot as plt from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error from sklearn.linear_model import SGDRegressor warnings.filterwarnings('ignore') #filepath file_path= "" #x_values features=[] #y_value target='' df=pd.read_csv(file_path) df.head() X=df[features] Y=df[target] def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=NullClearner(Y) X.head() f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show() x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123) model=make_pipeline(StandardScaler(),SGDRegressor(random_state=123)) model.fit(x_train,y_train) print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100)) y_pred=model.predict(x_test) print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100)) print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred))) print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred))) plt.figure(figsize=(14,10)) plt.plot(range(20),y_test[0:20], color = "green") plt.plot(range(20),model.predict(x_test[0:20]), color = "red") plt.legend(["Actual","prediction"]) plt.title("Predicted vs True Value") plt.xlabel("Record number") plt.ylabel(target) plt.show()
0.323915
0.989938
# 0. DEPENDENCIES Fix for Jupyter Notebook imports: ``` import os import sys print(os.getcwd()) # sys.path.append("S:\Dropbox\\000 - CARND\CarND-T1-P5-Vehicle-Detection") for path in sys.path: print(path) ``` Remove the additional entry if needed: ``` # sys.path = sys.path[:-1] for path in sys.path: print(path) ``` Load all dependencies: ``` import matplotlib.image as mpimg import matplotlib.pyplot as plt import numpy as np import cv2 import glob import time import pickle from importlib import reload from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.svm import SVC, LinearSVC from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.naive_bayes import GaussianNB from sklearn.ensemble import AdaBoostClassifier from sklearn.ensemble import VotingClassifier from sklearn.pipeline import Pipeline from sklearn.metrics import confusion_matrix from sklearn.externals import joblib import src.helpers.constants as C import src.helpers.io as IO import src.helpers.features as FT import src.helpers.plot as PLT # RELOAD: reload(C) reload(IO) reload(FT) reload(PLT) ``` # 1. LOAD DATA First, load all images filenames on the datasets, split into cars and non-cars. Print the counts and percentages of each to verify that the dataset is balanced. ``` files_cars = glob.glob("../../input/images/dataset/vehicles/*/*.png") files_no_cars = glob.glob("../../input/images/dataset/non-vehicles/*/*.png") count_cars = len(files_cars) count_no_cars = len(files_no_cars) count_total = count_cars + count_no_cars percent_cars = 100 * count_cars / count_total percent_no_cars = 100 * count_no_cars / count_total print(" CAR IMAGES {0:5d} = {1:6.2f} %".format(count_cars, percent_cars)) print("NON-CAR IMAGES {0:5d} = {1:6.2f} %".format(count_no_cars, percent_no_cars)) print("-------------------------------") print(" TOTAL {0:5d} = 100.00 %".format(count_total)) ``` The data looks quite balanced, so no need to do any augmentation. Next, preload them and check their total size to see if it's feasible to preload them all in different color spaces. 
``` # Load all images (RGB only): imgs_cars = IO.load_images_rgb(files_cars) imgs_no_cars = IO.load_images_rgb(files_no_cars) # Calculate their size by dumping them: size_cars_b = sys.getsizeof(pickle.dumps(imgs_cars)) size_no_cars_b = sys.getsizeof(pickle.dumps(imgs_no_cars)) size_total_b = size_cars_b + size_no_cars_b ``` Print results in multiple units and calculate total for all channels: ``` size_cars_mb = size_cars_b / 1048576 size_no_cars_mb = size_no_cars_b / 1048576 size_total_mb = size_total_b / 1048576 size_all_spaces_mb = size_total_mb * (1 + len(C.COLOR_SPACES)) # RGB not included in C.COLOR_SPACES size_all_spaces_gb = size_all_spaces_mb / 1024 print(" CAR IMAGES SIZE = {0:12.2f} B = {1:6.2f} MB".format(size_cars_b, size_cars_mb)) print(" NON-CAR IMAGES SIZE = {0:12.2f} B = {1:6.2f} MB".format(size_no_cars_b, size_no_cars_mb)) print("---------------------------------------------------") print(" TOTAL SIZE = {0:12.2f} B = {1:6.2f} MB".format(size_total_b, size_total_mb)) print("ESTIMATED ALL SPACES SIZE = {0:12.2f} MB = {1:6.2f} GB".format(size_all_spaces_mb, size_all_spaces_gb)) ``` Free up space: ``` try: del imgs_cars except NameError: pass # Was not defined try: del imgs_no_cars except NameError: pass # Was not defined ``` Load all images in all color spaces: ``` # CARS: imgs_cars_rgb, \ imgs_cars_hsv, \ imgs_cars_luv, \ imgs_cars_hls, \ imgs_cars_yuv, \ imgs_cars_ycrcb, \ imgs_cars_gray = IO.load_images_all(files_cars) # NON-CARS: imgs_no_cars_rgb, \ imgs_no_cars_hsv, \ imgs_no_cars_luv, \ imgs_no_cars_hls, \ imgs_no_cars_yuv, \ imgs_no_cars_ycrcb, \ imgs_no_cars_gray = IO.load_images_all(files_no_cars) ``` Some basic checks: ``` # CARS: assert len(imgs_cars_rgb) == count_cars assert len(imgs_cars_hsv) == count_cars assert len(imgs_cars_luv) == count_cars assert len(imgs_cars_hls) == count_cars assert len(imgs_cars_yuv) == count_cars assert len(imgs_cars_ycrcb) == count_cars assert len(imgs_cars_gray) == count_cars # NON-CARS: assert len(imgs_no_cars_rgb) == count_no_cars assert len(imgs_no_cars_hsv) == count_no_cars assert len(imgs_no_cars_luv) == count_no_cars assert len(imgs_no_cars_hls) == count_no_cars assert len(imgs_no_cars_yuv) == count_no_cars assert len(imgs_no_cars_ycrcb) == count_no_cars assert len(imgs_no_cars_gray) == count_no_cars ``` Let's see what the raw data of those images look like (helpful when using `matplotlib image`): ``` print(imgs_cars_rgb[0][0, 0], np.amin(imgs_cars_rgb[0]), np.amax(imgs_cars_rgb[0])) print(imgs_cars_hsv[0][0, 0], np.amin(imgs_cars_hsv[0]), np.amax(imgs_cars_hsv[0])) print(imgs_cars_luv[0][0, 0], np.amin(imgs_cars_luv[0]), np.amax(imgs_cars_luv[0])) print(imgs_cars_hls[0][0, 0], np.amin(imgs_cars_hls[0]), np.amax(imgs_cars_hls[0])) print(imgs_cars_yuv[0][0, 0], np.amin(imgs_cars_yuv[0]), np.amax(imgs_cars_yuv[0])) print(imgs_cars_ycrcb[0][0, 0], np.amin(imgs_cars_ycrcb[0]), np.amax(imgs_cars_ycrcb[0])) print(imgs_cars_gray[0][0, 0], np.amin(imgs_cars_gray[0]), np.amax(imgs_cars_gray[0])) ``` Let's see how the actual car images look like: ``` start = np.random.randint(0, count_cars) PLT.showAll(imgs_cars_rgb[start:start+8], 8,) ``` And now the non-car ones: ``` start = np.random.randint(0, count_no_cars) PLT.showAll(imgs_no_cars_rgb[start:start+8], 8,) ``` Free up space: ``` # CARS: try: del imgs_cars_rgb except NameError: pass # Was not defined try: del imgs_cars_hsv except NameError: pass # Was not defined try: del imgs_cars_luv except NameError: pass # Was not defined try: del imgs_cars_hls except NameError: pass # Was not 
defined try: del imgs_cars_yuv except NameError: pass # Was not defined try: del imgs_cars_ycrcb except NameError: pass # Was not defined try: del imgs_cars_gray except NameError: pass # Was not defined # NON-CARS: try: del imgs_no_cars_rgb except NameError: pass # Was not defined try: del imgs_no_cars_hsv except NameError: pass # Was not defined try: del imgs_no_cars_luv except NameError: pass # Was not defined try: del imgs_no_cars_hls except NameError: pass # Was not defined try: del imgs_no_cars_yuv except NameError: pass # Was not defined try: del imgs_no_cars_ycrcb except NameError: pass # Was not defined try: del imgs_no_cars_gray except NameError: pass # Was not defined ``` ## SECTION'S CONCERNS, IMPROVEMENTS, TODOS... - Should images that belong to the same sequence be grouped together so that half of each of them can go to a different subset (training and test)? - __Images visualizations in different color spaces.__ # 2. EXTRACT FEATURES First, let's quickly check how HOG features look like for car and non-car HLS images: ``` # CAR: car_image = imgs_cars_hls[start] car_channels = [car_image[:,:,0], car_image[:,:,1], car_image[:,:,2]] car_hogs = FT.features_hog(car_image, 9, 12, 2, visualise=True)[2] PLT.showAll(car_channels + car_hogs, 6, "gray") # NON-CAR: non_car_image = imgs_no_cars_hls[start] non_car_channels = [non_car_image[:,:,0], non_car_image[:,:,1], non_car_image[:,:,2]] non_car_hogs = FT.features_hog(non_car_image, 9, 12, 2, visualise=True)[2] PLT.showAll(non_car_channels + non_car_hogs, 6, "gray") ``` Ok, so now we are ready to extract all the features from all the images: ``` # Use a subset to train params! # TODO: Add channel to all feature methods or check how I did it in project 4 # TOOO: Plot hog and histograms (cars VS non cars) ft_car_binned = FT.extract_binned_color(imgs_cars_hls, size=(8, 8)) ft_no_car_binned = FT.extract_binned_color(imgs_no_cars_hls, size=(8, 8)) print("BINNED") ft_car_hist = FT.extract_histogram_color(imgs_cars_hls, bins=32) ft_no_car_hist = FT.extract_histogram_color(imgs_no_cars_hls, bins=32) print("HIST") ft_car_hog = FT.extract_hog(imgs_cars_hls, orients=9, ppc=12, cpb=2) ft_no_car_hog = FT.extract_hog(imgs_no_cars_hls, orients=9, ppc=12, cpb=2) print("HOG") ``` # 3. TRAIN CLASSIFIER (SVM) First, generate the final features vectors: ``` features_car = FT.combine_features((ft_car_binned, ft_car_hist, ft_car_hog)) features_no_car = FT.combine_features((ft_no_car_binned, ft_no_car_hist, ft_no_car_hog)) print('Feature vector length:', len(features_car[0])) ``` Next, train a classifier with them and check some stats about its performance: ``` # Create an array stack of feature vectors and a vector of labels: X = np.vstack((features_car, features_no_car)).astype(np.float64) y = np.hstack((np.ones(count_cars), np.zeros(count_no_cars))) # Split up data into randomized training and test sets rand_state = np.random.randint(0, 1000) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=rand_state) # Create a Pipeline to be able to save scaler and classifier together: clf = Pipeline([ ('SCALER', StandardScaler()), ('CLASSIFIER', LinearSVC(loss="hinge")) # ('CLASSIFIER', SVC(kernel="linear")) ]) # Pipeline. See: http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html # SVC VS LinearSVC. 
See: https://stackoverflow.com/questions/35076586/linearsvc-vs-svckernel-linear-conflicting-arguments # Train the model: t0 = time.time() clf.fit(X_train, y_train) t = time.time() - t0 # Output model's stats: print(" TRAINING TIME = {0:2.4f} SEC".format(t)) print("TRAINING ACCURACY = {0:2.4f} %".format(clf.score(X_train, y_train))) print(" TEST ACCURACY = {0:2.4f} %".format(clf.score(X_test, y_test))) print("\nCONFUSION MATRIX (TRAIN / TEST / ALL):") t0 = time.time() y_train_pred = clf.predict(X_train) y_test_pred = clf.predict(X_test) y_pred = clf.predict(X) t = time.time() - t0 print(confusion_matrix(y_train, y_train_pred)) print(confusion_matrix(y_test, y_test_pred)) print(confusion_matrix(y, y_pred)) print("PREDICTION TIME = {0:2.4f} MS".format(t, 1000 * t / (2 * count_total))) # TODO: Automatically adjust classifier's params! # LinearSVC: # 0.9859 with loss="hinge" # 0.9840 with C=100, loss="hinge" # 0.9825 with nothing # 0.9825 with C=10, loss="hinge" # 0.9823 with dual=False # 0.9823 with C=10 # SVC kernel="linear": SLOW # 0.9865 with nothing # 0.9862 with C=10 # 0.9854 with C=100 # SVC kernel="rbf": SUPER SLOW # 0.9913 with nothing # 0.9620 with gamma=0.01 # SVC kernel="poly": SLOW # 0.9524 with nothing # DecisionTreeClassifier: # 0.9657 with min_samples_split=10 # 0.9631 with max_depth=32 # 0.9628 with min_samples_split=32 # 0.9626 with min_samples_split=10, max_depth=16 # 0.9620 with max_depth=8 # 0.9614 with nothing # 0.9614 with min_samples_split=10, max_depth=32 # 0.9592 with max_depth=16 # 0.9566 with min_samples_split=10, max_depth=8 # 0.9544 with criterion="entropy" # GaussianNB: # 0.8229 with nothing # RandomForestClassifier: # 0.9882 with n_estimators=20 # 0.9856 with n_estimators=24 # 0.9856 with n_estimators=32 # 0.9797 with nothing # AdaBoostClassifier: SUPER SLOW # 0.9891 with nothing # 0.9885 with n_estimators=100 # ALL ABOVE WITH HSV IMAGES. BELOW, LinearSVC with loss="hinge" in other color spaces: # RGB: 0.9820 - OK (very few false positives). Does not detect black car. # HSV: 0.9859 - Lots of false positives (especially with bigger window). Does not detect black car. # LUV: 0.9896 - Lots of false positives (especially with bigger window). Detects both cars. # HSL: 0.9851 - OK (still problematic with bigger window). Detects both cars. # YUV: 0.9893 - Lots of false positives (especially with bigger window). Detects both cars. # YCRCB: 0.9842 - Lots of false positives (especially with bigger window). Detects both cars. # RGB (binned + hist) + HSL (hog): 0.9814 - Lots of false positives. Does not detect black car. # RGB (hog) + HSL (binned + hist): 0.9859 - Lots of false positives (especially with bigger window) quite ok. ``` # 4. ANALYZE ERRORS Let's see which images are incorrectly classified: ``` # TODO ``` # 5. SAVE THE MODEL ``` # TODO: Save each model with the params used? joblib.dump(clf, "../../output/models/classifier_augmented_nocars_2.pkl") # See: http://scikit-learn.org/stable/modules/model_persistence.html ```
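One of the TODOs above ("Automatically adjust classifier's params!") can be handled with scikit-learn's `GridSearchCV` instead of the hand-maintained accuracy list in the comments. Below is a minimal sketch, not the tuning actually used here: it assumes the `X_train`/`y_train` arrays from the training cell, rebuilds the same `SCALER`/`CLASSIFIER` pipeline, and the parameter grid is purely illustrative.

```python
# Hedged sketch: grid-search the pipeline instead of comparing accuracies by hand.
# Assumes X_train and y_train from the train_test_split cell above.
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

pipe = Pipeline([
    ('SCALER', StandardScaler()),
    ('CLASSIFIER', LinearSVC(loss="hinge")),
])

# Step-name prefix + '__' selects parameters of that pipeline step:
param_grid = {
    'CLASSIFIER__C': [0.1, 1, 10, 100],
    'CLASSIFIER__loss': ['hinge', 'squared_hinge'],
}

search = GridSearchCV(pipe, param_grid, cv=3, n_jobs=-1)
search.fit(X_train, y_train)

print("     BEST PARAMS =", search.best_params_)
print("BEST CV ACCURACY = {0:2.4f}".format(search.best_score_))
```

Because the scaler sits inside the pipeline, each cross-validation fold is scaled on its own training split, so no statistics leak from the held-out fold into the fit.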
# Developing an AI application Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications. In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories, you can see a few examples below. <img src='assets/Flowers.png' width=500px> The project is broken down into multiple steps: * Load and preprocess the image dataset * Train the image classifier on your dataset * Use the trained classifier to predict image content We'll lead you through each part which you'll implement in Python. When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new. First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here. ``` # All the necessary packages and modules are imported import torch import numpy as np from torch import nn from torch import optim import torch.nn.functional as F from torchvision import datasets, transforms, models from collections import OrderedDict from PIL import Image import matplotlib.pyplot as plt ``` ## Load the data Here you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). The data should be included alongside this notebook, otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is split into three parts, training, validation, and testing. For the training, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize leading to better performance. You'll also need to make sure the input data is resized to 224x224 pixels as required by the pre-trained networks. The validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size. The pre-trained networks you'll use were trained on the ImageNet dataset where each color channel was normalized separately. For all three sets you'll need to normalize the means and standard deviations of the images to what the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. 
These values will shift each color channel to be centered at 0 and range from -1 to 1. ``` data_dir = 'flowers' train_dir = data_dir + '/train' valid_dir = data_dir + '/valid' test_dir = data_dir + '/test' # torchvision transforms are used to augment the training data # The training, validation, and testing data is appropriately transformed data_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) test_transforms = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) # (train, validation, test) is loaded with torchvision's ImageFolder image_datasets = datasets.ImageFolder(train_dir, transform=data_transforms) image_datasets_valid = datasets.ImageFolder(valid_dir, transform=test_transforms) image_datasets_test = datasets.ImageFolder(test_dir, transform=test_transforms) # data for each set is loaded with torchvision's DataLoader dataloaders = torch.utils.data.DataLoader(image_datasets, batch_size=50, shuffle=True) dataloaders_valid = torch.utils.data.DataLoader(image_datasets_valid, batch_size=25) dataloaders_test = torch.utils.data.DataLoader(image_datasets_test, batch_size=25) ``` ### Label mapping You'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers. ``` import json with open('cat_to_name.json', 'r') as f: cat_to_name = json.load(f) ``` # Building and training the classifier Now that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features. We're going to leave this part up to you. If you want to talk through it with someone, chat with your fellow students! You can also ask questions on the forums or join the instructors in office hours. Refer to [the rubric](https://review.udacity.com/#!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do: * Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use) * Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout * Train the classifier layers using backpropagation using the pre-trained network to get the features * Track the loss and accuracy on the validation set to determine the best hyperparameters We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal! When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. 
Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project. ``` # A pretrained network such as VGG16 is loaded from torchvision.models model = models.vgg16(pretrained=True) # parameters are frozen for param in model.parameters(): param.requires_grad = False # defined a classifier def classifier(input_size, output_size, hidden_layers, drop_p): classifier = nn.Sequential(OrderedDict([ ('drop1', nn.Dropout(drop_p)), ('fc1', nn.Linear(input_size, hidden_layers[0])), ('relu1', nn.ReLU()), ('drop2', nn.Dropout(drop_p*0.5)), ('fc2', nn.Linear(hidden_layers[0], hidden_layers[1])), ('relu2', nn.ReLU()), ('fc3', nn.Linear(hidden_layers[1], hidden_layers[2])), ('relu3', nn.ReLU()), ('fc4', nn.Linear(hidden_layers[2], output_size)), ('output', nn.LogSoftmax(dim=1)) ])) return classifier # build a model input_size = 25088 output_size = 102 hidden_layers = [600, 400, 200] drop_p = 0.5 classifier = classifier(input_size, output_size ,hidden_layers , drop_p) model.classifier = classifier criterion = nn.NLLLoss() optimizer = optim.Adam(model.classifier.parameters(), lr=0.001) # function for the validation pass def validation(model, testloader, criterion): test_loss = 0 accuracy = 0 for data in testloader: inputs, labels = data inputs, labels = inputs.to('cuda'), labels.to('cuda') outputs = model.forward(inputs) test_loss += criterion(outputs, labels).item() _, predicted = torch.max(outputs, 1) equality = (labels == predicted) accuracy += equality.type(torch.FloatTensor).mean() return test_loss, accuracy # Train the network # During training, the validation loss and accuracy are displayed epochs = 20 steps = 0 running_loss = 0 print_every = 101 model.to('cuda') for e in range(epochs): model.train() for ii, (inputs, labels) in enumerate(dataloaders): steps += 1 inputs, labels = inputs.to('cuda'), labels.to('cuda') optimizer.zero_grad() outputs = model.forward(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() running_loss += loss.item() if steps % print_every == 0: test_loss, accuracy = validation(model, dataloaders_valid, criterion) print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/print_every), "Validation Loss: {:.3f}.. ".format(test_loss/len(dataloaders_valid)), "Validation Accuracy: {:.3f}".format(100*accuracy/len(dataloaders_valid))) running_loss = 0 model.train() ``` ## Testing your network It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way you did validation. You should be able to reach around 70% accuracy on the test set if the model has been trained well. ``` # The network's accuracy is measured on the test data correct = 0 total = 0 model.to('cuda') model.eval() with torch.no_grad(): for ii, (inputs, labels) in enumerate(dataloaders_test): inputs, labels = inputs.to('cuda'), labels.to('cuda') outputs = model.forward(inputs) _, predicted = torch.max(outputs, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the test images: %d %%' % (100 * correct / total)) ``` ## Save the checkpoint Now that your network is trained, save the model so you can load it later for making predictions. 
You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on. ```model.class_to_idx = image_datasets['train'].class_to_idx``` Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now. ``` # The trained model is saved as a checkpoint model.class_to_idx = image_datasets.class_to_idx state_dict = model.state_dict() optimizer_state = optimizer.state_dict() checkpoint = {'input_size': input_size, 'output_size': output_size, 'hidden_layers': hidden_layers, 'drop_p' : drop_p, 'arch': 'vgg16', 'state_dict': state_dict, 'class_to_idx': model.class_to_idx, 'optimizer_state': optimizer_state , 'epoch': e+1 } torch.save(checkpoint, 'checkpoint.pth') ``` ## Loading the checkpoint At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network. ``` # function that successfully loads a checkpoint def load_checkpoint(filepath): checkpoint = torch.load(filepath) if checkpoint['arch'] == "vgg16": model = models.vgg16(pretrained=True) for param in model.parameters(): param.requires_grad = False input_size = checkpoint['input_size'] hidden_layers = checkpoint['hidden_layers'] output_size = checkpoint['output_size'] drop_p = checkpoint['drop_p'] model.class_to_idx = checkpoint['class_to_idx'] classifier = nn.Sequential(OrderedDict([ ('fc1', nn.Linear(input_size, hidden_layers[0])), ('relu1', nn.ReLU()), ('drop1', nn.Dropout(drop_p)), ('fc2', nn.Linear(hidden_layers[0], hidden_layers[1])), ('relu2', nn.ReLU()), ('drop2', nn.Dropout(drop_p*0.5)), ('fc3', nn.Linear(hidden_layers[1], hidden_layers[2])), ('relu3', nn.ReLU()), ('fc4', nn.Linear(hidden_layers[2], output_size)), ('output', nn.LogSoftmax(dim=1)) ])) model.classifier = classifier model.load_state_dict(checkpoint['state_dict']) criterion = nn.NLLLoss() optimizer.load_state_dict(checkpoint['optimizer_state']) model.start_epoch = checkpoint['epoch'] return model # Load the checkpoint model = load_checkpoint('checkpoint.pth') ``` # Inference for classification Now you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like ```python probs, classes = predict(image_path, model) print(probs) print(classes) > [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339] > ['70', '3', '45', '62', '55'] ``` First you'll need to handle processing the input image such that it can be used in your network. ## Image Preprocessing You'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training. 
First, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) methods. Then you'll need to crop out the center 224x224 portion of the image. Color channels of images are typically encoded as integers 0-255, but the model expected floats 0-1. You'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so `np_image = np.array(pil_image)`. As before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation. And finally, PyTorch expects the color channel to be the first dimension but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first and retain the order of the other two dimensions. ``` # function successfully converts a PIL image def process_image(image): width, height = image.size ratio_w = width/height ratio_h = height/width if width<height: size = 256, height*ratio_h if width>height: size = width*ratio_w, 256 else : size = (256 , 256) image.thumbnail(size) xn = (image.size[0] - 224)/2 yp = (image.size[1] - 224)/2 xp = (image.size[0] + 224)/2 yn = (image.size[1] + 224)/2 image = image.crop((xn, yp, xp, yn)) image = np.array(image) image = image/255 image = image - np.array([0.485, 0.456, 0.406]) image = image / np.array([0.229, 0.224, 0.225]) image = image.transpose(2, 0, 1) return image ``` To check your work, the function below converts a PyTorch tensor and displays it in the notebook. If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions). ``` def imshow(image, ax=None, title=None): """Imshow for Tensor.""" if ax is None: fig, ax = plt.subplots() if title: plt.title(title) # PyTorch tensors assume the color channel is the first dimension # but matplotlib assumes is the third dimension image = image.transpose((1, 2, 0)) # Undo preprocessing mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) image = std * image + mean # Image needs to be clipped between 0 and 1 or it looks like noise when displayed image = np.clip(image, 0, 1) ax.imshow(image) return ax ``` ## Class Prediction Once you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values. To get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.html#torch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx` which hopefully you added to the model or from an `ImageFolder` you used to load the data ([see here](#Save-the-checkpoint)). 
Make sure to invert the dictionary so you get a mapping from index to class as well. Again, this method should take a path to an image and a model checkpoint, then return the probabilities and classes. ```python probs, classes = predict(image_path, model) print(probs) print(classes) > [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339] > ['70', '3', '45', '62', '55'] ```
``` # The predict function successfully takes the path to an image and a model def predict(image_path, model, topk=5): image = Image.open(image_path) image = process_image(image) image = torch.from_numpy(image).type(torch.FloatTensor) image = image.unsqueeze(0) model.to('cpu') output = model(image) probs, classes = torch.exp(output).topk(topk) probs = probs.detach().numpy().tolist()[0] classes = classes.detach().numpy().tolist()[0] idx_new = {v:k for k,v in model.class_to_idx.items()} label = [idx_new[k] for k in classes] classes = [cat_to_name[idx_new[k]] for k in classes] return probs, classes ```
## Sanity Checking Now that you can use a trained model for predictions, check to make sure it makes sense. Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should look like this: <img src='assets/inference_example.png' width=300px> You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above. ``` # image and its associated top 5 most probable classes image_path = 'flowers/test/76/image_02550.jpg' actual_image_title = cat_to_name[image_path.split('/')[2]] probs, classes = predict(image_path, model) image = Image.open(image_path) image = process_image(image) imshow(image, None, actual_image_title) fig, axs = plt.subplots(1, figsize=(3, 3)) axs.barh(classes, probs) ```
## Attributes <br> 1- About 90% of the code comes from Udacity courses, mostly from the current deep learning course. <br> 2- I also relied on the documentation for NumPy, Python, PyTorch, argparse, PIL, and Matplotlib. <br> 3- Several ideas came from stackoverflow.com, among them `if __name__ == '__main__'`, argparse, and how to resize and crop images. <br> 4- Many of the errors that appeared were resolved by searching for the error messages on a number of websites. <br> 5- A Medium post by Josh Bernhard helped me solve the following problems: converting tensors to NumPy arrays, and flipping the keys and values of a dictionary. <br>
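To reproduce a combined figure like the `assets/inference_example.png` reference above, the pieces already defined in this notebook can be stitched together. This is only a sketch: `view_classification` is a hypothetical helper name, and it assumes the `predict`, `process_image`, and `imshow` functions plus the trained `model` from the cells above.

```python
# Hedged sketch: input image on top, top-k probability bar chart below.
import matplotlib.pyplot as plt
from PIL import Image

def view_classification(image_path, model, topk=5):
    # Top-k flower names and their probabilities from the trained model
    probs, classes = predict(image_path, model, topk)
    # Preprocess the image exactly as during inference
    image = process_image(Image.open(image_path))

    fig, (ax1, ax2) = plt.subplots(nrows=2, figsize=(4, 7))
    imshow(image, ax=ax1)
    ax1.set_title(classes[0])   # most probable class as the title
    ax2.barh(classes, probs)
    ax2.invert_yaxis()          # largest probability at the top
    ax2.set_xlabel('probability')
    plt.tight_layout()

view_classification('flowers/test/76/image_02550.jpg', model)
```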
``` import pandas as pd import numpy as np from docx2python import docx2python from itertools import chain import re docd = docx2python("dataset/tarih/tarih_duman_1.docx") print(len(docd.body[0][0][0])) doca = docx2python("dataset/tarih/tarih_asel_1.docx") print(len(doca.body[0][0][0])) doca.body[0][0][0][16835:] ``` ``` lstd = doca.body[0][0][0][:15999] + docd.body[0][0][0] ls = [] for i in range(len(lstd)): aa = preprocess(lstd[i]) if(aa != '' and aa != ' ' and aa != ' ' and aa != ' ' and len(aa.split(" ")) > 0): ls.append(aa) print(len(ls)) pd.set_option('display.max_rows', 10) df = pd.DataFrame(np.array(ls[:]).reshape(-1,2).tolist(), columns = ['question', 'answer']) df df.to_csv("dataset/tarih.csv", index = False) df ``` # Old ``` doc1 = docx2python("dataset/tarih/tarih_1.docx") print(len(doc1.body[0][0][0])) doc2 = docx2python("dataset/tarih/tarih_2.docx") print(len(doc2.body[0][0][0])) doc3 = docx2python("dataset/tarih/tarih_3.docx") print(len(doc3.body[0][0][0])) doc4 = docx2python("dataset/tarih/tarih_4.docx") print(len(doc4.body[0][0][0])) doc5 = docx2python("dataset/tarih/tarih_5.docx") print(len(doc5.body[0][0][0])) lst1 = doc1.body[0][0][0][:200] lst2 = doc2.body[0][0][0][:188] lst3 = doc3.body[0][0][0][:136] lst4 = doc4.body[0][0][0] lst5 = doc5.body[0][0][0][:200] for i in range(len(lst5)): if(i%2==0): lst5[i] = lst5[i][3:].replace('.','') else: lst5[i] = lst5[i] for i in range(len(lst4)): if(i%2==0): lst4[i] = lst4[i][2:].replace('.', '') else: lst4[i] = lst4[i][7:] lst4[122] = lst4[122][1:] for i in range(len(lst3)): lst3[i] = lst3[i].replace('**/', '').replace('/', '') for i in range(len(lst2)): lst2[i] = lst2[i].replace('\xa0', ' ').replace('-', ' ') for i in range(len(lst1)): if(i%2==0): lst1[i] = lst1[i][2:].replace(')', '') else: lst1[i] = lst1[i].replace("Е)", '').replace("E)", '').replace("B)", '').replace("A)", '').replace("В)", '').replace("С)", '').replace("D)", '').replace("А)", '').replace("C)", '') lst1[198] = lst1[198][1:] lst = lst1 + lst2 + lst3 + lst4 + lst5 for i in range(len(lst)): lst[i] = lst[i].replace('«', '').replace('»', '') lst df = pd.DataFrame(np.array(lst).reshape(-1,2).tolist(), columns = ['question', 'answer']) df df.to_csv("dataset/tarih_1.csv", index = False) df ``` ## Uncleaned 15.docx ``` def preprocess(raw_text): raw_text = raw_text.replace('\xa0', ' ') raw_text = raw_text.replace('\t', ' ') raw_text = re.sub('\s+', ' ', raw_text) return raw_text doc = docx2python("dataset/test_uncleaned/253q.docx") doc1 = [] for i in range(len(doc.body)): doc1 += doc.body[i] doc2 = [] for i in range(len(doc1)): doc2 += doc1[i] doc3 = [] for i in range(len(doc2)): doc3 += doc2[i] doc3[:30] for i in range(len(doc3)): doc3[i] = preprocess(doc3[i]) doc3[:10] doc4 = [] for i in range(len(doc3)): if doc3[i] != '': doc4.append(doc3[i]) doc4[:] doc5 = [] i = 0 j = 0 while i < len(doc4): if doc4[i][0].isnumeric() or (doc4[i]+' ')[1].isnumeric(): doc5.append(doc4[i]) j+=1 else: doc5[j-1] += doc4[i] + ' ' i+=1 len(doc5) doc5 for i in range(len(doc5)): if 'A' in doc5[i]: print('Yes', i) else: print('No', i) doc5[119: 126] result = [] for i in range(1,len(doc1.body[0][0][0]),2): for j in range(len(doc.body[:70][i]) - 1): for k in range(len(doc.body[:70][i][j])): que = doc.body[:70][i][j][0] s = '' iter = 0 for n in que: if(n[:2] == 'A)'): break s += n + ' ' iter += 1 result.append([s] + que[iter:iter+5]) list(chain(*doc.body[71][1])) answers = [] for i in range(len(doc.body[71][1:])): for j in doc.body[71][1:][i][1:]: answers.append([j[0]]) ans_df = 
pd.DataFrame(answers, columns = ['Answer']) ans_df ``` ``` for i in result: if(len(i) == 7): print(i) que_df = pd.DataFrame(result, columns = ['Question', 'A', 'B', 'C', 'D', 'E']) que_df.to_csv("real.csv", index = False) que_df def find_answer(que, ans): result = [] for i in range(que.shape[0]): if(ans.Answer[i] == 'A'): result.append([que.Question[i][3:].lower(), que.A[i][3:].lower()]) elif(ans.Answer[i] == 'B'): result.append([que.Question[i][3:].lower(), que.B[i][3:].lower()]) elif(ans.Answer[i] == 'C'): result.append([que.Question[i][3:].lower(), que.C[i][3:].lower()]) elif(ans.Answer[i] == 'D'): result.append([que.Question[i][3:].lower(), que.D[i][3:].lower()]) elif(ans.Answer[i] == 'E'): result.append([que.Question[i][3:].lower(), que.E[i][3:].lower()]) return pd.DataFrame(result, columns = ['question', 'answer']) data = find_answer(que_df, ans_df) #data = data.drop_duplicates(subset = ["question"]) data.to_csv('english_1.csv', index = False) data ```
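One fragility worth guarding against: the `np.array(...).reshape(-1,2)` pattern used earlier to pair questions with answers raises an error whenever the cleaned list ends up with an odd number of entries, which is easy to hit after filtering out empty strings. A small defensive sketch, assuming a list of alternating question/answer strings such as `ls`; the `pair_qa` name is made up for illustration.

```python
# Hedged sketch: pair alternating question/answer lines without relying on reshape(-1, 2).
import pandas as pd

def pair_qa(lines):
    if len(lines) % 2 != 0:
        # reshape(-1, 2) would fail here; drop the dangling item and warn instead
        print("WARNING: odd number of lines ({}), dropping the last one".format(len(lines)))
        lines = lines[:-1]
    pairs = [(lines[i], lines[i + 1]) for i in range(0, len(lines), 2)]
    return pd.DataFrame(pairs, columns=['question', 'answer'])

qa_df = pair_qa(ls)
qa_df.head()
```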
[@LorenaABarba](https://twitter.com/LorenaABarba) 12 steps to Navier-Stokes ===== *** This lesson complements the first interactive module of the online [CFD Python](https://bitbucket.org/cfdpython/cfd-python-class) class, by Prof. Lorena A. Barba, called **12 Steps to Navier-Stokes.** The interactive module starts with simple exercises in 1D that at first use little of the power of Python. We now present some new ways of doing the same things that are more efficient and produce prettier code. This lesson was written with BU graduate student Gilbert Forsyth. Defining Functions in Python ---- In steps 1 through 8, we wrote Python code that is meant to run from top to bottom. We were able to reuse code (to great effect!) by copying and pasting, to incrementally build a solver for the Burgers' equation. But moving forward there are more efficient ways to write our Python codes. In this lesson, we are going to introduce *function definitions*, which will allow us more flexibility in reusing and also in organizing our code. We'll begin with a trivial example: a function which adds two numbers. To create a function in Python, we start with the following: def simpleadd(a,b): This statement creates a function called `simpleadd` which takes two inputs, `a` and `b`. Let's execute this definition code. ``` def simpleadd(a, b): return a+b ``` The `return` statement tells Python what data to return in response to being called. Now we can try calling our `simpleadd` function: ``` simpleadd(3, 4) ``` Of course, there can be much more happening between the `def` line and the `return` line. In this way, one can build code in a *modular* way. Let's try a function which returns the `n`-th number in the Fibonacci sequence. ``` def fibonacci(n): a, b = 0, 1 for i in range(n): a, b = b, a + b return a fibonacci(7) ``` Once defined, the function `fibonacci` can be called like any of the built-in Python functions that we've already used. For exmaple, we might want to print out the Fibonacci sequence up through the `n`-th value: ``` for n in range(10): print(fibonacci(n)) ``` We will use the capacity of defining our own functions in Python to help us build code that is easier to reuse, easier to maintain, easier to share! ##### Exercise (Pending.) Learn more ----- *** Remember our short detour on using [array operations with NumPy](http://nbviewer.ipython.org/urls/github.com/barbagroup/CFDPython/blob/master/lessons/07_Step_5.ipynb)? Well, there are a few more ways to make your scientific codes in Python run faster. We recommend the article on the Technical Discovery blog about [Speeding Up Python](http://technicaldiscovery.blogspot.com/2011/06/speeding-up-python-numpy-cython-and.html) (June 20, 2011), which talks about NumPy, Cython and Weave. It uses as example the Laplace equation (which we will solve in [Step 9](http://nbviewer.ipython.org/urls/github.com/barbagroup/CFDPython/blob/master/lessons/12_Step_9.ipynb)) and makes neat use of defined functions. But a recent new way to get fast Python codes is [Numba](http://numba.pydata.org). We'll learn a bit about that after we finish the **12 steps to Navier-Stokes**. There are many exciting things happening in the world of high-performance Python right now! *** ``` from IPython.core.display import HTML def css_styling(): styles = open("../styles/custom.css", "r").read() return HTML(styles) css_styling() ``` > (The cell above executes the style for this notebook.)
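The same idea carries over to the solvers themselves: once a time-stepping loop lives inside a function, re-running an experiment on a different grid is a one-line call instead of a copy-paste. The sketch below is an illustration added here, not code from this lesson — it wraps a 1-D linear-convection loop of the kind used in the early steps, and every numerical choice (domain length, `nt`, `c`, `dt`) is an assumption.

```python
# Hedged sketch: parameterize a simple solver so it can be reused for several grids.
import numpy as np
from matplotlib import pyplot as plt

def linearconv(nx, nt=25, c=1.0, dt=0.025):
    """Solve 1-D linear convection on [0, 2] with nx grid points and plot the result."""
    dx = 2 / (nx - 1)
    u = np.ones(nx)
    u[int(0.5 / dx):int(1 / dx + 1)] = 2   # hat-function initial condition

    for n in range(nt):
        un = u.copy()
        u[1:] = un[1:] - c * dt / dx * (un[1:] - un[:-1])   # backward-difference update

    plt.plot(np.linspace(0, 2, nx), u)

linearconv(41)
linearconv(81)
```

With a much finer grid you would also want a smaller `dt`, since this explicit scheme is only stable when the wave does not travel more than one cell per time step.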
<div style="text-align:center"> <h1> Expressions </h1> <h2> CS3100 Monsoon 2020 </h2> </div> ## Recap <h4> Last Time: </h4> * Why functional programming matters? <h4> Today: </h4> * Expressions, Values, Definitions in OCaml. ## Expressions Every kind of expression has: * **Syntax** * **Semantics:** + Type-checking rules (static semantics): produce a type or fail with an error message + Evaluation rules (dynamic semantics): produce a value * (or exception or infinite loop) * **Used only on expressions that type-check** (static vs dynamic languages) ## Values A *value* is an expression that does not need further evaluation. <center> <img src="images/val-expr.svg" width="300"> </center> ## Values in OCaml ``` 42 "Hello" 3.1415 ``` * Observe that the values have + static semantics: types `int`, `string`, `float`. + dynamic semantics: the value itself. ## Type Inference and annotation * OCaml compiler **infers** types + Compilation fails with type error if it can't + Hard part of language design: guaranteeing compiler can infer types when program is correctly written * You can manually annotate types anywhere – Replace `e` with `(e : t)` + Useful for resolving type errors ``` (42.4 : float) ``` ## More values OCaml also support other values. See [manual](https://caml.inria.fr/pub/docs/manual-ocaml/values.html). ``` () (1,"hello", true, 3.4) [1;2;3] [|1;2;3|] ``` ## Static vs Dynamic distinction Static typing helps catch lots errors at compile time. Which of these is a static error? ``` 23 = 45.0 23 = 45 ``` ## If expression ```ocaml if e1 then e2 else e3 ``` * **Static Semantics:** + If `e1` has type `bool` and + `e2` has type `t` and + `e3` has type `t` then + `if e1 then e2 else e3` has type `t`. * **Dynamic Semantics:** + If `e1` evaluates to `true`, + then evaluate `e2`, + else evaluate `e3` ``` if 32 = 31 then "Hello" else "World" if true then 13 else 13.4 ``` ## More Formally <script type="text/x-mathjax-config"> MathJax.Hub.Config({ TeX: { extensions: ["color.js"] }}); </script> $ \newcommand{\inferrule}[2]{\displaystyle{\frac{#1}{#2}}} \newcommand{\ite}[3]{\text{if }{#1}\text{ then }{#2}\text{ else }{#3}} \newcommand{\t}[1]{\color{green}{#1}} \newcommand{\true}{\color{purple}{true}} \newcommand{\false}{\color{purple}{false}} \newcommand{\letin}[3]{\text{let }{{#1} = {#2}}\text{ in }{#3}} $ **Static Semantics of if expression** \\[ \inferrule{e1:\t{bool} \quad e2:\t{t} \quad e3:\t{t}}{\ite{e1}{e2}{e3} : \t{t}} \\] (omits some details which we will cover in later lectures) #### to be read as \\[ \inferrule{Premise_1 \quad Premise_2 \quad \ldots \quad Premise_N}{Conclusion} \\] Such rules are known as inference rules. ## Dynamic semantics of if expression For the case when the predicate evaluates to `true`: \\[ \inferrule{e1 \rightarrow \true \quad e2 \rightarrow v}{\ite{e1}{e2}{e3} \rightarrow v} \\] For the case when the predicate evaluates to `false`: \\[ \inferrule{e1 \rightarrow \false \quad e3 \rightarrow v}{\ite{e1}{e2}{e3} \rightarrow v} \\] Read $\rightarrow$ as *evaluates to*. ## Let expression ```ocaml let x = e1 in e2 ``` * `x` is an identifier * `e1` is the binding expression * `e2` is the body expression * `let x = e1 in e2` is itself an expression ``` let x = 5 in x + 5 let x = 5 in let y = 10 in x + y let x = 5 in let x = 10 in x ``` ## Scopes & shadowing ```ocaml let x = 5 in let x = 10 in x ``` is parsed as ```ocaml let x = 5 in (let x = 10 in x) ``` * Importantly, `x` is not mutated; there are two `x`s in different **scopes**. 
* Inner definitions **shadow** the outer definitions.

## What is the result of this expression?

```
let x = 5 in let y = let x = 10 in x in x+y
```

## let at the top-level

```ocaml
let x = e
```

is, implicitly, "**in** the rest of the program text"

```
let a = "Hello"
let b = "World"
let c = a ^ " " ^ b
```

`^` is the operator for string concatenation.

## Definitions

* The top-level bindings `let x = e` are known as **definitions**.
* Definitions give a name to a value.
* Definitions are not expressions, nor vice versa.
* But definitions syntactically contain expressions.

<center>
<img src="images/val-expr-defn.svg">
</center>

## Let expression

```ocaml
let x = e1 in e2
```

**Static semantics**

\\[
\inferrule{x : \t{t1} \quad e1 : \t{t1} \quad e2 : \t{t2}}{\letin{x}{e1}{e2} : \t{t2}}
\\]

(again omits some details)

**Dynamic semantics**

\\[
\inferrule{e1 \rightarrow v1 \quad \text{substituting } v1 \text{ for } x \text{ in } e2 \rightarrow v2}{\letin{x}{e1}{e2} \rightarrow v2}
\\]

## Exercise

* In OCaml, we cannot use `+` for floating-point addition, and instead have to use `+.`.
  + Why do you think this is the case?

```
5.4 +. 6.0
```

## Exercise

Write down the static semantics for `+` and `+.`.

<div style="text-align:center"> <h1> <i> Fin. </i> </h1> </div>
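A possible answer sketch for the last exercise (an editorial addition, not on the original slides), written in the same inference-rule notation and assuming the standard OCaml types for the two operators:

\\[
\inferrule{e1 : \t{int} \quad e2 : \t{int}}{e1 + e2 : \t{int}}
\qquad
\inferrule{e1 : \t{float} \quad e2 : \t{float}}{e1 \;\text{+.}\; e2 : \t{float}}
\\]

Because OCaml performs no implicit conversion between `int` and `float`, keeping the two operators separate lets the type-checker infer a unique type for every arithmetic expression.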
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns matches_df = pd.read_csv('Dataset/matches.csv') match_details = pd.read_csv('Dataset/deliveries.csv') ``` # Insights of 1st Innings for a match ## Step 1: Loading the Data ``` matches_df.head() match_details.head() # Selecting the 1st match AUG vs PAK and the 1st inning of the match match1 = match_details[match_details.match_id == 1] match1_1st = match1[match1.inning == 1] # 1st inning of the AUG vs PAK match match1_1st ``` ### Observation on the data * Australia is the batting in the 1st innings * Pakisthan is bowling * Opening Batsmen are DA Warner and TM Head * 1st Bowler is Mohammad Amir ### Important Insight There are **310 rows** in this dataset, A one innings have only **300 balls(50 overs)**<br> So, **the extra 10 balls must be wide or no balls**<br> ``` # Overall Overview of the match matches_df.iloc[0] ``` ## Step 2: Data Preprocessing ``` match1_1st = match1_1st[['over', 'ball','batsman', 'bowler', 'total','player_dismissed']] match1_1st.columns match1_1st # Calculating the Score for each ball Score=[] total_run = 0 for i,runs in enumerate(match1_1st.total): total_run += runs Score.append(total_run) print(i,total_run) #Calculation for Wickets taken Wickets=[] total_wickets=0 for i,wicket in enumerate(match1_1st.player_dismissed): if pd.isnull(wicket) == False: total_wickets+=1 print(wicket) Wickets.append(total_wickets) print(len(Wickets)) #Calculation for Balls_Bowled Balls_Bowled=[] total_balls=0 for i,ball in enumerate(match1_1st.ball): if ball <=6: total_balls+=1 Balls_Bowled.append(total_balls) len(Balls_Bowled) ### Adding the new features into the dataset match1_1st.rename(columns = {'total':'runs_accquired'}, inplace = True) match1_1st = match1_1st.drop('player_dismissed',axis=1) match1_1st['Score'] = Score match1_1st['Wickets'] = Wickets match1_1st['Balls_Bowled'] = Balls_Bowled match1_1st = match1_1st[['over','Balls_Bowled','ball','batsman','bowler','runs_accquired','Score','Wickets']] ``` ### Now the Dataset contains the features according the problem statement ![alt text](images/1.jpg) ``` match1_1st # Some Insight plot from the runs_accquired data sns.countplot(x ='runs_accquired', data = match1_1st) plt.ylabel('balls') # Show the plot plt.show() ``` Insights: * Almost in 150 balls the batsman didn't take any run, either he defensed or didn't take any runs. * The next highest is accquiring a 1 run, which is the safest way to score. * Then the rarity increases from 4,2 followed by 6 and 3. * Hitting a 6 is obviously very rare because it's risky. * We can rarely observe 3 runs in a game ``` # Some Insight plot from the Score vs overs data plt.plot(match1_1st.over,match1_1st.Score) plt.xlabel('Overs') plt.ylabel('Score') plt.show() ``` Insights: * In over 0-10 the score increased slight exponential pace * In over 10-20 the score increased at a constant pace * In over 30-40 the pace of the score was plateauing * In over 40-50 the score increased in a gradual pace ``` # Some Insight plot from the Batsmen data sns.countplot(x ='batsman', data = match1_1st) plt.ylabel('balls') plt.xticks(rotation=90) # Show the plot plt.show() ``` Insight: * Here We can judge the performance of the Batmen of australia with their edurance/lifetime during their match * MS Wade played 100 balls followed by GJ Maxwell(60 balls), DA Warmer( 20 balls) * In this match SPD Smith showed poor even getting to bat just after the openning
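The `Score`, `Wickets` and `Balls_Bowled` columns above are built with explicit Python loops. As a hedged alternative sketch (not in the original notebook), the same running totals can be computed with vectorised pandas operations, provided it is run before the rename/drop steps so that the `total`, `player_dismissed` and `ball` columns are still present:

```
# Vectorised sketch of the same running totals (run before the rename/drop steps above).
running_score = match1_1st['total'].cumsum()                        # cumulative runs after each delivery
wickets_fallen = match1_1st['player_dismissed'].notnull().cumsum()  # cumulative wickets fallen
legal_deliveries = (match1_1st['ball'] <= 6).cumsum()               # legal balls bowled so far

# These should agree with the Score, Wickets and Balls_Bowled lists built in the loops.
print(running_score.iloc[-1], wickets_fallen.iloc[-1], legal_deliveries.iloc[-1])
```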
<a href="https://colab.research.google.com/github/datadynamo/aiconf_sj_2019_pytorch/blob/master/03_Custom_Data_Loader_CSV.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> *Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by [Sebastian Raschka](https://sebastianraschka.com). All code examples are released under the [MIT license](https://github.com/rasbt/deep-learning-book/blob/master/LICENSE). If you find this content useful, please consider supporting the work by buying a [copy of the book](https://leanpub.com/ann-and-deeplearning).* Other code examples and content are available on [GitHub](https://github.com/rasbt/deep-learning-book). The PDF and ebook versions of the book are available through [Leanpub](https://leanpub.com/ann-and-deeplearning). This notebook is presented with slight modifications from: https://github.com/rasbt/deeplearning-models/blob/master/pytorch_ipynb/mechanics/custom-data-loader-csv.ipynb #Please buy Sebastian Raschka's awesome book # Using PyTorch Dataset Loading Utilities for Custom Datasets (CSV files converted to HDF5) This notebook provides an example for how to load a dataset from an HDF5 file created from a CSV file, using PyTorch's data loading utilities. For a more in-depth discussion, please see the official - [Data Loading and Processing Tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html) - [torch.utils.data](http://pytorch.org/docs/master/data.html) API documentation An Hierarchical Data Format (HDF) is a convenient way that allows quick access to data instances during minibatch learning if a dataset is too large to fit into memory. The approach outlined in this notebook uses uses the common [HDF5](https://support.hdfgroup.org/HDF5/) format and should be accessible to any programming language or tool with an HDF5 API. **In this example, we are going to use the Iris dataset for illustrative purposes. Let's pretend it's our large training dataset that doesn't fit into memory**. 
## Imports ``` import pandas as pd import numpy as np import h5py import torch from torch.utils.data import Dataset from torch.utils.data import DataLoader ``` ## Converting a CSV file to HDF5 In this first step, we are going to process a CSV file (here, Iris) into an HDF5 database: ``` # suppose this is a large CSV that does not # fit into memory: csv_path = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data' # Get number of lines in the CSV file if it's on your hard drive: #num_lines = subprocess.check_output(['wc', '-l', in_csv]) #num_lines = int(nlines.split()[0]) num_lines = 150 num_features = 4 class_dict = {'Iris-setosa': 0, 'Iris-versicolor': 1, 'Iris-virginica': 2} # use 10,000 or 100,000 or so for large files chunksize = 10 # this is your HDF5 database: with h5py.File('iris.h5', 'w') as h5f: # use num_features-1 if the csv file has a column header dset1 = h5f.create_dataset('features', shape=(num_lines, num_features), compression=None, dtype='float32') dset2 = h5f.create_dataset('labels', shape=(num_lines,), compression=None, dtype='int32') # change range argument from 0 -> 1 if your csv file contains a column header for i in range(0, num_lines, chunksize): df = pd.read_csv(csv_path, header=None, # no header, define column header manually later nrows=chunksize, # number of rows to read at each iteration skiprows=i) # skip rows that were already read df[4] = df[4].map(class_dict) features = df.values[:, :4] labels = df.values[:, -1] # use i-1 and i-1+10 if csv file has a column header dset1[i:i+10, :] = features dset2[i:i+10] = labels[0] ``` After creating the database, let's double-check that everything works correctly: ``` with h5py.File('iris.h5', 'r') as h5f: print(h5f['features'].shape) print(h5f['labels'].shape) with h5py.File('iris.h5', 'r') as h5f: print('Features of entry no. 99:', h5f['features'][99]) print('Class label of entry no. 99:', h5f['labels'][99]) ``` ## Implementing a Custom Dataset Class Now, we implement a custom `Dataset` for reading the training examples. The `__getitem__` method will 1. read a single training example from HDF5 based on an `index` (more on batching later) 2. return a single training example and it's corresponding label Note that we will keep an open connection to the database for efficiency via `self.h5f = h5py.File(h5_path, 'r')` -- you may want to close it when you are done (more on this later). ``` class Hdf5Dataset(Dataset): """Custom Dataset for loading entries from HDF5 databases""" def __init__(self, h5_path, transform=None): self.h5f = h5py.File(h5_path, 'r') self.num_entries = self.h5f['labels'].shape[0] self.transform = transform def __getitem__(self, index): features = self.h5f['features'][index] label = self.h5f['labels'][index] if self.transform is not None: features = self.transform(features) return features, label def __len__(self): return self.num_entries ``` Now that we have created our custom Dataset class, we can initialize a Dataset instance for the training examples using the 'iris.h5' database file. Then, we initialize a `DataLoader` that allows us to read from the dataset. ``` train_dataset = Hdf5Dataset(h5_path='iris.h5', transform=None) train_loader = DataLoader(dataset=train_dataset, batch_size=50, shuffle=True, num_workers=4) ``` That's it! 
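One caveat worth noting (an editorial addition, not from the original notebook): `Hdf5Dataset` opens the HDF5 file once in `__init__`, and sharing that single handle across the `num_workers=4` worker processes can be fragile on some platforms and h5py builds. A commonly used workaround, sketched here under that assumption, is to open the file lazily inside `__getitem__`, so every worker ends up with its own read-only handle:

```
class LazyHdf5Dataset(Dataset):
    """Variant of Hdf5Dataset that opens the HDF5 file lazily,
    so each DataLoader worker gets its own file handle."""

    def __init__(self, h5_path, transform=None):
        self.h5_path = h5_path
        self.transform = transform
        self.h5f = None
        # Read the number of entries once, then close the file again.
        with h5py.File(h5_path, 'r') as h5f:
            self.num_entries = h5f['labels'].shape[0]

    def __getitem__(self, index):
        if self.h5f is None:  # first access in this process/worker
            self.h5f = h5py.File(self.h5_path, 'r')
        features = self.h5f['features'][index]
        label = self.h5f['labels'][index]
        if self.transform is not None:
            features = self.transform(features)
        return features, label

    def __len__(self):
        return self.num_entries


def to_float_tensor(features):
    # h5py returns NumPy arrays; convert them to float32 torch tensors.
    return torch.from_numpy(np.asarray(features, dtype='float32'))


lazy_train_dataset = LazyHdf5Dataset(h5_path='iris.h5', transform=to_float_tensor)
lazy_train_loader = DataLoader(dataset=lazy_train_dataset,
                               batch_size=50,
                               shuffle=True,
                               num_workers=4)
```

This also shows one use of the otherwise-unused `transform` argument. The original `train_loader` above works the same way; the lazy variant only changes when the file handle is created.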
Now we can iterate over the training epochs, using the `train_loader` as an iterator, and use the features and labels from the training dataset for model training, as shown in the next section.

## Iterating Through the Custom Dataset

```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
torch.manual_seed(0)

num_epochs = 5
for epoch in range(num_epochs):

    for batch_idx, (x, y) in enumerate(train_loader):

        print('Epoch:', epoch+1, end='')
        print(' | Batch index:', batch_idx, end='')
        print(' | Batch size:', y.size()[0])

        x = x.to(device)
        y = y.to(device)

        # do model training on x and y here
```

**Remember that we kept an open connection to the HDF5 database in the `Hdf5Dataset` (via `self.h5f = h5py.File(h5_path, 'r')`). Once we are done, we may want to close this connection:**

```
train_dataset.h5f.close()
```

This notebook is presented with slight modifications from:
https://github.com/rasbt/deep-learning-book/blob/master/code/model_zoo/pytorch_ipynb/custom-data-loader-csv.ipynb

# Please buy Sebastian Raschka's awesome book
https://leanpub.com/ann-and-deeplearning
**Versionen** ``` !python -c "import torch; print(torch.__version__)" !python -c "import torch; print(torch.version.cuda)" !python --version !nvidia-smi ``` - PyTorch Geometric => Erstellen von Graph Neural Network - RDKit => Moleküldaten ``` #@title # Install rdkit import sys import os import requests import subprocess import shutil from logging import getLogger, StreamHandler, INFO logger = getLogger(__name__) logger.addHandler(StreamHandler()) logger.setLevel(INFO) def install( chunk_size=4096, file_name="Miniconda3-latest-Linux-x86_64.sh", url_base="https://repo.continuum.io/miniconda/", conda_path=os.path.expanduser(os.path.join("~", "miniconda")), rdkit_version=None, add_python_path=True, force=False): """install rdkit from miniconda ``` import rdkit_installer rdkit_installer.install() ``` """ python_path = os.path.join( conda_path, "lib", "python{0}.{1}".format(*sys.version_info), "site-packages", ) if add_python_path and python_path not in sys.path: logger.info("add {} to PYTHONPATH".format(python_path)) sys.path.append(python_path) if os.path.isdir(os.path.join(python_path, "rdkit")): logger.info("rdkit is already installed") if not force: return logger.info("force re-install") url = url_base + file_name python_version = "{0}.{1}.{2}".format(*sys.version_info) logger.info("python version: {}".format(python_version)) if os.path.isdir(conda_path): logger.warning("remove current miniconda") shutil.rmtree(conda_path) elif os.path.isfile(conda_path): logger.warning("remove {}".format(conda_path)) os.remove(conda_path) logger.info('fetching installer from {}'.format(url)) res = requests.get(url, stream=True) res.raise_for_status() with open(file_name, 'wb') as f: for chunk in res.iter_content(chunk_size): f.write(chunk) logger.info('done') logger.info('installing miniconda to {}'.format(conda_path)) subprocess.check_call(["bash", file_name, "-b", "-p", conda_path]) logger.info('done') logger.info("installing rdkit") subprocess.check_call([ os.path.join(conda_path, "bin", "conda"), "install", "--yes", "-c", "rdkit", "python==3.7.3", "rdkit" if rdkit_version is None else "rdkit=={}".format(rdkit_version)]) logger.info("done") import rdkit logger.info("rdkit-{} installation finished!".format(rdkit.__version__)) if __name__ == "__main__": install() ``` **PyTorch installieren** ``` import torch pytorch_version = "torch-" + torch.__version__ + ".html" !pip install --no-index torch-scatter -f https://pytorch-geometric.com/whl/$pytorch_version !pip install --no-index torch-sparse -f https://pytorch-geometric.com/whl/$pytorch_version !pip install --no-index torch-cluster -f https://pytorch-geometric.com/whl/$pytorch_version !pip install --no-index torch-spline-conv -f https://pytorch-geometric.com/whl/$pytorch_version !pip install torch-geometric ``` **Infos zum Datensatz** "*ESOL* ist ein kleiner Datensatz, der aus Wasserlöslichkeitsdaten für 1128 Verbindungen besteht. Der Datensatz wurde verwendet, um Modelle zu trainieren, die die Löslichkeit direkt aus chemischen Strukturen (wie in SMILES-Strings kodiert) schätzen. Beachten Sie, dass diese Strukturen keine 3D-Koordinaten enthalten, da die Löslichkeit eine Eigenschaft eines Moleküls und nicht seiner speziellen Konformeren ist." 
Datentyp => SMILES (Simplified Molecular Input Line Entry System, https://medium.com/@sunitachoudhary103/generating-molecules-using-a-char-rnn-in-pytorch-16885fd9394b) Nähere Infos zum Datensatz selbst: http://moleculenet.ai/datasets-1 ``` import rdkit from torch_geometric.datasets import MoleculeNet # Load the ESOL dataset data = MoleculeNet(root=".", name="ESOL") data print("Dataset type: ", type(data)) print("Dataset features: ", data.num_features) print("Dataset target: ", data.num_classes) print("Dataset length: ", data.len) print("Dataset sample: ", data[0]) print("Sample nodes: ", data[0].num_nodes) print("Sample edges: ", data[0].num_edges) ``` edge_index => Verbindung der Graphen smiles = Molekül und dessen Atome x = Node-Features (32 Nodes, jedes mit 9 Features) y = Labels (Dimensionen) ``` # Shape: [num_nodes, num_node_features] data[0].x data[0].y ``` **Konvertierung von SMILES in RDKit Moleküle** ``` data[0]["smiles"] ``` SMILES Moleküle als Graphen ``` from rdkit import Chem from rdkit.Chem.Draw import IPythonConsole molecule = Chem.MolFromSmiles(data[0]["smiles"]) molecule type(molecule) ``` **Implementierung von GNNs** Selbes Verfahren wie mit CNNs: - in_channels = Size of each input sample. - out_channels = Size of each output sample. Verschiedene Learning Problems (Node-, Edge- oder Graph-Prediction) benötigen verschiedene GNN-Architekturen: - Graph-Prediction => Kombinierung von Node-Embeddings **Vorgehen** 1. 3 Convolutional Layers hinzufügen 2. Pooling Layer hinzufügen (Informationen der einzelnen Knoten kombinieren, da Vorhersage auf Graphenebene) ``` import torch from torch.nn import Linear import torch.nn.functional as F from torch_geometric.nn import GCNConv, TopKPooling, global_mean_pool from torch_geometric.nn import global_mean_pool as gap, global_max_pool as gmp embedding_size = 64 # 64, da grosse Molekuele # je mehr Layers man hinzufuegt, desto mehr Informationen erhaelt man aus den Graphen class GCN(torch.nn.Module): def __init__(self): super(GCN, self).__init__() torch.manual_seed(42) # GCN layers self.initial_conv = GCNConv(data.num_features, embedding_size) self.conv1 = GCNConv(embedding_size, embedding_size) self.conv2 = GCNConv(embedding_size, embedding_size) self.conv3 = GCNConv(embedding_size, embedding_size) # Linear Layer als Final Output Layer fuer das Regressionsproblem # Output layer self.out = Linear(embedding_size*2, data.num_classes) def forward(self, x, edge_index, batch_index): # First Conv layer hidden = self.initial_conv(x, edge_index) hidden = F.tanh(hidden) # Other Conv layers hidden = self.conv1(hidden, edge_index) hidden = F.tanh(hidden) hidden = self.conv2(hidden, edge_index) hidden = F.tanh(hidden) hidden = self.conv3(hidden, edge_index) hidden = F.tanh(hidden) # Global Pooling hidden = torch.cat([gmp(hidden, batch_index), gap(hidden, batch_index)], dim=1) # Finaler (lineare) Classifier out = self.out(hidden) return out, hidden model = GCN() print(model) print("Number of parameters: ", sum(p.numel() for p in model.parameters())) ``` **GNN-Training** ``` from torch_geometric.data import DataLoader import warnings warnings.filterwarnings("ignore") # Root mean squared error loss_fn = torch.nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.0007) # GPU zum trainieren device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = model.to(device) # Data Loader data_size = len(data) NUM_GRAPHS_PER_BATCH = 64 loader = DataLoader(data[:int(data_size * 0.8)], batch_size=NUM_GRAPHS_PER_BATCH, shuffle=True) 
test_loader = DataLoader(data[int(data_size * 0.8):], batch_size=NUM_GRAPHS_PER_BATCH, shuffle=True) def train(data): for batch in loader: batch.to(device) optimizer.zero_grad() pred, embedding = model(batch.x.float(), batch.edge_index, batch.batch) loss = torch.sqrt(loss_fn(pred, batch.y)) loss.backward() optimizer.step() return loss, embedding print("Training starten...") losses = [] for epoch in range(2000): loss, h = train(data) losses.append(loss) if epoch % 100 == 0: print(f"Epoch {epoch} | Train Loss {loss}") ``` **Visulisierung der Training Loss** ``` import seaborn as sns losses_float = [float(loss.cpu().detach().numpy()) for loss in losses] loss_indices = [i for i,l in enumerate(losses_float)] plt = sns.lineplot(loss_indices, losses_float) plt ``` **Test-Prediction** ``` import pandas as pd test_batch = next(iter(test_loader)) with torch.no_grad(): test_batch.to(device) pred, embed = model(test_batch.x.float(), test_batch.edge_index, test_batch.batch) df = pd.DataFrame() df["y_real"] = test_batch.y.tolist() df["y_pred"] = pred.tolist() df["y_real"] = df["y_real"].apply(lambda row: row[0]) df["y_pred"] = df["y_pred"].apply(lambda row: row[0]) df plt = sns.scatterplot(data=df, x="y_real", y="y_pred") plt.set(xlim=(-7, 2)) plt.set(ylim=(-7, 2)) plt ```
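A small, hedged addition (not in the original notebook): the scatter plot above can be complemented with a numeric error on the same test batch, computed from the `df` built in the previous cell:

```
# Quantify the test-batch error that the scatter plot only shows qualitatively.
import numpy as np

errors = df["y_real"] - df["y_pred"]
rmse = float(np.sqrt((errors ** 2).mean()))
mae = float(errors.abs().mean())
print(f"Test batch RMSE: {rmse:.3f} | MAE: {mae:.3f}")
```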
# Fix Labels ## Alphabets are put as words in some of the labels ``` import sys import os from json import load, dump sys.path.append(os.path.abspath(os.path.join('..'))) from scripts.logger_creator import CreateLogger # Initializing Logger logger = CreateLogger('LabelsFixer', handlers=1) logger = logger.get_default_logger() class LabelCleaner(): def __init__(self, train_labels: str = '../data/train_labels.json', test_labels: str = '../data/test_labels.json') -> None: try: self.train_labels_path = train_labels self.test_labels_path = test_labels logger.info('Successfully Created Label Cleaner Class Object') except Exception as e: logger.exception('Failed to create Label Cleaner Class Object') def load_labels(self): try: with open(self.train_labels_path, 'r', encoding='UTF-8') as label_file: self.train_labels = load(label_file) with open(self.test_labels_path, 'r', encoding='UTF-8') as label_file: self.test_labels = load(label_file) logger.info('Successfully Loaded Train and Test Label Files') except Exception as e: logger.exception('Failed to Load Labels') def clean_suffixes(self): self.train_cleaned_labels = self.clean_labels_suffixes(self.train_labels) self.test_cleaned_labels = self.clean_labels_suffixes(self.test_labels) def save_labels(self, train_file_name: str = '../data/train_labels.json', test_file_name: str = '../data/test_labels.json') -> None: try: with open(train_file_name, "w", encoding='UTF-8') as export_file: dump(self.train_cleaned_labels, export_file, indent=4, sort_keys=True, ensure_ascii=False) with open(test_file_name, "w", encoding='UTF-8') as export_file: dump(self.test_cleaned_labels, export_file, indent=4, sort_keys=True, ensure_ascii=False) logger.info(f'Successfully Saved Cleaned Lables in: {train_file_name} and {test_file_name}') except Exception as e: logger.exception('Failed to Save Cleaned lables') def clean_labels_suffixes(self, label_dict:dict): try: cleaned_labels = {} for key, label in label_dict.items(): word_list = label.split() cleaned_label = [] append_prefix = None prefix_words = ['እ', 'የ', "አይ", "ሲ", "አላ",'እንዲ', 'ኰ', 'በ', 'ስለ', 'የሚ', 'ያ', 'አ', 'ለ', 'ከ', 'ተጉ', 'ሳ', 'ጐረ', 'አል', 'እጀ', 'ባ', 'እንዳስ', 'በተ', 'ተና', 'እንደ', 'ሳይ', 'ንግስተ', 'ሊ', 'እንደ', 'ሊ', 'የተ', 'ጠቁ', 'ተ', 'እያ', 'እን', 'ተሽ', 'አሳ', 'አከራ', 'አስራ', 'ለባለ', 'አለ', 'ከሚያ', 'ሳይ', 'ካይ', 'እንዳል', 'ካ', 'ሊያ', 'ያመኑ', 'አሰባ', 'እንደሚ', 'እየ'] suffix_words = ['ን', "ም", "ና", "ያት",'ው', 'ነዋል', 'ተው', 'መ', 'መና', 'ች', 'ማት', 'ተር', 'ኝ', 'ቱ', 'ሎ', 'ት', 'ሁ', 'ጤ', 'ብ', 'ፋው', 'ዬ', 'ጉር', 'ጉ', 'ሯቸው', 'ወድ', 'ስ', 'ዬን', 'ጓጉ', 'ቻት', 'ጔ', 'ወ', 'ሚ', 'ልሽ', 'ንም', 'ሺ', 'ኲ', 'ቷል', 'ዋል', 'ቸውን', 'ተኛ', 'ስት', 'ዎች', 'ታል', 'ል', 'ዋጣ', 'ያችን', 'ችን', 'ውን', 'ስቶች', 'በታል', 'ነውን', 'ችል', 'ቸው', 'ባቸዋል', 'ሉት', 'ሉት', 'ላቸው', 'ተውናል', 'ችሏል', 'ዶች'] for word in word_list: if(word in prefix_words): if(append_prefix != None): append_prefix = append_prefix + word else: append_prefix = word try: if(word == word_list[-1]): cleaned_label[-1] = cleaned_label[-1] + append_prefix continue except: continue elif(word in suffix_words): if(append_prefix != None): append_prefix = append_prefix + word else: try: cleaned_label[-1] = cleaned_label[-1] + word except: append_prefix = word continue elif(append_prefix != None): word = append_prefix + word append_prefix = None cleaned_label.append(word) cleaned_labels[key] = ' '.join(cleaned_label) logger.info('Successfully Cleaned Label Suffixes') return cleaned_labels except Exception as e: logger.exception('Failed To Clean Labels') def clean_and_save(self): self.load_labels() self.clean_suffixes() self.save_labels() label_cleaner = 
LabelCleaner() label_cleaner.clean_and_save() ```
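A hedged usage sketch (an editorial addition): `clean_and_save()` writes the cleaned labels back to the same default paths it read from. Pointing `save_labels` at new, hypothetical file names keeps the originals intact for comparison:

```
# Hypothetical output paths -- adjust to wherever the cleaned labels should live.
cleaner = LabelCleaner(train_labels='../data/train_labels.json',
                       test_labels='../data/test_labels.json')
cleaner.load_labels()
cleaner.clean_suffixes()
cleaner.save_labels(train_file_name='../data/train_labels_clean.json',
                    test_file_name='../data/test_labels_clean.json')
```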
### Note * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps. ``` # Dependencies and Setup import pandas as pd # File to Load (Remember to Change These) school_data_to_load = "Resources/schools_complete.csv" student_data_to_load = "Resources/students_complete.csv" # Read School and Student Data File and store into Pandas DataFrames school_data = pd.read_csv(school_data_to_load) student_data = pd.read_csv(student_data_to_load) # Combine the data into a single dataset. school_data_complete = pd.merge(student_data, school_data, how="left", on=["school_name", "school_name"]) ``` ## District Summary * Calculate the total number of schools * Calculate the total number of students * Calculate the total budget * Calculate the average math score * Calculate the average reading score * Calculate the percentage of students with a passing math score (70 or greater) * Calculate the percentage of students with a passing reading score (70 or greater) * Calculate the percentage of students who passed math **and** reading (% Overall Passing) * Create a dataframe to hold the above results * Optional: give the displayed data cleaner formatting ## School Summary * Create an overview table that summarizes key metrics about each school, including: * School Name * School Type * Total Students * Total School Budget * Per Student Budget * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * % Overall Passing (The percentage of students that passed math **and** reading.) * Create a dataframe to hold the above results ## Top Performing Schools (By % Overall Passing) * Sort and display the top five performing schools by % overall passing. ## Bottom Performing Schools (By % Overall Passing) * Sort and display the five worst-performing schools by % overall passing. ## Math Scores by Grade * Create a table that lists the average Reading Score for students of each grade level (9th, 10th, 11th, 12th) at each school. * Create a pandas series for each grade. Hint: use a conditional statement. * Group each series by school * Combine the series into a dataframe * Optional: give the displayed data cleaner formatting ## Reading Score by Grade * Perform the same operations as above for reading scores ## Scores by School Spending * Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following: * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two) ## Scores by School Size * Perform the same operations as above, based on school size. ## Scores by School Type * Perform the same operations as above, based on school type
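None of the metrics above are implemented in this notebook, so as a hedged starting-point sketch, the District Summary could be assembled roughly as follows. The column names `budget`, `math_score` and `reading_score` are assumptions about the two CSVs; `school_name` is the merge key already used above, and the 70-point passing threshold comes from the instructions:

```
# Rough sketch of the District Summary (column names other than school_name are assumed).
total_schools = school_data["school_name"].nunique()
total_students = len(school_data_complete)            # one row per student after the merge
total_budget = school_data["budget"].sum()

avg_math = school_data_complete["math_score"].mean()
avg_reading = school_data_complete["reading_score"].mean()

passing_math = school_data_complete["math_score"] >= 70
passing_reading = school_data_complete["reading_score"] >= 70

district_summary = pd.DataFrame([{
    "Total Schools": total_schools,
    "Total Students": total_students,
    "Total Budget": total_budget,
    "Average Math Score": avg_math,
    "Average Reading Score": avg_reading,
    "% Passing Math": passing_math.mean() * 100,
    "% Passing Reading": passing_reading.mean() * 100,
    "% Overall Passing": (passing_math & passing_reading).mean() * 100,
}])
district_summary
```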
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import sklearn as sk from sklearn.neighbors import KNeighborsRegressor from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error # Набор данных взят с https://www.kaggle.com/aungpyaeap/fish-market # Параметры нескольких популярных промысловых рыб # length 1 = Body height # length 2 = Total Length # length 3 = Diagonal Length fish_data = pd.read_csv("../../datasets/Fish.csv", delimiter=',') print(fish_data) # Выделим входные параметры и целевое значение x_labels = ['Height', 'Width'] y_label = 'Weight' data = fish_data[x_labels + [y_label]] print(data) # Определим размер валидационной и тестовой выборок val_test_size = round(0.2*len(data)) print(val_test_size) # Генерируем уникальный seed my_code = "Пысларь" seed_limit = 2 ** 32 my_seed = int.from_bytes(my_code.encode(), "little") % seed_limit # Создадим обучающую, валидационную и тестовую выборки random_state = my_seed train_val, test = train_test_split(data, test_size=val_test_size, random_state=random_state) train, val = train_test_split(train_val, test_size=val_test_size, random_state=random_state) print(len(train), len(val), len(test)) # Выделим обучающую, валидационную и тестовую выборки train_x = train[x_labels] train_y = np.array(train[y_label]).reshape(-1,1) val_x = val[x_labels] val_y = np.array(val[y_label]).reshape(-1,1) test_x = test[x_labels] test_y = np.array(test[y_label]).reshape(-1,1) # Нормируем значения параметров scaler_x = MinMaxScaler() scaler_x.fit(train_x) scaled_train_x = scaler_x.transform(train_x) scaler_y = MinMaxScaler() scaler_y.fit(train_y) scaled_train_y = scaler_y.transform(train_y) # Создадим модель метода k-ближайших соседей и обучим ее на нормированных данных. По умолчанию k = 5. minmse=10 mink=0 for k in range(1,51): model1 = KNeighborsRegressor(n_neighbors = k) model1.fit(scaled_train_x, scaled_train_y) scaled_val_x = scaler_x.transform(val_x) scaled_val_y = scaler_y.transform(val_y) val_predicted = model1.predict(scaled_val_x) mse1 = mean_squared_error(scaled_val_y, val_predicted) if mse1<minmse: minmse=mse1 mink=k print("минимальная среднеквадратичная ошибка",minmse) print("значение k, которому соответсвует минимальная среднеквадратичная ошибка",mink) model1 = KNeighborsRegressor(n_neighbors = mink) model1.fit(scaled_train_x, scaled_train_y) val_predicted = model1.predict(scaled_val_x) mse1 = mean_squared_error(scaled_val_y, val_predicted) print(mse1) # Проверим результат на тестевойой выборке. scaled_test_x = scaler_x.transform(test_x) scaled_test_y = scaler_y.transform(test_y) test_predicted = model1.predict(scaled_test_x) mse2 = mean_squared_error(scaled_test_y,test_predicted) print(mse2) ```
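A small, hedged addition (not in the original notebook): since `matplotlib` is already imported, the validation error for every `k` tried in the loop above can be plotted, which makes the chosen `mink` easier to justify visually:

```
# Re-run the k sweep, keeping every validation MSE so it can be plotted.
k_values = list(range(1, 51))
val_errors = []
for k in k_values:
    model = KNeighborsRegressor(n_neighbors=k)
    model.fit(scaled_train_x, scaled_train_y)
    val_errors.append(mean_squared_error(scaled_val_y, model.predict(scaled_val_x)))

plt.plot(k_values, val_errors, marker='o')
plt.axvline(mink, color='red', linestyle='--', label=f'chosen k = {mink}')
plt.xlabel('k (number of neighbours)')
plt.ylabel('Validation MSE')
plt.legend()
plt.show()
```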
``` %use lets-plot class gBar(titulo : String, cantidades : Any, conceptos : Any){ private var datos = mapOf<String, Any>( "Cantidad" to cantidades, "Conceptos" to conceptos ) private var z = lets_plot(datos) private var layer = geom_bar(stat = Stat.identity){ x = "Conceptos" y = "Cantidad" fill = "Conceptos" } var graph = z + layer init{ graph = (z + layer + ggtitle(titulo)) } } class gLin(titulo : String, cantidades : Any, conceptos : Any){ private var datos = mapOf<String, Any>( "Cantidad" to cantidades, "Conceptos" to conceptos ) private var z = lets_plot(datos) private var layer = geom_line(stat = Stat.identity){ x = "Conceptos" y = "Cantidad" } var graph = z + layer init{ graph = (z + layer + ggtitle(titulo)) } } var mTodo = listOf( "junio '18", "julio '18", "agosto '18", "septiembre '18", "octubre '18", "noviembre '18", "diciembre '18", //enero 2019 "enero '19", "febrero '19", "marzo '19", "abril '19", "mayo '19", "junio '19", "julio '19", "agosto '19","septiembre '19", "octubre '19", "noviembre '19", "diciembre '19", //enero 2020 "enero '20", "febrero '20", "marzo '20", "abril '20", "mayo '20", "junio '20", //este mes "julio '20") var vTodo = listOf( 1062, 718, 861, 2724, 3472, 1761, 1501, //"enero 2019" 3207, 2886, 1834, 1846, 3168, 2145, 672, 2805, 3599, 5007, 2053, 3791, //"enero 2020" 2629, 2053, 3308, 5652, 6102, 8249, //"Este mes" 9600) var gTodo = gLin("Historia de la empresa", vTodo, mTodo) gTodo.graph var dAC = listOf(5652, 6102, 8249, 9600) var meses = listOf("Abril", "Mayo", "Junio", "Julio") var gBA = gLin("2020", dAC, meses) println("Abril: 5,652 \nMayo: 6,102 \nJunio: 8,249 \nJulio: 9,600 \nTotal: 29,603") gBA.graph var cant_v = listOf(1700, 30) var conc_v = listOf("Piedras sueltas", "collares") var vtas = gBar("Ventas desde abril 2020", cant_v, conc_v) println("Desde abril se han vendido 1700 piedras sueltas y 30 collares") vtas.graph var collar = 75 var c_col = 30 var t_col = collar * c_col var piedra = 13 var c_pi = 1700 var t_pi = piedra * c_pi var todo = t_col + t_pi var texto = "Cada collar en promedio vale ${collar} y cada piedra ${13}\n" + "${c_col} collares = \$${t_col} \n" + "${c_pi} piedras = \$${t_pi} \n\n" + "Total: \$${todo}" var poc = listOf("Piedras", "Collares", "Total") var costos = listOf(t_pi, t_col, todo) var grafica = gBar("Relacion de costos y total", costos, poc) print(texto) grafica.graph var jasC = listOf(718, 861, 2724, 3472) var jasM = listOf("julio '18", "agosto '18", "septiembre '18", "octubre '18") var jasG = gLin("Jul - Sept 2018: La primera Subida", jasC, jasM) println("Julio: 718 \nAgosto: 861 \nSeptiembre: 2,724 \nOctubre: 3,472") jasG.graph var ondC = listOf(3472, 1761, 1501) var ondM = listOf("octubre '18", "noviembre '18", "diciembre '18") var texto = "Octubre: 3,472 \nNoviembre: 1,761 \nDiciembre: 1,501" var grafica = gLin("Octubre - Diciembre 2018: La primera bajada", ondC, ondM) print(texto) grafica.graph var defmC = listOf(1501, 3207, 2886, 1834) var defmM = listOf("diciembre '18","enero '19","febrero '19","marzo '19") var texto = "Diciembre '18: 1,501 \nEnero '19: 3,207 \nFebrero '19: 2,886 \nMarzo '19: 1,834" var grafica = gLin("Enero - Febrero 2019: Los meses mas importantes", defmC, defmM) print(texto) grafica.graph var mjjaC = listOf(3168, 2145, 672, 2805) var mjjaM = listOf("Mayo", "Junio", "Julio", "Agosto") var texto = "Mayo '19: 3168 \nJunio: 2,145 \nJulio: 672 \nAgosto: 2,805" var grafica = gLin("Junio - Julio 2019: Caes y te levantas", mjjaC, mjjaM) print(texto) grafica.graph var asonC = listOf(2805, 3599, 5007, 
2053) var asonM = listOf("Agosto '19", "Septiembre '19", "Octubre '19", "Noviembre '19") var texto = "Agosto '19: 2,805 \nSeptiembre '19: 3,599 \nOctubre '19: 5,007 \nNoviembre '19: 2,053" var grafica = gLin("Agosto - Octubre 2019: Primera gran racha", asonC, asonM) print(texto) grafica.graph var fmaC = listOf(2053, 3308, 5652) var fmaM = listOf("Febrero '20", "Marzo '20", "Abril '20") var texto = "Febrero '19: 2,053 \nMarzo '19: 3,308 \nAbril '19: 5,652" var grafica = gLin("2020: La empresa crece", fmaC, fmaM) print(texto) grafica.graph var mjjC = listOf(6102, 8249, 9600) var mjjM = listOf("Mayo '20", "Junio '20", "Julio '20") var texto = "Mayo '20: 6,102 \nJunio '20: 8,249 \nJulio '20: 9,600" var grafica = gLin("2020: Lo mas reciente", mjjC, mjjM) print(texto) grafica.graph ```
Given the following patterns

$$P_1 = (2,6),\ t_1 = A$$
$$P_2 = (4,4),\ t_2 = A$$
$$P_3 = (6,3),\ t_3 = A$$
$$P_4 = (4,10),\ t_4 = B$$
$$P_5 = (7,10),\ t_5 = B$$
$$P_6 = (9,8),\ t_6 = B$$

train an ADALINE using the delta rule.

```
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np

# create the input vector
def createInputVector():
    p[0] = [2, 6]
    p[1] = [4, 4]
    p[2] = [6, 3]
    p[3] = [4, 10]
    p[4] = [7, 10]
    p[5] = [9, 8]
    print("p:", str(p))

p = np.zeros((6, 2))
createInputVector()

# create the target vector: A = 1 and B = -1
def createTargetVector():
    t[0] = 1
    t[1] = 1
    t[2] = 1
    t[3] = -1
    t[4] = -1
    t[5] = -1
    print("t:", str(t))

t = np.zeros(6)
createTargetVector()

# translate the ADALINE output into a class label
def translatePerceptronOutput(p, output):
    print("input: ", str(p), "output: ", str(output))
    if output > 0:
        print("Classified as A")
    else:
        print("Classified as B")

# define the ADALINE class
class ADALINE:

    def __init__(self, p_training, t_training, learning_rate):
        self.P = p_training
        self.T = t_training
        self.alfa = learning_rate
        self.E = np.ones(len(self.P))
        self.Errors = np.array([])
        self.initW()
        self.initBias()

    def initW(self):
        """Initialize the ADALINE weights"""
        self.W = np.random.rand(len(self.P[0]))
        print("initial W:", str(self.W))

    def initBias(self):
        """Initialize the ADALINE bias"""
        self.bias = np.random.randint(2)
        print("initial bias:", self.bias)

    def trainDeltaLearningRule(self, max_epoch):
        iterations = 0
        self.max_epoch = max_epoch
        self.plotTrainingSet()
        self.plotDecisionBoundary('red', 'initial boundary')
        while iterations < self.max_epoch:
            MSE = 0
            for index in range(len(self.P)):
                # network output (linear / purelin activation)
                a = self.purelin(np.dot(self.P[index], self.W) + self.bias)
                # error for this input
                self.E[index] = self.T[index] - a
                MSE = MSE + self.E[index]**2
                # delta (LMS) learning rule
                self.W = self.W + self.alfa * (self.E[index] * self.P[index])
                self.bias = self.bias + (self.alfa * self.E[index])
            iterations = iterations + 1
            MSE = MSE / len(self.P)
            self.Errors = np.concatenate((self.Errors, [MSE]), axis=0)
        print('final bias:', self.bias)
        print('final W:', self.W)
        self.plotDecisionBoundary('green', 'final boundary')
        plt.legend(loc="upper right")
        print("Epochs:", iterations)

    def purelin(self, x):
        return x

    def evaluate(self, new_p):
        return self.purelin(np.dot(new_p, self.W) + self.bias)

    def plotDecisionBoundary(self, color, label):
        plt.xlim([-1.0, 20.0])
        plt.ylim([-1.0, 20.0])
        x = np.linspace(-1, 20)
        y = -(self.bias / self.W[1]) - ((x * self.W[0]) / self.W[1])
        plt.plot(x, y, color=color, label=label)

    def plotTrainingSet(self):
        plt.plot(self.P[0:3, 0], self.P[0:3, 1], 's', color='black', label='A')
        plt.plot(self.P[3:6, 0], self.P[3:6, 1], '^', color='black', label='B')

    def plotErrors(self):
        plt.plot(self.Errors, label='Error')
        print("Last error:", self.Errors[-1])

# create the ADALINE object
ada = ADALINE(p, t, 0.0015)

# train the ADALINE using the delta rule, maximum of 5,000 epochs
ada.trainDeltaLearningRule(5000)

# evaluate the new coordinate (5,5)
new_p = np.array([5, 5])
res = ada.evaluate(new_p)
translatePerceptronOutput(new_p, res)

# evaluate the new coordinate (6,8)
new_p = np.array([6, 8])
res = ada.evaluate(new_p)
translatePerceptronOutput(new_p, res)

ada.plotErrors()
```

Important notes:

* Initializing the weights and bias with random values does not prevent the network from finding a solution.
* Training for more epochs helps reduce the error.
* The learning rate controls the speed of training: the smaller it is, the more epochs are needed to reach a good estimate.
* The solution can differ depending on the initial values, so new inputs may be classified differently from one run to the next.
* It is important to encode the targets as positive and negative values (here A = 1 and B = -1); otherwise the network will not converge to a useful decision boundary.
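Because the ADALINE uses a linear (`purelin`) output and minimizes the mean squared error, the weights the delta rule drifts towards can be cross-checked against an ordinary least-squares fit. The sketch below is an independent illustration (the variable names and the single explicit update step are illustrative, not part of the notebook code above):

```
import numpy as np

# Training patterns and targets from the exercise above (A = 1, B = -1).
P = np.array([[2, 6], [4, 4], [6, 3], [4, 10], [7, 10], [9, 8]], dtype=float)
T = np.array([1, 1, 1, -1, -1, -1], dtype=float)

# Augment the inputs with a constant 1 column so the bias is fitted as well,
# then solve the least-squares problem the delta rule approximates.
X = np.hstack([P, np.ones((len(P), 1))])
w_ls, *_ = np.linalg.lstsq(X, T, rcond=None)
print("least-squares weights:", w_ls[:2], "bias:", w_ls[2])

# One explicit delta-rule step for comparison: w <- w + alpha * e * p, b <- b + alpha * e
alpha, w, b = 0.0015, np.zeros(2), 0.0
a = P[0] @ w + b       # linear (purelin) output for the first pattern
e = T[0] - a           # error term used by the delta rule
w = w + alpha * e * P[0]
b = b + alpha * e
print("weights after one update:", w, "bias:", b)
```

Both routes should place the A patterns on the positive side of the boundary and the B patterns on the negative side.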
# LSTM+CRF实现序列标注 ## 概述 序列标注指给定输入序列,给序列中每个Token进行标注标签的过程。序列标注问题通常用于从文本中进行信息抽取,包括分词(Word Segmentation)、词性标注(Position Tagging)、命名实体识别(Named Entity Recognition, NER)等。以命名实体识别为例: | 输入序列 | 清 | 华 | 大 | 学 | 座 | 落 | 于 | 首 | 都 | 北 | 京 | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | |输出标注| B | I | I | I | O | O | O | O | O | B | I | 如上表所示,`清华大学` 和 `北京`是地名,需要将其识别,我们对每个输入的单词预测其标签,最后根据标签来识别实体。 > 这里使用了一种常见的命名实体识别的标注方法——“BIOE”标注,将一个实体(Entity)的开头标注为B,其他部分标注为I,非实体标注为O。 ## 条件随机场(Conditional Random Field, CRF) 从上文的举例可以看到,对序列进行标注,实际上是对序列中每个Token进行标签预测,可以直接视作简单的多分类问题。但是序列标注不仅仅需要对单个Token进行分类预测,同时相邻Token直接有关联关系。以`清华大学`一词为例: | 输入序列 | 清 | 华 | 大 | 学 | | | --- | --- | --- | --- | --- | --- | | 输出标注 | B | I | I | I | √ | | 输出标注 | O | I | I | I | × | 如上表所示,正确的实体中包含的4个Token有依赖关系,I前必须是B或I,而错误输出结果将`清`字标注为O,违背了这一依赖。将命名实体识别视为多分类问题,则每个词的预测概率都是独立的,易产生类似的问题,因此需要引入一种能够学习到此种关联关系的算法来保证预测结果的正确性。而条件随机场是适合此类场景的一种[概率图模型](https://en.wikipedia.org/wiki/Graphical_model)。下面对条件随机场的定义和参数化形式进行简析。 > 考虑到序列标注问题的线性序列特点,本节所述的条件随机场特指线性链条件随机场(Linear Chain CRF) 设$x=\{x_0, ..., x_n\}$为输入序列,$y=\{y_0, ..., y_n\}, y \in Y$为输出的标注序列,其中$n$为序列的最大长度,$Y$表示$x$对应的所有可能的输出序列集合。则输出序列$y$的概率为: $$\begin{align}P(y|x) = \frac{\exp{(\text{Score}(x, y)})}{\sum_{y' \in Y} \exp{(\text{Score}(x, y')})} \qquad (1)\end{align}$$ 设$x_i$, $y_i$为序列的第$i$个Token和对应的标签,则$\text{Score}$需要能够在计算$x_i$和$y_i$的映射的同时,捕获相邻标签$y_{i-1}$和$y_{i}$之间的关系,因此我们定义两个概率函数: 1. 发射概率函数$\psi_\text{EMIT}$:表示$x_i \rightarrow y_i$的概率。 2. 转移概率函数$\psi_\text{TRANS}$:表示$y_{i-1} \rightarrow y_i$的概率。 则可以得到$\text{Score}$的计算公式: $$\begin{align}\text{Score}(x,y) = \sum_i \log \psi_\text{EMIT}(x_i \rightarrow y_i) + \log \psi_\text{TRANS}(y_{i-1} \rightarrow y_i) \qquad (2)\end{align} $$ 设标签集合为$T$, 构造大小为$|T|x|T|$的矩阵$\textbf{P}$,用于存储标签间的转移概率;由编码层(可以为Dense、LSTM等)输出的隐状态$h$可以直接视作发射概率,此时$\text{Score}$的计算公式可以转化为: $$\begin{align}\text{Score}(x,y) = \sum_i h_i[y_i] + \textbf{P}_{y_{i-1}, y_{i}} \qquad (3)\end{align}$$ > 完整的CRF完整推导可参考[Log-Linear Models, MEMMs, and CRFs](http://www.cs.columbia.edu/~mcollins/crf.pdf) 接下来我们根据上述公式,使用MindSpore来实现CRF的参数化形式。首先实现CRF层的前向训练部分,将CRF和损失函数做合并,选择分类问题常用的负对数似然函数(Negative Log Likelihood, NLL),则有: $$\begin{align}\text{Loss} = -log(P(y|x)) \qquad (4)\end{align} $$ 由公式$(1)$可得, $$\begin{align}\text{Loss} = -log(\frac{\exp{(\text{Score}(x, y)})}{\sum_{y' \in Y} \exp{(\text{Score}(x, y')})}) \qquad (5)\end{align} $$ $$\begin{align}= log(\sum_{y' \in Y} \exp{(\text{Score}(x, y')}) - \text{Score}(x, y) \end{align}$$ 根据公式$(5)$,我们称被减数为Normalizer,减数为Score,分别实现后相减得到最终Loss。 ### Score计算 首先根据公式$(3)$计算正确标签序列所对应的得分,这里需要注意,除了转移概率矩阵$\textbf{P}$外,还需要维护两个大小为$|T|$的向量,分别作为序列开始和结束时的转移概率。同时我们引入了一个掩码矩阵$mask$,将多个序列打包为一个Batch时填充的值忽略,使得$\text{Score}$计算仅包含有效的Token。 ``` def compute_score(emissions, tags, seq_ends, mask, trans, start_trans, end_trans): # emissions: (seq_length, batch_size, num_tags) # tags: (seq_length, batch_size) # mask: (seq_length, batch_size) seq_length, batch_size = tags.shape mask = mask.astype(emissions.dtype) # 将score设置为初始转移概率 # shape: (batch_size,) score = start_trans[tags[0]] # score += 第一次发射概率 # shape: (batch_size,) score += emissions[0, mnp.arange(batch_size), tags[0]] for i in range(1, seq_length): # 标签由i-1转移至i的转移概率(当mask == 1时有效) # shape: (batch_size,) score += trans[tags[i - 1], tags[i]] * mask[i] # 预测tags[i]的发射概率(当mask == 1时有效) # shape: (batch_size,) score += emissions[i, mnp.arange(batch_size), tags[i]] * mask[i] # 结束转移 # shape: (batch_size,) last_tags = tags[seq_ends, mnp.arange(batch_size)] # score += 结束转移概率 # shape: (batch_size,) score += end_trans[last_tags] return 
score ``` ### Normalizer计算 根据公式$(5)$,Normalizer是$x$对应的所有可能的输出序列的Score的对数指数和(Log-Sum-Exp)。此时如果按穷举法进行计算,则需要将每个可能的输出序列Score都计算一遍,共有$|T|^{n}$个结果。这里我们采用动态规划算法,通过复用计算结果来提高效率。 假设需要计算从第$0$至第$i$个Token所有可能的输出序列得分$\text{Score}_{i}$,则可以先计算出从第$0$至第$i-1$个Token所有可能的输出序列得分$\text{Score}_{i-1}$。因此,Normalizer可以改写为以下形式: $$log(\sum_{y'_{0,i} \in Y} \exp{(\text{Score}_i})) = log(\sum_{y'_{0,i-1} \in Y} \exp{(\text{Score}_{i-1} + h_{i} + \textbf{P}})) \qquad (6)$$ 其中$h_i$为第$i$个Token的发射概率,$\textbf{P}$是转移矩阵。由于发射概率矩阵$h$和转移概率矩阵$\textbf{P}$独立于$y$的序列路径计算,可以将其提出,可得: $$log(\sum_{y'_{0,i} \in Y} \exp{(\text{Score}_i})) = log(\sum_{y'_{0,i-1} \in Y} \exp{(\text{Score}_{i-1}})) + h_{i} + \textbf{P} \qquad (7)$$ 根据公式(7),Normalizer的实现如下: ``` def compute_normalizer(emissions, mask, trans, start_trans, end_trans): # emissions: (seq_length, batch_size, num_tags) # mask: (seq_length, batch_size) seq_length = emissions.shape[0] # 将score设置为初始转移概率,并加上第一次发射概率 # shape: (batch_size, num_tags) score = start_trans + emissions[0] for i in range(1, seq_length): # 扩展score的维度用于总score的计算 # shape: (batch_size, num_tags, 1) broadcast_score = score.expand_dims(2) # 扩展emission的维度用于总score的计算 # shape: (batch_size, 1, num_tags) broadcast_emissions = emissions[i].expand_dims(1) # 根据公式(7),计算score_i # 此时broadcast_score是由第0个到当前Token所有可能路径 # 对应score的log_sum_exp # shape: (batch_size, num_tags, num_tags) next_score = broadcast_score + trans + broadcast_emissions # 对score_i做log_sum_exp运算,用于下一个Token的score计算 # shape: (batch_size, num_tags) next_score = mnp.log(mnp.sum(mnp.exp(next_score), axis=1)) # 当mask == 1时,score才会变化 # shape: (batch_size, num_tags) score = mnp.where(mask[i].expand_dims(1), next_score, score) # 最后加结束转移概率 # shape: (batch_size, num_tags) score += end_trans # 对所有可能的路径得分求log_sum_exp # shape: (batch_size,) return mnp.log(mnp.sum(mnp.exp(score), axis=1)) ``` ### Viterbi算法 在完成前向训练部分后,需要实现解码部分。这里我们选择适合求解序列最优路径的[Viterbi算法](https://en.wikipedia.org/wiki/Viterbi_algorithm)。与计算Normalizer类似,使用动态规划求解所有可能的预测序列得分。不同的是在解码时同时需要将第$i$个Token对应的score取值最大的标签保存,供后续使用Viterbi算法求解最优预测序列使用。 取得最大概率得分$\text{Score}$,以及每个Token对应的标签历史$\text{History}$后,根据Viterbi算法可以得到公式: $$P_{0,i} = max(P_{0, i-1}) + P_{i-1, i}$$ 从第0个至第$i$个Token对应概率最大的序列,只需要考虑从第0个至第$i-1$个Token对应概率最大的序列,以及从第$i$个至第$i-1$个概率最大的标签即可。因此我们逆序求解每一个概率最大的标签,构成最佳的预测序列。 > 由于静态图语法限制,我们将Viterbi算法求解最佳预测序列的部分作为后处理函数,不纳入后续CRF层的实现。 ``` def viterbi_decode(emissions, mask, trans, start_trans, end_trans): # emissions: (seq_length, batch_size, num_tags) # mask: (seq_length, batch_size) seq_length = mask.shape[0] score = start_trans + emissions[0] history = () for i in range(1, seq_length): broadcast_score = score.expand_dims(2) broadcast_emission = emissions[i].expand_dims(1) next_score = broadcast_score + trans + broadcast_emission # 求当前Token对应score取值最大的标签,并保存 indices = next_score.argmax(axis=1) history += (indices,) next_score = next_score.max(axis=1) score = mnp.where(mask[i].expand_dims(1), next_score, score) score += end_trans return score, history def post_decode(score, history, seq_length): # 使用Score和History计算最佳预测序列 batch_size = seq_length.shape[0] seq_ends = seq_length - 1 # shape: (batch_size,) best_tags_list = [] # 依次对一个Batch中每个样例进行解码 for idx in range(batch_size): # 查找使最后一个Token对应的预测概率最大的标签, # 并将其添加至最佳预测序列存储的列表中 best_last_tag = score[idx].argmax(axis=0) best_tags = [int(best_last_tag.asnumpy())] # # 重复查找每个Token对应的预测概率最大的标签,加入列表 for hist in reversed(history[:seq_ends[idx]]): best_last_tag = hist[idx][best_tags[-1]] best_tags.append(int(best_last_tag.asnumpy())) # 将逆序求解的序列标签重置为正序 best_tags.reverse() 
best_tags_list.append(best_tags) return best_tags_list ``` ### CRF层 完成上述前向训练和解码部分的代码后,将其组装完整的CRF层。考虑到输入序列可能存在Padding的情况,CRF的输入需要考虑输入序列的真实长度,因此除发射矩阵和标签外,加入`seq_length`参数传入序列Padding前的长度,并实现生成mask矩阵的`sequence_mask`方法。 综合上述代码,使用`nn.Cell`进行封装,最后实现完整的CRF层如下: ``` import mindspore import mindspore.nn as nn import mindspore.numpy as mnp from mindspore import Parameter from mindspore.common.initializer import initializer, Uniform def sequence_mask(seq_length, max_length, batch_first=False): """根据序列实际长度和最大长度生成mask矩阵""" range_vector = mnp.arange(0, max_length, 1, seq_length.dtype) result = range_vector < seq_length.view(seq_length.shape + (1,)) if batch_first: return result.astype(mindspore.int64) return result.astype(mindspore.int64).swapaxes(0, 1) class CRF(nn.Cell): def __init__(self, num_tags: int, batch_first: bool = False, reduction: str = 'sum') -> None: if num_tags <= 0: raise ValueError(f'invalid number of tags: {num_tags}') super().__init__() if reduction not in ('none', 'sum', 'mean', 'token_mean'): raise ValueError(f'invalid reduction: {reduction}') self.num_tags = num_tags self.batch_first = batch_first self.reduction = reduction self.start_transitions = Parameter(initializer(Uniform(0.1), (num_tags,)), name='start_transitions') self.end_transitions = Parameter(initializer(Uniform(0.1), (num_tags,)), name='end_transitions') self.transitions = Parameter(initializer(Uniform(0.1), (num_tags, num_tags)), name='transitions') def construct(self, emissions, tags=None, seq_length=None): if tags is None: return self._decode(emissions, seq_length) return self._forward(emissions, tags, seq_length) def _forward(self, emissions, tags=None, seq_length=None): if self.batch_first: batch_size, max_length = tags.shape emissions = emissions.swapaxes(0, 1) tags = tags.swapaxes(0, 1) else: max_length, batch_size = tags.shape if seq_length is None: seq_length = mnp.full((batch_size,), max_length, mindspore.int64) mask = sequence_mask(seq_length, max_length) # shape: (batch_size,) numerator = compute_score(emissions, tags, seq_length-1, mask, self.transitions, self.start_transitions, self.end_transitions) # shape: (batch_size,) denominator = compute_normalizer(emissions, mask, self.transitions, self.start_transitions, self.end_transitions) # shape: (batch_size,) llh = denominator - numerator if self.reduction == 'none': return llh elif self.reduction == 'sum': return llh.sum() elif self.reduction == 'mean': return llh.mean() return llh.sum() / mask.astype(emissions.dtype).sum() def _decode(self, emissions, seq_length=None): if self.batch_first: batch_size, max_length = emissions.shape[:2] emissions = emissions.swapaxes(0, 1) else: batch_size, max_length = emissions.shape[:2] if seq_length is None: seq_length = mnp.full((batch_size,), max_length, mindspore.int64) mask = sequence_mask(seq_length, max_length) return viterbi_decode(emissions, mask, self.transitions, self.start_transitions, self.end_transitions) ``` ## BiLSTM+CRF模型 在实现CRF后,我们设计一个双向LSTM+CRF的模型来进行命名实体识别任务的训练。模型结构如下: ```text nn.Embedding -> nn.LSTM -> nn.Dense -> CRF ``` 其中LSTM提取序列特征,经过Dense层变换获得发射概率矩阵,最后送入CRF层。具体实现如下: ``` class BiLSTM_CRF(nn.Cell): def __init__(self, vocab_size, embedding_dim, hidden_dim, num_tags, padding_idx=0): super().__init__() self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=padding_idx) self.lstm = nn.LSTM(embedding_dim, hidden_dim // 2, bidirectional=True, batch_first=True) self.hidden2tag = nn.Dense(hidden_dim, num_tags, 'he_uniform') self.crf = CRF(num_tags, batch_first=True) def construct(self, inputs, 
seq_length, tags=None): embeds = self.embedding(inputs) outputs, _ = self.lstm(embeds, seq_length=seq_length) feats = self.hidden2tag(outputs) crf_outs = self.crf(feats, tags, seq_length) return crf_outs ``` 完成模型设计后,我们生成两句例子和对应的标签,并构造词表和标签表。 ``` embedding_dim = 5 hidden_dim = 4 training_data = [( "清 华 大 学 坐 落 于 首 都 北 京".split(), "B I I I O O O O O B I".split() ), ( "重 庆 是 一 个 魔 幻 城 市".split(), "B I O O O O O O O".split() )] word_to_idx = {} word_to_idx['<pad>'] = 0 for sentence, tags in training_data: for word in sentence: if word not in word_to_idx: word_to_idx[word] = len(word_to_idx) tag_to_idx = {"B": 0, "I": 1, "O": 2} len(word_to_idx) ``` 接下来实例化模型,选择优化器并将模型和优化器送入Wrapper。 > 由于CRF层已经进行了NLLLoss的计算,因此不需要再设置Loss。 ``` model = BiLSTM_CRF(len(word_to_idx), embedding_dim, hidden_dim, len(tag_to_idx)) optimizer = nn.SGD(model.trainable_params(), learning_rate=0.01, weight_decay=1e-4) train_one_step = nn.TrainOneStepCell(model, optimizer) ``` 将生成的数据打包成Batch,按照序列最大长度,对长度不足的序列进行填充,分别返回输入序列、输出标签和序列长度构成的Tensor。 ``` def prepare_sequence(seqs, word_to_idx, tag_to_idx): seq_outputs, label_outputs, seq_length = [], [], [] max_len = max([len(i[0]) for i in seqs]) for seq, tag in seqs: seq_length.append(len(seq)) idxs = [word_to_idx[w] for w in seq] labels = [tag_to_idx[t] for t in tag] idxs.extend([word_to_idx['<pad>'] for i in range(max_len - len(seq))]) labels.extend([tag_to_idx['O'] for i in range(max_len - len(seq))]) seq_outputs.append(idxs) label_outputs.append(labels) return mindspore.Tensor(seq_outputs, mindspore.int64), \ mindspore.Tensor(label_outputs, mindspore.int64), \ mindspore.Tensor(seq_length, mindspore.int64) data, label, seq_length = prepare_sequence(training_data, word_to_idx, tag_to_idx) data.shape, label.shape, seq_length.shape ``` 对模型进行预编译后,训练500个step。 > 训练流程可视化依赖`tqdm`库,可使用```pip install tqdm```命令安装。 ``` train_one_step.compile(data, seq_length, label) from tqdm import tqdm steps = 500 with tqdm(total=steps) as t: for i in range(steps): loss = train_one_step(data, seq_length, label) t.set_postfix(loss=loss) t.update(1) ``` 最后我们来观察训练500个step后的模型效果,首先使用模型预测可能的路径得分以及候选序列。 ``` score, histroy = model(data, seq_length) score ``` 使用后处理函数进行预测得分的后处理。 ``` predict = post_decode(score, histroy, seq_length) predict ``` 最后将预测的index序列转换为标签序列,打印输出结果,查看效果。 ``` idx_to_tag = {idx: tag for tag, idx in tag_to_idx.items()} def sequence_to_tag(sequences, idx_to_tag): outputs = [] for seq in sequences: outputs.append([idx_to_tag[i] for i in seq]) return outputs sequence_to_tag(predict, idx_to_tag) ```
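As a quick sanity check on the dynamic-programming Normalizer derived in equations (5)–(7), the framework-free NumPy sketch below compares it against brute-force enumeration of all $|T|^n$ tag sequences for a tiny random example. The sizes and random scores here are made up for illustration and are independent of the MindSpore code above:

```
import itertools
import numpy as np

# Tiny example: sequence length 3, 2 tags, mask all ones.
rng = np.random.default_rng(0)
L, T = 3, 2
emissions = rng.normal(size=(L, T))   # h_i[y_i]
trans = rng.normal(size=(T, T))       # P[y_{i-1}, y_i]
start, end = rng.normal(size=T), rng.normal(size=T)

def score(tags):
    # Score(x, y) for one tag sequence, as in equation (3) plus start/end transitions.
    s = start[tags[0]] + emissions[0, tags[0]]
    for i in range(1, L):
        s += trans[tags[i - 1], tags[i]] + emissions[i, tags[i]]
    return s + end[tags[-1]]

# Brute force: log-sum-exp over all |T|^L = 8 tag sequences.
brute = np.log(sum(np.exp(score(t)) for t in itertools.product(range(T), repeat=L)))

# Dynamic programming, mirroring compute_normalizer above.
alpha = start + emissions[0]                      # (num_tags,)
for i in range(1, L):
    # alpha_new[j] = log-sum-exp over previous tag k of alpha[k] + trans[k, j] + h_i[j]
    alpha = np.log(np.exp(alpha[:, None] + trans + emissions[i][None, :]).sum(axis=0))
alpha = np.log(np.exp(alpha + end).sum())

print(brute, alpha)   # the two values agree up to floating-point error
```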
```
import os, json, re, string
import pandas
from xml.etree import ElementTree as et

class document:

    def __init__(self, folder):
        self.folder = folder

    def read(self):
        group = {
            'path': [],
            'original content': [],
            'character size': [],
            'word size': [],
            'sentence size': [],
        }
        number = 0
        for index, name in enumerate(os.listdir(self.folder), 1):
            path = os.path.join(self.folder, name)
            group['path'] += [path]
            if 'xml' in name:
                data = et.parse(path)
                root = data.getroot()
                ## Extract all of the text strings.
                content = "".join([e.text for e in root.iter()])
                group['original content'] += [content]
                text = ""
                for article in root.findall("PubmedArticle"):
                    abstract = article.find("MedlineCitation").find("Article").find("Abstract")
                    text += "".join([e.text for e in abstract.findall("AbstractText")])
                character = re.sub(r'[^{}{}{}]+'.format(string.punctuation, string.digits, string.ascii_letters), '', text)
                sentence = text.split(".")[:-1]
                word = re.sub(r"\s+", " ", re.sub(r'[{}]+'.format(string.punctuation), " ", text)).split(" ")
                group['character size'] += [len(character)]
                group['sentence size'] += [len(sentence)]
                group['word size'] += [len(word)]
            if 'json' in name:
                with open(path, 'r') as paper:
                    data = json.load(paper)
                content = []
                text = []
                for i in data:
                    for k, v in i.items():
                        if k == 'tweet_text':
                            text += [v]
                        content += [v]
                content = "".join(content)
                text = ''.join(text)
                character = re.sub(r'[^{}{}{}]+'.format(string.punctuation, string.digits, string.ascii_letters), '', text)
                word = re.sub(r"\s+", " ", re.sub(r'[{}]+'.format(string.punctuation), " ", text)).split(" ")
                sentence = text.split(".")[:-1]
                group['original content'] += [content]
                group['character size'] += [len(character)]
                group['sentence size'] += [len(sentence)]
                group['word size'] += [len(word)]
            number = index
        self.data = pandas.DataFrame(group)
        print("load {} files...".format(number))

    def search(self, keyword):
        output = {
            "file": [],
            "exist": [],
            "number": []
        }
        for _, item in self.data.iterrows():
            output['file'] += [item['path']]
            output['exist'] += [bool(re.search(keyword, item['original content']))]
            output['number'] += [len(re.findall(keyword, item['original content']))]
        output = pandas.DataFrame(output)
        return output

doc = document(folder='demo_data')
doc.read()
doc.data
doc.search(keyword='Wang')
```
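For reference, the counting logic inside `document.read()` can be exercised on a plain string. The snippet below is a standalone illustration (the sample sentence is made up) of the same character, sentence, and word counts:

```
import re, string

text = "Deep learning helps NLP. It also helps information retrieval."

# Keep only ASCII letters, digits and punctuation for the character count.
character = re.sub(r'[^{}{}{}]+'.format(string.punctuation, string.digits, string.ascii_letters), '', text)
# Rough sentence count: split on "." and drop the trailing empty piece.
sentence = text.split(".")[:-1]
# Word count: replace punctuation with spaces, collapse whitespace, then split
# (strip() avoids counting a trailing empty token).
word = re.sub(r"\s+", " ", re.sub(r'[{}]+'.format(string.punctuation), " ", text)).strip().split(" ")

print(len(character), len(sentence), len(word))
```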
# Leverage

Make sure to watch the video and slides for this lecture for the full explanation!

$$\text{Leverage Ratio} = \frac{\text{Debt} + \text{Capital Base}}{\text{Capital Base}}$$

## Leverage from Algorithm

Make sure to watch the video for this! Basically run this and grab your own backtest ID as shown in the video.

More info: The `get_backtest` function provides programmatic access to the results of backtests run on the Quantopian platform. It takes a single parameter, the ID of a backtest for which results are desired. You can find the ID of a backtest in the URL of its full results page, which will be of the form: https://www.quantopian.com/algorithms/<algorithm_id>/<backtest_id>.

You are only entitled to view the backtests that either:

* 1) you have created
* 2) you are a collaborator on

```
def initialize(context):
    context.amzn = sid(16841)
    context.ibm = sid(3766)

    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open())
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close())

def rebalance(context, data):
    order_target_percent(context.amzn, 0.5)
    order_target_percent(context.ibm, -0.5)

def record_vars(context, data):
    record(amzn_close=data.current(context.amzn, 'close'))
    record(ibm_close=data.current(context.ibm, 'close'))
    record(Leverage=context.account.leverage)
    record(Exposure=context.account.net_leverage)
```

## Backtest Info

```
bt = get_backtest('5986b969dbab994fa4264696')
bt.algo_id
bt.recorded_vars
bt.recorded_vars['Leverage'].plot()
bt.recorded_vars['Exposure'].plot()
```

## High Leverage Example

You can actually specify to borrow on margin (NOT RECOMMENDED)

```
def initialize(context):
    context.amzn = sid(16841)
    context.ibm = sid(3766)

    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open())
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close())

def rebalance(context, data):
    order_target_percent(context.ibm, -2.0)
    order_target_percent(context.amzn, 2.0)

def record_vars(context, data):
    record(amzn_close=data.current(context.amzn, 'close'))
    record(ibm_close=data.current(context.ibm, 'close'))
    record(Leverage=context.account.leverage)
    record(Exposure=context.account.net_leverage)

bt = get_backtest('5986bd68ceda5554428a005b')
bt.recorded_vars['Leverage'].plot()
```

## Set Hard Limit on Leverage

http://www.zipline.io/appendix.html?highlight=leverage#zipline.api.set_max_leverage

```
def initialize(context):
    context.amzn = sid(16841)
    context.ibm = sid(3766)
    set_max_leverage(1.03)

    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open())
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close())

def rebalance(context, data):
    order_target_percent(context.ibm, -0.5)
    order_target_percent(context.amzn, 0.5)

def record_vars(context, data):
    record(amzn_close=data.current(context.amzn, 'close'))
    record(ibm_close=data.current(context.ibm, 'close'))
    record(Leverage=context.account.leverage)
    record(Exposure=context.account.net_leverage)
```
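As a quick illustration of the leverage-ratio formula at the top of this notebook, the snippet below plugs in made-up numbers; in the 2x-long / 2x-short example above, gross exposure is roughly four times the capital base, so the recorded leverage climbs towards 4.

```
# Hypothetical numbers: a $100,000 capital base with $50,000 borrowed on margin.
capital_base = 100000
debt = 50000

leverage_ratio = (debt + capital_base) / capital_base
print(leverage_ratio)  # 1.5 -> total exposure is 1.5x the capital base
```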
# Classifying Surnames with a Multilayer Perceptron ## Imports ``` from argparse import Namespace from collections import Counter import json import os import string import numpy as np import pandas as pd import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.utils.data import Dataset, DataLoader from tqdm import notebook ``` ## Data Vectorization classes ### The Vocabulary ``` class Vocabulary(object): """Class to process text and extract vocabulary for mapping""" def __init__(self, token_to_idx=None, add_unk=True, unk_token="<UNK>"): """ Args: token_to_idx (dict): a pre-existing map of tokens to indices add_unk (bool): a flag that indicates whether to add the UNK token unk_token (str): the UNK token to add into the Vocabulary """ if token_to_idx is None: token_to_idx = {} self._token_to_idx = token_to_idx self._idx_to_token = {idx: token for token, idx in self._token_to_idx.items()} self._add_unk = add_unk self._unk_token = unk_token self.unk_index = -1 if add_unk: self.unk_index = self.add_token(unk_token) def to_serializable(self): """ returns a dictionary that can be serialized """ return {'token_to_idx': self._token_to_idx, 'add_unk': self._add_unk, 'unk_token': self._unk_token} @classmethod def from_serializable(cls, contents): """ instantiates the Vocabulary from a serialized dictionary """ return cls(**contents) def add_token(self, token): """Update mapping dicts based on the token. Args: token (str): the item to add into the Vocabulary Returns: index (int): the integer corresponding to the token """ try: index = self._token_to_idx[token] except KeyError: index = len(self._token_to_idx) self._token_to_idx[token] = index self._idx_to_token[index] = token return index def add_many(self, tokens): """Add a list of tokens into the Vocabulary Args: tokens (list): a list of string tokens Returns: indices (list): a list of indices corresponding to the tokens """ return [self.add_token(token) for token in tokens] def lookup_token(self, token): """Retrieve the index associated with the token or the UNK index if token isn't present. 
Args: token (str): the token to look up Returns: index (int): the index corresponding to the token Notes: `unk_index` needs to be >=0 (having been added into the Vocabulary) for the UNK functionality """ if self.unk_index >= 0: return self._token_to_idx.get(token, self.unk_index) else: return self._token_to_idx[token] def lookup_index(self, index): """Return the token associated with the index Args: index (int): the index to look up Returns: token (str): the token corresponding to the index Raises: KeyError: if the index is not in the Vocabulary """ if index not in self._idx_to_token: raise KeyError("the index (%d) is not in the Vocabulary" % index) return self._idx_to_token[index] def __str__(self): return "<Vocabulary(size=%d)>" % len(self) def __len__(self): return len(self._token_to_idx) ``` ### The Vectorizer ``` class SurnameVectorizer(object): """ The Vectorizer which coordinates the Vocabularies and puts them to use""" def __init__(self, surname_vocab, nationality_vocab): """ Args: surname_vocab (Vocabulary): maps characters to integers nationality_vocab (Vocabulary): maps nationalities to integers """ self.surname_vocab = surname_vocab self.nationality_vocab = nationality_vocab def vectorize(self, surname): """ Args: surname (str): the surname Returns: one_hot (np.ndarray): a collapsed one-hot encoding """ vocab = self.surname_vocab one_hot = np.zeros(len(vocab), dtype=np.float32) for token in surname: one_hot[vocab.lookup_token(token)] = 1 return one_hot @classmethod def from_dataframe(cls, surname_df): """Instantiate the vectorizer from the dataset dataframe Args: surname_df (pandas.DataFrame): the surnames dataset Returns: an instance of the SurnameVectorizer """ surname_vocab = Vocabulary(unk_token="@") nationality_vocab = Vocabulary(add_unk=False) for index, row in surname_df.iterrows(): for letter in row.surname: surname_vocab.add_token(letter) nationality_vocab.add_token(row.nationality) return cls(surname_vocab, nationality_vocab) @classmethod def from_serializable(cls, contents): surname_vocab = Vocabulary.from_serializable(contents['surname_vocab']) nationality_vocab = Vocabulary.from_serializable(contents['nationality_vocab']) return cls(surname_vocab=surname_vocab, nationality_vocab=nationality_vocab) def to_serializable(self): return {'surname_vocab': self.surname_vocab.to_serializable(), 'nationality_vocab': self.nationality_vocab.to_serializable()} ``` ### The Dataset ``` class SurnameDataset(Dataset): def __init__(self, surname_df, vectorizer): """ Args: surname_df (pandas.DataFrame): the dataset vectorizer (SurnameVectorizer): vectorizer instatiated from dataset """ self.surname_df = surname_df self._vectorizer = vectorizer self.train_df = self.surname_df[self.surname_df.split=='train'] self.train_size = len(self.train_df) self.val_df = self.surname_df[self.surname_df.split=='val'] self.validation_size = len(self.val_df) self.test_df = self.surname_df[self.surname_df.split=='test'] self.test_size = len(self.test_df) self._lookup_dict = {'train': (self.train_df, self.train_size), 'val': (self.val_df, self.validation_size), 'test': (self.test_df, self.test_size)} self.set_split('train') # Class weights class_counts = surname_df.nationality.value_counts().to_dict() def sort_key(item): return self._vectorizer.nationality_vocab.lookup_token(item[0]) sorted_counts = sorted(class_counts.items(), key=sort_key) frequencies = [count for _, count in sorted_counts] self.class_weights = 1.0 / torch.tensor(frequencies, dtype=torch.float32) @classmethod def 
load_dataset_and_make_vectorizer(cls, surname_csv): """Load dataset and make a new vectorizer from scratch Args: surname_csv (str): location of the dataset Returns: an instance of SurnameDataset """ surname_df = pd.read_csv(surname_csv) train_surname_df = surname_df[surname_df.split=='train'] return cls(surname_df, SurnameVectorizer.from_dataframe(train_surname_df)) @classmethod def load_dataset_and_load_vectorizer(cls, surname_csv, vectorizer_filepath): """Load dataset and the corresponding vectorizer. Used in the case in the vectorizer has been cached for re-use Args: surname_csv (str): location of the dataset vectorizer_filepath (str): location of the saved vectorizer Returns: an instance of SurnameDataset """ surname_df = pd.read_csv(surname_csv) vectorizer = cls.load_vectorizer_only(vectorizer_filepath) return cls(surname_df, vectorizer) @staticmethod def load_vectorizer_only(vectorizer_filepath): """a static method for loading the vectorizer from file Args: vectorizer_filepath (str): the location of the serialized vectorizer Returns: an instance of SurnameVectorizer """ with open(vectorizer_filepath) as fp: return SurnameVectorizer.from_serializable(json.load(fp)) def save_vectorizer(self, vectorizer_filepath): """saves the vectorizer to disk using json Args: vectorizer_filepath (str): the location to save the vectorizer """ with open(vectorizer_filepath, "w") as fp: json.dump(self._vectorizer.to_serializable(), fp) def get_vectorizer(self): """ returns the vectorizer """ return self._vectorizer def set_split(self, split="train"): """ selects the splits in the dataset using a column in the dataframe """ self._target_split = split self._target_df, self._target_size = self._lookup_dict[split] def __len__(self): return self._target_size def __getitem__(self, index): """the primary entry point method for PyTorch datasets Args: index (int): the index to the data point Returns: a dictionary holding the data point's: features (x_surname) label (y_nationality) """ row = self._target_df.iloc[index] surname_vector = \ self._vectorizer.vectorize(row.surname) nationality_index = \ self._vectorizer.nationality_vocab.lookup_token(row.nationality) return {'x_surname': surname_vector, 'y_nationality': nationality_index} def get_num_batches(self, batch_size): """Given a batch size, return the number of batches in the dataset Args: batch_size (int) Returns: number of batches in the dataset """ return len(self) // batch_size def generate_batches(dataset, batch_size, shuffle=True, drop_last=True, device="cpu"): """ A generator function which wraps the PyTorch DataLoader. It will ensure each tensor is on the write device location. 
""" dataloader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=shuffle, drop_last=drop_last) for data_dict in dataloader: out_data_dict = {} for name, tensor in data_dict.items(): out_data_dict[name] = data_dict[name].to(device) yield out_data_dict ``` ## The Model: SurnameClassifier ``` class SurnameClassifier(nn.Module): """ A 2-layer Multilayer Perceptron for classifying surnames """ def __init__(self, input_dim, hidden_dim, output_dim): """ Args: input_dim (int): the size of the input vectors hidden_dim (int): the output size of the first Linear layer output_dim (int): the output size of the second Linear layer """ super(SurnameClassifier, self).__init__() self.fc1 = nn.Linear(input_dim, hidden_dim) self.fc2 = nn.Linear(hidden_dim, output_dim) def forward(self, x_in, apply_softmax=False): """The forward pass of the classifier Args: x_in (torch.Tensor): an input data tensor. x_in.shape should be (batch, input_dim) apply_softmax (bool): a flag for the softmax activation should be false if used with the Cross Entropy losses Returns: the resulting tensor. tensor.shape should be (batch, output_dim) """ intermediate_vector = F.relu(self.fc1(x_in)) prediction_vector = self.fc2(intermediate_vector) if apply_softmax: prediction_vector = F.softmax(prediction_vector, dim=1) return prediction_vector ``` ## Training Routine ### Helper functions ``` def make_train_state(args): return {'stop_early': False, 'early_stopping_step': 0, 'early_stopping_best_val': 1e8, 'learning_rate': args.learning_rate, 'epoch_index': 0, 'train_loss': [], 'train_acc': [], 'val_loss': [], 'val_acc': [], 'test_loss': -1, 'test_acc': -1, 'model_filename': args.model_state_file} def update_train_state(args, model, train_state): """Handle the training state updates. Components: - Early Stopping: Prevent overfitting. - Model Checkpoint: Model is saved if the model is better :param args: main arguments :param model: model to train :param train_state: a dictionary representing the training state values :returns: a new train_state """ # Save one model at least if train_state['epoch_index'] == 0: torch.save(model.state_dict(), train_state['model_filename']) train_state['stop_early'] = False # Save model if performance improved elif train_state['epoch_index'] >= 1: loss_tm1, loss_t = train_state['val_loss'][-2:] # If loss worsened if loss_t >= train_state['early_stopping_best_val']: # Update step train_state['early_stopping_step'] += 1 # Loss decreased else: # Save the best model if loss_t < train_state['early_stopping_best_val']: torch.save(model.state_dict(), train_state['model_filename']) # Reset early stopping step train_state['early_stopping_step'] = 0 # Stop early ? 
train_state['stop_early'] = \ train_state['early_stopping_step'] >= args.early_stopping_criteria return train_state def compute_accuracy(y_pred, y_target): _, y_pred_indices = y_pred.max(dim=1) n_correct = torch.eq(y_pred_indices, y_target).sum().item() return n_correct / len(y_pred_indices) * 100 ``` #### general utilities ``` def set_seed_everywhere(seed, cuda): np.random.seed(seed) torch.manual_seed(seed) if cuda: torch.cuda.manual_seed_all(seed) def handle_dirs(dirpath): if not os.path.exists(dirpath): os.makedirs(dirpath) ``` ### Settings and some prep work ``` args = Namespace( # Data and path information surname_csv="../data/surnames/surnames_with_splits.csv", vectorizer_file="vectorizer.json", model_state_file="model.pth", save_dir="../model_storage/ch4/surname_mlp", # Model hyper parameters hidden_dim=300, # Training hyper parameters seed=1337, num_epochs=100, early_stopping_criteria=5, learning_rate=0.001, batch_size=64, # Runtime options cuda=False, reload_from_files=False, expand_filepaths_to_save_dir=True, ) if args.expand_filepaths_to_save_dir: args.vectorizer_file = os.path.join(args.save_dir, args.vectorizer_file) args.model_state_file = os.path.join(args.save_dir, args.model_state_file) print("Expanded filepaths: ") print("\t{}".format(args.vectorizer_file)) print("\t{}".format(args.model_state_file)) # Check CUDA if not torch.cuda.is_available(): args.cuda = False args.device = torch.device("cuda" if args.cuda else "cpu") print("Using CUDA: {}".format(args.cuda)) # Set seed for reproducibility set_seed_everywhere(args.seed, args.cuda) # handle dirs handle_dirs(args.save_dir) ``` ### Initializations ``` if args.reload_from_files: # training from a checkpoint print("Reloading!") dataset = SurnameDataset.load_dataset_and_load_vectorizer(args.surname_csv, args.vectorizer_file) else: # create dataset and vectorizer print("Creating fresh!") dataset = SurnameDataset.load_dataset_and_make_vectorizer(args.surname_csv) dataset.save_vectorizer(args.vectorizer_file) vectorizer = dataset.get_vectorizer() classifier = SurnameClassifier(input_dim=len(vectorizer.surname_vocab), hidden_dim=args.hidden_dim, output_dim=len(vectorizer.nationality_vocab)) ``` ### Training loop ``` classifier = classifier.to(args.device) dataset.class_weights = dataset.class_weights.to(args.device) loss_func = nn.CrossEntropyLoss(dataset.class_weights) optimizer = optim.Adam(classifier.parameters(), lr=args.learning_rate) scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer=optimizer, mode='min', factor=0.5, patience=1) train_state = make_train_state(args) epoch_bar = notebook.tqdm(desc='training routine', total=args.num_epochs, position=0) dataset.set_split('train') train_bar = notebook.tqdm(desc='split=train', total=dataset.get_num_batches(args.batch_size), position=1, leave=True) dataset.set_split('val') val_bar = notebook.tqdm(desc='split=val', total=dataset.get_num_batches(args.batch_size), position=1, leave=True) try: for epoch_index in range(args.num_epochs): train_state['epoch_index'] = epoch_index # Iterate over training dataset # setup: batch generator, set loss and acc to 0, set train mode on dataset.set_split('train') batch_generator = generate_batches(dataset, batch_size=args.batch_size, device=args.device) running_loss = 0.0 running_acc = 0.0 classifier.train() for batch_index, batch_dict in enumerate(batch_generator): # the training routine is these 5 steps: # -------------------------------------- # step 1. zero the gradients optimizer.zero_grad() # step 2. 
compute the output y_pred = classifier(batch_dict['x_surname']) # step 3. compute the loss loss = loss_func(y_pred, batch_dict['y_nationality']) loss_t = loss.item() running_loss += (loss_t - running_loss) / (batch_index + 1) # step 4. use loss to produce gradients loss.backward() # step 5. use optimizer to take gradient step optimizer.step() # ----------------------------------------- # compute the accuracy acc_t = compute_accuracy(y_pred, batch_dict['y_nationality']) running_acc += (acc_t - running_acc) / (batch_index + 1) # update bar train_bar.set_postfix(loss=running_loss, acc=running_acc, epoch=epoch_index) train_bar.update() train_state['train_loss'].append(running_loss) train_state['train_acc'].append(running_acc) # Iterate over val dataset # setup: batch generator, set loss and acc to 0; set eval mode on dataset.set_split('val') batch_generator = generate_batches(dataset, batch_size=args.batch_size, device=args.device) running_loss = 0. running_acc = 0. classifier.eval() for batch_index, batch_dict in enumerate(batch_generator): # compute the output y_pred = classifier(batch_dict['x_surname']) # step 3. compute the loss loss = loss_func(y_pred, batch_dict['y_nationality']) loss_t = loss.to("cpu").item() running_loss += (loss_t - running_loss) / (batch_index + 1) # compute the accuracy acc_t = compute_accuracy(y_pred, batch_dict['y_nationality']) running_acc += (acc_t - running_acc) / (batch_index + 1) val_bar.set_postfix(loss=running_loss, acc=running_acc, epoch=epoch_index) val_bar.update() train_state['val_loss'].append(running_loss) train_state['val_acc'].append(running_acc) train_state = update_train_state(args=args, model=classifier, train_state=train_state) scheduler.step(train_state['val_loss'][-1]) if train_state['stop_early']: break train_bar.n = 0 val_bar.n = 0 epoch_bar.update() except KeyboardInterrupt: print("Exiting loop") # compute the loss & accuracy on the test set using the best available model classifier.load_state_dict(torch.load(train_state['model_filename'])) classifier = classifier.to(args.device) dataset.class_weights = dataset.class_weights.to(args.device) loss_func = nn.CrossEntropyLoss(dataset.class_weights) dataset.set_split('test') batch_generator = generate_batches(dataset, batch_size=args.batch_size, device=args.device) running_loss = 0. running_acc = 0. 
classifier.eval() for batch_index, batch_dict in enumerate(batch_generator): # compute the output y_pred = classifier(batch_dict['x_surname']) # compute the loss loss = loss_func(y_pred, batch_dict['y_nationality']) loss_t = loss.item() running_loss += (loss_t - running_loss) / (batch_index + 1) # compute the accuracy acc_t = compute_accuracy(y_pred, batch_dict['y_nationality']) running_acc += (acc_t - running_acc) / (batch_index + 1) train_state['test_loss'] = running_loss train_state['test_acc'] = running_acc print("Test loss: {};".format(train_state['test_loss'])) print("Test Accuracy: {}".format(train_state['test_acc'])) ``` ### Inference ``` def predict_nationality(surname, classifier, vectorizer): """Predict the nationality from a new surname Args: surname (str): the surname to classifier classifier (SurnameClassifer): an instance of the classifier vectorizer (SurnameVectorizer): the corresponding vectorizer Returns: a dictionary with the most likely nationality and its probability """ vectorized_surname = vectorizer.vectorize(surname) vectorized_surname = torch.tensor(vectorized_surname).view(1, -1) result = classifier(vectorized_surname, apply_softmax=True) probability_values, indices = result.max(dim=1) index = indices.item() predicted_nationality = vectorizer.nationality_vocab.lookup_index(index) probability_value = probability_values.item() return {'nationality': predicted_nationality, 'probability': probability_value} new_surname = input("Enter a surname to classify: ") classifier = classifier.to("cpu") prediction = predict_nationality(new_surname, classifier, vectorizer) print("{} -> {} (p={:0.2f})".format(new_surname, prediction['nationality'], prediction['probability'])) ``` ### Top-K Inference ``` vectorizer.nationality_vocab.lookup_index(8) def predict_topk_nationality(name, classifier, vectorizer, k=5): vectorized_name = vectorizer.vectorize(name) vectorized_name = torch.tensor(vectorized_name).view(1, -1) prediction_vector = classifier(vectorized_name, apply_softmax=True) probability_values, indices = torch.topk(prediction_vector, k=k) # returned size is 1,k probability_values = probability_values.detach().numpy()[0] indices = indices.detach().numpy()[0] results = [] for prob_value, index in zip(probability_values, indices): nationality = vectorizer.nationality_vocab.lookup_index(index) results.append({'nationality': nationality, 'probability': prob_value}) return results new_surname = input("Enter a surname to classify: ") classifier = classifier.to("cpu") k = int(input("How many of the top predictions to see? ")) if k > len(vectorizer.nationality_vocab): print("Sorry! That's more than the # of nationalities we have.. defaulting you to max size :)") k = len(vectorizer.nationality_vocab) predictions = predict_topk_nationality(new_surname, classifier, vectorizer, k=k) print("Top {} predictions:".format(k)) print("===================") for prediction in predictions: print("{} -> {} (p={:0.2f})".format(new_surname, prediction['nationality'], prediction['probability'])) ```
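To make the input representation concrete, the toy snippet below reproduces the "collapsed one-hot" encoding performed by `SurnameVectorizer.vectorize`, using a made-up five-character vocabulary (the vocabulary and surnames here are illustrative only):

```
import numpy as np

# '@' plays the UNK role, as in the vectorizer above.
toy_vocab = {'@': 0, 'a': 1, 'b': 2, 'n': 3, 'g': 4}

def collapsed_one_hot(surname, vocab):
    # Every character present in the surname switches on one position of a
    # vector whose length equals the character vocabulary size.
    one_hot = np.zeros(len(vocab), dtype=np.float32)
    for ch in surname:
        one_hot[vocab.get(ch, vocab['@'])] = 1.0   # unknown characters map to '@'
    return one_hot

print(collapsed_one_hot("banana", toy_vocab))   # [0. 1. 1. 1. 0.] -> repeated characters collapse
print(collapsed_one_hot("bang", toy_vocab))     # [0. 1. 1. 1. 1.]
```

Note that character order and counts are discarded by this encoding; the MLP only sees which characters occur in the surname.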
# Administering your GIS ArcGIS administrators can leverage the `gis.admin` module of **ArcGIS API for Python** to assist with and automate their administrative tasks. These tasks can include anything from checking the status of servers, assigning licenses to named user accounts to modifying the GIS's look and feel. ArcGIS Online Organizations (AGOL) and ArcGIS Enterprise instances vary on the amount of customization you can make. Enterprise organizations can be customized much more than ArcGIS Online organizations as Enterprise allows administrations full access. No matter what your organization is, the API and usage is identical. The table below illustrates the extent to which each can be customized and administered. <h3>Organizational differences</h3> <table border="1" class="dataframe"><thead><tr style="text-align: right;"><th>Function</th><th>ArcGIS Online</th><th>ArcGIS Enterprise</th></tr></thead><tbody><tr><td>collaborations</td><td>X</td><td>X</td></tr><tr><td>credits</td><td>X</td><td></td></tr><tr><td>federation</td><td></td><td>X</td></tr><tr><td>license</td><td>X</td><td>X</td></tr><tr><td>logs</td><td></td><td>X</td></tr><tr><td>machines</td><td></td><td>X</td></tr><tr><td>metadata</td><td>X</td><td>X</td></tr><tr><td>password_policy</td><td>X</td><td>X</td></tr><tr><td>security</td><td></td><td>X</td></tr><tr><td>server</td><td></td><td>X</td></tr><tr><td>site</td><td></td><td>X</td></tr><tr><td>system</td><td></td><td>X</td></tr><tr><td>ux</td><td>X</td><td>X</td></tr></tbody></table> Most properties on ArcGIS Online are available on ArcGIS Enterprise except 'credit reporting' because ArcGIS Enterprise does not consume credits. <blockquote><b>Note:</b> You need to log in using a named user account with administrator privileges. When you login, the API detects if you are an organizational administrator, then, the <b>GIS</b> object will ensure you gain access to the <b>admin</b> module.</blockquote> **Table of Contents** - [Managing named user licenses and entitlements](#Managing-named-user-licenses-and-entitlements) - [Listing apps licensed through the organization](#Listing-apps-licensed-through-the-organization) - [Getting available licenses for an app](#Getting-available-licenses-for-an-app) - [Querying extensions assigned to a named user account](#Querying-extensions-assigned-to-a-named-user-account) - [Assigning licenses and entitlements to a named user account](#Assigning-licenses-and-entitlements-to-a-named-user-account) - [Revoking licenses from a named user account](#Revoking-licenses-from-a-named-used-account) - [Managing ArcGIS Online credits](#Managing-ArcGIS-Online-credits) - [Viewing available credits](#Viewing-available-credits:) - [Managing credits through credit budgeting](#Managing-credits-through-credit-budgeting) - [Allocating credits to a user](#Allocating-credits-to-a-user) - [Checking credits assigned and available to a user](#Checking-credits-assigned-and-available-to-a-user) - [Attaching and removing servers from your GIS](#Attaching-and-removing-ArcGIS-Servers-from-your-GIS) - [Validating your servers](#Validating-your-servers) - [Unfederating a server](#Unfederating-a-server) - [Querying Portal logs](#Querying-Portal-logs) - [Filtering and querying Portal logs](#Filtering-and-querying-Portal-logs) - [Clearing logs](#Clearing-logs) - [Managing GIS security](#Managing-GIS-security) - [Working with password policies](#Working-with-password-policies) - [Inspecting password policy](#Inspecting-password-policy) - [Updating password 
policy](#Updating-password-policy) - [Resetting password policy](#Resetting-password-policy) - [Working with security configurations](#Working-with-security-configurations) - [Working with certificates](#SSL-certificates) - [Enterprise identity store](#Enterprise-identity-store) - [Managing Enterprise licenses and system settings](#Managing-Enterprise-licenses-and-system-settings) - [Inspecting licenses for Portal for ArcGIS](#Inspecting-licenses-for-Portal-for-ArcGIS) - [Releasing ArcGIS Pro licenses checked out for offline use](#Releasing-ArcGIS-Pro-licenses-checked-out-for-offline-use) - [Inspecting the machines powering your Portal for ArcGIS](#Inspecting-the-machines-powering-your-Portal-for-ArcGIS) - [Inspecting system directories](#Inspecting-system-directories) - [Inspecting web adaptors](#Inspecting-web-adaptors) - [Inspecting other system properties](#Inspecting-other-system-properties) ``` from arcgis.gis import GIS gis = GIS("https://portalname.domain.com/webadaptor", "username", "password") ``` ## Managing named user licenses and entitlements ArcGIS Online and Enterprise support assigning licenses for Esri premium apps such as ArcGIS Pro, Navigator for ArcGIS, AppStudio for ArcGIS Standard, Drone2Map for ArcGIS, ArcGIS Business Analyst web app, ArcGIS Community Analyst, GeoPlanner for ArcGIS, and other apps sold through ArcGIS Marketplace that use a per-member license type. As an administrator, you use the `gis.admin.license` class of Python API to view, manage and specify which members have licenses for these apps. To learn more about named user licensing model visit [manage licenses help](http://doc.arcgis.com/en/arcgis-online/administer/manage-licenses.htm). ### Listing apps licensed through the organization To list all the apps currently licensed through your organization, use the `all()` method: ``` gis.admin.license.all() ``` You can get the license for a particular app using the `get()` method: ``` pro_license = gis.admin.license.get('ArcGIS Pro') pro_license type(pro_license) ``` ### Getting available licenses for an app To query the list of all users licensed for an app, call the `all()` method from the `License` object corresponding to that app: ``` #get all users licensed for ArcGIS Pro pro_license.all() ``` Using the `plot()` method of the `License` object, you can quickly pull up a bar chart showing the number of assigned and remaining licenses for each extension of the app. ``` %matplotlib inline pro_license.plot() ``` Using the `License` object's `report` property, you can view the same information as a Pandas DataFrame table ``` pro_license.report ``` ### Querying extensions assigned to a named user account You can find which of the app's extensions are assigned to a particular user using the `License` object's `user_entitlement()` method: ``` pro_license.user_entitlement('username') ``` ### Assigning licenses and entitlements to a named user account You can assign licenses to an application and its extensions using the `assign()` method. ``` pro_license.assign(username='arcgis_python', entitlements='desktopBasicN') ``` ### Revoking licenses from a named used account To revoke an app's license from a user, call the `revoke()` method from the corresponding `License` object. To revoke all the entitlements, pass `*` as a string. ``` pro_license.revoke(username='arcgis_python', entitlements='*') ``` ## Managing ArcGIS Online credits If your GIS is an organization on ArcGIS Online, you would notice a `credits` property exposed on your `admin` object. 
You can use this to view and allocate credits to your users, set a default limit, etc. To learn more about credits refer [here](http://doc.arcgis.com/en/arcgis-online/reference/credits.htm).

<blockquote><b>Note:</b> ArcGIS Enterprise does not support the concept of credits. Hence, if your GIS is an instance of Enterprise, you would not see the `credits` property.</blockquote>

### Viewing available credits:

```
gis.admin.credits.credits
```

### Managing credits through credit budgeting

The credit budgeting feature of ArcGIS Online allows administrators to view, limit and allocate credits to the organization's users. Learn more about [credit budgeting here](http://doc.arcgis.com/en/arcgis-online/administer/configure-credits.htm).

You can use the `enable()` method to turn on credit budgeting.

```
gis.admin.credits.enable()
```

You can use the `is_enabled` property to verify that credit budgeting is turned on.

```
gis.admin.credits.is_enabled
```

Once you turn on credit budgeting, you can set a default limit on the number of credits for each user. In addition, you can set custom limits for individual users. The default limit applies when you create a new user and do not set a custom limit.

```
gis.admin.credits.default_limit
```

#### Allocating credits to a user

You can use the `allocate()` and `deallocate()` methods to allocate a custom number of credits to, or remove credits from, a named user.

```
#assign one tenth of the available credits to the arcgis_python account
api_acc_credits = gis.admin.credits.credits / 10
gis.admin.credits.allocate(username='arcgis_python', credits=api_acc_credits)
```

#### Checking credits assigned and available to a user

```
api_acc = gis.users.get('arcgis_python')
api_acc
```

When you turn on credit budgeting (using the `enable()` method), the `User` object gets additional properties indicating the `assignedCredits` and the remaining `availableCredits`. Thus, you can verify as shown below:

```
api_acc.assignedCredits
api_acc.availableCredits
```

As the user continues to consume credits, the `availableCredits` property can be used to check how much is left for that account. If a user does not have a limit set, the total available credits in the org become their available credits. The account shown below has no custom limit; hence, it inherits the org's total limit.

```
rohit = gis.users.get('rsingh_geosaurus')
rohit.availableCredits
```

#### Disable credit budgeting

You can disable this feature by calling the `disable()` method.

```
gis.admin.credits.disable()
```

## Attaching and removing ArcGIS Servers from your GIS

If your GIS is an instance of ArcGIS Enterprise, you can build it up by federating (attaching) ArcGIS Server sites to your Enterprise. During this step, you can assign a role to your server - such as Hosting or Federated. You can also assign a function such as 'Raster Analysis', 'GeoAnalytics' etc. to designate its purpose. Federating and maintaining your server bank is an important administrative task. To learn more about this topic and the implications of federation, refer [here](http://server.arcgis.com/en/server/latest/administer/windows/federate-an-arcgis-server-site-with-your-portal.htm).

<blockquote><b>Note:</b> Federation only applies to ArcGIS Enterprise orgs.
If your GIS is an org on ArcGIS Online, you cannot perform these tasks</blockquote> The `Federation` class of the `admin` module allows GIS administrators to script and automate tasks such as listing the servers in a GIS, identifying their role and function, federating new servers, unfederating servers under maintenance, validating the list of servers etc. Get the list of servers federated to the GIS: ``` gis.admin.federation.servers ``` There are 2 servers federated to this GIS, the first is a `HOSTING_SERVER` and the second a `FEDERATED_SERVER`. The `serverFunction` of the second server is set to `RasterAnalytics`. ### Validating your servers To validate all the servers attached to your GIS, call the `validate_all()` method. To validate a particular server, call `validate()` and pass the server info. ``` gis.admin.federation.validate_all() ``` The second server reports a failure as the Enterprise is unable to reach or ping it. This server requires maintenance. ### Unfederating a server You remove a server from the GIS by calling the `unfederate()` method and passing the `serverId`. ``` gis.admin.federation.servers['servers'][1]['id'] gis.admin.federation.unfederate('GFyaVzJXiogsxKxH') ``` ## Querying Portal logs Portal for ArcGIS records events that occur, and any errors associated with those events, to logs. Logs are an important tool for monitoring and troubleshooting problems with your portal. Information in the logs will help you identify errors and provide context on how to address problems. The logs also comprise a history of the events that occur over time. For example, the following events are recorded in the logs: - Installation and upgrade events, such as authorizing the software and creating the portal website - Publishing of services and items, such as hosted services, web maps, and data items - Content management events, such as sharing items, changing item ownership, and adding, updating, moving, and deleting items - Security events, such as users logging in to the portal, creating, deleting, and disabling users, creating and changing user roles, updating HTTP and HTTPS settings, import and export of security certificates, and updating the portal's identity store - Organization management events, such as adding and configuring groups, adding or removing users from a group, configuration of the gallery, basemaps, utility services, and federated servers, and configuring log settings and deleting logs - General events, such as updating the portal's search index and restarting the portal Understanding log messages is important to maintain your GIS. Refer [here](http://server.arcgis.com/en/portal/latest/administer/windows/about-portal-logs.htm) to learn more about logging in general and [here](http://server.arcgis.com/en/portal/latest/administer/windows/work-with-portal-logs.htm#ESRI_SECTION2_F96B4BDF7FBD4EFC865E316C1DFB460C) to understand what gets logged and what the messages mean. Using the `Logs` class of the `admin` module, administrators can query and work with Portal log messages. You can query the logging level and other settings from the `settings` property: ``` gis.admin.logs.settings ``` ### Filtering and querying Portal logs Using the `query()` method, you can filter and search for Portal logs. Refer to the [query API ref doc](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.admin.html#arcgis.gis.admin.Logs.query) for all the arguments supported. In the example below, logs for the previous 10 days is searched. 
``` import datetime import pandas as pd now = datetime.datetime.now() start_time = now - datetime.timedelta(days=10) start_time ``` You can pass a Python `Datetime` object to the time arguments. ``` recent_logs = gis.admin.logs.query(start_time = start_time) #print a message as a sample recent_logs['logMessages'][0] ``` You can construct a Pandas `DataFrame` from the query result and visualize the logs as a table: ``` log_df = pd.DataFrame.from_records(recent_logs) log_df.head(5) #display the first 5 records ``` Once you have the logs as a `DataFrame`, you can save it to disk in any format you choose. For instance, you can save it to a `csv` file for archival. ``` log_df.to_csv('./portal_logs_last_10_days.csv') ``` ### Clearing logs You can remove old logs and free up space on your Portal by calling the `clean()` method. Note, this action is not reversible. ``` gis.admin.logs.clean() ``` ## Managing GIS security One of the important tasks you carry out as an administrator is managing the security settings of your GIS. With the `admin` module, you can accomplish tasks such as setting the password policy, managing security certificates etc. ### Working with password policies #### Inspecting password policy You can use the `PasswordPolicy` class in the `admin` module to inspect and update the policy for your GIS. This is applicable if you GIS uses a built-in identity store. ``` existing_policy = gis.admin.password_policy.policy existing_policy ``` #### Updating password policy You can update this policy to any desired standard. In the example below, the following additional criteria is added. - Contains at least one letter (A-Z, a-z) - Contains at least one upper case letter (A-Z) - Contains at least one lower case letter (a-z) - Contains at least one number (0-9) - Contains at least one special (non-alphanumeric) character - Password will expire after `90` days - Members may not reuse their last `5` passwords ``` from copy import deepcopy new_policy = deepcopy(existing_policy) new_policy['passwordPolicy']['minLength'] = 10 new_policy['passwordPolicy']['minUpper'] = 1 new_policy['passwordPolicy']['minLower'] = 1 new_policy['passwordPolicy']['minDigit'] = 1 new_policy['passwordPolicy']['minOther'] = 1 new_policy['passwordPolicy']['expirationInDays'] = 90 new_policy['passwordPolicy']['historySize'] = 5 ``` To update the policy, simply set the `policy` property with the new values ``` gis.admin.password_policy.policy = new_policy['passwordPolicy'] ``` Query the GIS to get the updated policy ``` gis.admin.password_policy.policy ``` #### Resetting password policy You can reset the policy to the default by calling the `reset()` method. ``` gis.admin.password_policy.reset() ``` ### Working with security configurations The `config` property of the `Security` class gives you a snapshot of your security configuration ``` gis.admin.security.config ``` #### SSL certificates The `SSLCertificates` class provides you with a set of methods to search for certificates, import new certificates and update existing ones. The `SSLCertificate` object that you get when you call the `get()` or `list()` methods on this class allows you to inspect, update or export individual certificates. To learn about all the tasks that can be accomplished, refer to the [API REF doc](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.admin.html#sslcertificates). 
``` gis.admin.security.ssl.list() ``` You can download or export the certificate to disk: ``` portal_cert = gis.admin.security.ssl.list()[0] portal_cert.export(out_path = './') ``` #### Enterprise identity store If your GIS uses an enterprise identity store instead of the built-in, you can use the `EnterpriseUsers` class and `EnterpriseGroups` class to search for users and user groups in the enterprise user database. ``` gis.admin.security.enterpriseusers gis.admin.security.groups.properties ``` ## Managing Enterprise licenses and system settings As an administrator, you can manage the licenses of the Enterprise and all the apps licensed through your Enterprise using the `system.licenses` class of the `admin` sub module. This functionality is different from [managing named user licenses and entitlements](#Managing-named-user-licenses-and-entitlements) mentioned in the beginning of this guide. This section shows you how to import and remove entitlements for different apps, the number of named user accounts that you are licensed to create and the number remaining etc. ### Inspecting licenses for Portal for ArcGIS Calling `system.licenses.properties` will return a dictionary containing information about your license for using Portal for ArcGIS application. The dictionary below reveals the license is current, `12` is the number of named user accounts created so far and the `75` is the max licensed. The `features` dictionary reveals the details on number of level 1 and 2 users that can be created. ``` gis.admin.system.licenses.properties ``` Using Python's `datetime` module, you can conver the date to human readable form: ``` from datetime import datetime datetime.fromtimestamp(round(gis.admin.system.licenses.properties.expiration/1000)) ``` ### Releasing ArcGIS Pro licenses checked out for offline use If a user checks out an ArcGIS Pro license for offline or disconnected use, and is unable to check it back in, you can release the license for the specified account by calling `release_license()` method. Learn more about [offline licenses in ArcGIS Pro](http://pro.arcgis.com/en/pro-app/get-started/named-user-licenses.htm#ESRI_SECTION1_3379AFCFCE8D44EE8395A91E1A484594). ``` gis.admin.system.licenses.release_license('username') ``` ### Inspecting the machines powering your Portal for ArcGIS You can query the machines powering your Portal for ArcGIS application using the `Machines` class at `admin.machines`. You can inspect machine status, and unregister those under repair. ``` gis.admin.machines.list() mac1 = gis.admin.machines.list()[0] mac1.properties ``` Query the status of a machine. ``` mac1.status() ``` ### Inspecting system directories You can inspect the physical location of various system directories used by the Portal for ArcGIS application: ``` portal_dir_list = gis.admin.system.directories portal_dir_list[0].properties for portal_dir in portal_dir_list: print(portal_dir.properties.name + " | " + portal_dir.properties.physicalPath) ``` ### Inspecting web adaptors You can query the web adaptors serving the Portal for ArcGIS application using the `system.web_adaptors.list()` method. This returns you a list of `WebAdaptor` objects. You can use this object to query the properties such as IP address, version and also unregister the adaptor for maintenance. 
``` gis.admin.system.web_adaptors.list() wa = gis.admin.system.web_adaptors.list()[0] wa.properties wa.url ``` ### Inspecting other system properties **Database** ``` gis.admin.system.database ``` **Index status** ``` gis.admin.system.index_status ``` **Supported languages** ``` gis.admin.system.languages ```
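Many of the admin calls shown in this guide can be combined into small maintenance scripts. As one hedged example, the sketch below wraps the log query and export pattern from the "Querying Portal logs" section into a reusable helper; the function name and output path are illustrative, not part of the ArcGIS API.

```
import datetime
import pandas as pd

def archive_recent_logs(gis, days=10, out_path='./portal_logs.csv'):
    """Query the last `days` of portal logs and save them to a CSV file.

    Follows the same query -> DataFrame -> to_csv pattern used above;
    `archive_recent_logs` itself is just an illustrative helper.
    """
    start_time = datetime.datetime.now() - datetime.timedelta(days=days)
    recent_logs = gis.admin.logs.query(start_time=start_time)
    log_df = pd.DataFrame.from_records(recent_logs)
    log_df.to_csv(out_path, index=False)
    return log_df

# Example usage (assumes `gis` is the admin connection created earlier)
# logs_df = archive_recent_logs(gis, days=10, out_path='./portal_logs_last_10_days.csv')
```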
## BASIC EXPLORATORY DATA ANALYSIS

### NOTEBOOK PLANS:
- Read the dataset and check the info, head, isnull, shape, column types and value_counts

```
import pandas as pd

train = pd.read_csv("../data/Train_maskedv2.csv")
test = pd.read_csv("../data/Test_maskedv2.csv")
description = pd.read_csv("../data/variable_descriptions_v2.csv")
```

### VARIABLE DESCRIPTION

```
description.head(50)
```

### TRAIN DATASET

```
train.shape
train.info()
train.head(-5)
train.isnull().sum()
train.describe()
train.columns

## Determine columns with only "0" values
for i in ['total_households', 'total_individuals', 'target_pct_vunerable', 'dw_00', 'dw_01', 'dw_02',
          'dw_03', 'dw_04', 'dw_05', 'dw_06', 'dw_07', 'dw_08', 'dw_09', 'dw_10', 'dw_11', 'dw_12',
          'dw_13', 'psa_00', 'psa_01', 'psa_02', 'psa_03', 'psa_04', 'stv_00', 'stv_01', 'car_00',
          'car_01', 'lln_00', 'lln_01', 'lan_00', 'lan_01', 'lan_02', 'lan_03', 'lan_04', 'lan_05',
          'lan_06', 'lan_07', 'lan_08', 'lan_09', 'lan_10', 'lan_11', 'lan_12', 'lan_13', 'lan_14',
          'pg_00', 'pg_01', 'pg_02', 'pg_03', 'pg_04', 'lgt_00']:
    print(i, train[i].mean())
```

### TEST DATA

```
test.shape
test.info()
test.head(-5)
test.isnull().sum()
test.describe()

## Determine columns with only "0" values
cols = ['total_households', 'total_individuals', 'dw_00', 'dw_01', 'dw_02', 'dw_03', 'dw_04', 'dw_05',
        'dw_06', 'dw_07', 'dw_08', 'dw_09', 'dw_10', 'dw_11', 'dw_12', 'dw_13', 'psa_00', 'psa_01',
        'psa_02', 'psa_03', 'psa_04', 'stv_00', 'stv_01', 'car_00', 'car_01', 'lln_00', 'lln_01',
        'lan_00', 'lan_01', 'lan_02', 'lan_03', 'lan_04', 'lan_05', 'lan_06', 'lan_07', 'lan_08',
        'lan_09', 'lan_10', 'lan_11', 'lan_12', 'lan_13', 'lan_14', 'pg_00', 'pg_01', 'pg_02',
        'pg_03', 'pg_04', 'lgt_00']

for i in cols:
    print(i, test[i].mean())

zero_list = []
close_zero = []
for i in cols:
    if train[i].mean() == 0:
        zero_list.append(i)
    elif 0.01 > train[i].mean() > 0:
        close_zero.append(i)

print("Equals to 0.00: ", zero_list)
print("Less than 0.01: ", close_zero)
```

### CONCLUSION:
- All the columns are numerical
- There are no missing entries in the data
- Columns "lan_13", "dw_12" and "dw_13" appear to be redundant and should be dropped
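As a follow-up to the conclusion, here is a minimal sketch of how the redundant columns could be dropped before modelling. This step is not part of the original notebook; the `train_clean`/`test_clean` names are placeholders.

```
# Drop the columns flagged as redundant in the conclusion above
redundant_cols = ['lan_13', 'dw_12', 'dw_13']

train_clean = train.drop(columns=redundant_cols)
test_clean = test.drop(columns=redundant_cols)

print(train_clean.shape, test_clean.shape)
```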
> **Citation**: The data used in this exercise is derived from [Student Performance Data Set](http://archive.ics.uci.edu/ml/datasets/Student+Performance). ## Explore the Data ``` import pandas as pd # load the training dataset student_mat_data = pd.read_csv('../data/student-mat.csv', sep=';') student_mat_data.head(5) ``` ### The data consists of the following columns: Attribute Information: Attributes for both student-mat.csv (Math course) and student-por.csv (Portuguese language course) datasets: 1. school - student's school (binary: 'GP' - Gabriel Pereira or 'MS' - Mousinho da Silveira) 2. sex - student's sex (binary: 'F' - female or 'M' - male) 3. age - student's age (numeric: from 15 to 22) 4. address - student's home address type (binary: 'U' - urban or 'R' - rural) 5. famsize - family size (binary: 'LE3' - less or equal to 3 or 'GT3' - greater than 3) 6. Pstatus - parent's cohabitation status (binary: 'T' - living together or 'A' - apart) 7. Medu - mother's education (numeric: 0 - none, 1 - primary education (4th grade), 2 – 5th to 9th grade, 3 – secondary education or 4 – higher education) 8. Fedu - father's education (numeric: 0 - none, 1 - primary education (4th grade), 2 – 5th to 9th grade, 3 – secondary education or 4 – higher education) 9. Mjob - mother's job (nominal: 'teacher', 'health' care related, civil 'services' (e.g. administrative or police), 'at_home' or 'other') 10. Fjob - father's job (nominal: 'teacher', 'health' care related, civil 'services' (e.g. administrative or police), 'at_home' or 'other') 11. reason - reason to choose this school (nominal: close to 'home', school 'reputation', 'course' preference or 'other') 12. guardian - student's guardian (nominal: 'mother', 'father' or 'other') 13. traveltime - home to school travel time (numeric: 1 - <15 min., 2 - 15 to 30 min., 3 - 30 min. to 1 hour, or 4 - >1 hour) 14. studytime - weekly study time (numeric: 1 - <2 hours, 2 - 2 to 5 hours, 3 - 5 to 10 hours, or 4 - >10 hours) 15. failures - number of past class failures (numeric: n if 1<=n<3, else 4) 16. schoolsup - extra educational support (binary: yes or no) 17. famsup - family educational support (binary: yes or no) 18. paid - extra paid classes within the course subject (Math or Portuguese) (binary: yes or no) 19. activities - extra-curricular activities (binary: yes or no) 20. nursery - attended nursery school (binary: yes or no) 21. higher - wants to take higher education (binary: yes or no) 22. internet - Internet access at home (binary: yes or no) 23. romantic - with a romantic relationship (binary: yes or no) 24. famrel - quality of family relationships (numeric: from 1 - very bad to 5 - excellent) 25. freetime - free time after school (numeric: from 1 - very low to 5 - very high) 26. goout - going out with friends (numeric: from 1 - very low to 5 - very high) 27. Dalc - workday alcohol consumption (numeric: from 1 - very low to 5 - very high) 28. Walc - weekend alcohol consumption (numeric: from 1 - very low to 5 - very high) 29. health - current health status (numeric: from 1 - very bad to 5 - very good) 30. absences - number of school absences (numeric: from 0 to 93) these grades are related with the course subject, Math or Portuguese: 31. G1 - first period grade (numeric: from 0 to 20) 32. G2 - second period grade (numeric: from 0 to 20) 33. 
G3 - final grade (numeric: from 0 to 20, output target) ``` numeric_features = ['absences', 'age', 'G1', 'G2'] student_mat_data[numeric_features + ['G3']].describe() import numpy as np import matplotlib.pyplot as plt # plot a bar plot for each categorical feature count categorical_features = ['school','sex','address','famsize','Pstatus','Mjob', 'Fjob', 'reason', 'guardian'] for col in categorical_features: counts = student_mat_data[col].value_counts().sort_index() fig = plt.figure(figsize=(9, 6)) ax = fig.gca() counts.plot.bar(ax = ax, color='steelblue') ax.set_title(col + ' counts') ax.set_xlabel(col) ax.set_ylabel("Frequency") plt.show() ``` One Hot Encoding of all Nominal Columns ``` final = pd.get_dummies(student_mat_data,columns=['school','sex','address','famsize','Pstatus','Mjob','Fjob','reason','guardian']) final ``` K-1 OneHotEncoding to avoid Multicollinearity and the Dummy Variable Trap Multicollinearity occurs when two or more independent variables (a.k.a. features) in the dataset are correlated with each other. See this post: https://towardsdatascience.com/one-hot-encoding-multicollinearity-and-the-dummy-variable-trap-b5840be3c41a drop_first=True is important to use, as it helps in reducing the extra column created during dummy variable creation. Hence it reduces the correlations created among dummy variables. ``` finalk1 = pd.get_dummies(student_mat_data,columns=['school','sex','address','famsize','Pstatus','Mjob','Fjob','reason','guardian'],drop_first=True) finalk1 final.dtypes finalk1.dtypes ``` ### Export Dataframe to csv ``` final.to_csv(r'../data/output/student-mat-ohe.csv', index = False) finalk1.to_csv(r'../data/output/student-mat-ohe-k1.csv', index = False) ``` ## The Azure Machine Learning Python SDK You can run pretty much any Python code in a notebook, provided the required Python packages are installed in the environment where you're running it. In this case, you're running the notebook in a *Conda* environment on an Azure Machine Learning compute instance. This environment is installed in the compute instance by default, and contains common Python packages that data scientists typically work with. It also includes the Azure Machine Learning Python SDK, which is a Python package that enables you to write code that uses resources in your Azure Machine Learning workspace. Run the cell below to import the **azureml-core** package and checking the version of the SDK that is installed. ``` import azureml.core print("Ready to use Azure ML", azureml.core.VERSION) from azureml.core import Workspace ws = Workspace.from_config() print(ws.name, "loaded") ``` ## View Azure Machine Learning resources in the workspace Now that you have a connection to your workspace, you can work with the resources. For example, you can use the following code to enumerate the compute resources in your workspace. ``` print("Compute Resources:") for compute_name in ws.compute_targets: compute = ws.compute_targets[compute_name] print("\t", compute.name, ':', compute.type) ``` ## Work with datastores In Azure ML, *datastores* are references to storage locations, such as Azure Storage blob containers. Every workspace has a default datastore - usually the Azure storage blob container that was created with the workspace. If you need to work with data that is stored in different locations, you can add custom datastores to your workspace and set any of them to be the default. 
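To make the point about custom datastores concrete, here is a hedged sketch of registering an Azure blob container as an additional datastore and making it the default. The storage account, container, key and datastore names below are placeholders, not resources from this workspace.

```
from azureml.core import Datastore

# Sketch only: all names and credentials below are placeholders
blob_ds = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name='student_blob_store',   # hypothetical datastore name
    container_name='student-data',          # hypothetical blob container
    account_name='mystorageaccount',        # hypothetical storage account
    account_key='<storage-account-key>')    # supply a real key or SAS token

# Optionally make the new datastore the workspace default
ws.set_default_datastore('student_blob_store')
```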
### View datastores

Run the following code to determine the datastores in your workspace:

```
# Get the default datastore
default_ds = ws.get_default_datastore()

# Enumerate all datastores, indicating which is the default
for ds_name in ws.datastores:
    print(ds_name, "- Default =", ds_name == default_ds.name)
```

### Upload data to a datastore

Now that you have determined the available datastores, you can upload files from your local file system to a datastore so that they will be accessible to experiments running in the workspace, regardless of where the experiment script is actually being run.

```
default_ds.upload_files(files=['../data/output/student-mat-ohe.csv'], # Upload the csv file
                        target_path='student-ohe-data/', # Put it in a folder path in the datastore
                        overwrite=True, # Replace existing files of the same name
                        show_progress=True)

default_ds.upload_files(files=['../data/output/student-mat-ohe-k1.csv'], # Upload the csv file
                        target_path='student-ohe-k1-data/', # Put it in a folder path in the datastore
                        overwrite=True, # Replace existing files of the same name
                        show_progress=True)
```

## Work with datasets

Azure Machine Learning provides an abstraction for data in the form of *datasets*. A dataset is a versioned reference to a specific set of data that you may want to use in an experiment. Datasets can be *tabular* or *file*-based.

### Create a tabular dataset

Let's create a dataset from the student data you uploaded to the datastore, and view the first 20 records. In this case, the data is in a structured format in a CSV file, so we'll use a *tabular* dataset.

```
from azureml.core import Dataset

# Get the default datastore
default_ds = ws.get_default_datastore()

# Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'student-ohe-data/*.csv'))

# Display the first 20 rows as a Pandas dataframe
tab_data_set.take(20).to_pandas_dataframe()

# Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set2 = Dataset.Tabular.from_delimited_files(path=(default_ds, 'student-ohe-k1-data/*.csv'))

# Display the first 20 rows as a Pandas dataframe
tab_data_set2.take(20).to_pandas_dataframe()
```

### Register datasets

Now that you have created datasets that reference the student data, you can register them to make them easily accessible to any experiment being run in the workspace. We'll register the two tabular datasets as **student ohe dataset** and **student ohe k1 dataset**.

```
# Register the tabular dataset
try:
    tab_data_set = tab_data_set.register(workspace=ws,
                                         name='student ohe dataset',
                                         description='student data with one hot encoding',
                                         tags = {'format':'CSV'},
                                         create_new_version=True)
except Exception as ex:
    print(ex)

# Register the tabular dataset
try:
    tab_data_set2 = tab_data_set2.register(workspace=ws,
                                           name='student ohe k1 dataset',
                                           description='student data with one hot encoding k1',
                                           tags = {'format':'CSV'},
                                           create_new_version=True)
except Exception as ex:
    print(ex)
```

You can view and manage datasets on the **Datasets** page for your workspace in [Azure Machine Learning studio](https://ml.azure.com).
You can also get a list of datasets from the workspace object:

```
print("Datasets:")
for dataset_name in list(ws.datasets.keys()):
    dataset = Dataset.get_by_name(ws, dataset_name)
    print("\t", dataset.name, 'version', dataset.version)
```

Note: There is also a `koalas.get_dummies` in Databricks: https://koalas.readthedocs.io/en/latest/reference/api/databricks.koalas.get_dummies.html
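To illustrate the note above, here is a minimal sketch of the equivalent one-hot encoding with `databricks.koalas`. It assumes a Spark environment with the `koalas` package installed (on newer runtimes the same API lives in `pyspark.pandas`); the column list mirrors the pandas call earlier in this notebook, and the variable names are placeholders.

```
# Sketch only: requires a Spark session and the koalas package
import databricks.koalas as ks

# Convert the pandas frame used earlier to a koalas frame
student_kdf = ks.from_pandas(student_mat_data)

# Same K-1 dummy encoding as the pandas version above
student_ohe_k1 = ks.get_dummies(
    student_kdf,
    columns=['school', 'sex', 'address', 'famsize', 'Pstatus',
             'Mjob', 'Fjob', 'reason', 'guardian'],
    drop_first=True)

print(student_ohe_k1.shape)
```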
github_jupyter
```
# %load ../standard_import.txt
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import linregress
```

### Non-Linear Regression

* Complex models are rarely linear
* This is not to say that linear models are not used
* The linearity assumption is often "good enough", particularly for:
    * Quickly prototyping simple models that require full interpretability
    * Tackling questions that are not well defined enough to benefit from more complex models
* We are always in search of better models, but we need to evaluate how much more leverage a more precise model is going to provide

### Tell-Tale Signs of Non-Linearity - 1

* The model clearly looks non-linear
* We'll generate data from the following model:

$$ y = 30 - 0.3 x + 0.005 x^2 + \epsilon $$

```
plt.figure(figsize=(16,6))
x = np.linspace(0, 100, 200)
errors = np.random.normal(0, 3, size=200)
y = 30 + (-0.3 * x) + (0.005 * x**2) + errors
plt.scatter(x, y)
```

### Tell-Tale Signs of Non-Linearity - 2

* Residuals are not normally distributed
* Recall that the assumption is that $\epsilon \sim \mathcal{N}(\mu,\sigma)$
* This is a violation of the assumption

```
lm = linregress(x, y)

plt.figure(figsize=(16,6))
plt.subplot(1,2,1)
plt.scatter(x, y, alpha=0.3)
plt.plot(x, lm.intercept + lm.slope * x, color='r', linewidth=4)
plt.title("Data and linear fit")

RSS_vals = []
for (x_i, y_i) in zip(x, y):
    y_hat = lm.intercept + lm.slope * x_i
    RSS_vals.append(y_i - y_hat)

plt.subplot(1,2,2)
plt.scatter(x, RSS_vals)
plt.title("Residuals plot")
```

### How to Determine a Non-Linear Model

* How do we handle the case where the data is clearly non-linear?
* Within a small region, the data will most likely be linear
* For instance, we look at the region $x \in [35, 45]$

```
small_range = x[(x >= 35) & (x <= 45)]
small_range

positions = np.where((x >= 35) & (x <= 45))
x[positions]
y[positions]

### How to Determine a Non-Linear Model
lm = linregress(x, y)

plt.figure(figsize=(16,6))
lm_2 = linregress(x[positions], y[positions])

plt.subplot(1,2,1)
plt.scatter(x[positions], y[positions], alpha=0.3)
plt.plot(x[positions], lm_2.intercept + lm_2.slope * x[positions], color='r', linewidth=4)
plt.title("Data and linear fit")

RSS_vals = []
for (x_i, y_i) in zip(x[positions], y[positions]):
    y_hat = lm_2.intercept + lm_2.slope * x_i
    RSS_vals.append(y_i - y_hat)

plt.subplot(1,2,2)
plt.scatter(x[positions], RSS_vals)
plt.title("Residuals plot")
```

### How to Model a Non-Linear Dataset

* A naive approach is to compute the model as the average of some points above and below each point
* For example, for $x=40$, we take the observed data points immediately before and after it
* Say, for instance, we take 5 points on each side
* This is called nearest neighbor regression
* Here, let's just use the mean as the prediction within a small region

```
np.searchsorted(array, q)
```

* Finds the position of the first value in `array` that is greater than or equal to `q`, i.e., where `q` would be inserted to keep `array` sorted.

```
x[130:150]
np.searchsorted(x, 70)
x[np.searchsorted(x, 70)]

pos = np.searchsorted(x, 70)
print(x[pos: pos+5])
print(x[pos-5: pos])

plt.figure(figsize=(16,6))
neighbors = np.arange(pos-5, pos+5)
plt.scatter(x, y, alpha=0.3)
plt.scatter(x[neighbors], y[neighbors], color="red")

plt.figure(figsize=(16,6))
reg_line = []
for i in x[4:-4]:
    pos = np.searchsorted(x, i)
    neighbors = np.arange(pos-5, pos+5)
    reg_line.append(y[neighbors].mean())

plt.scatter(x, y, alpha=0.3)
plt.plot(x[4:-4], reg_line, color="red")
```

### Problems with this approach

What are the issues with this approach?
### Problems with Nearest Neighbor Regression

* Slow: imagine the case with a large number of predictors
* Does not scale well with a large number of parameters
* Highly affected by outliers
* We're missing values at the extremities
* We cannot use it for prediction

### Step Functions

* We can remedy these shortcomings by discretizing the $x$-axis
* Break the range of $x$ into bins, and fit a different constant in each bin
* Such a group-specific constant can be the `mean`, as in nearest neighbors
* This amounts to converting a continuous variable into an ordered categorical variable
* This is called a step function

```
intervals = np.split(np.arange(len(x)), 10)
intervals
intervals[4]
x[intervals[4]]
y[intervals[4]].mean()

plt.figure(figsize=(16,6))
plt.scatter(x, y, alpha=0.3)
plt.plot(x[intervals[4]], [y[intervals[4]].mean()] * len(intervals[4]), color="red", linewidth=4)

plt.figure(figsize=(16,6))
plt.scatter(x, y, alpha=0.3)
means = []
for i in range(len(intervals)):
    means.append(y[intervals[i]].mean())
    plt.plot(x[intervals[i]], [y[intervals[i]].mean()] * len(intervals[i]), color="red", linewidth=4)
```

### Shortcomings of Step Functions

* How do you interpret sudden changes between two points that are close on the x-axis?
* The choice of cutpoints or "knots" can be problematic
* The step function would have been different if we had split the range into 11, 9 or 13 intervals
* Arbitrary knots can lead to substantial variation issues
* Different discrete intervals can lead to significantly different predictions

### Polynomial Regression

* Rather than model the data with a 1st-degree polynomial, we will use a higher-degree polynomial
* E.g. second, third, or even higher if needed
* A 1st-degree polynomial:

$$ y = \beta_0 + \beta_1 x $$

* A 3rd-degree polynomial:

$$ y = \beta_0 ~~+~~ \beta_1 x ~~+~~ \beta_2x^2 ~~+~~ \beta_3x^3 $$

### Using a Linear Model in `sklearn`

* Two-step process:
    1. Transform $x$ into higher-degree features
    2. Fit the model using the transformed polynomial features

### Inferring Polynomial Features

* From the implementation point of view, this is just plain Ordinary Least Squares

Transform:

$$ y = \beta_0 ~~+~~ \beta_1 x ~~+~~ \beta_2x^2 ~~+~~ \beta_3x^3 $$

into:

$$ y = \beta_0 \cdot 1 ~~+~~ \beta_1 \cdot A ~~+~~ \beta_2 \cdot B ~~+~~ \beta_3 \cdot C, \quad \mbox{such that } A=x,~ B=x^2 \mbox{ and } C=x^3 $$

* We can now consider A, B, and C as new features of the model and use the same multivariate linear regression
* The new representation of $y$ is still considered a linear model
* The coefficients associated with the features are still linear

## Using a Linear Model in `sklearn` - Cont'd

Transform:

$$ y = \beta_0 ~~+~~ \beta_1 x ~~+~~ \beta_2x^2 ~~+~~ \beta_3x^3 $$

into:

$$ y = \beta_0 ~~+~~ \beta_1 \cdot A ~~+~~ \beta_2 \cdot B ~~+~~ \beta_3 \cdot C,\\ \mbox{where } A=x,~ B=x^2 \mbox{ and } C=x^3 $$

* This can easily be done using `PolynomialFeatures`
* "Polynomial features are all polynomial combinations of the features with degree less than or equal to the specified degree."
(from the `scikit-learn` documentation)

* Takes data as a column vector (same as the other `scikit-learn` estimators we have used)

```
from sklearn.preprocessing import PolynomialFeatures

poly = PolynomialFeatures(degree=3)
v = np.array([1,2,3,4,5])
poly.fit_transform(v.reshape(-1,1))
```

### Using a Linear Model in `sklearn` - Cont'd

* For $v=[1, 2, 3, 4, 5]$
* In polynomial regression (3rd-degree polynomial):

$$ v' = (v^0 = 1,~~A=v^1,~~B=v^2,~~C =v^3) $$

* Therefore

$$ \begin{split} V' =& [ [ 1., 1., 1., 1.], \\ & [ 1., 2., 4., 8.], \\ & [ 1., 3., 9., 27.], \\ & [ 1., 4., 16., 64.], \\ & [ 1., 5., 25., 125.] ] \end{split} $$

```
# we take a small subsample of x
np.random.seed(46)
subset_indices = np.random.choice(np.arange(len(x)), size=20)
subset_indices.sort()
subset_indices

x[subset_indices]
x[subset_indices][0:5]

temp = x[subset_indices].reshape(-1,1)
temp[0:5]

poly = PolynomialFeatures(degree=2)
X_vals_transformed = poly.fit_transform(x[subset_indices].reshape(-1,1))
X_vals_transformed[0:5]

plt.figure(figsize=(12,4))
plt.scatter(x[subset_indices], y[subset_indices])

from sklearn import linear_model

poly = PolynomialFeatures(degree=2)
X_vals_transformed = poly.fit_transform(x[subset_indices].reshape(-1,1))
lin = linear_model.LinearRegression()
lin.fit(X_vals_transformed, y[subset_indices].reshape(-1,1))

plt.figure(figsize=(12,4))
x_axis = np.arange(0, max(x)).reshape(-1,1)
X_axis_transformed = poly.transform(x_axis)
y_hat = lin.predict(X_axis_transformed)
plt.plot(x_axis, y_hat, label="Polynomial degree %s" % 2)
plt.scatter(x[subset_indices], y[subset_indices])
plt.legend()
```

### Increasing the Polynomial Degree

* We said earlier that the best model is the one that minimizes the RSS
* In the above example, we see that we can improve the fit by choosing a higher-degree polynomial

```
from sklearn import linear_model

plt.figure(figsize=(24,6))
x_axis = np.arange(0, max(x)).reshape(-1,1)
for i, polDegree in enumerate([2, 6, 8]):
    plt.subplot(1,3,i+1)
    plt.scatter(x[subset_indices], y[subset_indices])
    poly = PolynomialFeatures(degree=polDegree)
    X_vals_transformed = poly.fit_transform(x[subset_indices].reshape(-1,1))
    X_axis_transformed = poly.transform(x_axis)
    lin = linear_model.LinearRegression()
    lin.fit(X_vals_transformed, y[subset_indices].reshape(-1,1))
    y_hat = lin.predict(X_axis_transformed)
    plt.plot(x_axis[0:-4], y_hat[0:-4], label="Polynomial degree %s" % polDegree)
    plt.legend()
```

### Shortcomings of Higher-Order Polynomials

* Better fits are often achieved by higher-degree polynomials
* An $n$th-degree polynomial can have up to $n-1$ turning points
* It overfits the data
    * The oscillations are very unlikely to be characteristic of the data
    * The model is capturing the noise in the data
* This is an example of over-fitting
* Even though this model passes through most of the data, it will fail to generalize on unseen data
* Again, we can use a train/validation splitting strategy to find the best polynomial degree
* This strategy will work for most machine or statistical learning approaches

### Piecewise Polynomials

* A generalization of piecewise step functions and an improvement over polynomials
* Instead of a single polynomial, we use a polynomial in each of the regions defined by knots
* We avoid sharp edges between polynomials, like those generated by step functions, by imposing continuity
    * i.e., the fitted function is differentiable
* Using more knots leads to a more flexible piecewise polynomial
* We will illustrate it with a simple linear regression (i.e., degree = 1)

```
# two knots (30 and 55)
# three regions (< 30, >= 30 and < 55, >= 55)
x_1_idx = np.where(x < 30)
x_2_idx = np.where((x >= 30) & (x < 55))
x_3_idx = np.where(x >= 55)

plt.scatter(x[subset_indices], y[subset_indices])

x_1_axis = np.arange(0, 30, 0.05).reshape(-1,1)
poly = PolynomialFeatures(degree=1)
X_vals_transformed = poly.fit_transform(x[x_1_idx].reshape(-1,1))
X_axis_transformed = poly.transform(x_1_axis)
lin = linear_model.LinearRegression()
lin.fit(X_vals_transformed, y[x_1_idx].reshape(-1,1))
y_hat = lin.predict(X_axis_transformed)
plt.plot(x_1_axis, y_hat, label="Polynomial degree %s" % polDegree)

plt.scatter(x[subset_indices], y[subset_indices])

x_1_axis = np.arange(0, 30, 0.05).reshape(-1,1)
poly = PolynomialFeatures(degree=1)
X_vals_transformed = poly.fit_transform(x[x_1_idx].reshape(-1,1))
X_axis_transformed = poly.transform(x_1_axis)
lin = linear_model.LinearRegression()
lin.fit(X_vals_transformed, y[x_1_idx].reshape(-1,1))
y_hat = lin.predict(X_axis_transformed)
plt.plot(x_1_axis, y_hat, label="Polynomial degree %s" % polDegree)

x_2_axis = np.arange(30, 55, 0.05).reshape(-1,1)
poly = PolynomialFeatures(degree=1)
X_vals_transformed = poly.fit_transform(x[x_2_idx].reshape(-1,1))
X_axis_transformed = poly.transform(x_2_axis)
lin = linear_model.LinearRegression()
lin.fit(X_vals_transformed, y[x_2_idx].reshape(-1,1))
y_hat = lin.predict(X_axis_transformed)
plt.plot(x_2_axis, y_hat, label="Polynomial degree %s" % polDegree)

x_3_axis = np.arange(55, 100, 0.05).reshape(-1,1)
poly = PolynomialFeatures(degree=1)
X_vals_transformed = poly.fit_transform(x[x_3_idx].reshape(-1,1))
X_axis_transformed = poly.transform(x_3_axis)
lin = linear_model.LinearRegression()
lin.fit(X_vals_transformed, y[x_3_idx].reshape(-1,1))
y_hat = lin.predict(X_axis_transformed)
plt.plot(x_3_axis, y_hat, label="Polynomial degree %s" % polDegree)
```

### What Do We Want to Do?

* We want to create a smooth function
* Given a knot $x_i$ belonging to interval $j$, we want the model resulting from the combination of $f_{j-1}(x_i)$ and $f_j(x_i)$ to be continuous at the knot
* A continuous function is a function that does not have any abrupt changes in value
* Very small changes in $x$ should result in very small changes in $y$, i.e., $f_{j-1}(x_i) \approx f_j(x_i)$
* This is called the matching condition

### "Hacking" the Matching Condition

* Split the data into 4 new intervals
* Instead of completely independent intervals, we force the intervals to overlap by 1 data point: the knots will be part of both models, therefore causing the models to satisfy $f_{j-1}(x_i) \approx f_j(x_i)$
* In the models, the solution is typically implemented by requiring that:
    * $f_{j-1}'$ and $f_j'$ exist at $x_i$: the curves are continuous at $x_i$
    * $f_{j-1}' \approx f_j'$: the slopes of the curves are close at $x_i$

```
x_1_subset = x[subset_indices[0:8]]
x_2_subset = x[subset_indices[7:12]]
x_3_subset = x[subset_indices[11:14]]
x_4_subset = x[subset_indices[13:]]

y_1_subset = y[subset_indices[0:8]]
y_2_subset = y[subset_indices[7:12]]
y_3_subset = y[subset_indices[11:14]]
y_4_subset = y[subset_indices[13:]]

plt.scatter(x_1_subset, y_1_subset)

x_1_axis = np.arange(min(x_1_subset), max(x_1_subset), 0.05).reshape(-1,1)
poly = PolynomialFeatures(degree=1)
X_vals_transformed = poly.fit_transform(x_1_subset[[0,-1]].reshape(-1,1))
X_axis_transformed = poly.transform(x_1_axis)
lin = linear_model.LinearRegression()
lin.fit(X_vals_transformed, y_1_subset[[0,-1]].reshape(-1,1))
y_hat = lin.predict(X_axis_transformed)
plt.plot(x_1_axis, y_hat, label="Polynomial degree %s" % polDegree)

plt.scatter(x_1_subset, y_1_subset)

x_1_axis = np.arange(min(x_1_subset), max(x_1_subset), 0.05).reshape(-1,1)
poly = PolynomialFeatures(degree=1)
X_vals_transformed = poly.fit_transform(x_1_subset[[0,-1]].reshape(-1,1))
X_axis_transformed = poly.transform(x_1_axis)
lin = linear_model.LinearRegression()
lin.fit(X_vals_transformed, y_1_subset[[0,-1]].reshape(-1,1))
y_hat = lin.predict(X_axis_transformed)
plt.plot(x_1_axis, y_hat, label="Polynomial degree %s" % polDegree)

plt.scatter(x_2_subset, y_2_subset)

x_2_axis = np.arange(min(x_2_subset), max(x_2_subset), 0.05).reshape(-1,1)
poly = PolynomialFeatures(degree=1)
X_vals_transformed = poly.fit_transform(x_2_subset[[0,-1]].reshape(-1,1))
X_axis_transformed = poly.transform(x_2_axis)
lin = linear_model.LinearRegression()
lin.fit(X_vals_transformed, y_2_subset[[0,-1]].reshape(-1,1))
y_hat = lin.predict(X_axis_transformed)
plt.plot(x_2_axis, y_hat, label="Polynomial degree %s" % polDegree)

plt.scatter(x_3_subset, y_3_subset)

x_3_axis = np.arange(min(x_3_subset), max(x_3_subset), 0.05).reshape(-1,1)
poly = PolynomialFeatures(degree=1)
X_vals_transformed = poly.fit_transform(x_3_subset[[0,-1]].reshape(-1,1))
X_axis_transformed = poly.transform(x_3_axis)
lin = linear_model.LinearRegression()
lin.fit(X_vals_transformed, y_3_subset[[0,-1]].reshape(-1,1))
y_hat = lin.predict(X_axis_transformed)
plt.plot(x_3_axis, y_hat, label="Polynomial degree %s" % polDegree)

plt.scatter(x_4_subset, y_4_subset)

x_4_axis = np.arange(min(x_4_subset), max(x_4_subset), 0.05).reshape(-1,1)
poly = PolynomialFeatures(degree=1)
X_vals_transformed = poly.fit_transform(x_4_subset[[0,-1]].reshape(-1,1))
X_axis_transformed = poly.transform(x_4_axis)
lin = linear_model.LinearRegression()
lin.fit(X_vals_transformed, y_4_subset[[0,-1]].reshape(-1,1))
y_hat = lin.predict(X_axis_transformed)
plt.plot(x_4_axis, y_hat, label="Polynomial degree %s" % polDegree)
```

### What Do We Want to Do? - Cont'd

* In addition to the above, we want the curves to be smooth at the knots
* We also want the second derivatives to match:
    * $f_{j-1}'' \approx f_j''$: the curves are smooth at $x_i$
* The second derivative indicates whether the function's slope is increasing or decreasing at $x_i$
* This guarantees that transitions between the curves are smooth
* In mathematics, a B-spline or basis spline is a spline function that has minimal support with respect to a given degree, smoothness, and domain partition. Any spline function of a given degree can be expressed as a linear combination of B-splines of that degree.
```
# we take a small subsample of x,
# results are more dramatic

from patsy import dmatrix
import statsmodels.api as sm

# Specifying 3 knots
transformed_x = dmatrix("bs(x, knots=(25,40,60), degree=1)",
                        {"x": x[subset_indices]}, return_type='dataframe')

model_4 = sm.GLM(y[subset_indices], transformed_x).fit()
pred4 = model_4.predict(dmatrix("bs(x_axis, knots=(25,40,60), degree=1)",
                                {"x_axis": x_axis}, return_type='dataframe'))
model_4.params

plt.scatter(x[subset_indices], y[subset_indices])
plt.plot(x_axis, pred4, linewidth=4)
```

### Cubic Splines

* Using straight lines does not provide "local" flexibility at the knots
* A line has only two degrees of freedom, $(a,b)$ in $y = a + bx$
    * Forcing the line to go through two points, we have used both degrees of freedom
* A quadratic has three degrees of freedom, $(a,b,c)$ in $y = a + bx + cx^2$
    * Forcing a quadratic function to go through two points and fixing the derivative at one of the knots, we have used all three degrees of freedom
    * There is no way to fix the derivative at the second knot
* A cubic spline has four degrees of freedom, $(a,b,c,d)$ in $y = a + bx + cx^2 + dx^3$

### Cubic Splines - Cont'd

* A cubic spline with knots at $\xi_k$, $k = 1, \ldots, K$, is a piecewise cubic polynomial with continuous derivatives up to order 2 at each knot

```
# we take a small subsample of x,
from patsy import dmatrix
import statsmodels.api as sm

# Specifying the B-spline basis (degree 3, df=4) for the model in statsmodels
x_axis = np.arange(min(x[subset_indices]), max(x[subset_indices]), 0.05).reshape(-1,1)

transformed_x = dmatrix("bs(x, degree=3, df=4)", {"x": x[subset_indices]}, return_type='dataframe')

model_4 = sm.GLM(y[subset_indices], transformed_x).fit()
pred4 = model_4.predict(dmatrix("bs(x_axis, degree=3, df=4)", {"x_axis": x_axis}, return_type='dataframe'))
model_4.params

plt.figure(figsize=(12,4))
plt.scatter(x[subset_indices], y[subset_indices])
plt.scatter(x_axis, pred4, alpha=0.4)
```

### How Many Knots to Use?

* A simple (naive) approach is to try out different numbers of knots and see which produces the best-fitting curve
* A more objective approach is to use a training/validation split strategy (a minimal sketch is given at the end of this notebook)

### Where Do You Choose the Knots?

* In practice, it is common to place knots uniformly
    * For example, at equally spaced quantiles of the data
    * E.g., put 3 knots at the 25th, 50th and 75th quantiles of the data
* Ideally, you want to put more knots in regions where the data vary most rapidly

### Splines Versus Linear Regression

* Regression splines typically give superior results to polynomial regression
    * High-degree polynomials produce strange curves at the boundaries
* Regression splines do not need high-degree polynomials to fit the data
* We can fit a very complex dataset using a cubic spline
    * Add more knots to the regions where the data is complex
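The cell below is a minimal sketch added for these notes (not from the original lecture) of the train/validation strategy referenced above: it reuses `x`, `y`, and `subset_indices` defined earlier and selects the polynomial degree by validation error. The same recipe carries over to choosing the spline degrees of freedom or the number of knots.

```
# A minimal sketch of model selection by train/validation split.
# Assumes x, y and subset_indices are defined as above.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

x_sub = x[subset_indices].reshape(-1, 1)
y_sub = y[subset_indices]
x_train, x_valid, y_train, y_valid = train_test_split(x_sub, y_sub, test_size=0.3, random_state=0)

for degree in [1, 2, 3, 6, 8]:
    poly = PolynomialFeatures(degree=degree)
    X_train = poly.fit_transform(x_train)   # fit the feature map on the training split only
    X_valid = poly.transform(x_valid)       # reuse it on the validation split
    model = LinearRegression().fit(X_train, y_train)
    mse = mean_squared_error(y_valid, model.predict(X_valid))
    print("degree %d: validation MSE = %.2f" % (degree, mse))
```

The degree (or number of knots) with the lowest validation error is the one expected to generalize best; with only 20 subsampled points a single split is noisy, so cross-validation is a more stable variant of the same idea.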
# Lecture 2: Naive Bayes
***

<img src="files/figs/bayes.jpg" width=1201 height=50>

<!--- ![my_image](files/figs/bayes.jpg) -->

<a id='prob1'></a>

### Problem 1: Bayes' Law and The Monty Hall Problem
***

>Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1. The host (Monty), who knows what's behind each door, reveals one that has a goat behind it. He then asks if you'd like to change your choice. Is it to your advantage to switch doors? (Here we implicitly assume you want the car more than a goat.)

<img src="https://cdn-images-1.medium.com/max/1600/1*fSv7k4vXkOYp8RN7lVeKyA.jpeg" width=500 height=250>

**A**: What does your intuition say? Is it in your best interest to switch, or does it matter?

**B**: Using what we've learned about Bayes' rule, let's calculate the probability of winning if you switch or stay.

```
# To begin, define the prior as the probability of the car being behind door i (i=1,2,3), call this "pi".
# Note that pi is uniformly distributed.
p1 = 1/3.
p2 = 1/3.
p3 = 1/3.

# Next, to define the class conditional, we need three pieces of information. Supposing Monty reveals door 3,
# we must find:
# probability that Monty reveals door 3 given door 3 wins (call this c3)
# probability that Monty reveals door 3 given door 2 wins (call this c2)
# probability that Monty reveals door 3 given door 1 wins (call this c1)
#
# For this, suppose you initially choose door 1.
c3 = 0
c2 = 1.
c1 = 1/2.

# Now we need to find the marginal for the choice of Monty, call this pd3. Hint: use the sum rule of probability and
# your previous calculations.
pd3 = c3*p3 + c2*p2 + c1*p1

# The probability of winning if you stay with door 1 is:
print("Door 1: %(switch1).2f %%" %{"switch1":100*(c1*p1)/pd3})

# Finally, Bayes' rule tells us the probability of winning if you switch to door 2 is:
print("Door 2: %(switch2).2f %%" %{"switch2":100*(c2*p2)/pd3})

# The probability of winning if you switch to door 3 is:
print("Door 3: %(switch3).2f %%" %{"switch3":100*(c3*p3)/pd3})
```

### Problem 2: Naive Bayes on Symbols
***

> This problem was adapted from [Naive Bayes and Text Classification I: Introduction and Theory](https://arxiv.org/abs/1410.5329) by Sebastian Raschka and a script from the CU computer science department.

Consider the following training set of 12 symbols which have been labeled as either + or -: <br>

<img src="files/figs/shapes.png?raw=true" width=500>

<!--- ![](files/figs/shapes.png?raw=true) -->

Answer the following questions:

**A**: What are the general features associated with each training example?

**Answer**: The two general types of features are **shape** and **color**. For this particular training set, the observed features are **shape** $\in$ {*square*, *circle*} and **color** $\in$ {*red*, *blue*, *green*}.

In the next part, we'll use Naive Bayes to classify the following test example:

<img src="files/figs/bluesquare.png" width=200>

OK, so this symbol actually appears in the training set, but let's pretend that it doesn't. The decision rule can be defined as

>Classify ${\bf x}$ as + if <br>
>$p(+ ~|~ {\bf x} = [blue,~ square]) \geq p(- ~|~ {\bf x} = [blue, ~square])$ <br>
>else classify sample as -

**B**: To begin, let's explore the estimate of an appropriate prior for + and -.
We'll define two distributions:<br>

For the first, use
$$\hat{p}(+)=\frac{\text{# of +}}{\text{# of classified objects}} \text{ and } \hat{p}(-)=\frac{\text{# of -}}{\text{# of classified objects}}$$
<br>
For the second, reader's choice. Take anything such that
$$\hat{p}(+)\ge 0\text{, }\hat{p}(-)\ge 0\text{, and }\hat{p}(+)+\hat{p}(-)=1$$

```
# Distribution 1
p1Plus = 7/12.0
p1Minus = 5/12.0

# Distribution 2
p2Plus = 1/12.0
p2Minus = 11/12.0
```

**C**: Assuming the features are conditionally independent given the class, identify and compute the estimates of the class-conditional probabilities required to predict the class of ${\bf x} = [blue,~square]$.

**Answer**: The class-conditional probabilities required to classify ${\bf x} = [blue, ~square]$ are

$$ p(blue ~|~ +), ~~~~~ p(blue ~|~ -), ~~~~~ p(square ~|~ +), ~~~~~ p(square ~|~ -) $$

From the training set, we have

$$ \hat{p}(blue ~|~ +)= \frac{3}{7}, ~~~~~ \hat{p}(blue ~|~ -) = \frac{3}{5}, ~~~~~ \hat{p}(square ~|~ +)=\frac{5}{7}, ~~~~~ \hat{p}(square ~|~ -) = \frac{3}{5} $$

```
# Class-conditional probabilities
pBplus = 3/7.0
pBminus = 3/5.0
pSplus = 5/7.0
pSminus = 3/5.0
```

**D**: Using the estimates computed above, compute the **posterior** scores for each label, and find the Naive Bayes prediction of the label for ${\bf x} = [blue,~square]$.

```
# Start a section for the results under prior 1
scores1 = [(pBplus*pSplus*p1Plus, '+'), (pBminus*pSminus*p1Minus, '-')]
class1 = list(max(scores1))

# Beginning of results
print('\033[1m' + "Results under prior 1" + '\033[0m')

# Posterior score for + under prior 1
print("Posterior score for + under prior 1 is %(postPlus).2f" %{"postPlus":scores1[0][0]})

# Posterior score for - under prior 1
print("Posterior score for - under prior 1 is %(postMinus).2f" %{"postMinus":scores1[1][0]})

# Classification under prior 1
print("The object is then of class %s" %class1[1])

# Start a section for the results under prior 2
scores2 = [(pBplus*pSplus*p2Plus, '+'), (pBminus*pSminus*p2Minus, '-')]
class2 = list(max(scores2))

# Beginning of results
print('\033[1m' + "Results under prior 2" + '\033[0m')

# Posterior score for + under prior 2
print("Posterior score for + under prior 2 is %(postPlus).2f" %{"postPlus":scores2[0][0]})

# Posterior score for - under prior 2
print("Posterior score for - under prior 2 is %(postMinus).2f" %{"postMinus":scores2[1][0]})

# Classification under prior 2
print("The object is then of class %s" %class2[1])
```

<a id='prob1ans'></a>

**E**: If you haven't already, compute the class-conditional probability scores $\hat{p}({\bf x} = [blue,~square] ~|~ +)$ and $\hat{p}({\bf x} = [blue,~square] ~|~ -)$ under the Naive Bayes assumption. How can you reconcile these values with the final prediction that would be made?

**Answer**: The class-conditional probability scores under the Naive Bayes assumption are

$$ \hat{p}({\bf x} = [blue,~square] ~|~ +) = \hat{p}(blue ~|~ +) \cdot \hat{p}(square ~|~ +) = \frac{3}{7} \cdot \frac{5}{7} = 0.31 $$

$$ \hat{p}({\bf x} = [blue,~square] ~|~ -) = \hat{p}(blue ~|~ -) \cdot \hat{p}(square ~|~ -) = \frac{3}{5} \cdot \frac{3}{5} = 0.36 $$

The - label actually has a higher class-conditional probability for ${\bf x}$ than the + label. Under the first prior we still end up predicting the + label, because the prior for + is larger than the prior for -. This example demonstrates how the choice of prior can have a large influence on the prediction.
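As a small add-on (not part of the original exercise), the scores above can be turned into proper posterior probabilities by normalizing with the evidence $p({\bf x}) = p({\bf x}|+)\,\hat{p}(+) + p({\bf x}|-)\,\hat{p}(-)$. The sketch below reuses the estimates defined above; the helper name `nb_posteriors` is ours.

```
# Sketch: normalize the Naive Bayes scores into posterior probabilities.
# Reuses pBplus, pSplus, pBminus, pSminus and the two priors defined above.
def nb_posteriors(prior_plus, prior_minus):
    score_plus = pBplus * pSplus * prior_plus       # p(x|+) * p(+)
    score_minus = pBminus * pSminus * prior_minus   # p(x|-) * p(-)
    evidence = score_plus + score_minus             # p(x), by the sum rule
    return score_plus / evidence, score_minus / evidence

for name, (pp, pm) in [("prior 1", (p1Plus, p1Minus)), ("prior 2", (p2Plus, p2Minus))]:
    post_plus, post_minus = nb_posteriors(pp, pm)
    print("%s: p(+|x) = %.2f, p(-|x) = %.2f" % (name, post_plus, post_minus))
```

The normalization does not change which label wins, but it makes the two numbers directly interpretable as probabilities that sum to one.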
<br><br><br><br>
<br><br><br><br>

### Helper Functions
***

```
from IPython.core.display import HTML
HTML("""
<style>
.MathJax nobr>span.math>span{border-left-width:0 !important};
</style>
""")

from IPython.display import Image
```
```
from sklearn import datasets

iris = datasets.load_iris()

data = iris['data']
features = iris['feature_names']
target = iris['target']
target_names = iris['target_names']

# map target labels to species names => Ground Truth
species = target_names[target]
print(species)

# Import KMeans
from sklearn.cluster import KMeans

# Create a KMeans instance with 3 clusters: model
model = KMeans(n_clusters=3)

# Fit model to points
model.fit(data)

# Determine the cluster labels of iris data: labels => Prediction
labels = model.predict(data)
# can also use: labels = model.fit_predict(data)

# Calculate inertia: measures how spread out the clusters are (lower is better)
print(model.inertia_)

# Import pyplot
import matplotlib.pyplot as plt

# Assign two columns of data: xs and ys
xs = data[:,0]
ys = data[:,2]

fig, ax = plt.subplots()

# Make a scatter plot of xs and ys, using labels to define the colors
ax.scatter(xs, ys, c=labels, alpha=0.3)

# Assign the cluster centers: centroids
centroids = model.cluster_centers_

# Assign the columns of centroids: centroids_x, centroids_y
centroids_x = centroids[:,0]
centroids_y = centroids[:,2]

# Make a scatter plot of centroids_x and centroids_y
ax.scatter(centroids_x, centroids_y, marker='D', s=100, color='r')

ax.set_title('K-means clustering of Iris dataset')
ax.set_xlabel(features[0])
ax.set_ylabel(features[2])
plt.show()
```

### Compare ground truth to prediction

```
import pandas as pd

df = pd.DataFrame({'labels': labels, 'species': species})
ct = pd.crosstab(df['labels'], df['species'])
print(ct)
```

### What is the best number of clusters to choose?

The elbow rule: choose the point where the decrease in inertia slows down.

See below: **3 is a good choice**

```
ks = range(1, 10)
inertias = []

for k in ks:
    # Create a KMeans instance with k clusters: model
    model = KMeans(n_clusters=k)

    # Fit model to samples
    model.fit(data)

    # Append the inertia to the list of inertias
    inertias.append(model.inertia_)

# Plot ks vs inertias
plt.plot(ks, inertias, '-o')
plt.xlabel('number of clusters, k')
plt.ylabel('inertia')
plt.xticks(ks)
plt.show()
```

## Pipelines with KMeans and StandardScaler

### Standard scaler

- in k-means: feature variance = feature influence
- `StandardScaler` transforms each feature to have mean 0 and variance 1

```
import pandas as pd

df = pd.read_csv('fish.csv', header=None)  # prevent first row from becoming header
samples = df.iloc[:,1:].to_numpy()
species = df.iloc[:,0].to_numpy()

# Perform the necessary imports
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Create scaler: scaler
scaler = StandardScaler()

# Create KMeans instance: kmeans
kmeans = KMeans(n_clusters=4)

# Create pipeline: pipeline
pipeline = make_pipeline(scaler, kmeans)

# Fit the pipeline to samples
pipeline.fit(samples)

# Calculate the cluster labels: labels
labels = pipeline.predict(samples)

# Create a DataFrame with labels and species as columns: df
df = pd.DataFrame({'labels':labels, 'species':species})

# Create crosstab: ct
ct = pd.crosstab(df['labels'], df['species'])

# Display ct
print(ct)
```

### Full pipeline with stocks

```
import pandas as pd

df = pd.read_csv('stock.csv')
df.head()

movements = df.iloc[:,1:].to_numpy()
companies = df.iloc[:,0].to_numpy()

# Perform the necessary imports
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
from sklearn.cluster import KMeans

# Create a normalizer: normalizer
normalizer = Normalizer()

# Create a KMeans model with 10 clusters: kmeans
kmeans = KMeans(n_clusters=10)

# Make a pipeline chaining normalizer and kmeans: pipeline
pipeline = make_pipeline(normalizer, kmeans)

# Fit pipeline to the daily price movements
pipeline.fit(movements)

# Import pandas
import pandas as pd

# Predict the cluster labels: labels
labels = pipeline.predict(movements)

# Create a DataFrame aligning labels and companies: df
df = pd.DataFrame({'labels': labels, 'companies': companies})

# Display df sorted by cluster label
print(df.sort_values('labels'))
```
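A complementary sketch, not from the original notebook: the silhouette score gives a second opinion on the number of clusters, this time with higher-is-better semantics. It is applied here to the iris `data` array loaded at the top of the notebook.

```
# Sketch: choose k with the silhouette score (higher is better), as an
# alternative to the elbow rule. Assumes `data` is the iris feature array above.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

for k in range(2, 8):  # the silhouette score needs at least 2 clusters
    labels_k = KMeans(n_clusters=k, random_state=0).fit_predict(data)
    print("k = %d, silhouette = %.3f" % (k, silhouette_score(data, labels_k)))
```

Note that the elbow rule and the silhouette score may disagree on a given dataset; both are heuristics and are best read together with domain knowledge.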
# 01 Differential Geometry for Engineers
## A) Manifolds and Lie groups

$\color{#003660}{\text{Nina Miolane - Assistant Professor}}$ @ BioShape Lab @ UCSB ECE

- Texts and illustrations by [Adele Myers](https://ahma2017.wixsite.com/adelemyers) @ BioShape Lab.
- Textbook: Guigui, Miolane, Pennec, 2022. Introduction to Riemannian Geometry and Geometric Statistics.

<center><img src="figs/01_manifold_definitions1.png" width=1000px alt="default"/></center>

# Outline: Geometric Learning for BioShape Analysis

- **Unit 1 (Geometry - Math!): Differential Geometry for Engineers**
- **Unit 2 (Shapes)**: Computational Representations of Biomedical Shapes
- **Unit 3 (Machine Learning)**: Geometric Machine Learning for Shape Analysis
- **Unit 4 (Deep Learning)**: Geometric Deep Learning for Shape Analysis

<center><img src="figs/00_bioshape.jpg" width=500px alt="default"/></center>

Examples and applications will be taken from cutting-edge research in the **biomedical field**.

# Outline

- **Unit 1 (Geometry - Math!)**: Differential Geometry for Engineers
    - **A) Manifolds and Lie groups** - Our data spaces.
    - B) Connections and Riemannian Metrics - Tools we use to compute on these spaces.

# Motivation: Shape of Glaucoma

Glaucoma is a group of eye conditions that:
- damage the optic nerve, the health of which is vital for good vision.
- are often caused by an abnormally high pressure in your eye.
- are one of the leading causes of blindness for people over the age of 60.

<center><img src="figs/01_optic_nerves.png" width=400px alt="default"/></center>
<center>Comparison of optic nerve heads in monkeys with and without glaucoma.</center>

$\color{#EF5645}{\text{Question}}$: Can we find shape markers of glaucoma that could lead to automatic diagnosis?

Data acquired with a Heidelberg Retina Tomograph - Patrangenaru and Ellingson (2015):
- 11 Rhesus monkeys
- 22 images of monkeys' eyes:
    - an experimental glaucoma was introduced in one eye,
    - while the second eye was kept as control.

<center><img src="figs/01_optic_nerves.png" width=400px alt="default"/></center>
<center>Comparison of optic nerve heads in monkeys with and without glaucoma.</center>

- On each image, 5 anatomical landmarks were recorded:
    - 1st landmark: superior aspect of the retina,
    - 2nd landmark: side of the retina closest to the temporal bone of the skull,
    - 3rd landmark: nose side of the retina,
    - 4th landmark: inferior point,
    - 5th landmark: optical nerve head deepest point.

Label 0 refers to a normal eye, and Label 1 to an eye with glaucoma.

$\color{#EF5645}{\text{Question}}$: Is there a significant difference in the shape formed by the landmarks?

# Exploratory Analysis

```
import matplotlib.colors as colors
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

import warnings
warnings.filterwarnings("ignore")

import geomstats.datasets.utils as data_utils

nerves, labels, monkeys = data_utils.load_optical_nerves()
print(nerves.shape)
print(labels)
print(monkeys)

two_nerves = nerves[monkeys == 1]
print(two_nerves.shape)

two_labels = labels[monkeys == 1]
print(two_labels)

label_to_str = {0: "Normal nerve", 1: "Glaucoma nerve"}
label_to_color = {
    0: (102 / 255, 178 / 255, 255 / 255, 1.0),
    1: (255 / 255, 178 / 255, 102 / 255, 1.0),
}
```

Try looking at the 3D triangles.
```
fig = plt.figure()
ax = Axes3D(fig)
ax.set_xlim((2000, 4000))
ax.set_ylim((1000, 5000))
ax.set_zlim((-600, 200))

for nerve, label in zip(two_nerves, two_labels):
    x = nerve[1:4, 0]
    y = nerve[1:4, 1]
    z = nerve[1:4, 2]

    verts = [list(zip(x, y, z))]

    poly = Poly3DCollection(verts, alpha=0.5)
    color = label_to_color[int(label)]
    poly.set_color(colors.rgb2hex(color))
    poly.set_edgecolor("k")
    ax.add_collection3d(poly)

patch_0 = mpatches.Patch(color=label_to_color[0], label=label_to_str[0], alpha=0.5)
patch_1 = mpatches.Patch(color=label_to_color[1], label=label_to_str[1], alpha=0.5)
plt.legend(handles=[patch_0, patch_1], prop={"size": 20})
plt.show()
```

# Towards a Quantitative Analysis

We could do statistics on the $\color{#EF5645}{\text{object}}$:
- 2D triangle = 3 points in 2D space $(x_1, y_1), (x_2, y_2), (x_3, y_3)$
    - 6 degrees of freedom
- 3D triangle = 3 points in 3D space $(x_1, y_1, z_1), (x_2, y_2, z_2), (x_3, y_3, z_3)$
    - 9 degrees of freedom

But we are really interested in its $\color{#EF5645}{\text{shape}}$: the characteristics of the object that remain once we have filtered out the action of a **group of transformations** that do not change the shape.

- For 2D and 3D triangles, we only need 2 numbers to describe the shape (see Unit 2)
- ... at the cost of having data that belong to a curved space, called a **manifold**

### Manifold of 2D Triangle Shapes: Sphere

<center><img src="figs/01_triangles_2d.png" width=800px alt="default"/></center>

### Manifold of 3D Triangle Shapes: Half-Sphere

<center><img src="figs/01_triangle_3d.png" width=900px alt="default"/></center>

# Need for Foundations in Geometry

Questions:
- Why is the shape space of triangles a sphere or a half-sphere?
- Why and how do we perform computations and learning for data on a sphere, or half-sphere?

$\rightarrow$ Unit 1 defines the mathematical tools such that:
- We can answer these questions in the context of biomedical shape analysis.
- We present computational tools that can be used for the analysis of data on curved spaces...
- ... beyond biomedical shape analysis.

# A) Manifolds and Lie groups: Outline

You will learn:
1. What is a manifold? What are tangent spaces?
2. Why do we care about manifolds?
3. How can we implement manifolds?
4. What is a (Lie) group of transformations?
5. Why do we care about Lie groups?
6. How can we implement Lie groups?

# A) Manifolds and Lie groups: Outline

You will learn:
1. **What is a manifold? What are tangent spaces?**
2. Why do we care about manifolds?
3. How can we implement manifolds?
4. What is a (Lie) group of transformations?
5. Why do we care about Lie groups?
6. How can we implement Lie groups?

# 1. What is a manifold?

$\textbf{Intuition:}$ A manifold $M$ can be seen as a smooth surface of any dimension, where the dimension indicates the number of degrees of freedom that a data point has on this surface.

$\color{#047C91}{\text{Example}}$: A sphere is a two-dimensional manifold.

<center><img src="figs/00_intro_sphere_manifold.png" width=600px alt="default"/></center>

$\textbf{Intuition:}$ When you are first learning, it can be a helpful starting point to think of a manifold as a surface. This surface can have any dimension and any shape as long as it is smooth (in the sense of being continuous and differentiable). For example, the sphere is a two-dimensional manifold, and we will often use this manifold in examples. This is not a particularly precise definition, but it can be a helpful starting point for building intuition.
## Mathematical Definition(s)

### Definition 1: Local Parameterization

The first way of defining a manifold $M$ is through a local parameterization.

$\color{#EF5645}{\text{Mathematical Definition}}$: For every $p \in M$, there are two open subsets $V \subseteq \mathbb{R}^{d}$ and $U \subseteq \mathbb{R}^{N}$ with $p \in U$ and $0 \in V$. There is also a smooth function $f: V \to \mathbb{R}^{N}$ such that $f(0) = p$, where $f$ is a homeomorphism between $V$ and $U \cap M$, and $f$ is an immersion at 0.

$\color{#EF5645}{\text{Explanation}}$: $M$ is a space that locally resembles Euclidean space near each point.

$\color{#047C91}{\text{Example}}$: Consider a sphere and a two-dimensional grid.
- We can't deform this grid to have the shape of a sphere under any circumstance,
- but at each point on the manifold, we can approximate the space near the point with the grid.

<center><img src="figs/01_manifold_definition1.png" width=300px alt="default"/></center>

### Definition 2: Local Implicit Function

The second way of defining a manifold $M$ is through constraints.

$\color{#EF5645}{\text{Mathematical Definition}}$: For every $p \in M$, there exists an open set $U \subseteq \mathbb{R}^{N}$ and a smooth map $f: U \to \mathbb{R}^{N-d}$ that is a submersion at $p$, such that $U \cap M = f^{-1}(\{0\})$.

$\color{#EF5645}{\text{Explanation}}$: $M$ is the set of points that verify a constraint defined by an implicit equation, given by the function $f$: i.e. the points that verify $f(x) = 0$ for some $f$.

<center><img src="figs/01_manifold_definition2.png" width=600px alt="default"/></center>

# Why several definitions?

Some manifolds are better described and implemented with definition (1), others with definition (2).

$\color{#047C91}{\text{Example}}$: The sphere is most easily described and implemented using definition (2).

<center><img src="figs/01_manifold_definitions1.png" width=900px alt="default"/></center>

### Dimension

$\color{#EF5645}{\text{Definition}}$: A manifold has a dimension $d$, which is the number of degrees of freedom required to "walk" on the manifold.

(1) The dimension of the grid that is deformed gives the dimension of the manifold.

(2) The dimension of the embedding space minus the number of constraints gives the dimension of the manifold.

<center><img src="figs/01_manifold_definitions1.png" width=900px alt="default"/></center>

$\color{#047C91}{\text{Example}}$: What is the dimension of the sphere?

### Hypersphere example

Let's prove that a hypersphere is a manifold using definition (2).

$\color{#EF5645}{\text{Definition}}$: A $d$-dimensional hypersphere generalizes a 1-dimensional circle and a 2-dimensional sphere to $d$ dimensions. A $d$-dimensional hypersphere is the set of all points in $\mathbb{R}^{d+1}$ that are a given distance, called the radius $R$, from 0.

$$ S = \{ x \in \mathbb{R}^{d+1}, ||x|| = R\}.$$

$\color{#047C91}{\text{Example}}$: The 2-dimensional hypersphere in 3 dimensions is the sphere.

<center><img src="figs/00_intro_sphere_manifold.png" width=400px alt="default"/></center>

$\textbf{How do we know that a hypersphere is a manifold?}$

We know from the definition (taking the radius $R = 1$) that points on a hypersphere $S$ verify $\|x\|^{2} = 1$.

We define the function $f(x) = \|x\|^{2} - 1$ that will equal zero for all points that lie on $S$:

$x \in S \iff f(x) = 0$, which tells us that $x \in S \iff x \in f^{-1}(\{0\})$.

This matches definition 2 of a manifold: $S = f^{-1}(\{0\})$, where $S$ is the set of points $x$ that satisfy the condition $\|x\|^{2} = 1$. Therefore, $S$ is a manifold.
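A quick numerical sanity check of this level-set description, written as a small sketch for these notes (plain NumPy, not from the original slides): normalizing random vectors pushes them onto the unit sphere, where the constraint function vanishes.

```
# Sketch: check numerically that normalized points satisfy f(x) = ||x||^2 - 1 = 0.
import numpy as np

def f(x):
    """Implicit function whose zero level set is the unit hypersphere."""
    return np.sum(x ** 2, axis=-1) - 1.0

rng = np.random.default_rng(0)
points = rng.normal(size=(5, 3))                                  # random points in R^3
points = points / np.linalg.norm(points, axis=1, keepdims=True)   # project them onto S^2

print(f(points))   # ~0 (up to floating point error) for every point
```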
## First Example of a Manifold

The shape space of 2D triangles is a sphere; thus it is our first example of a manifold.

<center><img src="figs/01_triangles_2d.png" width=600px alt="default"/></center>

# Other Examples of Manifolds

- **Vector spaces**

<center><img src="figs/01_vectorspace.png" width=400px alt="default"/></center>

What is the tangent space of this manifold at a given point $P = (p_1, p_2)$?

- **Shape transformations**
    - $\color{#047C91}{\text{Example}}$: The space of 2D rotations is a circle

<center><img src="figs/01_rotation_2d.png" width=400px alt="default"/></center>

- **A shape itself!**
    - $\color{#047C91}{\text{Example}}$: The surface of the heart is a manifold

<center><img src="figs/01_heart.jpeg" width=200px alt="default"/></center>

- **4-dimensional space-time**

<center><img src="figs/01_spacetime.jpeg" width=700px alt="default"/></center>

- **Space of brain connectomes**

<center><img src="figs/01_cone.png" width=400px alt="default"/></center>
<center><img src="figs/01_connectome.png" width=400px alt="default"/></center>

- **Perception manifolds**
    - $\color{#047C91}{\text{Example}}$: The hyperbolic geometry of DMT experiences

<center><img src="figs/01_perception.jpeg" width=700px alt="default"/></center>

- **Many more** (cf. visualization project).

<center><img src="figs/01_manifold_hierarchy.jpeg" width=1400px alt="default"/></center>

# What are Tangent Vectors and Tangent Spaces?

Consider the shape space of triangles, representing shapes of the optic nerve head.

$\color{#EF5645}{\text{Question}}$: How does the shape of the optic nerve head evolve in time?
- The evolution is represented as a trajectory on the shape space (in blue below).

$\color{#EF5645}{\text{Question}}$: What is the speed of shape change?
- We need the notion of tangent vector and tangent space.

<center><img src="figs/01_triangle_curve.png" width=550px alt="default"/></center>

### Tangent Vector to the Sphere

```
import numpy as np
import geomstats.visualization as viz

fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection="3d")

point = np.array([-0.65726771, -0.02678122, 0.7531812])
vector = np.array([1, 0, 0.8])

ax = viz.plot(point, ax=ax, space="S2", s=200, alpha=0.8, label="Point")
arrow = viz.Arrow3D(point, vector=vector)
arrow.draw(ax, color="black")
ax.legend();
```

# Tangent Space

$\color{#EF5645}{\text{Definition}}$: The tangent space at a certain point $p$ on a manifold $M$ is written $T_p M$ and is comprised of all of the possible tangent vectors that exist at that point.
- The tangent space has the same dimension as the manifold.

<center><img src="figs/01_tangentspace.jpeg" width=900px alt="default"/></center>

Thus, the tangent space of a 1-dimensional manifold (a curve) is also one-dimensional, and the tangent space of a 2-dimensional manifold (a 2-dimensional surface) is also 2-dimensional. Similarly, for every $n$-dimensional manifold, there exists an $n$-dimensional tangent space at each point on the manifold, and the tangent space is comprised of all possible tangent vectors at that point.

### Tangent Space to the Hypersphere

Recall that the hypersphere is the manifold defined by $S=f^{-1}(\{0\})$ where $f(x) = \|x\|^{2}-1$.

The tangent space to the hypersphere is defined, at any point $x$ on $S^{d}$, by:

$$ T_{x} S^{d}=\left\{v \in \mathbb{R}^{d+1} \mid\langle x, v\rangle=0\right\}. $$

$\color{#EF5645}{\text{Remark}}$: The tangent space depends on the point $x$ chosen on the manifold.
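A small sketch for these notes (plain NumPy, not from the original slides) that makes the definition concrete: projecting an arbitrary ambient vector onto $T_x S^d$ amounts to removing its component along $x$, after which $\langle x, v\rangle = 0$ holds. The helper name `to_tangent_sphere` is ours.

```
# Sketch: project an ambient vector onto the tangent space of the unit sphere.
import numpy as np

def to_tangent_sphere(vector, base_point):
    """Remove the radial component of `vector` at a unit-norm `base_point`."""
    return vector - np.dot(vector, base_point) * base_point

x = np.array([0.0, 0.0, 1.0])   # a point on the unit sphere S^2
v = np.array([1.0, 2.0, 3.0])   # an arbitrary vector of R^3

v_tan = to_tangent_sphere(v, x)
print(v_tan)             # [1. 2. 0.]
print(np.dot(v_tan, x))  # 0.0, i.e. <x, v_tan> = 0 as in the definition
```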
<center><img src="figs/01_tangentspace.jpeg" width=900px alt="default"/></center> # A) Manifolds and Lie groups: Outline You will learn: 1. What is a manifold? What are tangent spaces? 2. **Why do we care about manifolds?** 3. How can we implement manifolds? 4. What is a (Lie) group of transformations? 5. Why do we care about Lie groups? 6. How can we implement Lie groups? # 2. Why do we care about manifolds? $\textbf{Data in nature "naturally falls on manifolds"}$: data are often subject to constraints, and these constraints force the data to lie on manifolds. <center><img src="figs/01_manifold_cities_on_earth.png" width=200px alt="default"/></center> $\color{#047C91}{\text{Example}}$: The cities on earth are subject to the following constraints: - 1) they cannot fly above the surface of the earth because gravity holds them down - 2) they cannot sink down into the earth because the surface of the earth holds them up. $\rightarrow$ they are constrained to move (or not move) on the surface of a sphere. $\textbf{Shapes "naturally falls on manifolds"}$: shapes are subject to constraints, and these constraints force them to lie on manifolds. $\color{#047C91}{\text{Example}}$: Consider the shape of a triangle: - 1) the shape of a triangle does not change if we translate or rotate the triangle, - 2) the shape of a triangle does not change if we rescale it. <center><img src="figs/01_intro_sphere_triangles.png" width=450px alt="default"/></center> We will see that these constraints force triangles to belong to a sphere. # What is the motivation for analyzing data on manifolds? Analyzing data that lie on manifolds is often possible without taking into account the manifold... ...but choosing to do so can be advantageous: a. it reduces the degrees of freedom of the system, which makes computations less complicated and more intuitive and interpretable. b. it can give better understanding of the data's evolution. c. it can give better predictive power and will help you extract the "signal" from a noisy data set or a data set with very few datapoints. ### a. Reduce the number of degrees of freedom The number of degrees of freedom of a system is the minimum number of variables needed to describe the system completely. $\color{#047C91}{\text{Example}}$: - an object moving freely in 3D requires 3 variables $(x, y, z)$ or $(R, \theta, \phi)$ to be described. - if you know that the point lies on the sphere, you only need two variables $(\theta, \phi)$. <center><img src="figs/01_adv1.png" width=900px alt="default"/></center> Knowing that the point lies on the surface of a manifold allows us to use fewer variables to record its location, which is: - computationally more efficient in terms of memory requirements, and - less mentally taxing (if you are solving a problem on paper). ### b. Get a Better Understanding Unaccelerated points travelling along a manifold follow trajectories called "geodesics". The geodesic is the path of shortest distance that a particle can travel in the space that it is in. $\color{#047C91}{\text{Example}}$: Geodesics in 2D and 3D vector space are straight lines (purple $\gamma$), but geodesics on the sphere are different (pink $\gamma$). If you did not know that the object was moving on the sphere, you would wonder why it is taking such an "irratic" path instead of just going straight. If you know the manifold, you realize that the particles are following very reasonable and predictable paths along geodesics. 
$\color{#047C91}{\text{Example}}$: Particles of light (photons) travel unaccelerated through space-time, and their trajectories are curved as they follow the curvature of the space near massive objects.

<center><img src="figs/01_adv2bis.png" width=250px alt="default"/></center>

### c. Get Better Predictive Power

Knowing the exact manifold your data lie on can help you analyze your current data points and predict future data.

$\color{#047C91}{\text{Example}}$: Take data on a 2-dimensional sphere.
- If you did not know that your data live on the surface of a sphere, then you might try to fit your data with a line -- see Fig. (b).
- However, you should fit a geodesic curve on a sphere -- see Fig. (a).

<center><img src="figs/01_adv3d.png" width=700px alt="default"/></center>

# 01 Differential Geometry for Engineers
## A) Manifolds and Lie groups

$\color{#003660}{\text{Nina Miolane - Assistant Professor}}$ @ BioShape Lab @ UCSB ECE

- Texts and illustrations by Adele Myers @ BioShape Lab.
- Textbook: Guigui, Miolane, Pennec, 2022. Introduction to Riemannian Geometry and Geometric Statistics.

<center><img src="figs/01_manifold_definitions1.png" width=1000px alt="default"/></center>

# Outline: Geometric Learning for BioShape Analysis

- **Unit 1 (Geometry - Math!): Differential Geometry for Engineers**
- **Unit 2 (Shapes)**: Computational Representations of Biomedical Shapes
- **Unit 3 (Machine Learning)**: Geometric Machine Learning for Shape Analysis
- **Unit 4 (Deep Learning)**: Geometric Deep Learning for Shape Analysis

<center><img src="figs/00_bioshape.jpg" width=500px alt="default"/></center>

Examples and applications will be taken from cutting-edge research in the **biomedical field**.

# Outline

- **Unit 1 (Geometry - Math!)**: Differential Geometry for Engineers
  - **A) Manifolds and Lie groups** - Our data spaces.
  - B) Connections and Riemannian Metrics - Tools we use to compute on these spaces.

# A) Manifolds and Lie groups: Outline

You will learn:
1. What is a manifold? What are tangent spaces?
2. Why do we care about manifolds?
3. **How can we implement manifolds?**
4. What is a (Lie) group of transformations?
5. Why do we care about Lie groups?
6. How can we implement Lie groups?

# 3. How can we implement manifolds?

$\color{#EF5645}{\text{Geomstats}}$: a Python package for Geometry in Machine Learning and Deep Learning

<center><img src="figs/01_geomstats2.png" width=1200px alt="default"/></center>

Geomstats uses [object-oriented programming](https://www.educative.io/blog/object-oriented-programming) to define manifolds as "classes" that are organized into a hierarchy. Subclasses are indicated by arrows and represent special cases of their parent class.

$\color{#047C91}{\text{Examples}}$:
- `LevelSet` is the implementation of a manifold corresponding to definition (2):
  - it is thus a special case of `Manifold` and is implemented as a subclass.
- `Hypersphere` can be conveniently represented by definition (2):
  - it is thus a special case of a `LevelSet`.

<center><img src="figs/01_manifold_hierarchy.jpeg" width=900px alt="default"/></center>

Rules that are universally true for all manifolds are implemented in the parent class `Manifold`.
- $\color{#047C91}{\text{Example}}$: Every manifold has a dimension and a tangent space at each point.

Rules that are true for some manifolds are implemented in the subclasses of `Manifold`.
- $\color{#047C91}{\text{Example}}$: A vector space is a special type of manifold, where the tangent space is equal to the space itself.
<center><img src="figs/01_vectorspace.png" width=300px alt="default"/></center> Now, we describe how to use the main classes of the package. ### The Parent Class: `Manifold` The `Manifold` parent class is an "abstract base class" which provides the template of attributes and methods (i.e. functions) that every manifold should have. Methods of the abstract parent class are declared, but without implementation. They will be overwritten by the subclasses. [See implementation here.](https://github.com/geomstats/geomstats/blob/master/geomstats/geometry/manifold.py) Attributes: - `dim`: Stands for dimension. Methods: - `belongs`: evaluates whether a given element belongs to that manifold - `is_tangent`: evaluates whether a vector is tangent to the manifold at a point - `random_point`: generates a random point that lies on the manifold ### Example: Hypersphere A sphere is an hypersphere of dimension 2, which is a special case of manifold. Thus, the sphere should have a `dim` attribute, and `belongs`, `is_tangent`, `random_point` methods. ``` import numpy as np from geomstats.geometry.hypersphere import Hypersphere sphere = Hypersphere(dim=2); print(f"The sphere has dimension {sphere.dim}") point = np.array([1, 0, 0]); print(f"Point is on the sphere: {sphere.belongs(point)}") random = sphere.random_point(); print(f"Random point is on the sphere: {sphere.belongs(random)}") ``` ### Example: Euclidean Space A plane is a vector space of dimension 2, which is a special case of manifold. Thus, the plane should have a `dim` attribute, and `belongs`, `is_tangent`, `random_point` methods. ``` from geomstats.geometry.euclidean import Euclidean plane = Euclidean(dim=2); print(f"The plane has dimension {plane.dim}") point = np.array([1, 0]); print(f"Point is on the plane: {plane.belongs(point)}") random1 = plane.random_point(); print(f"Random point is on the plane: {plane.belongs(random1)}") random2 = plane.random_point(); print(f"Random point is tangent: {plane.is_tangent(random2, base_point=random1)}") ``` ## The class `OpenSet` <center><img src="figs/01_manifold_hierarchy.jpeg" width=1200px alt="default"/></center> $\color{#EF5645}{\text{Recall}}$: **"Definition 1: Local Parameterization".** $M$ is a space that locally resembles Euclidean space near each point. <center><img src="figs/01_manifold_definition1.png" width=600px alt="default"/></center> - Any open set of a vector space naturally verifies this condition. - $\color{#EF5645}{\text{Definition}}$: Intuitively, an open set is a set of points whose boundary is not included. ### Closed and open sets in 2D The set (a) is a closed set. The set (b) is an open set in $\mathbb{R}^2$. <center><img src="figs/01_manifold_openSurfaces.png" width=500px alt="default"/></center> ### An open set in 3D The cone (volume) without boundary (without its surface) is an open set in $\mathbb{R}^3$. <center><img src="figs/01_cone.png" width=400px alt="default"/></center> ## Implementation of `OpenSet` The implementation of `OpenSet` can be [found here](https://github.com/geomstats/geomstats/blob/306ea04412a33c829d2ab9fc7ff713d99a397707/geomstats/geometry/base.py#L321). Attributes: - `dim`: dimension, as any manifold - `ambient_space`: the vector space within which the open set is defined: $\mathbb{R}^d$. Knowing that a manifold is an open set allows us to overwrite some methods of the `Manifold` class. - `is_tangent`: we check that a vector is tangent to the open set by checking that it is part of the ambient vector space. 
Run the code below to see the contents of the `OpenSet` class.

```
import inspect
from geomstats.geometry.base import OpenSet

for line in inspect.getsourcelines(OpenSet)[0]: line = line.replace('\n',''); print(line)
```

### Example: Poincare Disk (Hyperbolic Geometry)

The Poincaré disk is an open set of $\mathbb{R}^2$.

Thus, the Poincaré disk should have a `dim` and an `ambient_space` attribute.

<center><img src="figs/01_disk.png" width=400px alt="default"/></center>

```
from geomstats.geometry.poincare_ball import PoincareBall

poincare_disk = PoincareBall(dim=2)

print(f"The dimension of the poincaré disk is {poincare_disk.dim} and its ambient space is:")
print(poincare_disk.ambient_space.dim)
```

### Example: Open Cone (Brain Connectomes)

The 2x2 brain connectomes form an open set of $\mathbb{R}^3$.

Thus, the space of 2x2 brain connectomes should have a `dim` and an `ambient_space` attribute.

<center><img src="figs/01_connectome.png" width=400px alt="default"/></center>
<center><img src="figs/01_cone.png" width=400px alt="default"/></center>

```
from geomstats.geometry.spd_matrices import SPDMatrices

spd = SPDMatrices(n=2)

print(f"The dimension of the cone of SPD matrices is {spd.dim} and its ambient space is:")
print(spd.ambient_space)
print(f"which has dimension {spd.ambient_space.dim}.")
```

## The class `LevelSet`

<center><img src="figs/01_manifold_hierarchy.jpeg" width=1200px alt="default"/></center>

$\color{#EF5645}{\text{Recall}}$: **"Definition 2: Local Implicit Function".**

$M$ is the set of points that verify a constraint defined by an implicit equation, given by the function $f$.

<center><img src="figs/01_manifold_definition2.png" width=600px alt="default"/></center>

The class `LevelSet` corresponds to the implementation of this definition and can be [found here](https://github.com/geomstats/geomstats/blob/306ea04412a33c829d2ab9fc7ff713d99a397707/geomstats/geometry/base.py#L162).

Attributes:
- `submersion`: the function $f$ defining the constraints.
- `value`: the value $c$ such that $M = f^{-1}(\{c\})$, also called a "level".

For the 3 colored spheres below, what is the submersion? What is the level?

<center><img src="figs/01_manifold_levelset.png" width=1200px alt="default"/></center>

Run the code below to see the contents of the `LevelSet` class.

```
import inspect
from geomstats.geometry.base import LevelSet

for line in inspect.getsourcelines(LevelSet)[0]: line = line.replace('\n',''); print(line)
```

### Example: The Hypersphere

The Hypersphere is implemented with definition (2), i.e. as a subclass of the `LevelSet` class.

Thus the hypersphere should have `submersion` and `value` attributes.

```
import inspect
from geomstats.geometry.hypersphere import Hypersphere

hypersphere = Hypersphere(dim=4)

print("The submersion defining the hypersphere is:")
print(inspect.getsource(hypersphere.submersion)[:-2])

print(f"The value defining the hypersphere is: {hypersphere.value}")
```

## `ProductManifold`

<center><img src="figs/01_manifold_hierarchy.jpeg" width=1200px alt="default"/></center>

New manifolds can be created by combining existing manifolds together. A product manifold defines a new manifold as the product of $n$ other manifolds $M_1, ..., M_n$.

$\color{#047C91}{\text{Example}}$: We can create a product of two hyperspheres as follows.
```
from geomstats.geometry.product_manifold import ProductManifold

sphere1 = Hypersphere(dim=2)
sphere2 = Hypersphere(dim=2)

product_of_two_spheres = ProductManifold([sphere1, sphere2])
product_of_two_spheres.random_point()
```

# Take-Home Messages

- Data spaces of real-world applications are often manifolds.
- Properties of manifolds can be conveniently implemented in a unified framework of "classes".
- For the visualization project:
  - each team will describe one manifold from Geomstats and provide visualizations.
- For this class:
  - we will focus on manifolds that describe shape and shape transformations.

<center><img src="figs/01_manifold_hierarchy.jpeg" width=800px alt="default"/></center>

# A) Manifolds and Lie groups: Outline

You will learn:
1. What is a manifold? What are tangent spaces?
2. Why do we care about manifolds?
3. How can we implement manifolds?
4. **What is a (Lie) group?**
5. Why do we care about Lie groups?
6. How can we implement Lie groups?

# 4. What is a Lie group?

Lie groups are abstract mathematical structures that become tangible when we consider the way they can transform raw data.
- We will be interested in how Lie groups can transform shape data.

$\color{#047C91}{\text{Example}}$: The Lie group of 3D rotations, denoted $SO(3)$, can act on biological shapes by effectively rotating these volumes in 3D space.
- This specific transformation does not change the actual shape of the biological structure.

```
import numpy as np
import matplotlib.colors as colors
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

# `two_nerves` and `label_to_color` come from the optic-nerve data loaded earlier
# in this notebook with geomstats.datasets.utils.load_optical_nerves().
rotation = np.array([
    [1, 0, 0],
    [0, -0.67611566, -0.73679551],
    [0, 0.73679551, -0.67611566]
])

rotated_nerve = rotation @ two_nerves[0].T
two_nerves[1] = rotated_nerve.T
two_nerves

fig = plt.figure()
ax = Axes3D(fig); ax.set_xlim((2000, 4000)); ax.set_ylim((1000, 5000)); ax.set_zlim((-600, 200))

for i, nerve in enumerate(two_nerves):
    x = nerve[1:4, 0]; y = nerve[1:4, 1]; z = nerve[1:4, 2]; verts = [list(zip(x, y, z))]
    poly = Poly3DCollection(verts, alpha=0.5); color = label_to_color[i]; poly.set_color(colors.rgb2hex(color)); poly.set_edgecolor("k")
    ax.add_collection3d(poly)

# patch_0 = mpatches.Patch(color=label_to_color[0], label=label_to_str[0], alpha=0.5)
# patch_1 = mpatches.Patch(color=label_to_color[1], label=label_to_str[1], alpha=0.5)
# plt.legend(handles=[patch_0, patch_1], prop={"size": 20});

plt.show()
```

## Precise Mathematical Definition

$\color{#EF5645}{\text{Definition}}$: A Lie group is a manifold that is also a group.

$\color{#EF5645}{\text{Definition}}$: A group is a set $G$ together with a **binary operation** on $G$, here denoted ".", that combines any two elements $a$ and $b$ to form an element of $G$, denoted $a \cdot b$, such that the following three axioms are satisfied:

- **Associativity:** For all $a, b, c$ in $G$, one has $(a \cdot b) \cdot c=a \cdot(b \cdot c)$.
- **Identity element:** There exists an element $e$ in $G$ such that, for every $a$ in $G$, one has $e \cdot a=a$ and $a \cdot e=a$.
  - Such an element is unique and called the identity element of the group.
- **Inverse element:** For each $a$ in $G$, there exists an element $b$ in $G$ such that $a \cdot b=e$ and $b \cdot a=e$, where $e$ is the identity element.
  - For each $a$, the element $b$ is unique. It is called the inverse of $a$ and denoted $a^{-1}$.

## Explanations with 2D rotations

The group of 2D rotations is a Lie group called the Special Orthogonal group in 2D and is denoted SO(2).

<center><img src="figs/01_rotation_2d.png" width=400px alt="default"/></center>
Its elements can be represented by an angle $\theta$ or by a 2 x 2 rotation matrix. ``` from geomstats.geometry.special_orthogonal import SpecialOrthogonal so = SpecialOrthogonal(n=2, point_type="vector") theta = so.random_point(); theta from geomstats.geometry.special_orthogonal import SpecialOrthogonal so = SpecialOrthogonal(n=2, point_type="matrix") rotation_matrix = so.random_point(); rotation_matrix ``` - **binary operation**: takes two elements of the group and create a new element. - $\color{#047C91}{\text{Example}}$: Composing two rotations $R_1$ and $R_2$ gives a new rotation $R_1.R_2$ ``` from geomstats.geometry.special_orthogonal import SpecialOrthogonal so = SpecialOrthogonal(n=2, point_type="matrix") rotation1 = so.random_point(); print(rotation1) rotation2 = so.random_point(); print(rotation2) composition = so.compose(rotation1, rotation2); print(composition) so.belongs(composition) ``` - A Lie group has an identity element. - In a Lie group, every element has an inverse. ``` print("The identity element of the group is:") print(so.identity) print("The inverse of a rotation is computed as:") print(so.inverse(rotation1)) ``` - **Associativity**: - $\color{#047C91}{\text{Example}}$: If we wish to compose three rotations sequentially, we can first compute the composition of the first two, and apply the last; or compose the last two and apply the result after applying the first. We can verify that the group of rotations SO(2) verifies the associativity. ``` rotation3 = so.random_point() print(so.compose( rotation1, so.compose(rotation2, rotation3))) print(so.compose( so.compose(rotation1, rotation2), rotation3)) ``` We can verify that composing with the identity element does not change a rotation. ``` print(rotation1) print(so.compose(so.identity, rotation1)) ``` We can verify that composing a group element with its inverse gives the identity. ``` print(so.compose(rotation1, so.inverse(rotation1))) ``` ## Lie Algebra $\color{#EF5645}{\text{Definition}}$: The Lie algebra of a Lie group is its tangent space at identity $T_eG$. - The elements of the group, i.e. the points on the manifolds represent transformations (e.g. rotations). - The tangent vectors represents infinitesimal transformations (e.g. infinitesimal rotations). <center><img src="figs/01_tangentspace.jpeg" width=800px alt="default"/></center> ### Example: Rotations The Lie algebra of rotations, i.e. the space of infinitesimal rotations, is the space of skew-symmetric matrices. ``` so.lie_algebra so.lie_algebra.random_point() ``` # A) Manifolds and Lie groups: Outline You will learn: 1. What is a manifold? What are tangent spaces? 2. Why do we care about manifolds? 3. How can we implement manifolds? 4. What is a (Lie) group of transformations? 5. **Why do we care about Lie groups?** 6. How can we implement Lie groups? # 5. Why do we care about Groups? $\textbf{Groups are important because transformations in nature "naturally form Lie groups"}$. - Lie groups can express transformations. - Lie groups can express symmetries. - Even abstract symmetries, like symmetries in particle physics. - Product of 3D rotation group SO(3), one for each joint of the body, represent human shapes <center><img src="figs/01_dance.png" width=1000px alt="default"/></center> - Cyclic group $C_5$ of 5 rotations can define the symmetries of a biomolecule. 
<center><img src="figs/01_protein.jpeg" width=800px alt="default"/></center> - Group $SU(3)$ of quarks colors <center><img src="figs/01_particles.jpeg" width=900px alt="default"/></center> # A) Manifolds and Lie groups: Outline You will learn: 1. What is a manifold? What are tangent spaces? 2. Why do we care about manifolds? 3. How can we implement manifolds? 4. What is a (Lie) group of transformations? 5. Why do we care about Lie groups? 6. **How can we implement Lie groups?** # 6. How can we implement Lie groups? - The `MatrixLieGroup` class appears in the hierarchy of manifolds, since a Lie group is a special type of manifold. - The `MatrixLieAlgebra` also appears in the hierarchy, and is a subclass of the `VectorSpace` class. - Indeed, a Lie algebra is a tangent space and thus a vector space. <center><img src="figs/01_manifold_hierarchy.jpeg" width=1000px alt="default"/></center> `MatrixLieGroup` is implemented [here](https://github.com/geomstats/geomstats/blob/306ea04412a33c829d2ab9fc7ff713d99a397707/geomstats/geometry/lie_group.py#L17). - Attributes of the `MatrixLieGroup` class are: - `dim`: the dimension of the group seen as a manifold - `n`: the size of the matrix defining an element of the group - `identity`: identity element - `lie_algebra`: tangent space at the identity of the group - Methods of the `MatrixLieGroup` class are: - `compose`: composing two elements, i.e. using the binary operation. - `inverse`: inverting an element. You can run the code below to see the contents of the `MatrixLieGroup` class. ``` import inspect from geomstats.geometry.lie_group import MatrixLieGroup for line in inspect.getsourcelines(MatrixLieGroup)[0]: line = line.replace('\n',''); print(line) ``` # Take-Home Messages - Real-world transformations and symmetries of real-world applications are often groups. - All groups verify a set of axioms that can be implemented in an class. - For this course: - we will focus on groups that describe shape transformations and shape symmetries. # A) Manifolds and Lie Groups: Conclusion What we saw: 1. What is a manifold? What are tangent spaces? 2. Why do we care about manifolds? 3. How can we implement manifolds? 4. What is a Lie group? 5. Why do we care about Lie groups? 6. How can we implement Lie groups? What we did not see: - How can we compute on these spaces? E.g. compute an average, or a trajectory? # Outline - **Unit 1 (Geometry - Math!)**: Differential Geometry for Engineers - **A) Manifolds and Lie groups** - Our data spaces. - B) Connections and Riemannian Metrics - Tools we use to compute on these spaces.
```
import time

import numpy as np
import pandas as pd
from transformers import AdamW, get_linear_schedule_with_warmup

import torch
from torch import nn
from torch.utils.data import DataLoader

from _classifier import BertClassifier, BERT16SKmerDatasetForPhylaClassification, GeneratePhylumLabels, TrainTestSplit
```

### Add Phylum Labels to Dataset

```
label_generator = GeneratePhylumLabels(data_path='SILVA_parsed_V2.tsv')
label_generator.save('SILVA_parsed_V2__labeled.tsv')

num_classes = label_generator.num_classes
label_generator.other_label
num_classes
```

### Train-Test Split

```
train_df, test_df = TrainTestSplit('SILVA_parsed_V2__labeled.tsv').train_test_split()

train_df.to_csv('SILVA_parsed_V2__labeled__train.tsv', sep='\t')
test_df.to_csv('SILVA_parsed_V2__labeled__test.tsv', sep='\t')
```

### Create Dataset

```
trainset = BERT16SKmerDatasetForPhylaClassification(
    vocab_path='kmer_model/kmer_vocab.txt',
    data_path='SILVA_parsed_V2__labeled__train.tsv')

testset = BERT16SKmerDatasetForPhylaClassification(
    vocab_path='kmer_model/kmer_vocab.txt',
    data_path='SILVA_parsed_V2__labeled__test.tsv')

batch_size = 32
num_workers = 4

train_loader = DataLoader(
    dataset=trainset,
    batch_size=batch_size,
    num_workers=num_workers
)

test_loader = DataLoader(
    dataset=testset,
    batch_size=batch_size,
    num_workers=num_workers
)
```

### Define Model

```
if torch.cuda.is_available():
    device = torch.device("cuda")
    print(f'There are {torch.cuda.device_count()} GPU(s) available.')
    print('Device name:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

def initialize_model(epochs):
    """Initialize the Bert Classifier, the optimizer and the learning rate scheduler.
    """
    # Instantiate Bert Classifier
    bert_classifier = BertClassifier(path='kmer_model/', num_classes=num_classes, freeze_bert=False)

    # Tell PyTorch to run the model on GPU
    bert_classifier.to(device)

    # Create the optimizer
    optimizer = AdamW(
        bert_classifier.parameters(),
        lr=5e-5,    # Default learning rate
        eps=1e-8    # Default epsilon value
    )

    # Total number of training steps
    # Note: len(trainset) counts samples; the number of optimizer steps is
    # len(train_loader) * epochs, so this schedule decays more slowly than usual.
    total_steps = len(trainset) * epochs

    # Set up the learning rate scheduler
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=0,  # Default value
        num_training_steps=total_steps)

    return bert_classifier, optimizer, scheduler

# Specify loss function
loss_fn = nn.CrossEntropyLoss()
```

### Define Train Loop

```
def train(model, train_dataloader, val_dataloader=None, epochs=4, evaluation=False):
    """
    Train loop.
""" for epoch_i in range(epochs): # Print the header of the result table print(f"{'Epoch':^7} | {'Batch':^15} | {'LR':^7} | {'Train Loss':^12} | {'Val Loss':^10} | {'Val Acc':^9} | {'Elapsed':^9}") print("-"*90) # Measure the elapsed time of each epoch t0_epoch, t0_batch = time.time(), time.time() total_loss, batch_loss, batch_counts = 0, 0, 0 model.train() num_steps = len(train_dataloader) for step, batch in enumerate(train_dataloader): batch_counts += 1 b_input_ids, b_labels = tuple(t.to(device) for t in batch) model.zero_grad() logits = model(b_input_ids) loss = loss_fn(logits, b_labels.view(-1,)) batch_loss += loss.item() total_loss += loss.item() # back-propagation loss.backward() # clip the norm of the gradients to 1.0 to prevent "exploding gradients" #torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() scheduler.step() if (step % 50 == 0 and step != 0) or (step == len(train_dataloader) - 1): time_elapsed = time.time() - t0_batch print(f"{epoch_i + 1:^7} | {step:^7}/{num_steps:^7} | {np.round(scheduler.get_lr()[-1], 7):^7}| {batch_loss / batch_counts:^12.6f} | {'-':^10} | {'-':^9} | {time_elapsed:^9.2f}") batch_loss, batch_counts = 0, 0 t0_batch = time.time() avg_train_loss = total_loss / len(train_dataloader) print("-"*70) if evaluation == True: val_loss, val_accuracy = evaluate(model, val_dataloader) time_elapsed = time.time() - t0_epoch print(f"{epoch_i + 1:^7} | {'-':^15} | {'-':^7} | {avg_train_loss:^12.6f} | {val_loss:^10.6f} | {val_accuracy:^9.2f} | {time_elapsed:^9.2f}") print("-"*90) print("\n") def evaluate(model, val_dataloader): """ Evaluate model performance. """ model.eval() val_accuracy = [] val_loss = [] for batch in val_dataloader: b_input_ids, b_labels = tuple(t.to(device) for t in batch) with torch.no_grad(): logits = model(b_input_ids) loss = loss_fn(logits, b_labels.view(-1,)) val_loss.append(loss.item()) preds = torch.argmax(logits, dim=1).flatten() accuracy = (preds == b_labels.view(-1,)).cpu().numpy().mean() * 100 val_accuracy.append(accuracy) # compute the average accuracy and loss over the validation set. val_loss = np.mean(val_loss) val_accuracy = np.mean(val_accuracy) return val_loss, val_accuracy ``` ### Train! ``` %%time bert_classifier, optimizer, scheduler = initialize_model(epochs=5) train(bert_classifier, train_loader, test_loader, epochs=5, evaluation=True) ```
<a href="https://colab.research.google.com/github/PrismarineJS/mineflayer/blob/master/docs/mineflayer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Using mineflayer in Python This is a tutorial on how to use mineflayer in Python. This example will connect you to the PrismarineJS test server. You can join it with prismarine-viewer or your Minecraft client at server IP **95.111.249.143:10000**. If you're new to Jupyter Notebooks, you can press the "Play" button at the left of each code block to run it. Make sure that you run the blocks in a correct order. ## Setup First, make sure you have Python version 3.7 and Node.js version 14 or newer installed ``` !python --version !node --version ``` Now, we can use pip to install the `javascript` Python package to access Node.js libraries from Python. ``` !pip install javascript ``` ## Usage If all is well, we can import the `javascript` library. We can then import the `require` function which works similarly to the `require` function in Node.js, but does the dependency management for us. You may notice the extra imports : On, Once, off and AsyncTask. These will be discussed later on. ``` from javascript import require, On, Once, AsyncTask ``` We can now import Mineflayer ``` mineflayer = require('mineflayer') ``` Once we've done that, we can create a new `bot` instance, through the `createBot` function. You can see the docs for this function [here](https://github.com/PrismarineJS/mineflayer/blob/master/docs/api.md#bot). In the line below we specify a hostname and a port for the server, but do not pass any `auth` or `password` options, so it will connect to the server in offline mode. Below that, we also an event handlers, one that gets called on "spawn" event and sends a chat message. ``` random_number = id([]) % 1000 # Give us a random number upto 1000 BOT_USERNAME = f'colab_{random_number}' bot = mineflayer.createBot({ 'host': '95.111.249.143', 'port': 10000, 'username': BOT_USERNAME, 'hideErrors': False }) # The spawn event @Once(bot, 'login') def spawn(*a): bot.chat('I spawned') ``` If your bot spawned, we can now take a look at the bot's position ``` bot.entity.position ``` ### Listening to events You can register an event handler with the `@On` or `@Once` decorator. This decorator takes two arguments, first it's the **Event Emitter** (the object that is sending events) and the second is the **event name**, what event you want to listen to. *Do not use the .on or .once methods on bot, use the decorators instead.* A decorator always has a function under it which is being decorated, which can have any name. The first parameter to any event emitter callback is the `this` argument. In the code below, we create an event emitter on `bot` that listens to `playerJoin` events, then print that out. ``` @On(bot, 'playerJoin') def end(this, player): bot.chat('Someone joined!') ``` In Python, you cannot leave any arguments for an event handler callback blank like in JavaScript. Instead, you can use the asterisk (`*`) operator in Python to capture all remaining arguments to the right, much like the `...` rest/spread operator in JavaScript. The parameter with the asterisk will be a tuple containing the captured arguments. You can stop listening for events through an event handler by using the imported `off` function. It takes three parameters: the emitter, event name, and a reference to the Python function. 
```
@On(bot, 'chat')
def onChat(this, user, message, *rest):
    print(f'{user} said "{message}"')

    # If the message contains stop, remove the event listener and stop logging.
    if 'stop' in message:
        off(bot, 'chat', onChat)
```

You need to `off` all the event listeners you listen to with `@On`, otherwise the Python process won't exit until all of the active event emitters have been off'ed. If you only need to listen once, you can use the `@Once` decorator like in the example above.

## Asynchronous tasks

By default, all the operations you do run on the main thread. This means you can only do one thing at a time. To multitask, you can use the `@AsyncTask` decorator to run a function in a new thread, while not obstructing the main thread.

### Block breaking

Take a look at the example below. Here we listen for a "break" trigger in a chat message, then we start digging the block underneath, while simultaneously sending a message that the bot has "started digging".

```
@On(bot, 'chat')
def breakListener(this, sender, message, *args):
    if sender and (sender != BOT_USERNAME):
        if 'break' in message:
            pos = bot.entity.position.offset(0, -1, 0)
            blockUnder = bot.blockAt(pos)
            if bot.canDigBlock(blockUnder):
                bot.chat(f"I'm breaking the '{blockUnder.name}' block underneath {bot.canDigBlock(blockUnder)}")
                # The start=True parameter means to immediately invoke the function underneath
                # If left blank, you can start it with the `start()` function later on.
                try:
                    @AsyncTask(start=True)
                    def break_block(task):
                        bot.dig(blockUnder)
                    bot.chat('I started digging!')
                except Exception as e:
                    bot.chat(f"I had an error {e}")
            else:
                bot.chat(f"I can't break the '{blockUnder.name}' block underneath")
        if 'stop' in message:
            off(bot, 'chat', breakListener)
```

## Using mineflayer plugins

Pick the plugin you want from the list [here](https://github.com/PrismarineJS/mineflayer#third-party-plugins), then `require()` it and register it to the bot. Some plugins have different ways to register to the bot, look at the plugin's README for usage steps.

### mineflayer-pathfinder

`mineflayer-pathfinder` is an essential plugin that helps your bot move between places through A* pathfinding. Let's import it:

```
pathfinder = require('mineflayer-pathfinder')
bot.loadPlugin(pathfinder.pathfinder)

# Create a new minecraft-data instance with the bot's version
mcData = require('minecraft-data')(bot.version)

# Create a new movements class
movements = pathfinder.Movements(bot, mcData)

# How far to be from the goal
RANGE_GOAL = 1
```

Now let's create a goal for the bot to move to where another player wants, based on a chat message.

```
bot.removeAllListeners('chat')

@On(bot, 'chat')
def handleMsg(this, sender, message, *args):
    if sender and (sender != BOT_USERNAME):
        bot.chat('Hi, you said ' + message)
        if 'come' in message:
            player = bot.players[sender]
            target = player.entity
            if not target:
                bot.chat("I don't see you !")
                return
            pos = target.position
            bot.pathfinder.setMovements(movements)
            bot.pathfinder.setGoal(pathfinder.goals.GoalNear(pos.x, pos.y, pos.z, RANGE_GOAL))
        if 'stop' in message:
            off(bot, 'chat', handleMsg)
```

## Analyzing the world

You can also interact with mineflayer through any other Python package. Let's analyze some block frequencies...
``` import matplotlib.pyplot as plt figure = plt.figure() axes = figure.add_axes([0,0,1,1]) Vec3 = require('vec3').Vec3 columns = bot.world.getColumns() block_freqs = {} for c in range(0, 4): # iterate through some of the loaded chunk columns cc = columns[c].column for y in range(1, 40): for x in range(1, 16): for z in range(1, 16): block = cc.getBlock(Vec3(x, y, z)) if block.name in block_freqs: block_freqs[block.name] += 1 else: block_freqs[block.name] = 1 print(block_freqs) axes.bar(block_freqs.keys(), block_freqs.values()) plt.show() ``` ## Exiting the bot Once you're done, you can call `bot.quit()` or `bot.end()` to disconnect and stop the bot. ``` bot.quit() ``` ## Read more * **API** - https://github.com/PrismarineJS/mineflayer/blob/master/docs/api.md * **Type Definitions** - https://github.com/PrismarineJS/mineflayer/blob/master/index.d.ts * FAQ - https://github.com/PrismarineJS/mineflayer/blob/master/docs/FAQ.md * JS tutorial - https://github.com/PrismarineJS/mineflayer/blob/master/docs/tutorial.md
# Connecting to the board

`%serialconnect` should automatically detect the port, but if it doesn't work you can provide the port and baud rate as parameters to this magic function

```
# %serialconnect
%serialconnect --port=/dev/tty.usbmodem3660384B30362 --baud=115200
```

`help('modules')` is a handy function that returns a list of available modules.

Our build has a few extra (or extended) modules:
- `bitcoin` - written in Python and contains all necessary functions to build a hardware wallet
- `hashlib` - adds support for sha512, ripemd160 and a few extra one-liners like `pbkdf2_hmac_sha512` and `hmac_sha512`
- `display` - allows you to init and update the display; all the GUI work should be done with `lvgl`
- `lvgl` - MicroPython bindings to the [littlevgl](https://littlevgl.com/) library. It is a very powerful and optimized GUI library with plenty of widgets and advanced features like anti-aliasing, custom fonts etc.
- `qrcode` - a binding to a C library that generates QR codes from a string

```
help('modules')
```

In this part of the tutorial we are interested in the `pyb` module. It gives you an interface to communicate with hardware peripherals, in particular with LEDs and a switch (the blue button on the back of the board).

# Blinking with LEDs

`pyb.LED` is a class that allows you to turn LEDs on and off. There are 4 LEDs on the board (right above the screen).

Let's turn them on!

```
import pyb

# list of LEDs:
leds = [pyb.LED(i) for i in range(1,5)]

# turn on every LED
for led in leds:
    led.on()
```

Now let's make them roll. We will shine one LED at a time and move to the next one after 100 ms.

Since we have an infinite loop here, we will need to interrupt the process when we are done watching the rolling LEDs. In Jupyter you can do it from the top menu: `Kernel->Interrupt`

```
import time

cur = 0 # index of the LED we will turn on
while True:
    for i,led in enumerate(leds):
        # turn on current led
        if i==cur:
            led.on()
        else:
            # turn off every other
            led.off()
    cur = (cur+1) % len(leds)
    time.sleep_ms(100)
```

# Schedule and Timer

Now let's make it non-blocking. This will be important later when we start writing our GUI.

The board also has a `pyb.Timer` class that can call a function with a certain period. This is exactly what we need for our LEDs.

The only thing you need to remember is that this callback runs in interrupt mode, which blocks all other processes. The function should therefore be as small as possible; otherwise it is way better not to do the work right away but to add it to a queue. Functions in the queue will be processed "as soon as possible" during normal operation.

Micropython has a special method to add a function to this queue: `micropython.schedule`. If the queue is full it will raise an error, but we don't care if one of the function calls is skipped, so we will `try` it.
```
import micropython

# we will change `step` variable
# to change the direction of the roll
step = 1
# this is our counter
cur = 0

def roll(t):
    """Roll to the next LED"""
    global cur
    for i,led in enumerate(leds):
        # turn on current led
        if i==cur:
            led.on()
        else:
            # turn off every other
            led.off()
    cur = (cur+step) % len(leds)

def schedule(t):
    """Try to schedule an LED update"""
    try:
        micropython.schedule(roll, None)
    except:
        pass

timer = pyb.Timer(4)  # timer 4
timer.init(freq=10)   # 10Hz - 100 ms per tick
timer.callback(schedule)

# we can interactively change the `step` now and reverse the direction
step = len(leds)-1
```

Or we can use a button to control the direction:

```
sw = pyb.Switch()

def change_direction():
    global step
    step = 1 if step > 1 else len(leds)-1

sw.callback(change_direction)

# In order to stop this LED dance we need to
# unregister callback and deinit the timer
timer.callback(None)
timer.deinit()

for led in leds:
    led.off()
```

Now that we have spent some time with the LEDs, let's move on and write a small GUI that controls them.

Also check out the `main.py` file in this folder. You can copy it to the board and it will run this script after reset (black button).
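As a small optional variation (an added sketch, not part of the original tutorial), the same switch-plus-`micropython.schedule` pattern can change the roll speed instead of the direction, by re-initializing the timer outside of the interrupt context. It assumes `timer` and `schedule` from the example above are still defined.

```
import micropython
import pyb

freqs = [2, 5, 10, 20]  # roll speeds in Hz
freq_index = 0

def apply_speed(_):
    # Runs from the scheduler queue, outside the hard interrupt,
    # so re-initializing the timer here is safe.
    timer.init(freq=freqs[freq_index])
    timer.callback(schedule)

def change_speed():
    global freq_index
    freq_index = (freq_index + 1) % len(freqs)
    try:
        micropython.schedule(apply_speed, None)
    except RuntimeError:
        pass

sw = pyb.Switch()
sw.callback(change_speed)
```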
``` %matplotlib notebook # %matplotlib inline # Fix ROS python2 stuff import sys sys.path = [p for p in sys.path if "python2.7" not in p] import numpy as np import cv2 from matplotlib import pyplot as plt from matplotlib import cm import matplotlib as mpl mpl.rcParams['figure.dpi']= 300 # http://scikit-image.org/docs/dev/auto_examples/segmentation/plot_segmentations.html from skimage.segmentation import felzenszwalb, slic, quickshift, watershed from skimage.segmentation import mark_boundaries from scipy.stats import wasserstein_distance img1 = cv2.imread('/home/arprice/data/ycb_teleop_vive_2018-11-21-15-48-11/rgb20181121T204811.518547.jpg') img2 = cv2.imread('/home/arprice/data/ycb_teleop_vive_2018-11-21-15-48-11/rgb20181121T204812.021465.jpg') # img2 = cv2.imread('/home/arprice/data/ycb_teleop_vive_2018-11-21-15-48-11/rgb20181121T204912.119592.jpg') # plt.imshow(img1[:,:,::-1]) # cv2.imshow('image', img1) # cv2.waitKey(0) # cv2.destroyAllWindows() scikit_img1 = img1[:,:,::-1] scikit_img2 = img2[:,:,::-1] segments_slic1 = slic(scikit_img1, n_segments=250, compactness=10, sigma=1) segments_slic2 = slic(scikit_img2, n_segments=250, compactness=10, sigma=1) plt.figure() plt.imshow(mark_boundaries(scikit_img1, segments_slic1)) plt.show() plt.figure() plt.imshow(segments_slic1) plt.show() bgr = ('b','g','r') def hists(img, segments, s): mask = np.zeros(img.shape[:2], np.uint8) mask[segments == s] = 1 h = [] for i,col in enumerate(bgr): hi = cv2.calcHist([img],[i],mask,[256],[0,256]) h.append(hi / (1.0+np.sum(hi))) return h def mask_hists(img, segments): histograms = dict() for s in np.unique(segments): histograms[s] = hists(img, segments, s) return histograms def visualize_segment(img, segments, s): # NB:"masked" in numpy means "hidden", but "active" in cv mask = np.zeros(img.shape[:2], np.uint8) mask[segments == s] = 1 plt.figure() plt.imshow(mask) plt.show() plt.figure() for j,col in enumerate(bgr): hj = cv2.calcHist([img],[j],mask,[256],[0,256]) plt.plot(hj / (1.0+np.sum(hj)),color = col) plt.xlim([0,256]) plt.show() viz_seg1 = 97 viz_seg2 = 103 visualize_segment(img1, segments_slic1, viz_seg1) visualize_segment(img2, segments_slic2, viz_seg2) histograms1 = mask_hists(img1, segments_slic1) histograms2 = mask_hists(img2, segments_slic2) # print(histograms1[viz_seg1][0].flatten()) print(wasserstein_distance(histograms1[viz_seg1][0].flatten(), histograms2[viz_seg2][0].flatten())) print(wasserstein_distance(histograms1[viz_seg1][0].flatten(), histograms2[viz_seg2-25][0].flatten())) I = len(np.unique(segments_slic1)) J = len(np.unique(segments_slic2)) K = max(I, J) D = np.zeros([I, J]) for i, n in enumerate(np.unique(segments_slic1)): for j, m in enumerate(np.unique(segments_slic2)): for k in range(3): D[i, j] += wasserstein_distance(histograms1[n][k].flatten(), histograms2[m][k].flatten()) plt.figure() plt.imshow(D, cmap=cm.gist_heat, interpolation='nearest') plt.show() from scipy.optimize import linear_sum_assignment row_ind, col_ind = linear_sum_assignment(D) print(col_ind) print(row_ind[viz_seg1], '->', col_ind[viz_seg1]) ```
# CIFRADO CESAR Implemente un programa que encripte mensajes usando el cifrado de Caesar, según lo siguiente. <code> caesar.py 13 plaintext: HELLO ciphertext: URYYB </code> ## ANTECEDENTES Supuestamente, César (sí, ese César) solía “encriptar” (es decir, ocultar de manera reversible) mensajes confidenciales desplazando cada letra en un número de lugares. Por ejemplo, podría escribir A como B, B como C, C como D,… y, en orden alfabético, Z como A. Y entonces, para decir HOLA a alguien, César podría escribir IFMMP. Al recibir tales mensajes de César, los destinatarios tendrían que "descifrarlos" cambiando las letras en la dirección opuesta en el mismo número de lugares. El secreto de este "criptosistema" dependía de que sólo César y los destinatarios conocieran un secreto, el número de lugares por los que César había cambiado sus letras (por ejemplo, 1). No es particularmente seguro para los estándares modernos, pero, bueno, si tal vez eres el primero en el mundo en hacerlo, ¡bastante seguro! El texto no cifrado generalmente se denomina texto sin formato . El texto cifrado generalmente se denomina texto cifrado . Y el secreto utilizado se llama clave . Para ser claros, entonces, así es como se <code>HELLO</code> obtiene el cifrado con una clave de 1 <code>IFMMP</code>: <img src='./img/ejercicio2.png'> Más formalmente, el algoritmo de César (es decir, el cifrado) cifra los mensajes "rotando" cada letra en k posiciones. Más formalmente, si p es un texto plano (es decir, un mensaje no cifrado), p i es el i- ésimo carácter en p , y k es una clave secreta (es decir, un número entero no negativo), entonces cada letra, c i , en el texto cifrado, c , se calcula como <img src='./img/ejercicio2_2.png'> donde <code>% 26</code> aquí significa "resto al dividir por 26". Esta fórmula quizás hace que el cifrado parezca más complicado de lo que es, pero en realidad es solo una forma concisa de expresar el algoritmo con precisión. De hecho, en aras de la discusión, piense en A (o a) como 0, B (o b) como 1,…, H (oh) como 7, I (o i) como 8,… y Z (o z) como 25. Suponga que César solo quiere saludar a alguien que usa de manera confidencial, esta vez, una clave, k , de 3. Y entonces su texto llano, p , es Hola, en cuyo caso el primer carácter de su texto llano, p 0 , es H (también conocido como 7), y el segundo carácter de su texto sin formato, p 1 , es i (también conocido como 8). El primer carácter de su texto cifrado, c 0, es así K, y el segundo carácter de su texto cifrado, c 1 , es así L. ¿Puedes ver por qué? Escribamos un programa llamado <code>caesar</code> que le permita cifrar mensajes usando el cifrado de Caesar. En el momento en que el usuario ejecuta el programa, debe decidir, proporcionando un argumento de línea de comandos, cuál debe ser la clave en el mensaje secreto que proporcionará en tiempo de ejecución. No debemos asumir necesariamente que la clave del usuario va a ser un número; aunque puede suponer que, si es un número, será un entero positivo. A continuación se muestran algunos ejemplos de cómo podría funcionar el programa. Por ejemplo, si el usuario ingresa una clave de <code>1</code> y un texto sin formato de <code>HELLO</code>: <code>caesar.py 1 plaintext: HELLO ciphertext: IFMMP Así es como el programa podría funcionar si el usuario proporciona una clave <code>13</code> y un texto sin formato de <code>hello, world</code>: <code>caesar.py 13 plaintext: hello, world ciphertext: uryyb, jbeyq Observe que ni la coma ni el espacio fueron "desplazados" por el cifrado. 
¡Solo rota los caracteres alfabéticos! ¿Qué tal uno más? Así es como el programa podría funcionar si el usuario proporciona una clave de <code>13</code> nuevo, con un texto plano más complejo: <code>caesar.py 13 plaintext: be sure to drink your Ovaltine ciphertext: or fher gb qevax lbhe Binygvar Observe que se ha conservado el caso del mensaje original. Las letras minúsculas permanecen en minúsculas y las letras mayúsculas permanecen en mayúsculas. ### Nota Para todos los casos en que no se proporcione una clave válida el programa debe concluir con algún mensaje de error ## ESPECIFICACIONES Diseñe e implemente un programa, <code>caesar</code> que encripta mensajes usando el cifrado de Caesar. - Implemente su programa en un archivo llamado <code>caesar.py</code> - Su programa debe aceptar un único argumento de línea de comandos, un entero no negativo. Llamémoslo k por el bien de la discusión. - Si su programa se ejecuta sin ningún argumento de línea de comando o con más de un argumento de línea de comando, su programa debe imprimir un mensaje de error de su elección (con <code>print</code>) y retornar un valor de <code>1</code> (que tiende a significar un error) inmediatamente. - Si alguno de los caracteres del argumento de la línea de comandos no es un dígito decimal, su programa debe imprimir el mensaje <code>Usage: ./caesar key</code> y retornar un valor de <code>1</code>. - No asuma que k será menor o igual a 26. Su programa debería funcionar para todos los valores integrales no negativos de k menores que 2 ^ 31 - 26. En otras palabras, no necesita preocuparse si su programa eventualmente se interrumpe si el usuario elige un valor para k que es demasiado grande o casi demasiado grande para caber en un int. Pero, incluso si k es mayor que 26, los caracteres alfabéticos en la entrada de su programa deben seguir siendo caracteres alfabéticos en la salida de su programa. Por ejemplo, si k es 27, <code>A</code> debería convertirse en <code>B</code>, ya que <code>B</code> está a 27 posiciones de <code>A</code>, siempre que pase de <code>Z</code> a <code>A</code>. - Su programa debe generar <code>plaintext</code>:(sin una nueva línea) y luego solicitar al usuario un <code>string</code> texto plano (usando <code>input</code>). - Su programa debe generar <code>ciphertext</code>:(sin una nueva línea) seguido por el texto cifrado correspondiente del texto sin formato, con cada carácter alfabético en el texto sin formato "rotado" por k posiciones; Los caracteres no alfabéticos deben imprimirse sin cambios. - Su programa debe preservar el uso de mayúsculas y minúsculas: las letras mayúsculas, aunque rotas, deben permanecer en mayúscula; las letras minúsculas, aunque estén rotadas, deben permanecer en minúsculas. - Después de generar texto cifrado, debe imprimir una nueva línea. Su programa debería retornar <code>0</code>. ## Algoritmo Revelación Hay más de una forma de hacer esto, ¡así que aquí tienes solo una! 1. Verifique que el programa se ejecutó con un argumento de línea de comando 2. Repita el argumento proporcionado para asegurarse de que todos los caracteres sean dígitos 3. Convierta ese argumento de la línea de comandos de un stringa unint 4. Solicitar al usuario texto sin formato 5. Itera sobre cada carácter del texto sin formato: 6. Si es una letra mayúscula, gírela, conservando el caso, luego imprima el carácter girado 7. Si es una letra minúscula, gírela, conservando el caso, luego imprima el carácter girado 8. Si no es ninguno, imprima el carácter como está 9. 
Imprimir una nueva línea ## Pruebas Asegúrese de probar su código para cada uno de los siguientes. - Ejecute su programa como python caesar.py, para cada uno de los ejemplos dados en esta página - Otras pruebas se realizarán durante la clase ``` import string 'l'.isupper() ```
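The specification above translates directly into a short script. Below is one possible sketch of `caesar.py` under those rules (a single numeric command-line key, rotate only alphabetic characters, preserve case, print the usage message and return 1 on bad input); everything the spec does not fix — helper names, the exact error message for a wrong argument count — is an illustrative choice.

```python
# caesar.py -- one possible implementation of the specification above.
# Usage: python caesar.py <non-negative integer key>
import sys

def rotate(ch, k):
    """Rotate one character by k positions, preserving case;
    non-alphabetic characters pass through unchanged."""
    if ch.isupper():
        return chr((ord(ch) - ord('A') + k) % 26 + ord('A'))
    if ch.islower():
        return chr((ord(ch) - ord('a') + k) % 26 + ord('a'))
    return ch

def main():
    # Exactly one command-line argument, made up only of decimal digits.
    if len(sys.argv) != 2 or not sys.argv[1].isdigit():
        print("Usage: ./caesar key")
        return 1
    k = int(sys.argv[1])              # % 26 inside rotate() handles any large key
    plaintext = input("plaintext: ")
    print("ciphertext: " + "".join(rotate(ch, k) for ch in plaintext))
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

For example, `python caesar.py 13` with plaintext `HELLO` prints `ciphertext: URYYB`.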
``` import numpy as np from scipy import optimize import json import random import logging modelDataDir = "modelData/" class LRR: def __init__(self): self.vocab=[] self.vocab = self.loadDataFromFile("vocab.json") self.aspectKeywords={} self.aspectKeywords = self.loadDataFromFile("aspectKeywords.json") #word to its index in the corpus mapping self.wordIndexMapping={} self.createWordIndexMapping() #aspect to its index in the corpus mapping self.aspectIndexMapping={} self.reverseAspIndexmapping={} self.createAspectIndexMapping() #list of Wd matrices of all reviews self.wList=[] self.wList = self.loadDataFromFile("wList.json") #List of ratings dictionaries belonging to review class self.ratingsList=[] self.ratingsList = self.loadDataFromFile("ratingsList.json") #List of Review IDs self.reviewIdList=[] self.reviewIdList = self.loadDataFromFile("reviewIdList.json") #number of reviews in the corpus self.R = len(self.reviewIdList) #breaking dataset into 3:1 ratio, 3 parts for training and 1 for testing self.trainIndex = random.sample(range(0, self.R), int(0.75*self.R)) self.testIndex = list(set(range(0, self.R)) - set(self.trainIndex)) #number of aspects self.k = len(self.aspectIndexMapping) #number of training reviews in the corpus self.Rn = len(self.trainIndex) #vocab size self.n = len(self.wordIndexMapping) #delta - is simply a number self.delta = 1.0 #matrix of aspect rating vectors (Sd) of all reviews - k*Rn self.S = np.empty(shape=(self.k, self.Rn), dtype=np.float64) #matrix of alphas (Alpha-d) of all reviews - k*Rn #each column represents Aplha-d vector for a review self.alpha = np.random.dirichlet(np.ones(self.k), size=1).reshape(self.k, 1) for i in range(self.Rn-1): self.alpha = np.hstack((self.alpha, np.random.dirichlet(np.ones(self.k), size=1).reshape(self.k, 1))) #vector mu - k*1 vector self.mu = np.random.dirichlet(np.ones(self.k), size=1).reshape(self.k, 1) #matrix Beta for the whole corpus (for all aspects, for all words) - k*n matrix self.beta = np.random.uniform(low=-0.1, high=0.1, size=(self.k, self.n)) #matrix sigma for the whole corpus - k*k matrix #Sigma needs to be positive definite, with diagonal elems positive '''self.sigma = np.random.uniform(low=-1.0, high=1.0, size=(self.k, self.k)) self.sigma = np.dot(self.sigma, self.sigma.transpose()) print(self.sigma.shape, self.sigma) ''' #Following is help taken from: #https://stats.stackexchange.com/questions/124538/ W = np.random.randn(self.k, self.k-1) S = np.add(np.dot(W, W.transpose()), np.diag(np.random.rand(self.k))) D = np.diag(np.reciprocal(np.sqrt(np.diagonal(S)))) self.sigma = np.dot(D, np.dot(S, D)) self.sigmaInv=np.linalg.inv(self.sigma) ''' testing for positive semi definite if(np.all(np.linalg.eigvals(self.sigma) > 0)): #whether is positive semi definite print("yes") print(self.sigma) ''' # setting up logger self.logger = logging.getLogger("LRR") self.logger.setLevel(logging.INFO) self.logger.setLevel(logging.DEBUG) fh = logging.FileHandler("lrr.log") formatter = logging.Formatter('%(asctime)s %(message)s') fh.setFormatter(formatter) self.logger.addHandler(fh) def createWordIndexMapping(self): i=0 for word in self.vocab: self.wordIndexMapping[word]=i i+=1 #print(self.wordIndexMapping) def createAspectIndexMapping(self): i=0; for aspect in self.aspectKeywords.keys(): self.aspectIndexMapping[aspect]=i self.reverseAspIndexmapping[i]=aspect i+=1 #print(self.aspectIndexMapping) def loadDataFromFile(self,fileName): with open(modelDataDir+fileName,'r') as fp: obj=json.load(fp) fp.close() return obj #given a dictionary as in 
every index of self.wList, #creates a W matrix as was in the paper def createWMatrix(self, w): W = np.zeros(shape=(self.k, self.n)) for aspect, Dict in w.items(): for word, freq in Dict.items(): W[self.aspectIndexMapping[aspect]][self.wordIndexMapping[word]]=freq return W #Computing aspectRating array for each review given Wd->W matrix for review 'd' def calcAspectRatings(self,Wd): Sd = np.einsum('ij,ij->i',self.beta,Wd).reshape((self.k,)) try: Sd = np.exp(Sd) except Exception as inst: self.logger.info("Exception in calcAspectRatings : %s", Sd) return Sd def calcMu(self): #calculates mu for (t+1)th iteration self.mu = np.sum(self.alpha, axis=1).reshape((self.k, 1))/self.Rn def calcSigma(self, updateDiagonalsOnly): #update diagonal entries only self.sigma.fill(0) for i in range(self.Rn): columnVec = self.alpha[:, i].reshape((self.k, 1)) columnVec = columnVec - self.mu if updateDiagonalsOnly: for k in range(self.k): self.sigma[k][k] += columnVec[k]*columnVec[k] else: self.sigma = self.sigma + np.dot(columnVec, columnVec.transpose()) for i in range(self.k): self.sigma[i][i] = (1.0+self.sigma[i][i])/(1.0+self.Rn) self.sigmaInv=np.linalg.inv(self.sigma) def calcOverallRating(self,alphaD,Sd): return np.dot(alphaD.transpose(),Sd)[0][0] def calcDeltaSquare(self): self.delta=0.0 for i in range(self.Rn): alphaD=self.alpha[:,i].reshape((self.k, 1)) Sd=self.S[:,i].reshape((self.k, 1)) Rd=float(self.ratingsList[self.trainIndex[i]]["Overall"]) temp=Rd-self.calcOverallRating(alphaD,Sd) try: self.delta+=(temp*temp) except Exception: self.logger.info("Exception in Delta calc") self.delta/=self.Rn def maximumLikelihoodBeta(self,x,*args): beta = x beta=beta.reshape((self.k,self.n)) innerBracket = np.empty(shape=self.Rn) for d in range(self.Rn): tmp = 0.0 rIdx = self.trainIndex[d] #review index in wList for i in range(self.k): W = self.createWMatrix(self.wList[rIdx]) tmp += self.alpha[i][d]*np.dot(beta[i, :].reshape((1, self.n)), W[i, :].reshape((self.n, 1)))[0][0] innerBracket[d] = tmp - float(self.ratingsList[rIdx]["Overall"]) mlBeta=0.0 for d in range(self.Rn): mlBeta+=innerBracket[d] * innerBracket[d] return mlBeta/(2*self.delta) def gradBeta(self,x,*args): beta=x beta=beta.reshape((self.k,self.n)) gradBetaMat=np.empty(shape=((self.k,self.n)),dtype='float64') innerBracket = np.empty(shape=self.Rn) for d in range(self.Rn): tmp = 0.0 rIdx = self.trainIndex[d] #review index in wList for i in range(self.k): W = self.createWMatrix(self.wList[rIdx]) tmp += self.alpha[i][d]*np.dot(beta[i, :].reshape((1, self.n)), W[i, :].reshape((self.n, 1)))[0][0] innerBracket[d] = tmp - float(self.ratingsList[rIdx]["Overall"]) for i in range(self.k): beta_i=np.zeros(shape=(1,self.n)) for d in range(self.Rn): rIdx = self.trainIndex[d] #review index in wList W = self.createWMatrix(self.wList[rIdx]) beta_i += innerBracket[d] * self.alpha[i][d] * W[i, :] gradBetaMat[i,:]=beta_i return gradBetaMat.reshape((self.k*self.n, )) def calcBeta(self): beta, retVal, flags=optimize.fmin_l_bfgs_b(func=self.maximumLikelihoodBeta,x0=self.beta,fprime=self.gradBeta,args=(),m=5,maxiter=15000) converged = True if flags['warnflag']!=0: converged = False self.logger.info("Beta converged : %d", flags['warnflag']) return beta.reshape((self.k,self.n)), converged def maximumLikelihoodAlpha(self, x, *args): alphad=x alphad=alphad.reshape((self.k, 1)) rd,Sd,deltasq,mu,sigmaInv=args temp1=(rd-np.dot(alphad.transpose(),Sd)[0][0]) temp1*=temp1 temp1/=(deltasq*2) temp2=(alphad-mu) temp2=np.dot(np.dot(temp2.transpose(),sigmaInv),temp2)[0][0] temp2/=2 return 
temp1+temp2 def gradAlpha(self, x,*args): alphad=x alphad=alphad.reshape((self.k, 1)) rd,Sd,deltasq,mu,sigmaInv=args temp1=(np.dot(alphad.transpose(),Sd)[0][0]-rd)*Sd temp1/=deltasq temp2=np.dot(sigmaInv,(alphad-mu)) return (temp1+temp2).reshape((self.k,)) def calcAlphaD(self,i): alphaD=self.alpha[:,i].reshape((self.k,1)) rIdx = self.trainIndex[i] rd=float(self.ratingsList[rIdx]["Overall"]) Sd=self.S[:,i].reshape((self.k,1)) Args=(rd,Sd,self.delta,self.mu,self.sigmaInv) bounds=[(0,1)]*self.k #self.gradf(alphaD, *Args) alphaD, retVal, flags=optimize.fmin_l_bfgs_b(func=self.maximumLikelihoodAlpha,x0=alphaD,fprime=self.gradAlpha,args=Args,bounds=bounds,m=5,maxiter=15000) converged = True if flags['warnflag']!=0: converged = False self.logger.info("Alpha Converged : %d", flags['warnflag']) #Normalizing alphaD so that it follows dirichlet distribution alphaD=np.exp(alphaD) alphaD=alphaD/(np.sum(alphaD)) return alphaD.reshape((self.k,)), converged ''' def getBetaLikelihood(self): likelihood=0 return self.lambda*np.sum(np.einsum('ij,ij->i',self.beta,self.beta)) ''' def dataLikelihood(self): likelihood=0.0 for d in range(self.Rn): rIdx = self.trainIndex[d] Rd=float(self.ratingsList[rIdx]["Overall"]) W=self.createWMatrix(self.wList[rIdx]) Sd=self.calcAspectRatings(W).reshape((self.k, 1)) alphaD=self.alpha[:,d].reshape((self.k, 1)) temp=Rd-self.calcOverallRating(alphaD,Sd) try: likelihood+=(temp*temp) except Exception: self.logger.debug("Exception in dataLikelihood") likelihood/=self.delta return likelihood def alphaLikelihood(self): likelihood=0.0 for d in range(self.Rn): alphad=self.alpha[:,d].reshape((self.k, 1)) temp2=(alphad-self.mu) temp2=np.dot(np.dot(temp2.transpose(),self.sigmaInv),temp2)[0] likelihood+=temp2 try: likelihood+=np.log(np.linalg.det(self.sigma)) except FloatingPointError: self.logger.debug("Exception in alphaLikelihood: %f", np.linalg.det(self.sigma)) return likelihood def calcLikelihood(self): likelihood=0.0 likelihood+=np.log(self.delta) #delta likelihood likelihood+=self.dataLikelihood() #data likelihood - will capture beta likelihood too likelihood+=self.alphaLikelihood() #alpha likelihood return likelihood def EStep(self): for i in range(self.Rn): rIdx = self.trainIndex[i] W=self.createWMatrix(self.wList[rIdx]) self.S[:,i]=self.calcAspectRatings(W) alphaD, converged = self.calcAlphaD(i) if converged: self.alpha[:,i]=alphaD self.logger.info("Alpha calculated") def MStep(self): likelihood=0.0 self.calcMu() self.logger.info("Mu calculated") self.calcSigma(False) self.logger.info("Sigma calculated : %s " % np.linalg.det(self.sigma)) likelihood+=self.alphaLikelihood() #alpha likelihood self.logger.info("alphaLikelihood calculated") beta,converged=self.calcBeta() if converged: self.beta=beta self.logger.info("Beta calculated") likelihood+=self.dataLikelihood() #data likelihood - will capture beta likelihood too self.logger.info("dataLikelihood calculated") self.calcDeltaSquare() self.logger.info("Deltasq calculated") likelihood+=np.log(self.delta) #delta likelihood return likelihood def EMAlgo(self, maxIter, coverge): self.logger.info("Training started") iteration = 0 old_likelihood = self.calcLikelihood() self.logger.info("initial calcLikelihood calculated, det(Sig): %s" % np.linalg.det(self.sigma)) diff = 10.0 while(iteration<min(8, maxIter) or (iteration<maxIter and diff>coverge)): self.EStep() self.logger.info("EStep completed") likelihood = self.MStep() self.logger.info("MStep completed") diff = (old_likelihood-likelihood)/old_likelihood old_likelihood=likelihood 
iteration+=1 self.logger.info("Training completed") def testing(self): for i in range(self.R-self.Rn): rIdx = self.testIndex[i] W = self.createWMatrix(self.wList[rIdx]) Sd = self.calcAspectRatings(W).reshape((self.k,1)) overallRating = self.calcOverallRating(self.mu,Sd) print("ReviewId-",self.reviewIdList[rIdx]) print("Actual OverallRating:",self.ratingsList[rIdx]["Overall"]) print("Predicted OverallRating:",overallRating) print("Actual vs Predicted Aspect Ratings:") for aspect, rating in self.ratingsList[rIdx].items(): if aspect != "Overall" and aspect.lower() in self.aspectIndexMapping.keys(): r = self.aspectIndexMapping[aspect.lower()] print("Aspect:",aspect," Rating:",rating, "Predic:", Sd[r]) if overallRating > 3.0: print("Positive Review") else: print("Negative Review") np.seterr(all='raise') lrrObj = LRR() lrrObj.EMAlgo(maxIter=10, coverge=0.0001) lrrObj.testing() ```
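Both `calcBeta` and `calcAlphaD` above follow the same pattern: call `scipy.optimize.fmin_l_bfgs_b` with an objective, a hand-written gradient, optional box bounds, and then read `warnflag` from the returned dictionary to decide whether the update converged. A tiny standalone example of that calling convention, on a toy quadratic rather than the LRR likelihood, may make it easier to follow:

```python
# Standalone illustration of the fmin_l_bfgs_b pattern used in calcBeta/calcAlphaD,
# applied to a toy quadratic objective instead of the LRR likelihood.
import numpy as np
from scipy import optimize

target = np.array([0.2, 0.8, 0.5])

def objective(x, *args):
    # f(x) = 0.5 * ||x - target||^2
    return 0.5 * np.sum((x - target) ** 2)

def gradient(x, *args):
    # Analytic gradient; must have the same shape as x.
    return x - target

x0 = np.zeros(3)
bounds = [(0, 1)] * 3            # box constraints, like the alpha update above
x_opt, f_opt, info = optimize.fmin_l_bfgs_b(
    func=objective, x0=x0, fprime=gradient, bounds=bounds, m=5, maxiter=15000)

converged = (info['warnflag'] == 0)   # same convergence check as in the class
print(x_opt, f_opt, converged)
```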
En este problema se usan las funciones definidad en la parte 1 para implementar MCMC con el algoritmo Metropolis-Hastings y visualizar problemas de convergencia --------- La gran mayoría de funciones que cree las dejé en el archivo Mis_funciones.py para hacer más fácil de leer esta hoja ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt from Misfunciones import * import pandas # Estilo de gráficos plt.style.use('dark_background') # Seed np.random.seed(123) plt.style.use('dark_background') Datos = pandas.read_csv('blanton.csv', sep=',') # Pongo los datos en dos variables Mags = Datos['M'] Lum = Datos['f'] Barra_sup = np.loadtxt('Barra_sup.txt') Barra_inf = np.loadtxt('Barra_inf.txt') ERR = [] ij = 0 while ij<len(Barra_sup): ERR.append( (1/2.) * (Barra_sup[ij] - Barra_inf[ij]) ) ij = ij + 1 # Modelo Blanton xs = np.linspace(min(Mags), max(Mags), 100) F_mod = Modelo(xs,1.46e-2, -20.83, -1.2) fig, ax = plt.subplots(1, 2, figsize = (12,6), sharex=True) ax[0].plot(xs, F_mod, color='orange', label='Ajuste Blanton') ax[0].errorbar(Mags, Lum, yerr=ERR, color='cyan', fmt='.', label='Datos') ax[1].plot(Mags, ERR, color='green') ax[0].set_xlabel('Magnitud', fontsize=20) ax[0].set_ylabel('Luminosidad', fontsize=20) ax[1].set_xlabel('Magnitud', fontsize=20) ax[1].set_ylabel('Barra de error', fontsize=20) ax[0].set_yscale('log') ax[0].legend(fontsize=15, loc=2); # Para ver los límites de los PRIORS pruebo algunos valores de los parámetros: Phi_inf = Modelo(xs, 0.46e-2, -20.83, -1.2) Phi_sup = Modelo(xs, 10.46e-2, -20.83, -1.2) Me_inf = Modelo(xs, 1.46e-2, -25.83, -1.2) Me_sup = Modelo(xs, 1.46e-2, -19.93, -1.2) alpha_inf = Modelo(xs, 1.46e-2, -20.83, -1.5) alpha_sup = Modelo(xs, 1.46e-2, -20.83, -0.9) fig, ax = plt.subplots(1, 3, figsize = (12,6), sharex=True) ax[0].scatter(Mags, Lum, color='cyan', label='Datos') ax[0].plot(xs, Phi_inf, color='orange', label='Rangos de Phi', lw=5) ax[0].plot(xs, Phi_sup, color='orange', lw=5) ax[1].scatter(Mags, Lum, color='cyan', label='Datos') ax[1].plot(xs, Me_inf, color='orange', label='Rangos de Me', lw=5) ax[1].plot(xs, Me_sup, color='orange', lw=5) ax[2].scatter(Mags, Lum, color='cyan', label='Datos') ax[2].plot(xs, alpha_inf, color='orange', label='Rangos de alpha', lw=5) ax[2].plot(xs, alpha_sup, color='orange', lw=5) ax[0].set_xlabel('Magnitud', fontsize=20) ax[1].set_xlabel('Magnitud', fontsize=20) ax[2].set_xlabel('Magnitud', fontsize=20) ax[0].set_ylabel('Luminosidad', fontsize=20) ax[0].set_yscale('log') ax[0].legend(fontsize=15) ax[1].set_yscale('log') ax[1].legend(fontsize=15) ax[2].set_yscale('log') ax[2].legend(fontsize=15); # Los límites quedaron: rPhi = [0.46e-2, 10.46e-2] rMe = [-25.83, -19.93] ralpha = [-1.5, -0.9] ``` ## Metropolis- Hastings, definiciones Mi PROPOSAL será una gaussiana centrada en cero con desviación estándar 'std'. Esta desviación me determina qué tan grande (o largo) pueden ser los pasos en las cadenas --------- Mi PRIOR será una distribución uniforme en 3d, está limitado adentro de la definición de la función CADENAS() que viene a continuación. ``` def CADENAS(Nsteps, Nburnt, Tstep, rPhi, rMe, ralpha): """ Devuelve las cadenas de Markov para los tres parámetros del problema Parameters ---------- Nsteps : int Número de pasos de las cadenas Nburnt : int Número de pasos desde que se empiezan a grabar las cadenas (quemado) Tstep : .float Una medida del tamaño de los pasos de las cadenas rPhi, rMe, ralpha : list(2x1), list(2x1), list(2x1) Rangos para el PRIOR, asociados a los parámetros. 
Ejemplo: rPhi = [0,1] Returns ------- Cadenas : list Lista con los pasos y la evolución de los parámetros (Paso, Phi_evol, Me_evol, alpha_evol) """ import numpy as np Paso = [] # Graba los pasos Phi_evol = [] # Cadenas para el parámetro "Phi" Me_evol = [] alpha_evol = [] # Busco condición inicial tal que la posterior no sea cero post_actual = 0 while post_actual < 1e-8: phi_actual = np.random.normal(loc=np.mean([rPhi[0], rPhi[1]]), scale=(rPhi[1]-rPhi[0])) Me_actual = np.random.normal(loc=np.mean([rMe[0], rMe[1]]), scale=(rMe[1]-rMe[0])) alpha_actual = np.random.normal(loc=np.mean([ralpha[0], ralpha[1]]), scale=(ralpha[1]-ralpha[0])) post_actual = POSTERIOR(Mags, Lum, ERR, Phi=phi_actual, Me=Me_actual, alpha=alpha_actual, Phimin=rPhi[0], Phimax=rPhi[1], Memin=rMe[0], Memax=rMe[1], alphamin=ralpha[0], alphamax=ralpha[1] ) par_actual = [phi_actual, Me_actual, alpha_actual] ij = 0 while ij<Nsteps: # Posterior de los parámetros actuales: post_actual = POSTERIOR(Mags, Lum, ERR, Phi=par_actual[0], Me=par_actual[1], alpha=par_actual[2], Phimin=rPhi[0], Phimax=rPhi[1], Memin=rMe[0], Memax=rMe[1], alphamin=ralpha[0], alphamax=ralpha[1] ) # El nuevo lugar será el anterior más un desplazamiento en todas las direciones # que obedece a unos sorteos gaussianos, para cada variable tengo una longitud # de paso distinta Saltos = np.random.normal(loc=0, scale=Tstep, size=3) pc0 = par_actual[0] + 0.001*Saltos[0] pc1 = par_actual[1] + 0.01*Saltos[1] pc2 = par_actual[2] + 0.005*Saltos[2] par_candid = np.array( [pc0, pc1, pc2] ) # Veo la nueva posterior post_candid = POSTERIOR(Mags, Lum, ERR, Phi=par_candid[0], Me=par_candid[1], alpha=par_candid[2], Phimin=rPhi[0], Phimax=rPhi[1], Memin=rMe[0], Memax=rMe[1], alphamin=ralpha[0], alphamax=ralpha[1] ) # Probabilidad de aceptación: p_accept = min(1., post_candid / post_actual) # Condición de aceptación: accept = np.random.rand() < p_accept if accept==True: par_actual = par_candid else: par_actual = par_actual # Solo guardo los pasos que hallan superado al quemado: if ij>Nburnt: Paso.append( ij ) Phi_evol.append( par_actual[0] ) Me_evol.append( par_actual[1] ) alpha_evol.append( par_actual[2] ) # Imprime progreso: from IPython.display import clear_output clear_output(wait=True) print('%', round(ij*100/Nsteps)) ij = ij + 1 return Paso, Phi_evol, Me_evol, alpha_evol ``` ### Acá pondré unos bloques para crear las cadenas, y luego unos para guardar datos y otro para cargarlos Uno puedo correr las cadenas y luego ir directamente a la parte de ploteos o, sino, puede ignorar las cadenas e ir directamente a la parte de importación de los archivos que las contienen (recomendado) ``` # Datos para hacer las cadenas: Nburnt = 0 Nsteps = 50000 # Haré cadenas cambiando sólo la longitud de los pasos "Tstep" # C = CADENAS(Nsteps=Nsteps, Nburnt=Nburnt, Tstep=1, rPhi=rPhi, rMe=rMe, ralpha=ralpha) # C2 = CADENAS(Nsteps=Nsteps, Nburnt=Nburnt, Tstep=1, rPhi=rPhi, rMe=rMe, ralpha=ralpha) # C3 = CADENAS(Nsteps=Nsteps, Nburnt=Nburnt, Tstep=1, rPhi=rPhi, rMe=rMe, ralpha=ralpha) # C4 = CADENAS(Nsteps=Nsteps, Nburnt=Nburnt, Tstep=0.1, rPhi=rPhi, rMe=rMe, ralpha=ralpha) # C5 = CADENAS(Nsteps=Nsteps, Nburnt=Nburnt, Tstep=10, rPhi=rPhi, rMe=rMe, ralpha=ralpha) ``` ### Guardado de datos (manual): ``` # Save_chain(Steps=C5[0], Phi=C5[1], Me=C5[2], alpha=C5[3], name='Cadena5.txt') ``` ### Importación de datos: ``` D = np.loadtxt('Cadena.txt') C = [D[:,0], D[:,1], D[:,2], D[:,3] ] D2 = np.loadtxt('Cadena2.txt') C2 = [D2[:,0], D2[:,1], D2[:,2], D2[:,3] ] D3 = np.loadtxt('Cadena3.txt') C3 = [D3[:,0], 
D3[:,1], D3[:,2], D3[:,3] ] D4 = np.loadtxt('Cadena4.txt') C4 = [D4[:,0], D4[:,1], D4[:,2], D4[:,3] ] D5 = np.loadtxt('Cadena5.txt') C5 = [D5[:,0], D5[:,1], D5[:,2], D5[:,3] ] ``` Para hacer los ploteos cree la función $\color{orange}{\text{Ploteo()}}$ Convenientemente, hice que las primeras tres cadenas tengan un buen mezclado (mucha prueba y error) y las otras dos tienen pasos muy chicos o grandes. Las grafico por separado para que se vean mejor: (La línea celeste por detrás es el valor de Blanton) ``` """ BUEN MEZCLADO (SIN QUEMADO) """ fig, ax = plt.subplots(3, 1, figsize = (14,8), sharex=True) Ploteo(C, color='orange', label='1', fig=fig, ax=ax) Ploteo(C2, color='yellow', label='2', fig=fig, ax=ax) Ploteo(C3, color='violet', label='3', fig=fig, ax=ax) # Bordes de los priors # ax[0].fill_between([Nburnt, Nsteps], y1=rPhi[0], y2=rPhi[1], facecolor='green', alpha=0.3) # ax[1].fill_between([Nburnt, Nsteps], y1=rMe[0], y2=rMe[1], facecolor='green', alpha=0.3) # ax[2].fill_between([Nburnt, Nsteps], y1=ralpha[0], y2=ralpha[1], facecolor='green', alpha=0.3) """ MAL MEZCLADO """ fig, ax = plt.subplots(3, 1, figsize = (14,8), sharex=True) Ploteo(C4, color='yellow', label='1', fig=fig, ax=ax) Ploteo(C5, color='red', label='2', fig=fig, ax=ax) # Bordes de los priors # ax[0].fill_between([Nburnt, Nsteps], y1=rPhi[0], y2=rPhi[1], facecolor='green', alpha=0.3) # ax[1].fill_between([Nburnt, Nsteps], y1=rMe[0], y2=rMe[1], facecolor='green', alpha=0.3) # ax[2].fill_between([Nburnt, Nsteps], y1=ralpha[0], y2=ralpha[1], facecolor='green', alpha=0.3) ``` La cadena amarilla tiene un paso muy chico y nunca llega al máximo del likelihood, la roja tiene un paso muy grande (comparada con las cadenas anteriores, pero no es tan malo) ``` """ CAMINOS """ fig, ax = plt.subplots(1, 2, figsize = (14,8), sharex=True) ax[0].plot(C[1], C[2], color='orange', label='1, buen mezclado') ax[0].plot(C2[1], C2[2], color='yellow', label='2, buen mezclado') ax[0].plot(C3[1], C3[2], color='violet', label='3, buen mezclado') ax[1].plot(C4[1], C4[2], color='purple', label='4, mal mezclado') ax[1].plot(C5[1], C5[2], color='red', label='5, mal mezclado'); ax[0].set_ylabel('Me', fontsize=20) ax[0].set_xlabel('Phi', fontsize=20) ax[1].set_ylabel('Me', fontsize=20) ax[1].set_xlabel('Phi', fontsize=20) ax[0].set_title('Cadenas buenas', fontsize=20) ax[1].set_title('Cadenas malas', fontsize=20); ``` En la imagen anterior no hay mucha diferencia entre las cadenas "buenas" y "malas" (porque todo se superpone) Notar que las cadenas "malas" empiezan en el mismo lugar que algunas de las "buenas". 
Eso fue intencional como para compararlas mejor entre si Ahora hago un corner plot con el paquete de una de las cadenas buenas, para eso hago un quemado manual: ``` # Quemado manual: B = [C[1][10000:], C[2][10000:], C[3][10000:]] np.shape(B) plt.style.use('classic') ndim = 3 # Corner plot de una cadena: import corner aa = np.transpose(B) plt.style.use('classic') fig, ax = plt.subplots(3, 3, figsize = (8,6)) labels = ['Phi', 'Me', 'alpha'] fig = corner.corner(aa, labels = labels, fig = fig, show_titles = True) # Blanton value1 = 0.0146 value2 = -20.83 value3 = -1.20 axes = np.array(fig.axes).reshape((ndim, ndim)) axes[1,0].scatter(value1, value2, zorder=5, color='red', s=40) axes[2,0].scatter(value1, value3, zorder=5, color='red', s=40) axes[2,1].scatter(value2, value3, zorder=5, color='red', s=40) ``` Los puntos rojos son los valores de Blanton Se me complicó poner las regiones de confianza graficadas en las posteriors (líneas verticales), será para otra ocasión Veo la obtención de incertezas: ``` # Obtención de errores, lo hago tal que el area en las colas sea del 10% (arbitrario) # Eso implica que los cuantiles que buscaré son: el 5 y el 95 Param = np.empty(3) ERR_DOWN = np.empty(3) # Arreglos para meter los valores ERR_UP = np.empty(3) ij=0 while ij<3: # Rcordar que C3[0] : pasos (no es un parámetro) q_05, q_50, q_95 = corner.quantile(C[ij+1], [0.05, 0.5, 0.95]) x = q_50 # Parametro ajustado dx_down, dx_up = q_50-q_05, q_95-q_50 # Errores Param[ij] = q_50 ERR_DOWN[ij] = dx_down ERR_UP[ij] = dx_up ij = ij+1 Param, ERR_DOWN, ERR_UP Blanton = [0.0146, -20.83, -1.2] if ERR_DOWN[0] < Blanton[0] and ERR_UP[0] < Blanton[0]: print('Phi es compatible con Blanton') else: print('Phi NO es compatible con Blanton') if ERR_DOWN[1] < Blanton[1] and ERR_UP[1] < Blanton[1]: print('Me es compatible con Blanton') else: print('Me NO es compatible con Blanton') if ERR_DOWN[2] < Blanton[2] and ERR_UP[2] < Blanton[2]: print('alpha es compatible con Blanton') else: print('alpha NO es compatible con Blanton') ``` Hice este análisis muy simplificado de intervalos de confianza por el tiempo. Cualitativamente parecería como que el máximo no está justo en los valores que obtuvo Blanton Grafico mi modelo contra el de Blanton ``` # Modelo Markov xs = np.linspace(min(Mags), max(Mags), 100) F_mod2 = Modelo(xs,C[1][-1], C[2][-1], C[3][-1]) plt.style.use('dark_background') fig, ax = plt.subplots(1, 1, figsize = (12,6), sharex=True) plt.style.use('dark_background') ax.plot(xs, F_mod, color='orange', label='Ajuste Blanton') ax.plot(xs, F_mod2, color='yellow', label='Ajuste Markov') ax.errorbar(Mags, Lum, yerr=ERR, color='cyan', fmt='.') ax.set_xlabel('Magnitud', fontsize=20) ax.set_ylabel('Luminosidad', fontsize=20) ax.set_yscale('log') ax.legend(fontsize=15, loc=2); ``` Prácticamente indistinguibles considerando las barras de error en esta figura
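The accept/reject rule inside `CADENAS` is the standard random-walk Metropolis step. For reference, here is a minimal generic version of that same rule for a one-dimensional target (a standard normal), written in log space and independent of the luminosity-function posterior used above; the names and settings here are illustrative only.

```python
# Generic 1-D random-walk Metropolis sketch: the same accept/reject rule as in
# CADENAS, applied to a standard normal target instead of the posterior above.
import numpy as np

def metropolis(log_target, x0, n_steps, step_size, rng):
    """Random-walk Metropolis with a Gaussian proposal of width step_size."""
    chain = np.empty(n_steps)
    x, log_p = x0, log_target(x0)
    for i in range(n_steps):
        x_new = x + rng.normal(scale=step_size)      # symmetric proposal
        log_p_new = log_target(x_new)
        # Accept with probability min(1, p_new / p_old), evaluated in log space.
        if np.log(rng.random()) < log_p_new - log_p:
            x, log_p = x_new, log_p_new
        chain[i] = x
    return chain

rng = np.random.default_rng(123)
chain = metropolis(lambda x: -0.5 * x**2, x0=5.0, n_steps=20000,
                   step_size=1.0, rng=rng)
burned = chain[2000:]                                # manual burn-in, as above
print(burned.mean(), burned.std())                   # roughly 0 and 1 for N(0, 1)
```

Working with the log-posterior, as here, also sidesteps the numerical-underflow issue that `CADENAS` works around by rejecting starting points with `post_actual < 1e-8`.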
# JSON examples and exercise **** + get familiar with packages for dealing with JSON + study examples with JSON strings and files + work on exercise to be completed and submitted **** + reference: http://pandas.pydata.org/pandas-docs/stable/io.html#io-json-reader + data source: http://jsonstudio.com/resources/ **** ``` import pandas as pd ``` ## imports for Python, Pandas ``` import json from pandas.io.json import json_normalize ``` ## JSON example, with string + demonstrates creation of normalized dataframes (tables) from nested json string + source: http://pandas.pydata.org/pandas-docs/stable/io.html#normalization ``` # define json string data = [{'state': 'Florida', 'shortname': 'FL', 'info': {'governor': 'Rick Scott'}, 'counties': [{'name': 'Dade', 'population': 12345}, {'name': 'Broward', 'population': 40000}, {'name': 'Palm Beach', 'population': 60000}]}, {'state': 'Ohio', 'shortname': 'OH', 'info': {'governor': 'John Kasich'}, 'counties': [{'name': 'Summit', 'population': 1234}, {'name': 'Cuyahoga', 'population': 1337}]}] # use normalization to create tables from nested element json_normalize(data, 'counties') # further populate tables created from nested element json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']]) ``` **** ## JSON example, with file + demonstrates reading in a json file as a string and as a table + uses small sample file containing data about projects funded by the World Bank + data source: http://jsonstudio.com/resources/ ``` # load json as string json.load((open('data/world_bank_projects_less.json'))) # load as Pandas dataframe sample_json_df = pd.read_json('data/world_bank_projects_less.json') sample_json_df ``` **** ## JSON exercise Using data in file 'data/world_bank_projects.json' and the techniques demonstrated above, 1. Find the 10 countries with most projects 2. Find the top 10 major project themes (using column 'mjtheme_namecode') 3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in. ``` import pandas as pd # Load the data into a Pandas DataFrame df = pd.read_json('data/world_bank_projects.json') print(df['mjtheme_namecode'][0]) # Task 1: Determine the Top 10 Borrowers topborrowers = df['countryname'].value_counts().nlargest(10) print(topborrowers) # Task 2: Determine the Top 10 Borrowing Themes # toptendict will count each code as we iterate over mjtheme_namecode toptendict = {} # namebycode will grab each code's corresponding name as we iterate over mjtheme_namecode namebycode = {} for list in df['mjtheme_namecode']: for dict in list: # For Task 2: Initialize key if needed, then icrease the count if dict['code'] not in toptendict.keys(): toptendict[dict['code']] = 0 toptendict[dict['code']] += 1 # If the first instance of the code happened to be blank, we want to override it, hence the 'or' if dict['code'] not in namebycode.keys() or namebycode[dict['code']] == '': namebycode[dict['code']] = dict['name'] topthemes = pd.DataFrame({'Major Theme':pd.Series(namebycode), 'Frequency':pd.Series(toptendict)}) topthemes = topthemes.reindex(columns=['Major Theme', 'Frequency']).sort_values('Frequency', ascending=False) print(topthemes.head(10)) # Task 3: Replace any missing names in mjtheme_namecode dictionaries for list in df['mjtheme_namecode']: for dict in list: dict['name'] = namebycode[dict['code']] print(df.head()) ```
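As an alternative to the explicit dictionary loops above, parts 2 and 3 can also be done with `json_normalize` and a code-to-name lookup built in pandas; the following is only a sketch of that variant, not the required solution.

```python
# Alternative sketch for tasks 2 and 3: flatten mjtheme_namecode with json_normalize
# and fill the blank theme names from a code -> name lookup.
import json
import pandas as pd
from pandas.io.json import json_normalize

with open('data/world_bank_projects.json') as f:
    data = json.load(f)

# One row per (project, theme) pair, with columns 'code' and 'name'.
themes = json_normalize(data, 'mjtheme_namecode')

# Build the lookup from rows where the name is present...
name_map = (themes[themes['name'] != '']
            .drop_duplicates('code')
            .set_index('code')['name'])

# ...then fill in the blanks and count the top 10 major themes.
themes['name'] = themes['code'].map(name_map)
print(themes['name'].value_counts().head(10))
```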
``` import json import tensorflow as tf import csv import random import numpy as np from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.utils import to_categorical from tensorflow.keras import regularizers embedding_dim = 100 max_length = 16 trunc_type='post' padding_type='post' oov_tok = "<OOV>" training_size=160000 test_portion=.1 corpus = [] # Note that I cleaned the Stanford dataset to remove LATIN1 encoding to make it easier for Python CSV reader # You can do that yourself with: # iconv -f LATIN1 -t UTF8 training.1600000.processed.noemoticon.csv -o training_cleaned.csv # I then hosted it on my site to make it easier to use in this notebook !wget --no-check-certificate \ https://storage.googleapis.com/laurencemoroney-blog.appspot.com/training_cleaned.csv \ -O /tmp/training_cleaned.csv num_sentences = 0 with open("/tmp/training_cleaned.csv") as csvfile: reader = csv.reader(csvfile, delimiter=',') for row in reader: list_item=[] list_item.append(row[5]) this_label=row[0] if this_label=='0': list_item.append(0) else: list_item.append(1) num_sentences = num_sentences + 1 corpus.append(list_item) print(num_sentences) print(len(corpus)) print(corpus[1]) # Expected Output: # 1600000 # 1600000 # ["is upset that he can't update his Facebook by texting it... and might cry as a result School today also. Blah!", 0] sentences=[] labels=[] random.shuffle(corpus) for x in range(training_size): sentences.append(corpus[x][0]) labels.append(corpus[x][1]) tokenizer = Tokenizer() tokenizer.fit_on_texts(sentences) word_index = tokenizer.word_index vocab_size=len(word_index) sequences = tokenizer.texts_to_sequences(sentences) padded = pad_sequences(sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type) split = int(test_portion * training_size) test_sequences = padded[0:split] training_sequences = padded[split:training_size] test_labels = labels[0:split] training_labels = labels[split:training_size] print(vocab_size) print(word_index['i']) # Expected Output # 138858 # 1 # Note this is the 100 dimension version of GloVe from Stanford # I unzipped and hosted it on my site to make this notebook easier !wget --no-check-certificate \ https://storage.googleapis.com/laurencemoroney-blog.appspot.com/glove.6B.100d.txt \ -O /tmp/glove.6B.100d.txt embeddings_index = {}; with open('/tmp/glove.6B.100d.txt') as f: for line in f: values = line.split(); word = values[0]; coefs = np.asarray(values[1:], dtype='float32'); embeddings_index[word] = coefs; embeddings_matrix = np.zeros((vocab_size+1, embedding_dim)); for word, i in word_index.items(): embedding_vector = embeddings_index.get(word); if embedding_vector is not None: embeddings_matrix[i] = embedding_vector; print(len(embeddings_matrix)) # Expected Output # 138859 model = tf.keras.Sequential([ tf.keras.layers.Embedding(vocab_size+1, embedding_dim, input_length=max_length, weights=[embeddings_matrix], trainable=False), tf.keras.layers.Dropout(0.2), tf.keras.layers.Conv1D(64, 5, activation='relu'), tf.keras.layers.MaxPooling1D(pool_size=4), tf.keras.layers.LSTM(64), tf.keras.layers.Dense(1, activation='sigmoid') ]) model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy']) model.summary() num_epochs = 50 history = model.fit(training_sequences, training_labels, epochs=num_epochs, validation_data=(test_sequences, test_labels), verbose=2) print("Training Complete") import matplotlib.image as mpimg import matplotlib.pyplot as plt 
#----------------------------------------------------------- # Retrieve a list of list results on training and test data # sets for each training epoch #----------------------------------------------------------- acc=history.history['acc'] val_acc=history.history['val_acc'] loss=history.history['loss'] val_loss=history.history['val_loss'] epochs=range(len(acc)) # Get number of epochs #------------------------------------------------ # Plot training and validation accuracy per epoch #------------------------------------------------ plt.plot(epochs, acc, 'r') plt.plot(epochs, val_acc, 'b') plt.title('Training and validation accuracy') plt.xlabel("Epochs") plt.ylabel("Accuracy") plt.legend(["Accuracy", "Validation Accuracy"]) plt.figure() #------------------------------------------------ # Plot training and validation loss per epoch #------------------------------------------------ plt.plot(epochs, loss, 'r') plt.plot(epochs, val_loss, 'b') plt.title('Training and validation loss') plt.xlabel("Epochs") plt.ylabel("Loss") plt.legend(["Loss", "Validation Loss"]) plt.figure() # Expected Output # A chart where the validation loss does not increase sharply! ```
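To make the preprocessing above more concrete, here is a small stand-alone illustration of what `pad_sequences` produces with the `'post'` padding and truncation settings; the toy sentences and `maxlen=8` are made up for demonstration and are not part of the original exercise.

```
# A self-contained toy example (not from the original notebook) showing how
# pad_sequences enforces a fixed input length before the Conv1D/LSTM stack.
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

toy_sentences = ["i love this",
                 "this is the worst movie i have ever seen in my whole life"]
tok = Tokenizer(oov_token="<OOV>")
tok.fit_on_texts(toy_sentences)
seqs = tok.texts_to_sequences(toy_sentences)

# Short sequences are padded with zeros at the end ('post'); long ones are
# cut from the end ('post') so every row has exactly maxlen entries.
padded = pad_sequences(seqs, maxlen=8, padding='post', truncating='post')
print(padded.shape)   # (2, 8)
print(padded)
```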
# Extracting training data from the ODC <img align="right" src="../../Supplementary_data/dea_logo.jpg"> * [**Sign up to the DEA Sandbox**](https://docs.dea.ga.gov.au/setup/sandbox.html) to run this notebook interactively from a browser * **Compatibility:** Notebook currently compatible with the `DEA Sandbox` environment * **Products used:** [ls8_nbart_geomedian_annual](https://explorer.sandbox.dea.ga.gov.au/products/ls8_nbart_geomedian_annual/extents), [ls8_nbart_tmad_annual](https://explorer.sandbox.dea.ga.gov.au/products/ls8_nbart_tmad_annual/extents), [fc_percentile_albers_annual](https://explorer.sandbox.dea.ga.gov.au/products/fc_percentile_albers_annual/extents) ## Background **Training data** is the most important part of any supervised machine learning workflow. The quality of the training data has a greater impact on the classification than the algorithm used. Large and accurate training data sets are preferable: increasing the training sample size results in increased classification accuracy ([Maxell et al 2018](https://www.tandfonline.com/doi/full/10.1080/01431161.2018.1433343)). A review of training data methods in the context of Earth Observation is available [here](https://www.mdpi.com/2072-4292/12/6/1034) When creating training labels, be sure to capture the **spectral variability** of the class, and to use imagery from the time period you want to classify (rather than relying on basemap composites). Another common problem with training data is **class imbalance**. This can occur when one of your classes is relatively rare and therefore the rare class will comprise a smaller proportion of the training set. When imbalanced data is used, it is common that the final classification will under-predict less abundant classes relative to their true proportion. There are many platforms to use for gathering training labels, the best one to use depends on your application. GIS platforms are great for collection training data as they are highly flexible and mature platforms; [Geo-Wiki](https://www.geo-wiki.org/) and [Collect Earth Online](https://collect.earth/home) are two open-source websites that may also be useful depending on the reference data strategy employed. Alternatively, there are many pre-existing training datasets on the web that may be useful, e.g. [Radiant Earth](https://www.radiant.earth/) manages a growing number of reference datasets for use by anyone. ## Description This notebook will extract training data (feature layers, in machine learning parlance) from the `open-data-cube` using labelled geometries within a geojson. The default example will use the crop/non-crop labels within the `'data/crop_training_WA.geojson'` file. This reference data was acquired and pre-processed from the USGS's Global Food Security Analysis Data portal [here](https://croplands.org/app/data/search?page=1&page_size=200) and [here](https://e4ftl01.cr.usgs.gov/MEASURES/GFSAD30VAL.001/2008.01.01/). To do this, we rely on a custom `dea-notebooks` function called `collect_training_data`, contained within the [dea_tools.classification](../../Tools/dea_tools/classification.py) script. The principal goal of this notebook is to familarise users with this function so they can extract the appropriate data for their use-case. The default example also highlights extracting a set of useful feature layers for generating a cropland mask forWA. 1. Preview the polygons in our training data by plotting them on a basemap 2. Define a feature layer function to pass to `collect_training_data` 3. 
Extract training data from the datacube using `collect_training_data` 4. Export the training data to disk for use in subsequent scripts *** ## Getting started To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell. ### Load packages ``` %matplotlib inline import os import datacube import numpy as np import xarray as xr import subprocess as sp import geopandas as gpd from odc.io.cgroups import get_cpu_quota from datacube.utils.geometry import assign_crs import sys sys.path.insert(1, '../../Tools/') from dea_tools.bandindices import calculate_indices from dea_tools.classification import collect_training_data import warnings warnings.filterwarnings("ignore") ``` ## Analysis parameters * `path`: The path to the input vector file from which we will extract training data. A default geojson is provided. * `field`: This is the name of column in your shapefile attribute table that contains the class labels. **The class labels must be integers** ``` path = 'data/crop_training_WA.geojson' field = 'class' ``` ### Find the number of CPUs ``` ncpus = round(get_cpu_quota()) print('ncpus = ' + str(ncpus)) ``` ## Preview input data We can load and preview our input data shapefile using `geopandas`. The shapefile should contain a column with class labels (e.g. 'class'). These labels will be used to train our model. > Remember, the class labels **must** be represented by `integers`. ``` # Load input data shapefile input_data = gpd.read_file(path) # Plot first five rows input_data.head() # Plot training data in an interactive map input_data.explore(column=field) ``` ## Extracting training data The function `collect_training_data` takes our geojson containing class labels and extracts training data (features) from the datacube over the locations specified by the input geometries. The function will also pre-process our training data by stacking the arrays into a useful format and removing any `NaN` or `inf` values. The below variables can be set within the `collect_training_data` function: * `zonal_stats` : An optional string giving the names of zonal statistics to calculate across each polygon (if the geometries in the vector file are polygons and not points). Default is None (all pixel values are returned). Supported values are 'mean', 'median', 'max', and 'min'. In addition to the `zonal_stats` parameter, we also need to set up a datacube query dictionary for the Open Data Cube query such as `measurements` (the bands to load from the satellite), the `resolution` (the cell size), and the `output_crs` (the output projection). These options will be added to a query dictionary that will be passed into `collect_training_data` using the parameter `collect_training_data(dc_query=query, ...)`. The query dictionary will be the only argument in the **feature layer function** which we will define and describe in a moment. > Note: `collect_training_data` also has a number of additional parameters for handling ODC I/O read failures, where polygons that return an excessive number of null values can be resubmitted to the multiprocessing queue. Check out the [docs](https://github.com/GeoscienceAustralia/dea-notebooks/blob/2bbefd45ca1baaa74977a1dc3075d979f3e87168/Tools/dea_tools/classification.py#L580) to learn more. 
``` # Set up our inputs to collect_training_data zonal_stats = None # Set up the inputs for the ODC query time = ('2014') measurements = ['blue', 'green', 'red', 'nir', 'swir1', 'swir2'] resolution = (-30, 30) output_crs = 'epsg:3577' # Generate a new datacube query object query = { 'time': time, 'measurements': measurements, 'resolution': resolution, 'output_crs': output_crs, } ``` ## Defining feature layers To create the desired feature layers, we pass instructions to `collect training data` through the `feature_func` parameter. * `feature_func`: A function for generating feature layers that is applied to the data within the bounds of the input geometry. The 'feature_func' must accept a 'dc_query' object, and return a single xarray.Dataset or xarray.DataArray containing 2D coordinates (i.e x, y - no time dimension). e.g. def feature_function(query): dc = datacube.Datacube(app='feature_layers') ds = dc.load(**query) ds = ds.mean('time') return ds Below, we will define a more complicated feature layer function than the brief example shown above. We will calculate some band indices on the Landsat 8 geomedian, append the ternary median aboslute deviation dataset from the same year: [ls8_nbart_tmad_annual](https://explorer.sandbox.dea.ga.gov.au/products/ls8_nbart_tmad_annual/extents), and append fractional cover percentiles for the photosynthetic vegetation band, also from the same year: [fc_percentile_albers_annual](https://explorer.sandbox.dea.ga.gov.au/products/fc_percentile_albers_annual/extents). ``` def feature_layers(query): #connect to the datacube dc = datacube.Datacube(app='custom_feature_layers') #load ls8 geomedian ds = dc.load(product='ls8_nbart_geomedian_annual', **query) # Calculate some band indices da = calculate_indices(ds, index=['NDVI', 'LAI', 'MNDWI'], drop=False, collection='ga_ls_2') # Add TMADs dataset tmad = dc.load(product='ls8_nbart_tmad_annual', measurements=['sdev','edev','bcdev'], like=ds.geobox, #will match geomedian extent time='2014' #same as geomedian ) # Add Fractional cover percentiles fc = dc.load(product='fc_percentile_albers_annual', measurements=['PV_PC_10','PV_PC_50','PV_PC_90'], #only the PV band like=ds.geobox, #will match geomedian extent time='2014' #same as geomedian ) # Merge results into single dataset result = xr.merge([da, tmad, fc],compat='override') return result ``` Now, we can pass this function to `collect_training_data`. This will take a few minutes to run all 430 samples on the default sandbox as it only has two cpus. ``` %%time column_names, model_input = collect_training_data( gdf=input_data, dc_query=query, ncpus=ncpus, return_coords=False, field=field, zonal_stats=zonal_stats, feature_func=feature_layers) print(column_names) print('') print(np.array_str(model_input, precision=2, suppress_small=True)) ``` ## Export training data Once we've collected all the training data we require, we can write the data to disk. This will allow us to import the data in the next step(s) of the workflow. ``` # Set the name and location of the output file output_file = "results/test_training_data.txt" # Grab all columns except the x-y coords model_col_indices = [column_names.index(var_name) for var_name in column_names[0:-2]] # Export files to disk np.savetxt(output_file, model_input[:, model_col_indices], header=" ".join(column_names[0:-2]), fmt="%4f") ``` ## Recommended next steps To continue working through the notebooks in this `Scalable Machine Learning on the ODC` workflow, go to the next notebook `2_Inspect_training_data.ipynb`. 1. 
**Extracting training data from the ODC (this notebook)** 2. [Inspecting training data](2_Inspect_training_data.ipynb) 3. [Evaluate, optimize, and fit a classifier](3_Evaluate_optimize_fit_classifier.ipynb) 4. [Classifying satellite data](4_Classify_satellite_data.ipynb) 5. [Object-based filtering of pixel classifications](5_Object-based_filtering.ipynb) *** ## Additional information **License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license. **Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)). If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/GeoscienceAustralia/dea-notebooks). **Last modified:** March 2022 **Compatible datacube version:** ``` print(datacube.__version__) ``` ## Tags Browse all available tags on the DEA User Guide's [Tags Index](https://docs.dea.ga.gov.au/genindex.html)
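As a hedged sketch (not part of the original notebook), the exported text file can be read back in a later step of the workflow like this; the path matches the `np.savetxt` call above, and the column order is assumed to follow the saved header.

```
# A minimal sketch of reloading the exported training data. The header line
# written by np.savetxt starts with '#', so we strip that marker to recover
# the column names; the class-label-first ordering is an assumption based on
# the header written above.
import numpy as np

output_file = "results/test_training_data.txt"

with open(output_file) as f:
    header = f.readline().lstrip('#').split()

data = np.loadtxt(output_file, skiprows=1)
print(header)
print(data.shape)  # (n_samples, n_columns)
```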
``` import pandas as pd # read parcels_geography input p_g = pd.read_csv(r'C:\Users\ywang\Box\Modeling and Surveys\Urban Modeling\Bay Area UrbanSim\PBA50\PBA50 Final Blueprint Large General Input Data\2020_11_10_parcels_geography.csv') # read parcels what fall into California Conservation Easement: # https://arcgis.ad.mtc.ca.gov/portal/home/webmap/viewer.html?layers=60d2f1935c3a466ea7113de2a3295292 # this data is created in ArcGIS by "select by location - p10 parcels whose centroid is within conservation easement" cons_easement = pd.read_csv(r'M:\Data\Urban\BAUS\PBA50\Final_Blueprint\Zoning Modifications\p10_within_conservation_easement.csv') # only keep 'PARCEL_ID' and make sure it's integer cons_easement = cons_easement[['PARCEL_ID']] cons_easement.PARCEL_ID = cons_easement.PARCEL_ID.apply(lambda x: int(round(x))) cons_easement.columns = ['PARCEL_ID_cons_easement'] print('{} parcels are within conservation easement'.format(len(cons_easement.PARCEL_ID_cons_easement.unique()))) # merge nodev_comp = p_g.merge(cons_easement, left_on='PARCEL_ID', right_on='PARCEL_ID_cons_easement', how='left') # create a field 'cons_easement' to label parcels within conservation easement nodev_comp['cons_easement'] = 'not cons_easement' nodev_comp.loc[nodev_comp.PARCEL_ID_cons_easement.notnull(), 'cons_easement'] = 'cons_easement' # create a field 'compare' to categorize parcels into the following groups: # - 'cons_easement but developable': parcels within conservation easement but still 'developable' in urbansim # - 'not cons_easement but nodev': parcels outside of conservation easement but still not developable in urbansim # - 'other': other parcels nodev_comp = nodev_comp[['PARCEL_ID', 'nodev', 'cons_easement']] nodev_comp['compare'] = 'other' nodev_comp.loc[(nodev_comp.nodev == 0) & (nodev_comp.cons_easement == 'cons_easement'), 'compare'] = 'cons_easement but developable' nodev_comp.loc[(nodev_comp.nodev == 1) & (nodev_comp.cons_easement == 'not cons_easement'), 'compare'] = 'not cons_easement but nodev' display(nodev_comp.head()) # statistics of 'compare' nodev_comp['compare'].value_counts() # read Urbansim no-project run parcel-level output p50_np = pd.read_csv(r'C:\Users\ywang\Box\Modeling and Surveys\Urban Modeling\Bay Area UrbanSim\PBA50\EIR runs\Baseline Large (s25) runs\NP_v5\run188_parcel_data_2050.csv', usecols = ['parcel_id', 'residential_units', 'totemp']) p50_np.columns = [x+'_50' for x in list(p50_np)] p15_np = pd.read_csv(r'C:\Users\ywang\Box\Modeling and Surveys\Urban Modeling\Bay Area UrbanSim\PBA50\EIR runs\Baseline Large (s25) runs\NP_v5\run188_parcel_data_2015.csv', usecols = ['parcel_id', 'residential_units', 'totemp']) p15_np.columns = [x+'_15' for x in list(p15_np)] # join to parcels within conversation easement but still developable cons_dev_sub = nodev_comp.loc[nodev_comp.compare == 'cons_easement but developable'] cons_dev = cons_dev_sub.merge(p50_np, left_on='PARCEL_ID', right_on='parcel_id_50', how='left').merge(p15_np, left_on='PARCEL_ID', right_on='parcel_id_15', how='left') # fill na and calculate 2015-2050 growth cons_dev.fillna({'residential_units_50':0, 'residential_units_15':0, 'totemp_50':0, 'totemp_15':0}, inplace=True) cons_dev['residential_units_add'] = cons_dev['residential_units_50'] - cons_dev['residential_units_15'] cons_dev['totemp_add'] = cons_dev['totemp_50'] - cons_dev['totemp_15'] # check these parcels that had residential growth print(cons_dev[['residential_units_50','residential_units_15']].sum()) 
cons_dev.loc[cons_dev.residential_units_add>0][['PARCEL_ID','residential_units_50','residential_units_15','residential_units_add']] print(cons_dev[['totemp_50','totemp_15']].sum()) cons_dev.loc[cons_dev.totemp_add>0][['PARCEL_ID','totemp_50','totemp_15','totemp_add']] # export the 'cons_easement but developable' parcels in order to update the 'parcels_geography' table cons_dev_export = cons_dev[['PARCEL_ID']] cons_dev_export['nodev'] = 1 print('export {} records for parcels within cons_easement but were labeled as developable'.format(cons_dev_export.shape[0])) display(cons_dev_export.head()) cons_dev_export.to_csv(r'M:\Data\Urban\BAUS\PBA50\Final_Blueprint\Zoning Modifications\noDev_parcels_conservation_easement.csv', index=False) ```
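For completeness, here is a hypothetical sketch of how the exported CSV could be used to flip the `nodev` flag in the `parcels_geography` table; the paths reuse the inputs above, and the update logic is an assumption about the downstream step rather than the project's actual procedure.

```
# Hypothetical downstream update of parcels_geography using the exported
# no-development list (assumed workflow, not from the original notebook).
import pandas as pd

p_g = pd.read_csv(r'C:\Users\ywang\Box\Modeling and Surveys\Urban Modeling\Bay Area UrbanSim\PBA50\PBA50 Final Blueprint Large General Input Data\2020_11_10_parcels_geography.csv')
nodev_update = pd.read_csv(r'M:\Data\Urban\BAUS\PBA50\Final_Blueprint\Zoning Modifications\noDev_parcels_conservation_easement.csv')

# Flip nodev to 1 for parcels flagged as within a conservation easement
p_g.loc[p_g.PARCEL_ID.isin(nodev_update.PARCEL_ID), 'nodev'] = 1
print('parcels now marked nodev:', (p_g.nodev == 1).sum())
```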
## Importing required modules ``` import pandas as pd import geemap import ee import seaborn as sns import matplotlib.pyplot as plt import numpy as np ``` ## Initialize the Google Earth Engine ## Loading the coefficients for DMSP-OLS radars (We probably do not need this part) ``` coef = pd.read_csv('assets/dmsp_coeffs.csv') def get_coefs(img, coefdata=coef): imgID = img.id().getInfo() idx = coefdata['satellite']+coefdata['year'].astype(str)==imgID return coefdata.loc[idx, ['c0','c1','c2']].values[0] def calibrate_img(img): c0, c1, c2 = get_coefs(img) return img.expression("c0 + (c1 * X) + (c2 * X**2)", {'X':img, 'c0':c0, 'c1':c1, 'c2':c2}) def clip_img(img, upper_thresh=63, upper_set=63, lower_thresh=6, lower_set=0): return img.where(img.gt(upper_thresh),upper_set).where(img.lte(lower_thresh),lower_set) def calibrate_and_clip(img): return clip_img(calibrate_img(img)) ``` ## Loading the images from DMSP-OLS (WE PROBABELY DO NOT NEED THIS PART) ``` dmsp1999F12 = ee.Image("NOAA/DMSP-OLS/NIGHTTIME_LIGHTS/F152007").select("stable_lights") dmsp1999F14 = ee.Image("NOAA/DMSP-OLS/NIGHTTIME_LIGHTS/F162007").select("stable_lights") dmsp1999F12_calbr = dmsp1999F12.where(dmsp1999F12.gt(63),63).where(dmsp1999F12.lte(6),0) ``` ## Loading the area of interest (WE PROBABELY DO NOT NEED THIS PART) ``` aoi = ee.FeatureCollection("users/amirhkiani1998/teh") ``` ## Drawing the map (WE PROBABELY DO NOT NEED THIS PART) ``` myMap = geemap.Map() myMap.addLayer(dmsp1999F12.clip(aoi)) myMap.centerObject(aoi) myMap ``` ## The map for myMap ``` myMap = geemap.Map() # myMap.addLayer(dmsp1999F12_clip.clip(aoi)) left_layer = geemap.ee_tile_layer(dmsp1999F12_calbr.clip(aoi), {}, "Clipped") right_layer = geemap.ee_tile_layer(dmsp1999F12.clip(aoi), {}, "Not Clipped") myMap.split_map(left_layer, right_layer) myMap.centerObject(aoi) myMap tehranNumpy = geemap.ee_to_numpy(dmsp1999F12, region=aoi) tehranNumpyCalbr = geemap.ee_to_numpy(dmsp1999F12_calbr, region=aoi) fig, ax = plt.subplots(figsize=(15,5)) sns.kdeplot(tehranNumpy.flatten(), label='non-calibrated',legend=True, ax=ax) sns.kdeplot(tehranNumpyCalbr.flatten(), label='calibrated',legend=True, ax=ax) plt.legend(fontsize=20) plt.title('Distribution of DMSP-OLS 1999 composite calibrated vs non (smoothed w/ Gaussian kernel)', fontsize=20); dmsp1999F12_arr = geemap.ee_to_numpy(dmsp1999F12, region=aoi) dmsp1999F14_arr = geemap.ee_to_numpy(dmsp1999F14, region=aoi) fig, ax = plt.subplots(figsize=(15,5)) sns.kdeplot(dmsp1999F12_arr.flatten(), label='F12 1999',legend=True, ax=ax) sns.kdeplot(dmsp1999F14_arr.flatten(), label='F14 1999',legend=True, ax=ax) plt.legend(fontsize=20) plt.title('Probability density of 2007 annual composite for F15 and F16 satellites', fontsize=20); dmsp1999F14_calbr = calibrate_and_clip(dmsp1999F14) dmsp1999F14_arr_calbr = geemap.ee_to_numpy(dmsp1999F14_calbr, region=aoi) dmsp1999F12_arr_calbr = geemap.ee_to_numpy(dmsp1999F12_calbr, region=aoi) fig, ax = plt.subplots(figsize=(15,5)) sns.kdeplot(dmsp1999F14_arr_calbr.flatten(), label='F14 1999 (calibrated)',legend=True, ax=ax) sns.kdeplot(dmsp1999F12_arr_calbr.flatten(), label='F12 1999 (calibrated)',legend=True, ax=ax) plt.legend(fontsize=20) plt.title('Probability density of 2007 annual composite for F15 and F16 satellites after calibration', fontsize=20); viirs2015 = ee.ImageCollection("NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG").filterDate( "2015-07-01","2015-12-31").filterBounds(roi).median().select('avg_rad').clip(roi) viirs2019 = ee.ImageCollection("NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG").filterDate( 
"2019-07-01","2019-12-31").filterBounds(roi).median().select('avg_rad').clip(roi) viirs_15_tile = geemap.ee_tile_layer(viirs2015, {}, 'Jul-Dec 2015', opacity=0.75) viirs_19_tile = geemap.ee_tile_layer(viirs2019, {}, 'Jul-Dec 2019', opacity=0.75) # initialize our map map2 = geemap.Map() # map2.centerObject(roi, 9) map2.split_map(left_layer=viirs_15_tile, right_layer=viirs_19_tile) map2.addLayerControl() map2 ``` # Start the code ## Import vital libraries ``` import pandas as pd import geemap, ee import seaborn as sns import matplotlib.pyplot as plt import numpy as np import os import datetime try: ee.Initialize() except: ee.Authenticate() ee.Initialize() ``` ## Get VIIRS data <span style='font-size:14px'>(2012-04-01T00:00:00 - 2021-05-01T00:00:00)</span> ``` viirs = ee.ImageCollection("NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG").select('avg_rad') iran = ee.FeatureCollection("users/amirhkiani1998/iran") provinces = pd.read_csv("assets/iran/Province/names_info.csv") provinces ``` ### Make Automation Function ``` def getData(city_name, reducer = ee.Reducer.mean(), reducer_to_save = "mean"): city_name = city_name.strip() city_geom = ee.FeatureCollection("users/amirhkiani1998/" + city_name) def getCityAvgRad(img): return img.reduceRegions(reducer= reducer, collection = city_geom, scale = 500) def getDate(img): return img.set('date', img.date().format()) reduced = viirs.map(getCityAvgRad) dates = viirs.map(getDate) listMean = reduced.flatten().reduceColumns(ee.Reducer.toList(1), [reducer_to_save]).values().getInfo() listDates = dates.reduceColumns(ee.Reducer.toList(1), ["date"]).values().getInfo() dataframe = pd.DataFrame() dataframe["dates"] = np.asarray(listDates).squeeze() dataframe[reducer_to_save] = np.asarray(listMean).squeeze() dataframe["city_name"] = city_name dataframe.to_csv( "assets/iran/Province/" + name + "/" + name + "_" + reducer_to_save + ".csv") return(dataframe) def mergeDataframes(dataframe_1, dataframe_2): return(dataframe_1.merge(right= dataframe_2, on = "date", how = "inner")) ``` #### Getting all Provinces Dataframe ``` provinces_short_name = provinces.short_name dataframeNew = pd.DataFrame() ignoreList = [] errorIgnoreList = [ "islands" ] for name in provinces_short_name: name = name.strip() if name in ignoreList or name in errorIgnoreList: continue extractedProvinceData = getData(name, ee.Reducer.stdDev(), "stdDev") if(dataframeNew.shape == (0,0)): dataframeNew = extractedProvinceData else: dataframeNew = pd.concat([dataframeNew, extractedProvinceData]) extractedProvinceData.to_csv( "assets/iran/Province/" + name + "/" + name + "_std.csv") print(" done", sep="") print() ``` ### Save the dataframe ``` dataframeNew.to_csv("assets/iran/Province/whole_mean.csv") ``` # Get Latitude and Longitiude in VIIRS ## Make map function for getting Long. and Lat. 
``` def addLongLatMap(image): return image.addBands(ee.Image.pixelLonLat()) def addLongLat(ImageCollection): return ImageCollection.map(addLongLatMap) viirsWithLongLat = viirs.map(addLongLatMap) viirsWithLongLatList = viirsWithLongLat.toList(100) tehran = ee.FeatureCollection("users/amirhkiani1998/teh") ``` ## Making the function for getting numpy array ``` def imageToNumpy(image, provinceShortName, scale = 100): date = datetime.datetime.fromisoformat(image.date().format().getInfo()) dateString = str(date.year) + "-" + str(date.month) + "-" + str(date.day) print(dateString, provinceShortName, "start", sep="|") province = ee.FeatureCollection( "users/amirhkiani1998/" + provinceShortName) array = image.reduceRegion( reducer=ee.Reducer.toList(), geometry=province, scale=scale) array = array.getInfo() directoryPath = "assets/iran/Province/" + provinceShortName if(not os.path.isdir(directoryPath + "/numpy_arrays")): os.mkdir(directoryPath + "/numpy_arrays") np.save(directoryPath + "/numpy_arrays/" + provinceShortName + "_" + dateString + ".npy", np.array(array)) print(dateString, provinceShortName, "done", sep="|") return True shortNames = provinces.short_name for provinceName in shortNames: provinceName = provinceName.strip() for i in range(88): image = ee.Image(viirsWithLongLatList.get(i)) imageToNumpy(image, provinceShortName=provinceName, scale=500) ``` # Getting data from GHSL (Global Human Settlement Layer) ``` ghsl = ee.ImageCollection('JRC/GHSL/P2016/SMOD_POP_GLOBE_V1').select("smod_code") ghslTehran = ghsl.first().gte(0) myMap = geemap.Map() myMap.addLayer(ghslTehran, {}, 'Degree of Urbanization') myMap.centerObject(iran) myMap def makingGreaterThatTwo(image): return image.gt(2) def getProvinceGHSL(province, image, scale = 500): date = datetime.datetime.fromisoformat(image.date().format().getInfo()) dateString = str(date.year) + "-" + str(date.month) + "-" + str(date.day) print(dateString, province, "start", sep="|") image = image.gt(2) image = image.addBands(ee.Image.pixelLonLat()) province = province.strip() provinceImage = ee.FeatureCollection("users/amirhkiani1998/" + province) array = np.array(image.reduceRegion(reducer=ee.Reducer.toList(), geometry=provinceImage, scale=scale).getInfo()) directoryPath = "assets/iran/Province/" + \ province + "/numpy_arrays/ghsl_" + province + "_" + dateString + ".npy" np.save(directoryPath, array) print(province , "done", sep="|") ``` ## start getting the values for being built-up ``` ghslSize = ghsl.size().getInfo() ghslWithLatLang = addLongLat(ghsl) ghslWithLatLangList = ghslWithLatLang.toList(ghslSize) provinces_short_name = provinces.short_name for i in range(ghslSize): image = ee.Image(ghslWithLatLangList.get(i)) for province in provinces_short_name: getProvinceGHSL(province, image) dictionary = np.load("assets/iran/Province/khz/numpy_arrays/ghsl_khz_1975-1-1.npy", allow_pickle=True)[()] pd.DataFrame(dictionary) dictionary = np.load( "assets/iran/Province/khz/numpy_arrays/ghsl_khz_1975-1-1.npy", allow_pickle=True)[()] x = dictionary["latitude"] y = dictionary["longitude"] sm = dictionary["smod_code"] plt.scatter(x, y, c=(np.array(sm)*255)) dictionary2 = np.load( "assets/iran/Province/khz/numpy_arrays/khz_2021-5-1.npy", allow_pickle=True)[()] x2 = dictionary2["latitude"] y2 = dictionary2["longitude"] avr = dictionary2["avg_rad"] s1 = pd.Series(np.array(x2), name="x2") s2 = pd.Series(np.array(x), name="x1") pd.DataFrame(dictionary2).head() short_names = provinces.short_name.values ``` ## Make a function to get each VIIRS file summary ``` def 
getSummary(shortProvinceName): years = np.arange(2014, 2022) months = np.arange(1, 13) shortProvinceName = shortProvinceName.strip() dictionary = {"date":[], "std": [], "mean": []} mainPath = "assets/iran/Province/" + shortProvinceName + "/numpy_arrays/" + shortProvinceName + "_" for year in years: for month in months: if(month == 4 and year == 2021): break path = mainPath + str(year) + "-" + str(month) + "-" + "1.npy" avgRad = np.load(path, allow_pickle=True)[()]["avg_rad"] std = np.std(avgRad) mean = np.mean(avgRad) dictionary["date"].append(str(year) + "-" + str(month)) dictionary["std"].append(std) dictionary["mean"].append(mean) return(dictionary) for short_name in short_names: short_name = short_name.strip() resultDictionary = pd.DataFrame(getSummary(short_name)) path = "assets/main data/" + short_name + "_viirs_summary.csv" resultDictionary.to_csv(path) ``` ## Get Information for same regions ``` # Area of interest aoi = ee.FeatureCollection("users/amirhkiani1998/teh").geometry() # VIIRS in 2014 - 01 viirs201401 = ee.Image(ee.ImageCollection("NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG").filterDate( "2014-01-01", "2014-01-10").filterBounds(aoi).first().select("avg_rad").clip(aoi)) # standardizing the aoi viirs mu = ee.Number(viirs201401.reduceRegion(reducer=ee.Reducer.mean(), scale=500).get("avg_rad")) std = ee.Number(viirs201401.reduceRegion(reducer=ee.Reducer.stdDev(), scale=500).get("avg_rad")) viirs201401 = viirs201401.subtract(mu).divide(std) # climate data in 2014 - 01 climate201401 = ee.Image(ee.ImageCollection('IDAHO_EPSCOR/TERRACLIMATE').filterDate( "2014-01-01", "2014-01-10").filterBounds(aoi).first().clip(aoi)) climatePoints = climate201401.sample( **{"region": aoi, "scale": 1000, "seed": 0, 'geometries': True}) climatePoints.first().getInfo() fused = viirs201401.addBands(climate201401) training = fused.sampleRegions(collection=climatePoints, properties=["pr"], scale=1000) size = training.size().getInfo() listData = training.toList(size).getInfo() listData[2] ee.Number(viirs201401.reduceRegion( reducer=ee.Reducer.mean(), scale=500).get("avg_rad")).getInfo() ``` # A research in climate and lighttime data from 2014 to 2020 ``` # VIIRS from 2014 to 2020 viirs = ee.Image(ee.ImageCollection("NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG").filterDate( "2014-01-01", "2020-12-10").mean().select("avg_rad")) # standardizing the aoi viirs mu = ee.Number(viirs.reduceRegion(reducer=ee.Reducer.mean(), scale=500).get("avg_rad")) std = ee.Number(viirs.reduceRegion(reducer=ee.Reducer.stdDev(), scale=500).get("avg_rad")) viirs = viirs.subtract(mu).divide(std) # climate from 2014 to 2020 climate = ee.Image(ee.ImageCollection('IDAHO_EPSCOR/TERRACLIMATE').filterDate( "2014-01-01", "2014-01-10").mean()) # combinaton comb = climate.addBands(viirs) # points climatePoints = climate.sample( **{"region": aoi, "scale": 1000, "seed": 0, 'geometries': True}) ``` ## Getting Data ### * GHSL in 2015 ### * VIIRS mean of the span of 2014-2020 ### * mean for climate in from 2014-2020 ``` def getClimateGHSLData(short_name, thePath="assets/main data/climate and viirs 2014-2020/"): short_name = short_name.strip() aoi = ee.FeatureCollection("users/amirhkiani1998/" + short_name).geometry() # VIIRS from 2014 to 2020 viirs = ee.Image(ee.ImageCollection("NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG").filterDate( "2014-01-01", "2020-12-10").filterBounds(aoi).mean().select("avg_rad").clip(aoi)) # standardizing the aoi viirs mu = ee.Number(viirs.reduceRegion( reducer=ee.Reducer.mean(), scale=500).get("avg_rad")) std = ee.Number(viirs.reduceRegion( 
reducer=ee.Reducer.stdDev(), scale=500).get("avg_rad")) viirs = viirs.subtract(mu).divide(std) # climate from 2014 to 2020 climate = ee.Image(ee.ImageCollection('IDAHO_EPSCOR/TERRACLIMATE').filterDate( "2014-01-01", "2014-01-10").filterBounds(aoi).mean().clip(aoi)) # GHSL = global human settlement layer ghsl = ee.ImageCollection('JRC/GHSL/P2016/SMOD_POP_GLOBE_V1').filter( ee.Filter.date('2015-01-01', '2015-12-31')).select('smod_code').median().clip(aoi) ghsl = ghsl.gte(2) # combinaton comb = climate.addBands(viirs) # points point = ghsl.sample( **{"region": aoi, "scale": 1500, "seed": 0, 'geometries': True}) training = comb.sampleRegions(collection=point, scale=1500) size = training.size().getInfo() trainingToList = training.toList(size).getInfo() trainingToListModified = [] for value in trainingToList: trainingToListModified.append(value["properties"]) dataframe = pd.DataFrame(trainingToListModified) thePath += short_name + ".csv" dataframe.to_csv(thePath) # limited = ["khr_north", "khz", "znj", "smn"] limited = [] for short_name in short_names: short_name = short_name.strip() if(short_name in limited): continue print(short_name) getClimateGHSLData( short_name, "assets/main data/climate-viirs-ghsl - 2014-2020 - scale 1250/") ``` ## Loading Tehran's data ``` tehranCVG = pd.read_csv("assets/main data/climate-viirs-ghsl - 2014-2020 - scale 1250/teh.csv").drop(columns=["Unnamed: 0"]) tehranCVG.head() ``` ## Drawing the bar plot for **PR** and **Average Radian** ``` plt.figure(figsize=(18,8)) plt.grid(axis="y", zorder=0) plt.bar(tehranCVG.pr.values, tehranCVG.avg_rad.values, label="Average Radian against Precipitation", zorder=3) plt.xlabel("Precipitation Accumulation") plt.ylabel("Scaled Average Radian") plt.savefig("Tehran-Radian and Precipitation", dpi=1400) plt.legend() plt.show() plt.figure(figsize=(18, 8)) plt.grid(axis="y", zorder=0) meanPRRadian = tehranCVG[["avg_rad", "pr"]].groupby("pr").mean().reset_index() pr = meanPRRadian.pr avg_rad = meanPRRadian.avg_rad plt.plot(pr, avg_rad, zorder=3, linewidth=5) plt.show() plt.figure(figsize=(18, 8)) plt.grid(axis="y", zorder=0) meanRORadian = tehranCVG[["avg_rad", "def"] ].groupby("def").mean().reset_index() # ro = meanRORadian.ro # avg_rad = meanRORadian.avg_rad # plt.plot(ro, avg_rad, zorder=3, linewidth=5) # plt.show() meanRORadian ``` # Start analyzing Tehran ``` tehCSV = np.load("E:/Scripts/Github/Nighttime Data/assets/iran/Province/teh/numpy_arrays/teh_2014-1-1.npy", allow_pickle=True)[()] dataframe = pd.DataFrame(tehCSV) dataframe ```
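To make the inter-calibration step at the top of this notebook easier to follow, here is a small numpy-only illustration of the same polynomial and clipping rules applied to plain arrays instead of `ee.Image` objects; the coefficients are placeholder values, not taken from `dmsp_coeffs.csv`.

```
# Numpy illustration of the calibrate_img / clip_img logic above.
# c0, c1, c2 are hypothetical placeholders for the per-satellite coefficients.
import numpy as np

c0, c1, c2 = 0.5, 1.1, -0.002
dn = np.array([0, 5, 10, 40, 63], dtype=float)

# Same quadratic adjustment as calibrate_img: c0 + c1*DN + c2*DN**2
calibrated = c0 + c1 * dn + c2 * dn**2

# Same clipping rule as clip_img: values above 63 are set to 63,
# values at or below 6 are set to 0
clipped = np.where(calibrated > 63, 63, np.where(calibrated <= 6, 0, calibrated))
print(calibrated)
print(clipped)
```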
= viirs201401.addBands(climate201401) training = fused.sampleRegions(collection=climatePoints, properties=["pr"], scale=1000) size = training.size().getInfo() listData = training.toList(size).getInfo() listData[2] ee.Number(viirs201401.reduceRegion( reducer=ee.Reducer.mean(), scale=500).get("avg_rad")).getInfo() # VIIRS from 2014 to 2020 viirs = ee.Image(ee.ImageCollection("NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG").filterDate( "2014-01-01", "2020-12-10").mean().select("avg_rad")) # standardizing the aoi viirs mu = ee.Number(viirs.reduceRegion(reducer=ee.Reducer.mean(), scale=500).get("avg_rad")) std = ee.Number(viirs.reduceRegion(reducer=ee.Reducer.stdDev(), scale=500).get("avg_rad")) viirs = viirs.subtract(mu).divide(std) # climate from 2014 to 2020 climate = ee.Image(ee.ImageCollection('IDAHO_EPSCOR/TERRACLIMATE').filterDate( "2014-01-01", "2014-01-10").mean()) # combinaton comb = climate.addBands(viirs) # points climatePoints = climate.sample( **{"region": aoi, "scale": 1000, "seed": 0, 'geometries': True}) def getClimateGHSLData(short_name, thePath="assets/main data/climate and viirs 2014-2020/"): short_name = short_name.strip() aoi = ee.FeatureCollection("users/amirhkiani1998/" + short_name).geometry() # VIIRS from 2014 to 2020 viirs = ee.Image(ee.ImageCollection("NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG").filterDate( "2014-01-01", "2020-12-10").filterBounds(aoi).mean().select("avg_rad").clip(aoi)) # standardizing the aoi viirs mu = ee.Number(viirs.reduceRegion( reducer=ee.Reducer.mean(), scale=500).get("avg_rad")) std = ee.Number(viirs.reduceRegion( reducer=ee.Reducer.stdDev(), scale=500).get("avg_rad")) viirs = viirs.subtract(mu).divide(std) # climate from 2014 to 2020 climate = ee.Image(ee.ImageCollection('IDAHO_EPSCOR/TERRACLIMATE').filterDate( "2014-01-01", "2014-01-10").filterBounds(aoi).mean().clip(aoi)) # GHSL = global human settlement layer ghsl = ee.ImageCollection('JRC/GHSL/P2016/SMOD_POP_GLOBE_V1').filter( ee.Filter.date('2015-01-01', '2015-12-31')).select('smod_code').median().clip(aoi) ghsl = ghsl.gte(2) # combinaton comb = climate.addBands(viirs) # points point = ghsl.sample( **{"region": aoi, "scale": 1500, "seed": 0, 'geometries': True}) training = comb.sampleRegions(collection=point, scale=1500) size = training.size().getInfo() trainingToList = training.toList(size).getInfo() trainingToListModified = [] for value in trainingToList: trainingToListModified.append(value["properties"]) dataframe = pd.DataFrame(trainingToListModified) thePath += short_name + ".csv" dataframe.to_csv(thePath) # limited = ["khr_north", "khz", "znj", "smn"] limited = [] for short_name in short_names: short_name = short_name.strip() if(short_name in limited): continue print(short_name) getClimateGHSLData( short_name, "assets/main data/climate-viirs-ghsl - 2014-2020 - scale 1250/") tehranCVG = pd.read_csv("assets/main data/climate-viirs-ghsl - 2014-2020 - scale 1250/teh.csv").drop(columns=["Unnamed: 0"]) tehranCVG.head() plt.figure(figsize=(18,8)) plt.grid(axis="y", zorder=0) plt.bar(tehranCVG.pr.values, tehranCVG.avg_rad.values, label="Average Radian against Precipitation", zorder=3) plt.xlabel("Precipitation Accumulation") plt.ylabel("Scaled Average Radian") plt.savefig("Tehran-Radian and Precipitation", dpi=1400) plt.legend() plt.show() plt.figure(figsize=(18, 8)) plt.grid(axis="y", zorder=0) meanPRRadian = tehranCVG[["avg_rad", "pr"]].groupby("pr").mean().reset_index() pr = meanPRRadian.pr avg_rad = meanPRRadian.avg_rad plt.plot(pr, avg_rad, zorder=3, linewidth=5) plt.show() plt.figure(figsize=(18, 
8)) plt.grid(axis="y", zorder=0) meanRORadian = tehranCVG[["avg_rad", "def"] ].groupby("def").mean().reset_index() # ro = meanRORadian.ro # avg_rad = meanRORadian.avg_rad # plt.plot(ro, avg_rad, zorder=3, linewidth=5) # plt.show() meanRORadian tehCSV = np.load("E:/Scripts/Github/Nighttime Data/assets/iran/Province/teh/numpy_arrays/teh_2014-1-1.npy", allow_pickle=True)[()] dataframe = pd.DataFrame(tehCSV) dataframe
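For reference, the calibration step above can be sanity-checked offline on a plain NumPy array. This is a minimal sketch that mirrors `calibrate_img` and `clip_img` with made-up coefficients (the real `c0, c1, c2` come from `assets/dmsp_coeffs.csv`):

```
import numpy as np

c0, c1, c2 = 0.1, 1.05, -0.002               # placeholder coefficients, for illustration only
dn = np.array([0.0, 5.0, 20.0, 45.0, 63.0])  # raw DMSP-OLS digital numbers
calibrated = c0 + c1 * dn + c2 * dn**2       # same polynomial as calibrate_img
clipped = np.where(calibrated > 63, 63, calibrated)  # clip_img: cap values above 63 ...
clipped = np.where(clipped <= 6, 0, clipped)         # ... and zero out values at or below 6
print(clipped)
```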
```
import os
import pandas as pd
data_folder = os.path.join(os.path.expanduser("~"), "data", "datasets", "ml-100k")
ratings_filename = os.path.join(data_folder, "u.data")

all_ratings = pd.read_csv(ratings_filename, delimiter="\t", header=None,
                          names=["UserID", "MovieID", "Rating", "Datetime"])
all_ratings.head()

all_ratings["Datetime"] = pd.to_datetime(all_ratings['Datetime'], unit='s')
all_ratings.head()

all_ratings["Favorable"] = all_ratings["Rating"] > 3
ratings = all_ratings[all_ratings['UserID'].isin(range(200))]
favorable_ratings = ratings[ratings["Favorable"]]
favorable_reviews_by_users = dict((k, frozenset(v.values))
                                  for k, v in favorable_ratings.groupby("UserID")["MovieID"])
num_favorable_by_movie = ratings[["MovieID", "Favorable"]].groupby("MovieID").sum()
num_favorable_by_movie.sort_values(by="Favorable", ascending=False).head()

frequent_itemsets = {}
min_support = 50
frequent_itemsets[1] = dict((frozenset((movie_id,)), row["Favorable"])
                            for movie_id, row in num_favorable_by_movie.iterrows()
                            if row["Favorable"] > min_support)

from collections import defaultdict

def find_frequent_itemsets(favorable_reviews_by_users, k_1_itemsets, min_support):
    counts = defaultdict(int)
    for user, reviews in favorable_reviews_by_users.items():
        for itemset in k_1_itemsets:
            if itemset.issubset(reviews):
                for other_reviewed_movie in reviews - itemset:
                    current_superset = itemset | frozenset((other_reviewed_movie,))
                    counts[current_superset] += 1
    return dict([(itemset, frequency) for itemset, frequency in counts.items()
                 if frequency >= min_support])

import sys
frequent_itemsets = {}  # itemsets are sorted by length
min_support = 50

# k=1 candidates are the isbns with more than min_support favourable reviews
frequent_itemsets[1] = dict((frozenset((movie_id,)), row["Favorable"])
                            for movie_id, row in num_favorable_by_movie.iterrows()
                            if row["Favorable"] > min_support)
print("There are {} movies with more than {} favorable reviews".format(len(frequent_itemsets[1]), min_support))
sys.stdout.flush()
for k in range(2, 20):
    # Generate candidates of length k, using the frequent itemsets of length k-1
    # Only store the frequent itemsets
    cur_frequent_itemsets = find_frequent_itemsets(favorable_reviews_by_users, frequent_itemsets[k-1], min_support)
    if len(cur_frequent_itemsets) == 0:
        print("Did not find any frequent itemsets of length {}".format(k))
        sys.stdout.flush()
        break
    else:
        print("I found {} frequent itemsets of length {}".format(len(cur_frequent_itemsets), k))
        #print(cur_frequent_itemsets)
        sys.stdout.flush()
        frequent_itemsets[k] = cur_frequent_itemsets
# We aren't interested in the itemsets of length 1, so remove those
del frequent_itemsets[1]

# Now we create the association rules. First, they are candidates until the confidence has been tested
candidate_rules = []
for itemset_length, itemset_counts in frequent_itemsets.items():
    for itemset in itemset_counts.keys():
        for conclusion in itemset:
            premise = itemset - set((conclusion,))
            candidate_rules.append((premise, conclusion))
print("There are {} candidate rules".format(len(candidate_rules)))
print(candidate_rules[:5])

# Now, we compute the confidence of each of these rules. This is very similar to what we did in chapter 1
correct_counts = defaultdict(int)
incorrect_counts = defaultdict(int)
for user, reviews in favorable_reviews_by_users.items():
    for candidate_rule in candidate_rules:
        premise, conclusion = candidate_rule
        if premise.issubset(reviews):
            if conclusion in reviews:
                correct_counts[candidate_rule] += 1
            else:
                incorrect_counts[candidate_rule] += 1
rule_confidence = {candidate_rule: correct_counts[candidate_rule] / float(correct_counts[candidate_rule] + incorrect_counts[candidate_rule])
                   for candidate_rule in candidate_rules}

# Choose only rules above a minimum confidence level
min_confidence = 0.9
# Filter out the rules with poor confidence
rule_confidence = {rule: confidence for rule, confidence in rule_confidence.items() if confidence > min_confidence}
print(len(rule_confidence))

from operator import itemgetter
sorted_confidence = sorted(rule_confidence.items(), key=itemgetter(1), reverse=True)
for index in range(5):
    print("Rule #{0}".format(index + 1))
    (premise, conclusion) = sorted_confidence[index][0]
    print("Rule: If a person recommends {0} they will also recommend {1}".format(premise, conclusion))
    print(" - Confidence: {0:.3f}".format(rule_confidence[(premise, conclusion)]))
    print("")

# Even better, we can get the movie titles themselves from the dataset
movie_name_filename = os.path.join(data_folder, "u.item")
movie_name_data = pd.read_csv(movie_name_filename, delimiter="|", header=None, encoding="mac-roman")
movie_name_data.columns = ["MovieID", "Title", "Release Date", "Video Release", "IMDB", "<UNK>",
                           "Action", "Adventure", "Animation", "Children's", "Comedy", "Crime",
                           "Documentary", "Drama", "Fantasy", "Film-Noir", "Horror", "Musical",
                           "Mystery", "Romance", "Sci-Fi", "Thriller", "War", "Western"]
movie_name_data.head()

def get_movie_name(movie_id):
    title_object = movie_name_data[movie_name_data["MovieID"] == movie_id]["Title"]
    title = title_object.values[0]
    return title

get_movie_name(4)

for index in range(5):
    print("Rule #{0}".format(index + 1))
    (premise, conclusion) = sorted_confidence[index][0]
    premise_names = ", ".join(get_movie_name(idx) for idx in premise)
    conclusion_name = get_movie_name(conclusion)
    print("Rule: If a person recommends {0} they will also recommend {1}".format(premise_names, conclusion_name))
    print(" - Confidence: {0:.3f}".format(rule_confidence[(premise, conclusion)]))
    print("")

# Evaluation using test data
test_dataset = all_ratings[~all_ratings['UserID'].isin(range(200))]
test_favorable = test_dataset[test_dataset["Favorable"]]
test_favorable_by_users = dict((k, frozenset(v.values))
                               for k, v in test_favorable.groupby("UserID")["MovieID"])

correct_counts = defaultdict(int)
incorrect_counts = defaultdict(int)
for user, reviews in test_favorable_by_users.items():
    for candidate_rule in candidate_rules:
        premise, conclusion = candidate_rule
        if premise.issubset(reviews):
            if conclusion in reviews:
                correct_counts[candidate_rule] += 1
            else:
                incorrect_counts[candidate_rule] += 1
test_confidence = {candidate_rule: (correct_counts[candidate_rule] / float(correct_counts[candidate_rule] + incorrect_counts[candidate_rule]))
                   for candidate_rule in rule_confidence}
print(len(test_confidence))

for index in range(10):
    print("Rule #{0}".format(index + 1))
    (premise, conclusion) = sorted_confidence[index][0]
    premise_names = ", ".join(get_movie_name(idx) for idx in premise)
    conclusion_name = get_movie_name(conclusion)
    print("Rule: If a person recommends {0} they will also recommend {1}".format(premise_names, conclusion_name))
    print(" - Train Confidence: {0:.3f}".format(rule_confidence.get((premise, conclusion), -1)))
    print(" - Test Confidence: {0:.3f}".format(test_confidence.get((premise, conclusion), -1)))
    print("")
```
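As a quick sanity check of the rule-confidence computation above, the same formula can be applied by hand to a tiny invented set of favourable reviews (the user and movie IDs below are made up):

```
# Confidence of the toy rule {10} -> 20
reviews = {1: frozenset({10, 20, 30}), 2: frozenset({10, 20}), 3: frozenset({10, 40})}
premise, conclusion = frozenset({10}), 20
correct = sum(1 for r in reviews.values() if premise.issubset(r) and conclusion in r)
total = sum(1 for r in reviews.values() if premise.issubset(r))
print(correct / total)  # 2 of the 3 users who liked movie 10 also liked movie 20 -> 0.667
```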
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline

import shutil
import numpy as np
import pandas as pd
from pathlib import Path
import json
import torch
from tqdm import tqdm
import matplotlib.pyplot as plt
import seaborn as sn
from src.data import IFCNetPly
from src.models.models import DGCNN
from torch.utils.data import DataLoader, Subset, Dataset
import torch.nn.functional as F
import sklearn.metrics as metrics
import torch.nn as nn
from sklearn.preprocessing import label_binarize

data_root = Path("../data/processed/DGCNN/IFCNetCore")
with open("../IFCNetCore_Classes.json", "r") as f:
    class_names = json.load(f)

train_dataset = IFCNetPly(data_root, class_names, partition="train")
val_dataset = IFCNetPly(data_root, class_names, partition="train")
test_dataset = IFCNetPly(data_root, class_names, partition="test")

np.random.seed(42)
perm = np.random.permutation(range(len(train_dataset)))
train_len = int(0.7 * len(train_dataset))
train_dataset = Subset(train_dataset, sorted(perm[:train_len]))
val_dataset = Subset(val_dataset, sorted(perm[train_len:]))

train_loader = DataLoader(train_dataset, batch_size=8, num_workers=8)
val_loader = DataLoader(val_dataset, batch_size=8, num_workers=8)
test_loader = DataLoader(test_dataset, batch_size=8, num_workers=8)

model_dir = Path("../models/")
with (model_dir/"DGCNNParams.json").open("r") as f:
    config = json.load(f)

model = DGCNN(config["dropout"], config["k"], config["embedding_dim"], len(class_names))
model_state, _ = torch.load(model_dir/"DGCNNWeights+Optimizer")
model.load_state_dict(model_state)
device = torch.device("cuda")
model.eval()
model.to(device)

def calc_metrics(probabilities, labels):
    predictions = np.argmax(probabilities, axis=1)
    acc = metrics.accuracy_score(labels, predictions)
    balanced_acc = metrics.balanced_accuracy_score(labels, predictions)
    precision = metrics.precision_score(labels, predictions, average="weighted")
    recall = metrics.recall_score(labels, predictions, average="weighted")
    f1 = metrics.f1_score(labels, predictions, average="weighted")
    return {
        f"accuracy_score": acc,
        f"balanced_accuracy_score": balanced_acc,
        f"precision_score": precision,
        f"recall_score": recall,
        f"f1_score": f1
    }

def plot_confusion_matrix(confusion_matrix, display_labels, fname=None):
    labels = list(map(lambda x: x[3:], display_labels))
    df = pd.DataFrame(confusion_matrix, index=labels, columns=labels)
    plt.figure(figsize=(7, 5))
    heatmap = sn.heatmap(df, cmap="Blues", annot=True, fmt="d", cbar=False)
    plt.ylabel("Actual class")
    plt.xlabel("Predicted class")
    if fname:
        plt.savefig(fname, dpi=300, bbox_inches="tight")

def eval(model, loader, device, class_names, fname=None):
    model.eval()
    all_probs = []
    all_labels = []
    with torch.no_grad():
        for data, labels in tqdm(loader):
            data, labels = data.to(device), labels.to(device)
            data = data.permute(0, 2, 1)
            outputs = model(data)
            probs = F.softmax(outputs, dim=1)
            all_probs.append(probs.cpu().detach().numpy())
            all_labels.append(labels.cpu().numpy())
    all_probs = np.concatenate(all_probs)
    all_labels = np.concatenate(all_labels)
    result = calc_metrics(all_probs, all_labels)
    predictions = np.argmax(all_probs, axis=1)
    confusion_matrix = metrics.confusion_matrix(all_labels, predictions)
    plot_confusion_matrix(confusion_matrix, class_names, fname=fname)
    return all_labels, all_probs

eval(model, train_loader, device, class_names)
eval(model, val_loader, device, class_names)
test_labels, test_probs = eval(model, test_loader, device, class_names, fname="../reports/figures/dgcnn_confusion.png")
np.savez("DGCNNProbs.npz", labels=test_labels, probs=test_probs)

test_predictions = np.argmax(test_probs, axis=1)
wrong_predictions = np.where(test_labels != test_predictions)[0]
wrong_pred_dir = Path("../data/external/DGCNN/wrong_classes/IFCNetCore")
raw_data_dict = {path.stem: path for path in Path("../data/raw/IFCNetCore").glob("**/test/*.obj")}
wrong_pred_dir.mkdir(parents=True, exist_ok=True)
for i in wrong_predictions:
    label_str = class_names[test_labels[i]]
    prediction_str = class_names[test_predictions[i]]
    print(f"{test_dataset.files[i].stem}, Label: {label_str}, Prediction: {prediction_str}")
    target_dir = wrong_pred_dir / label_str
    target_dir.mkdir(exist_ok=True)
    filename = test_dataset.files[i]
    shutil.copy(str(raw_data_dict[filename.stem]), str(target_dir / f"{filename.stem}_{prediction_str}.obj"))
```
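The metrics helper above only needs class probabilities and integer labels, so it can be exercised without the trained network or the IFCNet data. A small sketch with fabricated values (not results from the actual model):

```
import numpy as np
import sklearn.metrics as metrics

probs = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.3, 0.3, 0.4],
                  [0.6, 0.2, 0.2]])   # fake softmax outputs for 4 samples, 3 classes
labels = np.array([0, 1, 2, 1])       # fake ground truth
preds = np.argmax(probs, axis=1)      # -> [0, 1, 2, 0]
print(metrics.accuracy_score(labels, preds))                 # 0.75
print(metrics.f1_score(labels, preds, average="weighted"))
```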
```
# This reads in SDSS spectra, clips them to restrict them to the wavelength range of
# interest, and calculates the S/N given the unnormalized flux and noise from SDSS
# Created 2021 July 18 by E.S.

import glob
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# path stems
stem = "/Users/bandari/Documents/git.repos/rrlyrae_metallicity/src/sdss_cosmic_rays_removed/"

# read in each file, clip, find S/N
file_names = glob.glob(stem + "*dat")

# match S/N with the right file in the table containing Robospect data
#df_s_to_n = pd.DataFrame(columns=["file_name","s_to_n"])
dict_s_to_n = {"file_name": [], "s_to_n": []}

for spec_num in range(0, len(file_names)):

    this_spectrum = pd.read_csv(file_names[spec_num], names=["wavel", "flux", "noise"], delim_whitespace=True)
    this_spectrum["s_to_n_spec"] = np.divide(this_spectrum["flux"], this_spectrum["noise"])

    # mask out absorption line regions
    caii_K_line = np.logical_and(this_spectrum["wavel"] >= 3933.66-30, this_spectrum["wavel"] <= 3933.66+30)
    h_eps_line = np.logical_and(this_spectrum["wavel"] >= 3970.075-30, this_spectrum["wavel"] <= 3970.075+30)
    h_del_line = np.logical_and(this_spectrum["wavel"] >= 4101.71-30, this_spectrum["wavel"] <= 4101.71+30)
    h_gam_line = np.logical_and(this_spectrum["wavel"] >= 4340.472-30, this_spectrum["wavel"] <= 4340.472+30)
    h_beta_line = np.logical_and(this_spectrum["wavel"] >= 4861.29-30, this_spectrum["wavel"] <= 4861.29+30)

    # sum across the arrays
    sum_array = np.sum([np.array(caii_K_line),
                        np.array(h_eps_line),
                        np.array(h_del_line),
                        np.array(h_gam_line),
                        np.array(h_beta_line)], axis=0)

    # convert to boolean column (True == 'there is an absorption line here')
    line_bool_array = np.array(sum_array, dtype=bool)
    this_spectrum["line_regions"] = line_bool_array
    idx_outside_lines = this_spectrum.index[this_spectrum["line_regions"] == False].tolist()

    net_s_to_n = np.median(this_spectrum["s_to_n_spec"].loc[idx_outside_lines])

    dict_s_to_n["s_to_n"].append(net_s_to_n)
    dict_s_to_n["file_name"].append(os.path.basename(file_names[spec_num]))
    # dummy for now
    #df_s_to_n = df_s_to_n.append({"file_name": np.nan,"s_to_n": net_s_to_n})
    #print("Net S/N: ")
    #print(net_s_to_n)

df_s_to_n = pd.DataFrame.from_dict(dict_s_to_n)

# write to file
df_s_to_n.to_csv("junk.csv")

plt.hist(df_s_to_n["s_to_n"], bins=300)
plt.xlabel("S/N")
plt.title("Histogram of S/N of SDSS spectra")
plt.savefig("junk.png", dpi=300)  # write out

plt.plot(this_spectrum["wavel"], this_spectrum["s_to_n_spec"])
plt.plot(this_spectrum["wavel"], this_spectrum["line_regions"])
plt.show()

# histogram of the per-spectrum S/N values (the original line referenced an undefined
# array_net_s_to_n; the same values live in df_s_to_n["s_to_n"])
plt.hist(df_s_to_n["s_to_n"])
plt.show()
```
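The core of the S/N estimate above is just a masked median of flux over noise. A self-contained sketch on a synthetic spectrum (random numbers, not SDSS data) makes the idea explicit:

```
import numpy as np

wavel = np.linspace(3900, 5000, 1000)          # synthetic wavelength grid (Angstrom)
s_to_n = np.random.normal(20, 2, wavel.size)   # synthetic per-pixel flux/noise
line_mask = np.abs(wavel - 4340.472) <= 30     # e.g. mask H-gamma +/- 30 Angstrom
print(np.median(s_to_n[~line_mask]))           # median S/N outside the line region
```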
```
import pandas as pd
import numpy as np
import sys

data = pd.read_csv('https://raw.githubusercontent.com/Ragnarok540/pdg/main/tags.txt', sep='~', header=None)
data.columns = ['links', 'req']

def add_categories(df):
    bins = [0, 11, 120, sys.maxsize]
    labels = ['low', 'moderate', 'high']
    category = pd.cut(df['links'], bins=bins, labels=labels)
    df['category'] = category
    return df

add_categories(data)
data["label"] = data["category"].cat.codes
data.head()

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn import metrics

ngram = False
tf_idf = True

if ngram:
    ngram_range = (1, 3)
else:
    ngram_range = (1, 1)

count_vect = CountVectorizer(ngram_range=ngram_range)
X_train_count = count_vect.fit_transform(data['req'])

if tf_idf:
    tfidf_transformer = TfidfTransformer()
    X_train_count = tfidf_transformer.fit_transform(X_train_count)

y = data['label']
X_train, X_test, y_train, y_test = train_test_split(X_train_count, y, test_size=0.2, random_state=1)  # 80% training and 20% test
X_train_count.shape

from pprint import pprint
from sklearn.neighbors import KNeighborsClassifier

clf = KNeighborsClassifier()
print('Parameters currently in use:\n')
pprint(clf.get_params())

# note: some of these metrics (seuclidean, mahalanobis, wminkowski) require extra
# parameters (e.g. V or w) and may fail or be skipped inside the grid search
param_grid = {'metric': ['minkowski', 'manhattan', 'seuclidean', 'chebyshev',
                         'mahalanobis', 'wminkowski', 'euclidean'],
              'n_neighbors': [1, 3, 5]}
pprint(param_grid)

from sklearn.model_selection import GridSearchCV
grid_search = GridSearchCV(estimator=clf, param_grid=param_grid, cv=5, n_jobs=-1, verbose=2)
grid_search.fit(X_train, y_train)
grid_search.best_params_

clf = KNeighborsClassifier(n_neighbors=1, metric='manhattan')
clf = clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
cf_matrix = metrics.confusion_matrix(y_test, y_pred)

import seaborn as sns

def cf_matrix_plot(cf_matrix):
    group_names = ['True Low', 'False Moderate', 'False High',
                   'False Low', 'True Moderate', 'False High',
                   'False Low', 'False Moderate', 'True High']
    group_counts = ['{0:0.0f}'.format(value) for value in cf_matrix.flatten()]
    group_percentages = ['{0:.2%}'.format(value) for value in cf_matrix.flatten() / np.sum(cf_matrix)]
    labels = [f'{v1}\n{v2}\n{v3}' for v1, v2, v3 in zip(group_names, group_counts, group_percentages)]
    labels = np.asarray(labels).reshape(3, 3)
    sns.heatmap(cf_matrix, annot=labels, fmt='', cmap='Blues')

cf_matrix_plot(cf_matrix)
print(metrics.classification_report(y_test, y_pred, labels=[0, 1, 2]))
```
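The binning done by `add_categories` is easy to verify in isolation; this small sketch applies the same `pd.cut` rule to a few made-up link counts:

```
import sys
import pandas as pd

links = pd.Series([3, 11, 12, 120, 121])
cats = pd.cut(links, bins=[0, 11, 120, sys.maxsize], labels=['low', 'moderate', 'high'])
print(cats.tolist())  # ['low', 'low', 'moderate', 'moderate', 'high'] -- the bins are right-closed
```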
# Dynamical System Approximation

This notebook aims at learning a functional correlation based on given snapshots. The data is created through the following ODE:

\begin{align}
\frac{d^2}{dt^2} x_l = \sum_{i_{l-1},i_l,i_{l+1}=1}^4 c_{i_{l-1} i_l i_{l+1}}\,\psi_{i_{l-1}}(x_{l-1})\,\psi_{i_l}(x_l)\,\psi_{i_{l+1}}(x_{l+1}),
\end{align}

where only $20$ coefficients $c_{i_{l-1} i_l i_{l+1}}$ are non-zero in each equation, drawn randomly from $[-1,1]$. Here we only regularize the optimized part of the coefficients, not the selection tensor.

```
import numpy as np
import xerus
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
import time
import helpers as hp
import pandas as pd
import random
%precision 4

# we construct a random solution with 20 (10 at the boundary) non zero coefficients in each equation
# the selection pattern is the same as in the fermi pasta equations
def project(X):
    dim = ([(3 if i == 0 or i == noo - 1 else 4) for i in range(0, noo)])
    dim.extend(dim)
    C2T = xerus.TTOperator(dim)
    for eq in range(noo):
        idx = [0 for i in range(noo)]
        if eq == 0:
            idx[0] = 2
            idx[1] = 3
        elif eq == noo - 1:
            idx[noo-2] = 1
            idx[noo-1] = 1
        elif eq == noo - 2:
            idx[eq-1] = 1
            idx[eq] = 2
            idx[eq+1] = 2
        else:
            idx[eq-1] = 1
            idx[eq] = 2
            idx[eq+1] = 3
        idx.extend(idx)
        C2T += xerus.TTOperator.dirac(dim, idx)
    C2T.round(1e-12)
    i1,i2,i3,i4,i5,i6,j1,j2,j3,j4,k1,k2,k3 = xerus.indices(13)
    X(i1^noo,j1^noo) << X(i1^noo,k1^noo) * C2T(k1^noo,j1^noo)
    X.round(1e-12)
    return X

def exact(noo, p):
    s = 3
    coeffs = 10  # number of random coefficients; for the first and last equation, will be doubled for the other equations
    rank = 4
    dim = [p for i in range(0, noo)]
    dim.extend([s+1 for i in range(0, noo)])
    dim[noo] = s
    dim[2*noo-1] = s
    ranks = [1] + [rank for i in range(noo-1)] + [1]
    C1ex = xerus.TTOperator(dim)

    i = 0  # first component
    tmp = xerus.Tensor([ranks[i], p, dim[i+noo], ranks[i+1]])
    tmp1 = xerus.Tensor.dirac([ranks[i], p, 1, ranks[i+1]], [0, 0, 0, 0])
    tmp2 = xerus.Tensor([ranks[i], p, 1, ranks[i+1]])
    for c in range(coeffs):
        r1 = random.randint(0, ranks[i]-1)
        r2 = random.randint(0, ranks[i+1]-1)
        pp = random.randint(0, p-1)
        tmp2[r1, pp, 0, r2] = 2*random.random() - 1
    tmp3 = xerus.Tensor.identity([ranks[i], p])
    tmp3.reinterpret_dimensions([ranks[i], p, 1, 1])
    tmp4 = xerus.Tensor.identity([1, p, 1, ranks[i+1]])
    tmp.offset_add(tmp1, [0, 0, 0, 0])
    tmp.offset_add(tmp2, [0, 0, 2, 0])
    tmp.offset_add(tmp4, [0, 0, 1, 0])
    C1ex.set_component(i, tmp)

    i = noo-1  # last component
    tmp = xerus.Tensor([ranks[i], p, dim[i+noo], ranks[i+1]])
    tmp1 = xerus.Tensor.dirac([ranks[i], p, 1, ranks[i+1]], [0, 0, 0, 0])
    tmp2 = xerus.Tensor([ranks[i], p, 1, ranks[i+1]])
    for c in range(coeffs):
        r1 = random.randint(0, ranks[i]-1)
        r2 = random.randint(0, ranks[i+1]-1)
        pp = random.randint(0, p-1)
        tmp2[r1, pp, 0, r2] = 2*random.random() - 1
    tmp3 = xerus.Tensor.identity([ranks[i], p])
    tmp3.reinterpret_dimensions([ranks[i], p, 1, 1])
    tmp4 = xerus.Tensor.identity([1, p, 1, ranks[i+1]])
    tmp.offset_add(tmp1, [0, 0, 0, 0])
    tmp.offset_add(tmp2, [0, 0, 1, 0])
    tmp.offset_add(tmp3, [0, 0, 2, 0])
    C1ex.set_component(i, tmp)

    # loop for all components in between
    for i in range(1, noo-1):
        tmp = xerus.Tensor([ranks[i], p, dim[i+noo], ranks[i+1]])
        tmp1 = xerus.Tensor.dirac([ranks[i], p, 1, ranks[i+1]], [0, 0, 0, 0])
        tmp2 = xerus.Tensor([ranks[i], p, 1, ranks[i+1]])
        for c in range(2*coeffs):
            r1 = random.randint(0, ranks[i]-1)
            r2 = random.randint(0, ranks[i+1]-1)
            pp = random.randint(0, p-1)
            tmp2[r1, pp, 0, r2] = 2*random.random() - 1
        tmp3 = xerus.Tensor.identity([ranks[i], p])
        tmp3.reinterpret_dimensions([ranks[i], p, 1, 1])
        tmp4 = xerus.Tensor.identity([1, p, 1, ranks[i+1]])
        tmp.offset_add(tmp1, [0, 0, 0, 0])
        tmp.offset_add(tmp2, [0, 0, 2, 0])
        tmp.offset_add(tmp3, [0, 0, 3, 0])
        tmp.offset_add(tmp4, [0, 0, 1, 0])
        C1ex.set_component(i, tmp)
    C1ex = project(C1ex)
    return C1ex  # / C1ex.frob_norm()

# We initialize with a random TT in the needed format
def initialize(p, noo):
    rank = 4  # fix rank
    dim = [p for i in range(0, noo)]
    dim.extend([4 for i in range(0, noo)])
    dim[noo] = 3
    dim[2*noo-1] = 3
    C = xerus.TTOperator.random(dim, [rank for i in range(0, noo-1)])  # initialize randomly
    C.move_core(0, True)
    return C / C.frob_norm()

# We choose different pairs of dimensions and sample sizes to run the algorithm for.
#data_noo_nos = [(6,1000),(6,1400),(6,1800),(6,2200),(6,2600),(6,3000),(6,3400),(6,3800),\
#                (12,1400),(12,1900),(12,2400),(12,2900),(12,3400),(12,3900),(12,4400),(12,4900),\
#                (12,5500),(12,6500),(12,7500),(12,9000),\
#                (18,1600),(18,2200),(18,2800),(18,3400),(18,4000),(18,4600),(18,5200),(18,5800),\
#                (18,6500),(18,7500),(18,8500),(18,10000)]
# pairs used in the simulations in the paper; uncomment to simulate, but this is computationally intensive
data_noo_nos = [(8,3000)]  # specify pairs to simulate
runs = 1                   # specify number of runs for each pair (10 in the paper)
#runs = 10
max_iter = 25              # specify number of sweeps
output = 'data.csv'        # specify name of output file
number_of_restarts = 5     # restart functionality: number of re-initializations if the threshold eps is not reached
eps = 1e-6                 # convergence threshold

tuples = []
for data in data_noo_nos:
    noo = data[0]
    nos = data[1]
    for r in range(0, runs):
        tuples.append((noo, nos, r))
index = pd.MultiIndex.from_tuples(tuples, names=['d', 'm', 'runs'])

# The results of each optimization are stored in a DataFrame
df = pd.DataFrame(np.zeros([len(tuples), max_iter]), index=index)
print(len(index))
print(data_noo_nos)
df['restarts'] = 0

# loop over all pairs of samples, calls hp.run_als for the solution
i1, i2 = xerus.indices(2)
lam = 1  # regularization parameter

# Master iteration
psi = hp.basis(0)  # get basis functions, Legendre
p = len(psi)
for data in data_noo_nos:
    noo = data[0]
    nos = data[1]
    print("(noo,nos) = (" + str(noo) + ',' + str(nos) + ')')
    C2list = hp.build_choice_tensor2(noo)
    for r in range(runs):
        C1ex = exact(noo, p)  # get new model, which will be the exact solution
        print("C1ex frob_norm: " + str(C1ex.frob_norm()))
        x = 2 * np.random.rand(noo, nos) - 1  # create samples
        Alist = hp.build_data_tensor_list2(noo, x, nos, psi, p)  # build dictionary tensor
        y = hp.random_data_selection(Alist, C1ex, C2list, noo, nos)  # calculate rhs from random model C1ex
        Y = xerus.Tensor.from_ndarray(y)
        starts = 0
        errors = [1]
        while np.nanmin(errors) > eps and starts < number_of_restarts:  # restart loop
            starts += 1
            print("Number of starts = " + str(starts))
            C1 = initialize(p, noo)  # each time a random new initialization
            start = time.time()
            errors = hp.run_als(noo, nos, C1, C2list, Alist, C1ex, Y, max_iter, lam)
            print(str(time.time() - start) + ' secs')
        for i in range(1, len(errors)):
            df[i-1].loc[(noo, nos, r)] = errors[i-1]
        df['restarts'].loc[(noo, nos, r)] = starts - 1
        print("Run: " + str(r) + " finished result = " + str(errors))
        df.to_csv(output)
```
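The helper module `hp` is not shown here. Assuming its `basis` function returns the standard Legendre polynomials on $[-1,1]$, the dictionary evaluation it performs for one coordinate can be sketched with NumPy alone (an illustration, not the project's actual helper):

```
import numpy as np
from numpy.polynomial import legendre

p = 4                                   # number of basis functions
x = 2 * np.random.rand(10) - 1          # samples in [-1, 1], as in the notebook
# row k holds P_k evaluated at all samples, i.e. the dictionary for one coordinate
dictionary = np.stack([legendre.legval(x, np.eye(p)[k]) for k in range(p)])
print(dictionary.shape)                 # (4, 10)
```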
# **Codenation** ## Desafio - Semana 8 ### __Importando módulos__ ``` import pandas as pd import seaborn as sns import numpy as np from sklearn.compose import make_column_transformer from sklearn.pipeline import make_pipeline, Pipeline from sklearn.preprocessing import OneHotEncoder, MinMaxScaler, StandardScaler from sklearn.linear_model import LinearRegression from sklearn.ensemble import RandomForestRegressor from sklearn.model_selection import cross_val_score from sklearn.model_selection import GridSearchCV # Configurações para o Seaborn. from IPython.core.pylabtools import figsize sns.set() figsize(12,8) ``` ### __Carregando os dados__ ``` df_train = pd.read_csv('train.csv') df_test = pd.read_csv('test.csv') # Gerando DataFrame de resposta. answer = pd.DataFrame() # A coluna 'NU_INSCRICAO' de df_test deve ser salva em answer para gerar o .csv de resposta que o desafio pede. answer['NU_INSCRICAO'] = df_test['NU_INSCRICAO'] ``` ### **Análise / Ajustes** ``` print(f'DataFrame: df_train\nLinhas: {df_train.shape[0]} |\tColunas: {df_train.shape[1]}') print(f'DataFrame: df_test\nLinhas: {df_test.shape[0]} |\tColunas: {df_test.shape[1]}') ``` Obs: O DataFrame `df_teste` aparenta possuir as features ideais já selecionadas. ``` # Checando se as features de df_teste está contida em df_train. features_train = df_train.columns.to_list() features_test = df_test.columns.to_list() # Se os elementos da lista de features de df_test estiver contido na lista de features de df_train retornará True. feature_train_contains_test = all(feature in features_train for feature in features_test) feature_train_contains_test ``` `df_teste possui todas as features contidas em df_train` Selecionaremos as mesmas features de `df_test` para `df_train` ``` # Gerando lista de features. features = df_test.columns.to_list() # Adicionando o target na lista de features para treino. features.append('NU_NOTA_MT') # Criando os dados de treino e teste train = df_train[features].copy() test = df_test.copy() ``` Análisando o [Dicionário dos Microdados do Enem 2016](https://s3-us-west-1.amazonaws.com/acceleration-assets-highway/data-science/dicionario-de-dados.zip) há algumas features que representam códigos de inscrições, códigos das provas, que dificilmente agregariam algum valor ou teriam algum impacto positivo em nosso modelo, então serão dropados de `train` e `test` ``` # Dropando atributos de código que dificilmente acrescentariam em algo ao nosso modelo. train.drop(['NU_INSCRICAO', 'CO_PROVA_CN', 'CO_PROVA_CH', 'CO_PROVA_LC', 'CO_PROVA_MT'], axis=1, inplace=True) test.drop(['NU_INSCRICAO', 'CO_PROVA_CN', 'CO_PROVA_CH', 'CO_PROVA_LC', 'CO_PROVA_MT'], axis=1, inplace=True) # Checando se há NaN values nas features categóricas. train.select_dtypes('object').isnull().sum() ``` No [Dicionário dos Microdados do Enem 2016](https://s3-us-west-1.amazonaws.com/acceleration-assets-highway/data-science/dicionario-de-dados.zip) o atributo `Q027` corresponde a seguinte pergunta: - `Com que idade você começou a exercer uma atividade remunerada?` Entretanto, esse atributo está correlacionado com o atributo `Q026` que pergunta: - `Você exerce ou já exerceu atividade remunerada?` Caso o candidato responda com `(A, Nunca Trabalhei)`, então o candidato não responderá a pergunta `Q027`. Precisamos tratar esses dados faltantes de alguma forma. Dropar não é a melhor opção pois são muitos dados e a exclusão poderia afetar a performance do modelo no final. 
Podemos atribuir uma variável do tipo `categórica` que "represente" um valor NaN (Nulo) para esses valores faltantes. ``` # Preenchendo os NaN values das features categóricas com '-' train['Q027'] = train['Q027'].fillna('-') test['Q027'] = test['Q027'].fillna('-') # Checando valorer NaN values nas features numéricas train.select_dtypes({'int', 'float'}).isnull().sum() ``` Por se tratar do ENEM, se há valores nulos em features numéricas, faz sentido preencher com `0` já que podemos inferir que esse candidato possa: - Não ter respondido alguma questão. - Ter sido desqualificado por algum motivo. - Ter faltado algum dia de prova ou os dois dias. - Tido algum problema na contabilização das respostas. Excluir essa quantidade de dados pode prejudicar o desempenho do modelo. De forma simplificada, atribuíremos o valor `0` as variáveis faltantes: ``` train.fillna(0, inplace=True) test.fillna(0, inplace=True) ``` #### Checando número de linhas e colunas de ambos DataFrames ``` print(f'Train\nLinhas: {train.shape[0]}\nColunas: {train.shape[1]}') print(f'Test\nLinhas: {test.shape[0]}\nColunas: {test.shape[1]}') ``` `train` possui uma coluna a mais pois contém o target `NU_NOTA_MT` que posteriormente será dropada. ## Plots ``` figsize(15,12) # Visualizando a distribuição das features numéricas train.select_dtypes({'int','float'}).hist() # Checando correlação entre as features numéricas sns.heatmap(data=train.select_dtypes({'int','float'}).corr(), annot=True) ``` #### Verificaremos o desempenho do modelo utilizando as features já selecionadas sem excluir ou adicionar mais nenhuma. ### **Selecionando e separando as colunas por tipo** ``` cat_columns = train.select_dtypes('object').columns num_columns = train.select_dtypes({'int', 'float'}).columns cat_columns num_columns = num_columns.drop('NU_NOTA_MT') num_columns ``` ### **Criando uma instância de** `make_column_transformer` #### Para aplicar os preprocessadores em tipos distinstos de colunas (numeric, object) Nesse caso a instância de `make_column_transformer()` irá aplicar o `OneHotEncoder()` nas features categóricas e irá aplicar o `MinMaxScaler()` nas features numéricas. 
```
column_trans = make_column_transformer((OneHotEncoder(), cat_columns),
                                       (MinMaxScaler(), num_columns),
                                       remainder='passthrough')
```

#### Building a Pipeline with the `make_column_transformer` and `LinearRegression()`

```
lr_model = LinearRegression()
lr_pipe = make_pipeline(column_trans, lr_model)
```

#### Splitting X_train, y_train

```
X_train = train.drop(['NU_NOTA_MT'], axis=1)
y_train = train['NU_NOTA_MT']
X_test = test.copy()
```

# ------ **TESTS** ------

**Since this is a regression problem, we will evaluate the `RMSE (Root Mean Squared Error)`.**

```
def rmse_score(pipeline, x, y):
    return print(f'RMSE: {cross_val_score(pipeline, x, y, cv=10, scoring="neg_root_mean_squared_error").mean().round(4)}')
```

#### OneHotEncoder / MinMaxScaler / LinearRegression

```
# Result
rmse_score(lr_pipe, X_train, y_train)
```

#### OneHotEncoder / MinMaxScaler / RandomForestRegressor

```
rfr = RandomForestRegressor(n_jobs=-1)
rfr_pipe = make_pipeline(column_trans, rfr)

# Result
rmse_score(rfr_pipe, X_train, y_train)
```

#### OneHotEncoder / StandardScaler / RandomForestRegressor

```
column_trans_std_scaler = make_column_transformer(
    (OneHotEncoder(), cat_columns),
    (StandardScaler(), num_columns),
    remainder="passthrough"
)

rf_reg = RandomForestRegressor(n_jobs=-1)
pipe_rf_reg = make_pipeline(column_trans_std_scaler, rf_reg)

# Result
rmse_score(pipe_rf_reg, X_train, y_train)
```

#### OneHotEncoder / StandardScaler / LinearRegression

```
lr_model = LinearRegression()
lr_pipe = make_pipeline(column_trans_std_scaler, lr_model)

# Result
rmse_score(lr_pipe, X_train, y_train)
```

#### The combination that showed the lowest `RMSE`:

##### **OneHotEncoder / StandardScaler / RandomForestRegressor**

## Exhaustive testing with `GridSearchCV()`

We will use GridSearchCV to search for the best parameters to optimize the `RandomForestRegressor()` algorithm.<br>
We will train two models, one with `StandardScaler()` and one with `MinMaxScaler()`, given the small difference between their `RMSE` scores.
#### **`StandardScaler()`**

```
# Creating a new make_column_transformer
column_transform_std = make_column_transformer((OneHotEncoder(), cat_columns),
                                               (StandardScaler(), num_columns),
                                               remainder='passthrough')

# Creating a new RandomForestRegressor
rfr = RandomForestRegressor()

# Creating a Pipeline with OneHotEncoder / StandardScaler / RandomForestRegressor
rfr_pipeline_std = Pipeline(steps=[('column_transform_std', column_transform_std),
                                   ('rfr', rfr)])

# Lists of candidate values for the rfr() parameters
val_estimator = [100, 150, 200, 250]
val_criterion = ["mse"]
val_max_features = ["auto", "log2"]

# Building a dict with the RandomForest parameters to pass to GridSearchCV
grid_params = dict(
    rfr__n_estimators=val_estimator,
    rfr__criterion=val_criterion,
    rfr__max_features=val_max_features)

# Instantiating GridSearchCV
grid_std = GridSearchCV(rfr_pipeline_std, grid_params, cv=10, n_jobs=-1)

# Training the model (note: this step can take quite a while)
grid_std.fit(X_train, y_train)

# Building a DataFrame with the GridSearchCV results
standard_scaler_results = pd.DataFrame(grid_std.cv_results_)
standard_scaler_results

standard_scaler_score = grid_std.best_score_
standard_scaler_score
standard_scaler_score.round(4)
```

#### **`MinMaxScaler()`**

```
# Creating a new make_column_transformer
column_transform_min_max = make_column_transformer((OneHotEncoder(), cat_columns),
                                                   (MinMaxScaler(), num_columns),
                                                   remainder='passthrough')

# Creating a new RandomForestRegressor
rfr = RandomForestRegressor()

# Creating a Pipeline with OneHotEncoder / MinMaxScaler / RandomForestRegressor
rfr_pipeline = Pipeline(steps=[('column_transform_min_max', column_transform_min_max),
                               ('rfr', rfr)])

# Lists of candidate values for the rfr() parameters
val_estimator = [100, 150, 200, 250]
val_criterion = ["mse"]
val_max_features = ["auto", "log2"]

# Building a dict with the RandomForest parameters to pass to GridSearchCV
grid_params = dict(
    rfr__n_estimators=val_estimator,
    rfr__criterion=val_criterion,
    rfr__max_features=val_max_features)

# Instantiating GridSearchCV
grid_min_max = GridSearchCV(rfr_pipeline, grid_params, cv=10, n_jobs=-1)
grid_min_max.fit(X_train, y_train)

min_max_score = grid_min_max.best_score_
min_max_score
min_max_score.round(4)
```

The difference is still minimal even after swapping the data preprocessors, so we will use the model trained with `MinMaxScaler()` to predict the `"NU_NOTA_MT"` scores.

```
pred = pd.Series(grid_min_max.predict(X_test))

figsize(10,8)
sns.distplot(pred)

# Checking for values below 0 or above 1000.0
# If there are values below 0, they should be corrected (rounded up) to 0.
pred.min(), pred.max()

answer['NU_NOTA_MT'] = pred.round(2)
answer

answer.to_csv('answer.csv', index=False, header=True)
```

`Challenge submission result:`

![title](result_score.png)
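The comment in the last code cell mentions correcting any predictions that fall below 0. A minimal sketch of what that correction could look like, assuming the valid ENEM score range of 0 to 1000; this step is my addition and was not part of the submitted notebook:

```
# Hypothetical post-processing (not in the original submission):
# clip predictions to the valid score range before rounding and saving.
pred_clipped = pred.clip(lower=0, upper=1000.0)
answer['NU_NOTA_MT'] = pred_clipped.round(2)
```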
**1**. (25 points)

- Write a **recursive** function that returns the length of the hailstone sequence starting with a positive integer $n$. (15 points)

  The hailstone sequence is defined by the following rules:

```
- If n is 1, stop
- If n is even, divide by 2 and repeat
- If n is odd, multiply by 3 and add 1 and repeat
```

For example, the hailstone sequence starting with $n = 3$ has length 8:

```
- 3, 10, 5, 16, 8, 4, 2, 1
```

Use the `functools` package to avoid duplicate function calls.

- Find the number that gives the longest sequence for starting numbers less than 100,000. Report the number and the length of the generated sequence. (10 points)

```
def rec(n, seq=None):
    if seq is None:
        seq = []
    if n == 1:
        seq.append(n)
    elif n % 2 == 0:
        seq.append(n)
        n //= 2  # integer division keeps the sequence integer-valued
        rec(n, seq)
    elif n % 2 != 0:
        seq.append(n)
        n *= 3
        n += 1
        rec(n, seq)
    return seq

rec(3)

longest = 0
for x in range(1, 100000):
    length = len(rec(x))
    if length > longest:
        longest = length
        long = (x, longest)
print(long)
```

**2**. (25 points)

- Create a `pandas` DataFrame called `df` from the data set at https://bit.ly/2ksKr8f, taking care to only read in the `time` and `value` columns. (5 points)
- Fill all rows with missing values with the value from the last non-missing value (i.e. forward fill) (5 points)
- Convert to a `pandas` Series `s` using `time` as the index (5 points)
- Create a new series `s1` with the rolling average using a shifting window of size 7 and a minimum period of 1 (5 points)
- Report the `time` and value for the largest rolling average (5 points)

```
import pandas as pd

df = pd.read_csv("https://bit.ly/2ksKr8f", usecols=[1,2])
df = df.fillna(method="ffill")
df = df.set_index("time")
s = df.iloc[:,0]
s1 = df.rolling(7,1).mean().iloc[:,0]
mm = df.rolling(7,1).mean().nlargest(1,"value")
mm
```

**3**. (25 points)

- Get information in JSON format about starship 23 from the Star Wars API https://swapi.co/api using the `requests` package (5 points)
- Report the time interval between `created` and `edited` in minutes using the `pendulum` package. It is also ok if you prefer to do this using the standard `datetime` library (5 points)
- Replace the URL values stored at the `films` key with the titles of the actual films (5 points)
- Save the new JSON (with film titles and not URLs) to a file `ship.json` (5 points)
- Read in the JSON file you have just saved as a Python dictionary (5 points)

```
import requests
import pendulum
from pendulum.interval import timedelta, Interval

url = "https://swapi.dev/api/starships/23"
info = requests.get(url).json()
info

pendulum.time(info['created'])

help(pendulum)
help(pendulum.interval)
```

**4**. (25 points)

Use SQL to answer the following questions using the SQLite3 database `anemia.db`:

- Show the tables (not indexes) and their schema (in SQL) in the anemia database (5 points)
- Count the number of male and female patients (5 points)
- Find the average age of male and female patients (as of right now) (5 points)
- Show the sex, hb and name of patients with severe anemia ordered by severity. Severe anemia is defined as
    - Hb < 7 if female
    - Hb < 8 if male

  (10 points)

You may assume `pid` is the PRIMARY KEY and the FOREIGN KEY in the appropriate tables.

Note: Hb is short for hemoglobin levels.

Hint: In SQLite3, you can use `DATE('now')` to get today's date.

```
%load_ext sql
%sql sqlite:///data/har.db

%%sql
SELECT * FROM sqlite_master WHERE type='table'
```
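Problem 1 asks for the `functools` package to avoid duplicate function calls, which the solution above does not use. A minimal sketch of a memoized, length-only variant (an alternative formulation of mine, not the author's submitted code):

```
from functools import lru_cache

@lru_cache(maxsize=None)
def hailstone_len(n):
    """Length of the hailstone sequence starting at n, with results cached."""
    if n == 1:
        return 1
    if n % 2 == 0:
        return 1 + hailstone_len(n // 2)
    return 1 + hailstone_len(3 * n + 1)

# Longest sequence among starting numbers below 100,000.
best = max(range(1, 100000), key=hailstone_len)
print(best, hailstone_len(best))
```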
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import time

from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score
from sklearn.metrics import classification_report, confusion_matrix, roc_curve, roc_auc_score
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn import metrics

start = time.time()
diabets = pd.read_csv('datasets/pima-indians-diabetes.csv').dropna(how = 'all')
print(time.time() - start)

diabets.dtypes
diabets.head()
diabets.tail()
diabets.info()
diabets.describe()

plt.figure(figsize=(10, 10))
df_corr = diabets.corr()
sns.heatmap(df_corr, cmap=sns.diverging_palette(220, 20, n=12), annot=True)
plt.title("Diabets")
plt.show()

X = diabets.drop(['Outcome'], axis = 1).values
y = diabets['Outcome'].values

scalar = StandardScaler()
X = scalar.fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)

def calculate_and_plot_k_neighbors(X_train, X_test, y_train, y_test):
    neighbors = np.arange(1, 23)
    train_accuracy = np.empty(len(neighbors))
    test_accuracy = np.empty(len(neighbors))

    for i, k in enumerate(neighbors):
        knn = KNeighborsClassifier(n_neighbors = k)
        knn.fit(X_train, y_train)
        train_accuracy[i] = knn.score(X_train, y_train)
        test_accuracy[i] = knn.score(X_test, y_test)

    plt.figure(figsize=(10, 8))
    sns.set_style("whitegrid")
    plt.title('k in kNN analysis')
    plt.plot(neighbors, test_accuracy, label = 'Testing Accuracy')
    plt.plot(neighbors, train_accuracy, label = 'Training Accuracy')
    plt.legend()
    plt.annotate('Best accuracy for this model with this k is {0:.2f} %'.format(max(test_accuracy) * 100),
                 xy=(np.argmax(test_accuracy) + 1, max(test_accuracy)),
                 xytext=(4, 0.973),
                 arrowprops=dict(arrowstyle="->", connectionstyle="angle3,angleA=0,angleB=-90"))
    plt.xlabel('Number of Neighbors')
    plt.ylabel('Accuracy')
    plt.show()

def kNN_algorithm(X_train, y_train, X_test, y_test, k):
    global y_pred_kNN
    global kNN_pipeline

    steps = [('impute', SimpleImputer(missing_values = 0, strategy='mean')),
             ('scaler', StandardScaler()),
             ('kNN', KNeighborsClassifier(n_neighbors = k))]
    kNN_pipeline = Pipeline(steps)
    kNN_pipeline.fit(X_train, y_train)
    y_pred_kNN = kNN_pipeline.predict(X_test)

    print(classification_report(y_test, y_pred_kNN))
    print('kNN algorithm accuracy is : {0:.2f} %'.format(kNN_pipeline.score(X_test, y_test) * 100))

def plot_confusion_matrix(cf_matrix, y_test, model_type, cf_size):
    if cf_size == '2x2':
        group_names = ['True Negative','False Positive','False Negative','True Positive']
        group_counts = ['{0:0.0f}'.format(value) for value in cf_matrix.flatten()]
        labels = ['{}\n{}'.format(v1, v2) for v1, v2 in zip(group_names, group_counts)]
        labels = np.asarray(labels).reshape(2,2)

        plt.figure(figsize=(10, 8))
        sns.heatmap(
            cf_matrix,
            annot = labels,
            cmap=sns.cubehelix_palette(100, as_cmap=True, hue=1, dark=0.30),
            fmt='',
            linewidths=1.5,
            vmin=0,
            vmax=len(y_test),
        )
        plt.title(model_type)
        plt.show()
    else:
        plt.figure(figsize=(10, 8))
        sns.heatmap(
            cf_matrix / np.sum(cf_matrix) * 100,
            annot = True,
            cmap=sns.cubehelix_palette(100, as_cmap=True, hue=1, dark=0.30),
            fmt='.2f',
            linewidths=1.5,
            vmin=0,
            vmax=100,
        )
        plt.title(model_type)
        plt.show()

def plot_AUC_ROC_kNN(X_test, y_test, pipeline):
    probs = pipeline.predict_proba(X_test)
    preds = probs[:,1]
    fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
    roc_auc = metrics.auc(fpr, tpr)

    plt.title('Receiver Operating Characteristic')
    plt.plot(fpr, tpr, 'b', label = 'AUC = {0:.2f}'.format(roc_auc_score(y_test, preds)))
    plt.legend(loc = 'lower right')
    plt.plot([0, 1], [0, 1],'r--')
    plt.xlim([0, 1])
    plt.ylim([0, 1])
    plt.ylabel('True Positive Rate')
    plt.xlabel('False Positive Rate')
    plt.show()

    print('ROC AUC score is ' + '{0:.2f}'.format(roc_auc_score(y_test, preds)))

def SVM_algorithm(X_train, X_test, y_train, y_test):
    global y_pred_SVM
    global SVM_pipeline
    global y_prob_SVM

    steps = [('impute', SimpleImputer(missing_values = 0, strategy='mean')),
             ('scaler', StandardScaler()),
             ('SVM', SVC(probability=True))]
    SVM_pipeline = Pipeline(steps)

    parameters = {'SVM__C':[1, 10, 100],
                  'SVM__gamma':[0.1, 0.01]}
    cv = GridSearchCV(SVM_pipeline, cv = 3, param_grid = parameters)
    cv.fit(X_train, y_train)
    y_pred_SVM = cv.predict(X_test)
    y_prob_SVM = cv.predict_proba(X_test)

    print("Accuracy: {0:.2f} %".format(cv.score(X_test, y_test) * 100))
    print(classification_report(y_test, y_pred_SVM))
    print("Tuned Model Parameters: {}".format(cv.best_params_))

def plot_AUC_ROC_SVM_and_LG(X_test, y_test, y_prob_SVM):
    probs = y_prob_SVM
    preds = probs[:,1]
    fpr, tpr, threshold = metrics.roc_curve(y_test, preds)
    roc_auc = metrics.auc(fpr, tpr)

    plt.title('Receiver Operating Characteristic')
    plt.plot(fpr, tpr, 'b', label = 'AUC = {0:.2f}'.format(roc_auc_score(y_test, preds)))
    plt.legend(loc = 'lower right')
    plt.plot([0, 1], [0, 1],'r--')
    plt.xlim([0, 1])
    plt.ylim([0, 1])
    plt.ylabel('True Positive Rate')
    plt.xlabel('False Positive Rate')
    plt.show()

    print('ROC AUC score is ' + '{0:.2f}'.format(roc_auc_score(y_test, preds)))

calculate_and_plot_k_neighbors(X_train, X_test, y_train, y_test)

kNN_algorithm(X_train, y_train, X_test, y_test, 18)

cf_matrix_knn = confusion_matrix(y_test, y_pred_kNN)
plot_confusion_matrix(cf_matrix_knn, y_test, 'kNN Confusion Matrix', '2x2')

plot_AUC_ROC_kNN(X_test, y_test, kNN_pipeline)

SVM_algorithm(X_train, X_test, y_train, y_test)

cf_matrix_svm = confusion_matrix(y_test, y_pred_SVM)
plot_confusion_matrix(cf_matrix_svm, y_test, 'SVM Confusion Matrix', '2x2')

plot_AUC_ROC_SVM_and_LG(X_test, y_test, y_prob_SVM)
```
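The k sweep above is done with an explicit loop over `n_neighbors`. An equivalent way to pick k is to run the same impute/scale/kNN pipeline through `GridSearchCV`; this is a sketch of mine, not part of the original notebook:

```
# Hypothetical alternative to the manual k loop: tune n_neighbors with GridSearchCV,
# reusing the same imputation/scaling steps as kNN_algorithm above.
knn_pipeline = Pipeline([('impute', SimpleImputer(missing_values=0, strategy='mean')),
                         ('scaler', StandardScaler()),
                         ('kNN', KNeighborsClassifier())])
knn_grid = GridSearchCV(knn_pipeline,
                        param_grid={'kNN__n_neighbors': list(range(1, 23))},
                        cv=5, scoring='accuracy')
knn_grid.fit(X_train, y_train)
print(knn_grid.best_params_, round(knn_grid.best_score_, 4))
```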
# Sherpa on simple sine custom model

Let's first see whether I can make this work on a simple custom model with only two parameters: a sine with an amplitude `arg` and phase `ph`.

Following this example: https://sherpa.readthedocs.io/en/4.11.0/quick.html

While taking this as template for the custom model: https://sherpa.readthedocs.io/en/4.11.0/model_classes/usermodel.html#usermodel

```
import numpy as np
import matplotlib.pyplot as plt

from sherpa.models import model
from sherpa.data import Data1D
from sherpa.plot import DataPlot
from sherpa.plot import ModelPlot
from sherpa.fit import Fit
from sherpa.stats import LeastSq
from sherpa.optmethods import LevMar
from sherpa.stats import Chi2
from sherpa.plot import FitPlot
```

## Define the custom model

### First the sine function taking all params and an independent variable

```
def _make_sine(pars, x):
    """Test function"""
    (arg, ph) = pars
    y = arg * np.sin(x + ph)
    return y
```

### Now the custom model class

```
class SineTest(model.RegriddableModel1D):
    """Test model class"""

    def __init__(self, name='sine'):
        self.arg = model.Parameter(name, 'arg', 2, min=0.1, hard_min=0)
        self.ph = model.Parameter(name, 'ph', np.pi)

        model.RegriddableModel1D.__init__(self, name, (self.arg, self.ph))

    def calc(self, pars, x, *args, **kwargs):
        """Evaluate the model"""

        # If given an integrated data set, use the center of the bin
        if len(args) == 1:
            x = (x + args[0]) / 2

        return _make_sine(pars, x)
```

## Create test data

And display with `matplotlib`

```
np.random.seed(0)
x = np.linspace(-5., 5., 200)

arg_true = 3
ph_true = np.pi + np.pi/3
sigma_true = 0.8
err_true = 0.7

y = arg_true * np.sin(x + ph_true)
y += np.random.normal(0., err_true, x.shape)

plt.scatter(x, y, s=3)
plt.title('Fake data')
```

## Create data object

And display the data with `sherpa`.

```
d = Data1D('example_sine', x, y)  # create data object
dplot = DataPlot()                # create data *plot* object
dplot.prepare(d)                  # prepare plot
dplot.plot()
```

## Define the model

```
s = SineTest()
print(s)
```

Visualize the model.

```
mplot = ModelPlot()
mplot.prepare(d, s)
mplot.plot()
```

You can also combine the two plot results to see how good or bad the current model is.

```
dplot.plot()
mplot.overplot()
```

## Select the statistics

Let's do a least-squares statistic, which calculates the numerical difference of the model to the data for each point:

```
stat = LeastSq()
```

## Select optimization

Using Levenberg-Marquardt:

```
opt = LevMar()
print(opt)
```

## Fit the data

### Set up the fit

```
sfit = Fit(d, s, stat=stat, method=opt)
print(sfit)
```

### Actually fit the data

```
sres = sfit.fit()

print("Fit succeeded?")
print(sres.succeeded)

# Show fit results
print(sres.format())
```

The `LevMar` optimiser calculates the covariance matrix at the best-fit location, and the errors from this are reported in the output from the call to the `fit()` method. In this particular case - which uses the `LeastSq` statistic - the error estimates do not have much meaning. As discussed below, Sherpa can make use of error estimates on the data to calculate meaningful parameter errors.

```
# Plot the fit over the data
fplot = FitPlot()
mplot.prepare(d, s)
fplot.prepare(dplot, mplot)
fplot.plot()

# Extracting the parameter values
print(sres)

ans = dict(zip(sres.parnames, sres.parvals))
print(ans)
print("The fitted parameter 'arg' is: {:.2f}".format(ans['sine.arg']))
```

The model, and its parameter values, can also be queried directly, as they have been changed by the fit:

```
print(s)
print(s.arg)
```

## Including errors

```
dy = np.ones(x.size) * err_true

# Create data with errors
de = Data1D('sine-w-errors', x, y, staterror=dy)
print(de)

# Plot the data - it will have error bars now
deplot = DataPlot()  # create data *plot* object
deplot.prepare(de)   # prepare plot
deplot.plot()
```

The statistic is changed from least squares to chi-square (Chi2), to take advantage of this extra knowledge (i.e. the Chi-square statistic includes the error value per bin when calculating the statistic value):

```
ustat = Chi2()

# Do the fit
se = SineTest("sine-err")
sefit = Fit(de, se, stat=ustat, method=opt)
seres = sefit.fit()
print(seres.format())

if not seres.succeeded:
    print(seres.message)
```

Since the error value is independent of bin, the fit results should be the same here (that is, the parameters in `s` are the same as in `se`):

```
print(s)
print(se)
```

The difference is that more of the fields in the result structure are populated: in particular the rstat and qval fields, which give the reduced statistic and the probability of obtaining this statistic value respectively:

```
print(seres)
```

## Errors from Hessian

```
calc_errors = np.sqrt(seres.extra_output['covar'].diagonal())
arg_err = calc_errors[0]
ph_err = calc_errors[1]
print('arg_err: {}'.format(arg_err))
print('ph_err: {}'.format(ph_err))
```

## More thorough error analysis

Proceed as in: https://sherpa.readthedocs.io/en/4.11.0/quick.html#error-analysis

## More stuff:

On the data class: https://sherpa.readthedocs.io/en/4.11.0/data/index.html

Model instances - freezing and thawing parameters, resetting them, limits, etc.: https://sherpa.readthedocs.io/en/4.11.0/models/index.html#

Evaluating the model: https://sherpa.readthedocs.io/en/4.11.0/evaluation/index.html
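A minimal sketch of the "more thorough error analysis" referenced above, following the pattern in the linked quick-start guide; I am assuming the `Confidence` estimator from `sherpa.estmethods`, and this cell is my addition rather than part of the original notebook:

```
# Sketch only: estimate 1-sigma confidence intervals for the chi-square fit above
# (assumes sherpa.estmethods.Confidence and the sefit object defined earlier).
from sherpa.estmethods import Confidence

sefit.estmethod = Confidence()
errors = sefit.est_errors()
print(errors.format())
```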
# Writing Great Code

Y.-W. FANG at Kyoto University

This chapter is extremely important for Python beginners. It focuses on writing great code, and you will see that it is very useful for helping you develop good habits.

## Code style

Pythonistas (veteran Python developers) take great pride in the Python language, because even people who do not know Python can read some simple source code and understand what the program does. Readability is at the heart of Python's design. The key to Python's readability lies in its thorough coding guidelines (Python Enhancement Proposals PEP 20 and PEP 8) and "Pythonic" idioms.

### PEP8

PEP8 is the de facto code style guide for Python. It covers naming conventions, code layout, whitespace (tabs versus spaces), and other similar style topics.

Following PEP 8 when writing Python code is well worth it: it makes it easier to communicate and collaborate with other developers during development, and it makes the code easier to read.

pycodestyle (originally named pep8) is a tool that points out the places where Python code does not conform to PEP 8. Installation is simple: pip install pycodestyle. It is used as follows:

> pycodestyle python.py

Another tool is autopep8, which can format Python code directly; the command is

> autopep8 --in-place python.py

If you do not want to re-standardize the code in place, remove --in-place, i.e. use

> autopep8 python.py

which prints the standard-formatted code to the screen. In addition, the --aggressive flag performs more thorough formatting changes and can be applied multiple times to make the code conform even more closely to the standard.

### PEP20 (a.k.a The Zen of Python)

PEP 20 is the set of guiding principles for decision making in Python. Its full text is:

```
import this
```

In fact, PEP 20 contains only 19 aphorisms, not 20; it is said that the twentieth has never been written down.

### General Advice

This section contains style concepts that are hopefully easy to accept without debate. They can be applicable to languages other than Python.

#### Errors should never pass silently/Unless explicitly silenced

Error handling in Python is done using the 'try' statement. Don't let errors pass silently: always explicitly identify by name the exceptions you will catch, and handle only those exceptions.

#### Function arguments should be intuitive to use

Although the choice between positional arguments and optional arguments when defining a function in Python is up to the person writing the program, it is best to define functions in only one, very explicit way:

- easy to read (meaning the name and arguments need no explanation)
- easy to change (meaning adding a new keyword argument won't break other parts of the code)

#### We are responsible users

Although many tricks are allowed in Python, some of them are potentially dangerous. **A good example is that any client code can override an object's properties and methods: there is no 'private' keyword in Python.** **The main convention for private properties and implementation details is to prefix all "internals" with an underscore, e.g., sys._getframe.**

#### Return values from one place

When possible, keep a single exit point--it's difficult to debug functions when you first have to identify which return statement is responsible for your result.

### Conventions

The book gives several examples of conventions, covering, among other things, operations on lists and checks for 'logical equality'; see the original book for details.

```
a = [1, 2, 3, 4, 5]

b = [x*2 for x in a]

c = list(map(lambda i: i+2, a))

print(a)
print(b)
print(c)
```

### Idioms

Good idioms must be consciously acquired.

#### Unpacking

If you know the length of a list or tuple, you can assign names to its elements with unpacking.
```
filename, ext = "my_photo.orig.png".rsplit(".", 1)
print(filename, 'is a', ext, 'file.')
```

We can use unpacking to swap variables as well:

```
a = 1
b = 2
a, b = b, a
print('a is', a)
print('b is', b)
```

Nested unpacking can also work:

```
a, (b, c) = 1, (2, 3)
print(a)
print(b)
print(c)
```

In Python 3, a new method of extended unpacking was introduced by PEP 3132:

```
a, *rest = [1, 2, 3]
x, *y, z = [5, 28, 33, 4]
print(a)
print(rest)
print(x)
print(y)
print(z)
```

#### Ignoring a value

When unpacking, there may be variables that we never actually use in the program; in that case we can use a double underscore (__):

```
filename = 'foobar.txt'
basename, __, ext = filename.rpartition('.')
print(basename)
print(__)
print(ext)
```

#### Creating a length-$N$ list of the same thing

Use the Python list * operator to make a list of the same immutable item:

```
four_nones = [None] * 4
print(four_nones)
```

For immutable items, the '*' operator simply creates a list of $N$ copies of the same value. But be careful with mutable objects: because lists are mutable, the * operator will create a list of $N$ references to the *same* list, which is not likely what you want. Instead, use a list comprehension.

```
four_lists = [[1]] * 4
print(four_lists)
four_lists[0].append('Ni')
print(four_lists)
```

As the code above shows, although we only intended to change the first element, all four elements had the string 'Ni' appended.

That result is not what we want; if we only want to change the first element, we should use the following code instead:

```
four_lists = [[1] for __ in range(4)]
print(four_lists)
four_lists[0].append('Ni')
print(four_lists)
```

A common idiom for creating strings is to use str.join() on an empty string. This idiom can be applied to lists and tuples:

```
letters = ['l', 'e', 't', 't', 'e', 'r']
word = ''.join(letters)
print(word)

letters = ('l', 'e', 't')
word = ''.join(letters)
print(word)
```

Sometimes we need to search a collection for something. Let's look at two versions, one with a list and one with a set:

```
x = list(('foo', 'foo', 'bar', 'baz'))
y = set(('foo', 'foo', 'bar', 'baz'))
print(x)
print(y)
'foo' in x
'foo' in y
```

Although the two boolean tests at the end look like they give the same result, `'foo' in y` makes use of the fact that sets (and dictionaries) in Python are hash tables, so the lookup performance of the two examples is different. Python will have to step through each item in the list to find a matching case, which is time-consuming (for large collections). But finding keys in the set can be done quickly using the hash lookup. Also, sets and dictionaries drop duplicate entries, which is why dictionaries cannot have two identical keys.

#### Exception-safe contexts

I don't really understand this part yet, so I am not taking notes on it; I will fill it in when I re-read it later --- 2018 May 7th

### Common Gotchas

Although Python tries to keep the language consistent and simple as it evolves, and to avoid surprising behavior, in practice some things still look quite "unexpected" to beginners.
#### Mutable default arguments

What surprises beginners the most is the way Python treats mutable default arguments in function definitions. Take the following example:

```
def append_to(element, to=[]):
    to.append(element)
    return to

my_list = append_to(10)
print(my_list)

your_list = append_to(12)
print(my_list)
print(your_list)
```

This result is surprising both to people familiar with C and Fortran and to beginners in general, because they probably expect the output to be `[10]`, `[10]`, `[12]`. They (and that certainly included my past self) tend to assume that a new list is created each time the function is called, unless a second argument is given for `to`. In fact, in Python the default list is created only once, and that same list keeps being used across later calls.

In the code above, after the first call the returned list `to` contains only 10, but on the second call the same list is reused and the element 12 is appended. Both `my_list` and `your_list` point to that one list, so when they are printed at the end they are both [10, 12]. In Python, once a mutable default has been modified like this, the modified object keeps being used by subsequent calls.

To avoid the situation above, we can do the following: Create a new object each time the function is called, by using a default arg to signal that no argument was provided (None is often a good choice):

```
def append_to(element, to=None):
    b = to is None
    print(b)
    if to is None:
        to = []
    to.append(element)
    return to

mylist = append_to(10)
print(mylist)

yourlist = append_to(12, [2])
print(mylist)
print(yourlist)
```

In the code above, on the first call I did not pass a second argument, so `to` defaults to None and a fresh empty list is created; after the first call `mylist` is [10]. On the second call I passed [2] as the second argument, so 12 is appended to that list and the resulting `yourlist` is [2, 12].

*When this gotcha isn't a gotcha:* Sometimes we can specifically 'exploit' this behavior to maintain the state between calls of a function. This is often done when writing a caching function (which stores results in-memory), for example: (I don't fully understand this passage yet, but I am writing it down for now.)

```
def time_consuming_function(x, y, cache={}):
    args = (x, y)
    if args in cache:
        return cache[args]
    # Otherwise this is the first time with these arguments
    # Do the time-consuming operation
    result = ...  # placeholder for the actual expensive computation
    cache[args] = result
    return result
```

#### Late binding closures

Another common source of confusion is the way Python binds its variables in closures (or in the surrounding global scope). Let's look at an example first:

```
def create_multipliers():
    return [lambda x: i*x for i in range(5)]

for multiplier in create_multipliers():
    print(multiplier(2), end="...")
print()

# You would think the result is 0...2...4...6...8...
# but what you actually get is 8...8...8...8...8...
# About late binding closures, see also the discussion on Stack Overflow:
# https://stackoverflow.com/questions/36463498/late-binding-python-closures
```

Such a result would be surprising for Python beginners. Why do we get this result? Python's closures are *late binding*. This means that the values of variables used in closures are looked up at the time the inner function is called.

```
def create_multipliers():
    multipliers = []

    for i in range(5):
        def multiplier(x):
            return i * x
        multipliers.append(multiplier)

    return multipliers

create_multipliers
```

What we should do instead:

```
def create_multipliers():
    return [lambda x, i=i: i*x for i in range(5)]
```

Alternatively, we can use the functools.partial() function:

```
from functools import partial
from operator import mul

def create_multipliers():
    return [partial(mul, i) for i in range(5)]
```

### Structuring Your Project

```
import unittest

def fun(x):
    return x+1

class MyTest(unittest.TestCase):
    def test_that_fun_adds_one(self):
        self.assertEqual(fun(3), 4)

class MySecondTest(unittest.TestCase):
    def test_that_fun_fails_when_not_adding_number(self):
        self.assertRaises(TypeError, fun, 'multiply six by nine')
```
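A small usage note of my own (not from the original notes): to actually run the test cases defined above from a script or a notebook cell, the standard `unittest.main()` entry point can be used; in a notebook the argv and exit arguments need to be tamed so that unittest does not try to parse Jupyter's command line or stop the kernel.

```
# Run the TestCase classes defined above (my addition, not part of the original notes).
if __name__ == '__main__':
    unittest.main(argv=['first-arg-is-ignored'], exit=False)
```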
## Dependencies

```
import glob
import numpy as np
import pandas as pd
from transformers import TFDistilBertModel
from tokenizers import BertWordPieceTokenizer
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input, Dropout, GlobalAveragePooling1D, Concatenate


# Auxiliary functions
# Transformer inputs
def preprocess_test(text, context, tokenizer, max_seq_len):
    context_encoded = tokenizer.encode(context)
    context_encoded = context_encoded.ids[1:-1]
    encoded = tokenizer.encode(text)
    encoded.pad(max_seq_len)
    encoded.truncate(max_seq_len)
    input_ids = encoded.ids
    attention_mask = encoded.attention_mask
    token_type_ids = ([0] * 3) + ([1] * (max_seq_len - 3))
    input_ids = [101] + context_encoded + [102] + input_ids

    # update input ids and attention mask size
    input_ids = input_ids[:-3]
    attention_mask = [1] * 3 + attention_mask[:-3]

    x = [np.asarray(input_ids, dtype=np.int32),
         np.asarray(attention_mask, dtype=np.int32),
         np.asarray(token_type_ids, dtype=np.int32)]

    return x

def get_data_test(df, tokenizer, MAX_LEN):
    x_input_ids = []
    x_attention_masks = []
    x_token_type_ids = []
    for row in df.itertuples():
        x = preprocess_test(getattr(row, "text"), getattr(row, "sentiment"), tokenizer, MAX_LEN)
        x_input_ids.append(x[0])
        x_attention_masks.append(x[1])
        x_token_type_ids.append(x[2])

    x_data = [np.asarray(x_input_ids),
              np.asarray(x_attention_masks),
              np.asarray(x_token_type_ids)]

    return x_data

def decode(pred_start, pred_end, text, tokenizer):
    offset = tokenizer.encode(text).offsets

    if pred_end >= len(offset):
        pred_end = len(offset)-1

    decoded_text = ""
    for i in range(pred_start, pred_end+1):
        decoded_text += text[offset[i][0]:offset[i][1]]
        if (i+1) < len(offset) and offset[i][1] < offset[i+1][0]:
            decoded_text += " "

    return decoded_text
```

# Load data

```
test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv')

print('Test samples: %s' % len(test))
display(test.head())
```

# Model parameters

```
MAX_LEN = 128
base_path = '/kaggle/input/qa-transformers/distilbert/'
base_model_path = base_path + 'distilbert-base-uncased-distilled-squad-tf_model.h5'
config_path = base_path + 'distilbert-base-uncased-distilled-squad-config.json'

input_base_path = '/kaggle/input/6-tweet-train-distilbert-lower-bce-v2/'
tokenizer_path = input_base_path + 'vocab.txt'
model_path_list = glob.glob(input_base_path + '*.h5')
model_path_list.sort()

print('Models to predict:')
print(*model_path_list, sep = "\n")
```

# Tokenizer

```
tokenizer = BertWordPieceTokenizer(tokenizer_path, lowercase=True)
```

# Pre process

```
test['text'].fillna('', inplace=True)
test["text"] = test["text"].apply(lambda x: x.lower())

x_test = get_data_test(test, tokenizer, MAX_LEN)
```

# Model

```
def model_fn():
    input_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
    attention_mask = Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
    token_type_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='token_type_ids')

    base_model = TFDistilBertModel.from_pretrained(base_model_path, config=config_path, name="base_model")
    sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids})
    last_state = sequence_output[0]
    x = GlobalAveragePooling1D()(last_state)

    y_start = Dense(MAX_LEN, activation='sigmoid', name='y_start')(x)
    y_end = Dense(MAX_LEN, activation='sigmoid', name='y_end')(x)

    model = Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[y_start, y_end])

    return model
```

# Make predictions

```
NUM_TEST_IMAGES = len(test)
test_start_preds = np.zeros((NUM_TEST_IMAGES, MAX_LEN))
test_end_preds = np.zeros((NUM_TEST_IMAGES, MAX_LEN))

for model_path in model_path_list:
    print(model_path)
    model = model_fn()
    model.load_weights(model_path)

    test_preds = model.predict(x_test)
    test_start_preds += test_preds[0] / len(model_path_list)
    test_end_preds += test_preds[1] / len(model_path_list)
```

# Post process

```
test['start'] = test_start_preds.argmax(axis=-1)
test['end'] = test_end_preds.argmax(axis=-1)

test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], tokenizer), axis=1)
```

# Test set predictions

```
submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv')
submission['selected_text'] = test["selected_text"]
submission.to_csv('submission.csv', index=False)
submission.head(10)
```
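One caveat in the post-processing above: the start and end indices are taken as independent argmaxes, so the predicted end can land before the predicted start, in which case `decode` returns an empty string. A minimal guard for that case (my assumption of a reasonable fix, not part of the original pipeline), applied before building `selected_text`:

```
# Hypothetical guard (not in the original notebook): if the predicted end index
# precedes the start index, fall back to the start index so decode keeps one token.
test['end'] = np.where(test['end'] < test['start'], test['start'], test['end'])
```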
# Regression Exploration

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
```

# Does shoe size depend on height?

```
shoes = pd.read_csv('/users/elizabeth/downloads/shoes.csv')

plt.scatter(shoes['HEIGHT (IN)'], shoes['SHOE SIZE (FT)'])
plt.xlabel('Height (in inches)')
plt.ylabel('Shoe Size (in feet)')
plt.title('Shoe Size by Height')

# computing the correlation coefficient using the long method:
def standardize(anylist):
    '''convert any array of numbers to std units '''
    return (anylist - np.mean(anylist)) / np.std(anylist)

standardize_x = standardize(shoes['HEIGHT (IN)'])
standardize_y = standardize(shoes['SHOE SIZE (FT)'])

# correlation coefficient r
r = np.mean(standardize_x * standardize_y)
r

# computing the correlation coefficient using np.corrcoef()
np.corrcoef(shoes['HEIGHT (IN)'], shoes['SHOE SIZE (FT)'])

# fitting a linear regression line
plt.scatter(standardize_x, standardize_y)  # graphs the scatter plot of data
xvals = np.arange(-4, 3, 0.3)  # setting the range of x values for regression line
yvals = r * xvals  # the regression y values (correlation coefficient * x values)
plt.plot(xvals, yvals, color = 'g')  # graphing the linear regression
plt.title('Distribution of Shoe Sizes by Height')
plt.xlabel('Height (in inches)')
plt.ylabel('Shoe Size (in feet)')

# predicting shoe size
m, b = np.polyfit(shoes['HEIGHT (IN)'], shoes['SHOE SIZE (FT)'], 1)
m, b

# predicting using the linear regression: y = mx + b
# My height: 63 inches
my_height = 63
my_shoe_size = (m * my_height) + b
my_shoe_size
```

- My actual shoe size is an 8. The model is off by about a size.

# Testing Chebychev's Inequality

As per Chebychev's inequality, at least 88.88% of the data should be within 3 standard deviations of the mean. Is that true for this dataset?

```
delays = pd.read_excel('/users/elizabeth/downloads/flightdelays.xlsx')
delays.shape

# cleaning data
delays = delays.dropna(subset=['ARRIVAL_DELAY'])
delays.shape

plt.style.use('fivethirtyeight')
plt.hist(delays['ARRIVAL_DELAY'], normed = True, bins = 15, ec = 'k')
plt.title('distribution of arrival delays', size = 'medium')
plt.xlabel('Delay Amount')
plt.ylabel('Frequency')
```

- As per Chebychev's inequality, at least 88.88% of the data should be within 3 standard deviations of the mean...

```
standardized_delays = (delays['ARRIVAL_DELAY'] - np.mean(delays['ARRIVAL_DELAY'])) / np.std(delays['ARRIVAL_DELAY'])

upper_bound = np.mean(delays['ARRIVAL_DELAY']) + 3 * np.std(delays['ARRIVAL_DELAY'])
lower_bound = np.mean(delays['ARRIVAL_DELAY']) - 3 * np.std(delays['ARRIVAL_DELAY'])

within_3_SDs = np.sum(np.logical_and(delays['ARRIVAL_DELAY'] < upper_bound, delays['ARRIVAL_DELAY'] > lower_bound))
print(within_3_SDs / len(delays) * 100)
```

- Yes, Chebychev's inequality is true for this dataset.

```
# Determining which airline to avoid...
grouped_delays = delays.groupby(['AIRLINE'], as_index=False)
grouped_delays.agg({'ARRIVAL_DELAY' : 'mean'}).sort_values('ARRIVAL_DELAY', ascending = False)
```

- Based on this data alone, it would be a good idea to avoid Hawaiian Airlines (HA) because it has the highest average delay time.

# NFL data!

The NFL players data has the heights and weights of some players in the NFL. It also tells you what position they play in.

Things I want to do:

1. Plotting a histogram of the heights. Does this seem normally distributed? What is the mean? What is the median? Are there any significant outliers?
2. Plotting a histogram of the weights. Does this seem normally distributed?
3. Question: does the distribution of weights depend on the position?

#### 1: Heights

```
nfl = pd.read_csv('/users/elizabeth/downloads/nfl_players.csv', encoding='latin-1')
nfl.head()

#Plot a histogram of the heights.
plt.hist(nfl['Height'], bins = np.arange(60, 85), ec = 'blue', color = 'lightskyblue')
plt.title('distribution of heights', size = 'medium')
plt.xlabel('Player Height')
plt.ylabel('Frequency')
```

- The distribution looks vaguely but not quite normal.

```
# Mean?
np.mean(nfl['Height'])
```

- The mean height of NFL players is 74.013 inches.

```
# Median?
np.median(nfl['Height'])
```

- The median height of NFL players is 74 inches.

```
# Outliers?
np.max(nfl['Height']), np.min(nfl['Height'])
```

- The minimum and maximum values are not extreme enough to be outliers.

#### 2: Weights

```
#Plot a histogram of the weights.
plt.hist(nfl['Weight'], bins = np.arange(150, 370, 10), ec = 'blue', color = 'lightskyblue')
plt.title('distribution of weights', size = 'medium')
plt.xlabel('player weight')
plt.ylabel('frequency')
```

- The distribution of weights is not normal.

```
# Mean?
np.mean(nfl['Weight'])

# Median?
np.median(nfl['Weight'])

# Outliers?
np.max(nfl['Weight']), np.min(nfl['Weight'])

nfl[['Weight']].sort_values('Weight', ascending=True)[:5]
nfl[['Weight']].sort_values('Weight', ascending=False)[:5]
```

- There seem to be no significantly large outliers, because the values at both ends of the weight spectrum increase and decrease gradually.

### Does the distribution of weights depend on the position?

```
nfl.columns

weight_position = nfl[['Position', 'Weight']]
weight_position.head()

# Histogramming the Distributions of Weights per Position
mpl.style.use('seaborn')
_ = nfl.hist(column = ['Weight'], by= ['Position'], figsize = (15, 40), layout = (12, 2),
             sharey = False, sharex = False, bins = np.arange(150, 364, 5))
```

- As we can see in the histograms above, the distribution of weights changes across positions. Therefore, yes, the distribution of weights depends on the position.

# The making of a popular song...

I'll use this music dataset to explore what goes into the making of a popular song. https://think.cs.vt.edu/corgis/csv/music/music.html.

#### a) Understanding the data, cleaning...

```
music = pd.read_csv('/users/elizabeth/downloads/music.csv')
music.shape, music.columns

music = music.rename(columns = {'terms' : 'genre'})
print(music.shape)
music.head(5)

#removing songs where song.hotttnesss is 0 (outliers)
music = music[music['song.hotttnesss'] != 0]
```

#### b) Who are the top 10 artists in terms of artist hotness?

```
sorted_hot_artists = music.sort_values(['artist.hotttnesss'], ascending=False)
top10 = sorted_hot_artists['artist.name'].unique()[:10]

print('These are the top 10 artists by hotness:')
for x in top10:
    print(x)
```

#### Which are the top 10 songs in terms of hotness?

```
song_hotness = music[['song.hotttnesss', 'artist.name', 'title']]
top_10_songs = song_hotness.sort_values('song.hotttnesss', ascending = False)[:10]
top_10_songs[['title', 'song.hotttnesss', 'artist.name']]
```

- These are the top 10 songs in terms of hotness.

#### Investigating Familiarity's Effect on Song Hotness.

```
music = music.dropna(subset = ['familiarity', 'song.hotttnesss'])
r = np.corrcoef(music['familiarity'], music['song.hotttnesss'])
r
```

- The correlation coefficient r, 0.5439, tells us that there is a significant relationship between familiarity and song hotness.
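As a quick check on the word "significant" (a side step that is not part of the original notebook), the correlation can be paired with a p-value; `scipy.stats.pearsonr` returns both, assuming the cleaned `music` dataframe from above is still in scope:

```
# Hedged sketch: p-value for the familiarity/hotness correlation.
from scipy import stats

r_fam, p_fam = stats.pearsonr(music['familiarity'], music['song.hotttnesss'])
print('r = {:.4f}, p-value = {:.3g}'.format(r_fam, p_fam))
```

A very small p-value here would support reading the r of about 0.54 as a real association rather than noise.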
```
m, b = np.polyfit(music['familiarity'], music['song.hotttnesss'], 1)

plt.figure(figsize=(10,5))
plt.scatter(music['familiarity'], music['song.hotttnesss'], color = 'dodgerblue')

#regression line
xvals = np.arange(0, 1.1, 0.1)
yvals = m * xvals + b
plt.plot(xvals, yvals, color = 'navy')

plt.title('Song Hotness by Familiarity', size = 'medium')
plt.xlabel('familiarity')
plt.ylabel('hotness')
```

- The scatterplot shows that higher familiarity is strongly associated with more popular songs; in general, the higher the familiarity, the hotter the song should be. This may be because people like to listen to songs over and over, and familiar songs are catchy and easy to sing along to.

#### Looking at Duration's Effect on Song Hotness.

```
music = music.dropna(subset = ['duration', 'song.hotttnesss'])

#removing that one nasty outlier song that lasted over 2,050 seconds
cleaned_music = music[music['duration'] < np.max(music['duration'])]

r = np.corrcoef(cleaned_music['duration'], cleaned_music['song.hotttnesss'])
r

plt.figure(figsize=(10,5))
m, b = np.polyfit(cleaned_music['duration'], cleaned_music['song.hotttnesss'], 1)
plt.scatter(cleaned_music['duration'], cleaned_music['song.hotttnesss'], color = 'royalblue')

#regression line
xvals = np.arange(0, 1760, 5)
yvals = m * xvals + b
plt.plot(xvals, yvals, color = 'k')

plt.title('Song Hotness by Duration', size = 'medium')
plt.xlabel('duration')
plt.ylabel('hotness')
```

- The scatter plot shows that as song duration increases past about 750 seconds, there are fewer songs with high hotness. This is probably because people have short attention spans and like music in the 3 - 4 minute range; 30-minute songs are too long to hit the popular charts.

#### Key's Effect on Song Hotness? Does the key of a song affect its hotness?

```
music = music.dropna(subset = ['key', 'song.hotttnesss'])
r = np.corrcoef(music['key'], music['song.hotttnesss'])
r

#removing that one nasty outlier key:
cleaned_music = music[music['key'] < np.max(music['key'])]
r = np.corrcoef(cleaned_music['key'], cleaned_music['song.hotttnesss'])
r
```

- The correlation coefficient 0.001238 tells us that there is basically no linear relationship between key and song hotness.

```
plt.figure(figsize=(10,5))
m, b = np.polyfit(cleaned_music['key'], cleaned_music['song.hotttnesss'], 1)
plt.scatter(cleaned_music['key'], cleaned_music['song.hotttnesss'], color = 'blue')

#regression line
xvals = np.arange(0, 12, 1)
yvals = m * xvals + b
plt.plot(xvals, yvals, color = 'k')

plt.title('Song Hotness by Key', size = 'medium')
plt.xlabel('key')
plt.ylabel('hotness')
```

- The scatter plot shows that hot songs are distributed fairly evenly across keys. Therefore key has very little to no influence on a song's hotness.

#### Loudness' Effect on Song Hotness.

```
music = music.dropna(subset = ['loudness', 'song.hotttnesss'])
r = np.corrcoef(music['loudness'], music['song.hotttnesss'])
r
```

- The correlation coefficient r, 0.22587, tells us that there is a slight linear relationship between song loudness and song hotness.
```
plt.figure(figsize=(10,5))
m, b = np.polyfit(music['loudness'], music['song.hotttnesss'], 1)
plt.scatter(music['loudness'], music['song.hotttnesss'], color = 'dodgerblue')

#regression line
xvals = np.arange(-45, 0, 1)
yvals = m * xvals + b
plt.plot(xvals, yvals, color = 'navy')

plt.title('Song Hotness by Loudness', size = 'medium')
plt.xlabel('loudness')
plt.ylabel('hotness')
```

- The scatter plot shows that hot songs are concentrated among the louder tracks; loudness has some influence on a song's hotness. This might be because loud songs are popular among young adults for dancing.

```
music.columns
```

#### Artist Hotness' Effect on Song Hotness.

```
music = music.dropna(subset = ['artist.hotttnesss', 'song.hotttnesss'])
r = np.corrcoef(music['artist.hotttnesss'], music['song.hotttnesss'])
r
```

- The correlation coefficient r = 0.5223 shows that there is a relatively strong linear relationship between an artist's hotness and the song's hotness.

```
plt.figure(figsize=(10,5))
m, b = np.polyfit(music['artist.hotttnesss'], music['song.hotttnesss'], 1)
plt.scatter(music['artist.hotttnesss'], music['song.hotttnesss'], color = 'royalblue')

#regression line
xvals = np.arange(0, 1.2, .1)
yvals = m * xvals + b
plt.plot(xvals, yvals, color = 'navy')

plt.title('Song Hotness by Artist Hotness', size = 'medium')
plt.xlabel('artist hotness')
plt.ylabel('song hotness')
```

- The scatterplot shows that there is a considerable relationship between an artist's hotness and their song's hotness. An explanation: the hotter an artist is, the larger their fanbase is. The larger the fanbase, the larger the potential receptive audience for the song, which means higher hotness.

#### Creating a multivariable linear regression model that helps predict song hotness!

```
import statsmodels.api as sm

# create a df of the independent variables
X = music[['artist.hotttnesss', 'familiarity', 'loudness']]

# dependent variable. what are we predicting?
y = music['song.hotttnesss']

# we are fitting y = a*x1 + b*x2 + c*x3 + intercept, not a model forced through the origin
X = sm.add_constant(X)

# OLS - ordinary least squares.
# best possible hyperplane through the data
# best = minimize sum of square distances
est = sm.OLS(y, X).fit()
est.summary()
```

#### - The model is:
##### predicted song hotness = 0.1273 + 0.3121x + 0.3652y + 0.0032z

... where x is the song artist's hotness, y is the song's familiarity, and z is the song's loudness.

# Building a regression model to predict bodyfat

```
bodyfat = pd.read_excel('/users/elizabeth/downloads/BodyFat.xls')
bodyfat.head()

def correlation(df, x, y):
    # r = avg(standardize(x) * standardize(y))
    x_std = standardize(df[x])
    y_std = standardize(df[y])
    return np.mean(x_std * y_std)

# The variables below have significant correlations with bodyfat
print(correlation(bodyfat, 'BODYFAT', 'WEIGHT'))
print(correlation(bodyfat, 'BODYFAT', 'DENSITY'))
print(correlation(bodyfat, 'BODYFAT', 'ADIPOSITY'))
print(correlation(bodyfat, 'BODYFAT', 'CHEST'))
print(correlation(bodyfat, 'BODYFAT', 'ABDOMEN'))
print(correlation(bodyfat, 'BODYFAT', 'HIP'))
print(correlation(bodyfat, 'BODYFAT', 'THIGH'))
```

#### Making the linear regression model for bodyfat

```
# create a df of the independent variables
X = bodyfat[['WEIGHT', 'DENSITY', 'ADIPOSITY', 'CHEST', 'ABDOMEN', 'HIP', 'THIGH']]

# dependent variable. what are we predicting?
y = bodyfat['BODYFAT']

X = sm.add_constant(X)
# OLS - ordinary least squares.
# best possible hyperplane through the data
# best = minimize sum of square distances
est = sm.OLS(y, X).fit()
est.summary()
```

#### - The model is:
##### predicted bodyfat = 415.1138 - 0.0033x - 381.1288y - 0.0422z + 0.0338a + 0.0428b + 0.0214c - 0.0285d

... where x is the weight, y is the density, z is the adiposity, a is the chest size, b is the abdomen size, c is the hip size, and d is the thigh size.
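To turn the fitted summary into an actual prediction (a minimal sketch that is not part of the original notebook; the measurements below are made up for illustration), `est.predict` can be applied to a new row with the same columns as `X`, including the constant:

```
# Hedged sketch: predict bodyfat for one hypothetical set of measurements.
# The values are illustrative; the column order must match X (const first).
new_person = pd.DataFrame([{'const': 1.0, 'WEIGHT': 180.0, 'DENSITY': 1.05, 'ADIPOSITY': 25.0,
                            'CHEST': 100.0, 'ABDOMEN': 90.0, 'HIP': 100.0, 'THIGH': 60.0}])
print(est.predict(new_person[X.columns]))
```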
# Importing The Libraries and The DataSets

```
import pandas as pd
import numpy as np
import featuretools as ft
from math import sin, cos, sqrt, atan2, radians
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, LabelBinarizer, LabelEncoder
from sklearn.manifold import TSNE
import warnings
warnings.filterwarnings('ignore')

tickets = pd.read_csv('ticket_data.csv')
cities = pd.read_csv('cities.csv')
stations = pd.read_csv('stations.csv')
providers = pd.read_csv('providers.csv')
```

# **Part 1: Answering the Questions**

I will start by converting the types of a few columns.

```
tickets['arrival_ts']= pd.to_datetime(tickets['arrival_ts'])
tickets['departure_ts']= pd.to_datetime(tickets['departure_ts'])
tickets['d_station'] = tickets['d_station'].apply(lambda string : int(string) if string == string else np.nan)
tickets['o_station'] = tickets['o_station'].apply(lambda string : int(string) if string == string else np.nan)
stations['id'] = stations['id'].apply(lambda string : float(string) if string == string else np.nan)
```

Before answering the questions, I will add a column to the **Tickets** table with the duration of each trip in minutes.

```
def add(dataframe):
    arrivé = dataframe['arrival_ts'].values
    depart = dataframe['departure_ts'].values
    trajet = []
    for i in range(len(depart)) :
        r = (arrivé[i] - depart[i])/(60*10**9)
        trajet.append(r/np.timedelta64(1, 'ns'))
    return(trajet)

tickets['trajet (min)'] = pd.DataFrame(add(tickets))
trajet = tickets['trajet (min)'].values
```

I will also add the distance between the departure city and the arrival city.

```
def distance(dataframe):
    arrivé = dataframe['o_city'].values
    depart = dataframe['d_city'].values
    distance = []
    for i in range(len(depart)) :
        R = 6373.0  # Approximate radius of the Earth (km)
        # convert the coordinates to radians before applying the haversine formula
        lat1 = radians(float(cities[cities['id'] == arrivé[i]]['latitude']))
        lat2 = radians(float(cities[cities['id'] == depart[i]]['latitude']))
        lon1 = radians(float(cities[cities['id'] == arrivé[i]]['longitude']))
        lon2 = radians(float(cities[cities['id'] == depart[i]]['longitude']))
        dlon = lon2 - lon1
        dlat = lat2 - lat1
        a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2
        c = 2 * atan2(sqrt(a), sqrt(1 - a))
        dis = R * c
        distance.append(dis)
    return(distance)

tickets['distance (Km)'] = pd.DataFrame(distance(tickets))
distance = tickets['distance (Km)'].values
```

1) Extracting the interesting information

To extract information such as the minimum, mean and maximum price and the minimum/maximum/mean trip duration, I will use the describe function and read the result.

```
tickets.describe().iloc[1:,[4,7,8]]
```

We can see that prices vary a great deal across trips, and the same is true for trip duration.

The information I have just extracted only uses what is already in the **Tickets** table; I can extract the same kind of information by exploiting the relationships between the 4 tables. To do this, I will use the **Automated Feature Engineering** approach, which is the process of building additional features from existing data that is often spread across several related tables. I will therefore extract the relevant information from the data and gather it into a single table that can then be used to train a machine learning model.
The first two concepts of **Automated Feature Engineering** are entities and entity sets. An entity is simply a table. An entity set is a collection of tables and the relationships between them. I will therefore start by creating an entity set:

```
tickets = pd.read_csv('ticket_data.csv')
cities = pd.read_csv('cities.csv')
stations = pd.read_csv('stations.csv')
providers = pd.read_csv('providers.csv')

tickets['d_station'] = tickets['d_station'].apply(lambda string : int(string) if string == string else np.nan)
tickets['o_station'] = tickets['o_station'].apply(lambda string : int(string) if string == string else np.nan)
stations['id'] = stations['id'].apply(lambda string : float(string) if string == string else np.nan)

tickets['trajet (min)'] = pd.DataFrame(trajet)
tickets['distance (Km)'] = pd.DataFrame(distance)

es = ft.EntitySet(id = 'tickets')
```

I now need to add entities. Each entity must have an index, which is a column whose values are all unique; that is, each value of the index appears only once in the table. The index of the **Tickets** table is **id**, since each ticket has exactly one row in that dataframe. This is what I do in the following cells for each table.

```
es = es.entity_from_dataframe(entity_id = 'tickets', dataframe = tickets, index = 'id', time_index = 'departure_ts')
es = es.entity_from_dataframe(entity_id = 'cities', dataframe = cities, index = 'id')
es = es.entity_from_dataframe(entity_id = 'stations', dataframe = stations, index = 'id')
es = es.entity_from_dataframe(entity_id = 'providers', dataframe = providers, index = 'id')
print(es)
```

Now that the entities have been created, I will define relationships between the different tables (because the **id** of the **Cities** table, for example, can appear in the **Tickets** table as **o_city**); what I am defining are essentially joins between the tables.

```
r_cityO_previous = ft.Relationship(es['cities']['id'],es['tickets']['o_city'])
es = es.add_relationship(r_cityO_previous)

r_cityD_previous = ft.Relationship(es['cities']['id'], es['tickets']['d_city'])
es = es.add_relationship(r_cityD_previous)

r_stationO_previous = ft.Relationship(es['stations']['id'], es['tickets']['o_station'])
es = es.add_relationship(r_stationO_previous)

r_stationD_previous = ft.Relationship(es['stations']['id'], es['tickets']['d_station'])
es = es.add_relationship(r_stationD_previous)

r_company_previous = ft.Relationship(es['providers']['id'], es['tickets']['company'])
es = es.add_relationship(r_company_previous)
```

I will start by extracting how many times each element of the 3 other tables (Stations, Cities and Providers) appears in the Tickets table.

```
city, feature_names = ft.dfs(entityset = es, target_entity = 'cities', agg_primitives = ['count'])
station, feature_names = ft.dfs(entityset = es, target_entity = 'stations', agg_primitives = ['count'])
provider, feature_names = ft.dfs(entityset = es, target_entity = 'providers', agg_primitives = ['count'])
city.head()
```

Example: in the Tickets table, Barcelona appears as a departure city 174 times and as an arrival city 28 times.

```
station.head()
```

Example: in the Tickets table, Aéroport CDG appears as a departure station 268 times and as an arrival station 62 times.

```
provider.head()
```

Example: in the Tickets table, ouibus appears as the transport provider 3,560 times and infobus 0 times.
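Note (not part of the original notebook): `ft.dfs` accepts several aggregation primitives in a single call, so the separate min/max/mean passes that follow could also be produced in one pass per target entity. A minimal sketch, assuming the same entity set `es`:

```
# Hedged sketch: compute count, min, max and mean features in one dfs call.
city_all, city_feature_names = ft.dfs(entityset = es,
                                      target_entity = 'cities',
                                      agg_primitives = ['count', 'min', 'max', 'mean'])
city_all.head()
```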
Extracting the **Min** information in the same way.

```
city, feature_names = ft.dfs(entityset = es, target_entity = 'cities', agg_primitives = ['min'])
station, feature_names = ft.dfs(entityset = es, target_entity = 'stations', agg_primitives = ['min'])
provider, feature_names = ft.dfs(entityset = es, target_entity = 'providers', agg_primitives = ['min'])
city.iloc[:,0:11].head()
```

What the table above tells us, taking the city of Barcelona as an example, is that the minimum ticket price in the **Tickets** table when Barcelona is the departure city is 140, and the minimum trip duration is 120 minutes. I do the same for the **Stations** and **Providers** tables; the result is the following:

```
station.iloc[:,0:9].head()
provider.iloc[:,0:13].head()
```

Extracting the Max information in the same way.

```
city, feature_names = ft.dfs(entityset = es, target_entity = 'cities', agg_primitives = ['max'])
station, feature_names = ft.dfs(entityset = es, target_entity = 'stations', agg_primitives = ['max'])
provider, feature_names = ft.dfs(entityset = es, target_entity = 'providers', agg_primitives = ['max'])
city.iloc[:,0:11].head()
station.iloc[:,0:9].head()
provider.iloc[:,0:13].head()
```

Extracting the Mean information in the same way.

```
city, feature_names = ft.dfs(entityset = es, target_entity = 'cities', agg_primitives = ['mean'])
station, feature_names = ft.dfs(entityset = es, target_entity = 'stations', agg_primitives = ['mean'])
provider, feature_names = ft.dfs(entityset = es, target_entity = 'providers', agg_primitives = ['mean'])
city.iloc[:,0:11].head()
station.iloc[:,0:9].head()
provider.iloc[:,0:13].head()
tickets.head()
```

Data Visualisation. In this part, I will build a few visualisations in order to draw some useful information from the dataset.

```
plt.rcParams['figure.figsize'] = (15, 5)
sns.distplot(tickets['price_in_cents'], color = 'blue')
plt.xlabel('Prix des tickets', fontsize = 16)
plt.ylabel('Le count des tickets', fontsize = 16)
plt.title('Distribution des prix pour les tickets', fontsize = 20)
plt.xticks(rotation = 90)
plt.show()
```

This distribution does not give enough information; to address that, the price would need to be scaled/normalised.

```
plt.rcParams['figure.figsize'] = (15, 5)
sns.distplot(tickets['price_in_cents'], color = 'blue')
plt.xlabel('Prix des tickets', fontsize = 16)
plt.ylabel('Le count des tickets', fontsize = 16)
plt.title('Distribution des prix pour les tickets', fontsize = 20)
plt.xticks(rotation = 90)
plt.show()

plt.rcParams['figure.figsize'] = (15, 5)
sns.distplot(tickets['trajet (min)'], color = 'blue')
plt.xlabel('Durée du voyage', fontsize = 16)
plt.ylabel('Count', fontsize = 16)
plt.title('Distribution des durées des voyages', fontsize = 20)
plt.xticks(rotation = 90)
plt.show()
```

Trip durations are not evenly spread: most trips last no more than 100 minutes.

# **Part 2: Predicting the trip price**

**1)** Data Preprocessing and Feature Engineering

I will start by adding some useful information to the **Tickets** table by replacing the station and city IDs with their names.
```
tickets['o_city'] = tickets['o_city'].apply(lambda string : str(cities[cities['id'] == float(string)]['local_name'].values[0]) if string == string else np.nan)
tickets['d_city'] = tickets['d_city'].apply(lambda string : str(cities[cities['id'] == float(string)]['local_name'].values[0]) if string == string else np.nan)
tickets['d_station'] = tickets['d_station'].apply(lambda string : str(stations[stations['id'] == float(string)]['unique_name'].values[0]) if string == string else np.nan)
tickets['o_station'] = tickets['o_station'].apply(lambda string : str(stations[stations['id'] == float(string)]['unique_name'].values[0]) if string == string else np.nan)
tickets['company'] = tickets['company'].apply(lambda string : str(providers[providers['id'] == float(string)]['fullname'].values[0]) if string == string else np.nan)
```

Now I will replace the **middle_stations** and **other_companies** columns with the number of elements they contain; for example, a **middle_stations** value of {12, 5} becomes 2, i.e. the number of intermediate stations passed through on that trip.

```
tickets['middle_stations'] = tickets['middle_stations'].apply(lambda string : len(string.split(',')) if string == string else 0)
tickets['other_companies'] = tickets['other_companies'].apply(lambda string : len(string.split(',')) if string == string else 0)
```

Next, I will add a category for each **Datetime** column, indicating the part of the day of the departure, arrival or search (Night, Morning, Evening, Afternoon).

```
def part_day(dataframe, colonne):
    new_colonne = []
    date = dataframe[colonne].values
    for i in range(len(date)) :
        if 5<=float(date[i][11:13])<12 :
            new_colonne.append('Morning')
        if 12<=float(date[i][11:13])<17 :
            new_colonne.append('Afternoon')
        if 17<=float(date[i][11:13])<21 :
            new_colonne.append('Evening')
        if 21<=float(date[i][11:13])<24 or 0<=float(date[i][11:13])<5:
            new_colonne.append('Night')
    return(new_colonne)

tickets['departure_ts'] = pd.DataFrame(part_day(tickets, 'departure_ts'))
tickets['arrival_ts'] = pd.DataFrame(part_day(tickets, 'arrival_ts'))
tickets['search_ts'] = pd.DataFrame(part_day(tickets, 'search_ts'))
```

Now I will add a categorical column set to 1 if the trip goes from one country to another, and 0 otherwise.

```
def type_voyage(dataframe):
    arrivé = tickets['o_city'].values
    depart = tickets['d_city'].values
    typeV = []
    for i in range(len(depart)):
        if depart[i].split()[-1] == arrivé[i].split()[-1] :
            typeV.append(0)
        else :
            typeV.append(1)
    return(typeV)

tickets['voy_international'] = pd.DataFrame(type_voyage(tickets))
```

I will now add, for each city, its latitude, longitude and population.
```
def add_to_city(dataframe, colonne):
    data = dataframe[colonne].values
    List_lat = []
    List_long = []
    List_pop = []
    for i in range(len(data)) :
        ville = data[i]
        lat = cities[cities['local_name'] == ville]['latitude'].values[0]
        long = cities[cities['local_name'] == ville]['longitude'].values[0]
        pop = cities[cities['local_name'] == ville]['population'].values[0]
        List_lat.append(lat)
        List_long.append(long)
        List_pop.append(pop)
    return([List_lat, List_long, List_pop])

tickets['o_city_latitude'] = pd.DataFrame(add_to_city(tickets, 'o_city')[0])
tickets['o_city_lingitude'] = pd.DataFrame(add_to_city(tickets, 'o_city')[1])
tickets['o_city_population'] = pd.DataFrame(add_to_city(tickets, 'o_city')[2])
tickets['d_city_latitude'] = pd.DataFrame(add_to_city(tickets, 'd_city')[0])
tickets['d_city_longitude'] = pd.DataFrame(add_to_city(tickets, 'd_city')[1])
tickets['d_city_population'] = pd.DataFrame(add_to_city(tickets, 'd_city')[2])
```

Now I will add some useful information to the dataset based on the **Providers** table.

```
def add_provider(dataframe, colonne):
    data = dataframe[colonne].values
    transport_type = []
    has_bicycle = []
    has_adjustable_seats = []
    has_plug = []
    has_wifi = []
    for i in range(len(data)) :
        company = data[i]
        transport = providers[providers['fullname'] == company]['transport_type'].values[0]
        bicycle = providers[providers['fullname'] == company]['has_bicycle'].values[0]
        adjustable_seats = providers[providers['fullname'] == company]['has_adjustable_seats'].values[0]
        plug = providers[providers['fullname'] == company]['has_plug'].values[0]
        wifi = providers[providers['fullname'] == company]['has_wifi'].values[0]
        transport_type.append(transport)
        has_bicycle.append(bicycle)
        has_adjustable_seats.append(adjustable_seats)
        has_plug.append(plug)
        has_wifi.append(wifi)
    return([transport_type, has_bicycle, has_adjustable_seats, has_plug, has_wifi])

tickets['transport_type'] = pd.DataFrame(add_provider(tickets, 'company')[0])
tickets['has_bicycle'] = pd.DataFrame(add_provider(tickets, 'company')[1])
tickets['has_adjustable_seats'] = pd.DataFrame(add_provider(tickets, 'company')[2])
tickets['has_plug'] = pd.DataFrame(add_provider(tickets, 'company')[3])
tickets['has_wifi'] = pd.DataFrame(add_provider(tickets, 'company')[4])
```

I will drop the following columns:

```
dropped_columns = ['id', 'company', 'o_station', 'd_station', 'o_city', 'd_city', 'o_city_population', 'd_city_population']
tickets.drop(dropped_columns, axis=1, inplace=True)
```

Data Imputation: I will replace the empty cells (16 of them) with **False**, since that is the most common value in the dataset.
```
tickets.isnull().sum()

tickets.has_bicycle.fillna('False', inplace=True)
tickets.has_adjustable_seats.fillna('False', inplace=True)
tickets.has_plug.fillna('False', inplace=True)
tickets.has_wifi.fillna('False', inplace=True)
```

**Splitting The DataSet into training set and Test set**

```
prices = tickets['price_in_cents']
data = tickets.drop('price_in_cents', axis = 1)

train_data, test_data, train_targets, test_targets = train_test_split(data, prices, test_size=0.33, random_state=42)
```

**Converting categorical features into numeric features:**

```
labels = ['departure_ts', 'arrival_ts', 'search_ts', 'has_bicycle', 'has_adjustable_seats', 'has_plug', 'has_wifi']

for i in labels :
    le = LabelEncoder()
    le.fit(train_data[i])
    train_data[i] = le.transform(train_data[i])
    test_data[i] = le.transform(test_data[i])

lb = LabelBinarizer()
array = train_data['transport_type'].copy().values
lb.fit(array)

encoded_array = lb.transform(train_data['transport_type'].values)
if len(lb.classes_) == 2 :
    encoded_array = np.hstack((encoded_array, 1-encoded_array))
encoded_data = pd.DataFrame(encoded_array, index=train_data.index, columns=lb.classes_)
train_data[encoded_data.columns] = encoded_data
train_data.drop('transport_type', axis=1, inplace=True)

encoded_array = lb.transform(test_data['transport_type'].values)
if len(lb.classes_) == 2 :
    encoded_array = np.hstack((encoded_array, 1-encoded_array))
encoded_data = pd.DataFrame(encoded_array, index=test_data.index, columns=lb.classes_)
test_data[encoded_data.columns] = encoded_data
test_data.drop('transport_type', axis=1, inplace=True)
```

Now I will apply **Data Scaling**.

```
features = ['trajet (min)', 'distance (Km)', 'o_city_latitude', 'o_city_lingitude', 'd_city_latitude', 'd_city_longitude']

for feature in features:
    scaler = StandardScaler()
    array = train_data[feature].copy().values.reshape(-1, 1)
    scaler.fit(array)
    array = scaler.transform(array).squeeze()
    train_data[feature] = array

    array = test_data[feature].copy().values.reshape(-1, 1)
    array = scaler.transform(array).squeeze()
    test_data[feature] = array
```

Exploratory Data Analysis (EDA):

```
red_tsne = TSNE(n_components=1,random_state=42).fit_transform(train_data.values)

plt.title('Resultat obtennu par la réduction de dimension en fonction du prix', size = 30)
plt.plot(red_tsne , train_targets, '.')
plt.xlabel('TSNE', size = 30)
plt.ylabel('Price', size = 30)
plt.xticks(size = 25)
plt.yticks(size = 25)
plt.show()
```

Based on this dimensionality reduction, the price does not have a linear relationship with the other columns; we also notice the presence of some outliers.
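To put a rough number on those outliers (a side check that is not in the original notebook), a simple 1.5*IQR rule on the target gives an idea of how many extreme prices the model will have to cope with:

```
# Hedged sketch: count extreme prices with a 1.5*IQR rule on the training targets.
q1, q3 = train_targets.quantile(0.25), train_targets.quantile(0.75)
iqr = q3 - q1
outlier_mask = (train_targets < q1 - 1.5 * iqr) | (train_targets > q3 + 1.5 * iqr)
print(outlier_mask.sum(), 'training trips fall outside the 1.5*IQR range')
```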
# Data Modeling : Machine Learning Application

```
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error, r2_score
```

Regularization:

```
ridge_lr = Ridge()
params = {"alpha" : np.logspace(-3, 2, 6)}
ridge_lr_grid = GridSearchCV(ridge_lr, params, scoring='r2', cv=5)
_ = ridge_lr_grid.fit(train_data, train_targets.values)

train_preds = ridge_lr_grid.predict(train_data)
test_preds = ridge_lr_grid.predict(test_data)

print("Ridge Linear Regression results :")
print(" ")
print(" - RMSE on the train set : {:.2f}".format(mean_squared_error(train_targets, train_preds, squared=False)))
print(" - RMSE on the test set : {:.2f}".format(mean_squared_error(test_targets, test_preds, squared=False)))
print(" ")
print(" - R-squared on the train set : {:.2f}%".format(r2_score(train_targets, train_preds)*100))
print(" - R-squared on the test set : {:.2f}%".format(r2_score(test_targets, test_preds)*100))
```

Ridge regression does not give a good result, which is due to the non-linearity of the problem. I will therefore use a boosting algorithm, LightGBM, and tune its hyperparameters using **Bayesian optimisation**.

```
from hyperopt import hp, fmin, tpe, Trials
from lightgbm.sklearn import LGBMRegressor
from sklearn.model_selection import cross_val_score

def gb_mse_cv(params, random_state=42, cv=5, X=train_data.values, y=train_targets.values):
    params['n_estimators'] = int(params['n_estimators'])
    params['max_depth'] = int(params['max_depth'])
    params['num_leaves'] = int(params['num_leaves'])
    model = LGBMRegressor(random_state=42, **params)
    score = -cross_val_score(model, X, y, cv=cv, scoring="neg_mean_squared_error", n_jobs=-1).mean()
    return score

space = {'learning_rate': hp.uniform('learning_rate', 0.01, 0.2),
         'n_estimators': hp.quniform('n_estimators', 100, 2000, 100),
         'max_depth' : hp.quniform('max_depth', 2, 20, 1),
         'num_leaves': hp.quniform('num_leaves', 31, 255, 4),
         'min_child_weight': hp.uniform('min_child_weight', 0.1, 10),
         'colsample_bytree': hp.uniform('colsample_bytree', 0.5, 1.),
         'subsample': hp.uniform('subsample', 0.5, 1.),
         'reg_alpha': hp.uniform('reg_alpha', 0.001, 1),
         'reg_lambda': hp.uniform('reg_lambda', 0.001, 20)}

trials = Trials()
best = fmin(fn=gb_mse_cv, space=space, algo=tpe.suggest, max_evals=300, trials=trials, rstate=np.random.RandomState(42))

best['n_estimators'] = int(best['n_estimators'])
best['num_leaves'] = int(best['num_leaves'])
best['max_depth'] = int(best['max_depth'])

model = LGBMRegressor(random_state=42, **best)
_ = model.fit(train_data,train_targets)

train_preds = model.predict(train_data)
test_preds = model.predict(test_data)

print("Light Gradient Boosting logarithmic results :")
print(" ")
print(" - RMSE on the train set : {:.2f}".format(mean_squared_error(train_targets, train_preds, squared=False)))
print(" - RMSE on the test set : {:.2f}".format(mean_squared_error(test_targets, test_preds, squared=False)))
print(" ")
print(" - R-squared on the train set : {:.2f}%".format(r2_score(train_targets, train_preds)*100))
print(" - R-squared on the test set : {:.2f}%".format(r2_score(test_targets, test_preds)*100))
```

**2) What the RMSE means.**

```
train_targets.describe()
```

We can see that our average error (RMSE) with the initial LightGBM model is around 2000.
Given that, after cleaning the price column and the other columns, 50% of the trips cost at most 3350 and 75% at most 5250 (in cents), even the improved standard deviation of 17€3733 is quite a large inaccuracy that does not help much when recommending a price. It turns out that the price does not depend only on the data used at the start. It goes without saying that the quality of the listing, the availability, the communication and the status of the means of transport could also have a considerable influence. But the goal of this analysis was to recommend a price to a "newcomer" without any reviews or status. With that in mind, we could say that we cannot recommend an exact price, but rather an approximate price range.
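One simple way to act on that conclusion (a sketch, not part of the original notebook) is to report each prediction together with a band of plus or minus the test RMSE, which is what "a price range rather than an exact price" amounts to in practice:

```
# Hedged sketch: turn point predictions into a rough price range using the test RMSE.
rmse_test = mean_squared_error(test_targets, test_preds, squared=False)
lower = test_preds - rmse_test
upper = test_preds + rmse_test
print('First test trip: recommended between {:.0f} and {:.0f} cents'.format(lower[0], upper[0]))
```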
github_jupyter
import pandas as pd import numpy as np import featuretools as ft from math import sin, cos, sqrt, atan2, radians import matplotlib.pyplot as plt import seaborn as sns from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler, LabelBinarizer, LabelEncoder from sklearn.manifold import TSNE import warnings warnings.filterwarnings('ignore') tickets = pd.read_csv('ticket_data.csv') cities = pd.read_csv('cities.csv') stations = pd.read_csv('stations.csv') providers = pd.read_csv('providers.csv') tickets['arrival_ts']= pd.to_datetime(tickets['arrival_ts']) tickets['departure_ts']= pd.to_datetime(tickets['departure_ts']) tickets['d_station'] = tickets['d_station'].apply(lambda string : int(string) if string == string else np.nan) tickets['o_station'] = tickets['o_station'].apply(lambda string : int(string) if string == string else np.nan) stations['id'] = stations['id'].apply(lambda string : float(string) if string == string else np.nan) def add(dataframe): arrivé = dataframe['arrival_ts'].values depart = dataframe['departure_ts'].values trajet = [] for i in range(len(depart)) : r = (arrivé[i] - depart[i])/(60*10**9) trajet.append(r/np.timedelta64(1, 'ns')) return(trajet) tickets['trajet (min)'] = pd.DataFrame(add(tickets)) trajet = tickets['trajet (min)'].values def distance(dataframe): arrivé = dataframe['o_city'].values depart = dataframe['d_city'].values distance = [] for i in range(len(depart)) : R = 6373.0 #Approximation du rayon de la terre lat1 = float(cities[cities['id'] == arrivé[i]]['latitude']) lat2 = float(cities[cities['id'] == depart[i]]['latitude']) lon1 = float(cities[cities['id'] == arrivé[i]]['longitude']) lon2 = float(cities[cities['id'] == depart[i]]['longitude']) dlon = lon2 - lon1 dlat = lat2 - lat1 a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2 c = 2 * atan2(sqrt(a), sqrt(1 - a)) dis = R * c distance.append(dis) return(distance) tickets['distance (Km)'] = pd.DataFrame(distance(tickets)) distance = tickets['distance (Km)'].values tickets.describe().iloc[1:,[4,7,8]] tickets = pd.read_csv('ticket_data.csv') cities = pd.read_csv('cities.csv') stations = pd.read_csv('stations.csv') providers = pd.read_csv('providers.csv') tickets['d_station'] = tickets['d_station'].apply(lambda string : int(string) if string == string else np.nan) tickets['o_station'] = tickets['o_station'].apply(lambda string : int(string) if string == string else np.nan) stations['id'] = stations['id'].apply(lambda string : float(string) if string == string else np.nan) tickets['trajet (min)'] = pd.DataFrame(trajet) tickets['distance (Km)'] = pd.DataFrame(distance) es = ft.EntitySet(id = 'tickets') es = es.entity_from_dataframe(entity_id = 'tickets', dataframe = tickets, index = 'id', time_index = 'departure_ts') es = es.entity_from_dataframe(entity_id = 'cities', dataframe = cities, index = 'id') es = es.entity_from_dataframe(entity_id = 'stations', dataframe = stations, index = 'id') es = es.entity_from_dataframe(entity_id = 'providers', dataframe = providers, index = 'id') print(es) r_cityO_previous = ft.Relationship(es['cities']['id'],es['tickets']['o_city']) es = es.add_relationship(r_cityO_previous) r_cityD_previous = ft.Relationship(es['cities']['id'], es['tickets']['d_city']) es = es.add_relationship(r_cityD_previous) r_stationO_previous = ft.Relationship(es['stations']['id'], es['tickets']['o_station']) es = es.add_relationship(r_stationO_previous) r_stationD_previous = ft.Relationship(es['stations']['id'], es['tickets']['d_station']) es = 
es.add_relationship(r_stationD_previous) r_company_previous = ft.Relationship(es['providers']['id'], es['tickets']['company']) es = es.add_relationship(r_company_previous) city, feature_names = ft.dfs(entityset = es, target_entity = 'cities', agg_primitives = ['count']) station, feature_names = ft.dfs(entityset = es, target_entity = 'stations', agg_primitives = ['count']) provider, feature_names = ft.dfs(entityset = es, target_entity = 'providers', agg_primitives = ['count']) city.head() station.head() provider.head() city, feature_names = ft.dfs(entityset = es, target_entity = 'cities', agg_primitives = ['min']) station, feature_names = ft.dfs(entityset = es, target_entity = 'stations', agg_primitives = ['min']) provider, feature_names = ft.dfs(entityset = es, target_entity = 'providers', agg_primitives = ['min']) city.iloc[:,0:11].head() station.iloc[:,0:9].head() provider.iloc[:,0:13].head() city, feature_names = ft.dfs(entityset = es, target_entity = 'cities', agg_primitives = ['max']) station, feature_names = ft.dfs(entityset = es, target_entity = 'stations', agg_primitives = ['max']) provider, feature_names = ft.dfs(entityset = es, target_entity = 'providers', agg_primitives = ['max']) city.iloc[:,0:11].head() station.iloc[:,0:9].head() provider.iloc[:,0:13].head() city, feature_names = ft.dfs(entityset = es, target_entity = 'cities', agg_primitives = ['mean']) station, feature_names = ft.dfs(entityset = es, target_entity = 'stations', agg_primitives = ['mean']) provider, feature_names = ft.dfs(entityset = es, target_entity = 'providers', agg_primitives = ['mean']) city.iloc[:,0:11].head() station.iloc[:,0:9].head() provider.iloc[:,0:13].head() tickets.head() plt.rcParams['figure.figsize'] = (15, 5) sns.distplot(tickets['price_in_cents'], color = 'blue') plt.xlabel('Prix des tickets', fontsize = 16) plt.ylabel('Le count des tickets', fontsize = 16) plt.title('Distribution des prix pour les tickets', fontsize = 20) plt.xticks(rotation = 90) plt.show() plt.rcParams['figure.figsize'] = (15, 5) sns.distplot(tickets['price_in_cents'], color = 'blue') plt.xlabel('Prix des tickets', fontsize = 16) plt.ylabel('Le count des tickets', fontsize = 16) plt.title('Distribution des prix pour les tickets', fontsize = 20) plt.xticks(rotation = 90) plt.show() plt.rcParams['figure.figsize'] = (15, 5) sns.distplot(tickets['trajet (min)'], color = 'blue') plt.xlabel('Durée du voyage', fontsize = 16) plt.ylabel('Count', fontsize = 16) plt.title('Distribution des durées des voyages', fontsize = 20) plt.xticks(rotation = 90) plt.show() tickets['o_city'] = tickets['o_city'].apply(lambda string : str(cities[cities['id'] == float(string)]['local_name'].values[0]) if string == string else np.nan) tickets['d_city'] = tickets['d_city'].apply(lambda string : str(cities[cities['id'] == float(string)]['local_name'].values[0]) if string == string else np.nan) tickets['d_station'] = tickets['d_station'].apply(lambda string : str(stations[stations['id'] == float(string)]['unique_name'].values[0]) if string == string else np.nan) tickets['o_station'] = tickets['o_station'].apply(lambda string : str(stations[stations['id'] == float(string)]['unique_name'].values[0]) if string == string else np.nan) tickets['company'] = tickets['company'].apply(lambda string : str(providers[providers['id'] == float(string)]['fullname'].values[0]) if string == string else np.nan) tickets['middle_stations'] = tickets['middle_stations'].apply(lambda string : len(string.split(',')) if string == string else 0) tickets['other_companies'] = 
tickets['other_companies'].apply(lambda string : len(string.split(',')) if string == string else 0) def part_day(dataframe, colonne): new_colonne = [] date = dataframe[colonne].values for i in range(len(date)) : if 5<=float(date[i][11:13])<12 : new_colonne.append('Morning') if 12<=float(date[i][11:13])<17 : new_colonne.append('Afternoon') if 17<=float(date[i][11:13])<21 : new_colonne.append('Evening') if 21<=float(date[i][11:13])<24 or 0<=float(date[i][11:13])<5: new_colonne.append('Night') return(new_colonne) tickets['departure_ts'] = pd.DataFrame(part_day(tickets, 'departure_ts')) tickets['arrival_ts'] = pd.DataFrame(part_day(tickets, 'arrival_ts')) tickets['search_ts'] = pd.DataFrame(part_day(tickets, 'search_ts')) def type_voyage(dataframe): arrivé = tickets['o_city'].values depart = tickets['d_city'].values typeV = [] for i in range(len(depart)): if depart[i].split()[-1] == arrivé[i].split()[-1] : typeV.append(0) else : typeV.append(1) return(typeV) tickets['voy_international'] = pd.DataFrame(type_voyage(tickets)) def add_to_city(dataframe, colonne): data = dataframe[colonne].values List_lat = [] List_long = [] List_pop = [] for i in range(len(data)) : ville = data[i] lat = cities[cities['local_name'] == ville]['latitude'].values[0] long = cities[cities['local_name'] == ville]['longitude'].values[0] pop = cities[cities['local_name'] == ville]['population'].values[0] List_lat.append(lat) List_long.append(long) List_pop.append(pop) return([List_lat, List_long, List_pop]) tickets['o_city_latitude'] = pd.DataFrame(add_to_city(tickets, 'o_city')[0]) tickets['o_city_lingitude'] = pd.DataFrame(add_to_city(tickets, 'o_city')[1]) tickets['o_city_population'] = pd.DataFrame(add_to_city(tickets, 'o_city')[2]) tickets['d_city_latitude'] = pd.DataFrame(add_to_city(tickets, 'd_city')[0]) tickets['d_city_longitude'] = pd.DataFrame(add_to_city(tickets, 'd_city')[1]) tickets['d_city_population'] = pd.DataFrame(add_to_city(tickets, 'd_city')[2]) def add_provider(dataframe, colonne): data = dataframe[colonne].values transport_type = [] has_bicycle = [] has_adjustable_seats = [] has_plug = [] has_wifi = [] for i in range(len(data)) : company = data[i] transport = providers[providers['fullname'] == company]['transport_type'].values[0] bicycle = providers[providers['fullname'] == company]['has_bicycle'].values[0] adjustable_seats = providers[providers['fullname'] == company]['has_adjustable_seats'].values[0] plug = providers[providers['fullname'] == company]['has_plug'].values[0] wifi = providers[providers['fullname'] == company]['has_wifi'].values[0] transport_type.append(transport) has_bicycle.append(bicycle) has_adjustable_seats.append(adjustable_seats) has_plug.append(plug) has_wifi.append(wifi) return([transport_type, has_bicycle, has_adjustable_seats, has_plug, has_wifi]) tickets['transport_type'] = pd.DataFrame(add_provider(tickets, 'company')[0]) tickets['has_bicycle'] = pd.DataFrame(add_provider(tickets, 'company')[1]) tickets['has_adjustable_seats'] = pd.DataFrame(add_provider(tickets, 'company')[2]) tickets['has_plug'] = pd.DataFrame(add_provider(tickets, 'company')[3]) tickets['has_wifi'] = pd.DataFrame(add_provider(tickets, 'company')[4]) dropped_columns = ['id', 'company', 'o_station', 'd_station', 'o_city', 'd_city', 'o_city_population', 'd_city_population'] tickets.drop(dropped_columns, axis=1, inplace=True) tickets.isnull().sum() tickets.has_bicycle .fillna('False', inplace=True) tickets.has_adjustable_seats.fillna('False', inplace=True) tickets.has_plug.fillna('False', inplace=True) 
tickets.has_wifi.fillna('False', inplace=True) prices = tickets['price_in_cents'] data = tickets.drop('price_in_cents', axis = 1) train_data, test_data, train_targets, test_targets = train_test_split(data, prices, test_size=0.33, random_state=42) labels = ['departure_ts', 'arrival_ts', 'search_ts', 'has_bicycle', 'has_adjustable_seats', 'has_plug', 'has_wifi'] for i in labels : le = LabelEncoder() le.fit(train_data[i]) train_data[i] = le.transform(train_data[i]) test_data[i] = le.transform(test_data[i]) lb = LabelBinarizer() array = train_data['transport_type'].copy().values lb.fit(array) encoded_array = lb.transform(train_data['transport_type'].values) if len(lb.classes_) == 2 : encoded_array = np.hstack((encoded_array, 1-encoded_array)) encoded_data = pd.DataFrame(encoded_array, index=train_data.index, columns=lb.classes_) train_data[encoded_data.columns] = encoded_data train_data.drop('transport_type', axis=1, inplace=True) encoded_array = lb.transform(test_data['transport_type'].values) if len(lb.classes_) == 2 : encoded_array = np.hstack((encoded_array, 1-encoded_array)) encoded_data = pd.DataFrame(encoded_array, index=test_data.index, columns=lb.classes_) test_data[encoded_data.columns] = encoded_data test_data.drop('transport_type', axis=1, inplace=True) features = ['trajet (min)', 'distance (Km)', 'o_city_latitude', 'o_city_lingitude', 'd_city_latitude', 'd_city_longitude'] for feature in features: scaler = StandardScaler() array = train_data[feature].copy().values.reshape(-1, 1) scaler.fit(array) array = scaler.transform(array).squeeze() train_data[feature] = array array = test_data[feature].copy().values.reshape(-1, 1) array = scaler.transform(array).squeeze() test_data[feature] = array red_tsne = TSNE(n_components=1,random_state=42).fit_transform(train_data.values) plt.title('Resultat obtennu par la réduction de dimension en fonction du prix', size = 30) plt.plot(red_tsne , train_targets, '.') plt.xlabel('TSNE', size = 30) plt.ylabel('Price', size = 30) plt.xticks(size = 25) plt.yticks(size = 25) plt.show() from sklearn.linear_model import Ridge from sklearn.model_selection import GridSearchCV from sklearn.metrics import mean_squared_error, r2_score ridge_lr = Ridge() params = {"alpha" : np.logspace(-3, 2, 6)} ridge_lr_grid = GridSearchCV(ridge_lr, params, scoring='r2', cv=5) _ = ridge_lr_grid.fit(train_data, train_targets.values) train_preds = ridge_lr_grid.predict(train_data) test_preds = ridge_lr_grid.predict(test_data) print("Ridge Linear Regression results :") print(" ") print(" - RMSE on the train set : {:.2f}".format(mean_squared_error(train_targets, train_preds, squared=False))) print(" - RMSE on the test set : {:.2f}".format(mean_squared_error(test_targets, test_preds, squared=False))) print(" ") print(" - R-squared on the train set : {:.2f}%".format(r2_score(train_targets, train_preds)*100)) print(" - R-squared on the test set : {:.2f}%".format(r2_score(test_targets, test_preds)*100)) from hyperopt import hp, fmin, tpe, Trials from lightgbm.sklearn import LGBMRegressor from sklearn.model_selection import cross_val_score def gb_mse_cv(params, random_state=42, cv=5, X=train_data.values, y=train_targets.values): params['n_estimators'] = int(params['n_estimators']) params['max_depth'] = int(params['max_depth']) params['num_leaves'] = int(params['num_leaves']) model = LGBMRegressor(random_state=42, **params) score = -cross_val_score(model, X, y, cv=cv, scoring="neg_mean_squared_error", n_jobs=-1).mean() return score space = {'learning_rate': hp.uniform('learning_rate', 
0.01, 0.2), 'n_estimators': hp.quniform('n_estimators', 100, 2000, 100), 'max_depth' : hp.quniform('max_depth', 2, 20, 1), 'num_leaves': hp.quniform('num_leaves', 31, 255, 4), 'min_child_weight': hp.uniform('min_child_weight', 0.1, 10), 'colsample_bytree': hp.uniform('colsample_bytree', 0.5, 1.), 'subsample': hp.uniform('subsample', 0.5, 1.), 'reg_alpha': hp.uniform('reg_alpha', 0.001, 1), 'reg_lambda': hp.uniform('reg_lambda', 0.001, 20)} trials = Trials() best = fmin(fn=gb_mse_cv, space=space, algo=tpe.suggest, max_evals=300, trials=trials, rstate=np.random.RandomState(42)) best['n_estimators'] = int(best['n_estimators']) best['num_leaves'] = int(best['num_leaves']) best['max_depth'] = int(best['max_depth']) model = LGBMRegressor(random_state=42, **best) _ = model.fit(train_data,train_targets) train_preds = model.predict(train_data) test_preds = model.predict(test_data) print("Light Gradient Boosting logarithmic results :") print(" ") print(" - RMSE on the train set : {:.2f}".format(mean_squared_error(train_targets, train_preds, squared=False))) print(" - RMSE on the test set : {:.2f}".format(mean_squared_error(test_targets, test_preds, squared=False))) print(" ") print(" - R-squared on the train set : {:.2f}%".format(r2_score(train_targets, train_preds)*100)) print(" - R-squared on the test set : {:.2f}%".format(r2_score(test_targets, test_preds)*100)) train_targets.describe()
# Results visualisation ``` import matplotlib.pyplot as plt import numpy as np import gif from pymoo.algorithms.nsga2 import NSGA2 from pymoo.factory import get_sampling, get_crossover, get_mutation, get_termination, get_problem, get_reference_directions from pymoo.optimize import minimize from pymoo.visualization.scatter import Scatter from pymoo.visualization.pcp import PCP from pymoo.visualization.heatmap import Heatmap from pymoo.visualization.petal import Petal from pymoo.visualization.radar import Radar from pymoo.visualization.radviz import Radviz from pymoo.visualization.star_coordinate import StarCoordinate from pymoo.util.plotting import plot from sklearn.preprocessing import MinMaxScaler ``` ## Problem and training definition ``` problem = get_problem("BNH") algorithm = NSGA2( pop_size=40, n_offsprings=10, sampling=get_sampling("real_random"), crossover=get_crossover("real_sbx", prob=0.9, eta=15), mutation=get_mutation("real_pm", eta=20), eliminate_duplicates=True) termination = get_termination("n_gen", 40) res = minimize(problem=problem, algorithm=algorithm, termination=termination) ``` The result object **res** holds the following values: - **res.X**: design space values - **res.F**: objective spaces values (for a multi-objective problem, this is the set of non-dominated solutions) - **res.G**: constraint values - **res.CV**: aggregated constraint violation - **res.algorithm**: algorithm object - **res.pop**: final population object - **res.history**: history of algorithm object. (only if save_history has been enabled during the algorithm initialization) - **res.time**: the time required to run the algorithm # Result visualisation ## Objective Space We have already seen how to plot the objective space. For some problems (such as those defined [here](https://www.pymoo.org/problems/index.html), the Pareto set and Pareto front are known. In this case, we can also plot them. ``` ps = problem.pareto_set() pf = problem.pareto_front() plot = Scatter(title="Objective Space") plot.add(res.F) plot.add(pf, plot_type="line", color="black", alpha=0.7) plot.show(); ``` ## Design space ``` plot = Scatter(title="Design Space") plot.add(res.X, s=30, facecolors='none', edgecolors='r') plot.add(ps, plot_type="line", color="black", alpha=0.7) # doesn't work plot.show(); ``` ## High dimensionality problems If the problem has an design and/or objective space of dimensionality higher than 2, some other plotting options are possible. For example, "Carside" has 7 design variables and 3 objective variables. 
``` problem = get_problem("Carside") algorithm = NSGA2( pop_size=40, n_offsprings=10, sampling=get_sampling("real_random"), crossover=get_crossover("real_sbx", prob=0.9, eta=15), mutation=get_mutation("real_pm", eta=20), eliminate_duplicates=True) termination = get_termination("n_gen", 40) res = minimize( problem=problem, algorithm=algorithm, termination=termination, seed=1, save_history=False, verbose=False) ``` ## Pairwise pair plots ``` plot = Scatter(title="Design Space", figsize=(10, 10), tight_layout=True) plot.add(res.X, s=10, color="r") plot.show(); ``` ## 3D scatter plots ``` # First, scale the objectives between 0 and 1 (for clarity purposes) scaler = MinMaxScaler() norm_F = scaler.fit_transform(res.F) plot = Scatter(title=("Objective Space", {'pad': 30}), tight_layout=True, labels=["cost", "mass", "time"]) plot.add(norm_F, s=10, color="b") plot.show(); ``` ## Parallel Coordinate Plots ``` plot = PCP(title=("Objective space", {'pad': 30}), n_ticks=10, legend=(True, {'loc': "upper left"}), labels=["cost", "mass", "time"] ) plot.set_axis_style(color="black", alpha=0.5) plot.add(norm_F, color="grey", alpha=0.3) plot.add(norm_F[22], linewidth=5, color="r", label="solution #22") plot.add(norm_F[5], linewidth=5, color="b", label="solution #5") plot.show(); ``` ## Heatmaps ``` plot = Heatmap(title=("Objective space", {'pad': 15}), cmap="Oranges_r", labels=["cost", "mass", "time"], figsize=(10,30), order_by_objectives=0) plot.add(norm_F) plot.show(); ``` ## Petal plots ``` plot = Petal(bounds=[0, 1], cmap="tab20", labels=["cost", "mass", "time"], title=["Solution %s" % t for t in range(3)], tight_layout=True) plot.add(norm_F[:3]) # plot the first 3 solutions plot.show(); ``` Note, for this kind of plots, scaling the objectives to unity is recommended. # Radar plot ``` ideal_point = np.array([0, 0, 0]) # best possible solution nadir_point = np.array([1.5, 1.5, 1.5]) # worst possible solution plot = Radar( bounds=[ideal_point, nadir_point], normalize_each_objective=True, point_style={"color": 'red', 's': 30}, axis_style={"color": 'blue'}, title=["Solution %s" % t for t in range(3)], tight_layout=True, labels=["cost", "mass", "time"]) plot.add(norm_F[:3], color="green", alpha=0.8) # plot the first 3 values plot.show(); ``` ## Radviz plot ``` plot = Radviz(title="Objective space", legend=(True, {'loc': "upper left", 'bbox_to_anchor': (-0.1, 1.08, 0, 0)}), labels=["cost", "mass", "time"], endpoint_style={"s": 70, "color": "green"}) plot.set_axis_style(color="black", alpha=1.0) plot.add(norm_F, color="grey", s=20) plot.add(norm_F[5], color="red", s=70, label="Solution 5") plot.add(norm_F[22], color="blue", s=70, label="Solution 22") plot.show(); ``` ## Star coordinate plots ``` plot = StarCoordinate(title="Objective space", legend=(True, {'loc': "upper left", 'bbox_to_anchor': (-0.1, 1.08, 0, 0)}), labels=["cost", "mass", "time"], axis_style={"color": "blue", 'alpha': 0.7}, arrow_style={"head_length": 0.1, "head_width": 0.1}, figsize=(10, 10), tight_layout=True) plot.add(norm_F, color="grey", s=20) plot.add(norm_F[5], color="red", s=70, label="Solution 5") plot.add(norm_F[22], color="green", s=70, label="Solution 22") plot.show(); ``` ## Animated Gif We optimise the problem and save the history. ``` problem = get_problem("zdt1") algorithm = NSGA2(pop_size=100, eliminate_duplicates=True) res = minimize(problem, algorithm, termination=('n_gen', 100), seed=1, save_history=True, verbose=False) ``` We wrap the plot in a function and decorate it with gif.frame. 
(Note, for some reason, this doesn't work with PYMOO's Scatter function so I had to use a custom plot script with Matplotlib). ``` @gif.frame def plot_gen(entry): X = entry.pop.get("F") pf = entry.problem.pareto_front() # best = entry.opt[0].F # removed: doesn't make sense fig = plt.figure(figsize=(5, 5)) plt.plot(X[:, 0], X[:, 1], 'o', label="Current population") plt.plot(pf[:, 0], pf[:, 1], '-k', alpha=0.7, label="Pareto front") # plt.plot(best[0], best[1], 'xr', markersize=10, markeredgewidth=3, label="Best individual") plt.xlim(0, 1) plt.ylim(0, 6) plt.xlabel("Cost") plt.ylabel("Mass") plt.legend() plt.title(f"Generation #{entry.n_gen}") fig.tight_layout() ``` Loop over the generation in history. ``` gif.options.matplotlib['dpi'] = 100 frames = [] for entry in res.history: frame = plot_gen(entry) frames.append(frame) gif.save(frames, "plots/objective_space.gif", duration=0.1, unit='s') ``` ![gif_generation](plots/objective_space.gif) We can also animate the design space in a gif. First, let's optimise a problem with a 2D design space (and save the history). ``` problem = get_problem("BNH") algorithm = NSGA2( pop_size=40, n_offsprings=10, sampling=get_sampling("real_random"), crossover=get_crossover("real_sbx", prob=0.9, eta=15), mutation=get_mutation("real_pm", eta=20), eliminate_duplicates=True) termination = get_termination("n_gen", 40) res = minimize(problem=problem, algorithm=algorithm, termination=termination, save_history=True) @gif.frame def plot_design_space(entry): X = entry.pop.get("X") fig = plt.figure(figsize=(5, 5)) plt.plot(X[:, 0], X[:, 1], 'x', label="Current population") plt.xlim(0, 5) plt.ylim(0, 3) plt.xlabel("$x_1$") plt.ylabel("$x_2$") plt.legend(loc="lower right") plt.title(f"Generation #{entry.n_gen}") fig.tight_layout() gif.options.matplotlib['dpi'] = 100 frames = [] for entry in res.history: frame = plot_design_space(entry) frames.append(frame) gif.save(frames, "plots/design_space.gif", duration=0.1, unit='s') ``` ![alt text](plots/design_space.gif)
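The saved history is not only useful for animations; it can also be turned into a simple convergence plot. The sketch below is an added illustration (not part of the original notebook): it reuses `res.history` from the BNH run just above and computes a rough IGD-style measure by hand with NumPy, so it does not depend on any particular pymoo indicator API.

```
import numpy as np
import matplotlib.pyplot as plt

pf = problem.pareto_front()   # known Pareto front for BNH

gens, igd_like = [], []
for entry in res.history:
    F = entry.pop.get("F")    # objectives of the population at this generation
    # For each point of the reference front, distance to its nearest population member,
    # averaged over the front: a hand-rolled IGD-style convergence measure.
    d = np.linalg.norm(pf[:, None, :] - F[None, :, :], axis=-1)
    igd_like.append(d.min(axis=1).mean())
    gens.append(entry.n_gen)

plt.plot(gens, igd_like)
plt.xlabel("Generation")
plt.ylabel("Mean distance to Pareto front")
plt.title("Convergence of the BNH run")
plt.show()
```

If the curve flattens well before the last generation, the termination criterion (`n_gen`, 40 here) could likely be reduced without losing much front quality.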
``` #The first cell is just to align our markdown tables to the left vs. center %%html <style> table {float:left} </style> ``` # Python Lists and List-Like Data types *** ## Learning Objectives In this lesson you will: 1. Learn the fundamentals of lists in Python 2. Work with lists in Python 3. Define data structure 3. Apply methods to modify lists ## Modules covered in this lesson: >- copy ## Links to topics and functions: >- <a id='Lists'></a>[List Notes](#Initial-Notes-on-Lists) >- <a id='methods'></a>[list methods](#Using-Methods-to-Work-with-Lists) ### References: Sweigart(2015, pp. 79-103) #### Don't forget about the Python visualizer tool: http://pythontutor.com/visualize.html#mode=display ## Functions covered in this lesson: |List Methods | Functions| |:-----------: |:--------:| |index() | list() | |append() | tuple() | |remove() | copy() | |sort() | deepcopy()| # Initial Notes on Lists >- Lists and the list-like tuple can contain multiple values which makes it easier to write programs that handle large amounts of data >> - `List Definition`: a *list* is a value that contains multiple values in an ordered sequence >>>- Lists start with a `[` and end with a `]` >>- *List value* vs values in a list >>>- The *list value* is the value associated with the entire list and can be stored in a variable or passed to functions like any other value >>>- Values within a list are also known as *items* # When do we typically use lists? >- Lists are one of the most common data structures programmers use >- And the short answer is that we use lists whenever we have a need that matches the list data structure's useful features such as: 1. We use lists if we need to maintain order. >>- By order we don't me sorted order, just listed order. But we will learn how to sort lists as well. 2. If you need to access the contents randomly by a number >>- Items in a list are all associated with an index number so we can access various data types within a list by the index number 3. If we need to go through the contents linearly (i.e., first to last) >>- And this is where `for-loops` come into play because they go through a list from start to end ## So ask yourself these questions to see if you want to use a list in Python 1. Do you want an ordered list of something? 2. Do you want to store the ordered list? 3. Do you want to access the things in the list randomly or linearly? >- If you answer *yes* to these questions then you want to use a list in your Python program # Notice the term `data structure` in the previous explanation of lists? ## So what is a `data structure`? >- Basically a data structure is a formal way to structure (organize) some data (facts) >- Some data structures can get very complex but just remember all they are is just a way to store facts in a program, that's it. >>- Lists in programming are really not different than lists in other areas of life they just live in a computer ### Let's work through some examples to get more familiar with lists, list values, and items. #### First, define three lists: `hairs`, `eyes`, `weights` #### Now, recall we can loop through lists because loops are an *iterable* object #### Remember also that we can build a list with a for loop with the `append()` method >- And we can see our list as it is being built by including a print() statement in our loop #### What value is stored for `listbuild` now? 
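The prompts above describe the cells without showing them, so here is one possible sketch of the list-building cell (`hairs` and `listbuild` are the names used in the prompts; the sample values are made up):

```
hairs = ['brown', 'black', 'blonde']

listbuild = []                 # start from an empty list
for hair in hairs:
    listbuild.append(hair)     # add one item per iteration
    print(listbuild)           # watch the list grow as it is built

listbuild                      # the full list value: ['brown', 'black', 'blonde']
```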
#### Note: the entire list we see in the output of the previous cell is the value for `listbuild` >- This is different then the individual values (aka items) in the list #### Another Note: Python considers the return value from `range(5)` the list-like value: [0,1,2,3,4] >- These next examples show how the return value of `range(5)` is list-like >- Note the return values when running each of the next two cells #### Recall: we can use print() to see the iterations of a for loop #### Loop through a list of strings and print out the values >- Here we will show the iteration number and the item values within the list #### Now let's print the index value for the first 3 items in the cuB list #### What if we want all items in the list? #### Ok, but what if our list is really long, we want to print all items in it, and we don't know how many items there are? >- You can either count them manually or use your Python ninja skills to get them #### Recall the `len()` function which told us the number of characters in a string? >- We can also use len() to show us how many items are in a list #### Now let's pretend the `cuB` list is really long and print all the items >- Also, let's print a line telling us how long the list is at the bottom # Now let's back up and look at our code ### Notice how we wrote: cuB[i]? >- What the `[i]` part of that code does is tell Python what index in the list to access >- And because we were using a loop we told Python to loop through all the items in the list with `cuB[i]` ### We can access various items in a list using the basic syntax of: listName[indexNumber] >- For example, cuB[0], would access the first element in our list >>- Recall that index, 0, is the first item in a list ### Now let's try accessing stuff in a list >- First define a new variable, `animals`, and assign it a list value #### Now, return the third animal in the list #### Why is the third animal in the list at the index of 2? >- Basically because that is how programming counts stuff. Programming starts counting at 0, not 1. >- So what that means for us is that we have to subtract 1 when someone asks us to pull a value at a certain order number. ### Now let's try accessing multiple items in the list with something called `slicing` >- Slicing lets us get a sublist from a list >- Basic syntax for slicing is `listName[firstIndex:secondIndex]` >>- The first index is included in the slice while the second is not. #### Return the first two items in the list, `animals` #### Return the last two items in a list #### Q: Why doesn't `animals[-2:-1]` give us the last two items? ## More list slicing practice #### Grab the second through the second to last values in the list #### Grab the 5th item in the stuff list #### Grab the 3rd from last item in the stuff list ### Q: How many items are in the stuff list? ### Q: Is a certain item in a list? 
>- Using the `in` and `not in` operators to search a list >- Note: SQL uses similar keywords to filter results from a database # Let's do more things to lists such as: >- Changing values in list >- List concatenation and replication >- Remove values from a list ## Changing values in a list ### Change the second value in the stuff list to, 'howdy' ### Change the 1st item in list to match the 8th item in the list ### List concatenation and replication >- Similar to how we concatenate strings, we can use the `+` and `*` operators on lists ### Remove values from a list >- use `del` to remove items from a list ### A short program to store a shopping list from a user #### Task: Create a program to ask a user for their shopping list 1. Store the shopping list in a variable called, `shopList` 2. Prompt the user to enter an item for their list 3. Exit the program if the users hits `enter` without typing any characters 4. Print the users final list for them using a numbered list. #### Using `clear_output` to not show all the entries along the way >- Slight variation on previous program ## Using Methods to Work with Lists >- A `method` is a function that is "called on" a value >- Each data type has its own set of methods >- The list data type has several useful methods. For example: 1. To find a value in a list: try the `index()` method 2. To add values to a list: try the `append()` and/or `insert()` methods 3. To remove values from a list: try the `remove()` method 4. To sort values in a list: try the `sort()` method ### Finding the index position of an item with `index()` ### Adding values to list with `insert()` ### Removing values in a list with `remove()` ### Sorting a list with the `sort()` method #### Some notes on the sort() method >- First, Python cannot sort lists that have both numbers and letters because it doesn't no how to compare them >- Second, Like the other methods(), Python sorts the lists in place. >>- Changing things "in place" basically means making changes to the current list variable without making a copy of the list variable >>>- Non "in place" methods make a copy of the variable or object rather than changing the current variable >>- So don't try to do something like this: shopList = shopList.sort() >- Third, sort() uses ASCIIbetical order rather than actual alphabetical order >>- This means a capital 'Z' gets sorted before a lowercase 'a' #### What is the return value of an "in-place" method? #### So if you tried to reassign a variable with a method the return value of your new variable is 'None' >- We don't usually want our variables to return nothing so we wouldn't assign a variable with a method >- Moral of the story, you usually will not use methods in a reassignment of a variable ## When would we use non "in-place" methods? >- On immutable data types such as strings and tuples ### So what is a mutable or immutable data type? >- Defintion: a `mutable` data type can have values added, deleted, or changed >>- Example: a list is a `mutable` data type >- Defintion: an `immutable` data type cannot be changed >>- Examples: strings and tuples are immutable data types ## Tuple data type >- The tuple is a "list-like" data type which is almost identical to the list data type with these differences: 1. The syntax for a tuple uses `()` instead of `[]` 2. 
The main difference is that tuple is *immutable* whereas a list is *mutable* #### Define a tuple variable, `gradeWts`, to store the percentage weights for this course #### Grab an item in a tuple #### Slice a tuple ### When do we usually use a tuple? >- When we need an ordered sequence of values that we do not want to change >>- One example: in a grade calculator we wouldn't want the course weights to change >>- Another example: company defined percentages for sale items # What if you want to preserve your original list before making changes? ### Copying lists in case we want to preserve our original list >- `copy` module and its `copy()` and `deepcopy()` Functions >- Remember, many of the methods we use on lists make changes in-place so if we want to preserve our original list before applying methods to them we can make a copy #### Using our `shopList` to make a copy before making changes #### Now we can make changes to `shopList` but return to the original list stored as `shopList2` if needed #### Note: the `deepcopy()` function allows you to copy lists that contain lists # The end! ## This wraps up this notebook on `lists` <a id='top'></a>[TopPage](#Teaching-Notes)
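To round off the copy discussion above, a small illustration (not from the original notebook) of the difference between `copy.copy()` and `copy.deepcopy()` on a list that contains a list:

```
import copy

shopList = ['milk', 'eggs', ['apples', 'pears']]   # a list containing a nested list

shallow = copy.copy(shopList)        # new outer list, but the inner list is shared
deep = copy.deepcopy(shopList)       # inner list is copied as well

shopList[2].append('plums')          # mutate the nested list in the original

print(shallow[2])   # ['apples', 'pears', 'plums'] -- shallow copy sees the change
print(deep[2])      # ['apples', 'pears']          -- deep copy is unaffected
```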
``` import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import tensorflow as tf import tensorflow_hub as hub import keras import keras.backend as K from keras.layers import * from keras.callbacks import * from keras.optimizers import * from keras import Model import pickle import os def save_obj(obj, name ): with open(name + '.pkl', 'wb') as f: pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL) def load_obj(name ): with open(name + '.pkl', 'rb') as f: return pickle.load(f) for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: print(os.path.join(dirname, filename)) # Any results you write to the current directory are saved as output. train = pd.read_csv('/kaggle/input/google-quest-challenge/train.csv') test = pd.read_csv('/kaggle/input/google-quest-challenge/test.csv') submission = pd.read_csv('/kaggle/input/google-quest-challenge/sample_submission.csv') module_url = "/kaggle/input/universalsentenceencoderlarge4/" embed = hub.load(module_url) # For the keras Lambda def UniversalEmbedding(x): results = embed(tf.squeeze(tf.cast(x, tf.string)))["outputs"] print(results) return keras.backend.concatenate([results]) # setup training data targets = [ 'question_asker_intent_understanding', 'question_body_critical', 'question_conversational', 'question_expect_short_answer', 'question_fact_seeking', 'question_has_commonly_accepted_answer', 'question_interestingness_others', 'question_interestingness_self', 'question_multi_intent', 'question_not_really_a_question', 'question_opinion_seeking', 'question_type_choice', 'question_type_compare', 'question_type_consequence', 'question_type_definition', 'question_type_entity', 'question_type_instructions', 'question_type_procedure', 'question_type_reason_explanation', 'question_type_spelling', 'question_well_written', 'answer_helpful', 'answer_level_of_information', 'answer_plausible', 'answer_relevance', 'answer_satisfaction', 'answer_type_instructions', 'answer_type_procedure', 'answer_type_reason_explanation', 'answer_well_written' ] input_columns = ['question_title','question_body','answer'] X1 = train[input_columns[0]].values.tolist() X2 = train[input_columns[1]].values.tolist() X3 = train[input_columns[2]].values.tolist() X1 = [x.replace('?','.').replace('!','.') for x in X1] X2 = [x.replace('?','.').replace('!','.') for x in X2] X3 = [x.replace('?','.').replace('!','.') for x in X3] X = [X1,X2,X3] y = train[targets].values.tolist() # build network def swish(x): return K.sigmoid(x) * x embed_size = 512 #must be 512 for univerasl embedding layer input_text1 = Input(shape=(1,), dtype=tf.string) embedding1 = Lambda(UniversalEmbedding, output_shape=(embed_size,))(input_text1) input_text2 = Input(shape=(1,), dtype=tf.string) embedding2 = Lambda(UniversalEmbedding, output_shape=(embed_size,))(input_text2) input_text3 = Input(shape=(1,), dtype=tf.string) embedding3 = Lambda(UniversalEmbedding, output_shape=(embed_size,))(input_text3) x = Concatenate()([embedding1,embedding2,embedding3]) x = Dense(256, activation=swish)(x) x = Dropout(0.4)(x) x = BatchNormalization()(x) x = Dense(64, activation=swish, kernel_regularizer=keras.regularizers.l2(0.001))(x) x = Dropout(0.4)(x) x = BatchNormalization()(x) output = Dense(len(targets),activation='sigmoid',name='output')(x) model = Model(inputs=[input_text1,input_text2,input_text3], outputs=[output]) model.summary() # clean up as much as possible import gc print(gc.collect()) # Train the network reduce_lr = 
ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=2, min_lr=1e-7, verbose=1) optimizer = Adadelta() model.compile(optimizer=optimizer, loss='binary_crossentropy') model.fit(X, [y], epochs=20, validation_split=.1,batch_size=32,callbacks=[reduce_lr]) # prep test data X1 = test[input_columns[0]].values.tolist() X2 = test[input_columns[1]].values.tolist() X3 = test[input_columns[2]].values.tolist() X1 = [x.replace('?','.').replace('!','.') for x in X1] X2 = [x.replace('?','.').replace('!','.') for x in X2] X3 = [x.replace('?','.').replace('!','.') for x in X3] pred_X = [X1,X2,X3] # Make a prediction pred_y = model.predict(pred_X) # Check the submission submission = pd.read_csv('/kaggle/input/google-quest-challenge/sample_submission.csv') submission[targets] = pred_y submission.head() # Save the result submission.to_csv("submission.csv", index = False) ```
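The `save_obj`/`load_obj` pickle helpers defined at the top of this notebook are never actually used. One way they could be put to work (a sketch that assumes the `model.fit(...)` call above is assigned to a variable named `history`) is to persist the Keras training history for later inspection:

```
# Assumption: the training call above is captured, e.g.
# history = model.fit(X, [y], epochs=20, validation_split=.1, batch_size=32, callbacks=[reduce_lr])

save_obj(history.history, 'training_history')   # History.history is a plain dict of per-epoch metrics

# ...later, possibly in a fresh session
metrics = load_obj('training_history')
print(metrics['loss'][-1], metrics['val_loss'][-1])
```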
```
# Allow the PyMC3 models to be imported in the notebook folder
import os
import sys

module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)

from matplotlib import pyplot as plt
import numpy as np
from pymc3 import summary, traceplot
import pymc3 as pm

%matplotlib inline

from pymc3_models.models.LinearRegression import LinearRegression

plt.rcParams['figure.figsize'] = (15, 10)
plt.rcParams['font.size'] = 16
```
Generate Synthetic Data
===
```
X = np.random.randn(1000, 1)
noise = 2 * np.random.randn(1000, 1)
Y = 4 * X + 3 + noise
Y = np.squeeze(Y)

plt.scatter(X, Y)
```
Fit w/ ADVI
===
```
LR = LinearRegression()
LR = LR.fit(X, Y, minibatch_size=100)

LR.plot_elbo()

Y_predict = LR.predict(X)
LR.score(X, Y)

traceplot(LR.trace)
plt.show()

LR.summary

# summarise the posterior into a coefficient table, then plot the credible intervals
coefs = LR.summary.reset_index().rename(columns = {'index' : 'coef'})
coefs.head()

ypa_ci = np.array(list(zip(-coefs['hpd_2.5'] + coefs['mean'], coefs['hpd_97.5'] - coefs['mean']))).T
plt.errorbar('mean', 'coef', xerr=ypa_ci, data=coefs, fmt='ko', capthick=2, capsize=10, label=None)
plt.show()

max_x = max(X)
min_x = min(X)

m = LR.summary['mean']['betas__0_0']
b = LR.summary['mean']['alpha__0']

fig1 = plt.figure()
#ax = fig.add_subplot(111)
plt.scatter(X, Y)
plt.plot([min_x, max_x], [m*min_x + b, m*max_x + b], 'r', label='ADVI')
plt.legend()

LR.save('pickle_jar/LR_jar/')

LR4 = LinearRegression()
LR4.load('pickle_jar/LR_jar/')
LR4.score(X, Y)
```
Fit w/ NUTS
===
```
LR2 = LinearRegression()
LR2.fit(X, Y, inference_type='nuts', inference_args={'draws': 2000})

LR2.score(X, Y)

traceplot(LR2.trace)
```
Compare the two methods
===
```
max_x = max(X)
min_x = min(X)

m = LR.summary['mean']['betas__0_0']
b = LR.summary['mean']['alpha__0']
m2 = LR2.summary['mean']['betas__0_0']
b2 = LR2.summary['mean']['alpha__0']

fig1 = plt.figure()
plt.scatter(X, Y)
plt.plot([min_x, max_x], [m*min_x + b, m*max_x + b], 'r', label='ADVI')
plt.plot([min_x, max_x], [m2*min_x + b2, m2*max_x + b2], 'g', label='NUTS', alpha=0.5)
plt.legend()
```
Multiple predictors
===
```
num_pred = 10
X = np.random.randn(1000, num_pred)
noise = np.random.normal(1) * np.random.randn(1000,)
Y = X.dot(np.array([4, 5, 6, 7, 8, 9, 10, 11, 12, 13])) + 3 + noise
Y = np.squeeze(Y)

LR3 = LinearRegression()
LR3.fit(X, Y)

LR3.summary

coefs = LR3.summary.reset_index().rename(columns = {'index' : 'coef'})
ypa_ci = np.array(list(zip(-coefs['hpd_2.5'] + coefs['mean'], coefs['hpd_97.5'] - coefs['mean']))).T
plt.errorbar('mean', 'coef', xerr=ypa_ci, data=coefs, fmt='ko', capthick=2, capsize=10, label=None)
plt.show()
```
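Since the multi-predictor data above is simulated with known coefficients (4 through 13, plus an intercept of 3), a quick sanity check is a parity plot of predictions against the simulated targets. This is an added illustration, not part of the original notebook:

```
Y_hat = LR3.predict(X)

fig, ax = plt.subplots()
ax.scatter(Y, Y_hat, alpha=0.3)
lims = [min(Y.min(), Y_hat.min()), max(Y.max(), Y_hat.max())]
ax.plot(lims, lims, 'r--', label='perfect prediction')
ax.set_xlabel('simulated Y')
ax.set_ylabel('predicted Y')
ax.legend()
plt.show()
```

A cloud hugging the diagonal indicates that the ADVI fit has recovered the generating process reasonably well.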
### Experimental: Convert Library of Congress Classification codes into a classification #### Work in Progress: Do not trust the accuracy of these results. The list immediately below are just some codes to help you to get started. * Left links take you to Open Library book pages. * Right links take you to the Library of Congress page for that classification code. * You can enter either code into the application at the bottom of this page to see the classification. * Please also test other valid codes that are NOT on this list. ###### Testing instructions: 1. Scroll all the way to the bottom of this page 2. Click in the cell that contains __%run -i 'lcc_classifier.py'__ 3. Type `shift-return` to _execute that cell_ and a data entry box should appear after a few seconds 4. Enter either a Library of Congress Classification code or a Open Library ID and press return 5. A classification should appear... * Is it the correct classification? * Are the elements in the correct order? 6. Enter a blank to end the __lcc_classifier.py__ script. 7. Please keep track of bad classifications and make comments on: * https://github.com/internetarchive/openlibrary/pull/3309 Open Library ID | Library of Congress Classification code -- | -- [OL1025841M](https://openlibrary.org/books/OL1025841M) | [HB1951 .R64 1995](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=HB1951+.R64+1995) [OL1025966M](https://openlibrary.org/books/OL1025966M) | [DP402.C8 O46 1995](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=DP402.C8+O46+1995) [OL1026156M](https://openlibrary.org/books/OL1026156M) | [CS879 .R3 1995](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=CS879+.R3+1995) [OL1026211M](https://openlibrary.org/books/OL1026211M) | [NC248.S22 A4 1992](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=NC248.S22+A4+1992) [OL102629M](https://openlibrary.org/books/OL102629M) | [TJ563 .P66 1998](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=TJ563+.P66+1998) [OL1026596M](https://openlibrary.org/books/OL1026596M) | [PQ3919.2.M2866 C83 1994](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=PQ3919.2.M2866+C83+1994) [OL1026624M](https://openlibrary.org/books/OL1026624M) | [NA2500 .H64 1995](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=NA2500+.H64+1995) [OL1026668M](https://openlibrary.org/books/OL1026668M) | [PN517 .L38 1994](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=PN517+.L38+1994) [OL1026747M](https://openlibrary.org/books/OL1026747M) | [MLCM 95/14118 (P)](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=MLCM+95/14118+(P)) [OL102706M](https://openlibrary.org/books/OL102706M) | [QA331.3 .M39 1998](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=QA331.3+.M39+1998) [OL1027106M](https://openlibrary.org/books/OL1027106M) | [PT8951.12.R5 M56 1980](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=PT8951.12.R5+M56+1980) [OL1027418M](https://openlibrary.org/books/OL1027418M) | [MLCS 96/04520 (P)](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=MLCS+96/04520+(P)) [OL1027454M](https://openlibrary.org/books/OL1027454M) | [HQ755.8 .T63 1995](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=HQ755.8+.T63+1995) 
[OL1028019M](https://openlibrary.org/books/OL1028019M) | [MLCS 97/02275 (T)](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=MLCS+97/02275+(T)) [OL1028055M](https://openlibrary.org/books/OL1028055M) | [PZ70.C9 F657 1995](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=PZ70.C9+F657+1995) [OL1028253M](https://openlibrary.org/books/OL1028253M) | [HC241 .G683 1995](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=HC241+.G683+1995) [OL1028626M](https://openlibrary.org/books/OL1028626M) | [MLCS 95/08574 (U)](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=MLCS+95/08574+(U)) [OL1028701M](https://openlibrary.org/books/OL1028701M) | [HC371 .M45 nr. 122](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=HC371+.M45+nr.+122) [OL102878M](https://openlibrary.org/books/OL102878M) | [MLCS 2002/05802 (P)](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=MLCS+2002/05802+(P)) [OL1029016M](https://openlibrary.org/books/OL1029016M) | [IN PROCESS](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=IN+PROCESS) [OL102935M](https://openlibrary.org/books/OL102935M) | [KLA940 .K65 1990](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=KLA940+.K65+1990) [OL1029463M](https://openlibrary.org/books/OL1029463M) | [KHA878 .G37 1996](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=KHA878+.G37+1996) [OL1029540M](https://openlibrary.org/books/OL1029540M) | [KHH3003 .Q57 1995](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=KHH3003+.Q57+1995) [OL1030429M](https://openlibrary.org/books/OL1030429M) | [TX819.A1 T733 1991](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=TX819.A1+T733+1991) [OL1030465M](https://openlibrary.org/books/OL1030465M) | [PQ7298.12.A40 S26 1987](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=PQ7298.12.A40+S26+1987) [OL1030780M](https://openlibrary.org/books/OL1030780M) | [HM216 .G44 1993](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=HM216+.G44+1993) [OL1030894M](https://openlibrary.org/books/OL1030894M) | [SD409 .A38 1990](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=SD409+.A38+1990) [OL1031493M](https://openlibrary.org/books/OL1031493M) | [J451 .N4 1990z](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=J451+.N4+1990z) [OL1031615M](https://openlibrary.org/books/OL1031615M) | [TR850 .F88 1993](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=TR850+.F88+1993) [OL1031659M](https://openlibrary.org/books/OL1031659M) | [MLCS 93/14492 (P)](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=MLCS+93/14492+(P)) [OL1031710M](https://openlibrary.org/books/OL1031710M) | [KK2222 .L36 1993](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=KK2222+.L36+1993) [OL1031822M](https://openlibrary.org/books/OL1031822M) | [G525 .M486 1991](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=G525+.M486+1991) [OL1032690M](https://openlibrary.org/books/OL1032690M) | [HM261 .H47 1993](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=HM261+.H47+1993) [OL1032795M](https://openlibrary.org/books/OL1032795M) | [PQ8098.23.O516 L38 
1988](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=PQ8098.23.O516+L38+1988) [OL1032953M](https://openlibrary.org/books/OL1032953M) | [PL191 .I94 1992](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=PL191+.I94+1992) [OL1033073M](https://openlibrary.org/books/OL1033073M) | [LF3194.C65 A657 1992](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=LF3194.C65+A657+1992) [OL103482M](https://openlibrary.org/books/OL103482M) | [PG5438.V25 J47 1999](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=PG5438.V25+J47+1999) [OL1035916M](https://openlibrary.org/books/OL1035916M) | [HG1615 .M32 1993](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=HG1615+.M32+1993) [OL1036001M](https://openlibrary.org/books/OL1036001M) | [KF27 .A3 1992h](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=KF27+.A3+1992h) [OL103608M](https://openlibrary.org/books/OL103608M) | [PT1937.A1 G35 1999](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=PT1937.A1+G35+1999) [OL1036126M](https://openlibrary.org/books/OL1036126M) | [MLCS 98/02371 (H)](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=MLCS+98/02371+(H)) [OL1036553M](https://openlibrary.org/books/OL1036553M) | [MLCM 93/05262 (D)](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=MLCM+93/05262+(D)) [OL1036719M](https://openlibrary.org/books/OL1036719M) | [KF3613.4 .C34](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=KF3613.4+.C34) [OL1036755M](https://openlibrary.org/books/OL1036755M) | [DR1313.3 .U54 1993](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=DR1313.3+.U54+1993) [OL1037020M](https://openlibrary.org/books/OL1037020M) | [DS557.8.M9 B55 1992b](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=DS557.8.M9+B55+1992b) [OL1037176M](https://openlibrary.org/books/OL1037176M) | [DR82 .G46 1993](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=DR82+.G46+1993) [OL1037305M](https://openlibrary.org/books/OL1037305M) | [PT2678.E3393 S36 1993](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=PT2678.E3393+S36+1993) [OL1037349M](https://openlibrary.org/books/OL1037349M) | [HN530.2.A85 I86 1992](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=HN530.2.A85+I86+1992) [OL1037631M](https://openlibrary.org/books/OL1037631M) | [TK5105.5 .O653 1993](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=TK5105.5+.O653+1993) [OL1038111M](https://openlibrary.org/books/OL1038111M) | [AM79.5.B26 B34 1993](https://catalog.loc.gov/vwebv/search?searchCode=CALL%2B&searchType=1&searchArg=AM79.5.B26+B34+1993) ``` %run -i 'lcc_classifier.py' ```
### Iteration

#### The `for` Loop

Unlike Java, Python does not have a `for` loop with syntax such as:

```
for (int i=0; i < 10; i++)
```

Instead the only `for` loop Python has is the "for-each" clause of Java:

```
for (int i: numbers)
```

that is used to iterate over a collection of objects. In Python we simply write:

```
for my_var in my_list:
    <code block>
```

Just like the `if` clauses, the `for` loop body is simply **indented** code. Any code line following a `for` loop that is unindented does *not* belong to the for loop.

The `my_list` can be any collection type (more specifically any iterable - objects that support iteration). So lists, tuples, strings, sets and dictionaries are all iterables, and can therefore be used in `for` loops.

Let's see a few examples:

```
for item in [10, 'hello', 1+1j]:
    print(item)

for item in (10, 'hello', 1+1j):
    print(item)
```

Remember that the `,` is what Python uses to indicate tuples, not really the `()` except in rare circumstances. This means the last loop could have been written this way too, although I personally prefer using the `()` as this makes the code more explicit:

```
for item in 10, 'hello', 1+1j:
    print(item)
```

Strings are iterables too:

```
for c in 'PYTHON':
    print(c)
```

As are sets:

```
for item in {'a', 10, 1+1j}:
    print(item)
```

Dictionaries are also iterable, but by default the iteration happens over the **keys** of the dictionary:

```
d = {
    'a': 10,
    'b': 'Python',
    'c': 1+1j
}

for key in d:
    print(key)
```

You are probably wondering how we could then create a `for` loop that simply produces integers from some starting point to some (non-inclusive) end point, like this Java code would do:

```
for (int i=0; i < 10; i++) {
    System.out.println(i);
}
```

Python has another type of iterable (container type object) called **generators**. Generators are beyond the scope of this primer, but think of them as iterable objects that do not produce the requested value until requested to do so when iterating over the collection. This is sometimes called *lazy* evaluation, and is something that is very common in Python 3.

Python has a special function, called `range()`, that creates a generator-like sequence of numbers. But in order to see what those numbers are we have to iterate over the generator (the `range` object) - we can use a `for` loop, or simply use the `list()` function which, remember, can take any iterable (including a generator) as an argument.

The `range` function has these arguments:

```
help(range)
```

So `range` can
- take a single argument, which would be the *stop* (non-inclusive) value, with a default start value of `0` (inclusive)
- take two arguments corresponding to *start* (inclusive) and *stop* (exclusive) values
- take three values corresponding to *start*, *stop* and *step* values

```
list(range(5))
list(range(1, 5))
list(range(1, 10, 2))
```

As I mentioned, the return value of the `range` function is an iterable, but not a list or tuple:

```
type(range(1, 5))
```

So we can iterate over it using a `for` loop:

```
for i in range(10):
    if i % 2:
        print('odd', i)
```

We can also terminate loops early, by using the `break` statement:

```
for i in range(1_000_000):
    if i > 5:
        break
    print(i)
```

As you can see, the loop should have run through `1,000,000` iterations, but our `break` statement cut it short.
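A more realistic use of `break` (an added example) is to stop scanning a collection as soon as the first match is found, instead of always walking the whole thing:

```
numbers = [3, 7, 12, 18, 25]

for n in numbers:
    if n % 2 == 0:
        print('first even number:', n)
        break      # no need to look at the rest of the list
```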
#### The `while` Loop Python also has a `while` loop, that looks, and behaves very similarly to Java's `while` loop: The syntax is: ``` while <expr>: <code block> ``` and the loop will run as long as `<expr>` is `True` (or **truthy** to be exact). ``` i = 0 while i < 5: print(i) i += 1 ``` Again, we can also use a break inside a `while` loop: ``` i = 0 while True: # this is an infinite loop! i += 1 if i > 5: break print(i) ``` Another problem that often comes up is how to iterate over a collection such as a list, and replace values in the list as we iterate over it. As we have seen before, to modify a value at some specific location in the list, we need to know it's index, so that we can assign this way: `lst[i] = value`. In Java, this is straightforward - we iterate over the valid index numbers of an array say, and modify the array values. You could do the same thing in Python: ``` lst = ['this', 'is', 'a', 'dead', 'parrot'] ``` Now suppose we want to make each string all upper case if it is more than 2 characters long: ``` for i in range(len(lst)): print(i) ``` As you can see this iterates over all the valid indexes of `lst`. So now we could do this: ``` lst = ['this', 'is', 'a', 'dead', 'parrot'] for i in range(len(lst)): if len(lst[i]) > 2: lst[i] = lst[i].upper() print(lst) ``` But this type of code will get you bad looks from seasoned Python developers. Instead, we can use the `enumerate()` function which is a generator (remember those?), containing **tuples** of the element index and the element itself in the iterable argument passed to it. Let's see this: ``` lst = ['this', 'is', 'a', 'dead', 'parrot'] list(enumerate(lst)) ``` So now, we can simplify our code somewhat: ``` lst = ['this', 'is', 'a', 'dead', 'parrot'] for item in enumerate(lst): index = item[0] s = item[1] if len(s) > 2: lst[index] = s.upper() print(lst) ``` which is definitely better than what we had before: ``` lst = ['this', 'is', 'a', 'dead', 'parrot'] for i in range(len(lst)): if len(lst[i]) > 2: lst[i] = lst[i].upper() print(lst) ``` But we can do better still!! Which leads to the next topic.
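The text above hints that the `enumerate` loop can be tightened further. One likely refinement (a sketch of where this is probably heading, not necessarily the exact next topic) is to unpack each `(index, item)` tuple directly in the `for` statement:

```
lst = ['this', 'is', 'a', 'dead', 'parrot']

for index, s in enumerate(lst):    # unpack the (index, item) tuples directly
    if len(s) > 2:
        lst[index] = s.upper()

print(lst)   # ['THIS', 'is', 'a', 'DEAD', 'PARROT']
```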
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-azurebatch-to-run-a-windows-executable.png) # Azure Machine Learning Pipeline with AzureBatchStep This notebook is used to demonstrate the use of AzureBatchStep in Azure Machine Learning Pipeline. An AzureBatchStep will submit a job to an AzureBatch Compute to run a simple windows executable. ## Azure Machine Learning and Pipeline SDK-specific Imports ``` import azureml.core from azureml.core import Workspace, Experiment from azureml.core.compute import ComputeTarget, BatchCompute from azureml.core.datastore import Datastore from azureml.data.data_reference import DataReference from azureml.exceptions import ComputeTargetException from azureml.pipeline.core import Pipeline, PipelineData from azureml.pipeline.steps import AzureBatchStep import os from os import path from tempfile import mkdtemp # Check core SDK version number print("SDK version:", azureml.core.VERSION) ``` ## Initialize Workspace Initialize a workspace object from persisted configuration. Make sure the config file is present at .\config.json If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, If you don't have a config.json file, please go through the [configuration Notebook](https://aka.ms/pl-config) located [here](https://github.com/Azure/MachineLearningNotebooks). This sets you up with a working config file that has information on your workspace, subscription id, etc. ``` ws = Workspace.from_config() print('Workspace Name: ' + ws.name, 'Azure Region: ' + ws.location, 'Subscription Id: ' + ws.subscription_id, 'Resource Group: ' + ws.resource_group, sep = '\n') ``` ## Attach Batch Compute to Workspace To submit jobs to Azure Batch service, you must attach your Azure Batch account to the workspace. ``` batch_compute_name = 'mybatchcompute' # Name to associate with new compute in workspace # Batch account details needed to attach as compute to workspace batch_account_name = "<batch_account_name>" # Name of the Batch account batch_resource_group = "<batch_resource_group>" # Name of the resource group which contains this account try: # check if already attached batch_compute = BatchCompute(ws, batch_compute_name) except ComputeTargetException: print('Attaching Batch compute...') provisioning_config = BatchCompute.attach_configuration(resource_group=batch_resource_group, account_name=batch_account_name) batch_compute = ComputeTarget.attach(ws, batch_compute_name, provisioning_config) batch_compute.wait_for_completion() print("Provisioning state:{}".format(batch_compute.provisioning_state)) print("Provisioning errors:{}".format(batch_compute.provisioning_errors)) print("Using Batch compute:{}".format(batch_compute.cluster_resource_id)) ``` ## Setup Datastore Setting up the Blob storage associated with the workspace. The following call retrieves the Azure Blob Store associated with your workspace. Note that workspaceblobstore is **the name of this store and CANNOT BE CHANGED and must be used as is**. 
If you want to register another Datastore, please follow the instructions from here: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data#register-a-datastore ``` datastore = Datastore(ws, "workspaceblobstore") print('Datastore details:') print('Datastore Account Name: ' + datastore.account_name) print('Datastore Workspace Name: ' + datastore.workspace.name) print('Datastore Container Name: ' + datastore.container_name) ``` ## Setup Input and Output For this example we will upload a file in the provided Datastore. These are some helper methods to achieve that. ``` def create_local_file(content, file_name): # create a file in a local temporary directory temp_dir = mkdtemp() with open(path.join(temp_dir, file_name), 'w') as f: f.write(content) return temp_dir def upload_file_to_datastore(datastore, file_name, content): src_dir = create_local_file(content=content, file_name=file_name) datastore.upload(src_dir=src_dir, overwrite=True, show_progress=True) ``` Here we associate the input DataReference with an existing file in the provided Datastore. Feel free to upload the file of your choice manually or use the *upload_file_to_datastore* method. ``` file_name="input.txt" upload_file_to_datastore(datastore=datastore, file_name=file_name, content="this is the content of the file") testdata = DataReference(datastore=datastore, path_on_datastore=file_name, data_reference_name="input") outputdata = PipelineData(name="output", datastore=datastore) ``` ## Setup AzureBatch Job Binaries AzureBatch can run a task within the job and here we put a simple .cmd file to be executed. Feel free to put any binaries in the folder, or modify the .cmd file as needed, they will be uploaded once we create the AzureBatch Step. ``` binaries_folder = "azurebatch/job_binaries" if not os.path.isdir(binaries_folder): os.makedirs(binaries_folder) file_name="azurebatch.cmd" with open(path.join(binaries_folder, file_name), 'w') as f: f.write("copy \"%1\" \"%2\"") ``` ## Create an AzureBatchStep AzureBatchStep is used to submit a job to the attached Azure Batch compute. - **name:** Name of the step - **pool_id:** Name of the pool, it can be an existing pool, or one that will be created when the job is submitted - **inputs:** List of inputs that will be processed by the job - **outputs:** List of outputs the job will create - **executable:** The executable that will run as part of the job - **arguments:** Arguments for the executable. They can be plain string format, inputs, outputs or parameters - **compute_target:** The compute target where the job will run. - **source_directory:** The local directory with binaries to be executed by the job Optional parameters: - **create_pool:** Boolean flag to indicate whether create the pool before running the jobs - **delete_batch_job_after_finish:** Boolean flag to indicate whether to delete the job from Batch account after it's finished - **delete_batch_pool_after_finish:** Boolean flag to indicate whether to delete the pool after the job finishes - **is_positive_exit_code_failure:** Boolean flag to indicate if the job fails if the task exists with a positive code - **vm_image_urn:** If create_pool is true and VM uses VirtualMachineConfiguration. Value format: 'urn:publisher:offer:sku'. 
Example: urn:MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter For more details: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/cli-ps-findimage#table-of-commonly-used-windows-images and https://docs.microsoft.com/en-us/azure/virtual-machines/linux/cli-ps-findimage#find-specific-images - **run_task_as_admin:** Boolean flag to indicate if the task should run with Admin privileges - **target_compute_nodes:** Assumes create_pool is true, indicates how many compute nodes will be added to the pool - **source_directory:** Local folder that contains the module binaries, executable, assemblies etc. - **executable:** Name of the command/executable that will be executed as part of the job - **arguments:** Arguments for the command/executable - **inputs:** List of input port bindings - **outputs:** List of output port bindings - **vm_size:** If create_pool is true, indicating Virtual machine size of the compute nodes - **compute_target:** BatchCompute compute - **allow_reuse:** Whether the module should reuse previous results when run with the same settings/inputs - **version:** A version tag to denote a change in functionality for the module ``` step = AzureBatchStep( name="Azure Batch Job", pool_id="MyPoolName", # Replace this with the pool name of your choice inputs=[testdata], outputs=[outputdata], executable="azurebatch.cmd", arguments=[testdata, outputdata], compute_target=batch_compute, source_directory=binaries_folder, ) ``` ## Build and Submit the Pipeline ``` pipeline = Pipeline(workspace=ws, steps=[step]) pipeline_run = Experiment(ws, 'azurebatch_sample').submit(pipeline) ``` ## Visualize the Running Pipeline ``` from azureml.widgets import RunDetails RunDetails(pipeline_run).show() ```
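The `RunDetails` widget above only renders in an interactive notebook session. As a minimal sketch (not part of the original sample), a script-style run could instead block until the pipeline finishes using the standard run API:

```
# Wait for the submitted pipeline run to complete and report its final status.
# Assumes `pipeline_run` is the run returned by Experiment.submit() above.
pipeline_run.wait_for_completion(show_output=True)
print("Final status:", pipeline_run.get_status())
```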
github_jupyter
import azureml.core from azureml.core import Workspace, Experiment from azureml.core.compute import ComputeTarget, BatchCompute from azureml.core.datastore import Datastore from azureml.data.data_reference import DataReference from azureml.exceptions import ComputeTargetException from azureml.pipeline.core import Pipeline, PipelineData from azureml.pipeline.steps import AzureBatchStep import os from os import path from tempfile import mkdtemp # Check core SDK version number print("SDK version:", azureml.core.VERSION) ws = Workspace.from_config() print('Workspace Name: ' + ws.name, 'Azure Region: ' + ws.location, 'Subscription Id: ' + ws.subscription_id, 'Resource Group: ' + ws.resource_group, sep = '\n') batch_compute_name = 'mybatchcompute' # Name to associate with new compute in workspace # Batch account details needed to attach as compute to workspace batch_account_name = "<batch_account_name>" # Name of the Batch account batch_resource_group = "<batch_resource_group>" # Name of the resource group which contains this account try: # check if already attached batch_compute = BatchCompute(ws, batch_compute_name) except ComputeTargetException: print('Attaching Batch compute...') provisioning_config = BatchCompute.attach_configuration(resource_group=batch_resource_group, account_name=batch_account_name) batch_compute = ComputeTarget.attach(ws, batch_compute_name, provisioning_config) batch_compute.wait_for_completion() print("Provisioning state:{}".format(batch_compute.provisioning_state)) print("Provisioning errors:{}".format(batch_compute.provisioning_errors)) print("Using Batch compute:{}".format(batch_compute.cluster_resource_id)) datastore = Datastore(ws, "workspaceblobstore") print('Datastore details:') print('Datastore Account Name: ' + datastore.account_name) print('Datastore Workspace Name: ' + datastore.workspace.name) print('Datastore Container Name: ' + datastore.container_name) def create_local_file(content, file_name): # create a file in a local temporary directory temp_dir = mkdtemp() with open(path.join(temp_dir, file_name), 'w') as f: f.write(content) return temp_dir def upload_file_to_datastore(datastore, file_name, content): src_dir = create_local_file(content=content, file_name=file_name) datastore.upload(src_dir=src_dir, overwrite=True, show_progress=True) file_name="input.txt" upload_file_to_datastore(datastore=datastore, file_name=file_name, content="this is the content of the file") testdata = DataReference(datastore=datastore, path_on_datastore=file_name, data_reference_name="input") outputdata = PipelineData(name="output", datastore=datastore) binaries_folder = "azurebatch/job_binaries" if not os.path.isdir(binaries_folder): os.makedirs(binaries_folder) file_name="azurebatch.cmd" with open(path.join(binaries_folder, file_name), 'w') as f: f.write("copy \"%1\" \"%2\"") step = AzureBatchStep( name="Azure Batch Job", pool_id="MyPoolName", # Replace this with the pool name of your choice inputs=[testdata], outputs=[outputdata], executable="azurebatch.cmd", arguments=[testdata, outputdata], compute_target=batch_compute, source_directory=binaries_folder, ) pipeline = Pipeline(workspace=ws, steps=[step]) pipeline_run = Experiment(ws, 'azurebatch_sample').submit(pipeline) from azureml.widgets import RunDetails RunDetails(pipeline_run).show()
0.35031
0.886911
# Papermill Report Generator ``` import os import pandas as pd import numpy as np import plotnine as pn import seaborn as sns import datetime as dt import matplotlib.pyplot as plt import pdfkit #Check Dataframe Utility function def check_df(dataframe, sample=False): print(f"Dataframe Shape: {dataframe.shape} with rows: {dataframe.shape[0]} and columns: {dataframe.shape[1]}") print(f"\nDF Columns: \n{list(dataframe.columns)}") if sample == True: print(f"\nData:\n{dataframe.head(5)}") return None #Define the default parameters analysis = "listings" #Import the data def import_data(analysis, folder_path=None): if not folder_path: folder_path = os.path.abspath(".") data_dir = 'data' folder_path = os.path.join(folder_path, data_dir) if analysis == 'listings': filename = 'listings.csv' elif analysis == 'reviews': filename = 'reviews.csv' elif analysis == 'calendar': filename = 'calendar.csv' filepath = os.path.join(folder_path, filename) df = pd.read_csv(filepath) check_df(df) return df ## Data cleaning Listings @np.vectorize def remove_dollar(label: str): return float(label.replace('$','').replace(',','')) if analysis == 'listings': #Import dei dati df = import_data(analysis) # Selezioniamo solo alcune delle colonne listings = df[[ 'id','name','longitude','latitude', 'listing_url', 'instant_bookable', 'host_response_time', 'review_scores_rating', 'property_type', 'room_type','accommodates', 'bathrooms','bedrooms','beds','reviews_per_month','amenities', 'number_of_reviews', 'price' ]] #listings['price'] = remove_dollar(listings['price']) listings = listings.assign(price = remove_dollar(listings.price)) listings[['price']] print("Listings dataset readed and parsed") df_clean = listings.copy() ## Data cleaning Reviews if analysis == 'reviews': #Import dei dati df = import_data(analysis) #Date to datetime reviews = df.assign(date = pd.to_datetime(df['date'])) reviews['year'] = reviews['date'].dt.year reviews['month'] = reviews['date'].dt.month reviews = reviews.sort_values(['year', 'month'], ascending=False) print("Reviews dataset readed and parsed") df_clean = reviews.copy() ## Data cleaning Calendar if analysis == 'calendar': # Import dei dati df = import_data(analysis) calendar = df.assign(date = pd.to_datetime(df['date'])) calendar = calendar.assign( price = pd.to_numeric(calendar.price.str.replace('$','').str.replace(',','')), # adjusted_price = pd.to_numeric(calendar.adjusted_price.str.replace('$','').str.replace(',','')), ) calendar['year'] = pd.DatetimeIndex(calendar['date']).year calendar['month'] = pd.DatetimeIndex(calendar['date']).month calendar = calendar.sort_values(['year', 'month'], ascending=False) calendar['available'] = calendar.available.map({ 't': True, 'f': False }) print("Calendar dataset readed and parsed") df_clean = calendar.copy() ``` # 2. 
Generate analysis and plots ``` # Simple Analysis Generation if analysis == 'listings': room_type_count = ( df_clean.groupby("room_type", dropna=False) .id.count() .reset_index() .rename(columns={"id": "listing_count"}) ) night_price = df_clean.agg({"price": [np.mean]}) night_price_room = df_clean.groupby("room_type").agg( {"price": [np.mean]} ) elif analysis == 'reviews': pass elif analysis == 'calendar': pass # Simple Plot Generation if analysis == 'listings': fig1 = ( pn.ggplot(df_clean) + pn.aes(x='room_type', fill='room_type') + pn.geom_bar() + pn.theme(axis_text_x=pn.element_text(angle=45, hjust=1)) ) fig1_path = os.path.join(os.path.abspath('.'),'plot1.png') fig1.save(filename=fig1_path) fig2 = ( pn.ggplot(df_clean) + pn.aes(x="price") + pn.geom_histogram(fill="blue", colour="black", bins=30) + pn.xlim(0, 200) ) fig2_path = os.path.join(os.path.abspath('.'),'plot2.png') fig2.save(filename=fig2_path) elif analysis == 'reviews': pass elif analysis == 'calendar': pass ``` # 3. Creating the final PDF Report ``` # Defining the report date for the analysis today = str(dt.date.today()).replace('-', '/') # HTML template to add our data and plots report_template = f''' <!DOCTYPE html> <html> <head> <meta charset='utf-8'> <title>PythonBiellaGroup Report Example</title> <link rel='stylesheet' href='report.css'> <style> h1 {{ font-family: Arial; font-size: 300%; }} h2 {{ font-family: Arial; font-size: 200%; }} @page {{ size: 7in 9.25in; margin: 27mm 16mm 27mm 16mm; }} </style> </head> <h1 align="center">Analysis for: {analysis}</h1> <h2 align="center">Report date: {today}</h2> <figure> <img src="{fig1_path}" width="1200" height="600"> </figure> <figure> <img src="{fig2_path}" width="1200" height="600"> </figure> </html> ''' # Save HTML string to file html_report = os.path.join(os.path.abspath("."),f"{analysis.split(',')[0].replace(' ','_')}_report.html") with open(html_report, "w") as r: r.write(report_template) ``` Be careful! To export the HTML report to PDF with pdfkit, you need to install `wkhtmltopdf` on your machine: - https://stackoverflow.com/questions/27673870/cant-create-pdf-using-python-pdfkit-error-no-wkhtmltopdf-executable-found ``` # Use pdfkit to create the pdf report from the HTML file pdfkit.from_file(html_report, os.path.join(os.path.abspath("."),f"{analysis.split(',')[0].replace(' ', '_')}_report.pdf")) ```
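If `wkhtmltopdf` is installed but not on your `PATH`, `pdfkit` can be pointed at the binary explicitly. A minimal sketch, assuming a hypothetical install location that you would adjust for your machine:

```
# Point pdfkit at an explicit wkhtmltopdf binary (example path only).
config = pdfkit.configuration(wkhtmltopdf="/usr/local/bin/wkhtmltopdf")
pdf_report = os.path.join(os.path.abspath("."), f"{analysis}_report.pdf")
pdfkit.from_file(html_report, pdf_report, configuration=config)
```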
github_jupyter
import os import pandas as pd import numpy as np import plotnine as pn import seaborn as sns import datetime as dt import matplotlib.pyplot as plt import pdfkit #Check Dataframe Utility function def check_df(dataframe, sample=False): print(f"Dataframe Shape: {dataframe.shape} with rows: {dataframe.shape[0]} and columns: {dataframe.shape[1]}") print(f"\nDF Columns: \n{list(dataframe.columns)}") if sample == True: print(f"\nData:\n{dataframe.head(5)}") return None #Define the default parameters analysis = "listings" #Import the data def import_data(analysis, folder_path=None): if not folder_path: folder_path = os.path.abspath(".") data_dir = 'data' folder_path = os.path.join(folder_path, data_dir) if analysis == 'listings': filename = 'listings.csv' elif analysis == 'reviews': filename = 'reviews.csv' elif analysis == 'calendar': filename = 'calendar.csv' filepath = os.path.join(folder_path, filename) df = pd.read_csv(filepath) check_df(df) return df ## Data cleaning Listings @np.vectorize def remove_dollar(label: str): return float(label.replace('$','').replace(',','')) if analysis == 'listings': #Import dei dati df = import_data(analysis) # Selezioniamo solo alcune delle colonne listings = df[[ 'id','name','longitude','latitude', 'listing_url', 'instant_bookable', 'host_response_time', 'review_scores_rating', 'property_type', 'room_type','accommodates', 'bathrooms','bedrooms','beds','reviews_per_month','amenities', 'number_of_reviews', 'price' ]] #listings['price'] = remove_dollar(listings['price']) listings = listings.assign(price = remove_dollar(listings.price)) listings[['price']] print("Listings dataset readed and parsed") df_clean = listings.copy() ## Data cleaning Reviews if analysis == 'reviews': #Import dei dati df = import_data(analysis) #Date to datetime reviews = df.assign(date = pd.to_datetime(df['date'])) reviews['year'] = reviews['date'].dt.year reviews['month'] = reviews['date'].dt.month reviews = reviews.sort_values(['year', 'month'], ascending=False) print("Reviews dataset readed and parsed") df_clean = reviews.copy() ## Data cleaning Calendar if analysis == 'calendar': # Import dei dati df = import_data(analysis) calendar = df.assign(date = pd.to_datetime(df['date'])) calendar = calendar.assign( price = pd.to_numeric(calendar.price.str.replace('$','').str.replace(',','')), # adjusted_price = pd.to_numeric(calendar.adjusted_price.str.replace('$','').str.replace(',','')), ) calendar['year'] = pd.DatetimeIndex(calendar['date']).year calendar['month'] = pd.DatetimeIndex(calendar['date']).month calendar = calendar.sort_values(['year', 'month'], ascending=False) calendar['available'] = calendar.available.map({ 't': True, 'f': False }) print("Calendar dataset readed and parsed") df_clean = calendar.copy() # Simple Analysis Generation if analysis == 'listings': room_type_count = ( df_clean.groupby("room_type", dropna=False) .id.count() .reset_index() .rename(columns={"id": "listing_count"}) ) night_price = df_clean.agg({"price": [np.mean]}) night_price_room = df_clean.groupby("room_type").agg( {"price": [np.mean]} ) elif analysis == 'reviews': pass elif analysis == 'calendar': pass # Simply Plot Generation if analysis == 'listings': fig1 = ( pn.ggplot(df_clean) + pn.aes(x='room_type', fill='room_type') + pn.geom_bar() + pn.theme(axis_text_x=pn.element_text(angle=45, hjust=1)) ) fig1_path = os.path.join(os.path.abspath('.'),'plot1.png') fig1.save(filename=fig1_path) fig2 = ( pn.ggplot(df_clean) + pn.aes(x="price") + pn.geom_histogram(fill="blue", colour="black", bins=30) + 
pn.xlim(0, 200) ) fig2_path = os.path.join(os.path.abspath('.'),'plot2.png') fig2.save(filename=fig2_path) elif analysis == 'reviews': pass elif analysis == 'calendar': pass # Defining start and send date for the analysis today = str(dt.date.today()).replace('-', '/') # HTML template to add our data and plots report_template = f''' <!DOCTYPE html> <html> <head> <meta charset='utf-8'> <title>PythonBiellaGroup Report Example</title> <link rel='stylesheet' href='report.css'> <style> h1 {{ font-family: Arial; font-size: 300%; }} h2 {{ font-family: Arial; font-size: 200%; }} @page {{ size: 7in 9.25in; margin: 27mm 16mm 27mm 16mm; }} </style> </head> <h1 align="center">Analysis for: {analysis}</h1> <h2 align="center">Report date: {today}</h2> <figure> <img src="{fig1_path}" width="1200" height="600"> </figure> <figure> <img src="{fig2_path}" width="1200" height="600"> </figure> </html> ''' # Save HTML string to file html_report = os.path.join(os.path.abspath("."),f"{analysis.split(',')[0].replace(' ','_')}_report.html") with open(html_report, "w") as r: r.write(report_template) # Use pdfkit to create the pdf report from the pdfkit.from_file(html_report, os.path.join(os.path.abspath("."),f"{analysis.split(',')[0].replace(' ', '_')}_report.pdf"))
0.452778
0.648188
# Machine Learning Engineer Nanodegree ## Introduction and Foundations ## Project: Titanic Survival Exploration In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions. > **Tip:** Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook. # Getting Started To begin working with the RMS Titanic passenger data, we'll first need to `import` the functionality we need, and load our data into a `pandas` DataFrame. Run the code cell below to load our data and display the first few entries (passengers) for examination using the `.head()` function. > **Tip:** You can run a code cell by clicking on the cell and using the keyboard shortcut **Shift + Enter** or **Shift + Return**. Alternatively, a code cell can be executed using the **Play** button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. [Markdown](http://daringfireball.net/projects/markdown/syntax) allows you to write easy-to-read plain text that can be converted to HTML. ``` # Import libraries necessary for this project import numpy as np import pandas as pd from IPython.display import display # Allows the use of display() for DataFrames # Import supplementary visualizations code visuals.py import visuals as vs # Pretty display for notebooks %matplotlib inline # Load the dataset in_file = 'titanic_data.csv' full_data = pd.read_csv(in_file) # Print the first few entries of the RMS Titanic data display(full_data.head()) ``` From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship: - **Survived**: Outcome of survival (0 = No; 1 = Yes) - **Pclass**: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class) - **Name**: Name of passenger - **Sex**: Sex of the passenger - **Age**: Age of the passenger (Some entries contain `NaN`) - **SibSp**: Number of siblings and spouses of the passenger aboard - **Parch**: Number of parents and children of the passenger aboard - **Ticket**: Ticket number of the passenger - **Fare**: Fare paid by the passenger - **Cabin** Cabin number of the passenger (Some entries contain `NaN`) - **Embarked**: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton) Since we're interested in the outcome of survival for each passenger or crew member, we can remove the **Survived** feature from this dataset and store it as its own separate variable `outcomes`. We will use these outcomes as our prediction targets. Run the code cell below to remove **Survived** as a feature of the dataset and store it in `outcomes`. ``` # Store the 'Survived' feature in a new variable and remove it from the dataset outcomes = full_data['Survived'] data = full_data.drop('Survived', axis = 1) # Show the new dataset with 'Survived' removed display(data.head()) ``` The very same sample of the RMS Titanic data now shows the **Survived** feature removed from the DataFrame. 
Note that `data` (the passenger data) and `outcomes` (the outcomes of survival) are now *paired*. That means for any passenger `data.loc[i]`, they have the survival outcome `outcomes[i]`. To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how *accurate* our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our `accuracy_score` function and test a prediction on the first five passengers. **Think:** *Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?* ``` def accuracy_score(truth, pred): """ Returns accuracy score for input truth and predictions. """ # Ensure that the number of predictions matches number of outcomes if len(truth) == len(pred): # Calculate and return the accuracy as a percent return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100) else: return "Number of predictions does not match number of outcomes!" # Test the 'accuracy_score' function predictions = pd.Series(np.ones(5, dtype = int)) print accuracy_score(outcomes[:5], predictions) ``` > **Tip:** If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off. # Making Predictions If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking. The `predictions_0` function below will always predict that a passenger did not survive. ``` def predictions_0(data): """ Model with no features. Always predicts a passenger did not survive. """ predictions = [] for _, passenger in data.iterrows(): # Predict the survival of 'passenger' predictions.append(0) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_0(data) ``` ### Question 1 *Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?* **Hint:** Run the code cell below to see the accuracy of this prediction. ``` print accuracy_score(outcomes, predictions) ``` **Answer:** Predictions have an accuracy of 61.62%. *** Let's take a look at whether the feature **Sex** has any indication of survival rates among passengers using the `survival_stats` function. This function is defined in the `titanic_visualizations.py` Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across. Run the code cell below to plot the survival outcomes of passengers based on their sex. ``` vs.survival_stats(data, outcomes, 'Sex') ``` Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females *did* survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. 
Otherwise, we will predict the passenger did not survive. Fill in the missing code below so that the function will make this prediction. **Hint:** You can access the values of each feature for a passenger like a dictionary. For example, `passenger['Sex']` is the sex of the passenger. ``` def predictions_1(data): """ Model with one feature: - Predict a passenger survived if they are female. """ predictions = [] for _, passenger in data.iterrows(): # Remove the 'pass' statement below # and write your prediction conditions here predictions.append(passenger.Sex=='female') # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_1(data) ``` ### Question 2 *How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?* **Hint:** Run the code cell below to see the accuracy of this prediction. ``` print accuracy_score(outcomes, predictions) ``` **Answer**: Predictions have an accuracy of 78.68%. *** Using just the **Sex** feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the **Age** of each male, by again using the `survival_stats` function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the **Sex** 'male' will be included. Run the code cell below to plot the survival outcomes of male passengers based on their age. ``` vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"]) ``` Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older *did not survive* the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive. Fill in the missing code below so that the function will make this prediction. **Hint:** You can start your implementation of this function using the prediction code you wrote earlier from `predictions_1`. ``` def predictions_2(data): """ Model with two features: - Predict a passenger survived if they are female. - Predict a passenger survived if they are male and younger than 10. """ predictions = [] for _, passenger in data.iterrows(): if passenger.Sex=='female': predictions.append(1) elif passenger.Age < 10: #passed first if mean it is male (do not need to explicit) predictions.append(1) else: predictions.append(0) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_2(data) ``` ### Question 3 *How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?* **Hint:** Run the code cell below to see the accuracy of this prediction. ``` print accuracy_score(outcomes, predictions) ``` **Answer**: Predictions have an accuracy of 79.35%. *** Adding the feature **Age** as a condition in conjunction with **Sex** improves the accuracy by a small margin more than with simply using the feature **Sex** alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. 
This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. **Pclass**, **Sex**, **Age**, **SibSp**, and **Parch** are some suggested features to try. Use the `survival_stats` function below to examine various survival statistics. **Hint:** To use multiple filter conditions, put each condition in the list passed as the last argument. Example: `["Sex == 'male'", "Age < 18"]` ``` vs.survival_stats(data, outcomes, 'Embarked', [ "Sex == 'female'", 'Pclass == 3','Age < 20']) ``` After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction. Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model. **Hint:** You can start your implementation of this function using the prediction code you wrote earlier from `predictions_2`. ``` def predictions_3(data): """ Model with multiple features. Makes a prediction with an accuracy of at least 80%. """ predictions = [] for _, passenger in data.iterrows(): if passenger.Pclass ==3: if passenger.Sex=='female' and passenger.Age<20 and passenger.Embarked!='S': predictions.append(1) else: predictions.append(0) elif passenger.Sex=='female': predictions.append(1) elif passenger.Age < 10: if passenger.SibSp >= 3: predictions.append(0) else: predictions.append(1) else: predictions.append(0) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_3(data) ``` ### Question 4 *Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?* **Hint:** Run the code cell below to see the accuracy of your predictions. ``` print accuracy_score(outcomes, predictions) ``` **Answer**: Predictions have an accuracy of 80.81%. Some features were much more informative than others, such as Sex and Age. Others, such as the port of embarkation, were far less useful as predictors. The search for features was based entirely on the graphs: whenever I found a category with a large difference between the red and green bars, I predicted that everyone in that category had the outcome of the taller bar. # Conclusion After several iterations of exploring and conditioning on the data, you have built a useful algorithm for predicting the survival of each passenger aboard the RMS Titanic. The technique applied in this project is a manual implementation of a simple machine learning model, the *decision tree*. A decision tree splits a set of data into smaller and smaller groups (called *nodes*), by one feature at a time. Each time a subset of the data is split, our predictions become more accurate if each of the resulting subgroups is more homogeneous (contains similar labels) than before. The advantage of having a computer do things for us is that it will be more exhaustive and more precise than our manual exploration above. [This link](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) provides another introduction to machine learning using a decision tree. A decision tree is just one of many models that come from *supervised learning*.
In supervised learning, we attempt to use features of the data to predict or model things with objective outcome labels. That is to say, each of our data points has a known outcome value, such as a categorical, discrete label like `'Survived'`, or a numerical, continuous value like predicting the price of a house. ### Question 5 *Think of a real-world scenario where supervised learning could be applied. What would be the outcome variable that you are trying to predict? Name two features about the data used in this scenario that might be helpful for making the predictions.* **Answer**: Supervised learning could be used by banks or credit card companies to detect whether a given credit card transaction may be fraudulent. Helpful features would include the transaction location (city, state), the price, and the type of store, trained against many previously labeled cases of fraud (or even some artificially created samples). > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
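As the conclusion notes, a fitted decision tree automates the manual splitting done above. A minimal sketch of that idea, assuming scikit-learn is installed and reusing the `data` and `outcomes` variables from this notebook (the feature encoding and `max_depth` below are illustrative choices, not part of the original project):

```
# Fit a small decision tree on a few of the features explored manually above.
from sklearn.tree import DecisionTreeClassifier

features = data[['Pclass', 'Sex', 'Age', 'SibSp', 'Parch']].copy()
features['Sex'] = (features['Sex'] == 'female').astype(int)          # encode sex as 0/1
features['Age'] = features['Age'].fillna(features['Age'].median())   # fill missing ages

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(features, outcomes)
print(accuracy_score(outcomes, pd.Series(tree.predict(features))))
```

On the training data, a tree of this depth generally performs at least comparably to the hand-built rules, at the cost of being harder to read off by hand.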
github_jupyter
# Import libraries necessary for this project import numpy as np import pandas as pd from IPython.display import display # Allows the use of display() for DataFrames # Import supplementary visualizations code visuals.py import visuals as vs # Pretty display for notebooks %matplotlib inline # Load the dataset in_file = 'titanic_data.csv' full_data = pd.read_csv(in_file) # Print the first few entries of the RMS Titanic data display(full_data.head()) # Store the 'Survived' feature in a new variable and remove it from the dataset outcomes = full_data['Survived'] data = full_data.drop('Survived', axis = 1) # Show the new dataset with 'Survived' removed display(data.head()) def accuracy_score(truth, pred): """ Returns accuracy score for input truth and predictions. """ # Ensure that the number of predictions matches number of outcomes if len(truth) == len(pred): # Calculate and return the accuracy as a percent return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100) else: return "Number of predictions does not match number of outcomes!" # Test the 'accuracy_score' function predictions = pd.Series(np.ones(5, dtype = int)) print accuracy_score(outcomes[:5], predictions) def predictions_0(data): """ Model with no features. Always predicts a passenger did not survive. """ predictions = [] for _, passenger in data.iterrows(): # Predict the survival of 'passenger' predictions.append(0) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_0(data) print accuracy_score(outcomes, predictions) vs.survival_stats(data, outcomes, 'Sex') def predictions_1(data): """ Model with one feature: - Predict a passenger survived if they are female. """ predictions = [] for _, passenger in data.iterrows(): # Remove the 'pass' statement below # and write your prediction conditions here predictions.append(passenger.Sex=='female') # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_1(data) print accuracy_score(outcomes, predictions) vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"]) def predictions_2(data): """ Model with two features: - Predict a passenger survived if they are female. - Predict a passenger survived if they are male and younger than 10. """ predictions = [] for _, passenger in data.iterrows(): if passenger.Sex=='female': predictions.append(1) elif passenger.Age < 10: #passed first if mean it is male (do not need to explicit) predictions.append(1) else: predictions.append(0) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_2(data) print accuracy_score(outcomes, predictions) vs.survival_stats(data, outcomes, 'Embarked', [ "Sex == 'female'", 'Pclass == 3','Age < 20']) def predictions_3(data): """ Model with multiple features. Makes a prediction with an accuracy of at least 80%. """ predictions = [] for _, passenger in data.iterrows(): if passenger.Pclass ==3: if passenger.Sex=='female' and passenger.Age<20 and passenger.Embarked!='S': predictions.append(1) else: predictions.append(0) elif passenger.Sex=='female': predictions.append(1) elif passenger.Age < 10: if passenger.SibSp >= 3: predictions.append(0) else: predictions.append(1) else: predictions.append(0) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_3(data) print accuracy_score(outcomes, predictions)
0.742608
0.995902
# Use SPSS and batch deployment with DB2 to predict customer churn with `ibm-watson-machine-learning` This notebook contains steps to deploy a sample SPSS stream and start batch scoring new data. Some familiarity with bash is helpful. This notebook uses Python 3.8. You will use a data set, **Telco Customer Churn**, which details anonymous customer data from a telecommunication company. Use the details of this data set to predict customer churn. This is critical to business, as it's easier to retain existing customers than acquire new ones. ## Learning goals The learning goals of this notebook are: - Loading a CSV file into Db2 on Cloud - Working with the Watson Machine Learning instance - Batch deployment of an SPSS model - Scoring data using deployed model and a Db2 connection ## Contents This notebook contains the following parts: 1. [Setup](#setup) 2. [Model upload](#upload) 3. [Create db2 connection](#connection) 4. [Web service creation](#deploy) 5. [Scoring](#score) 6. [Clean up](#cleanup) 7. [Summary and next steps](#summary) <a id="setup"></a> ## 1. Set up the environment Before you use the sample code in this notebook, create a <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance (a free plan is offered and information about how to create the instance can be found <a href="https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html?context=analytics" target="_blank" rel="noopener no referrer">here</a>). ### Connection to WML Authenticate the Watson Machine Learning service on IBM Cloud. You need to provide platform `api_key` and instance `location`. You can use [IBM Cloud CLI](https://cloud.ibm.com/docs/cli/index.html) to retrieve platform API Key and instance location. API Key can be generated in the following way: ``` ibmcloud login ibmcloud iam api-key-create API_KEY_NAME ``` In result, get the value of `api_key` from the output. Location of your WML instance can be retrieved in the following way: ``` ibmcloud login --apikey API_KEY -a https://cloud.ibm.com ibmcloud resource service-instance WML_INSTANCE_NAME ``` In result, get the value of `location` from the output. **Tip**: You can generate your `Cloud API key` by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam#/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below. You can also get the service-specific url by going to the [**Endpoint URLs** section of the Watson Machine Learning docs](https://cloud.ibm.com/apidocs/machine-learning). You can check your instance location in your <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance details. You can also get service specific apikey by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below. **Action**: Enter your `api_key` and `location` in the following cell. 
``` api_key = 'PASTE YOUR PLATFORM API KEY HERE' location = 'PASTE YOUR INSTANCE LOCATION HERE' wml_credentials = { "apikey": api_key, "url": 'https://' + location + '.ml.cloud.ibm.com' } ``` ### Install and import the `ibm-watson-machine-learning` package **Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener no referrer">here</a>. ``` !pip install -U ibm-watson-machine-learning from ibm_watson_machine_learning import APIClient client = APIClient(wml_credentials) ``` ### Working with spaces First, create a space that will be used for your work. If you do not have a space, you can use [Deployment Spaces Dashboard](https://dataplatform.cloud.ibm.com/ml-runtime/spaces?context=cpdaas) to create one. - Click New Deployment Space - Create an empty space - Select Cloud Object Storage - Select Watson Machine Learning instance and press Create - Copy `space_id` and paste it below **Tip**: You can also use SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Space%20management.ipynb). **Action**: Assign space ID below ``` space_id = 'PASTE YOUR SPACE ID HERE' ``` You can use `list` method to print all existing spaces. ``` client.spaces.list(limit=10) ``` To be able to interact with all resources available in Watson Machine Learning, you need to set **space** which you will be using. ``` client.set.default_space(space_id) ``` <a id="upload"></a> ## 2. Upload model In this section you will learn how to upload the model to the Cloud. **Action**: Download sample SPSS model from git project using wget. ``` import os from wget import download sample_dir = 'spss_sample_model' if not os.path.isdir(sample_dir): os.mkdir(sample_dir) filename=os.path.join(sample_dir, 'db2-customer-satisfaction-prediction.str') if not os.path.isfile(filename): filename = download('https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/models/spss/db2_customer_satisfaction/model/db2-customer-satisfaction-prediction.str',\ out=sample_dir) print(filename) ``` Store SPSS sample model in your Watson Machine Learning instance. ``` client.software_specifications.list() sw_spec_uid = client.software_specifications.get_uid_by_name("spss-modeler_18.2") model_meta_props = { client.repository.ModelMetaNames.NAME: "SPSS customer satisfaction model", client.repository.ModelMetaNames.TYPE: "spss-modeler_18.2", client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid } model_details = client.repository.store_model(filename, model_meta_props) ``` **Note:** You can see that model is successfully stored in Watson Machine Learning Service. ``` client.repository.list_models() ``` <a id="connection"></a> ## 3. Create a Db2 connection You can use commands below to create a db2 connection and required data assets to perform batch scoring. ### Create tables in Db2 on Cloud - Download the [inputScore.csv](https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/data/customer_churn/scoreInput.csv) and [inputScore2.csv](https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/data/customer_churn/scoreInput2.csv) file from the GitHub repository - Click the **Open the console to get started with Db2 on Cloud** icon. - Select the **Load Data** and **Desktop** load type. - Drag and drop the previously downloaded file and click Next. 
- Set table name to **CUSTOMER** and proceed with creating. #### Create a connection ``` schema_name = 'PUT YOUR SCHEMA NAME HERE' db_name = 'db2' input_table_1 = 'CUSTOMER' input_table_2 = 'CUSTOMER_2' output_table = 'OUTPUT' db_credentials = { "db": "***", "host": "***", "https_url": "***", "password": "***", "port": "***", "username": "***" } db2_data_source_type_id = client.connections.get_datasource_type_uid_by_name(db_name) db2_conn_meta_props= { client.connections.ConfigurationMetaNames.NAME: "conn_db2", client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: db2_data_source_type_id, client.connections.ConfigurationMetaNames.DESCRIPTION: "Connection using DB2", client.connections.ConfigurationMetaNames.PROPERTIES: { "database": db_credentials["db"], "port": db_credentials["port"], "host": db_credentials["host"], "password": db_credentials["password"], "username": db_credentials["username"] } } db2_conn_details = client.connections.create(meta_props=db2_conn_meta_props) db2_conn_id = client.connections.get_uid(db2_conn_details) ``` #### Create input connection data asset ``` db2_asset_meta_props={ client.data_assets.ConfigurationMetaNames.NAME: "INPUT_TABLE_1", client.data_assets.ConfigurationMetaNames.CONNECTION_ID: db2_conn_id, client.data_assets.ConfigurationMetaNames.DESCRIPTION: "db2 table", client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: input_table_1 } db2_conn_input_asset_details = client.data_assets.store(db2_asset_meta_props) input_data_1_href = client.data_assets.get_href(db2_conn_input_asset_details) db2_asset_meta_props={ client.data_assets.ConfigurationMetaNames.NAME: "INPUT_TABLE_2", client.data_assets.ConfigurationMetaNames.CONNECTION_ID: db2_conn_id, client.data_assets.ConfigurationMetaNames.DESCRIPTION: "db2 table", client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: input_table_2 } db2_conn_input_asset_details = client.data_assets.store(db2_asset_meta_props) input_data_2_href = client.data_assets.get_href(db2_conn_input_asset_details) ``` #### Create output connection data assets ``` db2_asset_meta_props={ client.data_assets.ConfigurationMetaNames.NAME: "OUTPUT_TABLE", client.data_assets.ConfigurationMetaNames.CONNECTION_ID: db2_conn_id, client.data_assets.ConfigurationMetaNames.DESCRIPTION: "db2 table", client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: output_table } db2_conn_output_asset_details = client.data_assets.store(db2_asset_meta_props) output_data_href = client.data_assets.get_href(db2_conn_output_asset_details) ``` <a id="deploy"></a> ## 4. Create batch deployment Use bellow to create batch deployment for stored model. ``` model_uid = client.repository.get_model_uid(model_details) deployment = client.deployments.create( artifact_uid=model_uid, meta_props={ client.deployments.ConfigurationMetaNames.NAME: "SPSS BATCH customer satisfaction", client.deployments.ConfigurationMetaNames.BATCH: {}, client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { "name": "S", "num_nodes": 1 } } ) ``` <a id="score"></a> ## 5. Scoring You can create batch job using below methods. ### 5.1 Scoring using `data_asset` pointing to the DB2. 
``` job_payload_ref = { client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [ { "id": "conn_db2", "name": "input_data_1_href", "type": "data_asset", "connection": {}, "location": { "href": input_data_1_href } }, { "id": "conn_db2", "name": "input_data_2_href", "type": "data_asset", "connection": {}, "location": { "href": input_data_2_href } } ], client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: { "type": "data_asset", "connection": {}, "location": { "href": output_data_href } } } deployment_uid = client.deployments.get_uid(deployment) job = client.deployments.create_job(deployment_uid, meta_props=job_payload_ref) ``` You can retrive job ID. ``` job_id = client.deployments.get_job_uid(job) ``` ##### Monitor job execution Here you can check the status of your batch scoring. When batch job is completed the results will be written to a Db2 table. ``` import time elapsed_time = 0 while client.deployments.get_job_status(job_id).get('state') != 'completed' and elapsed_time < 300: print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}") elapsed_time += 10 time.sleep(10) if client.deployments.get_job_status(job_id).get('state') == 'completed': print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}") job_details_do = client.deployments.get_job_details(job_id) print(job_details_do) else: print("Job hasn't completed successfully in 5 minutes.") ``` ### 5.2 Scoring using `connection_asset` poiniting to the DB2 ``` job_payload_ref = { client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [ { "id": "conn_db2", "name": "input_table_1", "type": "connection_asset", "connection": { "id": db2_conn_id }, "location": { "schema_name": schema_name, "file_name": input_table_1 } }, { "id": "conn_db2", "name": "input_table_2", "type": "connection_asset", "connection": { "id": db2_conn_id }, "location": { "schema_name": schema_name, "file_name": input_table_2 } } ], client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: { "id": "conn_db2", "name": "output_table", "type": "connection_asset", "connection": { "id": db2_conn_id }, "location": { "schema_name": schema_name, "file_name": output_table } } } deployment_uid = client.deployments.get_uid(deployment) job = client.deployments.create_job(deployment_uid, meta_props=job_payload_ref) ``` Retrive job ID. ``` job_id = client.deployments.get_job_uid(job) ``` ##### Monitor job execution ``` import time elapsed_time = 0 while client.deployments.get_job_status(job_id).get('state') != 'completed' and elapsed_time < 300: print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}") elapsed_time += 10 time.sleep(10) if client.deployments.get_job_status(job_id).get('state') == 'completed': print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}") job_details_do = client.deployments.get_job_details(job_id) print(job_details_do) else: print("Job hasn't completed successfully in 5 minutes.") ``` #### Preview scored data In this subsection you will load scored data. 
**Tip**: To install `requests` execute the following command: `!pip install requests` ``` import requests host = db_credentials["https_url"] + "/dbapi/v3" url = host + "/auth/tokens" token = requests.post(url, json={ "userid": db_credentials["username"], "password": db_credentials["password"]}).json()['token'] ``` ##### Get stored output using Db2 REST API ``` auth_header = { "Authorization": f"Bearer {token}" } sql_command = { "commands": "SELECT * FROM OUTPUT", "limit": 100, "separator": ",", "stop_on_error": "yes" } url = host + "/sql_jobs" jobid = requests.post(url, headers=auth_header, json=sql_command).json()['id'] resp = requests.get(f"{url}/{jobid}", headers=auth_header) results = resp.json()["results"][0] columns = results["columns"] rows = results["rows"] ``` ##### Preview output using pandas DataFrame **Tip**: To install `pandas` execute the following command: `!pip install pandas` ``` import pandas as pd pd.DataFrame(data=rows, columns=columns) ``` <a id="cleanup"></a> ## 6. Clean up If you want to clean up all created assets: - experiments - trainings - pipelines - model definitions - models - functions - deployments see the steps in this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb). <a id="summary"></a> ## 7. Summary and next steps You successfully completed this notebook! You learned how to use Watson Machine Learning for SPSS model deployment and scoring. Check out our [Online Documentation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html?context=analytics) for more samples, tutorials, documentation, how-tos, and blog posts. ### Author **Jan Sołtysik** Intern in Watson Machine Learning. Copyright © 2020, 2021, 2022 IBM. This notebook and its source code are released under the terms of the MIT License.
github_jupyter
ibmcloud login ibmcloud iam api-key-create API_KEY_NAME ibmcloud login --apikey API_KEY -a https://cloud.ibm.com ibmcloud resource service-instance WML_INSTANCE_NAME api_key = 'PASTE YOUR PLATFORM API KEY HERE' location = 'PASTE YOUR INSTANCE LOCATION HERE' wml_credentials = { "apikey": api_key, "url": 'https://' + location + '.ml.cloud.ibm.com' } !pip install -U ibm-watson-machine-learning from ibm_watson_machine_learning import APIClient client = APIClient(wml_credentials) space_id = 'PASTE YOUR SPACE ID HERE' client.spaces.list(limit=10) client.set.default_space(space_id) import os from wget import download sample_dir = 'spss_sample_model' if not os.path.isdir(sample_dir): os.mkdir(sample_dir) filename=os.path.join(sample_dir, 'db2-customer-satisfaction-prediction.str') if not os.path.isfile(filename): filename = download('https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/models/spss/db2_customer_satisfaction/model/db2-customer-satisfaction-prediction.str',\ out=sample_dir) print(filename) client.software_specifications.list() sw_spec_uid = client.software_specifications.get_uid_by_name("spss-modeler_18.2") model_meta_props = { client.repository.ModelMetaNames.NAME: "SPSS customer satisfaction model", client.repository.ModelMetaNames.TYPE: "spss-modeler_18.2", client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid } model_details = client.repository.store_model(filename, model_meta_props) client.repository.list_models() schema_name = 'PUT YOUR SCHEMA NAME HERE' db_name = 'db2' input_table_1 = 'CUSTOMER' input_table_2 = 'CUSTOMER_2' output_table = 'OUTPUT' db_credentials = { "db": "***", "host": "***", "https_url": "***", "password": "***", "port": "***", "username": "***" } db2_data_source_type_id = client.connections.get_datasource_type_uid_by_name(db_name) db2_conn_meta_props= { client.connections.ConfigurationMetaNames.NAME: "conn_db2", client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: db2_data_source_type_id, client.connections.ConfigurationMetaNames.DESCRIPTION: "Connection using DB2", client.connections.ConfigurationMetaNames.PROPERTIES: { "database": db_credentials["db"], "port": db_credentials["port"], "host": db_credentials["host"], "password": db_credentials["password"], "username": db_credentials["username"] } } db2_conn_details = client.connections.create(meta_props=db2_conn_meta_props) db2_conn_id = client.connections.get_uid(db2_conn_details) db2_asset_meta_props={ client.data_assets.ConfigurationMetaNames.NAME: "INPUT_TABLE_1", client.data_assets.ConfigurationMetaNames.CONNECTION_ID: db2_conn_id, client.data_assets.ConfigurationMetaNames.DESCRIPTION: "db2 table", client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: input_table_1 } db2_conn_input_asset_details = client.data_assets.store(db2_asset_meta_props) input_data_1_href = client.data_assets.get_href(db2_conn_input_asset_details) db2_asset_meta_props={ client.data_assets.ConfigurationMetaNames.NAME: "INPUT_TABLE_2", client.data_assets.ConfigurationMetaNames.CONNECTION_ID: db2_conn_id, client.data_assets.ConfigurationMetaNames.DESCRIPTION: "db2 table", client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: input_table_2 } db2_conn_input_asset_details = client.data_assets.store(db2_asset_meta_props) input_data_2_href = client.data_assets.get_href(db2_conn_input_asset_details) db2_asset_meta_props={ client.data_assets.ConfigurationMetaNames.NAME: "OUTPUT_TABLE", client.data_assets.ConfigurationMetaNames.CONNECTION_ID: db2_conn_id, 
client.data_assets.ConfigurationMetaNames.DESCRIPTION: "db2 table", client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: output_table } db2_conn_output_asset_details = client.data_assets.store(db2_asset_meta_props) output_data_href = client.data_assets.get_href(db2_conn_output_asset_details) model_uid = client.repository.get_model_uid(model_details) deployment = client.deployments.create( artifact_uid=model_uid, meta_props={ client.deployments.ConfigurationMetaNames.NAME: "SPSS BATCH customer satisfaction", client.deployments.ConfigurationMetaNames.BATCH: {}, client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { "name": "S", "num_nodes": 1 } } ) job_payload_ref = { client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [ { "id": "conn_db2", "name": "input_data_1_href", "type": "data_asset", "connection": {}, "location": { "href": input_data_1_href } }, { "id": "conn_db2", "name": "input_data_2_href", "type": "data_asset", "connection": {}, "location": { "href": input_data_2_href } } ], client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: { "type": "data_asset", "connection": {}, "location": { "href": output_data_href } } } deployment_uid = client.deployments.get_uid(deployment) job = client.deployments.create_job(deployment_uid, meta_props=job_payload_ref) job_id = client.deployments.get_job_uid(job) import time elapsed_time = 0 while client.deployments.get_job_status(job_id).get('state') != 'completed' and elapsed_time < 300: print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}") elapsed_time += 10 time.sleep(10) if client.deployments.get_job_status(job_id).get('state') == 'completed': print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}") job_details_do = client.deployments.get_job_details(job_id) print(job_details_do) else: print("Job hasn't completed successfully in 5 minutes.") job_payload_ref = { client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [ { "id": "conn_db2", "name": "input_table_1", "type": "connection_asset", "connection": { "id": db2_conn_id }, "location": { "schema_name": schema_name, "file_name": input_table_1 } }, { "id": "conn_db2", "name": "input_table_2", "type": "connection_asset", "connection": { "id": db2_conn_id }, "location": { "schema_name": schema_name, "file_name": input_table_2 } } ], client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: { "id": "conn_db2", "name": "output_table", "type": "connection_asset", "connection": { "id": db2_conn_id }, "location": { "schema_name": schema_name, "file_name": output_table } } } deployment_uid = client.deployments.get_uid(deployment) job = client.deployments.create_job(deployment_uid, meta_props=job_payload_ref) job_id = client.deployments.get_job_uid(job) import time elapsed_time = 0 while client.deployments.get_job_status(job_id).get('state') != 'completed' and elapsed_time < 300: print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}") elapsed_time += 10 time.sleep(10) if client.deployments.get_job_status(job_id).get('state') == 'completed': print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}") job_details_do = client.deployments.get_job_details(job_id) print(job_details_do) else: print("Job hasn't completed successfully in 5 minutes.") import requests host = db_credentials["https_url"] + "/dbapi/v3" url = host + "/auth/tokens" token = requests.post(url, json={ "userid": db_credentials["username"], "password": db_credentials["password"]}).json()['token'] auth_header = { 
"Authorization": f"Bearer {token}" } sql_command = { "commands": "SELECT * FROM OUTPUT", "limit": 100, "separator": ",", "stop_on_error": "yes" } url = host + "/sql_jobs" jobid = requests.post(url, headers=auth_header, json=sql_command).json()['id'] resp = requests.get(f"{url}/{jobid}", headers=auth_header) results = resp.json()["results"][0] columns = results["columns"] rows = results["rows"] import pandas as pd pd.DataFrame(data=rows, columns=columns)
0.262842
0.952353
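Before moving on, here is a small, hedged cleanup sketch for the SPSS batch-deployment code above; it is not part of the original sample and assumes `client`, `deployment_uid`, `model_uid`, `db2_conn_id`, and `db2_conn_output_asset_details` are still in scope. The `delete`/`get_uid` calls are standard `ibm_watson_machine_learning` APIClient methods, but verify the exact names against your installed client version.

```python
# Hedged cleanup sketch (not in the original sample): remove the artifacts created
# by the batch-deployment walkthrough once the job results have been retrieved.
# Assumes `client`, `deployment_uid`, `model_uid`, `db2_conn_id`, and
# `db2_conn_output_asset_details` from the code above are still defined.

client.deployments.delete(deployment_uid)      # delete the batch deployment first
client.repository.delete(model_uid)            # then the stored SPSS model

# Remove the Db2 output data asset and the connection created for scoring.
output_asset_uid = client.data_assets.get_uid(db2_conn_output_asset_details)
client.data_assets.delete(output_asset_uid)
client.connections.delete(db2_conn_id)
```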
This example shows how to: 1. Load a counts matrix (10X Chromium data from human peripheral blood cells) 2. Run the default Scrublet pipeline 3. Check that doublet predictions make sense ``` import sys sys.path %matplotlib inline import scrublet as scr import scipy.io import matplotlib.pyplot as plt import numpy as np import os plt.rcParams['font.family'] = 'sans-serif' plt.rcParams['font.sans-serif'] = 'Arial' plt.rc('font', size=14) plt.rcParams['pdf.fonttype'] = 42 ``` #### Load counts matrix and gene list Load the raw counts matrix as a scipy sparse matrix with cells as rows and genes as columns. ``` input_dir = '/home/ubuntu/velocyto/Dec_trachea/E16_Dec7_combined_v3/E16_Dec7_mut_8/outs/filtered_gene_bc_matrices/mm10.1.2.0/' counts_matrix = scipy.io.mmread(input_dir + '/matrix.mtx').T.tocsc() genes = np.array(scr.load_genes(input_dir + 'features.tsv', delimiter='\t', column=1)) print('Counts matrix shape: {} rows, {} columns'.format(counts_matrix.shape[0], counts_matrix.shape[1])) print('Number of genes in gene list: {}'.format(len(genes))) ``` #### Initialize Scrublet object The relevant parameters are: - *expected_doublet_rate*: the expected fraction of transcriptomes that are doublets, typically 0.05-0.1. Results are not particularly sensitive to this parameter. For this example, the expected doublet rate comes from the Chromium User Guide: https://support.10xgenomics.com/permalink/3vzDu3zQjY0o2AqkkkI4CC - *sim_doublet_ratio*: the number of doublets to simulate, relative to the number of observed transcriptomes. This should be high enough that all doublet states are well-represented by simulated doublets. Setting it too high is computationally expensive. The default value is 2, though values as low as 0.5 give very similar results for the datasets that have been tested. - *n_neighbors*: Number of neighbors used to construct the KNN classifier of observed transcriptomes and simulated doublets. The default value of `round(0.5*sqrt(n_cells))` generally works well. ``` scrub = scr.Scrublet(counts_matrix, expected_doublet_rate=0.06) ``` #### Run the default pipeline, which includes: 1. Doublet simulation 2. Normalization, gene filtering, rescaling, PCA 3. Doublet score calculation 4. Doublet score threshold detection and doublet calling ``` doublet_scores, predicted_doublets = scrub.scrub_doublets(min_counts=2, min_cells=3, min_gene_variability_pctl=85, n_prin_comps=30) ``` #### Plot doublet score histograms for observed transcriptomes and simulated doublets The simulated doublet histogram is typically bimodal. The left mode corresponds to "embedded" doublets generated by two cells with similar gene expression. The right mode corresponds to "neotypic" doublets, which are generated by cells with distinct gene expression (e.g., different cell types) and are expected to introduce more artifacts in downstream analyses. Scrublet can only detect neotypic doublets. To call doublets vs. singlets, we must set a threshold doublet score, ideally at the minimum between the two modes of the simulated doublet histogram. `scrub_doublets()` attempts to identify this point automatically and has done a good job in this example. However, if automatic threshold detection doesn't work well, you can adjust the threshold with the `call_doublets()` function. 
For example: ```python scrub.call_doublets(threshold=0.25) ``` ``` scrub.plot_histogram(); ``` #### Get 2-D embedding to visualize the results ``` print('Running UMAP...') scrub.set_embedding('UMAP', scr.get_umap(scrub.manifold_obs_, 10, min_dist=0.3)) # # Uncomment to run tSNE - slow # print('Running tSNE...') # scrub.set_embedding('tSNE', scr.get_tsne(scrub.manifold_obs_, angle=0.9)) # # Uncomment to run force layout - slow # print('Running ForceAtlas2...') # scrub.set_embedding('FA', scr.get_force_layout(scrub.manifold_obs_, n_neighbors=5, n_iter=1000)) print('Done.') ``` #### Plot doublet predictions on 2-D embedding Predicted doublets should co-localize in distinct states. ``` scrub.plot_embedding('UMAP', order_points=True); # scrub.plot_embedding('tSNE', order_points=True); # scrub.plot_embedding('FA', order_points=True); print(doublet_scores) print(predicted_doublets) sum(predicted_doublets) len(predicted_doublets) cwd = os.getcwd() print(cwd) doublet_scores.tofile('E16_Dec7_mut8_doubletScore.csv', sep=',', format='%s') min(doublet_scores[predicted_doublets]) ```
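A natural next step, not shown in the notebook above, is to drop the flagged cells before downstream analysis. The following is a hedged sketch that assumes `counts_matrix` and `predicted_doublets` from the cells above; the output file name is illustrative only.

```python
# Hedged follow-up sketch: remove predicted doublets from the counts matrix.
# Assumes `counts_matrix` (cells x genes, CSC) and `predicted_doublets` (boolean
# array) from the cells above; the output file name below is illustrative.
import numpy as np
import scipy.io

singlet_mask = ~np.asarray(predicted_doublets, dtype=bool)
filtered_counts = counts_matrix[singlet_mask]   # keep rows (cells) not flagged as doublets
print('Kept {} of {} cells after removing predicted doublets'.format(
    filtered_counts.shape[0], counts_matrix.shape[0]))

scipy.io.mmwrite('E16_Dec7_mut8_filtered_matrix.mtx', filtered_counts)
```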
github_jupyter
import sys sys.path %matplotlib inline import scrublet as scr import scipy.io import matplotlib.pyplot as plt import numpy as np import os plt.rcParams['font.family'] = 'sans-serif' plt.rcParams['font.sans-serif'] = 'Arial' plt.rc('font', size=14) plt.rcParams['pdf.fonttype'] = 42 input_dir = '/home/ubuntu/velocyto/Dec_trachea/E16_Dec7_combined_v3/E16_Dec7_mut_8/outs/filtered_gene_bc_matrices/mm10.1.2.0/' counts_matrix = scipy.io.mmread(input_dir + '/matrix.mtx').T.tocsc() genes = np.array(scr.load_genes(input_dir + 'features.tsv', delimiter='\t', column=1)) print('Counts matrix shape: {} rows, {} columns'.format(counts_matrix.shape[0], counts_matrix.shape[1])) print('Number of genes in gene list: {}'.format(len(genes))) scrub = scr.Scrublet(counts_matrix, expected_doublet_rate=0.06) doublet_scores, predicted_doublets = scrub.scrub_doublets(min_counts=2, min_cells=3, min_gene_variability_pctl=85, n_prin_comps=30) scrub.call_doublets(threshold=0.25) scrub.plot_histogram(); print('Running UMAP...') scrub.set_embedding('UMAP', scr.get_umap(scrub.manifold_obs_, 10, min_dist=0.3)) # # Uncomment to run tSNE - slow # print('Running tSNE...') # scrub.set_embedding('tSNE', scr.get_tsne(scrub.manifold_obs_, angle=0.9)) # # Uncomment to run force layout - slow # print('Running ForceAtlas2...') # scrub.set_embedding('FA', scr.get_force_layout(scrub.manifold_obs_, n_neighbors=5, n_iter=1000)) print('Done.') scrub.plot_embedding('UMAP', order_points=True); # scrub.plot_embedding('tSNE', order_points=True); # scrub.plot_embedding('FA', order_points=True); print(doublet_scores) print(predicted_doublets) sum(predicted_doublets) len(predicted_doublets) cwd = os.getcwd() print(cwd) doublet_scores.tofile('E16_Dec7_mut8_doubletScore.csv', sep=',', format='%s') min(doublet_scores[predicted_doublets])
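One small addition to the code above, offered as a hedged sketch: writing the scores and the boolean calls side by side makes it easier to join the doublet flags back to cell barcodes later. Variable names come from the run above; the output file name is illustrative.

```python
# Hedged addition: save doublet scores and calls together (two columns) so the
# flags can be matched back to barcodes later. File name is illustrative only.
import numpy as np

calls = np.column_stack([doublet_scores, np.asarray(predicted_doublets, dtype=int)])
np.savetxt('E16_Dec7_mut8_doublet_calls.csv', calls, delimiter=',',
           header='doublet_score,predicted_doublet', comments='')
```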
0.126299
0.979453
<a href="https://cognitiveclass.ai/"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center"> </a> <h1> HTTP and Requests</h1> Estimated time needed: **15** minutes ## Objectives After completing this lab you will be able to: - Understand HTTP - Handle HTTP requests <h2>Table of Contents</h2> <div class="alert alert-block alert-info" style="margin-top: 20px"> <ul> <li> <a href="#index">Overview of HTTP</a> <ul> <li><a href="#URL">Uniform Resource Locator: URL</a></li> <li><a href="#RE">Request</a></li> <li><a href="#RES">Response</a></li> </ul> </li> <li> <a href="#RP">Requests in Python</a> <ul> <li><a href="#URL_P">Get Request with URL Parameters</a></li> <li><a href="#POST">Post Requests</a></li> </ul> </li> </ul> </div> <hr> <h2 id="index">Overview of HTTP</h2> When you, the **client**, use a web page your browser sends an **HTTP** request to the **server** where the page is hosted. The server tries to find the desired **resource**, by default "<code>index.html</code>". If your request is successful, the server will send the object to the client in an **HTTP response**; this includes information like the type of the **resource**, the length of the **resource**, and other information. <p> The figure below represents the process; the circle on the left represents the client, the circle on the right represents the Web server. The table under the Web server represents a list of resources stored in the web server, in this case an <code>HTML</code> file, a <code>png</code> image, and a <code>txt</code> file. </p> <p> The <b>HTTP</b> protocol allows you to send and receive information through the web, including webpages, images, and other web resources. In this lab, we will provide an overview of the Requests library for interacting with the <code>HTTP</code> protocol. </p> <div class="alert alert-block alert-info" style="margin-top: 20px"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/images/reqest_basics.png" width="750" align="center"> </div> <h2 id="URL">Uniform Resource Locator: URL</h2> A uniform resource locator (URL) is the most popular way to find resources on the web. We can break the URL into three parts. <ul> <li><b>scheme</b>: this is the protocol, for this lab it will always be <code>http://</code> </li> <li><b>Internet address or Base URL</b>: this will be used to find the location; here are some examples: <code>www.ibm.com</code> and <code>www.gitlab.com</code> </li> <li><b>route</b>: the location on the web server, for example <code>/images/IDSNlogo.png</code> </li> </ul> You may also hear the term uniform resource identifier (URI); URLs are actually a subset of URIs. Another popular term is endpoint; this is the URL of an operation provided by a Web server. <h2 id="RE">Request</h2> The process can be broken into the <b>request</b> and <b>response</b> processes. The request using the GET method is partially illustrated below. In the start line we have the <code>GET</code> method, which is an <code>HTTP</code> method.
Also the location of the resource <code>/index.html</code> and the <code>HTTP</code> version. The Request header passes additional information with an <code>HTTP</code> request: <div class="alert alert-block alert-info" style="margin-top: 20px"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/images/reqest_messege.png" width="400" align="center"> </div> When an <code>HTTP</code> request is made, an <code>HTTP</code> method is sent; this tells the server what action to perform. A list of several <code>HTTP</code> methods is shown below. We will go over more examples later. <div class="alert alert-block alert-info" style="margin-top: 20px"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/images/http_methods.png" width="400" align="center"> </div> <h2 id="RES">Response</h2> The figure below represents the response; the response start line contains the version number <code>HTTP/1.0</code>, a status code (200) meaning success, followed by a descriptive phrase (OK). The response header contains useful information. Finally, we have the response body containing the requested file, an <code>HTML</code> document. It should be noted that some requests have headers. <div class="alert alert-block alert-info" style="margin-top: 20px"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/images/response_message.png" width="400" align="center"> </div> Some status code examples are shown in the table below; the prefix indicates the class. These are shown in yellow, with actual status codes shown in white. Check out the following <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status">link</a> for more descriptions. <div class="alert alert-block alert-info" style="margin-top: 20px"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/images/status_code.png" width="300" align="center"> </div> <h2 id="RP">Requests in Python</h2> Requests is a Python library that allows you to send <code>HTTP/1.1</code> requests easily.
We can import the library as follows: ``` import requests ``` We will also use the following libraries: ``` import os from PIL import Image from IPython.display import IFrame ``` You can make a <code>GET</code> request via the method <code>get</code> to [www.ibm.com](https://www.ibm.com/): ``` url='https://www.ibm.com/' r=requests.get(url) ``` We now have the response object <code>r</code>; this has information about the request, like the status of the request. We can view the status code using the attribute <code>status_code</code>: ``` r.status_code ``` You can view the request headers: ``` print(r.request.headers) ``` You can view the request body; since a <code>GET</code> request has no body, the following line returns <code>None</code>: ``` print("request body:", r.request.body) ``` You can view the <code>HTTP</code> response header using the attribute <code>headers</code>. This returns a Python dictionary of <code>HTTP</code> response headers.
``` header=r.headers print(r.headers) ``` We can obtain the date the request was sent using the key <code>Date</code>: ``` header['date'] ``` <code>Content-Type</code> indicates the type of data: ``` header['Content-Type'] ``` You can also check the <code>encoding</code>: ``` r.encoding ``` As the <code>Content-Type</code> is <code>text/html</code> we can use the attribute <code>text</code> to display the <code>HTML</code> in the body. We can review the first 100 characters: ``` r.text[0:100] ``` You can load other types of data for non-text requests, like images. Consider the URL of the following image: ``` # Use single quotation marks for defining string url='https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png' ``` We can make a <code>GET</code> request: ``` r=requests.get(url) ``` We can look at the response header: ``` print(r.headers) ``` We can see the <code>'Content-Type'</code>: ``` r.headers['Content-Type'] ``` An image is a response object that contains the image as a <a href="https://docs.python.org/3/glossary.html#term-bytes-like-object">bytes-like object</a>. As a result, we must save it using a file object. First, we specify the file path and name: ``` path=os.path.join(os.getcwd(),'image.png') path ``` We save the file; in order to access the body of the response we use the attribute <code>content</code>, then save it using the <code>open</code> function and the <code>write</code> method: ``` with open(path,'wb') as f: f.write(r.content) ``` We can view the image: ``` Image.open(path) ``` <h3>Question 1: write <a href="https://www.gnu.org/software/wget/"><code> wget </code></a></h3> In the previous section, we used the <code>wget</code> command to retrieve content from the web server as shown below. Write the Python code to perform the same task. The code should be the same as the one used to download the image, but the file name should be <code>'example1.txt'</code>.
<code>!wget -O /resources/data/Example1.txt https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/example1.txt</code> ``` url='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/example1.txt' path=os.path.join(os.getcwd(),'example1.txt') r=requests.get(url) with open(path,'wb') as f: f.write(r.content) ``` <details><summary>Click here for the solution</summary> ```python url='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/example1.txt' path=os.path.join(os.getcwd(),'example1.txt') r=requests.get(url) with open(path,'wb') as f: f.write(r.content) ``` </details> <h2 id="URL_P">Get Request with URL Parameters</h2> You can use the <b>GET</b> method to modify the results of your query, for example when retrieving data from an API. We send a <b>GET</b> request to the server. Like before we have the <b>Base URL</b>; in the <b>Route</b> we append <code>/get</code>, which indicates we would like to perform a <code>GET</code> request. This is demonstrated in the following table: <div class="alert alert-block alert-info" style="margin-top: 20px"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/images/base_URL_Route.png" width="400" align="center"> </div> The Base URL <code>http://httpbin.org/</code> is a simple HTTP Request & Response Service. The <code>URL</code> in Python is given by: ``` url_get='http://httpbin.org/get' ``` A <a href="https://en.wikipedia.org/wiki/Query_string">query string</a> is a part of a uniform resource locator (URL); it sends other information to the web server. The start of the query is a <code>?</code>, followed by a series of parameter and value pairs, as shown in the table below.
The first parameter name is <code>name</code> and the value is <code>Joseph</code>; the second parameter name is <code>ID</code> and the value is <code>123</code>. Within each pair, the parameter and value are separated by an equals sign, <code>=</code>. The series of pairs is separated by the ampersand, <code>&</code>. <div class="alert alert-block alert-info" style="margin-top: 20px"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/images/query_string.png" width="500" align="center"> </div> To create a query string, add a dictionary. The keys are the parameter names and the values are the parameter values. ``` payload={"name":"Joseph","ID":"123"} ``` Then pass the dictionary <code>payload</code> to the <code>params</code> parameter of the <code>get()</code> function: ``` r=requests.get(url_get,params=payload) ``` We can print out the <code>URL</code> and see the names and values: ``` r.url ``` There is no request body: ``` print("request body:", r.request.body) ``` We can print out the status code: ``` print(r.status_code) ``` We can view the response as text: ``` print(r.text) ``` We can look at the <code>'Content-Type'</code>: ``` r.headers['Content-Type'] ``` As the <code>'Content-Type'</code> is in the <code>JSON</code> format, we can use the method <code>json()</code>; it returns a Python <code>dict</code>: ``` r.json() ``` The key <code>args</code> has the names and values: ``` r.json()['args'] ``` <h2 id="POST">Post Requests</h2> Like a <code>GET</code> request, a <code>POST</code> request is used to send data to a server, but the <code>POST</code> request sends the data in a request body. In order to send the POST request in Python, we change the route in the <code>URL</code> to <code>/post</code>: ``` url_post='http://httpbin.org/post' ``` This endpoint expects data as a file or as a form; a form is a convenient way to configure an HTTP request to send data to a server. To make a <code>POST</code> request we use the <code>post()</code> function; the variable <code>payload</code> is passed to the parameter <code>data</code>: ``` r_post=requests.post(url_post,data=payload) ``` Comparing the URLs from the response objects of the <code>GET</code> and <code>POST</code> requests, we see the <code>POST</code> request has no name or value pairs in its URL. ``` print("POST request URL:",r_post.url ) print("GET request URL:",r.url) ``` We can compare the <code>POST</code> and <code>GET</code> request bodies; we see only the <code>POST</code> request has a body: ``` print("POST request body:",r_post.request.body) print("GET request body:",r.request.body) ``` We can view the form as well: ``` r_post.json()['form'] ``` There is a lot more you can do; check out <a href="https://requests.readthedocs.io/en/master/">Requests</a> for more, and see the short sketches after this notebook for a couple of extensions. <hr> ## Authors <p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> <br>Joseph is a Data Scientist at IBM and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition.
Joseph has been working for IBM since he completed his PhD.</p> ### Other Contributors <a href="https://www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a> ## Change Log | Date (YYYY-MM-DD) | Version | Changed By | Change Description | | ----------------- | ------- | ---------- | ---------------------------- | | 2020-09-02 | 2.0 | Simran | Template updates to the file | <h3 align="center"> © IBM Corporation 2020. All rights reserved. </h3>
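The lab above sends form-encoded data with the <code>data</code> parameter; a common variant is sending JSON in the request body. The sketch below is not part of the original lab; it uses the Requests library's standard <code>json=</code> parameter against the same httpbin endpoint.

```python
# Hedged extension (not part of the lab above): send JSON instead of form data.
# Passing a dict via `json=` serializes it and sets the Content-Type header.
import requests

payload = {"name": "Joseph", "ID": "123"}            # same illustrative values as the lab
r_json = requests.post('http://httpbin.org/post', json=payload)

print("request Content-Type:", r_json.request.headers['Content-Type'])  # application/json
print("echoed JSON body:", r_json.json()['json'])    # httpbin echoes the parsed body under 'json'
```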
github_jupyter
import requests import os from PIL import Image from IPython.display import IFrame url='https://www.ibm.com/' r=requests.get(url) r.status_code print(r.request.headers) print("request body:", r.request.body) header=r.headers print(r.headers) header['date'] header['Content-Type'] r.encoding r.text[0:100] # Use single quotation marks for defining string url='https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png' r=requests.get(url) print(r.headers) r.headers['Content-Type'] path=os.path.join(os.getcwd(),'image.png') path with open(path,'wb') as f: f.write(r.content) Image.open(path) url='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/example1.txt' path=os.path.join(os.getcwd(),'example1.txt') r=requests.get(url) with open(path,'wb') as f: f.write(r.content) url='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/example1.txt' path=os.path.join(os.getcwd(),'example1.txt') r=requests.get(url) with open(path,'wb') as f: f.write(r.content) url_get='http://httpbin.org/get' payload={"name":"Joseph","ID":"123"} r=requests.get(url_get,params=payload) r.url print("request body:", r.request.body) print(r.status_code) print(r.text) r.headers['Content-Type'] r.json() r.json()['args'] url_post='http://httpbin.org/post' r_post=requests.post(url_post,data=payload) print("POST request URL:",r_post.url ) print("GET request URL:",r.url) print("POST request body:",r_post.request.body) print("GET request body:",r.request.body) r_post.json()['form']
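A short, hedged sketch extending the cells above: production code usually adds a timeout and checks the status code. Both `timeout=` and `raise_for_status()` are standard Requests features; the URL and parameters repeat the values used above.

```python
# Hedged sketch (not in the original cells): add a timeout and basic error handling
# around a GET request. Uses the same httpbin endpoint and parameters as above.
import requests

try:
    r = requests.get('http://httpbin.org/get',
                     params={"name": "Joseph", "ID": "123"},
                     timeout=10)          # seconds; fail fast instead of hanging
    r.raise_for_status()                  # raise an HTTPError for 4xx/5xx responses
    print(r.json()['args'])
except requests.exceptions.RequestException as e:
    print("request failed:", e)
```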
0.205336
0.848659