| Unnamed: 0 (int64, 0-16k) | text_prompt (string, lengths 110-62.1k) | code_prompt (string, lengths 37-152k) |
---|---|---|
14,600 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Bootcamp 1
Step1: Example 1
Step2: The variable g (quarterly GDP growth expressed as an annual rate) is now what Python calls a DataFrame, which is a collection of data organized by variable and observation. You can get some of its properties by typing some or all of the following in the box below
Step3: Example 2
Step4: Answer(s)? Aren't the boxplots in the last figure cool? The histograms above them? What do you see in them? How do the various returns compare?
Example 3
Step5: Example 4
Step6: Note to self.
In older versions of Pandas, prior to 0.14.1, the option input puts the strike in the index, not as a column of data.
The next two lines check the versions of pandas and python on the off chance we want to check | Python Code:
x = [7, 3, 5]
x.pop?
Explanation: Data Bootcamp 1: Examples
Python applied to economic and financial data
This is an introduction to Data Bootcamp, a (prospective) course at NYU designed to give students some familiarity with (i) Python and (ii) economic and financial data. A more complete collection of materials, including this IPython Notebook, is available in our Github repository. (And yes, Python and IPython are different things, but ignore that for now.)
In this Notebook we illustrate some of the possibilities with examples. The code will be obscure if you're new to Python, but we will fill in the gaps over time. In the meantime, you might note for future reference things you run across that you'd like to understand better. We think it's best to take this one step at a time, but if you're interested in the logic behind the code, we give links to relevant documentation under "References." The occasional "Comments" are things for us to follow up on, we suggest you ignore them.
Recent US GDP growth (data from FRED)
GDP per capita in selected countries (data from the World Bank)
Fama-French equity "factors" (data from Ken French's website)
S&P 500 ETF (Spyders) (data from Yahoo finance)
read a csv??
Warnings
This program requires internet access; that's where the data comes from.
It's also a work in progress; the ETA for the course is Fall 2015.
End of explanation
# anything after the hashtag is a comment
# load packages
import datetime as dt
import pandas.io.data as web # data import tools
import matplotlib.pyplot as plt # plotting tools
# The next one is an IPython command: it says to put plots here in the notebook, rather than open a separate window.
%matplotlib inline
# get data from FRED
fred_series = ["GDPC1"]
start_date = dt.datetime(1960, 1, 1)
data = web.DataReader(fred_series, "fred", start_date)
# print last 3 data points to see what we've got
print(data.tail(3))
# compute annual growth rates
g = 4*data.pct_change()
# change label
g.columns = ['US GDP Growth']
Explanation: Example 1: US GDP Growth
Investors -- and others -- keep a close eye on the state of the economy because it affects the performance of firms and financial assets. We'll go into this more extensively later, but for now we want to see what the economy has done in the past, especially the recent past. We use the wonderful FRED interface ("API") and load the data straight from their website. Then we graph GDP growth over the past 50 years or so and for a more recent period of greater interest.
This strategy -- find the data on the web, load it, and produce a graph -- is a model for much of what we do.
Question(s).
It's always good to know what you're looking for so we'll post question(s) for each example. Here we
ask how the economy is doing, and how its current performance compares to the past.
References
FRED: http://research.stlouisfed.org/fred2/
Pandas: http://pandas.pydata.org/pandas-docs/stable/
Data access: http://pandas.pydata.org/pandas-docs/stable/remote_data.html#fred
Inline plots: http://stackoverflow.com/questions/21176731/automatically-run-matplotlib-inline-in-ipython-notebook
Note to self:
The FRED API allows you to import transformations like growth rates directly. Is that possible with Pandas?
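On the annualisation point: the cell above uses g = 4*data.pct_change(), which is a linear approximation to an annual rate. A small sketch comparing it with the compounded version, reusing the data DataFrame already downloaded (no new data access needed):
import pandas as pd
# 4*pct_change() is a linear approximation to an annualised rate;
# compounding the quarterly growth gives the exact annualised figure
g_approx = 4 * data.pct_change()
g_compound = (1 + data.pct_change())**4 - 1
comparison = pd.concat([g_approx, g_compound], axis=1)
comparison.columns = ['approx (4x quarterly)', 'compounded annual rate']
print(comparison.tail(3))
The two agree closely for small quarterly changes, which is why the simple multiply-by-four rule is usually good enough here.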
End of explanation
# enter your commands here
# more examples: some statistics on GDP growth
print(['Mean GDP growth ', g.mean()])
print(['Std deviation ', g.std()])
# do this for subperiods...
# quick and dirty plot
# note the financial crisis: GDP fell 8% one quarter (at an annual rate, so really 2%)
g.plot()
plt.show()
# more complex plot, bar chart for last 6 quarters
# also: add moving average?
Explanation: The variable g (quarterly GDP growth expressed as an annual rate) is now what Python calls a DataFrame, which is a collection of data organized by variable and observation. You can get some of its properties by typing some or all of the following in the box below:
type(g)
g.tail()
g.head(2)
You can get information about g and what we can do with it by typing: g.[tab]. (Don't type the second period!) That will pop up a list you can scroll through. Typically it's a long list, so it takes some experience to know what to do with it.
You can also get information about things you can do with g by typing commands with an open paren: g.command( and wait. That will give you the arguments of the command. g.head and g.tail, for example, have an argument n which is the number of observations to print. head prints the top of the DataFrame, tail prints the bottom. If you leave it blank, it prints 5.
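If you want something ready to paste into the empty cell, the commands described above might look like this (g.shape and g.describe() are extras, not mentioned above, but standard pandas):
# inspection commands for the DataFrame g
print(type(g))       # confirm g is a pandas DataFrame
print(g.head(2))     # first two rows
print(g.tail())      # last five rows (n defaults to 5)
print(g.shape)       # (number of observations, number of columns)
print(g.describe())  # summary statistics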
End of explanation
# load packages (if it's redundant it'll be ignored)
import pandas.io.data as web
# read data from Ken French's website
ff = web.DataReader('F-F_Research_Data_Factors', 'famafrench')[0]
# NB: ff.xs is a conflict, rename to xsm
ff.columns = ['xsm', 'smb', 'hml', 'rf']
# see what we've got
print(ff.head(3))
print(ff.describe())
# compute and print summary stats
moments = [ff.mean(), ff.std(), ff.skew(), ff.kurtosis() - 3]
# \n here is a line break
print('Summary stats for Fama-French factors (mean, std, skew, ex kurt)') #, end='\n\n')
print(moments)
#[print(moment, end='\n\n') for moment in moments]
# try some things yourself
# like what? type ff.[tab]
import pandas as pd
pd.__version__
# some plots
ff.plot()
plt.show()
ff.hist(bins=50, sharex=True)
plt.show()
ff.boxplot(whis=0, return_type='axes')
plt.show()
Explanation: Example 2: Fama-French equity "factors"
Gene Fama and Ken French are two of the leading academics studying (primarily) equity returns. Some of this work is summarized in the press release and related material for the 2013 Nobel Prize in economics, which was shared by Fama with Lars Hansen and Robert Shiller. For now, it's enough to say that Ken French posts an extensive collection of equity data on his website.
We'll look at what have come to be called the Fama-French factors. The data includes:
xsm: the return on the market (aggregate equity) minus the riskfree rate
smb (small minus big): the return on small firms minus the return on big firms
hml (high minus low): the return on firms with high book-to-market ratios minus those with low ratios.
rf: the riskfree rate.
We download all of these at once, monthly from 1926. Each is reported as a percentage.
Since they're monthly, you can get a rough annual number if you multiply by 12.
Question(s).
The question we address is how the returns compare: their means, their variability, and so on.
[Ask yourself: how would I answer this? What would I like to do with the data?]
References
http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html
http://quant-econ.net/pandas.html
http://pandas.pydata.org/pandas-docs/dev/remote_data.html#fama-french
http://pandas.pydata.org/pandas-docs/stable/10min.html#selection
http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.boxplot
http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.hist.html
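As a rough illustration of the "multiply by 12" rule of thumb mentioned above, the monthly factor moments could be annualised like this (a sketch using the ff DataFrame loaded above; scaling the standard deviation by sqrt(12) assumes the monthly returns are uncorrelated):
import numpy as np
# rough annualisation of the monthly Fama-French factors (all in percent)
print('Annualised means:')
print(12 * ff.mean())
print('Annualised standard deviations:')
print(np.sqrt(12) * ff.std())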
End of explanation
# load package under name wb
from pandas.io import wb
# find the codes for the variables of interest
wb.search
wb.search(string='gdp.*capita').iloc[:2]
# specify dates, variables, and countries
start = 2011
# GDP per capita, population, life expectancy
variable_list = ['NY.GDP.PCAP.KD', 'SP.POP.TOTL', 'SP.DYN.LE00.IN']
country_list = ['US', 'FR', 'JP', 'CN', 'IN', 'BR', 'MX']
# Python understands we need to go to the second line because ( hasn't been closed by )
data = wb.download(indicator=variable_list,
country=country_list, start=start, end=start).dropna()
# see what we've got
print(data)
# check the column labels, change to something simpler
print(data.columns)
data.columns = ['gdppc', 'pop', 'le']
print(data)
# scatterplot
# life expectancy v GDP per capita
# size of circles controlled by population
# load packages (ignored if redundant)
import numpy as np
import matplotlib.pyplot as plt
plt.scatter(data['gdppc'], data['le'], s=0.000001*data['pop'], alpha=0.5)
plt.ylabel('Life Expectancy')
plt.xlabel('GDP Per Capita')
plt.show()
# Note: size of circles based on population
Explanation: Answer(s)? Aren't the boxplots in the last figure cool? The histograms above them? What do you see in them? How do the various returns compare?
Example 3: GDP per capita and life expectancy
The World Bank collects a broad range of economic and social indicators for most countries in the World. They also have a nice interface. It's a good source of basic information about how the economic climate compares across countries.
We illustrate its usefulness with a scatterplot of life expectancy v GDP per capita.
Question(s). How closely are these two indicators of quality of life related?
References
http://data.worldbank.org/
http://pandas.pydata.org/pandas-docs/stable/remote_data.html#world-bank
http://matplotlib.org/examples/shapes_and_collections/scatter_demo.html
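To put a number on "how closely related", one quick, hedged check is the correlation between life expectancy and log GDP per capita, using the data DataFrame with the simplified column labels above (life expectancy is usually closer to linear in log income than in income itself):
import numpy as np
# correlation between life expectancy and log GDP per capita
wb_data = data.copy()
wb_data['log_gdppc'] = np.log(wb_data['gdppc'])
print(wb_data[['log_gdppc', 'le']].corr())
With only seven countries this is illustrative rather than conclusive, but it puts a rough figure on the relationship seen in the scatterplot.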
End of explanation
# load packages
import pandas as pd
import pandas.io.data as web
from pandas.io.data import Options
import datetime as dt
import matplotlib.pylab as plt
# ticker
ticker = 'spy'
# load stock price first (the underlying)
# pick a recent date and subtract seven days to be sure we get a quote
# http://pymotw.com/2/datetime/#date-arithmetic
today = dt.date.today()
one_week = dt.timedelta(days=7)
start = today - one_week
stock = web.DataReader(ticker, 'yahoo', start)
print(stock) # just to see what we have
# take the last close (-1 is the last, 'Close' is the close)
# this shows up in our figure
atm = stock.ix[-1,'Close'] # the -1 takes the last observation
# get option prices for same ticker
option = Options(ticker, 'yahoo')
expiry = dt.date(2014, 11, 20)
data_calls = option.get_call_data(expiry=expiry).dropna()
data_puts = option.get_put_data(expiry=expiry).dropna()
# check what we have
print(data_calls.index)
print(data_calls.tail())
# compute mid of bid and ask and arrange series for plotting
calls_bid = data_calls['Bid']
calls_ask = data_calls['Ask']
calls_strikes = data_calls['Strike']
calls_mid = (data_calls['Bid'] + data_calls['Ask'])/2
puts_strikes = data_puts['Strike']
puts_mid = (data_puts['Bid'] + data_puts['Ask'])/2
Explanation: Example 4: Option prices
A financial option gives its owner the right to buy or sell an asset (the "underlying") at a preset price (the "strike") by a specific date (the "expiration date"). Puts are options to sell, calls are options to buy. We explore option prices with Yahoo Finance, specifically options on the S&P 500 exchange-traded fund, ticker SPY.
We illustrate this by plotting call and put prices against their strike prices.
Question(s). How do put and call prices vary with their strike price? [Think about this. What would you expect?]
Warning. This won't work in Python 2.7 or, in fact, in any environment that uses versions of Pandas prior to 0.14.1. The Yahoo Option API is labeled experimental and it seems the earlier versions don't allow easy access to the strike prices.
References
http://finance.yahoo.com/q/op?s=SPY+Options
http://pandas.pydata.org/pandas-docs/stable/remote_data.html#yahoo-finance
http://pandas.pydata.org/pandas-docs/stable/remote_data.html#yahoo-finance-options
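A useful sanity check on these quotes is put-call parity: for European options, call minus put is approximately S minus K (discounted). SPY options are American and SPY pays dividends, so the relationship only holds approximately, but a rough version, ignoring discounting and assuming the calls_*/puts_* series computed above, might look like this:
import pandas as pd
# rough put-call parity check: (call mid - put mid) should track (S - K)
calls = pd.DataFrame({'strike': calls_strikes.values, 'call_mid': calls_mid.values})
puts = pd.DataFrame({'strike': puts_strikes.values, 'put_mid': puts_mid.values})
parity = pd.merge(calls, puts, on='strike')
parity['call_minus_put'] = parity['call_mid'] - parity['put_mid']
parity['S_minus_K'] = atm - parity['strike']
print(parity[['strike', 'call_minus_put', 'S_minus_K']].head())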
End of explanation
# plot call and put prices v strike
plt.plot(calls_strikes, calls_mid, 'r', lw=2, label='calls')
plt.plot(puts_strikes, puts_mid, 'b', lw=2, label='puts')
# prettify it
#plt.axis([120, 250, 0, 50])
plt.axvline(x=atm, color='k', linestyle='--', label='ATM')
plt.legend(loc='best')
plt.show()
# rerun the figure above with different color lines. Or dashed lines for call and put prices.
# or change the form of the vertical ATM line: solid? another color?
Explanation: Note to self.
In older versions of Pandas, prior to 0.14.1, the option input puts the strike in the index, not as a column of data.
The next two lines check the versions of pandas and python on the off chance we want to check: print(pd.version),
! python --version
End of explanation |
14,601 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulating data and power analysis
Tom Ellis, August 2017
Before committing to the time and cost of genotyping samples for a paternity study, it is always sensible to run simulations to test the likely statistical power of your data set. This can help with important questions regarding study design, such as finding an appropriate balance between the number of families vs offspring per family, or identifying a minimum number of loci to type. Simulated data can also be useful in verifying the results of an analysis.
FAPS provides tools to run such simulations. In this notebook we look at
Step1: There are multiple ways to mate adults to generate offspring. If you supply a set of adults and an integer number of offspring, make_offspring mates adults at random.
Step2: You can also supply an explicit list of dams and sires, in which case the adults are paired in the order they appear in each list.
Step3: Usually we really want to simulate half sib arrays. This can be done using make_sibships, which mates a single mother to a set of males.
Step4: For uneven sibship sizes, give a list of sizes for each family of the same length as sires.
Step5: Adding errors
Real data almost always contains errors. For SNP data, these take the form of
Step6: It is best to create the progeny before adding errors. Set the error rates and add errors at random.
Step7: mutations and dropouts make copies of the genotypeArray, so the original data remains unchanged. For example
Step8: Paternity and sibships
Create a paternityArray and cluster into sibships as usual (more information on these objects can be found here and here.
Step9: A very useful tool is the accuracy subfunction for sibshipCluster objects.
When the paternity and sibship structure are known (seldom the case in real life, but true for simulated data) this returns an array of handy information about the analysis
Step10: In this example, accuracy is high, but the probability of a missing sire is NaN because all the sires are present; this number is calculated only for offspring whose sire was absent.
We can adjust the paternityArray to see how much this affects the results.
For example, if we remove the sire of the first family (i.e. the male indexed by 1), there is a drop in the accuracy for full-sibling relationships, although half-sibling relationships are unaffected.
Step11: In contrast, imagine we had an idea that selfing was strong. How would this affect things?
Step12: The results are identical to the unmodified case; FAPS has correctly identified the partition structure in spite of the (incorrect) strong prior for high selfing.
Automation
It can be tedious to put together your own simulation for every analysis.
FAPS has an automated function that repeatedly creates genotype data, clusters into siblings and calls the accuracy function.
You can supply lists of variables and it will evaluate each combination.
For example, this code creates four families of five full siblings with a genotyping error rate of 0.0015.
It considers 30, 40 and 50 loci for 100, 250 or 500 candidate fathers.
Each parameter combination is replicated 10 times.
In reality you would want to do more than this; I have found that results tend to asymptote with 300 simulations.
Step13: For convenience, make_power provides a summary of the input parameters.
This can be turned off by setting verbose to False.
Similarly, the progress bar can be removed by setting progress to False.
This bar uses iPython widgets, and probably won't work outside of iPython, so it may be necessary to turn them off.
The results of make_power are basically the output from the accuracy function we saw before, but include information on simulation parameters, and the time taken to create the paternityArray and sibshipCluster objects. View them by inspecting eventab.
Arguments to set up the population work much like those to create genotypeArrays, and are quite flexible.
Have a look into the help file (run make_power? in Python) for more.
You can also take a look at the simulations in support of the main FAPS paper, which considered a range of contrasting demographic scenarios; the example above is adapted from there.
Error rates and missing candidates are important topics to get a handle on.
We can estimate these parameters (e.g. by genotyping some individuals twice and counting how many loci are different), but we can never completely be sure how close to reality we are.
With that in mind make_power allows you to simulate true values mu and the proportion of missing sires, but run the analysis with different values.
The idea is to estimate how wrong you could be before the analysis fails.
For example, this code would simulate the case where you thought that the error rate was 0.0015, and 5% of the candidates went unsampled, but in reality both parameters were double that amount.
Step14: If you want to perform downstream analysis, you can tell make_power to also export each paternity_Array and/or sibshipCluster object. This is done by setting return_paternities and return_clusters to True. For example, this code pulls out the distribution of family sizes from each sibshipArray, and plots it.
Step15: Custom simulations
Once you are familiar with the basic building blocks for generating data and running analysis, creating your own simulations is largely a case of setting up combinations of parameters, and looping over them.
Given the vast array of possible scenarios you could want to simulate, it is impossible to be comprehensive here, so it must suffice to give a couple of examples for inspiration.
Likelihood for missing sires
In this example I was interested in the performance of the likelihood estimator for a sire being absent.
This is the likelihood of generating the offspring genotype if paternal alleles come from population allele frequencies.
This is what the attribute lik_abset in a paternityArray tells you.
Ideally this likelihood should be below the likelihood of paternity for the true sire, but higher than that of the other candidates. I suspected this would not be the case when minor allele frequency is low and there are many candidates.
This cell sets up the simulation. I'm considering 50 loci, and mu=0.0015, but varying sample size and allele frequency.
Step16: This cell simulates genotype data and clusters the offspring into full sibships.
The code pulls out the mean probability that each sire is absent, and the rank of the likelihood for a missing sire among the likelihoods of paternity for the candidates.
Step17: There is a strong dependency on minor allele frequency. As MAF goes from zero to 0.5, the effectiveness of identifying a missing sire using this likelihood estimator goes from 'basically useless' to 'useful'.
Step18: In contrast, there is no effect of the number of adults. | Python Code:
import numpy as np
import faps as fp
import matplotlib.pylab as plt
import pandas as pd
from time import time, localtime, asctime
np.random.seed(37)
allele_freqs = np.random.uniform(0.2, 0.5, 50)
adults = fp.make_parents(10, allele_freqs, family_name='adult')
Explanation: Simulating data and power analysis
Tom Ellis, August 2017
Before committing to the time and cost of genotyping samples for a paternity study, it is always sensible to run simulations to test the likely statistical power of your data set. This can help with important questions regarding study design, such as finding an appropriate balance between the number of families vs offspring per family, or identifying a minimum number of loci to type. Simulated data can also be useful in verifying the results of an analysis.
FAPS provides tools to run such simulations. In this notebook we look at:
Basic tools for simulating genotype data.
Automated tools for power analysis.
Crafting custom simulations for specialised purposes.
Simulations using empirical datasets (under construction).
It is worth noting that I relied on loops for a lot of these tools, for the purely selfish reason that it was easy to code. Loops are of course slow, so if you work with these tools a lot there is ample scope for speeding things up (see especially the functions make_offspring, make_sibships and make_power).
Simulation building blocks
Creating genotypeArray objects
Simulations are built using genotypeArrays. See the section on these here for more information.
make_parents generates a population of reproductive adults from population allele frequencies.
This example creates ten individuals.
Note that this population will be in Hardy-Weinberg equilibrium, but yours may not.
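For intuition about what Hardy-Weinberg equilibrium implies here, the expected genotype frequencies at each locus are just q squared, 2pq and p squared. A quick sketch using the simulated allele frequencies (this shows the expectation only, not the genotype counts actually realised in adults):
import numpy as np
# expected Hardy-Weinberg genotype frequencies for the first five loci
p = allele_freqs[:5]          # minor allele frequencies drawn above
q = 1 - p
hwe = np.column_stack([q**2, 2*p*q, p**2])
print(np.round(hwe, 3))       # rows: loci; columns: common homozygote, heterozygote, rare homozygote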
End of explanation
family1 = fp.make_offspring(parents = adults, noffs=5)
family1.parents
Explanation: There are multiple ways to mate adults to generate offspring. If you supply a set of adults and an integer number of offspring, make_offspring mates adults at random.
End of explanation
family2 = fp.make_offspring(parents = adults, dam_list=[7,1,8,8,0], sire_list=[2,6,3,0,7])
family2.parents
Explanation: You can also supply an explicit list of dams and sires, in which case the adults are paired in the order they appear in each list.
End of explanation
family3 = fp.make_sibships(parents=adults, dam=0, sires=[1,2,3,4], family_size=5)
family3.parents
Explanation: Usually we really want to simulate half sib arrays. This can be done using make_sibships, which mates a single mother to a set of males.
End of explanation
family4 = fp.make_sibships(parents=adults, dam=0, sires=[1,2,3,4], family_size=[5,4,3,2])
family4.parents
Explanation: For uneven sibship sizes, give a list of sizes for each family of the same length as sires.
End of explanation
np.random.seed(85)
allele_freqs = np.random.uniform(0.2, 0.5, 50)
adults = fp.make_parents(10, allele_freqs, family_name='adult')
progeny = fp.make_sibships(parents=adults, dam=0, sires=[1,2,3,4], family_size=5)
Explanation: Adding errors
Real data almost always contains errors. For SNP data, these take the form of:
Missing data, where a locus fails to amplify for some reason
Genotyping errors, when the observed genotype at a locus is not the actual genotype.
These are straightforward to include in simulated data. First generate some clean data again, and mate the parents.
End of explanation
d, mu= 0.01, 0.0015 # values for dropout and error rate.
# add genotyping errors
adults_mu = adults.mutations(mu)
progeny_mu = progeny.mutations(mu)
# add dropouts (to the mutated data)
adults_mu = adults_mu.dropouts(d)
progeny_mu = progeny.dropouts(d)
Explanation: It is best to create the progeny before adding errors. Set the error rates and add errors at random.
End of explanation
print(adults.missing_data().mean())
print(adults_mu.missing_data().mean())
Explanation: mutations and dropouts make copies of the genotypeArray, so the original data remains unchanged. For example:
End of explanation
np.random.seed(85)
allele_freqs = np.random.uniform(0.4, 0.5, 50)
adults = fp.make_parents(10, allele_freqs, family_name='adult')
progeny = fp.make_sibships(parents=adults, dam=0, sires=[1,2,3,4], family_size=5)
mothers = adults.subset(progeny.parent_index('m', adults.names))
patlik = fp.paternity_array(progeny, mothers, adults, mu=0.0015)
sc = fp.sibship_clustering(patlik)
Explanation: Paternity and sibships
Create a paternityArray and cluster into sibships as usual (more information on these objects can be found here and here.
End of explanation
sc.accuracy(progeny, adults)
Explanation: A very useful tool is the accuracy subfunction for sibshipCluster objects.
When the paternity and sibship structure are known (seldom the case in real life, but true for simulated data) this returns an array of handy information about the analysis:
Binary indicator for whether the true partition was included in the sample of partitions.
Difference in log likelihood for the maximum likelihood partition identified and the true partition. Positive values indicate that the ML partition had greater support than the true partition.
Posterior probability of the true number of families.
Mean probabilities that a pair of true full sibs are identified as full sibs.
Mean probabilities that a pair of true half sibs are identified as half sibs.
Mean probabilities that a pair of true half or full sibs are correctly assigned as such (i.e. overall accuracy of sibship reconstruction).
Mean (log) probability of paternity of the true sires for those sires who had been sampled (who had non-zero probability in the paternityArray).
Mean (log) probability that the sire had not been sampled for those individuals whose sire was truly absent.
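Since the output is a bare array, it can help to attach labels when exploring it interactively. A sketch, assuming the entries come back in the order listed above (worth checking against the FAPS documentation for your version):
import pandas as pd
# label the accuracy output for readability; the order is assumed to
# follow the list above and should be verified for your FAPS version
labels = ['true partition sampled', 'delta logL (ML - true partition)',
          'P(true number of families)', 'P(full sibs called full sibs)',
          'P(half sibs called half sibs)', 'overall sibship accuracy',
          'mean logP of true sires (sampled)', 'mean logP sire missing (unsampled)']
acc = pd.Series(sc.accuracy(progeny, adults), index=labels)
print(acc)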
End of explanation
patlik.prob_array = patlik.adjust_prob_array(purge = 1, missing_parents=0.25)
sc = fp.sibship_clustering(patlik)
sc.accuracy(progeny, adults)
Explanation: In this example, accuracy is high, but the probability of a missing sire is NaN because all the sires are present; this number is calculated only for offspring whose sire was absent.
We can adjust the paternityArray to see how much this affects the results.
For example, if we remove the sire of the first family (i.e. the male indexed by 1), there is a drop in the accuracy for full-sibling relationships, although half-sibling relationships are unaffected.
End of explanation
patlik.prob_array = patlik.adjust_prob_array(selfing_rate=0.5)
sc = fp.sibship_clustering(patlik)
sc.accuracy(progeny, adults)
Explanation: In contrast, imagine we had an idea that selfing was strong. How would this affect things?
End of explanation
# Common simulation parameters
r = 10 # number of replicates
nloci = [30,40,50] # number of loci
allele_freqs = [0.25, 0.5] # draw allele frequencies
nadults = [100,250,500] # size of the adults population
mu = 0.0015 #genotype error rates
sires = 4
offspring = 5
np.random.seed(614)
eventab = fp.make_power(r, nloci, allele_freqs, nadults, sires, offspring, 0, mu)
Explanation: The results are identical to the unmodified case; FAPS has correctly identified the partition structure in spite of the (incorrect) strong prior for high selfing.
Automation
It can be tedious to put together your own simulation for every analysis.
FAPS has an automated function that repeatedly creates genotype data, clusters into siblings and calls the accuracy function.
You can supply lists of variables and it will evaluate each combination.
For example, this code creates four families of five full siblings with a genotyping error rate of 0.0015.
It considers 30, 40 and 50 loci for 100, 250 or 500 candidate fathers.
Each parameter combination is replicated 10 times.
In reality you would want to do more than this; I have found that results tend to asymptote with 300 simulations.
End of explanation
fp.make_power(r, nloci, allele_freqs, nadults, sires, offspring, 0,
mu_input= 0.003,
mu_real=0.0015,
unsampled_real=0.1,
unsampled_input = 0.05);
Explanation: For convenience, make_power provides a summary of the input parameters.
This can be turned off by setting verbose to False.
Similarly, the progress bar can be removed by setting progress to False.
This bar uses iPython widgets, and probably won't work outside of iPython, so it may be necessary to turn them off.
The results of make_power are basically the output from the accuracy function we saw before, but include information on simulation parameters, and the time taken to create the paternityArray and sibshipCluster objects. View them by inspecting eventab.
Arguments to set up the population work much like those to create genotypeArrays, and are quite flexible.
Have a look into the help file (run make_power? in Python) for more.
You can also take a look at the simulations in support of the main FAPS paper, which considered a range of contrasting demographic scenarios; the example above is adapted from there.
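On inspecting eventab: if it behaves like a pandas DataFrame (which a quick look should confirm), accuracy can be summarised per parameter combination with a groupby. A sketch only; the column names used here ('nloci', 'nadults') are assumptions and should be checked against eventab.columns in your FAPS version:
import pandas as pd
# summarise mean performance over replicates for each parameter combination
# NB: 'nloci' and 'nadults' are assumed column names - inspect eventab.columns first
eventab = pd.DataFrame(eventab)
print(eventab.columns)
print(eventab.groupby(['nloci', 'nadults']).mean())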
Error rates and missing candidates are important topics to get a handle on.
We can estimate these parameters (e.g. by genotyping some individuals twice and counting how many loci are different), but we can never completely be sure how close to reality we are.
With that in mind make_power allows you to simulate true values mu and the proportion of missing sires, but run the analysis with different values.
The idea is to estimate how wrong you could be before the analysis fails.
For example, this code would simulate the case where you thought that the error rate was 0.0015, and 5% of the candidates went unsampled, but in reality both parameters were double that amount.
End of explanation
eventab, evenclusters = fp.make_power(r, nloci, allele_freqs, nadults, sires, offspring, 0, mu, return_clusters=True, verbose=False)
even_famsizes = np.array([evenclusters[i].family_size() for i in range(len(evenclusters))])
plt.plot(even_famsizes.mean(0))
plt.show()
Explanation: If you want to perform downstream analysis, you can tell make_power to also export each paternity_Array and/or sibshipCluster object. This is done by setting return_paternities and return_clusters to True. For example, this code pulls out the distribution of family sizes from each sibshipArray, and plots it.
End of explanation
# Common simulation parameters
nreps = 10 # number of replicates
nloci = [50] # number of loci
allele_freqs = [0.1, 0.2, 0.3, 0.4, 0.5] # draw allele frequencies
nadults = [10, 100, 250, 500, 750, 1000] # size of the adults population
mu_list = [0.0015] #genotype error rates
nsims = nreps * len(nloci) * len(allele_freqs) * len(nadults) * len(mu_list) # total number of simulations to run
dt = np.zeros([nsims, 7]) # empty array to store data
Explanation: Custom simulations
Once you are familiar with the basic building blocks for generating data and running analysis, creating your own simulations is largely a case of setting up combinations of parameters, and looping over them.
Given the vast array of possible scenarios you could want to simulate, it is impossible to be comprehensive here, so it must suffice to give a couple of examples for inspiration.
Likelihood for missing sires
In this example I was interested in the performance of the likelihood estimator for a sire being absent.
This is the likelihood of generating the offspring genotype if paternal alleles come from population allele frequencies.
This is what the attribute lik_abset in a paternityArray tells you.
Ideally this likelihood should be below the likelihood of paternity for the true sire, but higher than that of the other candidates. I suspected this would not be the case when minor allele frequency is low and there are many candidates.
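To make the idea concrete, here is a from-scratch, single-locus version of that likelihood (not a call into FAPS internals): the probability of each offspring genotype given the mother's genotype, when the paternal allele is simply drawn from the population.
import numpy as np

def offspring_probs_missing_sire(mother_genotype, p):
    """P(offspring carries 0, 1 or 2 copies of the minor allele) at one locus,
    when the mother carries `mother_genotype` copies and the paternal allele
    is drawn from a population with minor allele frequency p."""
    maternal = {0: [1.0, 0.0], 1: [0.5, 0.5], 2: [0.0, 1.0]}[mother_genotype]
    paternal = [1 - p, p]
    probs = np.zeros(3)
    for m_allele, pm in enumerate(maternal):
        for f_allele, pf in enumerate(paternal):
            probs[m_allele + f_allele] += pm * pf
    return probs

# a heterozygous mother with minor allele frequency 0.1:
print(offspring_probs_missing_sire(1, 0.1))   # [0.45, 0.5, 0.05]
When p is small, the common genotypes are very likely under this "missing sire" model, so an unsampled father is hard to distinguish from a randomly chosen candidate, which is exactly the dependence on minor allele frequency explored below.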
This cell sets up the simulation. I'm considering 50 loci, and mu=0.0015, but varying sample size and allele frequency.
End of explanation
t0 = time()
counter = 0
print("Beginning simulations on {}.".format(asctime(localtime(time()))))
for r in range(nreps):
    for l in range(len(nloci)):
        for a in range(len(allele_freqs)):
            for n in range(len(nadults)):
                for m in range(len(mu_list)):
                    af = np.repeat(allele_freqs[a], nloci[l])
                    adults = fp.make_parents(nadults[n], af)
                    progeny = fp.make_offspring(adults, 100)
                    mi = progeny.parent_index('m', adults.names)  # maternal index
                    mothers = adults.subset(mi)
                    patlik = fp.paternity_array(progeny, mothers, adults, mu_list[m])
                    # Find the rank of the missing-sire term within the array.
                    rank = [np.where(np.sort(patlik.prob_array[i]) == patlik.prob_array[i, -1])[0][0]
                            for i in range(progeny.size)]
                    rank = np.array(rank).mean() / nadults[n]
                    # get the posterior probability for the missing-sire term
                    prob_missing = np.exp(patlik.prob_array[:, -1]).mean()
                    # export data
                    dt[counter] = np.array([r, nloci[l], allele_freqs[a], nadults[n],
                                            mu_list[m], rank, prob_missing])
                    # update counters
                    counter += 1
print("Completed in {} hours.".format(round((time() - t0)/3600, 2)))
head = ['rep', 'nloci', 'allele_freqs', 'nadults', 'mu', 'rank', 'prob_missing']
dt = pd.DataFrame(dt, columns=head)
Explanation: This cell simulates genotype data and clusters the offspring into full sibships.
The code pulls out the mean probability that each sire is absent, and the rank of the likelihood for a missing sire among the likelihoods of paternity for the candidates.
End of explanation
dt.groupby('allele_freqs').mean()
Explanation: There is a strong dependency on minor allele frequency. As MAF goes from zero to 0.5, the effectiveness of identifying a missing sire using this likelihood estimator goes from 'basically useless' to 'useful'.
End of explanation
dt.groupby('nadults').mean()
Explanation: In contrast, there is no effect of the number of adults.
End of explanation |
14,602 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: The way you define groups affects your statistical tests
This notebook is, I hope, a step towards explaining the issue and sharing some intuition about
the difference between experiments where classes to be compared are under the control of the experimenter, and those where they are predicted or otherwise subject to error
the importance to interpreting a statistical test of knowing the certainty with which individual data points are assigned to the classes being compared
the effects of classification false positive and false negative rate, and an imbalance of membership between classes being compared
Overview
Step3: A perfect experiment
Step4: A $t$-test between samples of these populations is not consistent with the null hypothesis
Step5: Now the frequency of each experimental group is also plotted, as a histogram. The blue group - our negative examples, match the idealised distribution well. The green group - our positives - match the profile less well, but even so the difference between means is visibly apparent.
The title of the plot shows two $P$-values from the $t$-test, which is again very small. The null hypothesis
Step6: Again, the $P$-value reported by the $t$-test allows us to reject the null hypothesis. But there's a difference to the earlier graphs, as the two $P$-values in the title differ. That is because they represent two different tests.
$P_{real}$
Step7: Now the $t$-test reports several orders of magnitude difference in $P$-values. Both are still very small, and the difference in population means is clear, but the trend is obvious
Step8: The direction of change is again the same
Step9: With $FPR=0.2$, the $t$-test now reports a $P$-value that can be 20-30 orders of magnitude different from what would be seen with no misclassification. This is a considerable move towards being less able to reject the null hypothesis in what we might imagine to be a clear-cut case of having two distinct populations.
Combining false negatives and false positives
As might be expected, the effect of combining false positive and false negative misclassifications is greater than either case alone. This is illustrated in the plots below.
Step10: The effects of class size
We have seen that the relative sizes of positive and negative classes affect whether $FPR$ or $FNR$ is the more influential error type for this data. The total size of classes also has an influence. In general, reducing the number of class members increases the impact of misclassification, in part because the smaller sample size overall makes it harder to reject the null hypothesis as a 'baseline'
Step13: Some realisations of the last example (n_neg=200, n_pos=10, fpr=0.1, fnr=0.1) result in a $P$-value that (at the 0.05 level) rejects the null hypothesis when there is no misclassification, but cannot reject the null hypothesis when misclassification is taken into account.
Misclassification of samples into categories can prevent statistical determination of category differences, even for distinct categories
The general case
All our examples so far have been single realisations of a fictional experiment. The results will vary every time you (re-)run a cell, and quite greatly between runs, especially for some parameter values.
Let's look at what happens over several hundred replications of the experiment, to pick up on some trends.
Python code
As before, ignore this if you don't want to look at it.
Step14: The perfect experiment
Assuming $FNR=0$ and $FPR=0$, i.e. no misclassification at all, the observed and real subsample $t$-test values are always identical. We plot $t$-test $P$-values results for 1000 replicates of our fictional experiment, below
Step15: The red lines in the plot indicate a nominal $P=0.05$ threshold.
The vertical line indicates this for the 'real' data - that is to say that points on the left of this line indicate a 'real' difference between the means of the populations (we reject the null hypothesis) at $P=0.05$.
The horizontal line indicates a similar threshold at $P=0.05$ for the 'observed' data - that which has some level of misclassification. Points below this line indicate that the experiment - with misclassification - rejects the null hypothesis at $P=0.05$.
Here, there is no misclassification, and all points lie on the diagonal, accordingly. The two populations we draw from are quite distinct, so all points cluster well to the left of and below the $P=0.05$ thresholds.
The effect of $FNR$
We saw before that, due to the small relative size of the positive set, the effect of $FNR$ was not very pronounced. Running 1000 replicates of the experiment, we can get some intuition about how increasing $FNR$ affects the observed $P$-value, relative to that which we would see without any misclassification.
Step16: The effet of increasing $FNR$ is to move the reported $P$-values away from the $y=x$ diagonal, and towards the $P=0.05$ threshold. Even with $FNR=0.1$, almost every run of the experiment misreports the 'real' $P$-value such that we are less likely to reject the null hypothesis.
The effect of $FPR$
We also saw earlier that, again due to the small relative size of the positive set, the effect of $FPR$ was greater than that of $FNR$.
By running 1000 replicates of the experiment as before, we can understand how increasing $FPR$ affects the observed $P$-value, relative there being no misclassification.
Step17: We see the same progression of 'observed' $P$-values away from what would be the 'real' $P$-value without misclassification, but this time much more rapidly than with $FNR$ as $FPR$ increases. Even for this very distinct pair of populations, whose 'true' $P$-value should be ≈$10^-40$, an $FPR$ of 0.2 runs the risk of occasionally failing to reject the null hypothesis that the population means are the same.
As before, combining misclassification of positive and negative examples results in us being more likely to accept the null hypothesis, even for a very distinct pair of populations.
Step18: A more realistic population?
The examples above have all been performed with two populations that have very distinct means, by $t$-test. How powerful is the effect of misclassification when the distinction is not so clear.
Let's consider two populations with less readily-distinguishable means, as might be encountered in real data
Step19: In a single realisation of a perfect experiment with no misclassification, the reported $P$-value is likely to very strongly reject the null-hypothesis
Step20: What is the impact of misclassification?
Now, we increase the level of misclassification modestly
Step21: And now, at $FPR=FNR=0.05$ we start to see the population of experiments creeping into the upper left quadrant. In this quadrant we have experiments where data that (if classified correctly) would reject the null hypothesis are observed to accept it instead. This problem gets worse as the rate of misclassification increases.
Step22: At $FPR=FNR=0.2$
Step23: What does this all mean?
If we know that our experiment may involve misclassification into the groups we are going to compare, then we need to consider alternative statistical methods to $t$-tests, as misclassification can introduce quantitative (effect size) and qualitative (presence/absence of an effect) errors to our analysis.
The likelihood of such errors being introduced depends on the nature of the experiment
Step24: Note that, apart from the overall sample size (170 mouse proteins instead of 2000) the parameters for this run are not very different from those we have been using.
Interactive examples
The cells below allow you to explore variation in all the parameters we have modified above, and their effects on reported $P$ values | Python Code:
%pylab inline
from scipy import stats
from ipywidgets import interact, fixed
def sample_distributions(mu_neg, mu_pos, sd_neg, sd_pos,
                         n_neg, n_pos, fnr, fpr,
                         clip_low, clip_high):
    """Returns subsamples and observations from two normal
    distributions.

    - mu_neg     mean of 'negative' samples
    - mu_pos     mean of 'positive' samples
    - sd_neg     standard deviation of 'negative' samples
    - sd_pos     standard deviation of 'positive' samples
    - n_neg      number of subsampled data points (negatives)
    - n_pos      number of subsampled data points (positives)
    - fnr        false negative rate (positives assigned to negative class)
    - fpr        false positive rate (negatives assigned to positive class)
    - clip_low   low value for clipping samples
    - clip_high  high value for clipping samples
    """
    # subsamples
    samples = (clip(stats.norm.rvs(mu_neg, sd_neg, size=n_neg), clip_low, clip_high),
               clip(stats.norm.rvs(mu_pos, sd_pos, size=n_pos), clip_low, clip_high))
    # observed samples, including FPR and FNR
    [shuffle(s) for s in samples]
    obs_neg = concatenate((samples[0][:int((1-fpr)*n_neg)],
                           samples[1][int((1-fnr)*n_pos):]))
    obs_pos = concatenate((samples[1][:int((1-fnr)*n_pos)],
                           samples[0][int((1-fpr)*n_neg):]))
    # return subsamples and observations
    return ((samples[0], samples[1]), (obs_neg, obs_pos))
def draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5,
                           n_neg=100, n_pos=100,
                           fnr=0, fpr=0,
                           clip_low=0, clip_high=100,
                           num_bins=50,
                           xmin=50, xmax=100, points=100,
                           subsample=True,
                           negcolor='blue', poscolor='green'):
    """Renders a matplotlib plot of normal distributions and subsamples,
    and returns t-test P values that the means of the two subsamples are
    equal, with and without FNR/FPR.

    - mu_neg     mean of 'negative' samples
    - mu_pos     mean of 'positive' samples
    - sd_neg     standard deviation of 'negative' samples
    - sd_pos     standard deviation of 'positive' samples
    - n_neg      number of subsampled data points (negatives)
    - n_pos      number of subsampled data points (positives)
    - fnr        false negative rate (positives assigned to negative class)
    - fpr        false positive rate (negatives assigned to positive class)
    - clip_low   low value for clipping samples
    - clip_high  high value for clipping samples
    - num_bins   number of bins for histogram
    - xmin       x-axis lower limit
    - xmax       x-axis upper limit
    - points     number of points for plotting PDF
    - subsample  Boolean: True plots subsamples
    """
    x = linspace(xmin, xmax, points)
    # Normal PDFs
    norms = (normpdf(x, mu_neg, sd_neg), normpdf(x, mu_pos, sd_pos))
    # Get subsamples and observations
    samples, obs = sample_distributions(mu_neg, mu_pos, sd_neg, sd_pos,
                                        n_neg, n_pos, fnr, fpr,
                                        clip_low, clip_high)
    # Plot distribution and samples
    plot(x, norms[0], color=negcolor)
    plot(x, norms[1], color=poscolor)
    if subsample:
        h_neg = hist(samples[0], num_bins, normed=1, facecolor=negcolor, alpha=0.5)
        h_pos = hist(samples[1], num_bins, normed=1, facecolor=poscolor, alpha=0.5)
    ax = gca()
    ax.set_xlabel("value")
    ax.set_ylabel("frequency")
    # Calculate t-tests
    t_sam = stats.ttest_ind(samples[0], samples[1], equal_var=False)
    t_obs = stats.ttest_ind(obs[0], obs[1], equal_var=False)
    ax.set_title("$P_{real}$: %.02e $P_{obs}$: %.02e" % (t_sam[1], t_obs[1]))
Explanation: The way you define groups affects your statistical tests
This notebook is, I hope, a step towards explaining the issue and sharing some intuition about
the difference between experiments where classes to be compared are under the control of the experimenter, and those where they are predicted or otherwise subject to error
the importance to interpreting a statistical test of knowing the certainty with which individual data points are assigned to the classes being compared
the effects of classification false positive and false negative rate, and an imbalance of membership between classes being compared
Overview: Experiment
Let's say you have data from an experiment. The experiment is to test the binding of some human drug candidates to a large number (thousands) of mouse proteins. The experiment was conducted with transformed mouse proteins in yeast, so some of the proteins are in different states to how they would be found in the mouse: truncated to work in yeast, and with different post-translational modifications.
Overview: Analysis
Now let's consider one possible analysis of the data. We will test whether the mouse proteins that bind to the drug are more similar to their human equivalents than those that do not bind to the drug.
We are ignoring the question of what 'equivalent' means here, and assume that a satisfactory equivalency is known and accepted. We will represent the 'strength' of equivalence by percentage sequence identity of each mouse protein to its human counterpart, on a scale of 0-100%.
We will test whether there is a difference between the two groups by a $t$-test of two samples. We assume that each group (binds/does not bind) subsamples a distinct, normally-distributed population of sequence identities. We do not know whether the means or variances of these populations are the same, but we will test the null hypothesis that the means are identical (or the difference between means is zero).
This isn't the way I would want to analyse this kind of data in a real situation - but we're doing it here to show the effect of how we define groups. We'll define positive and negative groups as follows:
positive: mouse protein that binds to drug in the yeast experiment
negative: mouse protein that does not bind to drug in the yeast experiment
Python code:
Let's take a look at what we're actually doing when we perform this analysis. We'll use some Python code to visualise and explore this. Skip over this if you like…
End of explanation
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, subsample=False)
Explanation: A perfect experiment:
First, we're assuming that there are two "real" populations of sequence identity between mouse and human equivalents - one representing proteins that bind to the drug in the experiment; one representing proteins that do not bind to the drug in this experiment. We're also assuming that the distribution of these values is Normal.
Giving ourselves a decent chance for success, we'll assume that the real situation for drug-binding proteins is that they have a mean sequence identity of around 90%, and those proteins that don't bind the drug have a mean identity of 85% to their human counterpart. We'll also assume that the standard deviation of thes identities is 5% in each case.
These are the "real", but idealised, populations from which our experimental results are drawn.
We can see how these distributions look:
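Since both idealised populations are Normal with the same spread, we can also put a simple number on how separable they are before any subsampling or misclassification enters. A quick sketch, using the same parameters as the plot (means 80 and 90, standard deviation 5):
from scipy import stats
# how separable are the two idealised populations?
mu_neg, mu_pos, sd = 80.0, 90.0, 5.0
cohens_d = (mu_pos - mu_neg) / sd
print("Cohen's d:", cohens_d)  # 2.0 - a very large effect size
# probability that a randomly chosen 'positive' exceeds a randomly chosen 'negative'
diff_sd = sd * 2**0.5          # SD of the difference of two independent draws
print("P(pos > neg):", 1 - stats.norm.cdf(0, loc=mu_pos - mu_neg, scale=diff_sd))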
End of explanation
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100)
Explanation: A $t$-test between samples of these populations is not consistent with the null hypothesis: that both means are equal. The reported P-value in the plot title tells us that the probability of seeing this data (or a greater difference between the population means) is $P_{real}$ and is pretty small. No-one should have difficulty seeing that these populations have different means.
NOTE: The $t$-test is calculated on the basis of a 100-item subsample of each population and $P$-values will change when you rerun the cell.
When the experiment is performed, the results are not these idealised populations, but observations that are subsampled from the population.
We'll say that we find fewer mouse proteins that bind the drug than do not and, for the sake of round numbers, we'll have:
100 positive results: mouse protein binds drug
2000 negative results: mouse protein does not bind drug
And we can show this outcome:
End of explanation
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.01)
Explanation: Now the frequency of each experimental group is also plotted, as a histogram. The blue group - our negative examples, match the idealised distribution well. The green group - our positives - match the profile less well, but even so the difference between means is visibly apparent.
The title of the plot shows two $P$-values from the $t$-test, which is again very small. The null hypothesis: that the means are equal, is rejected.
Experimental error in classification
For many experiments, the experimenter is in complete control of sample classification, throughout.
For example, the experimenter either does, or does not, inject a mouse with a drug. The 'experimental' sample is easily and absolutely distinguished from the control. In this case, $t$-tests are simple to apply, with relatively few caveats.
This easy distinction between 'experiment' and 'control' samples is such a common circumstance, that it is easy to fall into the trap of thinking that it is always a clear division. But it is not, and it is increasingly less so when the samples being compared derive from high-throughput experiments, as in our fictional example, here.
When the categories being compared are classified as the result of an experiment or prediction that has an inherent error in classification, then we may be comparing hybrid populations of results: mixtures of positive and negative examples - and this can potentially affect the analysis results.
Where does classification error come from?
In our fictional experiment every interaction is taking place in yeast, so the biochemistry is different to the mouse and may affect the outcome of binding. Furthermore, the assay itself will have detection limits (some real - maybe productive - binding will not be detected).
Although we are doing the experiment in yeast, we want to claim that the results tell us something about interaction in the mouse. Why is that? It is in part because the comparison of mouse protein sequence identity to the human counterpart protein implies that we care about whether the interaction is similar in the human and mouse system. It is also because the implication of binding in the yeast experiment is efficacy in the animal system.
The experiment is an imperfect proxy for detecting whether the drug "really binds" to each protein in the mouse. Not every individual binding outcome will be correct. Errors arise in comparison to what happens in the mouse, and also simple experimental error or variation. We can therefore define outcomes for each individual test:
true positive: the drug binds in the yeast experiment, and in the mouse
true negative: the drug does not bind in the yeast experiment, and does not bind in the mouse
false positive: the drug binds in the yeast experiment, but does not in the mouse
false negative: the drug does not bind in the yeast experiment, but does bind in the mouse
Introducing false negatives
We can look at the effect of introducing a false negative rate (FNR: the probability that a protein which binds the drug in the mouse gives a negative result in yeast).
We'll start it low, at $FNR=0.01$:
End of explanation
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.2)
Explanation: Again, the $P$-value reported by the $t$-test allows us to reject the null hypothesis. But there's a difference to the earlier graphs, as the two $P$-values in the title differ. That is because they represent two different tests.
$P_{real}$: the $P$-value obtained with no false positives or false negatives. This is the $P$-value we would get if we could correctly assign every protein to be either 'binding' or 'non-binding' of the drug, in the yeast experiment.
$P_{obs}$: the $P$-value obtained from our observed dataset, which could contain either false positives or false negatives. This is the $P$-value we would get if the yeast experiment had the same false positive or false negative rate.
In this case, with FNR=0.01, and 100 'true' positive interactors, we should expect approximately one 'true' positive to be misclassified as a 'negative', and the resulting effect on our test to be quite small. This is, in fact, what we see - $P_{obs}$ should be very slightly higher than $P_{real}$.
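One way to see why misclassification always pulls $P_{obs}$ towards the null is to write the expected mean of each observed group as a mixture of the two true populations. A small sketch of that calculation, using the same class sizes and means as above:
# expected means of the *observed* groups when samples are misassigned
mu_neg, mu_pos = 80.0, 90.0
n_neg, n_pos = 2000, 100

def observed_means(fnr, fpr):
    n_tp, n_fp = (1 - fnr) * n_pos, fpr * n_neg   # make up the observed positives
    n_tn, n_fn = (1 - fpr) * n_neg, fnr * n_pos   # make up the observed negatives
    obs_pos = (n_tp * mu_pos + n_fp * mu_neg) / (n_tp + n_fp)
    obs_neg = (n_tn * mu_neg + n_fn * mu_pos) / (n_tn + n_fn)
    return obs_neg, obs_pos

print(observed_means(fnr=0.01, fpr=0.0))   # barely changed from (80, 90)
print(observed_means(fnr=0.2, fpr=0.2))    # means pulled towards each other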
Increasing the rate of false negatives
If we increase the rate of false negatives to first $FNR=0.1$ and then $FNR=0.2$, we see a greater influence on our statistical test:
End of explanation
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.01)
Explanation: Now the $t$-test reports several orders of magnitude difference in $P$-values. Both are still very small, and the difference in population means is clear, but the trend is obvious: misclassification moves the reported $P$-value closer to accepting the null hypothesis that both populations have the same mean.
Introducing false positives
Now let's introduce a false positive rate (FPR: the probability that a protein which does not bind the drug in the mouse does so in the yeast experiment).
Again, starting low, at $FNR=0.01$:
End of explanation
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.2)
Explanation: The direction of change is again the same: with false positive errors, the test $P$-value moves towards accepting the null hypothesis. Also, the size of this change is (probably - it might change if you rerun the notebook) greater than that which we saw for $FNR=0.01$.
The increase in effect is because the sample sizes for positive and negative groups differ. 1% of a sample of 2000 negatives is 20; 1% of a sample of 100 is 1. Misclassifying 20 negatives in the middle of 100 positives is likely to have more effect than misclassifying a single positive amongst 2000 negatives.
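The asymmetry is easy to tabulate: the expected number of misassigned samples contributed by each error type, for the class sizes used here (simple arithmetic, nothing specific to the functions above):
# expected numbers of misclassified samples for n_neg=2000, n_pos=100
n_neg, n_pos = 2000, 100
for rate in [0.01, 0.1, 0.2]:
    print('rate=%.2f: %.0f negatives mixed into the %d positives; '
          '%.0f positives mixed into the %d negatives'
          % (rate, rate * n_neg, n_pos, rate * n_pos, n_neg))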
This tendency becomes more pronounced as we increase $FPR$:
End of explanation
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.01, fnr=0.01)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.1, fnr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.2, fnr=0.2)
Explanation: With $FPR=0.2$, the $t$-test now reports a $P$-value that can be 20-30 orders of magnitude different from what would be seen with no misclassification. This is a considerable move towards being less able to reject the null hypothesis in what we might imagine to be a clear-cut case of having two distinct populations.
Combining false negatives and false positives
As might be expected, the effect of combining false positive and false negative misclassifications is greater than either case alone. This is illustrated in the plots below.
End of explanation
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.1, fnr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=1000, n_pos=50, fpr=0.1, fnr=0.1)
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=200, n_pos=10, fpr=0.1, fnr=0.1)
Explanation: The effects of class size
We have seen that the relative sizes of positive and negative classes affect whether $FPR$ or $FNR$ is the more influential error type for this data. The total size of classes also has an influence. In general, reducing the number of class members increases the impact of misclassification, in part because the smaller sample size overall makes it harder to reject the null hypothesis as a 'baseline':
End of explanation
def multiple_samples(n_samp=1000,
                     mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5,
                     n_neg=100, n_pos=100,
                     fnr=0, fpr=0,
                     clip_low=0, clip_high=100):
    """Returns the distribution of P-values obtained from subsampled
    and observed (with FNR/FPR) normal distributions, over n_samp
    repetitions.

    - n_samp     number of times to (re)sample from the distribution
    - mu_neg     mean of 'negative' samples
    - mu_pos     mean of 'positive' samples
    - sd_neg     standard deviation of 'negative' samples
    - sd_pos     standard deviation of 'positive' samples
    - n_neg      number of subsampled data points (negatives)
    - n_pos      number of subsampled data points (positives)
    - fnr        false negative rate (positives assigned to negative class)
    - fpr        false positive rate (negatives assigned to positive class)
    - clip_low   low value for clipping samples
    - clip_high  high value for clipping samples
    """
    p_sam, p_obs = [], []
    for n in range(n_samp):
        samples, obs = sample_distributions(mu_neg, mu_pos, sd_neg, sd_pos,
                                            n_neg, n_pos, fnr, fpr,
                                            clip_low, clip_high)
        t_sam = stats.ttest_ind(samples[0], samples[1], equal_var=False)
        t_obs = stats.ttest_ind(obs[0], obs[1], equal_var=False)
        p_sam.append(t_sam[1])
        p_obs.append(t_obs[1])
    # return the P-values
    return (p_sam, p_obs)
def draw_multiple_samples(n_samp=1000,
mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5,
n_neg=100, n_pos=100,
fnr=0, fpr=0,
clip_low=0, clip_high=100,
logy=True):
Plots the distribution of P-values obtained from subsampled
and observed (with FNR/FPR) normal distributions, over n_samp
repetitions.
- n_samp number of times to (re)sample from the distribution
- mu_neg mean of 'negative' samples
- mu_pos mean of 'positive' samples
- sd_neg standard deviation of 'negative' samples
- sd_pos standard deviation of 'positive' samples
- n_neg number of subsampled data points (negatives)
- n_pos number of subsampled data points (positives)
- fnr false negative rate (positives assigned to negative class)
- fpr false positive rate (negatives assigned to positive class)
- clip_low low value for clipping samples
- clip_high high value for clipping samples
p_sam, p_obs = multiple_samples(n_samp, mu_neg, mu_pos,
sd_neg, sd_pos, n_neg, n_pos,
fnr, fpr, clip_low, clip_high)
# plot P-values against each other
if logy:
p = loglog(p_sam, p_obs, 'o', alpha=0.3)
else:
p = semilogx(p_sam, p_obs, 'o', alpha=0.3)
ax = gca()
ax.set_xlabel("'Real' subsample P-value")
ax.set_ylabel("Observed subsample P-value")
ax.set_title("reps=%d $n_{neg}$=%d $n_{pos}$=%d FNR=%.02f FPR=%.02f" %
(n_samp, n_neg, n_pos, fnr, fpr))
# Add y=x lines, P=0.05
lims = [min([ax.get_xlim(), ax.get_ylim()]),
max([(0.05, 0.05), max([ax.get_xlim(), ax.get_ylim()])])]
if logy:
loglog(lims, lims, 'k', alpha=0.75)
ax.set_aspect('equal')
else:
semilogx(lims, lims, 'k', alpha=0.75)
vlines(0.05, min(ax.get_ylim()), max(max(ax.get_ylim()), 0.05), color='red') # add P=0.05 lines
hlines(0.05, min(ax.get_xlim()), max(max(ax.get_xlim()), 0.05), color='red')
Explanation: Some realisations of the last example (n_neg=200, n_pos=10, fpr=0.1, fnr=0.1) result in a $P$-value that (at the 0.05 level) rejects the null hypothesis when there is no misclassification, but cannot reject the null hypothesis when misclassification is taken into account.
Misclassification of samples into categories can prevent statistical determination of category differences, even for distinct categories
The general case
All our examples so far have been single realisations of a fictional experiment. The results will vary every time you (re-)run a cell, and quite greatly between runs, especially for some parameter values.
Let's look at what happens over several hundred replications of the experiment, to pick up on some trends.
Python code
As before, ignore this if you don't want to look at it.
End of explanation
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100)
Explanation: The perfect experiment
Assuming $FNR=0$ and $FPR=0$, i.e. no misclassification at all, the observed and real subsample $t$-test values are always identical. We plot $t$-test $P$-values results for 1000 replicates of our fictional experiment, below:
End of explanation
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.01)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.1)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fnr=0.2)
Explanation: The red lines in the plot indicate a nominal $P=0.05$ threshold.
The vertical line indicates this for the 'real' data - that is to say that points on the left of this line indicate a 'real' difference between the means of the populations (we reject the null hypothesis) at $P=0.05$.
The horizontal line indicates a similar threshold at $P=0.05$ for the 'observed' data - that which has some level of misclassification. Points below this line indicate that the experiment - with misclassification - rejects the null hypothesis at $P=0.05$.
Here, there is no misclassification, and all points lie on the diagonal, accordingly. The two populations we draw from are quite distinct, so all points cluster well to the left of and below the $P=0.05$ thresholds.
The effect of $FNR$
We saw before that, due to the small relative size of the positive set, the effect of $FNR$ was not very pronounced. Running 1000 replicates of the experiment, we can get some intuition about how increasing $FNR$ affects the observed $P$-value, relative to that which we would see without any misclassification.
End of explanation
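As a compact numerical companion to these plots, the following sketch (assuming the multiple_samples helper defined above, with numpy available) summarises how far the observed $P$-values typically drift from the 'real' ones as $FNR$ grows:
import numpy as np

for fnr in (0, 0.01, 0.1, 0.2):
    p_sam, p_obs = multiple_samples(n_samp=200,
                                    mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5,
                                    n_neg=2000, n_pos=100, fnr=fnr)
    # median shift of the observed P-value, in orders of magnitude
    drift = np.median(np.log10(np.array(p_obs) / np.array(p_sam)))
    print("FNR=%.2f: observed P-value typically shifted by %+.1f orders of magnitude" % (fnr, drift))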
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.01)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.1)
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.2)
Explanation: The effect of increasing $FNR$ is to move the reported $P$-values away from the $y=x$ diagonal, and towards the $P=0.05$ threshold. Even with $FNR=0.1$, almost every run of the experiment misreports the 'real' $P$-value such that we are less likely to reject the null hypothesis.
The effect of $FPR$
We also saw earlier that, again due to the small relative size of the positive set, the effect of $FPR$ was greater than that of $FNR$.
By running 1000 replicates of the experiment as before, we can understand how increasing $FPR$ affects the observed $P$-value, relative to there being no misclassification.
End of explanation
draw_multiple_samples(n_samp=1000, mu_neg=80, mu_pos=90, sd_neg=5, sd_pos=5, n_neg=2000, n_pos=100, fpr=0.2, fnr=0.2)
Explanation: We see the same progression of 'observed' $P$-values away from what would be the 'real' $P$-value without misclassification, but this time much more rapidly than with $FNR$ as $FPR$ increases. Even for this very distinct pair of populations, whose 'true' $P$-value should be $\approx 10^{-40}$, an $FPR$ of 0.2 runs the risk of occasionally failing to reject the null hypothesis that the population means are the same.
As before, combining misclassification of positive and negative examples results in us being more likely to accept the null hypothesis, even for a very distinct pair of populations.
End of explanation
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50)
Explanation: A more realistic population?
The examples above have all been performed with two populations that have very distinct means, by $t$-test. How powerful is the effect of misclassification when the distinction is not so clear?
Let's consider two populations with less readily-distinguishable means, as might be encountered in real data:
$\mu_{neg}=85$, $\mu_{pos}=90$, $\sigma_{neg}=6$, $\sigma_{pos}=6$
Over 1000 repeated (perfect) experiments, pretty much all experiments reject the null hypothesis that the means are the same, at $P=0.05$, but some (rare) experiments might falsely accept this:
End of explanation
draw_sample_comparison(mu_neg=80, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50)
Explanation: In a single realisation of a perfect experiment with no misclassification, the reported $P$-value is likely to very strongly reject the null-hypothesis:
End of explanation
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50, fnr=0.01, fpr=0.01)
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50, fnr=0.05, fpr=0.05)
Explanation: What is the impact of misclassification?
Now, we increase the level of misclassification modestly:
End of explanation
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50, fnr=0.1, fpr=0.1)
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50,
fnr=0.1, fpr=0.1, logy=False)
Explanation: And now, at $FPR=FNR=0.05$ we start to see the population of experiments creeping into the upper left quadrant. In this quadrant we have experiments where data that (if classified correctly) would reject the null hypothesis are observed to accept it instead. This problem gets worse as the rate of misclassification increases.
End of explanation
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50, fnr=0.2, fpr=0.2)
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=90, sd_neg=6, sd_pos=6, n_neg=1000, n_pos=50,
fnr=0.2, fpr=0.2, logy=False)
Explanation: At $FPR=FNR=0.2$
End of explanation
draw_multiple_samples(n_samp=1000, mu_neg=85, mu_pos=89, sd_neg=7, sd_pos=7, n_neg=150, n_pos=20,
fnr=0.1, fpr=0.15, logy=False)
Explanation: What does this all mean?
If we know that our experiment may involve misclassification into the groups we are going to compare, then we need to consider alternative statistical methods to $t$-tests, as misclassification can introduce quantitative (effect size) and qualitative (presence/absence of an effect) errors to our analysis.
The likelihood of such errors being introduced depends on the nature of the experiment: number of samples in each group, expected false positive and false negative rate, and the expected difference between the two groups. We need to have at least an estimate of each of these quantities to be able to determine whether a simple test (e.g. $t$-test) might be appropriate, whether we need to take misclassification into account explicitly, or whether the data are likely unable to give a decisive answer to our biological question.
A further point to consider is whether the initial assumption of the question/experiment is realistic. For instance, in our fictional example here, is it truly realistic to expect that our drug will only bind those mouse proteins that are most similar to their human counterparts? I would argue that this is unlikely, and that there are almost certainly proportionally as many proteins highly-similar to their human counterparts that do not bind the drug. As misclassification can tend to sway the result towards an overall false negative of "no difference between the groups" where there is one, it may be difficult to distinguish between faulty assumptions, and the effects of misclassification.
Does misclassification always give an overall false negative?
No.
Sometimes, data that should not reject the null hypothesis can be reported as rejecting it: an overall false positive. These are the points in the lower-right quadrant, below:
End of explanation
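The quadrant language used above can also be quantified directly. Here is a minimal sketch (again assuming the multiple_samples helper defined above, with numpy) that tallies, for the parameters of the last example, how often misclassification flips the conclusion at the $P=0.05$ level in each direction:
import numpy as np

p_sam, p_obs = multiple_samples(n_samp=500,
                                mu_neg=85, mu_pos=89, sd_neg=7, sd_pos=7,
                                n_neg=150, n_pos=20, fnr=0.1, fpr=0.15)
p_sam, p_obs = np.array(p_sam), np.array(p_obs)
alpha = 0.05
flipped_to_negative = np.mean((p_sam < alpha) & (p_obs >= alpha))  # upper-left quadrant
flipped_to_positive = np.mean((p_sam >= alpha) & (p_obs < alpha))  # lower-right quadrant
print("Overall false negatives (real difference hidden): %.2f" % flipped_to_negative)
print("Overall false positives (spurious difference reported): %.2f" % flipped_to_positive)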
interact(draw_sample_comparison,
mu_neg=(60, 99, 1), mu_pos=(60, 99, 1),
sd_neg=(0, 15, 1), sd_pos=(0, 15, 1),
n_neg=(0, 150, 1), n_pos=(0, 150, 1),
fnr=(0, 1, 0.01), fpr=(0, 1, 0.01),
clip_low=fixed(0), clip_high=fixed(100),
num_bins=fixed(50), xmin=fixed(50),
xmax=fixed(100), points=fixed(100),
subsample=True, negcolor=fixed('blue'),
poscolor=fixed('green'))
interact(draw_multiple_samples,
mu_neg=(60, 99, 1), mu_pos=(60, 99, 1),
sd_neg=(0, 15, 1), sd_pos=(0, 15, 1),
n_neg=(0, 150, 1), n_pos=(0, 150, 1),
fnr=(0, 1, 0.01), fpr=(0, 1, 0.01),
clip_low=fixed(0), clip_high=fixed(100))
Explanation: Note that, apart from the overall sample size (170 mouse proteins instead of 2000) the parameters for this run are not very different from those we have been using.
Interactive examples
The cells below allow you to explore variation in all the parameters we have modified above, and their effects on reported $P$ values:
End of explanation |
14,603 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Binary Data with the Beta Bernouli Distribution
Let's consider one of the most basic types of data - binary data
Step1: Binary data can take various forms
Step2: Graphs can be represented as binary matrices
In this email communication network from the enron dataset, $X_{i,j} = 1$ if and only if person${i}$ sent an email to person${j}$
Step3: In a Bayesian context, one often models binary data with a beta Bernoulli distribution
The beta Bernoulli distribution is the posterior of the Bernoulli distribution and its conjugate prior the beta distribution
Recall that the Bernouli distribution is the likelihood of $x$ given some probability $\theta$
$$P(x=1)=\theta$$
$$P(x=0)=1-\theta$$
$$P(x|\theta)=\theta^x(1-\theta)^{1-x}$$
If we wanted to learn the underlying probability $\theta$, we would use the beta distribution, which is the conjugate prior of the Bernouli distribution.
To import our desired distribution we'd call
Step4: Then given the specific model we'd want we'd import
from microscopes.model_name.definition import model_definition
Step5: We would then define the model as follows | Python Code:
import pandas as pd
import seaborn as sns
import math
import cPickle as pickle
import itertools as it
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_mldata
import itertools as it
%matplotlib inline
sns.set_context('talk')
Explanation: Binary Data with the Beta Bernoulli Distribution
Let's consider one of the most basic types of data - binary data
End of explanation
mnist_dataset = fetch_mldata('MNIST original')
_, D = mnist_dataset['data'].shape
Y = mnist_dataset['data'].astype(bool)
W = int(math.sqrt(D))
assert W * W == D
sns.heatmap(np.reshape(Y[0], (W, W)), linewidth=0, xticklabels=False, yticklabels=False, cbar=False)
plt.title('Example MNIST Digit')
Explanation: Binary data can take various forms:
Image data is often represented as binary images. For example, the MNIST dataset contains images of handwritten digits.
Let's convert the MNIST digits into binary images
End of explanation
import enron_utils
with open('results.p') as fp:
communications = pickle.load(fp)
def allnames(o):
for k, v in o:
yield [k] + list(v)
names = set(it.chain.from_iterable(allnames(communications)))
names = sorted(list(names))
namemap = { name : idx for idx, name in enumerate(names) }
N = len(names)
communications_relation = np.zeros((N, N), dtype=np.bool)
for sender, receivers in communications:
sender_id = namemap[sender]
for receiver in receivers:
receiver_id = namemap[receiver]
communications_relation[sender_id, receiver_id] = True
labels = [i if i%20 == 0 else '' for i in xrange(N)]
sns.heatmap(communications_relation, linewidths=0, cbar=False, xticklabels=labels, yticklabels=labels)
plt.xlabel('person number')
plt.ylabel('person number')
plt.title('Email Communication Matrix')
Explanation: Graphs can be represented as binary matrices
In this email communication network from the enron dataset, $X_{i,j} = 1$ if and only if person $i$ sent an email to person $j$
End of explanation
from microscopes.models import bb as beta_bernoulli
Explanation: In a Bayesian context, one often models binary data with a beta Bernoulli distribution
The beta Bernoulli distribution is the posterior of the Bernoulli distribution and its conjugate prior the beta distribution
Recall that the Bernoulli distribution is the likelihood of $x$ given some probability $\theta$
$$P(x=1)=\theta$$
$$P(x=0)=1-\theta$$
$$P(x|\theta)=\theta^x(1-\theta)^{1-x}$$
If we wanted to learn the underlying probability $\theta$, we would use the beta distribution, which is the conjugate prior of the Bernoulli distribution.
To import our desired distribution we'd call
End of explanation
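As a small aside (not using the microscopes API, just scipy, and purely illustrative), the conjugate update described above can be checked numerically:
import numpy as np
from scipy.stats import beta

x = np.array([1, 0, 1, 1, 0, 1, 1, 1])   # some example binary observations
a, b = 1.0, 1.0                           # Beta(1, 1) (uniform) prior on theta
posterior = beta(a + x.sum(), b + len(x) - x.sum())
print("posterior mean of theta: %.3f" % posterior.mean())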
from microscopes.irm.definition import model_definition as irm_definition
from microscopes.mixture.definition import model_definition as mm_definition
Explanation: Then given the specific model we'd want we'd import
from microscopes.model_name.definition import model_definition
End of explanation
defn_mixture = mm_definition(Y.shape[0], [beta_bernoulli]*D)
defn_irm = irm_definition([N], [((0, 0), beta_bernoulli)])
Explanation: We would then define the model as follows
End of explanation |
14,604 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handling image bytes
In this notebook, we start from the checkpoints of an already trained and saved model (as in Chapter 7).
For convenience, we have put this model in a public bucket in gs
Step1: Read from checkpoints.
We start from the checkpoints not the saved model because we want the full model
not just the signatures.
Step2: Export signature that will handle bytes from client
Step3: Send img bytes over the wire
No need for intermediate file on GCS. Note that we are simply using Python's file reading method.
Step4: Deploy bytes-handling model to CAIP
Step5: IMPORTANT
Step6: Note how we pass the base-64 encoded data | Python Code:
import tensorflow as tf
print('TensorFlow version' + tf.version.VERSION)
print('Built with GPU support? ' + ('Yes!' if tf.test.is_built_with_cuda() else 'Noooo!'))
print('There are {} GPUs'.format(len(tf.config.experimental.list_physical_devices("GPU"))))
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
Explanation: Handling image bytes
In this notebook, we start from the checkpoints of an already trained and saved model (as in Chapter 7).
For convenience, we have put this model in a public bucket in gs://practical-ml-vision-book/flowers_5_trained
What we want to do is to directly handle bytes over the wire. That ways clients will not have to put their
images on Google Cloud Storage.
Enable GPU and set up helper functions
This notebook and pretty much every other notebook in this repository
will run faster if you are using a GPU.
On Colab:
- Navigate to Edit→Notebook Settings
- Select GPU from the Hardware Accelerator drop-down
On Cloud AI Platform Notebooks:
- Navigate to https://console.cloud.google.com/ai-platform/notebooks
- Create an instance with a GPU or select your instance and add a GPU
Next, we'll confirm that we can connect to the GPU with tensorflow:
End of explanation
import os
import shutil
import tensorflow as tf
CHECK_POINT_DIR='gs://practical-ml-vision-book/flowers_5_trained/chkpts'
model = tf.keras.models.load_model(CHECK_POINT_DIR)
print(model.summary())
IMG_HEIGHT = 345
IMG_WIDTH = 345
IMG_CHANNELS = 3
CLASS_NAMES = 'daisy dandelion roses sunflowers tulips'.split()
def read_from_jpegfile(filename):
img_bytes = tf.io.read_file(filename)
return img_bytes
def preprocess(img_bytes):
img = tf.image.decode_jpeg(img_bytes, channels=IMG_CHANNELS)
img = tf.image.convert_image_dtype(img, tf.float32)
return tf.image.resize_with_pad(img, IMG_HEIGHT, IMG_WIDTH)
filenames = [
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9818247_e2eac18894.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8733586143_3139db6e9e_n.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg'
]
for filename in filenames:
img_bytes = read_from_jpegfile(filename)
img = preprocess(img_bytes)
img = tf.expand_dims(img, axis=0)
pred = model.predict(img)
print(pred)
Explanation: Read from checkpoints.
We start from the checkpoints not the saved model because we want the full model
not just the signatures.
End of explanation
@tf.function(input_signature=[tf.TensorSpec([None,], dtype=tf.string)])
def predict_bytes(img_bytes):
input_images = tf.map_fn(
preprocess,
img_bytes,
fn_output_signature=tf.float32
)
batch_pred = model(input_images) # same as model.predict()
top_prob = tf.math.reduce_max(batch_pred, axis=[1])
pred_label_index = tf.math.argmax(batch_pred, axis=1)
pred_label = tf.gather(tf.convert_to_tensor(CLASS_NAMES), pred_label_index)
return {
'probability': top_prob,
'flower_type_int': pred_label_index,
'flower_type_str': pred_label
}
@tf.function(input_signature=[tf.TensorSpec([None,], dtype=tf.string)])
def predict_filename(filenames):
img_bytes = tf.map_fn(
tf.io.read_file,
filenames
)
result = predict_bytes(img_bytes)
result['filename'] = filenames
return result
shutil.rmtree('export', ignore_errors=True)
os.mkdir('export')
model.save('export/flowers_model3',
signatures={
'serving_default': predict_filename,
'from_bytes': predict_bytes
})
!saved_model_cli show --tag_set serve --dir export/flowers_model3
!saved_model_cli show --tag_set serve --dir export/flowers_model3 --signature_def serving_default
!saved_model_cli show --tag_set serve --dir export/flowers_model3 --signature_def from_bytes
Explanation: Export signature that will handle bytes from client
End of explanation
!gsutil cp gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg /tmp/test.jpg
with open('/tmp/test.jpg', 'rb') as ifp:
img_bytes = ifp.read()
serving_fn = tf.keras.models.load_model('./export/flowers_model3').signatures['from_bytes']
pred = serving_fn(tf.convert_to_tensor([img_bytes]))
print(pred)
Explanation: Send img bytes over the wire
No need for intermediate file on GCS. Note that we are simply using Python's file reading method.
End of explanation
%%bash
BUCKET="ai-analytics-solutions-mlvisionbook" # CHANGE
gsutil -m cp -r ./export/flowers_model3 gs://${BUCKET}/flowers_model3
%%bash
BUCKET="ai-analytics-solutions-mlvisionbook" # CHANGE
./vertex_deploy.sh \
--endpoint_name=bytes \
--model_name=bytes \
--model_location=gs://${BUCKET}/flowers_model3
Explanation: Deploy bytes-handling model to CAIP
End of explanation
# CHANGE THESE TO REFLECT WHERE YOU DEPLOYED THE MODEL
import os
os.environ['ENDPOINT_ID'] = '7318683646011899904' # CHANGE
os.environ['MODEL_ID'] = '6992243041771716608' # CHANGE
os.environ['PROJECT'] = 'ai-analytics-solutions' # CHANGE
os.environ['BUCKET'] = 'ai-analytics-solutions-mlvisionbook' # CHANGE
os.environ['REGION'] = 'us-central1' # CHANGE
%%bash
gsutil cp gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg /tmp/test1.jpg
gsutil cp gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg /tmp/test2.jpg
Explanation: IMPORTANT: CHANGE THIS CELL
Note the endpoint ID and deployed model ID above. Set it in the cell below.
End of explanation
# Invoke from Python.
import base64
import json
from oauth2client.client import GoogleCredentials
import requests
PROJECT = "ai-analytics-solutions" # CHANGE
REGION = "us-central1" # make sure you have GPU/TPU quota in this region
ENDPOINT_ID = "7318683646011899904"
def b64encode(filename):
with open(filename, 'rb') as ifp:
img_bytes = ifp.read()
return base64.b64encode(img_bytes)
token = GoogleCredentials.get_application_default().get_access_token().access_token
api = "https://{}-aiplatform.googleapis.com/v1/projects/{}/locations/{}/endpoints/{}:predict".format(
REGION, PROJECT, REGION, ENDPOINT_ID)
headers = {"Authorization": "Bearer " + token }
data = {
"signature_name": "from_bytes", # currently bugged
"instances": [
{
"img_bytes": {"b64": b64encode('/tmp/test1.jpg')}
},
{
"img_bytes": {"b64": b64encode('/tmp/test2.jpg')}
},
]
}
response = requests.post(api, json=data, headers=headers)
print(response.content)
Explanation: Note how we pass the base-64 encoded data
End of explanation |
14,605 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compare Spectral Clustering against kMeans using Similarity
As there is no ground truth, the criteria used to evaluate clusters produced using Spectral and kmeans is the silhouette coefficient. From the results obtained, it can be appreaciated that Spectral Clustering requires 6 clusters to have the silhouette score similar to the one obtained with 3 clusters with kmeans.
Step1: the optimal number of kmeans will be determined using the elbow method. Once the kmeans number of clusters is set, the number of clusters using spectral clustering will be used so that it equals the silhouette score obtained in the first case.
K-Means
Step2: The elbow method shows that the optimal number of clusters to be used in the kmeans method is 3, considering the euclidean distance between cluster centers. From an analytical perspective, the inertia functions shows the same results
Step3: Spectral Clustering
Step4: Mean Shift | Python Code:
#Compare from a silhouette_score perspective kmeans against Spectral Clustering
range_n_clusters = np.arange(10)+2
for n_clusters in range_n_clusters:
# The silhouette_score gives the average value for all the samples.
# This gives a perspective into the density and separation of the formed
# clusters
# Initialize the clusterer with n_clusters value and a random generator
# seed of 10 for reproducibility.
spec_clust = SpectralClustering(n_clusters=n_clusters)
cluster_labels1 = spec_clust.fit_predict(X_tr_std)
silhouette_avg1 = silhouette_score(X_tr_std, cluster_labels1)
kmeans = KMeans(n_clusters=n_clusters, init='k-means++', n_init=10).fit(X_tr_std)
cluster_labels2 = kmeans.fit_predict(X_tr_std)
silhouette_avg2 = silhouette_score(X_tr_std, cluster_labels2)
print("For n_clusters =", n_clusters,
"av. sil_score for Spec. clust is :", silhouette_avg1,
"av. sil_score for kmeans is :",silhouette_avg2 )
Explanation: Compare Spectral Clustering against kMeans using Similarity
As there is no ground truth, the criterion used to evaluate the clusters produced using Spectral Clustering and kmeans is the silhouette coefficient. From the results obtained, it can be appreciated that Spectral Clustering requires 6 clusters to reach a silhouette score similar to the one obtained with 3 clusters with kmeans.
End of explanation
#Use the elbow method to determine the number of clusters
# k-means determine k
distortions = []
K = range(1,10)
for k in K:
kmeanModel = KMeans(n_clusters=k).fit(X_tr)
kmeanModel.fit(X_tr)
distortions.append(sum(np.min(cdist(X_tr, kmeanModel.cluster_centers_, 'euclidean'), axis=1)) / X_tr.shape[0])
# Plot the elbow
plt.plot(K, distortions, 'bx-')
plt.xlabel('k')
plt.ylabel('Distortion')
plt.title('The Elbow Method showing the optimal k')
plt.show()
Explanation: the optimal number of kmeans will be determined using the elbow method. Once the kmeans number of clusters is set, the number of clusters using spectral clustering will be used so that it equals the silhouette score obtained in the first case.
K-Means
End of explanation
#Evaluate the best number of clusters
for i in range(1,10):
km = KMeans(n_clusters=i, init='k-means++', n_init=10).fit(X_tr_std)
print (i, km.inertia_)
#Cluster the data
kmeans = KMeans(n_clusters=3, init='k-means++', n_init=10).fit(X_tr_std)
labels = kmeans.labels_
#Glue back to original data
X_tr['clusters'] = labels
X_tr['Gender'] = boston_marathon_scores.gender
X_tr['Overall'] = boston_marathon_scores.overall
#Add the column into our list
clmns.extend(['clusters','Gender','Overall'])
#Lets analyze the clusters
pd.DataFrame(X_tr.groupby(['clusters']).mean())
clusters_summary = X_tr.groupby(['clusters']).describe()
clusters_summary_transposed = clusters_summary.transpose()
clusters_summary_transposed
# Reduce it to two components.
X_pca = PCA(2).fit_transform(X_tr_std)
# Calculate predicted values.
y_pred = KMeans(n_clusters=3, random_state=42).fit_predict(X_pca)
# Plot the solution.
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y_pred)
plt.show()
Graph_kmeans_official = pd.pivot_table(X_tr, 'official', ['clusters', 'gender'])
Graph_kmeans_pace = pd.pivot_table(X_tr, 'pace', ['clusters', 'gender'])
Graph_kmeans_age = pd.pivot_table(X_tr, 'age', ['clusters', 'gender'])
print(Graph_kmeans_official, Graph_kmeans_pace, Graph_kmeans_age)
Explanation: The elbow method shows that the optimal number of clusters to be used in the kmeans method is 3, considering the euclidean distance between cluster centers. From an analytical perspective, the inertia functions shows the same results: 3 clusters were the difference between the results obtained by the inertia function are smaller when shifting from 3 to 4 clusters.
End of explanation
# We know we're looking for 6 clusters from the comparison with the kmeans.
n_clusters=6
# Declare and fit the model.
sc = SpectralClustering(n_clusters=n_clusters).fit(X_tr_std)
# Extract cluster assignments for each data point.
labels = sc.labels_
#Glue back to original data
X_tr['clusters'] = labels
X_tr['Gender'] = boston_marathon_scores.gender
X_tr['Overall'] = boston_marathon_scores.overall
#Add the column into our list
clmns.extend(['clusters','Gender','Overall'])
#Lets analyze the clusters
pd.DataFrame(X_tr.groupby(['clusters']).mean())
clusters_summary = X_tr.groupby(['clusters']).describe()
clusters_summary_transposed = clusters_summary.transpose()
clusters_summary_transposed
# Reduce it to two components.
X_pca = PCA(2).fit_transform(X_tr_std)
# Calculate predicted values.
y_pred = SpectralClustering(n_clusters=3).fit_predict(X_pca)
# Plot the solution.
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y_pred)
plt.show()
Explanation: Spectral Clustering
End of explanation
# Here we set the bandwidth. This function automatically derives a bandwidth
# number based on an inspection of the distances among points in the data.
bandwidth = estimate_bandwidth(X_tr_std, quantile=0.9)
# Declare and fit the model.
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(X_tr_std)
# Extract cluster assignments for each data point.
labels = ms.labels_
# Coordinates of the cluster centers.
cluster_centers = ms.cluster_centers_
# Count our clusters.
n_clusters_ = len(np.unique(labels))
#Glue back to original data
X_tr['clusters'] = labels
X_tr['Gender'] = boston_marathon_scores.gender
X_tr['Overall'] = boston_marathon_scores.overall
#Add the column into our list
clmns.extend(['clusters','Gender','Overall'])
#Lets analyze the clusters
print("Number of estimated clusters: {}".format(n_clusters_))
pd.DataFrame(X_tr.groupby(['clusters']).mean())
clusters_summary = X_tr.groupby(['clusters']).describe()
clusters_summary_transposed = clusters_summary.transpose()
clusters_summary_transposed
# Reduce it to two components.
X_pca = PCA(2).fit_transform(X_tr_std)
# Calculate predicted values.
bandwidth = estimate_bandwidth(X_tr_std, quantile=0.9)
y_pred = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(X_pca)
# Plot the solution.
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y_pred)
plt.show()
# Declare the model and fit it in one statement.
# Note that you can provide arguments to the model, but we didn't.
af = AffinityPropagation().fit(X_tr_std)
print('Done')
# Pull the number of clusters and cluster assignments for each data point.
cluster_centers_indices = af.cluster_centers_indices_
n_clusters_ = len(cluster_centers_indices)
labels = af.labels_
#Glue back to original data
X_tr['clusters'] = labels
X_tr['Gender'] = boston_marathon_scores.gender
X_tr['Overall'] = boston_marathon_scores.overall
#Add the column into our list
clmns.extend(['clusters','Gender','Overall'])
#Lets analyze the clusters
print("Number of estimated clusters: {}".format(n_clusters_))
pd.DataFrame(X_tr.groupby(['clusters']).mean())
clusters_summary = X_tr.groupby(['clusters']).describe()
clusters_summary_transposed = clusters_summary.transpose()
clusters_summary_transposed
Explanation: Mean Shift
End of explanation |
14,606 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word2Vec
사실 Word2vec는 noise contrastive estimator (이하 NCE) loss를 사용한다.
아직 pytorch에서는 이 부분이 구현되어 있지 않고, 간단한 vocabulary이라서 그냥 softmax를 사용해서 이 부분을 구현하였다.
embedding이 2개이면, 단어에 따른 간단한 Classifiaction 문제로 볼 수 있기 때문에, 큰 무리는 없을 것이다.
※ 단, vocabulary수가 많아지면 학습속도를 높이기 위해서 NCE를 사용해야 할 것이다.
Step1: 1. Dataset 준비
Step2: Dataset Loader 설정
Step3: 2. 사전 설정
* model
* loss
* opimizer
Step4: 3. Trainning loop
* (입력 생성)
* model 생성
* loss 생성
* zeroGrad
* backpropagation
* optimizer step (update model parameter)
Step5: 4. Predict & Evaluate
Step6: 5. plot embedding space | Python Code:
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data_utils
Explanation: Word2Vec
사실 Word2vec는 noise contrastive estimator (이하 NCE) loss를 사용한다.
아직 pytorch에서는 이 부분이 구현되어 있지 않고, 간단한 vocabulary이라서 그냥 softmax를 사용해서 이 부분을 구현하였다.
embedding이 2개이면, 단어에 따른 간단한 Classifiaction 문제로 볼 수 있기 때문에, 큰 무리는 없을 것이다.
※ 단, vocabulary수가 많아지면 학습속도를 높이기 위해서 NCE를 사용해야 할 것이다.
End of explanation
import numpy as np
word_pair = [['고양이', '흰'],
['고양이', '동물'],
['국화', '흰'],
['국화', '식물'],
['선인장', '초록'],
['선인장', '식물'],
['강아지', '검은'],
['강아지', '동물'],
['타조', '회색'],
['타조', '동물'],
['코끼리', '회색'],
['코끼리', '동물'],
['장미', '빨간'],
['장미', '식물'],
['자동차', '빨간'],
['그릇', '빨간'],
['민들레', '식물'],
['민들레', '흰']]
word_list = set(np.array(word_pair).flatten())
word_dict = {w: i for i, w in enumerate(word_list)}
skip_grams = [[word_dict[word[0]], word_dict[word[1]]] for word in word_pair]
Explanation: 1. Dataset 준비
End of explanation
label = torch.LongTensor(skip_grams)[:, 0].contiguous()
context = torch.LongTensor(skip_grams)[:, 1].contiguous()
skip_grams_dataset = data_utils.TensorDataset(label, context)
train_loader = torch.utils.data.DataLoader(skip_grams_dataset, batch_size=8, shuffle=True)
test_loader = torch.utils.data.DataLoader(skip_grams_dataset, batch_size=1, shuffle=False)
Explanation: Dataset Loader 설정
End of explanation
class _model(nn.Module) :
def __init__(self):
super(_model, self).__init__()
self.embedding = nn.Embedding(len(word_list), 2)
self.linear = nn.Linear(2, len(word_list), bias=True)
def forward(self, x):
x = self.embedding(x)
x = self.linear(x)
return F.log_softmax(x)
model = _model()
loss_fn = nn.NLLLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
Explanation: 2. 사전 설정
* model
* loss
* opimizer
End of explanation
model.train()
for epoch in range(100):
for data, target in train_loader:
data, target = Variable(data), Variable(target) #(입력 생성)
output = model(data) # model 생성
loss = F.nll_loss(output, target) #loss 생성
optimizer.zero_grad() # zeroGrad
loss.backward() # calc backward gradients
optimizer.step() # update parameters
Explanation: 3. Trainning loop
* (입력 생성)
* model 생성
* loss 생성
* zeroGrad
* backpropagation
* optimizer step (update model parameter)
End of explanation
model.eval()
invDic = { i : w for w, i in word_dict.items()}
print('Input : true : pred')
for x, y in test_loader :
x, y = Variable(x.squeeze()), y.squeeze()
y_pred = model(x).max(1)[1].data[0][0]
print('{:s} : {:s} : {:s}'.format(invDic[x.data[0]], invDic[y[0]], invDic[y_pred]))
Explanation: 4. Predict & Evaluate
End of explanation
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
matplotlib.rc('font', family="NanumGothic")
for i in label :
x = Variable(torch.LongTensor([i]))
fx, fy = model.embedding(x).squeeze().data
plt.scatter(fx, fy)
plt.annotate(invDic[i], xy=(fx, fy), xytext=(5, 2),
textcoords='offset points', ha='right', va='bottom')
Explanation: 5. plot embedding space
End of explanation |
14,607 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Most Basic-est Model of Them All
They all survive
Step1: The optimistic model
Step2: Open a new model (.csv) file to write to
Step3: Write the columns header row
Step4: Take a look at the resulting predictions
Step5: Kaggle Submission Results
Your submission scored 0.37321
Only 37% correct. Looks like we can do better!
The Overly Pessimistic Model
Let's create a model which predicts that all Titanic passengers die.
First open up the test data
Step6: Skip the first row because it's the header row
Step7: Let's open up an output prediction model/csv-file
Step8: Write in the prediction model that they all died
Step9: Close the test data file and the predictions file
Step10: Take a look at the output predictions | Python Code:
import csv as csv
import numpy as np
Explanation: The Most Basic-est Model of Them All
They all survive
End of explanation
test_file = open('./data/test.csv', 'rb') # Open the test data
test_file_object = csv.reader(test_file)
header = test_file_object.next()
header
Explanation: The optimistic model: They All Survive
End of explanation
predictions_file = open('./models/jfaPythonModel-allSurvive.csv', 'wb')
predictions_file_object = csv.writer(predictions_file)
Explanation: Open a new model (.csv) file to write to
End of explanation
predictions_file_object.writerow(['PassengerID', 'Survived'])
for row in test_file_object:
predictions_file_object.writerow([row[0], "1"])
test_file.close()
predictions_file.close()
Explanation: Write the columns header row
End of explanation
output_predictions_file = open('./models/jfaPythonModel-allSurvive.csv', 'rb')
output_predictions_file_object = csv.reader(output_predictions_file)
data = []
for row in output_predictions_file_object:
data.append(row[0:])
data = np.array(data)
data.shape
data
Explanation: Take a look at the resulting predictions
End of explanation
test_data = open('./data/test.csv')
test_data_object = csv.reader(test_data)
Explanation: Kaggle Submission Results
Your submission scored 0.37321
Only 37% correct. Looks like we can do better!
The Overly Pessimistic Model
Let's create a model which predicts that all Titanic passengers die.
First open up the test data
End of explanation
header = test_data_object.next()
header
Explanation: Skip the first row because it's the header row
End of explanation
predictions_file = open('./models/jfaPythonModel-allDie.csv', 'wb')
predictions_file_object = csv.writer(predictions_file)
predictions_file_object.writerow(['PassengerID', 'Survived'])
Explanation: Let's open up an output prediction model/csv-file
End of explanation
for passenger in test_data_object:
predictions_file_object.writerow([passenger[0], "0"])
Explanation: Write in the prediction model that they all died
End of explanation
test_data.close()
predictions_file.close()
Explanation: Close the test data file and the predictions file
End of explanation
output_predictions_file = open('./models/jfaPythonModel-allDie.csv', 'rb')
output_predictions_file_object = csv.reader(output_predictions_file)
data = []
for passenger in output_predictions_file_object:
data.append(passenger[0:])
data = np.array(data)
data.shape
output_predictions_file.close()
Explanation: Take a look at the output predictions
End of explanation |
14,608 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
modin.spreadsheet
modin.spreadsheet is a Jupyter notebook widget that allows users to interact with Modin DataFrames in a spreadsheet-like fashion while taking advantage of the underlying capabilities of Modin. The widget makes it quick and easy to explore, sort, filter, edit data and export reproducible code.
This tutorial will showcase how to use modin.spreadsheet. Before starting, please install the required packages using pip install -r requirements.txt in the current directory. Then just run the cells; no editing required!
Step1: Create a Modin DataFrame
The following cells creates a DataFrame using a NYC taxi dataset.
Step2: Generate a spreadsheet widget with the DataFrame
mss.from_dataframe takes in a DataFrame, optional configuration options, and returns a SpreadsheetWidget, which contains all the logic for displaying the spreadsheet view of the DataFrame. The object returned will not be rendered unless displayed.
Step3: Displaying the Spreadsheet
The widget is displayed when the widget is returned by an input cell or passed to the display function e.g. display(spreadsheet). When displayed, the SpreadsheetWidget will generate a transformation history cell that contains a record of the transformations applied to the DataFrame unless the cell already exists or the feature is disabled.
Basic Usage
from_dataframe creates a copy of the input DataFrame, so changes do not alter the original DataFrame.
Filter - Each column can be filtered according to its datatype using the filter button to the right of the column header. Any number of columns can be filtered simultaneously.\
Sort - Each column can be sorted by clicking on the column header. Assumptions on the order of the data should only be made according to the latest sort i.e. the 2nd last sort may not be in order even if grouped by the duplicates in the last sorted column.\
Cell Edit - Double click on a cell to edit its value.\
Add Row(toolbar) - Click on the Add Row button in the toolbar to duplicate the last row in the DataFrame.\
Remove Row(toolbar) - Select row(s) on the spreadsheet and click the Remove Row button in the toolbar to remove them.\
Reset Filters(toolbar) - Click on the Reset Filters button in the toolbar to remove all filters on the data.\
Reset Sort(toolbar) - Click on the Reset Sort button in the toolbar to remove any sorting on the data.
Transformation History and Reproducible Code
The widget records the history of transformations, such as filtering, that occur on the spreadsheet. These transformations are updated in the spreadsheet transformation history cell as they happen and can be easily copied for reproducibility. The history can be cleared using the Clear History button in the toolbar.
Try making some changes to the spreadsheet!
Step4: Exporting Changes
to_dataframe takes in a SpreadsheetWidget and returns a copy of the DataFrame reflecting the current state of the UI on the widget. Specifically, any filters, edits, or sorts will be applied on the returned Dataframe.
Export a DataFrame after making some changes on the spreadsheet UI
Step5: SpreadsheetWidget API
The API on SpreadsheetWidget allows users to replicate some of the functionality on the GUI, but also provides other functionality such as applying the transformation history on another DataFrame or getting the DataFrame that matches the spreadsheet state like to_dataframe.
Step6: Retrieving and Applying Transformation History
The transformation history can be retrieved as a list of code snippets using the get_history API. The apply_history API will apply the transformations on the input DataFrame and return the resultant DataFrame.
Step7: Additional Example
Here is another example of how to use from_dataframe with configuration options. | Python Code:
# Please install the required packages using `pip install -r requirements.txt` in the current directory
# For all ways to install Modin see official documentation at:
# https://modin.readthedocs.io/en/latest/installation.html
import modin.pandas as pd
import modin.spreadsheet as mss
Explanation: modin.spreadsheet
modin.spreadsheet is a Jupyter notebook widget that allows users to interact with Modin DataFrames in a spreadsheet-like fashion while taking advantage of the underlying capabilities of Modin. The widget makes it quick and easy to explore, sort, filter, edit data and export reproducible code.
This tutorial will showcase how to use modin.spreadsheet. Before starting, please install the required packages using pip install -r requirements.txt in the current directory. Then just run the cells; no editing required!
End of explanation
columns_names = [
"trip_id", "vendor_id", "pickup_datetime", "dropoff_datetime", "store_and_fwd_flag",
"rate_code_id", "pickup_longitude", "pickup_latitude", "dropoff_longitude", "dropoff_latitude",
"passenger_count", "trip_distance", "fare_amount", "extra", "mta_tax", "tip_amount",
"tolls_amount", "ehail_fee", "improvement_surcharge", "total_amount", "payment_type",
"trip_type", "pickup", "dropoff", "cab_type", "precipitation", "snow_depth", "snowfall",
"max_temperature", "min_temperature", "average_wind_speed", "pickup_nyct2010_gid",
"pickup_ctlabel", "pickup_borocode", "pickup_boroname", "pickup_ct2010",
"pickup_boroct2010", "pickup_cdeligibil", "pickup_ntacode", "pickup_ntaname", "pickup_puma",
"dropoff_nyct2010_gid", "dropoff_ctlabel", "dropoff_borocode", "dropoff_boroname",
"dropoff_ct2010", "dropoff_boroct2010", "dropoff_cdeligibil", "dropoff_ntacode",
"dropoff_ntaname", "dropoff_puma",
]
parse_dates=["pickup_datetime", "dropoff_datetime"]
df = pd.read_csv('s3://modin-datasets/trips_data.csv', names=columns_names,
header=None, parse_dates=parse_dates)
df
Explanation: Create a Modin DataFrame
The following cells creates a DataFrame using a NYC taxi dataset.
End of explanation
spreadsheet = mss.from_dataframe(df)
Explanation: Generate a spreadsheet widget with the DataFrame
mss.from_dataframe takes in a DataFrame, optional configuration options, and returns a SpreadsheetWidget, which contains all the logic for displaying the spreadsheet view of the DataFrame. The object returned will not be rendered unless displayed.
End of explanation
spreadsheet
Explanation: Displaying the Spreadsheet
The widget is displayed when the widget is returned by an input cell or passed to the display function e.g. display(spreadsheet). When displayed, the SpreadsheetWidget will generate a transformation history cell that contains a record of the transformations applied to the DataFrame unless the cell already exists or the feature is disabled.
Basic Usage
from_dataframe creates a copy of the input DataFrame, so changes do not alter the original DataFrame.
Filter - Each column can be filtered according to its datatype using the filter button to the right of the column header. Any number of columns can be filtered simultaneously.\
Sort - Each column can be sorted by clicking on the column header. Assumptions on the order of the data should only be made according to the latest sort i.e. the 2nd last sort may not be in order even if grouped by the duplicates in the last sorted column.\
Cell Edit - Double click on a cell to edit its value.\
Add Row(toolbar) - Click on the Add Row button in the toolbar to duplicate the last row in the DataFrame.\
Remove Row(toolbar) - Select row(s) on the spreadsheet and click the Remove Row button in the toolbar to remove them.\
Reset Filters(toolbar) - Click on the Reset Filters button in the toolbar to remove all filters on the data.\
Reset Sort(toolbar) - Click on the Reset Sort button in the toolbar to remove any sorting on the data.
Transformation History and Reproducible Code
The widget records the history of transformations, such as filtering, that occur on the spreadsheet. These transformations are updated in the spreadsheet transformation history cell as they happen and can be easily copied for reproducibility. The history can be cleared using the Clear History button in the toolbar.
Try making some changes to the spreadsheet!
End of explanation
changed_df = mss.to_dataframe(spreadsheet)
changed_df
Explanation: Exporting Changes
to_dataframe takes in a SpreadsheetWidget and returns a copy of the DataFrame reflecting the current state of the UI on the widget. Specifically, any filters, edits, or sorts will be applied on the returned Dataframe.
Export a DataFrame after making some changes on the spreadsheet UI
End of explanation
# Duplicates the `Reset Filters` button
spreadsheet.reset_filters()
# Duplicates the `Reset Sort` button
spreadsheet.reset_sort()
# Duplicates the `Clear History` button
spreadsheet.clear_history()
# Gets the modified DataFrame that matches the changes to the spreadsheet
# This is the same functionality as `mss.to_dataframe`
spreadsheet.get_changed_df()
Explanation: SpreadsheetWidget API
The API on SpreadsheetWidget allows users to replicate some of the functionality on the GUI, but also provides other functionality such as applying the transformation history on another DataFrame or getting the DataFrame that matches the spreadsheet state like to_dataframe.
End of explanation
spreadsheet.get_history()
another_df = df.copy()
spreadsheet.apply_history(another_df)
Explanation: Retrieving and Applying Transformation History
The transformation history can be retrieved as a list of code snippets using the get_history API. The apply_history API will apply the transformations on the input DataFrame and return the resultant DataFrame.
End of explanation
mss.from_dataframe(df, show_toolbar=False, grid_options={'forceFitColumns': False, 'editable': False, 'highlightSelectedCell': True})
Explanation: Additional Example
Here is another example of how to use from_dataframe with configuration options.
End of explanation |
14,609 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Web Scraping & Data Analysis with Selenium and Python
Vinay Babu
Github
Step1: Initial Setup and Launch the browser to open the URL
Step2: Getting Data
Function to extract the data from Web using Selenium
Step3: Let's extract all the required data like Ratings,Votes,Genre, Year of Release for the Best Movies
Step4: How does the data look now?
Is this data in the correct format to perform data manipulation?
The individual movie-related data is stored in a Python list, so it's hard to correlate the data attributes with the respective movies
Step5: Store Data in a Python Dictionary
The data is stored in a python dictionary which is more structured way to store the data here and all the movie attributes are now linked with the respective movie
Step6: Let's see now how the data in dictionary looks like?
Step7: Oh! Something is wrong with the data, It's not in right shape to perform analysis on this data set
Let's clean the data
Replace the comma(,) in Vote Value and change the data type to int
Change the Data type for Rating and RunTime
Step8: Now let's look at the data and see how it looks like
Step9: Data in Pandas Dataframe
Data is consumed in a Pandas Dataframe, Which is more convenient way to perform data analysis,manipulation or aggregation
Step10: Let's use some of the Pandas functions now and start the Analysis
Step11: Movies with Highest Ratings
The top five movies with Maximum Rating since 1955
Step12: Movies with Maximum Run time
Top 10 movies with maximum Run time
Step13: Best Movie Run time
Let's plot a graph to see the movie run time trend from 1955 thru 2015
Step14: Mean of the Movie Run Time
Step15: Best Movie Ratings
Perform some analysis on the ratings of all the Best won movies
No. of Movies Greater than IMDB 7 ratings
Step16: Movie Ratings Visualization using Bar Graph
Step17: Percentage distribution of the Ratings in a Pie-Chart
Step18: Best Picture by Genre
Let's analyze the Genre for the best won movies | Python Code:
%matplotlib inline
from selenium import webdriver
import os,time,json
import pandas as pd
from collections import defaultdict,Counter
import matplotlib.pyplot as plt
Explanation: Web Scraping & Data Analysis with Selenium and Python
Vinay Babu
Github: https://github.com/min2bro/WebScrapingwithSelenium
Twitter: @min2bro
Data Science ToolBox
<img src="./img/DataScienceToolbox.png">
IPython Notebook
Write, Edit, Replay python scripts
Interactive Data Visualization and report Presentation
Notebook can be saved and shared
Run Selenium Python Scripts
Pandas
Python Data Analysis Library
Matplotlib
plotting library for the Python
<img src="./img/Steps2follow.png">
Analysis of the Filmfare Awards for Best Picture from 1955-2015
<img src="./img/wordcloud2.jpg">
<img src="./img/wordcloud3.png">
Web Scraping: Extracting Data from the Web
Some Import
End of explanation
url = "http://www.imdb.com/list/ls061683439/"
with open('./img/filmfare.json',encoding="utf-8") as f:
datatbl = json.load(f)
driver = webdriver.Chrome(datatbl['data']['chromedriver'])
driver.get(url)
Explanation: Initial Setup and Launch the browser to open the URL
End of explanation
def ExtractText(Xpath):
textlist=[]
if(Xpath=="Movies_Runtime_Xpath"):
[textlist.append(item.text[-10:-7]) for item in driver.find_elements_by_xpath(datatbl['data'][Xpath])]
else:
[textlist.append(item.text) for item in driver.find_elements_by_xpath(datatbl['data'][Xpath])]
return textlist
Explanation: Getting Data
Function to extract the data from Web using Selenium
End of explanation
#Extracting Data from Web
Movies_Votes,Movies_Name,Movies_Ratings,Movies_RunTime=[[] for i in range(4)]
datarepo = [[]]*4
Xpath_list = ['Movies_Name_Xpath','Movies_Rate_Xpath','Movies_Runtime_Xpath','Movies_Votes_Xpath']
for i in range(4):
if(i==3):
driver.find_element_by_xpath(datatbl['data']['listview']).click()
datarepo[i] = ExtractText(Xpath_list[i])
driver.quit()
Explanation: Let's extract all the required data like Ratings,Votes,Genre, Year of Release for the Best Movies
End of explanation
# Movie Name List & Ratings
print(datarepo[0][:5])
print(datarepo[3][:5])
Explanation: How does the data look now?
Is this data in the correct format to perform data manipulation?
The individual movie-related data is stored in a Python list, so it's hard to correlate the data attributes with the respective movies
End of explanation
# Result in a Python Dictionary
Years=range(2015,1954,-1)
result = defaultdict(dict)
for i in range(0,len(datarepo[0])):
result[i]['Movie Name']= datarepo[0][i]
result[i]['Year']= Years[i]
result[i]['Rating']= datarepo[1][i]
result[i]['Votes']= datarepo[3][i]
result[i]['RunTime']= datarepo[2][i]
Explanation: Store Data in a Python Dictionary
The data is stored in a python dictionary which is more structured way to store the data here and all the movie attributes are now linked with the respective movie
End of explanation
result
print(json.dumps(result[58], indent=2))
Explanation: Let's see now how the data in dictionary looks like?
End of explanation
for key,values in result.items():
values['Votes'] = int(values['Votes'].replace(",",""))
values['Rating']= float(values['Rating'])
try:
values['RunTime'] = int(values['RunTime'])
except ValueError:
values['RunTime'] = 154
Explanation: Oh! Something is wrong with the data, It's not in right shape to perform analysis on this data set
Let's clean the data
Replace the comma(,) in Vote Value and change the data type to int
Change the Data type for Rating and RunTime
End of explanation
result[58]
Explanation: Now let's look at the data and see how it looks like
End of explanation
# create dataframe
df = pd.DataFrame.from_dict(result,orient='index')
df = df[['Year', 'Movie Name', 'Rating', 'Votes','RunTime']]
df
Explanation: Data in Pandas Dataframe
Data is consumed in a Pandas Dataframe, Which is more convenient way to perform data analysis,manipulation or aggregation
End of explanation
df.info()
Explanation: Let's use some of the Pandas functions now and start the Analysis
End of explanation
#Highest Rating Movies
df.sort_values('Rating',ascending=[False]).head(5)
Explanation: Movies with Highest Ratings
The top five movies with Maximum Rating since 1955
End of explanation
#Movies with maximum Run Time
df.sort_values(['RunTime'],ascending=[False]).head(10)
Explanation: Movies with Maximum Run time
Top 10 movies with maximum Run time
End of explanation
df.plot(x=df.Year,y=['RunTime']);
Explanation: Best Movie Run time
Let's plot a graph to see the movie run time trend from 1955 thru 2015
End of explanation
df['RunTime'].mean()
Explanation: Mean of the Movie Run Time
End of explanation
df[(df['Rating']>=7)]['Rating'].count()
Explanation: Best Movie Ratings
Perform some analysis on the ratings of all the Best won movies
No. of Movies Greater than IMDB 7 ratings
End of explanation
Rating_Histdic = defaultdict(dict)
Rating_Histdic['Btwn 6&7'] = df[(df['Rating']>=6)&(df['Rating']<7)]['Rating'].count()
Rating_Histdic['GTEQ 8'] = df[(df['Rating']>=8)]['Rating'].count()
Rating_Histdic['Btwn 7 & 8'] = df[(df['Rating']>=7)&(df['Rating']<8)]['Rating'].count()
plt.bar(range(len(Rating_Histdic)), Rating_Histdic.values(), align='center',color='brown',width=0.4)
plt.xticks(range(len(Rating_Histdic)), Rating_Histdic.keys(), rotation=25);
Explanation: Movie Ratings Visualization using Bar Graph
End of explanation
Rating_Hist = []
import numpy as np
Rating_Hist.append(Rating_Histdic['Btwn 6&7'])
Rating_Hist.append(Rating_Histdic['GTEQ 8'])
Rating_Hist.append(Rating_Histdic['Btwn 7 & 8'])
labels = ['Btwn 6&7', 'GTEQ 8', 'Btwn 7 & 8']
colors = ['red', 'orange', 'green']
plt.pie(Rating_Hist,labels=labels, colors=colors,autopct='%1.1f%%', shadow=True, startangle=90);
Explanation: Percentage distribution of the Ratings in a Pie-Chart
End of explanation
Category=Counter(datatbl['data']['Genre'])
df1 = pd.DataFrame.from_dict(Category,orient='index')
df1 = df1.sort_values([0],ascending=[False]).head(5)
df1.plot(kind='barh',color=['g','c','m']);
Explanation: Best Picture by Genre
Let's analyze the genres of the Best Picture-winning movies
End of explanation |
14,610 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!-- HTML file automatically generated from DocOnce source (https://github.com/doconce/doconce/) doconce format html week34.do.txt --no_mako -->
Step1: Here follows a simple example where we set up an array of ten elements, all determined by random numbers drawn according to the normal distribution,
Step2: We defined a vector $x$ with $n=10$ elements with its values given by the Normal distribution $N(0,1)$.
Another alternative is to declare a vector as follows
Step3: Here we have defined a vector with three elements, with $x_0=1$, $x_1=2$ and $x_2=3$. Note that both Python and C++
start numbering array elements from $0$ and on. This means that a vector with $n$ elements has a sequence of entities $x_0, x_1, x_2, \dots, x_{n-1}$. We could also (and this is recommended) let Numpy compute the logarithms of a specific array as
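A minimal sketch of the declarations described above (variable names are illustrative assumptions; numpy is assumed imported as np):
import numpy as np
x = np.random.normal(size=10)        # ten elements drawn from the normal distribution N(0,1)
v = np.array([1, 2, 3])              # a vector with elements 1, 2 and 3
w = np.log(np.array([4, 7, 8]))      # let Numpy compute the logarithms element by element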
Step4: In the last example we used Numpy's unary function $np.log$. This function is
highly tuned to compute array elements since the code is vectorized
and does not require explicit looping. We normally recommend that you use the
Numpy intrinsic functions instead of the corresponding log function
from Python's math module; the looping over the array elements is then handled internally by the
np.log function. The alternative, and slower, way to compute the
logarithms of a vector would be to write
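A hedged sketch of this slower alternative (the integer array is an assumption, chosen to match the output discussed in the next step):
import numpy as np
from math import log
x = np.array([4, 7, 8])
for i in range(0, len(x)):
    x[i] = log(x[i])        # float results are truncated when stored back into the integer array
print(x)                    # gives [1 1 2]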
Step5: We note that our code is much longer already and we need to import the log function from the math module.
The attentive reader will also notice that the output is $[1, 1, 2]$. Python automagically interprets our numbers as integers (like the automatic keyword in C++), so the logarithms are truncated when stored back into the integer array. To change this we could define our array elements to be double precision numbers as
Step6: or simply write them as double precision numbers (Python uses 64 bits as default for floating point type variables), that is
Step7: To check the number of bytes (remember that one byte contains eight bits and that a double precision number occupies eight bytes), you can simply use the itemsize functionality (the array $x$ is actually an object which inherits the functionalities defined in Numpy) as
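A small sketch of the itemsize check (the array values are assumptions):
import numpy as np
x = np.array([4.0, 7.0, 8.0])
print(x.itemsize)           # 8 bytes per element for 64-bit (double precision) floats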
Step8: Matrices in Python
Having defined vectors, we are now ready to try out matrices. We can
define a $3 \times 3 $ real matrix $\boldsymbol{A}$ as (recall that we use
lowercase letters for vectors and uppercase letters for matrices)
Step9: If we use the shape function we would get $(3, 3)$ as output, verifying that our matrix is a $3\times 3$ matrix. We can slice the matrix and print for example the first column (Python organizes matrix elements in row-major order, see below) as
Step10: We can continue this way by printing out other columns or rows. The example here prints out the second column
Step11: Numpy contains many other functionalities that allow us to slice and subdivide arrays. We strongly recommend that you look up the Numpy website for more details. Useful functions when defining a matrix are the np.zeros function, which declares a matrix of a given dimension and sets all elements to zero
Step12: or initializing all elements to one
Step13: or as uniformly distributed random numbers (see the material on random number generators in the statistics part)
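A hedged sketch of these initializations (the dimension n is an assumption):
import numpy as np
n = 10
A = np.zeros((n, n))        # all elements set to zero
B = np.ones((n, n))         # all elements set to one
C = np.random.rand(n, n)    # uniformly distributed random numbers in [0, 1)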
Step14: As we will see throughout these lectures, there are several extremely useful functionalities in Numpy.
As an example, consider the discussion of the covariance matrix. Suppose we have defined three vectors
$\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{z}$ with $n$ elements each. The covariance matrix is defined as
$$
\boldsymbol{\Sigma} = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\
\sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\
\sigma_{zx} & \sigma_{zy} & \sigma_{zz}
\end{bmatrix},
$$
where for example
$$
\sigma_{xy} =\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})(y_i- \overline{y}).
$$
The Numpy function np.cov calculates the covariance elements using the factor $1/(n-1)$ instead of $1/n$ since it assumes we do not have the exact mean values.
The following simple function uses the np.vstack function which takes each vector of dimension $1\times n$ and produces a $3\times n$ matrix $\boldsymbol{W}$
$$
\boldsymbol{W} = \begin{bmatrix} x_0 & x_1 & x_2 & \dots & x_{n-2} & x_{n-1} \\
y_0 & y_1 & y_2 & \dots & y_{n-2} & y_{n-1} \\
z_0 & z_1 & z_2 & \dots & z_{n-2} & z_{n-1}
\end{bmatrix},
$$
which in turn is converted into the $3\times 3$ covariance matrix
$\boldsymbol{\Sigma}$ via the Numpy function np.cov(). We note that we can also calculate
the mean value of each set of samples $\boldsymbol{x}$ etc using the Numpy
function np.mean(x). We can also extract the eigenvalues of the
covariance matrix through the np.linalg.eig() function.
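A minimal sketch of the covariance computation described above (the sample size and distributions are assumptions):
import numpy as np
n = 100
x = np.random.normal(size=n)
y = np.random.normal(size=n)
z = np.random.normal(size=n)
W = np.vstack((x, y, z))                          # 3 x n matrix
Sigma = np.cov(W)                                 # 3 x 3 covariance matrix, with the 1/(n-1) factor
eigenvalues, eigenvectors = np.linalg.eig(Sigma)  # eigenvalues of the covariance matrix
print(np.mean(x), Sigma, eigenvalues)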
Step15: Meet the Pandas
Step16: In the above we have imported pandas with the shorthand pd, the latter has become the standard way we import pandas. We make then a list of various variables
and reorganize the above lists into a DataFrame and then print out a neat table with specific column labels such as Name, place of birth and date of birth.
Displaying these results, we see that the indices are given by the default numbers from zero to three.
pandas is extremely flexible and we can easily change the above indices by defining a new type of indexing as
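A hedged sketch of the DataFrame construction described here (the specific names, places and dates are assumptions; the index Aragorn matches the row referenced below):
import pandas as pd
data = {'Name': ['Frodo Baggins', 'Bilbo Baggins', 'Aragorn II', 'Samwise Gamgee'],
        'Place of birth': ['Shire', 'Shire', 'Eriador', 'Shire'],
        'Date of birth': [2968, 2890, 2931, 2980]}
data_pandas = pd.DataFrame(data)                                               # default indices 0 to 3
data_pandas = pd.DataFrame(data, index=['Frodo', 'Bilbo', 'Aragorn', 'Sam'])   # a new type of indexing
print(data_pandas)
print(data_pandas.loc['Aragorn'])                                              # the row with index Aragorn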
Step17: Thereafter we display the content of the row which begins with the index Aragorn
Step18: We can easily append data to this, for example
Step19: Here are other examples where we use the DataFrame functionality to handle arrays, now with more interesting features for us, namely numbers. We set up a matrix
of dimensionality $10\times 5$ and compute the mean value and standard deviation of each column. Similarly, we can perform mathematical operations like squaring the matrix elements and many other operations.
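A minimal sketch of the operations in this step (the choice of normally distributed entries is an assumption):
import numpy as np
import pandas as pd
rows, cols = 10, 5
df = pd.DataFrame(np.random.randn(rows, cols))
print(df.mean())        # mean value of each column
print(df.std())         # standard deviation of each column
print(df**2)            # squaring the matrix elements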
Step20: Thereafter we can select specific columns only and plot final results
Step21: We can produce a $4\times 4$ matrix
Step22: and many other operations.
The Series class is another important class included in
pandas. You can view it as a specialization of DataFrame but where
we have just a single column of data. It shares many of the same features as DataFrame. As with DataFrame,
most operations are vectorized, achieving thereby a high performance when dealing with computations of arrays, in particular labeled arrays.
As we will see below, it also leads to very concise code, close to the mathematical operations we may be interested in.
For multidimensional arrays, we recommend strongly xarray. xarray has much of the same flexibility as pandas, but allows for the extension to higher dimensions than two. We will see examples later of the usage of both pandas and xarray.
Friday August 27
"Video of Lecture August 27, 2021"
Step23: This example serves several aims. It allows us to demonstrate several
aspects of data analysis and later machine learning algorithms. The
immediate visualization shows that our linear fit is not
impressive. It goes through the data points, but there are many
outliers which are not reproduced by our linear regression. We could
now play around with this small program and change for example the
factor in front of $x$ and the normal distribution. Try to change the
function $y$ to
$$
y = 10x+0.01 \times N(0,1),
$$
where $x$ is defined as before. Does the fit look better? Indeed, by
reducing the role of the noise given by the normal distribution we see immediately that
our linear prediction seemingly reproduces the training
set better. However, this testing 'by the eye' is obviously not satisfactory in the
long run. Here we have only defined the training data and our model, and
have not discussed a more rigorous approach to the cost function.
We need more rigorous criteria in defining whether we have succeeded or
not in modeling our training data. You will be surprised to see that
many scientists seldom venture beyond this 'by the eye' approach. A
standard approach for the cost function is the so-called $\chi^2$
function (a variant of the mean-squared error (MSE))
$$
\chi^2 = \frac{1}{n}
\sum_{i=0}^{n-1}\frac{(y_i-\tilde{y}_i)^2}{\sigma_i^2},
$$
where $\sigma_i^2$ is the variance (to be defined later) of the entry
$y_i$. We may not know the explicit value of $\sigma_i^2$; it serves
however the aim of scaling the equations and making the cost function
dimensionless.
Minimizing the cost function is a central aspect of
our discussions to come. Finding its minima as function of the model
parameters ($\alpha$ and $\beta$ in our case) will be a recurring
theme in these series of lectures. Essentially all machine learning
algorithms we will discuss center around the minimization of the
chosen cost function. This depends in turn on our specific
model for describing the data, a typical situation in supervised
learning. Automatizing the search for the minima of the cost function is a
central ingredient in all algorithms. Typical methods which are
employed are various variants of gradient methods. These will be
discussed in more detail later. Again, you'll be surprised to hear that
many practitioners minimize the above function ''by the eye', popularly dubbed as
'chi by the eye'. That is, change a parameter and see (visually and numerically) that
the $\chi^2$ function becomes smaller.
There are many ways to define the cost function. A simpler approach is to look at the relative difference between the training data and the predicted data, that is we define
the relative error (why would we prefer the MSE instead of the relative error?) as
$$
\epsilon_{\mathrm{relative}}= \frac{\vert \boldsymbol{y} -\boldsymbol{\tilde{y}}\vert}{\vert \boldsymbol{y}\vert}.
$$
The squared cost function results in an arithmetic mean-unbiased
estimator, and the absolute-value cost function results in a
median-unbiased estimator (in the one-dimensional case, and a
geometric median-unbiased estimator for the multi-dimensional
case). The squared cost function has the disadvantage that it has the tendency
to be dominated by outliers.
We can modify easily the above Python code and plot the relative error instead
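A hedged sketch of such a plot, assuming x, y and the prediction ytilde are the arrays defined in the example above:
import numpy as np
import matplotlib.pyplot as plt
relative_error = np.abs(y - ytilde)/np.abs(y)
plt.plot(x, relative_error, '.')
plt.xlabel('x')
plt.ylabel('relative error')
plt.show()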
Step24: Depending on the parameter in front of the normal distribution, we may
have a small or larger relative error. Try to play around with
different training data sets and study (graphically) the value of the
relative error.
As mentioned above, Scikit-Learn has an impressive functionality.
We can for example extract the values of $\alpha$ and $\beta$ and
their error estimates, or the variance and standard deviation and many
other properties from the statistical data analysis.
Here we show an
example of the functionality of Scikit-Learn.
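A minimal sketch of this functionality (the data set below is an assumption that mirrors the linear example above):
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
x = np.random.rand(100, 1)
y = 2.0 + 5.0*x + 0.5*np.random.randn(100, 1)
linreg = LinearRegression().fit(x, y)
ypredict = linreg.predict(x)
print(linreg.intercept_, linreg.coef_)      # estimates of alpha and beta
print(mean_squared_error(y, ypredict))      # mean squared error
print(r2_score(y, ypredict))                # R2 score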
Step25: The function coef gives us the parameter $\beta$ of our fit while intercept yields
$\alpha$. Depending on the constant in front of the normal distribution, we get values near or far from $\alpha =2$ and $\beta =5$. Try to play around with different parameters in front of the normal distribution. The function mean_squared_error gives us the mean square error, a risk metric corresponding to the expected value of the squared (quadratic) error or loss defined as
$$
MSE(\boldsymbol{y},\boldsymbol{\tilde{y}}) = \frac{1}{n}
\sum_{i=0}^{n-1}(y_i-\tilde{y}_i)^2,
$$
The smaller the value, the better the fit. Ideally we would like to
have an MSE equal zero. The attentive reader has probably recognized
this function as being similar to the $\chi^2$ function defined above.
The r2_score function computes $R^2$, the coefficient of
determination. It provides a measure of how well future samples are
likely to be predicted by the model. Best possible score is 1.0 and it
can be negative (because the model can be arbitrarily worse). A
constant model that always predicts the expected value of $\boldsymbol{y}$,
disregarding the input features, would get a $R^2$ score of $0.0$.
If $\tilde{\boldsymbol{y}}_i$ is the predicted value of the $i-th$ sample and $y_i$ is the corresponding true value, then the score $R^2$ is defined as
$$
R^2(\boldsymbol{y}, \tilde{\boldsymbol{y}}) = 1 - \frac{\sum_{i=0}^{n - 1} (y_i - \tilde{y}_i)^2}{\sum_{i=0}^{n - 1} (y_i - \bar{y})^2},
$$
where we have defined the mean value of $\boldsymbol{y}$ as
$$
\bar{y} = \frac{1}{n} \sum_{i=0}^{n - 1} y_i.
$$
Another quantity that we will meet again in our discussions of regression analysis is
the mean absolute error (MAE), a risk metric corresponding to the expected value of the absolute error loss or what we call the $l1$-norm loss. In our discussion above we presented the relative error.
The MAE is defined as follows
$$
\text{MAE}(\boldsymbol{y}, \boldsymbol{\tilde{y}}) = \frac{1}{n} \sum_{i=0}^{n-1} \left| y_i - \tilde{y}_i \right|.
$$
We present the
squared logarithmic (quadratic) error
$$
\text{MSLE}(\boldsymbol{y}, \boldsymbol{\tilde{y}}) = \frac{1}{n} \sum_{i=0}^{n - 1} (\log_e (1 + y_i) - \log_e (1 + \tilde{y}_i) )^2,
$$
where $\log_e (x)$ stands for the natural logarithm of $x$. This error
estimate is best to use when the targets have exponential growth, such
as population counts, average sales of a commodity over a span of
years etc.
Finally, another cost function is the Huber cost function used in robust regression.
The rationale behind this possible cost function is its reduced
sensitivity to outliers in the data set. In our discussions on
dimensionality reduction and normalization of data we will meet other
ways of dealing with outliers.
The Huber cost function is defined as
$$
H_{\delta}(\boldsymbol{a})=\left\{\begin{array}{cc}\frac{1}{2} \boldsymbol{a}^{2}& \text{for }|\boldsymbol{a}|\leq \delta, \\ \delta (|\boldsymbol{a}|-\frac{1}{2}\delta ),&\text{otherwise}.\end{array}\right.
$$
Here $\boldsymbol{a}=\boldsymbol{y} - \boldsymbol{\tilde{y}}$.
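A hedged sketch of the Huber cost function above:
import numpy as np
def huber(a, delta=1.0):
    # quadratic for small residuals, linear for large ones
    return np.where(np.abs(a) <= delta, 0.5*a**2, delta*(np.abs(a) - 0.5*delta))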
We will discuss in more detail these and other functions in the
various lectures. We conclude this part with another example. Instead
of a linear $x$-dependence we study now a cubic polynomial and use the
polynomial regression analysis tools of scikit-learn.
Step26: To our real data
Step27: Before we proceed, we define also a function for making our plots. You can obviously avoid this and simply set up various matplotlib commands every time you need them. You may however find it convenient to collect all such commands in one function and simply call this function.
Step29: Our next step is to read the data on experimental binding energies and
reorganize them as functions of the mass number $A$, the number of
protons $Z$ and neutrons $N$ using pandas. Before we do this it is
always useful (unless you have a binary file or other types of compressed
data) to actually open the file and simply take a look at it!
In particular, the program that outputs the final nuclear masses is written in Fortran with a specific format. It means that we need to figure out the format and which columns contain the data we are interested in. Pandas comes with a function that reads formatted output. After having admired the file, we are now ready to start massaging it with pandas. The file begins with some basic format information.
Step30: The data we are interested in are in columns 2, 3, 4 and 11, giving us
the number of neutrons, protons, mass numbers and binding energies,
respectively. We add also for the sake of completeness the element name. The data are in fixed-width formatted lines and we will
covert them into the pandas DataFrame structure.
Step31: We have now read in the data, grouped them according to the variables we are interested in.
We see how easy it is to reorganize the data using pandas. If we
were to do these operations in C/C++ or Fortran, we would have had to
write various functions/subroutines which perform the above
reorganizations for us. Having reorganized the data, we can now start
to make some simple fits using both the functionalities in numpy and
Scikit-Learn afterwards.
Now we define five variables which contain
the number of nucleons $A$, the number of protons $Z$ and the number of neutrons $N$, the element name and finally the energies themselves.
Step32: The next step, and we will define this mathematically later, is to set up the so-called design matrix. We will throughout call this matrix $\boldsymbol{X}$.
It has dimensionality $p\times n$, where $n$ is the number of data points and $p$ are the so-called predictors. In our case here they are given by the number of polynomials in $A$ we wish to include in the fit.
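A hedged sketch of such a design matrix for the liquid-drop terms used below, with one row per data point and one column per predictor (A is assumed to be the array of mass numbers defined above):
X = np.zeros((len(A), 5))
X[:, 0] = 1.0
X[:, 1] = A
X[:, 2] = A**(2.0/3.0)
X[:, 3] = A**(-1.0/3.0)
X[:, 4] = A**(-1.0)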
Step33: With scikitlearn we are now ready to use linear regression and fit our data.
Step34: Pretty simple!
Now we can print measures of how our fit is doing, the coefficients from the fits and plot the final fit together with our data.
Step35: Seeing the wood for the trees
As a teaser, let us now see how we can do this with decision trees using scikit-learn. Later we will switch to so-called random forests!
Step36: And what about using neural networks?
The seaborn package allows us to visualize data in an efficient way. Note that we use scikit-learn's multi-layer perceptron (or feed forward neural network)
functionality.
Step37: A first summary
The aim behind these introductory words was to present to you various
Python libraries and their functionalities, in particular libraries like
numpy, pandas, xarray and matplotlib, and others that make our life much easier
in handling various data sets and visualizing data.
Furthermore,
Scikit-Learn allows us with few lines of code to implement popular
Machine Learning algorithms for supervised learning. Later we will meet Tensorflow, a powerful library for deep learning.
Now it is time to dive more into the details of various methods. We will start with linear regression and try to take a deeper look at what it entails.
Why Linear Regression (aka Ordinary Least Squares and family)
Fitting a continuous function with linear parameterization in terms of the parameters $\boldsymbol{\beta}$.
* Method of choice for fitting a continuous function!
* Gives an excellent introduction to central Machine Learning features with understandable pedagogical links to other methods like Neural Networks, Support Vector Machines etc
* Analytical expression for the fitting parameters $\boldsymbol{\beta}$
* Analytical expressions for statistical properties like mean values, variances, confidence intervals and more
* Analytical relation with probabilistic interpretations
* Easy to introduce basic concepts like bias-variance tradeoff, cross-validation, resampling and regularization techniques and many other ML topics
* Easy to code! And links well with classification problems and logistic regression and neural networks
* Allows for easy hands-on understanding of gradient descent methods
* and many more features
For more discussions of Ridge and Lasso regression, Wessel van Wieringen's article is highly recommended.
Similarly, Mehta et al's article is also recommended.
Regression analysis, overarching aims
Regression modeling deals with the description of the sampling distribution of a given random variable $y$ and how it varies as function of another variable or a set of such variables $\boldsymbol{x} =[x_0, x_1,\dots, x_{n-1}]^T$.
The first variable is called the dependent, the outcome or the response variable while the set of variables $\boldsymbol{x}$ is called the independent variable, or the predictor variable or the explanatory variable.
A regression model aims at finding a likelihood function $p(\boldsymbol{y}\vert \boldsymbol{x})$, that is the conditional distribution for $\boldsymbol{y}$ with a given $\boldsymbol{x}$. The estimation of $p(\boldsymbol{y}\vert \boldsymbol{x})$ is made using a data set with
* $n$ cases $i = 0, 1, 2, \dots, n-1$
* Response (target, dependent or outcome) variable $y_i$ with $i = 0, 1, 2, \dots, n-1$
* $p$ so-called explanatory (independent or predictor) variables $\boldsymbol{x}_i=[x_{i0}, x_{i1}, \dots, x_{ip-1}]$ with $i = 0, 1, 2, \dots, n-1$ and explanatory variables running from $0$ to $p-1$. See below for more explicit examples.
The goal of the regression analysis is to extract/exploit relationship between $\boldsymbol{y}$ and $\boldsymbol{x}$ in or to infer causal dependencies, approximations to the likelihood functions, functional relationships and to make predictions, making fits and many other things.
Regression analysis, overarching aims II
Consider an experiment in which $p$ characteristics of $n$ samples are
measured. The data from this experiment, for various explanatory variables $p$ are normally represented by a matrix
$\mathbf{X}$.
The matrix $\mathbf{X}$ is called the design
matrix. Additional information of the samples is available in the
form of $\boldsymbol{y}$ (also as above). The variable $\boldsymbol{y}$ is
generally referred to as the response variable. The aim of
regression analysis is to explain $\boldsymbol{y}$ in terms of
$\boldsymbol{X}$ through a functional relationship like $y_i =
f(\mathbf{X}_{i,\ast})$. When no prior knowledge on the form of
$f(\cdot)$ is available, it is common to assume a linear relationship
between $\boldsymbol{X}$ and $\boldsymbol{y}$. This assumption gives rise to
the linear regression model where $\boldsymbol{\beta} = [\beta_0, \ldots,
\beta_{p-1}]^{T}$ are the regression parameters.
Linear regression gives us a set of analytical equations for the parameters $\beta_j$.
Examples
In order to understand the relation among the predictors $p$, the set of data $n$ and the target (outcome, output etc) $\boldsymbol{y}$,
consider the model we discussed for describing nuclear binding energies.
There we assumed that we could parametrize the data using a polynomial approximation based on the liquid drop model.
Assuming
$$
BE(A) = a_0+a_1A+a_2A^{2/3}+a_3A^{-1/3}+a_4A^{-1},
$$
we have five predictors, that is the intercept, the $A$ dependent term, the $A^{2/3}$ term and the $A^{-1/3}$ and $A^{-1}$ terms.
This gives $p=0,1,2,3,4$. Furthermore we have $n$ entries for each predictor. It means that our design matrix is a
$p\times n$ matrix $\boldsymbol{X}$.
Here the predictors are based on a model we have made. A popular data set which is widely encountered in ML applications is the
so-called credit card default data from Taiwan. The data set contains data on $n=30000$ credit card holders with predictors like gender, marital status, age, profession, education, etc. In total there are $24$ such predictors or attributes leading to a design matrix of dimensionality $24 \times 30000$. This is however a classification problem and we will come back to it when we discuss Logistic Regression.
General linear models
Before we proceed let us study a case from linear algebra where we aim at fitting a set of data $\boldsymbol{y}=[y_0,y_1,\dots,y_{n-1}]$. We could think of these data as a result of an experiment or a complicated numerical experiment. These data are functions of a series of variables $\boldsymbol{x}=[x_0,x_1,\dots,x_{n-1}]$, that is $y_i = y(x_i)$ with $i=0,1,2,\dots,n-1$. The variables $x_i$ could represent physical quantities like time, temperature, position etc. We assume that $y(x)$ is a smooth function.
Since obtaining these data points may not be trivial, we want to use these data to fit a function which can allow us to make predictions for values of $y$ which are not in the present set. The perhaps simplest approach is to assume we can parametrize our function in terms of a polynomial of degree $n-1$ with $n$ points, that is
$$
y=y(x) \rightarrow y(x_i)=\tilde{y}_i+\epsilon_i=\sum_{j=0}^{n-1} \beta_j x_i^j+\epsilon_i,
$$
where $\epsilon_i$ is the error in our approximation.
Rewriting the fitting procedure as a linear algebra problem
For every set of values $y_i,x_i$ we have thus the corresponding set of equations
$$
\begin{align}
y_0&=\beta_0+\beta_1x_0^1+\beta_2x_0^2+\dots+\beta_{n-1}x_0^{n-1}+\epsilon_0\\
y_1&=\beta_0+\beta_1x_1^1+\beta_2x_1^2+\dots+\beta_{n-1}x_1^{n-1}+\epsilon_1\\
y_2&=\beta_0+\beta_1x_2^1+\beta_2x_2^2+\dots+\beta_{n-1}x_2^{n-1}+\epsilon_2\\
\dots & \dots \\
y_{n-1}&=\beta_0+\beta_1x_{n-1}^1+\beta_2x_{n-1}^2+\dots+\beta_{n-1}x_{n-1}^{n-1}+\epsilon_{n-1}.
\end{align}
$$
Rewriting the fitting procedure as a linear algebra problem, more details
Defining the vectors
$$
\boldsymbol{y} = [y_0,y_1, y_2,\dots, y_{n-1}]^T,
$$
and
$$
\boldsymbol{\beta} = [\beta_0,\beta_1, \beta_2,\dots, \beta_{n-1}]^T,
$$
and
$$
\boldsymbol{\epsilon} = [\epsilon_0,\epsilon_1, \epsilon_2,\dots, \epsilon_{n-1}]^T,
$$
and the design matrix
$$
\boldsymbol{X}=
\begin{bmatrix}
1& x_{0}^1 &x_{0}^2& \dots & \dots &x_{0}^{n-1}\\
1& x_{1}^1 &x_{1}^2& \dots & \dots &x_{1}^{n-1}\\
1& x_{2}^1 &x_{2}^2& \dots & \dots &x_{2}^{n-1}\\
\dots& \dots &\dots& \dots & \dots &\dots\\
1& x_{n-1}^1 &x_{n-1}^2& \dots & \dots &x_{n-1}^{n-1}
\end{bmatrix}
$$
we can rewrite our equations as
$$
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}.
$$
The above design matrix is called a Vandermonde matrix.
Generalizing the fitting procedure as a linear algebra problem
We are obviously not limited to the above polynomial expansions. We
could replace the various powers of $x$ with elements of Fourier
series or instead of $x_i^j$ we could have $\cos{(j x_i)}$ or $\sin{(j
x_i)}$, or time series or other orthogonal functions. For every set
of values $y_i,x_i$ we can then generalize the equations to
$$
\begin{align}
y_0&=\beta_0x_{00}+\beta_1x_{01}+\beta_2x_{02}+\dots+\beta_{n-1}x_{0n-1}+\epsilon_0\\
y_1&=\beta_0x_{10}+\beta_1x_{11}+\beta_2x_{12}+\dots+\beta_{n-1}x_{1n-1}+\epsilon_1\\
y_2&=\beta_0x_{20}+\beta_1x_{21}+\beta_2x_{22}+\dots+\beta_{n-1}x_{2n-1}+\epsilon_2\\
\dots & \dots \\
y_{i}&=\beta_0x_{i0}+\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_{n-1}x_{in-1}+\epsilon_i\\
\dots & \dots \\
y_{n-1}&=\beta_0x_{n-1,0}+\beta_1x_{n-1,1}+\beta_2x_{n-1,2}+\dots+\beta_{n-1}x_{n-1,n-1}+\epsilon_{n-1}.
\end{align}
$$
Note that we have $p=n$ here. The matrix is symmetric. This is generally not the case!
Generalizing the fitting procedure as a linear algebra problem
We redefine in turn the matrix $\boldsymbol{X}$ as
$$
\boldsymbol{X}=
\begin{bmatrix}
x_{00}& x_{01} &x_{02}& \dots & \dots &x_{0,n-1}\\
x_{10}& x_{11} &x_{12}& \dots & \dots &x_{1,n-1}\\
x_{20}& x_{21} &x_{22}& \dots & \dots &x_{2,n-1}\\
\dots& \dots &\dots& \dots & \dots &\dots\\
x_{n-1,0}& x_{n-1,1} &x_{n-1,2}& \dots & \dots &x_{n-1,n-1}
\end{bmatrix}
$$
and without loss of generality we rewrite again our equations as
$$
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}.
$$
The left-hand side of this equation is known. Our error vector $\boldsymbol{\epsilon}$ and the parameter vector $\boldsymbol{\beta}$ are our unknown quantities. How can we obtain the optimal set of $\beta_i$ values?
Optimizing our parameters
We have defined the matrix $\boldsymbol{X}$ via the equations
$$
\begin{align}
y_0&=\beta_0x_{00}+\beta_1x_{01}+\beta_2x_{02}+\dots+\beta_{n-1}x_{0n-1}+\epsilon_0\\
y_1&=\beta_0x_{10}+\beta_1x_{11}+\beta_2x_{12}+\dots+\beta_{n-1}x_{1n-1}+\epsilon_1\\
y_2&=\beta_0x_{20}+\beta_1x_{21}+\beta_2x_{22}+\dots+\beta_{n-1}x_{2n-1}+\epsilon_2\\
\dots & \dots \\
y_{i}&=\beta_0x_{i0}+\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_{n-1}x_{in-1}+\epsilon_i\\
\dots & \dots \\
y_{n-1}&=\beta_0x_{n-1,0}+\beta_1x_{n-1,1}+\beta_2x_{n-1,2}+\dots+\beta_{n-1}x_{n-1,n-1}+\epsilon_{n-1}.
\end{align}
$$
As we noted above, we stayed with a system with the design matrix
$\boldsymbol{X}\in {\mathbb{R}}^{n\times n}$, that is we have $p=n$. For reasons to come later (algorithmic arguments) we will hereafter define
our matrix as $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, with the predictors referring to the column numbers and the entries $n$ being the row elements.
Our model for the nuclear binding energies
In our introductory notes we looked at the so-called liquid drop model. Let us remind ourselves about what we did by looking at the code.
We restate the parts of the code we are most interested in.
Step38: With $\boldsymbol{\beta}\in {\mathbb{R}}^{p\times 1}$, it means that we will hereafter write our equations for the approximation as
$$
\boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta},
$$
throughout these lectures.
Optimizing our parameters, more details
With the above we use the design matrix to define the approximation $\boldsymbol{\tilde{y}}$ via the unknown quantity $\boldsymbol{\beta}$ as
$$
\boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta},
$$
and in order to find the optimal parameters $\beta_i$ instead of solving the above linear algebra problem, we define a function which gives a measure of the spread between the values $y_i$ (which represent hopefully the exact values) and the parameterized values $\tilde{y}_i$, namely
$$
C(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right\},
$$
or using the matrix $\boldsymbol{X}$ and in a more compact matrix-vector notation as
$$
C(\boldsymbol{\beta})=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\}.
$$
This function is one possible way to define the so-called cost function.
It is also common to define
the function $C$ as
$$
C(\boldsymbol{\beta})=\frac{1}{2n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2,
$$
since when taking the first derivative with respect to the unknown parameters $\beta$, the factor of $2$ cancels out.
Interpretations and optimizing our parameters
The function
$$
C(\boldsymbol{\beta})=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\},
$$
can be linked to the variance of the quantity $y_i$ if we interpret the latter as the mean value.
When linking (see the discussion below) with the maximum likelihood approach below, we will indeed interpret $y_i$ as a mean value
$$
y_{i}=\langle y_i \rangle = \beta_0x_{i,0}+\beta_1x_{i,1}+\beta_2x_{i,2}+\dots+\beta_{n-1}x_{i,n-1}+\epsilon_i,
$$
where $\langle y_i \rangle$ is the mean value. Keep in mind also that
till now we have treated $y_i$ as the exact value. Normally, the
response (dependent or outcome) variable $y_i$ the outcome of a
numerical experiment or another type of experiment and is thus only an
approximation to the true value. It is then always accompanied by an
error estimate, often limited to a statistical error estimate given by
the standard deviation discussed earlier. In the discussion here we
will treat $y_i$ as our exact value for the response variable.
In order to find the parameters $\beta_i$ we will then minimize the spread of $C(\boldsymbol{\beta})$, that is we are going to solve the problem
$$
{\displaystyle \min_{\boldsymbol{\beta}\in
{\mathbb{R}}^{p}}}\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\}.
$$
In practical terms it means we will require
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)^2\right]=0,
$$
which results in
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_{ij}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)\right]=0,
$$
or in a matrix-vector form as
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right).
$$
Interpretations and optimizing our parameters
We can rewrite
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right),
$$
as
$$
\boldsymbol{X}^T\boldsymbol{y} = \boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta},
$$
and if the matrix $\boldsymbol{X}^T\boldsymbol{X}$ is invertible we have the solution
$$
\boldsymbol{\beta} =\left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}.
$$
We note also that since our design matrix is defined as $\boldsymbol{X}\in
{\mathbb{R}}^{n\times p}$, the product $\boldsymbol{X}^T\boldsymbol{X} \in
{\mathbb{R}}^{p\times p}$. In the above case we have that $p \ll n$,
in our case $p=5$ meaning that we end up with inverting a small
$5\times 5$ matrix. This is a rather common situation, in many cases we end up with low-dimensional
matrices to invert. The methods discussed here and for many other
supervised learning algorithms like classification with logistic
regression or support vector machines, exhibit dimensionalities which
allow for the usage of direct linear algebra methods such as LU decomposition or Singular Value Decomposition (SVD) for finding the inverse of the matrix
$\boldsymbol{X}^T\boldsymbol{X}$.
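A hedged numpy sketch of this analytical solution (X and y are assumed to be the design matrix and data vector defined earlier):
beta = np.linalg.inv(X.T @ X) @ (X.T @ y)
ytilde = X @ beta
In practice, np.linalg.pinv or an SVD-based solver is often preferred when $\boldsymbol{X}^T\boldsymbol{X}$ is close to singular.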
Small question
Step39: Alternatively, you can use the least squares functionality in Numpy as
Step40: And finally we plot our fit with and compare with data
Step41: Adding error analysis and training set up
We can easily test our fit by computing the $R2$ score that we discussed in connection with the functionality of Scikit-Learn in the introductory slides.
Since we are not using Scikit-Learn here we can define our own $R2$ function as
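A minimal sketch of such a function (numpy is assumed imported as np):
def R2(y_data, y_model):
    return 1 - np.sum((y_data - y_model)**2)/np.sum((y_data - np.mean(y_data))**2)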
Step42: and we would be using it as
Step43: We can easily add our MSE score as
Step44: and finally the relative error as
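Hedged sketches of these two functions (numpy is assumed imported as np):
def MSE(y_data, y_model):
    n = np.size(y_model)
    return np.sum((y_data - y_model)**2)/n

def RelativeError(y_data, y_model):
    return abs((y_data - y_model)/y_data)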
Step45: The $\chi^2$ function
Normally, the response (dependent or outcome) variable $y_i$ is the
outcome of a numerical experiment or another type of experiment and is
thus only an approximation to the true value. It is then always
accompanied by an error estimate, often limited to a statistical error
estimate given by the standard deviation discussed earlier. In the
discussion here we will treat $y_i$ as our exact value for the
response variable.
Introducing the standard deviation $\sigma_i$ for each measurement
$y_i$, we define now the $\chi^2$ function (omitting the $1/n$ term)
as
$$
\chi^2(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\frac{\left(y_i-\tilde{y}_i\right)^2}{\sigma_i^2}=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\frac{1}{\boldsymbol{\Sigma^2}}\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right\},
$$
where the matrix $\boldsymbol{\Sigma}$ is a diagonal matrix with $\sigma_i$ as matrix elements.
The $\chi^2$ function
In order to find the parameters $\beta_i$ we will then minimize the spread of $\chi^2(\boldsymbol{\beta})$ by requiring
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)^2\right]=0,
$$
which results in
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}\frac{x_{ij}}{\sigma_i}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)\right]=0,
$$
or in a matrix-vector form as
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right).
$$
where we have defined the matrix $\boldsymbol{A} =\boldsymbol{X}/\boldsymbol{\Sigma}$ with matrix elements $a_{ij} = x_{ij}/\sigma_i$ and the vector $\boldsymbol{b}$ with elements $b_i = y_i/\sigma_i$.
The $\chi^2$ function
We can rewrite
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right),
$$
as
$$
\boldsymbol{A}^T\boldsymbol{b} = \boldsymbol{A}^T\boldsymbol{A}\boldsymbol{\beta},
$$
and if the matrix $\boldsymbol{A}^T\boldsymbol{A}$ is invertible we have the solution
$$
\boldsymbol{\beta} =\left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1}\boldsymbol{A}^T\boldsymbol{b}.
$$
The $\chi^2$ function
If we then introduce the matrix
$$
\boldsymbol{H} = \left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1},
$$
we have then the following expression for the parameters $\beta_j$ (the matrix elements of $\boldsymbol{H}$ are $h_{ij}$)
$$
\beta_j = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}\frac{y_i}{\sigma_i}\frac{x_{ik}}{\sigma_i} = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}b_ia_{ik}
$$
We state without proof the expression for the uncertainty in the parameters $\beta_j$ as (we leave this as an exercise)
$$
\sigma^2(\beta_j) = \sum_{i=0}^{n-1}\sigma_i^2\left( \frac{\partial \beta_j}{\partial y_i}\right)^2,
$$
resulting in
$$
\sigma^2(\beta_j) = \sum_{k=0}^{p-1}\sum_{l=0}^{p-1}h_{jk}h_{jl}\sum_{i=0}^{n-1}a_{ik}a_{il} = h_{jj}!
$$
The $\chi^2$ function
The first step here is to approximate the function $y$ with a first-order polynomial, that is we write
$$
y=y(x) \rightarrow y(x_i) \approx \beta_0+\beta_1 x_i.
$$
By computing the derivatives of $\chi^2$ with respect to $\beta_0$ and $\beta_1$ show that these are given by
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_0} = -2\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0,
$$
and
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_1} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_i\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0.
$$
The $\chi^2$ function
For a linear fit (a first-order polynomial) we don't need to invert a matrix!!
Defining
$$
\gamma = \sum_{i=0}^{n-1}\frac{1}{\sigma_i^2},
$$
$$
\gamma_x = \sum_{i=0}^{n-1}\frac{x_{i}}{\sigma_i^2},
$$
$$
\gamma_y = \sum_{i=0}^{n-1}\left(\frac{y_i}{\sigma_i^2}\right),
$$
$$
\gamma_{xx} = \sum_{i=0}^{n-1}\frac{x_ix_{i}}{\sigma_i^2},
$$
$$
\gamma_{xy} = \sum_{i=0}^{n-1}\frac{y_ix_{i}}{\sigma_i^2},
$$
we obtain
$$
\beta_0 = \frac{\gamma_{xx}\gamma_y-\gamma_x\gamma_{xy}}{\gamma\gamma_{xx}-\gamma_x^2},
$$
$$
\beta_1 = \frac{\gamma_{xy}\gamma-\gamma_x\gamma_y}{\gamma\gamma_{xx}-\gamma_x^2}.
$$
This approach (different linear and non-linear regression) suffers
often from both being underdetermined and overdetermined in the
unknown coefficients $\beta_i$. A better approach is to use the
Singular Value Decomposition (SVD) method discussed next week.
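A hedged sketch of the closed-form weighted linear fit above (x, y and sigma are assumed to be numpy arrays with the data and their standard deviations):
gamma    = np.sum(1.0/sigma**2)
gamma_x  = np.sum(x/sigma**2)
gamma_y  = np.sum(y/sigma**2)
gamma_xx = np.sum(x*x/sigma**2)
gamma_xy = np.sum(x*y/sigma**2)
denominator = gamma*gamma_xx - gamma_x**2
beta1 = (gamma*gamma_xy - gamma_x*gamma_y)/denominator
beta0 = (gamma_xx*gamma_y - gamma_x*gamma_xy)/denominator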
Fitting an Equation of State for Dense Nuclear Matter
Before we continue, let us introduce yet another example. We are going to fit the
nuclear equation of state using results from many-body calculations.
The equation of state we have made available here, as function of
density, has been derived using modern nucleon-nucleon potentials with
the addition of three-body
forces. This
time the file is presented as a standard csv file.
The beginning of the Python code here is similar to what you have seen
before, with the same initializations and declarations. We use also
pandas again, rather extensively in order to organize our data.
The difference now is that we use Scikit-Learn's regression tools
instead of our own matrix inversion implementation. Furthermore, we
sneak in Ridge regression (to be discussed below) which includes a
hyperparameter $\lambda$, also to be explained below.
The code
Step46: The above simple polynomial in density $\rho$ gives an excellent fit
to the data.
We note also that there is a small deviation between the
standard OLS and the Ridge regression at higher densities. We discuss this in more detail
below.
Splitting our Data in Training and Test data
It is normal in essentially all Machine Learning studies to split the
data in a training set and a test set (sometimes also an additional
validation set). Scikit-Learn has an own function for this. There
is no explicit recipe for how much data should be included as training
data and say test data. An accepted rule of thumb is to use
approximately $2/3$ to $4/5$ of the data as training data. We will
postpone a discussion of this splitting to the end of these notes and
our discussion of the so-called bias-variance tradeoff. Here we
limit ourselves to repeat the above equation of state fitting example
but now splitting the data into a training set and a test set.
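A hedged sketch of such a split with Scikit-Learn (X and y stand for the design matrix and target values of the example; the 80/20 split is one choice within the rule of thumb above):
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)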
Step47: Exercises for week 35
Here are three possible exercises for week 35 and the lab sessions of Wednesday September 1.
Exercise 1
Step48: Write your own code (following the examples under the regression notes) for computing the parametrization of the data set fitting a second-order polynomial.
Use thereafter scikit-learn (see again the examples in the regression slides) and compare with your own code.
Using scikit-learn, compute also the mean square error, a risk metric corresponding to the expected value of the squared (quadratic) error defined as
$$
MSE(\boldsymbol{y},\boldsymbol{\tilde{y}}) = \frac{1}{n}
\sum_{i=0}^{n-1}(y_i-\tilde{y}_i)^2,
$$
and the $R^2$ score function.
If $\tilde{\boldsymbol{y}}_i$ is the predicted value of the $i-th$ sample and $y_i$ is the corresponding true value, then the score $R^2$ is defined as
$$
R^2(\boldsymbol{y}, \tilde{\boldsymbol{y}}) = 1 - \frac{\sum_{i=0}^{n - 1} (y_i - \tilde{y}_i)^2}{\sum_{i=0}^{n - 1} (y_i - \bar{y})^2},
$$
where we have defined the mean value of $\boldsymbol{y}$ as
$$
\bar{y} = \frac{1}{n} \sum_{i=0}^{n - 1} y_i.
$$
You can use the functionality included in scikit-learn. If you feel for it, you can use your own program and define functions which compute the above two functions.
Discuss the meaning of these results. Try also to vary the coefficient in front of the added stochastic noise term and discuss the quality of the fits.
<!-- --- begin solution of exercise --- -->
Solution.
The code here is an example of where we define our own design matrix and fit parameters $\beta$.
Step49: <!-- --- end solution of exercise --- -->
Exercise 3
Step50: Then we can use the standard scaler to scale our data as
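A hedged sketch of the scaling step (X_train and X_test are assumed to come from a train/test split as above):
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)                          # learn mean and standard deviation from the training data only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)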
Step51: In this exercise we want you to compute the MSE for the training
data and the test data as function of the complexity of a polynomial,
that is the degree of a given polynomial. We want you also to compute the $R2$ score as function of the complexity of the model for both training data and test data. You should also run the calculation with and without scaling.
One of
the aims is to reproduce Figure 2.11 of Hastie et al.
Our data is defined by $x\in [-3,3]$ with a total of for example $100$ data points. | Python Code:
import numpy as np
Explanation: <!-- HTML file automatically generated from DocOnce source (https://github.com/doconce/doconce/)
doconce format html week34.do.txt --no_mako -->
<!-- dom:TITLE: Week 34: Introduction to the course, Logistics and Practicalities -->
Week 34: Introduction to the course, Logistics and Practicalities
Morten Hjorth-Jensen, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University
Date: Nov 13, 2021
Copyright 1999-2021, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license
Overview of first week
Wednesday August 25: Introduction to software and repetition of Python Programming
Thursday August 26: First lecture: Presentation of the course, aims and content
Thursday: Second Lecture: Start with simple linear regression and repetition of linear algebra and elements of statistics
Friday August 27: Linear regression
Computer lab: Wednesdays, 8am-6pm. First time: Wednesday August 25.
Reading Recommendations
For the reading assignments we use the following abbreviations:
* GBC: Goodfellow, Bengio, and Courville, Deep Learning
* CMB: Christopher M. Bishop, Pattern Recognition and Machine Learning
* HTF: Hastie, Tibshirani, and Friedman, The Elements of Statistical Learning
* AG: Aurelien Geron, Hands‑On Machine Learning with Scikit‑Learn and TensorFlow
Reading recommendations this week: Refresh linear algebra, GBC chapters 1 and 2. CMB sections 1.1 and 3.1. HTF chapters 2 and 3. Install scikit-learn. See lecture notes for week 34 at https://compphysics.github.io/MachineLearning/doc/web/course.html
Thursday August 26
The lectures will be recorded and updated videos will be posted after the lectures.
"Video of Lecture August 26, 2021":"https://www.uio.no/studier/emner/matnat/fys/FYS-STK4155/h21/forelesningsvideoer/LectureThursdayAugust26.mp4?vrtx=view-as-webpage
Zoom link for lectures: https://msu.zoom.us/j/93311529525?pwd=a1VXSzY4aTFWVy9Rb05mNDJTZ09lZz09
Meeting ID: 933 1152 9525
Passcode: 646102
Video of Lecture from Fall Semester 2020.
Lectures and ComputerLab
Lectures: Thursday (12.15pm-2pm and Friday (12.15pm-2pm).
Weekly reading assignments and videos needed to solve projects and exercises.
Weekly exercises when not working on projects. You can hand in exercises if you want.
Detailed lecture notes, exercises, all programs presented, projects etc can be found at the homepage of the course.
Weekly plans and all other information are on the official webpage.
No final exam, three projects that are graded and have to be approved.
Announcement
NORA AI competition: See the link here https://www.nora.ai/Competition/image-segmentation.html
Communication channels
Chat and communications via <canvas.uio.no>, GDPR safe
Slack channel: machinelearninguio.slack.com
Piazza : enlist at <https:piazza.com/uio.no/fall2021/fysstk4155>
Course Format
Three compulsory projects. Electronic reports only using Canvas to hand in projects and git as version control software and GitHub for repository (or GitLab) of all your material.
Evaluation and grading: The three projects are graded and each counts 1/3 of the final mark. No final written or oral exam.
a. For the last project each group/participant submits a proposal or works with suggested (by us) proposals for the project.
b. If possible, we would like to organize the last project as a workshop where each group makes a poster and presents this to all other participants of the course
c. Poster session where all participants can study and discuss the other proposals.
d. Based on feedback etc, each group finalizes the report and submits for grading.
Python is the default programming language, but feel free to use C/C++ and/or Fortran or other programming languages. All source codes discussed during the lectures can be found at the webpage and github address of the course.
Teachers
Teachers :
* Morten Hjorth-Jensen, [email protected]
Phone: +47-48257387
Office: Department of Physics, University of Oslo, Eastern wing, room FØ470
Office hours: Anytime! Individual or group office hours can be arranged either in person or via zoom. Feel free to send an email for planning.
Øyvind Sigmundson Schøyen, [email protected]
Office: Department of Physics, University of Oslo, Eastern wing, room FØ452
Stian Dysthe Bilek [email protected]
Office: Department of Physics, University of Oslo, Eastern wing, room FØ450
Linus Ekstrøm, [email protected], [email protected]
Nicholas Karlsen, [email protected], [email protected]
Bendik Steinsvåg Dalen, [email protected]
Philip Karim Sørli Niane, [email protected]
Deadlines for projects (tentative)
Project 1: October 11 (available September 10) graded with feedback)
Project 2: November 20 (available October 12, graded with feedback)
Project 3: December 17 (available November 13, graded with feedback)
Projects are handed in using Canvas. We use Github as repository for codes, benchmark calculations etc. Comments and feedback on projects only via Canvas.
Recommended textbooks
The lecture notes are collected as a jupyter-book at https://compphysics.github.io/MachineLearning/doc/LectureNotes/_build/html/intro.html.
In addition to the lecture notes, we recommend the books of Bishop and Goodfellow et al. We will follow these texts closely and the weekly reading assignments refer to these two texts. The text by Hastie et al is also widely used in the Machine Learning community. Finally, we also recommend the hands-on text by Geron, see below.
Christopher M. Bishop, Pattern Recognition and Machine Learning, Springer, https://www.springer.com/gp/book/9780387310732.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. The different chapters are available for free at https://www.deeplearningbook.org/. Chapters 2-14 are highly recommended. The lectures follow to a larg extent this text. The weekly plans will include reading suggestions from these two textbooks.
Additional textbooks:
Trevor Hastie, Robert Tibshirani, Jerome H. Friedman, The Elements of Statistical Learning, Springer, https://www.springer.com/gp/book/9780387848570. This is a well-known text and serves as additional literature.
Aurelien Geron, Hands‑On Machine Learning with Scikit‑Learn and TensorFlow, O'Reilly, https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/. This text is very useful since it contains many code examples and hands-on applications of all algorithms discussed in this course.
Prerequisites
Basic knowledge in programming and mathematics, with an emphasis on
linear algebra. Knowledge of Python or/and C++ as programming
languages is strongly recommended and experience with Jupiter notebook
is recommended. Required courses are the equivalents to the University
of Oslo mathematics courses MAT1100, MAT1110, MAT1120 and at least one
of the corresponding computing and programming courses INF1000/INF1110
or MAT-INF1100/MAT-INF1100L/BIOS1100/KJM-INF1100. Most universities
offer nowadays a basic programming course (often compulsory) where
Python is the recurring programming language.
Learning outcomes
This course aims at giving you insights and knowledge about many of
the central algorithms used in Data Analysis and Machine Learning.
The course is project based and through various numerical projects,
normally three, you will be exposed to fundamental research problems
in these fields, with the aim to reproduce state of the art scientific
results. Both supervised and unsupervised methods will be covered. The
emphasis is on a frequentist approach, although we will try to link it
with a Bayesian approach as well. You will learn to develop and
structure large codes for studying different cases where Machine
Learning is applied to, get acquainted with computing facilities and
learn to handle large scientific projects. A good scientific and
ethical conduct is emphasized throughout the course. More
specifically, after this course you will
Learn about basic data analysis, statistical analysis, Bayesian statistics, Monte Carlo sampling, data optimization and machine learning;
Be capable of extending the acquired knowledge to other systems and cases;
Have an understanding of central algorithms used in data analysis and machine learning;
Understand linear methods for regression and classification, from ordinary least squares, via Lasso and Ridge to Logistic regression;
Learn about neural networks and deep learning methods for supervised and unsupervised learning. Emphasis on feed forward neural networks, convolutional and recurrent neural networks;
Learn about about decision trees, random forests, bagging and boosting methods;
Learn about support vector machines and kernel transformations;
Reduction of data sets, from PCA to clustering;
Autoencoders and Reinforcement Learning;
Work on numerical projects to illustrate the theory. The projects play a central role and you are expected to know modern programming languages like Python or C++ and/or Fortran (Fortran2003 or later) or Julia or other.
Topics covered in this course: Statistical analysis and optimization of data
The course has two central parts
Statistical analysis and optimization of data
Machine learning
These topics will be scattered throughout the course and may not necessarily be taught separately. Rather, we will often take an approach (during the lectures and project/exercise sessions) where say elements from statistical data analysis are mixed with specific Machine Learning algorithms
Statistical analysis and optimization of data.
We plan to cover the following topics:
* Basic concepts, expectation values, variance, covariance, correlation functions and errors;
* Simpler models, binomial distribution, the Poisson distribution, simple and multivariate normal distributions;
* Central elements of Bayesian statistics and modeling;
* Gradient methods for data optimization;
* Monte Carlo methods, Markov chains, Gibbs sampling and Metropolis-Hastings sampling;
* Estimation of errors and resampling techniques such as the cross-validation, blocking, bootstrapping and jackknife methods;
* Principal Component Analysis (PCA) and its mathematical foundation
Topics covered in this course: Machine Learning
The following topics will be covered
* Linear Regression and Logistic Regression;
* Neural networks and deep learning, including convolutional and recurrent neural networks
* Decision trees, Random Forests, Bagging and Boosting
* Support vector machines
* Bayesian linear and logistic regression
* Boltzmann Machines
* Unsupervised learning: dimensionality reduction, from PCA to clustering
Hands-on demonstrations, exercises and projects aim at deepening your understanding of these topics.
Extremely useful tools, strongly recommended
and discussed at the lab sessions.
GIT for version control, and GitHub or GitLab as repositories, highly recommended. This will be discussed during the first exercise session
Anaconda and other Python environments, see intro slides and links to programming resources at https://computationalscienceuio.github.io/RefreshProgrammingSkills/intro.html
Other courses on Data science and Machine Learning at UiO
The link here https://www.mn.uio.no/english/research/about/centre-focus/innovation/data-science/studies/ gives an excellent overview of courses on Machine learning at UiO.
STK2100 Machine learning and statistical methods for prediction and classification.
IN3050/4050 Introduction to Artificial Intelligence and Machine Learning. Introductory course in machine learning and AI with an algorithmic approach.
STK-INF3000/4000 Selected Topics in Data Science. The course provides insight into selected contemporary relevant topics within Data Science.
IN4080 Natural Language Processing. Probabilistic and machine learning techniques applied to natural language processing.
STK-IN4300 Statistical learning methods in Data Science. An advanced introduction to statistical and machine learning. For students with a good mathematics and statistics background.
INF4490 Biologically Inspired Computing. An introduction to self-adapting methods also called artificial intelligence or machine learning.
IN-STK5000 Adaptive Methods for Data-Based Decision Making. Methods for adaptive collection and processing of data based on machine learning techniques.
IN5400/INF5860 Machine Learning for Image Analysis. An introduction to deep learning with particular emphasis on applications within Image analysis, but useful for other application areas too.
TEK5040 Deep learning for autonomous systems. The course addresses advanced algorithms and architectures for deep learning with neural networks. The course provides an introduction to how deep-learning techniques can be used in the construction of key parts of advanced autonomous systems that exist in physical environments and cyber environments.
STK4051 Computational Statistics
STK4021 Applied Bayesian Analysis and Numerical Methods
Introduction
Our emphasis throughout this series of lectures
is on understanding the mathematical aspects of
different algorithms used in the fields of data analysis and machine learning.
However, where possible we will emphasize the
importance of using available software. We start thus with a hands-on
and top-down approach to machine learning. The aim is thus to start with
relevant data or data we have produced
and use these to introduce statistical data analysis
concepts and machine learning algorithms before we delve into the
algorithms themselves. The examples we will use in the beginning, start with simple
polynomials with random noise added. We will use the Python
software package Scikit-Learn and
introduce various machine learning algorithms to make fits of
the data and predictions. We move thereafter to more interesting
cases such as data from say experiments (below we will look at experimental nuclear binding energies as an example).
These are examples where we can easily set up the data and
then use machine learning algorithms included in for example
Scikit-Learn.
These examples will serve us the purpose of getting
started. Furthermore, they allow us to catch more than two birds with
a stone. They will allow us to bring in some programming specific
topics and tools as well as showing the power of various Python
libraries for machine learning and statistical data analysis.
Here, we will mainly focus on two
specific Python packages for Machine Learning, Scikit-Learn and
Tensorflow (see below for links etc). Moreover, the examples we
introduce will serve as inputs to many of our discussions later, as
well as allowing you to set up models and produce your own data and
get started with programming.
What is Machine Learning?
Statistics, data science and machine learning form important fields of
research in modern science. They describe how to learn and make
predictions from data, as well as allowing us to extract important
correlations about physical processes and the underlying laws of motion
in large data sets. The latter, big data sets, appear frequently in
essentially all disciplines, from the traditional Science, Technology,
Mathematics and Engineering fields to Life Science, Law, education
research, the Humanities and the Social Sciences.
It has become more
and more common to see research projects on big data in for example
the Social Sciences where extracting patterns from complicated survey
data is one of many research directions. Having a solid grasp of data
analysis and machine learning is thus becoming central to scientific
computing in many fields, and competences and skills within the fields
of machine learning and scientific computing are nowadays strongly
requested by many potential employers. The latter cannot be
overstated, familiarity with machine learning has almost become a
prerequisite for many of the most exciting employment opportunities,
whether they are in bioinformatics, life science, physics or finance,
in the private or the public sector. This author has had several
students or met students who have been hired recently based on their
skills and competences in scientific computing and data science, often
with marginal knowledge of machine learning.
Machine learning is a subfield of computer science, and is closely
related to computational statistics. It evolved from the study of
pattern recognition in artificial intelligence (AI) research, and has
made contributions to AI tasks like computer vision, natural language
processing and speech recognition. Many of the methods we will study are also
strongly rooted in basic mathematics and physics research.
Ideally, machine learning represents the science of giving computers
the ability to learn without being explicitly programmed. The idea is
that there exist generic algorithms which can be used to find patterns
in a broad class of data sets without having to write code
specifically for each problem. The algorithm will build its own logic
based on the data. You should however always keep in mind that
machines and algorithms are to a large extent developed by humans. The
insights and knowledge we have about a specific system, play a central
role when we develop a specific machine learning algorithm.
Machine learning is an extremely rich field, in spite of its young
age. The increases we have seen during the last three decades in
computational capabilities have been followed by developments of
methods and techniques for analyzing and handling large data sets,
relying heavily on statistics, computer science and mathematics. The
field is rather new and developing rapidly. Popular software packages
written in Python for machine learning like
Scikit-learn,
Tensorflow,
PyTorch and Keras, all
freely available at their respective GitHub sites, encompass
communities of developers in the thousands or more. And the number of
code developers and contributors keeps increasing. Not all the
algorithms and methods can be given a rigorous mathematical
justification, opening up thereby large rooms for experimenting and
trial and error and thereby exciting new developments. However, a
solid command of linear algebra, multivariate theory, probability
theory, statistical data analysis, understanding errors and Monte
Carlo methods are central elements in a proper understanding of many
of algorithms and methods we will discuss.
Types of Machine Learning
The approaches to machine learning are many, but are often split into
two main categories. In supervised learning we know the answer to a
problem, and let the computer deduce the logic behind it. On the other
hand, unsupervised learning is a method for finding patterns and
relationship in data sets without any prior knowledge of the system.
Some authors also operate with a third category, namely
reinforcement learning. This is a paradigm of learning inspired by
behavioral psychology, where learning is achieved by trial-and-error,
solely from rewards and punishment.
Another way to categorize machine learning tasks is to consider the
desired output of a system. Some of the most common tasks are:
Classification: Outputs are divided into two or more classes. The goal is to produce a model that assigns inputs into one of these classes. An example is to identify digits based on pictures of hand-written ones. Classification is typically supervised learning.
Regression: Finding a functional relationship between an input data set and a reference data set. The goal is to construct a function that maps input data to continuous output values.
Clustering: Data are divided into groups with certain common traits, without knowing the different groups beforehand. It is thus a form of unsupervised learning. A minimal Scikit-Learn illustration of all three tasks follows below.
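To make these categories concrete, here is a minimal sketch (with synthetic data invented purely for illustration) which runs a classifier, a regressor and a clustering algorithm from Scikit-Learn, a library we introduce properly later on.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.cluster import KMeans
rng = np.random.default_rng(2021)
x = rng.uniform(size=(100, 1))
# Classification: labels are 0/1 depending on whether x exceeds 0.5 (supervised)
labels = (x[:, 0] > 0.5).astype(int)
clf = LogisticRegression().fit(x, labels)
print(clf.predict([[0.2], [0.8]]))
# Regression: a continuous target with added noise (supervised)
y = 2*x[:, 0] + 0.1*rng.normal(size=100)
reg = LinearRegression().fit(x, y)
print(reg.coef_, reg.intercept_)
# Clustering: group the inputs without ever looking at the labels (unsupervised)
km = KMeans(n_clusters=2, n_init=10).fit(x)
print(km.cluster_centers_)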
Essential elements of ML
The methods we cover have three main topics in common, irrespective of
whether we deal with supervised or unsupervised learning.
* The first ingredient is normally our data set (which can be subdivided into training, validation and test data). Many find the most difficult part of using Machine Learning to be setting up the data in a meaningful way.
The second item is a model, which is normally a function of some parameters. The model reflects our knowledge of the system (or lack thereof). As an example, if we know that our data show a behavior similar to what would be predicted by a polynomial, fitting our data to a polynomial of some degree would then determine our model.
The last ingredient is a so-called cost/loss function (or error or risk function) which allows us to present an estimate of how good our model is in reproducing the data it is trained on.
An optimization/minimization problem
At the heart of basically all Machine Learning algorithms we will encounter so-called minimization or optimization algorithms. A large family of such methods are the so-called gradient methods; a first taste is given in the small sketch below.
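As a foretaste of what is to come, the following minimal sketch (with an invented one-parameter cost function, and learning rate and iteration count chosen arbitrarily) minimizes $C(\beta)=(\beta-3)^2$ by plain gradient descent.
def cost(beta):
    return (beta - 3.0)**2
def gradient(beta):
    # derivative of the cost function with respect to beta
    return 2.0*(beta - 3.0)
beta = 0.0   # starting guess
eta = 0.1    # learning rate
for iteration in range(100):
    beta -= eta*gradient(beta)
print(beta, cost(beta))   # beta approaches the minimum at beta = 3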
A Frequentist approach to data analysis
When you hear phrases like predictions and estimations and
correlations and causations, what do you think of? Maybe you think
of the difference between classifying new data points and generating
new data points.
Or perhaps you consider that correlations represent some kind of symmetric statements like
if $A$ is correlated with $B$, then $B$ is correlated with
$A$. Causation on the other hand is directional, that is if $A$ causes $B$, $B$ does not
necessarily cause $A$.
These concepts are in some sense the difference between machine
learning and statistics. In machine learning and prediction based
tasks, we are often interested in developing algorithms that are
capable of learning patterns from given data in an automated fashion,
and then using these learned patterns to make predictions or
assessments of newly given data. In many cases, our primary concern
is the quality of the predictions or assessments, and we are less
concerned about the underlying patterns that were learned in order
to make these predictions.
In machine learning we normally use a so-called frequentist approach,
where the aim is to make predictions and find correlations. We focus
less on for example extracting a probability distribution function (PDF). The PDF can be
used in turn to make estimations and find causations such as given $A$
what is the likelihood of finding $B$.
What is a good model?
In science and engineering we often end up in situations where we want to infer (or learn) a
quantitative model $M$ for a given set of sample points $\boldsymbol{X} \in [x_1, x_2,\dots x_N]$.
As we will see repeatedly in these lectures, we could try to fit these data points to a model given by a
straight line, or if we wish to be more sophisticated to a more complex
function.
The reason for inferring such a model is that it
serves many useful purposes. On the one hand, the model can reveal information
encoded in the data or underlying mechanisms from which the data were generated. For instance, we could discover important
correlations that lead to interesting physics interpretations.
In addition, it can simplify the representation of the given data set and help
us in making predictions about future data samples.
A first important consideration to keep in mind is that inferring the correct model
for a given data set is an elusive, if not impossible, task. The fundamental difficulty
is that if we are not specific about what we mean by a correct model, there
could easily be many different models that fit the given data set equally well.
What is a good model? Can we define it?
The central question is this: what leads us to say that a model is correct or
optimal for a given data set? To make the model inference problem well posed, i.e.,
to guarantee that there is a unique optimal model for the given data, we need to
impose additional assumptions or restrictions on the class of models considered. To
this end, we should not be looking for just any model that can describe the data.
Instead, we should look for a model $M$ that is the best among a restricted class
of models. In addition, to make the model inference problem computationally
tractable, we need to specify how restricted the class of models needs to be. A
common strategy is to start
with the simplest possible class of models that is just necessary to describe the data
or solve the problem at hand. More precisely, the model class should be rich enough
to contain at least one model that can fit the data to a desired accuracy and yet be
restricted enough that it is relatively simple to find the best model for the given data.
Thus, the most popular strategy is to start from the
simplest class of models and increase the complexity of the models only when the
simpler models become inadequate. For instance, if we work with a regression problem to fit a set of sample points, one
may first try the simplest class of models, namely linear models, followed obviously by more complex models.
How to evaluate which model best fits the data is something we will come back to over and over again in these lectures; a tiny numerical illustration of this strategy follows below.
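To illustrate the strategy of gradually increasing the model complexity, here is a small sketch (with synthetic data made up for this purpose) which fits polynomials of degree one and three to noisy samples of a cubic function using numpy's polyfit, and compares the resulting mean-squared errors.
import numpy as np
rng = np.random.default_rng(42)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0*x - 5.0*x**3 + 0.1*rng.normal(size=x.size)
for degree in (1, 3):
    coeffs = np.polyfit(x, y, degree)     # least-squares polynomial fit
    yfit = np.polyval(coeffs, x)
    mse = np.mean((y - yfit)**2)
    print("degree", degree, "MSE", mse)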
Software and needed installations
We will make extensive use of Python as programming language and its
myriad of available libraries. You will find
Jupyter notebooks invaluable in your work. You can run R
codes in the Jupyter/IPython notebooks, with the immediate benefit of
visualizing your data. You can also use compiled languages like C++,
Rust, Julia, Fortran etc if you prefer. The focus in these lectures will be
on Python.
If you have Python installed (we strongly recommend Python3) and you feel
pretty familiar with installing different packages, we recommend that
you install the following Python packages via pip as
pip install numpy scipy matplotlib ipython scikit-learn mglearn sympy pandas pillow
For Python3, replace pip with pip3.
For OSX users we recommend, after having installed Xcode, to
install brew. Brew allows for a seamless installation of additional
software via for example
brew install python3
For Linux users, with their variety of distributions, like for example the widely popular Ubuntu distribution,
you can use pip as well and simply install Python as
sudo apt-get install python3 (or python for python2.7)
etc etc.
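Once the packages are installed, a quick sanity check is to import the central ones and print their versions; something along the following lines (the exact version numbers will of course differ on your machine) should run without errors.
import sys
import numpy, scipy, matplotlib, sklearn, pandas
print("Python:", sys.version)
print("Numpy:", numpy.__version__)
print("Scipy:", scipy.__version__)
print("Matplotlib:", matplotlib.__version__)
print("Scikit-learn:", sklearn.__version__)
print("Pandas:", pandas.__version__)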
Python installers
If you don't want to perform these operations separately and venture
into the hassle of exploring how to set up dependencies and paths, we
recommend two widely used distributions which set up all relevant
dependencies for Python, namely
Anaconda,
which is an open source
distribution of the Python and R programming languages for large-scale
data processing, predictive analytics, and scientific computing, that
aims to simplify package management and deployment. Package versions
are managed by the package management system conda.
Enthought canopy
is a Python
distribution and analysis environment for scientific and analytic
computing, available for free and under a commercial
license.
Furthermore, Google's Colab is a free Jupyter notebook environment that requires
no setup and runs entirely in the cloud. Try it out!
Useful Python libraries
Here we list several useful Python libraries we strongly recommend (if you use anaconda many of these are already there)
NumPy is a highly popular library for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays
The pandas library provides high-performance, easy-to-use data structures and data analysis tools
Xarray is a Python package that makes working with labelled multi-dimensional arrays simple, efficient, and fun!
Scipy (pronounced “Sigh Pie”) is a Python-based ecosystem of open-source software for mathematics, science, and engineering.
Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.
Autograd can automatically differentiate native Python and Numpy code. It can handle a large subset of Python's features, including loops, ifs, recursion and closures, and it can even take derivatives of derivatives of derivatives
SymPy is a Python library for symbolic mathematics.
scikit-learn has simple and efficient tools for machine learning, data mining and data analysis
TensorFlow is a Python library for fast numerical computing created and released by Google
Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano
And many more such as pytorch, Theano etc
Installing R, C++, cython or Julia
You will also find it convenient to utilize R. We will mainly
use Python during our lectures and in various projects and exercises.
Those of you
already familiar with R should feel free to continue using R, keeping
however an eye on the parallel Python set ups. Similarly, if you are a
Python aficionado, feel free to explore R as well. Jupyter/IPython
notebook allows you to run R codes interactively in your
browser. The software library R is really tailored for statistical data analysis
and allows for an easy usage of the tools and algorithms we will discuss in these
lectures.
To install R with Jupyter notebook
follow the link here
Installing R, C++, cython, Numba etc
For the C++ aficionados, Jupyter/IPython notebook allows you also to
install C++ and run codes written in this language interactively in
the browser. Since we will emphasize writing many of the algorithms
yourself, you can thus opt for either Python or C++ (or Fortran or other compiled languages) as programming
languages.
To add more entropy, cython can also be used when running your
notebooks. It means that Python with the jupyter notebook
setup allows you to integrate widely popular softwares and tools for
scientific computing. Similarly, the
Numba Python package delivers increased performance
capabilities with minimal rewrites of your codes. With its
versatility, including symbolic operations, Python offers a unique
computational environment. Your jupyter notebook can easily be
converted into a nicely rendered PDF file or a Latex file for
further processing. For example, convert to latex as
jupyter nbconvert filename.ipynb --to latex
And to add more versatility, the Python package SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) and is entirely written in Python.
Finally, if you wish to use the light mark-up language
doconce you can convert a standard ascii text file into various HTML
formats, ipython notebooks, latex files, pdf files etc with minimal edits. These lectures were generated using doconce.
Numpy examples and Important Matrix and vector handling packages
There are several central software libraries for linear algebra and eigenvalue problems. Several of the more
popular ones have been wrapped into other software packages, like those from the widely used text Numerical Recipes. The original source codes in many of the available packages are often taken from the widely used
software package LAPACK, which follows two other popular packages
developed in the 1970s, namely EISPACK and LINPACK. We describe them shortly here.
LINPACK: package for linear equations and least square problems.
LAPACK:package for solving symmetric, unsymmetric and generalized eigenvalue problems. From LAPACK's website http://www.netlib.org it is possible to download for free all source codes from this library. Both C/C++ and Fortran versions are available.
BLAS (I, II and III): (Basic Linear Algebra Subprograms) are routines that provide standard building blocks for performing basic vector and matrix operations. Blas I is vector operations, II vector-matrix operations and III matrix-matrix operations. Highly parallelized and efficient codes, all available for download from http://www.netlib.org.
Basic Matrix Features
Matrix properties reminder.
$$
\mathbf{A} =
\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}\qquad
\mathbf{I} =
\begin{bmatrix} 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
The inverse of a matrix is defined by
$$
\mathbf{A}^{-1} \cdot \mathbf{A} = \mathbf{I}
$$
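A quick numpy check of this relation, for an arbitrary invertible matrix chosen purely for illustration, could read
import numpy as np
A = np.array([[2.0, 1.0], [1.0, 3.0]])
Ainv = np.linalg.inv(A)
print(Ainv @ A)                            # close to the identity matrix
print(np.allclose(Ainv @ A, np.eye(2)))    # True up to round-off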
<table class="dotable" border="1">
<thead>
<tr><th align="center"> Relations </th> <th align="center"> Name </th> <th align="center"> matrix elements </th> </tr>
</thead>
<tbody>
<tr><td align="center"> $A=A^{T}$ </td> <td align="center"> symmetric </td> <td align="center"> $a_{ij}=a_{ji}$ </td> </tr>
<tr><td align="center"> $A=\left (A^{T}\right )^{-1}$ </td> <td align="center"> real orthogonal </td> <td align="center"> $\sum_k a_{ik}a_{jk}=\sum_k a_{ki} a_{kj}=\delta_{ij}$ </td> </tr>
<tr><td align="center"> $A=A^{ * }$ </td> <td align="center"> real matrix </td> <td align="center"> $a_{ij}=a_{ij}^{*}$ </td> </tr>
<tr><td align="center"> $A=A^{\dagger}$ </td> <td align="center"> hermitian </td> <td align="center"> $a_{ij}=a_{ji}^{*}$ </td> </tr>
<tr><td align="center"> $A=\left(A^{\dagger}\right )^{-1}$ </td> <td align="center"> unitary </td> <td align="center"> $\sum_k a_{ik}a_{jk}^{*}=\sum_k a_{ki}^{ * } a_{kj}=\delta_{ij}$ </td> </tr>
</tbody>
</table>
Some famous Matrices
Diagonal if $a_{ij}=0$ for $i\ne j$
Upper triangular if $a_{ij}=0$ for $i>j$
Lower triangular if $a_{ij}=0$ for $i<j$
Upper Hessenberg if $a_{ij}=0$ for $i>j+1$
Lower Hessenberg if $a_{ij}=0$ for $i<j-1$
Tridiagonal if $a_{ij}=0$ for $|i -j|>1$
Lower banded with bandwidth $p$: $a_{ij}=0$ for $i>j+p$
Upper banded with bandwidth $p$: $a_{ij}=0$ for $i<j-p$
Banded, block upper triangular, block lower triangular and so on (a small numpy illustration of some of these structures follows below)
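As a small illustration (with arbitrary numbers) of how such structured matrices can be set up with numpy, consider
import numpy as np
n = 5
# tridiagonal matrix: a_ij = 0 for |i-j| > 1
main = 2.0*np.ones(n)
off = -1.0*np.ones(n-1)
T = np.diag(main) + np.diag(off, k=1) + np.diag(off, k=-1)
print(T)
# upper triangular part of a random matrix: a_ij = 0 for i > j
A = np.random.rand(n, n)
print(np.triu(A))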
More Basic Matrix Features
Some Equivalent Statements.
For an $N\times N$ matrix $\mathbf{A}$ the following properties are all equivalent
If the inverse of $\mathbf{A}$ exists, $\mathbf{A}$ is nonsingular.
The equation $\mathbf{Ax}=0$ implies $\mathbf{x}=0$.
The rows of $\mathbf{A}$ form a basis of $R^N$.
The columns of $\mathbf{A}$ form a basis of $R^N$.
$\mathbf{A}$ is a product of elementary matrices.
$0$ is not an eigenvalue of $\mathbf{A}$ (a small numerical check of some of these statements follows below).
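A couple of these statements are easy to check numerically; the following sketch (with a matrix chosen arbitrarily) verifies that the determinant is nonzero, that the inverse exists and that no eigenvalue is zero.
import numpy as np
A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.linalg.det(A))        # nonzero, so A is nonsingular
print(np.linalg.inv(A))        # the inverse exists
print(np.linalg.eigvals(A))    # no eigenvalue equals zero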
Numpy and arrays
Numpy provides an easy way to handle arrays in Python. The standard way to import this library is as
End of explanation
n = 10
x = np.random.normal(size=n)
print(x)
Explanation: Here follows a simple example where we set up an array of ten elements, all determined by random numbers drawn according to the normal distribution,
End of explanation
import numpy as np
x = np.array([1, 2, 3])
print(x)
Explanation: We defined a vector $x$ with $n=10$ elements with its values given by the Normal distribution $N(0,1)$.
Another alternative is to declare a vector as follows
End of explanation
import numpy as np
x = np.log(np.array([4, 7, 8]))
print(x)
Explanation: Here we have defined a vector with three elements, with $x_0=1$, $x_1=2$ and $x_2=3$. Note that both Python and C++
start numbering array elements from $0$ and on. This means that a vector with $n$ elements has a sequence of entities $x_0, x_1, x_2, \dots, x_{n-1}$. We could also let Numpy (recommended) compute the logarithms of a specific array as
End of explanation
import numpy as np
from math import log
x = np.array([4, 7, 8])
for i in range(0, len(x)):
x[i] = log(x[i])
print(x)
Explanation: In the last example we used Numpy's unary function $np.log$. This function is
highly tuned to compute array elements since the code is vectorized
and does not require explicit looping. We normally recommend that you use the
Numpy intrinsic functions instead of the corresponding log function
from Python's math module. The looping over the array elements is done
internally by the np.log function. The alternative, and slower, way to compute the
logarithms of a vector would be to write
End of explanation
import numpy as np
x = np.log(np.array([4, 7, 8], dtype = np.float64))
print(x)
Explanation: We note that our code is much longer already and we need to import the log function from the math module.
The attentive reader will also notice that the output is $[1, 1, 2]$. Numpy automagically interprets our numbers as integers (much like the auto keyword in C++ deduces a type). To change this we could define our array elements to be double precision numbers as
End of explanation
import numpy as np
x = np.log(np.array([4.0, 7.0, 8.0]))
print(x)
Explanation: or simply write them as double precision numbers (Python uses 64 bits as default for floating point type variables), that is
End of explanation
import numpy as np
x = np.log(np.array([4.0, 7.0, 8.0]))
print(x.itemsize)
Explanation: To check the number of bytes (remember that one byte contains eight bits, so a double precision variable occupies eight bytes), you can simply use the itemsize functionality (the array $x$ is actually an object which inherits the functionalities defined in Numpy) as
End of explanation
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
print(A)
Explanation: Matrices in Python
Having defined vectors, we are now ready to try out matrices. We can
define a $3 \times 3 $ real matrix $\boldsymbol{A}$ as (recall that we user
lowercase letters for vectors and uppercase letters for matrices)
End of explanation
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
# print the first column, row-major order and elements start with 0
print(A[:,0])
Explanation: If we use the shape function we would get $(3, 3)$ as output, that is, verifying that our matrix is a $3\times 3$ matrix. We can slice the matrix and print for example the first column (Python organizes matrix elements in row-major order, see below) as
End of explanation
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
# print the second row, row-major order and elements start with 0
print(A[1,:])
Explanation: We can continue this way by printing out other columns or rows. The example here prints out the second row
End of explanation
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to zero
A = np.zeros( (n, n) )
print(A)
Explanation: Numpy contains many other functionalities that allow us to slice, subdivide etc etc arrays. We strongly recommend that you look up the Numpy website for more details. Useful functions when defining a matrix are the np.zeros function which declares a matrix of a given dimension and sets all elements to zero
End of explanation
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to one
A = np.ones( (n, n) )
print(A)
Explanation: or initializing all elements to one
End of explanation
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to random numbers with x \in [0, 1]
A = np.random.rand(n, n)
print(A)
Explanation: or filling the matrix with uniformly distributed random numbers (see the material on random number generators in the statistics part)
End of explanation
# Importing various packages
import numpy as np
n = 100
x = np.random.normal(size=n)
print(np.mean(x))
y = 4+3*x+np.random.normal(size=n)
print(np.mean(y))
z = x**3+np.random.normal(size=n)
print(np.mean(z))
W = np.vstack((x, y, z))
Sigma = np.cov(W)
print(Sigma)
Eigvals, Eigvecs = np.linalg.eig(Sigma)
print(Eigvals)
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import sparse
eye = np.eye(4)
print(eye)
sparse_mtx = sparse.csr_matrix(eye)
print(sparse_mtx)
x = np.linspace(-10,10,100)
y = np.sin(x)
plt.plot(x,y,marker='x')
plt.show()
Explanation: As we will see throughout these lectures, there are several extremely useful functionalities in Numpy.
As an example, consider the discussion of the covariance matrix. Suppose we have defined three vectors
$\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{z}$ with $n$ elements each. The covariance matrix is defined as
$$
\boldsymbol{\Sigma} = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\
\sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\
\sigma_{zx} & \sigma_{zy} & \sigma_{zz}
\end{bmatrix},
$$
where for example
$$
\sigma_{xy} =\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})(y_i- \overline{y}).
$$
The Numpy function np.cov calculates the covariance elements using the factor $1/(n-1)$ instead of $1/n$ since it assumes we do not have the exact mean values.
The following simple function uses the np.vstack function which takes each vector of dimension $1\times n$ and produces a $3\times n$ matrix $\boldsymbol{W}$
$$
\boldsymbol{W} = \begin{bmatrix} x_0 & x_1 & x_2 & \dots & x_{n-2} & x_{n-1} \\
y_0 & y_1 & y_2 & \dots & y_{n-2} & y_{n-1} \\
z_0 & z_1 & z_2 & \dots & z_{n-2} & z_{n-1}
\end{bmatrix},
$$
which in turn is converted into into the $3\times 3$ covariance matrix
$\boldsymbol{\Sigma}$ via the Numpy function np.cov(). We note that we can also calculate
the mean value of each set of samples $\boldsymbol{x}$ etc using the Numpy
function np.mean(x). We can also extract the eigenvalues of the
covariance matrix through the np.linalg.eig() function.
End of explanation
import pandas as pd
from IPython.display import display
data = {'First Name': ["Frodo", "Bilbo", "Aragorn II", "Samwise"],
'Last Name': ["Baggins", "Baggins","Elessar","Gamgee"],
'Place of birth': ["Shire", "Shire", "Eriador", "Shire"],
'Date of Birth T.A.': [2968, 2890, 2931, 2980]
}
data_pandas = pd.DataFrame(data)
display(data_pandas)
Explanation: Meet the Pandas
<!-- dom:FIGURE: [fig/pandas.jpg, width=600 frac=0.8] -->
<!-- begin figure -->
<img src="fig/pandas.jpg" width="600"><p style="font-size: 0.9em"><i>Figure 1: </i></p>
<!-- end figure -->
Another useful Python package is
pandas, which is an open source library
providing high-performance, easy-to-use data structures and data
analysis tools for Python. pandas stands for panel data, a term borrowed from econometrics and is an efficient library for data analysis with an emphasis on tabular data.
pandas has two major classes, the DataFrame class with two-dimensional data objects and tabular data organized in columns and the class Series with a focus on one-dimensional data objects. Both classes allow you to index data easily as we will see in the examples below.
pandas allows you also to perform mathematical operations on the data, spanning from simple reshapings of vectors and matrices to statistical operations.
The following simple example shows how we can, in an easy way make tables of our data. Here we define a data set which includes names, place of birth and date of birth, and displays the data in an easy to read way. We will see repeated use of pandas, in particular in connection with classification of data.
End of explanation
data_pandas = pd.DataFrame(data,index=['Frodo','Bilbo','Aragorn','Sam'])
display(data_pandas)
Explanation: In the above we have imported pandas with the shorthand pd, the latter has become the standard way we import pandas. We make then a list of various variables
and reorganize the aboves lists into a DataFrame and then print out a neat table with specific column labels as Name, place of birth and date of birth.
Displaying these results, we see that the indices are given by the default numbers from zero to three.
pandas is extremely flexible and we can easily change the above indices by defining a new type of indexing as
End of explanation
display(data_pandas.loc['Aragorn'])
Explanation: Thereafter we display the content of the row which begins with the index Aragorn
End of explanation
new_hobbit = {'First Name': ["Peregrin"],
'Last Name': ["Took"],
'Place of birth': ["Shire"],
'Date of Birth T.A.': [2990]
}
data_pandas=data_pandas.append(pd.DataFrame(new_hobbit, index=['Pippin']))
display(data_pandas)
Explanation: We can easily append data to this, for example
End of explanation
import numpy as np
import pandas as pd
from IPython.display import display
np.random.seed(100)
# setting up a 10 x 5 matrix
rows = 10
cols = 5
a = np.random.randn(rows,cols)
df = pd.DataFrame(a)
display(df)
print(df.mean())
print(df.std())
display(df**2)
Explanation: Here are other examples where we use the DataFrame functionality to handle arrays, now with more interesting features for us, namely numbers. We set up a matrix
of dimensionality $10\times 5$ and compute the mean value and standard deviation of each column. Similarly, we can perform mathematical operations like squaring the matrix elements and many other operations.
End of explanation
df.columns = ['First', 'Second', 'Third', 'Fourth', 'Fifth']
df.index = np.arange(10)
display(df)
print(df['Second'].mean() )
print(df.info())
print(df.describe())
from pylab import plt, mpl
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
df.cumsum().plot(lw=2.0, figsize=(10,6))
plt.show()
df.plot.bar(figsize=(10,6), rot=15)
plt.show()
Explanation: Thereafter we can select specific columns only and plot final results
End of explanation
b = np.arange(16).reshape((4,4))
print(b)
df1 = pd.DataFrame(b)
print(df1)
Explanation: We can produce a $4\times 4$ matrix
End of explanation
# Importing various packages
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
x = np.random.rand(100,1)
y = 2*x+np.random.randn(100,1)
linreg = LinearRegression()
linreg.fit(x,y)
xnew = np.array([[0],[1]])
ypredict = linreg.predict(xnew)
plt.plot(xnew, ypredict, "r-")
plt.plot(x, y ,'ro')
plt.axis([0,1.0,0, 5.0])
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title(r'Simple Linear Regression')
plt.show()
Explanation: and many other operations.
The Series class is another important class included in
pandas. You can view it as a specialization of DataFrame but where
we have just a single column of data. It shares many of the same features as DataFrame. As with DataFrame,
most operations are vectorized, achieving thereby a high performance when dealing with computations of arrays, in particular labeled arrays.
As we will see below it leads also to very concise code, close to the mathematical operations we may be interested in.
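A minimal example of the Series class (with values made up for illustration) could be
import pandas as pd
s = pd.Series([5.0, 2.5, 1.0], index=['a', 'b', 'c'])
print(s)
print(s.mean(), s.std())
print(s['b'])        # indexing by label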
For multidimensional arrays, we recommend strongly xarray. xarray has much of the same flexibility as pandas, but allows for the extension to higher dimensions than two. We will see examples later of the usage of both pandas and xarray.
Friday August 27
Video of Lecture August 27, 2021: https://www.uio.no/studier/emner/matnat/fys/FYS-STK4155/h21/forelesningsvideoer/LectureThursdayAugust27.mp4?vrtx=view-as-webpage
Video of Lecture from fall 2020 and Handwritten notes
Simple linear regression model using scikit-learn
We start with perhaps our simplest possible example, using Scikit-Learn to perform linear regression analysis on a data set produced by us.
What follows is a simple Python code where we have defined a function
$y$ in terms of the variable $x$. Both are defined as vectors with $100$ entries.
The numbers in the vector $\boldsymbol{x}$ are given
by random numbers generated with a uniform distribution with entries
$x_i \in [0,1]$ (more about probability distribution functions
later). These values are then used to define a function $y(x)$
(tabulated again as a vector) with a linear dependence on $x$ plus a
random noise added via the normal distribution.
The Numpy functions are imported using the import numpy as np
statement and the random number generator for the uniform distribution
is called using the function np.random.rand(), where we specify
that we want $100$ random variables. Using Numpy we define
automatically an array with the specified number of elements, $100$ in
our case. With the Numpy function randn() we can compute random
numbers with the normal distribution (mean value $\mu$ equal to zero and
variance $\sigma^2$ set to one) and produce the values of $y$ assuming a linear
dependence as function of $x$
$$
y = 2x+N(0,1),
$$
where $N(0,1)$ represents random numbers generated by the normal
distribution. From Scikit-Learn we import then the
LinearRegression functionality and make a prediction $\tilde{y} =
\alpha + \beta x$ using the function fit(x,y). We call the set of
data $(\boldsymbol{x},\boldsymbol{y})$ our training data. The Python package
scikit-learn has also a functionality which extracts the above
fitting parameters $\alpha$ and $\beta$ (see below). Later we will
distinguish between training data and test data.
For plotting we use the Python package
matplotlib which produces publication
quality figures. Feel free to explore the extensive
gallery of examples. In
this example we plot our original values of $x$ and $y$ as well as the
prediction ypredict ($\tilde{y}$), which attempts at fitting our
data with a straight line.
The Python code follows here.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
x = np.random.rand(100,1)
y = 5*x+0.01*np.random.randn(100,1)
linreg = LinearRegression()
linreg.fit(x,y)
ypredict = linreg.predict(x)
plt.plot(x, np.abs(ypredict-y)/abs(y), "ro")
plt.axis([0,1.0,0.0, 0.5])
plt.xlabel(r'$x$')
plt.ylabel(r'$\epsilon_{\mathrm{relative}}$')
plt.title(r'Relative error')
plt.show()
Explanation: This example serves several aims. It allows us to demonstrate several
aspects of data analysis and later machine learning algorithms. The
immediate visualization shows that our linear fit is not
impressive. It goes through the data points, but there are many
outliers which are not reproduced by our linear regression. We could
now play around with this small program and change for example the
factor in front of $x$ and the normal distribution. Try to change the
function $y$ to
$$
y = 10x+0.01 \times N(0,1),
$$
where $x$ is defined as before. Does the fit look better? Indeed, by
reducing the role of the noise given by the normal distribution we see immediately that
our linear prediction seemingly reproduces the training
set better. However, this testing 'by the eye' is obviously not satisfactory in the
long run. Here we have only defined the training data and our model, and
have not discussed a more rigorous approach to the cost function.
We need more rigorous criteria for deciding whether we have succeeded or
not in modeling our training data. You will be surprised to see that
many scientists seldom venture beyond this 'by the eye' approach. A
standard approach for the cost function is the so-called $\chi^2$
function (a variant of the mean-squared error (MSE))
$$
\chi^2 = \frac{1}{n}
\sum_{i=0}^{n-1}\frac{(y_i-\tilde{y}_i)^2}{\sigma_i^2},
$$
where $\sigma_i^2$ is the variance (to be defined later) of the entry
$y_i$. We may not know the explicit value of $\sigma_i^2$, it serves
however the aim of scaling the equations and make the cost function
dimensionless.
Minimizing the cost function is a central aspect of
our discussions to come. Finding its minima as function of the model
parameters ($\alpha$ and $\beta$ in our case) will be a recurring
theme in these series of lectures. Essentially all machine learning
algorithms we will discuss center around the minimization of the
chosen cost function. This depends in turn on our specific
model for describing the data, a typical situation in supervised
learning. Automatizing the search for the minima of the cost function is a
central ingredient in all algorithms. Typical methods which are
employed are various variants of gradient methods. These will be
discussed in more detail later. Again, you'll be surprised to hear that
many practitioners minimize the above function 'by the eye', popularly dubbed
'chi by the eye'. That is, change a parameter and see (visually and numerically) that
the $\chi^2$ function becomes smaller.
There are many ways to define the cost function. A simpler approach is to look at the relative difference between the training data and the predicted data, that is we define
the relative error (why would we prefer the MSE instead of the relative error?) as
$$
\epsilon_{\mathrm{relative}}= \frac{\vert \boldsymbol{y} -\boldsymbol{\tilde{y}}\vert}{\vert \boldsymbol{y}\vert}.
$$
The squared cost function results in an arithmetic mean-unbiased
estimator, and the absolute-value cost function results in a
median-unbiased estimator (in the one-dimensional case, and a
geometric median-unbiased estimator for the multi-dimensional
case). The squared cost function has the disadvantage that it has the tendency
to be dominated by outliers.
We can modify easily the above Python code and plot the relative error instead
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score, mean_squared_log_error, mean_absolute_error
x = np.random.rand(100,1)
y = 2.0+ 5*x+0.5*np.random.randn(100,1)
linreg = LinearRegression()
linreg.fit(x,y)
ypredict = linreg.predict(x)
print('The intercept alpha: \n', linreg.intercept_)
print('Coefficient beta : \n', linreg.coef_)
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(y, ypredict))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(y, ypredict))
# Mean squared log error
print('Mean squared log error: %.2f' % mean_squared_log_error(y, ypredict) )
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(y, ypredict))
plt.plot(x, ypredict, "r-")
plt.plot(x, y ,'ro')
plt.axis([0.0,1.0,1.5, 7.0])
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title(r'Linear Regression fit ')
plt.show()
Explanation: Depending on the parameter in front of the normal distribution, we may
have a small or larger relative error. Try to play around with
different training data sets and study (graphically) the value of the
relative error.
As mentioned above, Scikit-Learn has an impressive functionality.
We can for example extract the values of $\alpha$ and $\beta$ and
their error estimates, or the variance and standard deviation and many
other properties from the statistical data analysis.
Here we show an
example of the functionality of Scikit-Learn.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import random
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression
x=np.linspace(0.02,0.98,200)
noise = np.asarray(random.sample((range(200)),200))
y=x**3*noise
yn=x**3*100
poly3 = PolynomialFeatures(degree=3)
X = poly3.fit_transform(x[:,np.newaxis])
clf3 = LinearRegression()
clf3.fit(X,y)
Xplot=poly3.fit_transform(x[:,np.newaxis])
poly3_plot=plt.plot(x, clf3.predict(Xplot), label='Cubic Fit')
plt.plot(x,yn, color='red', label="True Cubic")
plt.scatter(x, y, label='Data', color='orange', s=15)
plt.legend()
plt.show()
def error(a):
for i in y:
err=(y-yn)/yn
return abs(np.sum(err))/len(err)
print (error(y))
Explanation: The attribute coef_ gives us the parameter $\beta$ of our fit while intercept_ yields
$\alpha$. Depending on the constant in front of the normal distribution, we get values near or far from $\alpha =2$ and $\beta =5$. Try to play around with different parameters in front of the normal distribution. The function mean_squared_error gives us the mean square error, a risk metric corresponding to the expected value of the squared (quadratic) error or loss defined as
$$
MSE(\boldsymbol{y},\boldsymbol{\tilde{y}}) = \frac{1}{n}
\sum_{i=0}^{n-1}(y_i-\tilde{y}_i)^2,
$$
The smaller the value, the better the fit. Ideally we would like to
have an MSE equal zero. The attentive reader has probably recognized
this function as being similar to the $\chi^2$ function defined above.
The r2_score function computes $R^2$, the coefficient of
determination. It provides a measure of how well future samples are
likely to be predicted by the model. Best possible score is 1.0 and it
can be negative (because the model can be arbitrarily worse). A
constant model that always predicts the expected value of $\boldsymbol{y}$,
disregarding the input features, would get a $R^2$ score of $0.0$.
If $\tilde{y}_i$ is the predicted value of the $i$-th sample and $y_i$ is the corresponding true value, then the score $R^2$ is defined as
$$
R^2(\boldsymbol{y}, \tilde{\boldsymbol{y}}) = 1 - \frac{\sum_{i=0}^{n - 1} (y_i - \tilde{y}_i)^2}{\sum_{i=0}^{n - 1} (y_i - \bar{y})^2},
$$
where we have defined the mean value of $\boldsymbol{y}$ as
$$
\bar{y} = \frac{1}{n} \sum_{i=0}^{n - 1} y_i.
$$
Another quantity that we will meet again in our discussions of regression analysis is
the mean absolute error (MAE), a risk metric corresponding to the expected value of the absolute error loss or what we call the $l1$-norm loss. In our discussion above we presented the relative error.
The MAE is defined as follows
$$
\text{MAE}(\boldsymbol{y}, \boldsymbol{\tilde{y}}) = \frac{1}{n} \sum_{i=0}^{n-1} \left| y_i - \tilde{y}_i \right|.
$$
We also present the
mean squared logarithmic (quadratic) error
$$
\text{MSLE}(\boldsymbol{y}, \boldsymbol{\tilde{y}}) = \frac{1}{n} \sum_{i=0}^{n - 1} (\log_e (1 + y_i) - \log_e (1 + \tilde{y}_i) )^2,
$$
where $\log_e (x)$ stands for the natural logarithm of $x$. This error
estimate is best to use when targets have exponential growth, such
as population counts, average sales of a commodity over a span of
years etc.
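For completeness, these error measures are straightforward to implement ourselves with numpy; the following small sketch (on arbitrary example vectors) should reproduce what the corresponding Scikit-Learn functions return.
import numpy as np
y = np.array([3.0, 0.5, 2.0, 7.0])          # "true" values, invented for illustration
ytilde = np.array([2.5, 0.0, 2.0, 8.0])     # "predicted" values
mse = np.mean((y - ytilde)**2)
mae = np.mean(np.abs(y - ytilde))
r2 = 1.0 - np.sum((y - ytilde)**2)/np.sum((y - np.mean(y))**2)
msle = np.mean((np.log1p(y) - np.log1p(ytilde))**2)
print(mse, mae, r2, msle)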
Finally, another cost function is the Huber cost function used in robust regression.
The rationale behind this possible cost function is its reduced
sensitivity to outliers in the data set. In our discussions on
dimensionality reduction and normalization of data we will meet other
ways of dealing with outliers.
The Huber cost function is defined as
$$
H_{\delta}(\boldsymbol{a})=\left\{\begin{array}{cc}\frac{1}{2} \boldsymbol{a}^{2} & \text{for } |\boldsymbol{a}|\leq \delta, \\ \delta (|\boldsymbol{a}|-\frac{1}{2}\delta ), & \text{otherwise}.\end{array}\right.
$$
Here $\boldsymbol{a}=\boldsymbol{y} - \boldsymbol{\tilde{y}}$.
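A direct numpy implementation of the Huber function could look like the following sketch (the value of $\delta$ and the residuals are arbitrary choices made for illustration).
import numpy as np
def huber(a, delta=1.0):
    # quadratic for small residuals, linear for large ones
    return np.where(np.abs(a) <= delta,
                    0.5*a**2,
                    delta*(np.abs(a) - 0.5*delta))
residuals = np.array([-3.0, -0.5, 0.0, 0.2, 4.0])
print(huber(residuals))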
We will discuss in more detail these and other functions in the
various lectures. We conclude this part with another example. Instead
of a linear $x$-dependence we study now a cubic polynomial and use the
polynomial regression analysis tools of scikit-learn.
End of explanation
# Common imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn.linear_model as skl
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
import os
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("MassEval2016.dat"),'r')
Explanation: To our real data: nuclear binding energies. Brief reminder on masses and binding energies
Let us now dive into nuclear physics and remind ourselves briefly about some basic features about binding
energies. A basic quantity which can be measured for the ground
states of nuclei is the atomic mass $M(N, Z)$ of the neutral atom with
atomic mass number $A$ and charge $Z$. The number of neutrons is $N$. There are indeed several sophisticated experiments worldwide which allow us to measure this quantity to high precision (parts per million even).
Atomic masses are usually tabulated in terms of the mass excess defined by
$$
\Delta M(N, Z) = M(N, Z) - uA,
$$
where $u$ is the Atomic Mass Unit
$$
u = M(^{12}\mathrm{C})/12 = 931.4940954(57) \hspace{0.1cm} \mathrm{MeV}/c^2.
$$
The nucleon masses are
$$
m_p = 1.00727646693(9)u,
$$
and
$$
m_n = 939.56536(8)\hspace{0.1cm} \mathrm{MeV}/c^2 = 1.0086649156(6)u.
$$
In the 2016 mass evaluation by W.J. Huang, G. Audi, M. Wang, F.G. Kondev, S. Naimi and X. Xu
there are data on masses and decays of 3437 nuclei.
The nuclear binding energy is defined as the energy required to break
up a given nucleus into its constituent parts of $N$ neutrons and $Z$
protons. In terms of the atomic masses $M(N, Z)$ the binding energy is
defined by
$$
BE(N, Z) = ZM_H c^2 + Nm_n c^2 - M(N, Z)c^2 ,
$$
where $M_H$ is the mass of the hydrogen atom and $m_n$ is the mass of the neutron.
In terms of the mass excess the binding energy is given by
$$
BE(N, Z) = Z\Delta_H c^2 + N\Delta_n c^2 -\Delta(N, Z)c^2 ,
$$
where $\Delta_H c^2 = 7.2890$ MeV and $\Delta_n c^2 = 8.0713$ MeV.
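As a small numerical sanity check of this relation, we can evaluate it for the deuteron ($N=Z=1$), using the approximate mass excess $\Delta(^2\mathrm{H})c^2 \approx 13.136$ MeV; the result should be close to the well-known deuteron binding energy of about $2.22$ MeV.
# Binding energy of the deuteron from mass excesses (all values in MeV, approximate)
DeltaH = 7.2890     # hydrogen atom
Deltan = 8.0713     # neutron
Deltad = 13.136     # deuteron mass excess (approximate value)
Z, N = 1, 1
BE = Z*DeltaH + N*Deltan - Deltad
print(BE)           # roughly 2.22 MeV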
A popular and physically intuitive model which can be used to parametrize
the experimental binding energies as function of $A$, is the so-called
liquid drop model. The ansatz is based on the following expression
$$
BE(N,Z) = a_1A-a_2A^{2/3}-a_3\frac{Z^2}{A^{1/3}}-a_4\frac{(N-Z)^2}{A},
$$
where $A$ stands for the number of nucleons and the $a_i$s are parameters which are determined by a fit
to the experimental data.
To arrive at the above expression we have made the following assumptions:
There is a volume term $a_1A$ proportional with the number of nucleons (the energy is also an extensive quantity). When an assembly of nucleons of the same size is packed together into the smallest volume, each interior nucleon has a certain number of other nucleons in contact with it. This contribution is proportional to the volume.
There is a surface energy term $a_2A^{2/3}$. The assumption here is that a nucleon at the surface of a nucleus interacts with fewer other nucleons than one in the interior of the nucleus and hence its binding energy is less. This surface energy term takes that into account and is therefore negative and is proportional to the surface area.
There is a Coulomb energy term $a_3\frac{Z^2}{A^{1/3}}$. The electric repulsion between each pair of protons in a nucleus yields less binding.
There is an asymmetry term $a_4\frac{(N-Z)^2}{A}$. This term is associated with the Pauli exclusion principle and reflects the fact that the proton-neutron interaction is more attractive on the average than the neutron-neutron and proton-proton interactions.
We could also add a so-called pairing term, which is a correction term that
arises from the tendency of proton pairs and neutron pairs to
occur. An even number of particles is more stable than an odd number.
Organizing our data
Let us start with reading and organizing our data.
We start with the compilation of masses and binding energies from 2016.
After having downloaded this file to our own computer, we are now ready to read the file and start structuring our data.
We start with preparing folders for storing our calculations and the data file over masses and binding energies. We import also various modules that we will find useful in order to present various Machine Learning methods. Here we focus mainly on the functionality of scikit-learn.
End of explanation
from pylab import plt, mpl
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
def MakePlot(x,y, styles, labels, axlabels):
plt.figure(figsize=(10,6))
for i in range(len(x)):
plt.plot(x[i], y[i], styles[i], label = labels[i])
plt.xlabel(axlabels[0])
plt.ylabel(axlabels[1])
plt.legend(loc=0)
Explanation: Before we proceed, we define also a function for making our plots. You can obviously avoid this and simply set up various matplotlib commands every time you need them. You may however find it convenient to collect all such commands in one function and simply call this function.
End of explanation
This is taken from the data file of the mass 2016 evaluation.
All files are 3436 lines long with 124 characters per line.
Headers are 39 lines long.
col 1 : Fortran character control: 1 = page feed 0 = line feed
format : a1,i3,i5,i5,i5,1x,a3,a4,1x,f13.5,f11.5,f11.3,f9.3,1x,a2,f11.3,f9.3,1x,i3,1x,f12.5,f11.5
These formats are reflected in the pandas widths variable below, see the statement
widths=(1,3,5,5,5,1,3,4,1,13,11,11,9,1,2,11,9,1,3,1,12,11,1),
Pandas has also a variable header, with length 39 in this case.
Explanation: Our next step is to read the data on experimental binding energies and
reorganize them as functions of the mass number $A$, the number of
protons $Z$ and neutrons $N$ using pandas. Before we do this it is
always useful (unless you have a binary file or other types of compressed
data) to actually open the file and simply take a look at it!
In particular, the program that outputs the final nuclear masses is written in Fortran with a specific format. It means that we need to figure out the format and which columns contain the data we are interested in. Pandas comes with a function that reads formatted output. After having admired the file, we are now ready to start massaging it with pandas. The file begins with some basic format information.
End of explanation
# Read the experimental data with Pandas
Masses = pd.read_fwf(infile, usecols=(2,3,4,6,11),
names=('N', 'Z', 'A', 'Element', 'Ebinding'),
widths=(1,3,5,5,5,1,3,4,1,13,11,11,9,1,2,11,9,1,3,1,12,11,1),
header=39,
index_col=False)
# Extrapolated values are indicated by '#' in place of the decimal place, so
# the Ebinding column won't be numeric. Coerce to float and drop these entries.
Masses['Ebinding'] = pd.to_numeric(Masses['Ebinding'], errors='coerce')
Masses = Masses.dropna()
# Convert from keV to MeV.
Masses['Ebinding'] /= 1000
# Group the DataFrame by nucleon number, A.
Masses = Masses.groupby('A')
# Find the rows of the grouped DataFrame with the maximum binding energy.
Masses = Masses.apply(lambda t: t[t.Ebinding==t.Ebinding.max()])
Explanation: The data we are interested in are in columns 2, 3, 4 and 11, giving us
the number of neutrons, protons, mass numbers and binding energies,
respectively. We add also for the sake of completeness the element name. The data are in fixed-width formatted lines and we will
convert them into the pandas DataFrame structure.
End of explanation
A = Masses['A']
Z = Masses['Z']
N = Masses['N']
Element = Masses['Element']
Energies = Masses['Ebinding']
print(Masses)
Explanation: We have now read in the data, grouped them according to the variables we are interested in.
We see how easy it is to reorganize the data using pandas. If we
were to do these operations in C/C++ or Fortran, we would have had to
write various functions/subroutines which perform the above
reorganizations for us. Having reorganized the data, we can now start
to make some simple fits using both the functionalities in numpy and
Scikit-Learn afterwards.
Now we define five variables which contain
the number of nucleons $A$, the number of protons $Z$ and the number of neutrons $N$, the element name and finally the energies themselves.
End of explanation
# Now we set up the design matrix X
X = np.zeros((len(A),5))
X[:,0] = 1
X[:,1] = A
X[:,2] = A**(2.0/3.0)
X[:,3] = A**(-1.0/3.0)
X[:,4] = A**(-1.0)
Explanation: The next step, and we will define this mathematically later, is to set up the so-called design matrix. We will throughout call this matrix $\boldsymbol{X}$.
It has dimensionality $n\times p$, where $n$ is the number of data points and $p$ is the number of so-called predictors (the features). In our case these are given by the number of terms in powers of $A$ we wish to include in the fit.
End of explanation
clf = skl.LinearRegression().fit(X, Energies)
fity = clf.predict(X)
Explanation: With Scikit-Learn we are now ready to use linear regression and fit our data.
End of explanation
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Energies, fity))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(Energies, fity))
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(Energies, fity))
print(clf.coef_, clf.intercept_)
Masses['Eapprox'] = fity
# Generate a plot comparing the experimental with the fitted values values.
fig, ax = plt.subplots()
ax.set_xlabel(r'$A = N + Z$')
ax.set_ylabel(r'$E_\mathrm{bind}\,/\mathrm{MeV}$')
ax.plot(Masses['A'], Masses['Ebinding'], alpha=0.7, lw=2,
label='Ame2016')
ax.plot(Masses['A'], Masses['Eapprox'], alpha=0.7, lw=2, c='m',
label='Fit')
ax.legend()
save_fig("Masses2016")
plt.show()
Explanation: Pretty simple!
Now we can print measures of how our fit is doing, the coefficients from the fits and plot the final fit together with our data.
End of explanation
#Decision Tree Regression
from sklearn.tree import DecisionTreeRegressor
regr_1=DecisionTreeRegressor(max_depth=5)
regr_2=DecisionTreeRegressor(max_depth=7)
regr_3=DecisionTreeRegressor(max_depth=11)
regr_1.fit(X, Energies)
regr_2.fit(X, Energies)
regr_3.fit(X, Energies)
y_1 = regr_1.predict(X)
y_2 = regr_2.predict(X)
y_3=regr_3.predict(X)
Masses['Eapprox'] = y_3
# Plot the results
plt.figure()
plt.plot(A, Energies, color="blue", label="Data", linewidth=2)
plt.plot(A, y_1, color="red", label="max_depth=5", linewidth=2)
plt.plot(A, y_2, color="green", label="max_depth=7", linewidth=2)
plt.plot(A, y_3, color="m", label="max_depth=11", linewidth=2)
plt.xlabel("$A$")
plt.ylabel("$E$[MeV]")
plt.title("Decision Tree Regression")
plt.legend()
save_fig("Masses2016Trees")
plt.show()
print(Masses)
print(np.mean( (Energies-y_1)**2))
Explanation: Seeing the wood for the trees
As a teaser, let us now see how we can do this with decision trees using scikit-learn. Later we will switch to so-called random forests!
End of explanation
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import accuracy_score
import seaborn as sns
X_train = X
Y_train = Energies
n_hidden_neurons = 100
epochs = 100
# store models for later use
eta_vals = np.logspace(-5, 1, 7)
lmbd_vals = np.logspace(-5, 1, 7)
# store the models for later use
DNN_scikit = np.zeros((len(eta_vals), len(lmbd_vals)), dtype=object)
train_accuracy = np.zeros((len(eta_vals), len(lmbd_vals)))
sns.set()
for i, eta in enumerate(eta_vals):
for j, lmbd in enumerate(lmbd_vals):
dnn = MLPRegressor(hidden_layer_sizes=(n_hidden_neurons), activation='logistic',
alpha=lmbd, learning_rate_init=eta, max_iter=epochs)
dnn.fit(X_train, Y_train)
DNN_scikit[i][j] = dnn
train_accuracy[i][j] = dnn.score(X_train, Y_train)
fig, ax = plt.subplots(figsize = (10, 10))
sns.heatmap(train_accuracy, annot=True, ax=ax, cmap="viridis")
ax.set_title("Training $R^2$")
ax.set_ylabel("$\eta$")
ax.set_xlabel("$\lambda$")
plt.show()
Explanation: And what about using neural networks?
The seaborn package allows us to visualize data in an efficient way. Note that we use scikit-learn's multi-layer perceptron (or feed forward neural network)
functionality.
End of explanation
# Common imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display
import os
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("MassEval2016.dat"),'r')
# Read the experimental data with Pandas
Masses = pd.read_fwf(infile, usecols=(2,3,4,6,11),
names=('N', 'Z', 'A', 'Element', 'Ebinding'),
widths=(1,3,5,5,5,1,3,4,1,13,11,11,9,1,2,11,9,1,3,1,12,11,1),
header=39,
index_col=False)
# Extrapolated values are indicated by '#' in place of the decimal place, so
# the Ebinding column won't be numeric. Coerce to float and drop these entries.
Masses['Ebinding'] = pd.to_numeric(Masses['Ebinding'], errors='coerce')
Masses = Masses.dropna()
# Convert from keV to MeV.
Masses['Ebinding'] /= 1000
# Group the DataFrame by nucleon number, A.
Masses = Masses.groupby('A')
# Find the rows of the grouped DataFrame with the maximum binding energy.
Masses = Masses.apply(lambda t: t[t.Ebinding==t.Ebinding.max()])
A = Masses['A']
Z = Masses['Z']
N = Masses['N']
Element = Masses['Element']
Energies = Masses['Ebinding']
# Now we set up the design matrix X
X = np.zeros((len(A),5))
X[:,0] = 1
X[:,1] = A
X[:,2] = A**(2.0/3.0)
X[:,3] = A**(-1.0/3.0)
X[:,4] = A**(-1.0)
# Then nice printout using pandas
DesignMatrix = pd.DataFrame(X)
DesignMatrix.index = A
DesignMatrix.columns = ['1', 'A', 'A^(2/3)', 'A^(-1/3)', '1/A']
display(DesignMatrix)
Explanation: A first summary
The aim behind these introductory words was to present to you various
Python libraries and their functionalities, in particular libraries like
numpy, pandas, xarray and matplotlib, and others that make our life much easier
in handling various data sets and visualizing data.
Furthermore,
Scikit-Learn allows us, with a few lines of code, to implement popular
Machine Learning algorithms for supervised learning. Later we will meet TensorFlow, a powerful library for deep learning.
Now it is time to dive more into the details of various methods. We will start with linear regression and try to take a deeper look at what it entails.
Why Linear Regression (aka Ordinary Least Squares and family)
Fitting a continuous function with linear parameterization in terms of the parameters $\boldsymbol{\beta}$.
* Method of choice for fitting a continuous function!
* Gives an excellent introduction to central Machine Learning features with understandable pedagogical links to other methods like Neural Networks, Support Vector Machines etc
* Analytical expression for the fitting parameters $\boldsymbol{\beta}$
* Analytical expressions for statistical properties like mean values, variances, confidence intervals and more
* Analytical relation with probabilistic interpretations
* Easy to introduce basic concepts like bias-variance tradeoff, cross-validation, resampling and regularization techniques and many other ML topics
* Easy to code! And links well with classification problems and logistic regression and neural networks
* Allows for easy hands-on understanding of gradient descent methods
* and many more features
For more discussions of Ridge and Lasso regression, Wessel van Wieringen's article is highly recommended.
Similarly, Mehta et al's article is also recommended.
Regression analysis, overarching aims
Regression modeling deals with the description of the sampling distribution of a given random variable $y$ and how it varies as function of another variable or a set of such variables $\boldsymbol{x} =[x_0, x_1,\dots, x_{n-1}]^T$.
The first variable is called the dependent, the outcome or the response variable while the set of variables $\boldsymbol{x}$ is called the independent variable, or the predictor variable or the explanatory variable.
A regression model aims at finding a likelihood function $p(\boldsymbol{y}\vert \boldsymbol{x})$, that is the conditional distribution for $\boldsymbol{y}$ with a given $\boldsymbol{x}$. The estimation of $p(\boldsymbol{y}\vert \boldsymbol{x})$ is made using a data set with
* $n$ cases $i = 0, 1, 2, \dots, n-1$
* Response (target, dependent or outcome) variable $y_i$ with $i = 0, 1, 2, \dots, n-1$
* $p$ so-called explanatory (independent or predictor) variables $\boldsymbol{x}_i=[x_{i0}, x_{i1}, \dots, x_{i,p-1}]$ with $i = 0, 1, 2, \dots, n-1$ and explanatory variables running from $0$ to $p-1$. See below for more explicit examples.
The goal of the regression analysis is to extract/exploit the relationship between $\boldsymbol{y}$ and $\boldsymbol{x}$ in order to infer causal dependencies, approximations to the likelihood functions, functional relationships and to make predictions, make fits and many other things.
Regression analysis, overarching aims II
Consider an experiment in which $p$ characteristics of $n$ samples are
measured. The data from this experiment, for various explanatory variables $p$ are normally represented by a matrix
$\mathbf{X}$.
The matrix $\mathbf{X}$ is called the design
matrix. Additional information of the samples is available in the
form of $\boldsymbol{y}$ (also as above). The variable $\boldsymbol{y}$ is
generally referred to as the response variable. The aim of
regression analysis is to explain $\boldsymbol{y}$ in terms of
$\boldsymbol{X}$ through a functional relationship like $y_i =
f(\mathbf{X}_{i,\ast})$. When no prior knowledge on the form of
$f(\cdot)$ is available, it is common to assume a linear relationship
between $\boldsymbol{X}$ and $\boldsymbol{y}$. This assumption gives rise to
the linear regression model where $\boldsymbol{\beta} = [\beta_0, \ldots,
\beta_{p-1}]^{T}$ are the regression parameters.
Linear regression gives us a set of analytical equations for the parameters $\beta_j$.
Examples
In order to understand the relation among the predictors $p$, the set of data $n$ and the target (outcome, output etc) $\boldsymbol{y}$,
consider the model we discussed for describing nuclear binding energies.
There we assumed that we could parametrize the data using a polynomial approximation based on the liquid drop model.
Assuming
$$
BE(A) = a_0+a_1A+a_2A^{2/3}+a_3A^{-1/3}+a_4A^{-1},
$$
we have five predictors, that is the intercept, the $A$ dependent term, the $A^{2/3}$ term and the $A^{-1/3}$ and $A^{-1}$ terms.
This gives $p=0,1,2,3,4$. Furthermore we have $n$ entries for each predictor. It means that our design matrix is a
$p\times n$ matrix $\boldsymbol{X}$.
Here the predictors are based on a model we have made. A popular data set which is widely encountered in ML applications is the
so-called credit card default data from Taiwan. The data set contains data on $n=30000$ credit card holders with predictors like gender, marital status, age, profession, education, etc. In total there are $24$ such predictors or attributes leading to a design matrix of dimensionality $24 \times 30000$. This is however a classification problem and we will come back to it when we discuss Logistic Regression.
General linear models
Before we proceed let us study a case from linear algebra where we aim at fitting a set of data $\boldsymbol{y}=[y_0,y_1,\dots,y_{n-1}]$. We could think of these data as a result of an experiment or a complicated numerical experiment. These data are functions of a series of variables $\boldsymbol{x}=[x_0,x_1,\dots,x_{n-1}]$, that is $y_i = y(x_i)$ with $i=0,1,2,\dots,n-1$. The variables $x_i$ could represent physical quantities like time, temperature, position etc. We assume that $y(x)$ is a smooth function.
Since obtaining these data points may not be trivial, we want to use these data to fit a function which can allow us to make predictions for values of $y$ which are not in the present set. The perhaps simplest approach is to assume we can parametrize our function in terms of a polynomial of degree $n-1$ with $n$ points, that is
$$
y=y(x) \rightarrow y(x_i)=\tilde{y}_i+\epsilon_i=\sum_{j=0}^{n-1} \beta_j x_i^j+\epsilon_i,
$$
where $\epsilon_i$ is the error in our approximation.
Rewriting the fitting procedure as a linear algebra problem
For every set of values $y_i,x_i$ we have thus the corresponding set of equations
$$
\begin{align}
y_0&=\beta_0+\beta_1x_0^1+\beta_2x_0^2+\dots+\beta_{n-1}x_0^{n-1}+\epsilon_0\
y_1&=\beta_0+\beta_1x_1^1+\beta_2x_1^2+\dots+\beta_{n-1}x_1^{n-1}+\epsilon_1\
y_2&=\beta_0+\beta_1x_2^1+\beta_2x_2^2+\dots+\beta_{n-1}x_2^{n-1}+\epsilon_2\
\dots & \dots \
y_{n-1}&=\beta_0+\beta_1x_{n-1}^1+\beta_2x_{n-1}^2+\dots+\beta_{n-1}x_{n-1}^{n-1}+\epsilon_{n-1}.\
\end{align}
$$
Rewriting the fitting procedure as a linear algebra problem, more details
Defining the vectors
$$
\boldsymbol{y} = [y_0,y_1, y_2,\dots, y_{n-1}]^T,
$$
and
$$
\boldsymbol{\beta} = [\beta_0,\beta_1, \beta_2,\dots, \beta_{n-1}]^T,
$$
and
$$
\boldsymbol{\epsilon} = [\epsilon_0,\epsilon_1, \epsilon_2,\dots, \epsilon_{n-1}]^T,
$$
and the design matrix
$$
\boldsymbol{X}=
\begin{bmatrix}
1& x_{0}^1 &x_{0}^2& \dots & \dots &x_{0}^{n-1}\
1& x_{1}^1 &x_{1}^2& \dots & \dots &x_{1}^{n-1}\
1& x_{2}^1 &x_{2}^2& \dots & \dots &x_{2}^{n-1}\
\dots& \dots &\dots& \dots & \dots &\dots\
1& x_{n-1}^1 &x_{n-1}^2& \dots & \dots &x_{n-1}^{n-1}\
\end{bmatrix}
$$
we can rewrite our equations as
$$
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}.
$$
The above design matrix is called a Vandermonde matrix.
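As a small aside of mine (not part of the original text), NumPy can build such a Vandermonde matrix directly; here x is just a tiny made-up array.
import numpy as np
x = np.array([1.0, 2.0, 3.0, 4.0])
# increasing=True orders the columns as 1, x, x^2, ..., matching the matrix above
print(np.vander(x, N=4, increasing=True))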
Generalizing the fitting procedure as a linear algebra problem
We are obviously not limited to the above polynomial expansions. We
could replace the various powers of $x$ with elements of Fourier
series or instead of $x_i^j$ we could have $\cos{(j x_i)}$ or $\sin{(j
x_i)}$, or time series or other orthogonal functions. For every set
of values $y_i,x_i$ we can then generalize the equations to
$$
\begin{align}
y_0&=\beta_0x_{00}+\beta_1x_{01}+\beta_2x_{02}+\dots+\beta_{n-1}x_{0n-1}+\epsilon_0\
y_1&=\beta_0x_{10}+\beta_1x_{11}+\beta_2x_{12}+\dots+\beta_{n-1}x_{1n-1}+\epsilon_1\
y_2&=\beta_0x_{20}+\beta_1x_{21}+\beta_2x_{22}+\dots+\beta_{n-1}x_{2n-1}+\epsilon_2\
\dots & \dots \
y_{i}&=\beta_0x_{i0}+\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_{n-1}x_{in-1}+\epsilon_i\
\dots & \dots \
y_{n-1}&=\beta_0x_{n-1,0}+\beta_1x_{n-1,1}+\beta_2x_{n-1,2}+\dots+\beta_{n-1}x_{n-1,n-1}+\epsilon_{n-1}.\
\end{align}
$$
Note that we have $p=n$ here, that is, the design matrix is square (as many rows as columns). This is generally not the case!
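As a minimal sketch (my own illustration, with a made-up grid x and not code from the text), the same machinery applies to, say, a cosine basis:
x = np.linspace(0, 1, 10)
p = 4
X_cos = np.zeros((len(x), p))
for j in range(p):
    X_cos[:, j] = np.cos(j*x)   # any other basis function would work the same way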
Generalizing the fitting procedure as a linear algebra problem
We redefine in turn the matrix $\boldsymbol{X}$ as
$$
\boldsymbol{X}=
\begin{bmatrix}
x_{00}& x_{01} &x_{02}& \dots & \dots &x_{0,n-1}\
x_{10}& x_{11} &x_{12}& \dots & \dots &x_{1,n-1}\
x_{20}& x_{21} &x_{22}& \dots & \dots &x_{2,n-1}\
\dots& \dots &\dots& \dots & \dots &\dots\
x_{n-1,0}& x_{n-1,1} &x_{n-1,2}& \dots & \dots &x_{n-1,n-1}\
\end{bmatrix}
$$
and without loss of generality we rewrite again our equations as
$$
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}.
$$
The left-hand side of this equation is known. Our error vector $\boldsymbol{\epsilon}$ and the parameter vector $\boldsymbol{\beta}$ are our unknown quantities. How can we obtain the optimal set of $\beta_i$ values?
Optimizing our parameters
We have defined the matrix $\boldsymbol{X}$ via the equations
$$
\begin{align}
y_0&=\beta_0x_{00}+\beta_1x_{01}+\beta_2x_{02}+\dots+\beta_{n-1}x_{0n-1}+\epsilon_0\
y_1&=\beta_0x_{10}+\beta_1x_{11}+\beta_2x_{12}+\dots+\beta_{n-1}x_{1n-1}+\epsilon_1\
y_2&=\beta_0x_{20}+\beta_1x_{21}+\beta_2x_{22}+\dots+\beta_{n-1}x_{2n-1}+\epsilon_2\
\dots & \dots \
y_{i}&=\beta_0x_{i0}+\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_{n-1}x_{in-1}+\epsilon_i\
\dots & \dots \
y_{n-1}&=\beta_0x_{n-1,0}+\beta_1x_{n-1,1}+\beta_2x_{n-1,2}+\dots+\beta_{n-1}x_{n-1,n-1}+\epsilon_{n-1}.\
\end{align}
$$
As we noted above, we stayed with a system with the design matrix
$\boldsymbol{X}\in {\mathbb{R}}^{n\times n}$, that is we have $p=n$. For reasons to come later (algorithmic arguments) we will hereafter define
our matrix as $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, with the predictors referring to the column numbers and the $n$ data entries referring to the rows.
Our model for the nuclear binding energies
In our introductory notes we looked at the so-called liquid drop model. Let us remind ourselves about what we did by looking at the code.
We restate the parts of the code we are most interested in.
End of explanation
# matrix inversion to find beta
beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(Energies)
# and then make the prediction
ytilde = X @ beta
Explanation: With $\boldsymbol{\beta}\in {\mathbb{R}}^{p\times 1}$, it means that we will hereafter write our equations for the approximation as
$$
\boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta},
$$
throughout these lectures.
Optimizing our parameters, more details
With the above we use the design matrix to define the approximation $\boldsymbol{\tilde{y}}$ via the unknown quantity $\boldsymbol{\beta}$ as
$$
\boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta},
$$
and in order to find the optimal parameters $\beta_i$ instead of solving the above linear algebra problem, we define a function which gives a measure of the spread between the values $y_i$ (which represent hopefully the exact values) and the parameterized values $\tilde{y}_i$, namely
$$
C(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right\},
$$
or using the matrix $\boldsymbol{X}$ and in a more compact matrix-vector notation as
$$
C(\boldsymbol{\beta})=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\}.
$$
This function is one possible way to define the so-called cost function.
It is also common to define
the function $C$ as
$$
C(\boldsymbol{\beta})=\frac{1}{2n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2,
$$
since when taking the first derivative with respect to the unknown parameters $\beta$, the factor of $2$ cancels out.
Interpretations and optimizing our parameters
The function
$$
C(\boldsymbol{\beta})=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\},
$$
can be linked to the variance of the quantity $y_i$ if we interpret the latter as the mean value.
When linking with the maximum likelihood approach below (see the discussion there), we will indeed interpret $y_i$ as a mean value
$$
y_{i}=\langle y_i \rangle = \beta_0x_{i,0}+\beta_1x_{i,1}+\beta_2x_{i,2}+\dots+\beta_{n-1}x_{i,n-1}+\epsilon_i,
$$
where $\langle y_i \rangle$ is the mean value. Keep in mind also that
till now we have treated $y_i$ as the exact value. Normally, the
response (dependent or outcome) variable $y_i$ is the outcome of a
numerical experiment or another type of experiment and is thus only an
approximation to the true value. It is then always accompanied by an
error estimate, often limited to a statistical error estimate given by
the standard deviation discussed earlier. In the discussion here we
will treat $y_i$ as our exact value for the response variable.
In order to find the parameters $\beta_i$ we will then minimize the spread of $C(\boldsymbol{\beta})$, that is we are going to solve the problem
$$
{\displaystyle \min_{\boldsymbol{\beta}\in
{\mathbb{R}}^{p}}}\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\}.
$$
In practical terms it means we will require
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)^2\right]=0,
$$
which results in
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_{ij}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)\right]=0,
$$
or in a matrix-vector form as
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right).
$$
Interpretations and optimizing our parameters
We can rewrite
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right),
$$
as
$$
\boldsymbol{X}^T\boldsymbol{y} = \boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta},
$$
and if the matrix $\boldsymbol{X}^T\boldsymbol{X}$ is invertible we have the solution
$$
\boldsymbol{\beta} =\left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}.
$$
We note also that since our design matrix is defined as $\boldsymbol{X}\in
{\mathbb{R}}^{n\times p}$, the product $\boldsymbol{X}^T\boldsymbol{X} \in
{\mathbb{R}}^{p\times p}$. In the above case we have that $p \ll n$,
in our case $p=5$ meaning that we end up with inverting a small
$5\times 5$ matrix. This is a rather common situation, in many cases we end up with low-dimensional
matrices to invert. The methods discussed here and for many other
supervised learning algorithms like classification with logistic
regression or support vector machines, exhibit dimensionalities which
allow for the usage of direct linear algebra methods such as LU decomposition or Singular Value Decomposition (SVD) for finding the inverse of the matrix
$\boldsymbol{X}^T\boldsymbol{X}$.
Small question: Do you think the example we have at hand here (the nuclear binding energies) can lead to problems in inverting the matrix $\boldsymbol{X}^T\boldsymbol{X}$? What kind of problems can we expect?
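One quick way to get a feeling for this (a small check of mine, assuming the design matrix X built in the binding-energy code above) is to look at the condition number of $\boldsymbol{X}^T\boldsymbol{X}$:
# A very large condition number signals that the columns 1, A, A^(2/3), A^(-1/3), 1/A
# are nearly linearly dependent, which makes the matrix inversion numerically delicate.
print(np.linalg.cond(X.T @ X))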
Some useful matrix and vector expressions
The following matrix and vector relation will be useful here and for the rest of the course. Vectors are always written as boldfaced lower case letters and
matrices as upper case boldfaced letters.
$$
\frac{\partial (\boldsymbol{b}^T\boldsymbol{a})}{\partial \boldsymbol{a}} = \boldsymbol{b},
$$
$$
\frac{\partial (\boldsymbol{a}^T\boldsymbol{A}\boldsymbol{a})}{\partial \boldsymbol{a}} = (\boldsymbol{A}+\boldsymbol{A}^T)\boldsymbol{a},
$$
$$
\frac{\partial tr(\boldsymbol{B}\boldsymbol{A})}{\partial \boldsymbol{A}} = \boldsymbol{B}^T,
$$
$$
\frac{\partial \log{\vert\boldsymbol{A}\vert}}{\partial \boldsymbol{A}} = (\boldsymbol{A}^{-1})^T.
$$
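A quick numerical sanity check of the second identity (a sketch of mine with random input, not part of the original notes; M and v are arbitrary names chosen to avoid clashing with symbols used elsewhere):
import numpy as np
rng = np.random.default_rng(2021)
M = rng.normal(size=(3, 3))
v = rng.normal(size=3)
eps = 1e-6
grad_analytic = (M + M.T) @ v
grad_numeric = np.zeros(3)
for k in range(3):
    dv = np.zeros(3)
    dv[k] = eps
    # finite-difference approximation of d(v^T M v)/dv_k
    grad_numeric[k] = ((v + dv) @ M @ (v + dv) - v @ M @ v) / eps
print(np.allclose(grad_analytic, grad_numeric, atol=1e-4))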
Interpretations and optimizing our parameters
The residuals $\boldsymbol{\epsilon}$ are in turn given by
$$
\boldsymbol{\epsilon} = \boldsymbol{y}-\boldsymbol{\tilde{y}} = \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta},
$$
and with
$$
\boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)= 0,
$$
we have
$$
\boldsymbol{X}^T\boldsymbol{\epsilon}=\boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)= 0,
$$
meaning that the solution for $\boldsymbol{\beta}$ is the one which minimizes the residuals. Later we will link this with the maximum likelihood approach.
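Assuming the design matrix X, the data vector Energies and the fitted beta from the code shown above, we can verify this orthogonality numerically (a small check, not in the original code):
residuals = Energies - X @ beta
print(X.T @ residuals)   # all entries should be numerically close to zero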
Let us now return to our nuclear binding energies and simply code the above equations.
Own code for Ordinary Least Squares
It is rather straightforward to implement the matrix inversion and obtain the parameters $\boldsymbol{\beta}$. After having defined the matrix $\boldsymbol{X}$ we simply need to
write
End of explanation
fit = np.linalg.lstsq(X, Energies, rcond =None)[0]
ytildenp = np.dot(fit,X.T)
Explanation: Alternatively, you can use the least squares functionality in Numpy as
End of explanation
Masses['Eapprox'] = ytilde
# Generate a plot comparing the experimental with the fitted values values.
fig, ax = plt.subplots()
ax.set_xlabel(r'$A = N + Z$')
ax.set_ylabel(r'$E_\mathrm{bind}\,/\mathrm{MeV}$')
ax.plot(Masses['A'], Masses['Ebinding'], alpha=0.7, lw=2,
label='Ame2016')
ax.plot(Masses['A'], Masses['Eapprox'], alpha=0.7, lw=2, c='m',
label='Fit')
ax.legend()
save_fig("Masses2016OLS")
plt.show()
Explanation: And finally we plot our fit with and compare with data
End of explanation
def R2(y_data, y_model):
return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2)
Explanation: Adding error analysis and training set up
We can easily test our fit by computing the $R^2$ score that we discussed in connection with the functionality of Scikit-Learn in the introductory slides.
Since we are not using Scikit-Learn here we can define our own $R^2$ function as
End of explanation
print(R2(Energies,ytilde))
Explanation: and we would be using it as
End of explanation
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
print(MSE(Energies,ytilde))
Explanation: We can easily add our MSE score as
End of explanation
def RelativeError(y_data,y_model):
return abs((y_data-y_model)/y_data)
print(RelativeError(Energies, ytilde))
Explanation: and finally the relative error as
End of explanation
# Common imports
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn.linear_model as skl
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("EoS.csv"),'r')
# Read the EoS data as csv file and organize the data into two arrays with density and energies
EoS = pd.read_csv(infile, names=('Density', 'Energy'))
EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce')
EoS = EoS.dropna()
Energies = EoS['Energy']
Density = EoS['Density']
# The design matrix now as function of various polytrops
X = np.zeros((len(Density),4))
X[:,3] = Density**(4.0/3.0)
X[:,2] = Density
X[:,1] = Density**(2.0/3.0)
X[:,0] = 1
# We use now Scikit-Learn's linear regressor and ridge regressor
# OLS part
clf = skl.LinearRegression().fit(X, Energies)
ytilde = clf.predict(X)
EoS['Eols'] = ytilde
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Energies, ytilde))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(Energies, ytilde))
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(Energies, ytilde))
print(clf.coef_, clf.intercept_)
# The Ridge regression with a hyperparameter lambda = 0.1
_lambda = 0.1
clf_ridge = skl.Ridge(alpha=_lambda).fit(X, Energies)
yridge = clf_ridge.predict(X)
EoS['Eridge'] = yridge
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Energies, yridge))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(Energies, yridge))
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(Energies, yridge))
print(clf_ridge.coef_, clf_ridge.intercept_)
fig, ax = plt.subplots()
ax.set_xlabel(r'$\rho[\mathrm{fm}^{-3}]$')
ax.set_ylabel(r'Energy per particle')
ax.plot(EoS['Density'], EoS['Energy'], alpha=0.7, lw=2,
label='Theoretical data')
ax.plot(EoS['Density'], EoS['Eols'], alpha=0.7, lw=2, c='m',
label='OLS')
ax.plot(EoS['Density'], EoS['Eridge'], alpha=0.7, lw=2, c='g',
label='Ridge $\lambda = 0.1$')
ax.legend()
save_fig("EoSfitting")
plt.show()
Explanation: The $\chi^2$ function
Normally, the response (dependent or outcome) variable $y_i$ is the
outcome of a numerical experiment or another type of experiment and is
thus only an approximation to the true value. It is then always
accompanied by an error estimate, often limited to a statistical error
estimate given by the standard deviation discussed earlier. In the
discussion here we will treat $y_i$ as our exact value for the
response variable.
Introducing the standard deviation $\sigma_i$ for each measurement
$y_i$, we define now the $\chi^2$ function
as
$$
\chi^2(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\frac{\left(y_i-\tilde{y}_i\right)^2}{\sigma_i^2}=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\frac{1}{\boldsymbol{\Sigma^2}}\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right\},
$$
where the matrix $\boldsymbol{\Sigma}$ is a diagonal matrix with $\sigma_i$ as matrix elements.
The $\chi^2$ function
In order to find the parameters $\beta_i$ we will then minimize the spread of $\chi^2(\boldsymbol{\beta})$ by requiring
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)^2\right]=0,
$$
which results in
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}\frac{x_{ij}}{\sigma_i}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)\right]=0,
$$
or in a matrix-vector form as
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right).
$$
where we have defined the matrix $\boldsymbol{A} =\boldsymbol{X}/\boldsymbol{\Sigma}$ with matrix elements $a_{ij} = x_{ij}/\sigma_i$ and the vector $\boldsymbol{b}$ with elements $b_i = y_i/\sigma_i$.
The $\chi^2$ function
We can rewrite
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right),
$$
as
$$
\boldsymbol{A}^T\boldsymbol{b} = \boldsymbol{A}^T\boldsymbol{A}\boldsymbol{\beta},
$$
and if the matrix $\boldsymbol{A}^T\boldsymbol{A}$ is invertible we have the solution
$$
\boldsymbol{\beta} =\left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1}\boldsymbol{A}^T\boldsymbol{b}.
$$
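A minimal sketch of this weighted solution (with toy data and error bars of my own choosing, not from the text; all variable names here are invented):
import numpy as np
xt = np.linspace(0, 1, 20)
yt = 1.0 + 2.0*xt + 0.05*np.random.randn(20)
sigma = 0.05*np.ones(20)                    # assumed (constant) error bars
Xlin = np.column_stack((np.ones(20), xt))   # design matrix for a straight line
Aw = Xlin/sigma[:, None]                    # a_ij = x_ij/sigma_i
bw = yt/sigma                               # b_i = y_i/sigma_i
beta_w = np.linalg.inv(Aw.T @ Aw) @ Aw.T @ bw
print(beta_w)                               # should be close to [1.0, 2.0]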
The $\chi^2$ function
If we then introduce the matrix
$$
\boldsymbol{H} = \left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1},
$$
we have then the following expression for the parameters $\beta_j$ (the matrix elements of $\boldsymbol{H}$ are $h_{ij}$)
$$
\beta_j = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}\frac{y_i}{\sigma_i}\frac{x_{ik}}{\sigma_i} = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}b_ia_{ik}
$$
We state without proof the expression for the uncertainty in the parameters $\beta_j$ as (we leave this as an exercise)
$$
\sigma^2(\beta_j) = \sum_{i=0}^{n-1}\sigma_i^2\left( \frac{\partial \beta_j}{\partial y_i}\right)^2,
$$
resulting in
$$
\sigma^2(\beta_j) = \sum_{k=0}^{p-1}\sum_{l=0}^{p-1}h_{jk}h_{jl}\left(\sum_{i=0}^{n-1}a_{ik}a_{il}\right) = \sum_{k=0}^{p-1}h_{jk}\left[\sum_{l=0}^{p-1}\left(\boldsymbol{A}^T\boldsymbol{A}\right)_{kl}h_{lj}\right] = h_{jj}!
$$
The $\chi^2$ function
The first step here is to approximate the function $y$ with a first-order polynomial, that is we write
$$
y=y(x) \rightarrow y(x_i) \approx \beta_0+\beta_1 x_i.
$$
By computing the derivatives of $\chi^2$ with respect to $\beta_0$ and $\beta_1$ show that these are given by
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_0} = -2\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0,
$$
and
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_1} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_i\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0.
$$
The $\chi^2$ function
For a linear fit (a first-order polynomial) we don't need to invert a matrix!!
Defining
$$
\gamma = \sum_{i=0}^{n-1}\frac{1}{\sigma_i^2},
$$
$$
\gamma_x = \sum_{i=0}^{n-1}\frac{x_{i}}{\sigma_i^2},
$$
$$
\gamma_y = \sum_{i=0}^{n-1}\left(\frac{y_i}{\sigma_i^2}\right),
$$
$$
\gamma_{xx} = \sum_{i=0}^{n-1}\frac{x_ix_{i}}{\sigma_i^2},
$$
$$
\gamma_{xy} = \sum_{i=0}^{n-1}\frac{y_ix_{i}}{\sigma_i^2},
$$
we obtain
$$
\beta_0 = \frac{\gamma_{xx}\gamma_y-\gamma_x\gamma_{xy}}{\gamma\gamma_{xx}-\gamma_x^2},
$$
$$
\beta_1 = \frac{\gamma_{xy}\gamma-\gamma_x\gamma_y}{\gamma\gamma_{xx}-\gamma_x^2}.
$$
This approach (as with other linear and non-linear regression methods) often
suffers from the problem being either underdetermined or overdetermined in the
unknown coefficients $\beta_i$. A better approach is to use the
Singular Value Decomposition (SVD) method discussed next week.
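A bare-bones sketch of these closed-form expressions (again with invented data; all variable names here are my own):
xt = np.linspace(0, 1, 50)
yt = 0.5 + 3.0*xt + 0.1*np.random.randn(50)
st = 0.1*np.ones(50)                  # assumed constant error bars
g   = np.sum(1.0/st**2)
gx  = np.sum(xt/st**2)
gy  = np.sum(yt/st**2)
gxx = np.sum(xt**2/st**2)
gxy = np.sum(xt*yt/st**2)
denom = g*gxx - gx**2
beta0 = (gxx*gy - gx*gxy)/denom
beta1 = (g*gxy - gx*gy)/denom
print(beta0, beta1)                   # should be close to 0.5 and 3.0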
Fitting an Equation of State for Dense Nuclear Matter
Before we continue, let us introduce yet another example. We are going to fit the
nuclear equation of state using results from many-body calculations.
The equation of state we have made available here, as function of
density, has been derived using modern nucleon-nucleon potentials with
the addition of three-body
forces. This
time the file is presented as a standard csv file.
The beginning of the Python code here is similar to what you have seen
before, with the same initializations and declarations. We use also
pandas again, rather extensively in order to organize our data.
The difference now is that we use Scikit-Learn's regression tools
instead of our own matrix inversion implementation. Furthermore, we
sneak in Ridge regression (to be discussed below) which includes a
hyperparameter $\lambda$, also to be explained below.
The code
End of explanation
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
def R2(y_data, y_model):
return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2)
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
infile = open(data_path("EoS.csv"),'r')
# Read the EoS data as a csv file and organize it into two arrays with density and energies
EoS = pd.read_csv(infile, names=('Density', 'Energy'))
EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce')
EoS = EoS.dropna()
Energies = EoS['Energy']
Density = EoS['Density']
# The design matrix now as function of various polytrops
X = np.zeros((len(Density),5))
X[:,0] = 1
X[:,1] = Density**(2.0/3.0)
X[:,2] = Density
X[:,3] = Density**(4.0/3.0)
X[:,4] = Density**(5.0/3.0)
# We split the data in test and training data
X_train, X_test, y_train, y_test = train_test_split(X, Energies, test_size=0.2)
# matrix inversion to find beta
beta = np.linalg.inv(X_train.T.dot(X_train)).dot(X_train.T).dot(y_train)
# and then make the prediction
ytilde = X_train @ beta
print("Training R2")
print(R2(y_train,ytilde))
print("Training MSE")
print(MSE(y_train,ytilde))
ypredict = X_test @ beta
print("Test R2")
print(R2(y_test,ypredict))
print("Test MSE")
print(MSE(y_test,ypredict))
Explanation: The above simple polynomial in density $\rho$ gives an excellent fit
to the data.
We note also that there is a small deviation between the
standard OLS and the Ridge regression at higher densities. We discuss this in more detail
below.
Splitting our Data in Training and Test data
It is normal in essentially all Machine Learning studies to split the
data in a training set and a test set (sometimes also an additional
validation set). Scikit-Learn has an own function for this. There
is no explicit recipe for how much data should be included as training
data and say test data. An accepted rule of thumb is to use
approximately $2/3$ to $4/5$ of the data as training data. We will
postpone a discussion of this splitting to the end of these notes and
our discussion of the so-called bias-variance tradeoff. Here we
limit ourselves to repeat the above equation of state fitting example
but now splitting the data into a training set and a test set.
End of explanation
x = np.random.rand(100,1)
y = 2.0+5*x*x+0.1*np.random.randn(100,1)
Explanation: Exercises for week 35
Here are three possible exercises for week 35 and the lab sessions of Wednesday September 1.
Exercise 1: Setting up various Python environments
The first exercise here is of a mere technical art. We want you to have
* git as a version control software and to establish a user account on a provider like GitHub. Other providers like GitLab etc are equally fine. You can also use the University of Oslo GitHub facilities.
Install various Python packages
We will make extensive use of Python as programming language and its
myriad of available libraries. You will find
IPython/Jupyter notebooks invaluable in your work. You can run R
codes in the Jupyter/IPython notebooks, with the immediate benefit of
visualizing your data. You can also use compiled languages like C++,
Rust, Fortran etc if you prefer. The focus in these lectures will be
on Python.
If you have Python installed (we recommend Python3) and you feel
pretty familiar with installing different packages, we recommend that
you install the following Python packages via pip as
pip install numpy scipy matplotlib ipython scikit-learn sympy pandas pillow
For Tensorflow, we recommend following the instructions in the text of
Aurelien Geron, Hands‑On Machine Learning with Scikit‑Learn and TensorFlow, O'Reilly
We will come back to tensorflow later.
For Python3, replace pip with pip3.
For OSX users we recommend, after having installed Xcode, to
install brew. Brew allows for a seamless installation of additional
software via for example
brew install python3
For Linux users, with its variety of distributions like for example the widely popular Ubuntu distribution,
you can use pip as well and simply install Python as
sudo apt-get install python3 (or python for Python2.7)
If you don't want to perform these operations separately and venture
into the hassle of exploring how to set up dependencies and paths, we
recommend two widely used distributions which set up all relevant
dependencies for Python, namely
Anaconda,
which is an open source
distribution of the Python and R programming languages for large-scale
data processing, predictive analytics, and scientific computing, that
aims to simplify package management and deployment. Package versions
are managed by the package management system conda.
Enthought canopy
is a Python
distribution for scientific and analytic computing distribution and
analysis environment, available for free and under a commercial
license.
We recommend using Anaconda if you are not too familiar with setting paths in a terminal environment.
Exercise 2: making your own data and exploring scikit-learn
We will generate our own dataset for a function $y(x)$ where $x \in [0,1]$ and defined by random numbers computed with the uniform distribution. The function $y$ is a quadratic polynomial in $x$ with added stochastic noise according to the normal distribution $\cal {N}(0,1)$.
The following simple Python instructions define our $x$ and $y$ values (with 100 data points).
End of explanation
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
def R2(y_data, y_model):
return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2)
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
x = np.random.rand(100)
y = 2.0+5*x*x+0.1*np.random.randn(100)
# The design matrix now as function of a given polynomial
X = np.zeros((len(x),3))
X[:,0] = 1.0
X[:,1] = x
X[:,2] = x**2
# We split the data in test and training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# matrix inversion to find beta
beta = np.linalg.inv(X_train.T @ X_train) @ X_train.T @ y_train
print(beta)
# and then make the prediction
ytilde = X_train @ beta
print("Training R2")
print(R2(y_train,ytilde))
print("Training MSE")
print(MSE(y_train,ytilde))
ypredict = X_test @ beta
print("Test R2")
print(R2(y_test,ypredict))
print("Test MSE")
print(MSE(y_test,ypredict))
Explanation: Write your own code (following the examples under the regression notes) for computing the parametrization of the data set fitting a second-order polynomial.
Use thereafter scikit-learn (see again the examples in the regression slides) and compare with your own code.
Using scikit-learn, compute also the mean square error, a risk metric corresponding to the expected value of the squared (quadratic) error defined as
$$
MSE(\boldsymbol{y},\boldsymbol{\tilde{y}}) = \frac{1}{n}
\sum_{i=0}^{n-1}(y_i-\tilde{y}_i)^2,
$$
and the $R^2$ score function.
If $\tilde{\boldsymbol{y}}_i$ is the predicted value of the $i$-th sample and $y_i$ is the corresponding true value, then the score $R^2$ is defined as
$$
R^2(\boldsymbol{y}, \tilde{\boldsymbol{y}}) = 1 - \frac{\sum_{i=0}^{n - 1} (y_i - \tilde{y}_i)^2}{\sum_{i=0}^{n - 1} (y_i - \bar{y})^2},
$$
where we have defined the mean value of $\boldsymbol{y}$ as
$$
\bar{y} = \frac{1}{n} \sum_{i=0}^{n - 1} y_i.
$$
You can use the functionality included in scikit-learn. If you feel for it, you can use your own program and define functions which compute the above two functions.
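For instance (assuming the arrays y_train and ytilde from the solution code shown above), the scikit-learn versions are one-liners:
from sklearn.metrics import mean_squared_error, r2_score
print(mean_squared_error(y_train, ytilde))
print(r2_score(y_train, ytilde))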
Discuss the meaning of these results. Try also to vary the coefficient in front of the added stochastic noise term and discuss the quality of the fits.
Solution.
The code here is an example of where we define our own design matrix and fit parameters $\beta$.
End of explanation
# split in training and test data
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2)
Explanation: Exercise 3: Normalizing our data
A much used approach before starting to train the data is to preprocess our
data. Normally the data may need a rescaling and/or may be sensitive
to extreme values. Scaling the data renders our inputs much more
suitable for the algorithms we want to employ.
Scikit-Learn has several functions which allow us to rescale the
data, normally resulting in much better results in terms of various
accuracy scores. The StandardScaler function in Scikit-Learn
ensures that for each feature/predictor we study the mean value is
zero and the variance is one (every column in the design/feature
matrix). This scaling has the drawback that it does not ensure that
we have a particular maximum or minimum in our data set. Another
function included in Scikit-Learn is the MinMaxScaler which
ensures that all features are exactly between $0$ and $1$.
The Normalizer scales each data
point such that the feature vector has a Euclidean length of one. In other words, it
projects a data point on the circle (or sphere in the case of higher dimensions) with a
radius of 1. This means every data point is scaled by a different number (by the
inverse of its length).
This normalization is often used when only the direction (or angle) of the data matters,
not the length of the feature vector.
The RobustScaler works similarly to the StandardScaler in that it
ensures statistical properties for each feature that guarantee that
they are on the same scale. However, the RobustScaler uses the median
and quartiles, instead of mean and variance. This makes the
RobustScaler ignore data points that are very different from the rest
(like measurement errors). These odd data points are also called
outliers, and might often lead to trouble for other scaling
techniques.
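All of these scalers follow the same fit/transform pattern; as a small sketch (using MinMaxScaler as an example, and assuming the X_train and X_test arrays produced by the train_test_split call shown above):
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()        # or StandardScaler(), Normalizer(), RobustScaler()
scaler.fit(X_train)            # learn the scaling parameters on the training data only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)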
It is also common to split the data into a training set and a testing set. A typical split is to use $80\%$ of the data for training and the rest
for testing. This can be done as follows with our design matrix $\boldsymbol{X}$ and data $\boldsymbol{y}$ (remember to import scikit-learn)
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
Explanation: Then we can use the standard scaler to scale our data as
End of explanation
np.random.seed()
n = 100
maxdegree = 14
# Make data set.
x = np.linspace(-3, 3, n).reshape(-1, 1)
y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2)+ np.random.normal(0, 0.1, x.shape)
Explanation: In this exercise we want you to to compute the MSE for the training
data and the test data as function of the complexity of a polynomial,
that is the degree of a given polynomial. We also want you to compute the $R^2$ score as a function of the complexity of the model for both training data and test data. You should also run the calculation with and without scaling.
One of
the aims is to reproduce Figure 2.11 of Hastie et al.
Our data is defined by $x\in [-3,3]$ with a total of for example $100$ data points.
End of explanation |
14,611 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mapwork with enlib and orphics
In this short tutorial, we will
Step1: It looks like white noise since we randomly put down galaxies (no clustering).
Step2: Note that I'm plotting $LC_L$ here. | Python Code:
from __future__ import print_function
# The main mapwork module
from enlib import enmap
import numpy as np
import matplotlib.pyplot as plt
# Tools for working with enmaps, i/o, catalogs and statistics
from orphics import maps as omaps,io,catalogs as cats,stats,cosmology as cosmo
# Let's define a geometry centered on the equator by default
shape,wcs = omaps.rect_geometry(width_deg=20.,px_res_arcmin=0.5)
# shape gives us the shape of the numpy matrix storing the map
print(shape)
# wcs is the World Coordinate System info that tells enlib what pixels correspond to what locations on the sky.
# Enlib essentially extends the numpy array to include wcs information
print(wcs)
# What are the bounds of this geometry in degrees?
bounds = enmap.box(shape,wcs)*180./np.pi
print(bounds)
# Let's generate a random catalog of "galaxies"
Ngals = 10000000
ras = np.random.uniform(bounds[0,1],bounds[1,1],Ngals) # ras between the minimum and maximum ra of our geometry
decs = np.random.uniform(bounds[0,0],bounds[1,0],Ngals) # same for decs
# Now let's make a map out of these
# This requires you to specify the geometry through shape,wcs and give a list of the ras and decs
cmapper = cats.CatMapper(shape,wcs,ras,decs)
# Once you initialize this object, it makes a "counts" map. We can get an overdensity map from it.
delta = cmapper.counts/cmapper.counts.mean()-1.
# Let's plot it
plt.imshow(delta)
plt.colorbar()
plt.show()
# The sum of the counts map should be the number of galaxies
print (cmapper.counts.sum())
# And the sum of the overdensity should be pretty close to zero
print (delta.sum())
# Now let's do some fourier space operations
# We initialize a fourier calculator for this geometry
fc = omaps.FourierCalc(shape,wcs)
# And calculate the autospectrum of the overdensity. The last two return arguments
# are the fourier transforms of the galaxy map
p2d,kgal,_ = fc.power2d(delta)
# This returns a 2D power spectrum. We need to bin it in annuli to get a 1D spectrum.
# Let's define the bin edges (these are multipole Ls)
bin_edges = np.arange(100,2000,100)
# Let's initialize a binner object
# This requires the magnitudes of the L wavenumbers in each fourier space pixel of the map.
# This is typically called modlmap (for modulus of L map).
# We can get it from cmapper.counts since that is an enmap.
modlmap = cmapper.counts.modlmap()
# With the modlmap and the bin edges, we can define the binner
binner = stats.bin2D(modlmap,bin_edges)
# and apply it to the 2d power spectrum
cents, p1d = binner.bin(p2d)
# Let's plot it
pl = io.Plotter()
pl.add(cents,p1d)
pl.hline()
pl.done()
Explanation: Mapwork with enlib and orphics
In this short tutorial, we will:
Define a map geometry
Generate a random galaxy catalog
Make a map from the catalog
Calculate and bin the autospectrum of the galaxy map
Call CAMB from Python to get a lensing power spectrum
Create a simulated lensing map using the power spectrum
Calcate and bin the cross-spectrum of the lensing map and galaxy map
End of explanation
# Let's initialize a CAMB cosmology. This could take a few seconds.
cc = cosmo.Cosmology(lmax=2000)
# Now let's get the Clkk power spectrum from the initialized cosmology and plot it
# along with the galaxy autospectrum
ells = np.arange(0,2000,1)
clkk = cc.theory.gCl("kk",ells)
pl = io.Plotter()
pl.add(cents,p1d*cents)
pl.add(ells,clkk*ells)
pl.hline()
pl.done()
Explanation: It looks like white noise since we randomly put down galaxies (no clustering).
End of explanation
# Now let's generate gaussian random fields with Clkk power.
# We need to reshape the power spectrum into a polarization-friendly form.
# It is a 3D matrix, the first two indices are the polarization indices.
# The required shape is (1,1,N_L) where the only (0,0,:) element is
# saying we don't want polarization since it is just a TT spectrum.
ps = np.reshape(clkk,(1,1,clkk.size))
mg = omaps.MapGen(shape,wcs,ps)
# Now let's generate a map and plot it
kappa_sim = mg.get_map()
plt.imshow(kappa_sim)
plt.show()
# Let's calculate the auto-spectrum of the kappa map
p2d_kappa,kkappa,_ = fc.power2d(kappa_sim)
cents, p1d_kappa = binner.bin(p2d_kappa)
# Now let's calculate its cross-power with the galaxy map we made earlier. What do you expect it to be?
# Notice that we are using fc.f2power instead of fc.power2d. FFTs are expensive, so if we already have
# the fourier transforms of our maps from previous operations, we can reuse them to calculate a cross power
# spectrum. Instead of redoing the FFTs, this operation is instant, since it just multiplies the
# fourier transforms calculate earlier and applies the appropriate scaling factor.
p2d_cross = fc.f2power(kgal,kkappa)
cents, p1d_cross = binner.bin(p2d_cross)
pl = io.Plotter()
pl.add(cents,p1d*cents)
pl.add(cents,p1d_kappa*cents)
pl.add(cents,p1d_cross*cents)
pl.add(ells,cltt*ells)
pl.hline()
pl.done()
Explanation: Note that I'm plotting $LC_L$ here.
End of explanation |
14,612 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to dynamically add functionalities to the Browser
Suppose you want to add a login function to the Browser.
Step1: You can define such a function at any point in your testbook. Note that you need to use self inside your function definition rather than using the word browser.
Step2: Once defined, you can call the register_function method of test object to attach the function to the browser object.
Step3: You can then confirm that the login is now a bound method of browser and can be used right away just like any other methods bound to browser. | Python Code:
from marigoso import Test
test = Test()
browser = test.launch_browser("Firefox")
data = {
'url': "http://pytest.uk",
'username': "myusername",
'password': "mysecret"
}
Explanation: How to dynamically add functionalities to the Browser
Suppose you want to add a login function to the Browser.
End of explanation
def login(self, data):
self.get_url(data['url'])
if self.is_available('name=credential_0', 1):
self.kb_type('name=credential_0', data['username'])
self.kb_type('name=credential_1', data['password'])
self.submit_btn('Login')
assert self.is_available("Logout")
return self.get_element("id=logged_in_user").text
Explanation: You can define such a function at any point in your testbook. Note that you need to use self inside your function definition rather than using the word browser.
End of explanation
test.register_function("browser", [login])
Explanation: Once defined, you can call the register_function method of test object to attach the function to the browser object.
End of explanation
browser.login
Explanation: You can then confirm that the login is now a bound method of browser and can be used right away just like any other methods bound to browser.
End of explanation |
14,613 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting stretch ratios for each step
Step1: Putting it all together!
Step2: The sum divided by length should add up to 1...
Maybe we need to subdivide it further?
Or oh there's the "stretch multiplier"
Step3: OK, so revised with stretch_multiplier | Python Code:
# Imports
%matplotlib inline
import pardir; pardir.pardir() # Allow imports from parent directory
import fibonaccistretch as fib
import bjorklund
# Setting up basics
original_rhythm = [1,0,0,1,0,0,1,0]
target_rhythm = [1,0,0,0,0,1,0,0,0,0,1,0,0]
fib.calculate_pulse_ratios(original_rhythm, target_rhythm)
fib.calculate_pulse_lengths(original_rhythm)
fib.calculate_pulse_ratios([1]*len(original_rhythm), target_rhythm)
[1]*8
fib.calculate_pulse_lengths(target_rhythm)
# Original and target pulse lengths; just take the first one from each for now
opl = fib.calculate_pulse_lengths(original_rhythm)[0]
tpl = fib.calculate_pulse_lengths(target_rhythm)[0]
# Adapted from Euclidean stretch
opr = [1] * len(original_rhythm)
# Generate target pulse rhythm ("tpr")
tpr = bjorklund.bjorklund(pulses=opl, steps=tpl)
tpr_pulse_lengths = fib.calculate_pulse_lengths(tpr)
tpr_pulse_ratios = fib.calculate_pulse_ratios(opr, tpr)
tpr_pulse_ratios
# Format pulse ratios so there's one for each step
original_pulse_lengths = fib.calculate_pulse_lengths(original_rhythm)
pulse_ratios = fib.calculate_pulse_ratios(original_rhythm, target_rhythm)
pulse_ratios_by_step = []
for i,pulse_length in enumerate(original_pulse_lengths):
for _ in range(pulse_length):
pulse_ratios_by_step.append(pulse_ratios[i])
pulse_ratios_by_step
Explanation: Getting stretch ratios for each step
End of explanation
def calculate_step_stretch_ratios(original_rhythm, target_rhythm):
# Original and target pulse lengths
original_pulse_lengths = fib.calculate_pulse_lengths(original_rhythm)
target_pulse_lengths = fib.calculate_pulse_lengths(target_rhythm)
# Pulse ratios
# Format pulse ratios so there's one for each step
pulse_ratios = fib.calculate_pulse_ratios(original_rhythm, target_rhythm)
pulse_ratios_by_step = []
for i,pulse_length in enumerate(original_pulse_lengths):
for _ in range(pulse_length):
pulse_ratios_by_step.append(pulse_ratios[i])
# Calculate stretch ratios for each original step
# Adapted from Euclidean stretch
step_stretch_ratios = []
for i in range(min(len(original_pulse_lengths), len(target_pulse_lengths))):
# Pulse lengths
opl = original_pulse_lengths[i]
tpl = target_pulse_lengths[i]
# Use steps as original pulse rhythm ("opr")
opr = [1] * len(original_rhythm)
# Generate target pulse rhythm ("tpr") using Bjorklund's algorithm
tpr = bjorklund.bjorklund(pulses=opl, steps=tpl)
tpr_pulse_lengths = fib.calculate_pulse_lengths(tpr)
tpr_pulse_ratios = fib.calculate_pulse_ratios(opr, tpr)
# Scale the tpr pulse ratios by the corresponding ratio from pulse_ratios_by_step
tpr_pulse_ratios *= pulse_ratios_by_step[i]
step_stretch_ratios.extend(tpr_pulse_ratios)
return step_stretch_ratios
step_stretch_ratios = calculate_step_stretch_ratios(original_rhythm, target_rhythm)
step_stretch_ratios
sum(step_stretch_ratios) / len(original_rhythm)
step_stretch_ratios = calculate_step_stretch_ratios(original_rhythm, original_rhythm)
step_stretch_ratios
sum(step_stretch_ratios) / len(original_rhythm)
Explanation: Putting it all together!
End of explanation
stretch_multiplier = 1.0 / (sum(step_stretch_ratios) / len(original_rhythm))
stretch_multiplier
step_stretch_ratios = [r * stretch_multiplier for r in step_stretch_ratios]
step_stretch_ratios
sum(step_stretch_ratios) / len(original_rhythm)
Explanation: The sum divided by length should add up to 1...
Maybe we need to subdivide it further?
Or oh there's the "stretch multiplier"
End of explanation
def calculate_step_stretch_ratios(original_rhythm, target_rhythm):
# Original and target pulse lengths
original_pulse_lengths = list(fib.calculate_pulse_lengths(original_rhythm))
target_pulse_lengths = list(fib.calculate_pulse_lengths(target_rhythm))
# Pulse ratios
# Format pulse ratios so there's one for each step
pulse_ratios = list(fib.calculate_pulse_ratios(original_rhythm, target_rhythm))
if len(pulse_ratios) < len(original_pulse_lengths): # Add 0s to pulse ratios if there aren't enough
for _ in range(len(original_pulse_lengths) - len(pulse_ratios)):
pulse_ratios.append(0.0)
assert(len(pulse_ratios) == len(original_pulse_lengths))
pulse_ratios_by_step = []
for i,pulse_length in enumerate(original_pulse_lengths):
for _ in range(pulse_length):
pulse_ratios_by_step.append(pulse_ratios[i])
# Calculate stretch ratios for each original step
# Adapted from Euclidean stretch
step_stretch_ratios = []
for i in range(min(len(original_pulse_lengths), len(target_pulse_lengths))):
# Pulse lengths
opl = original_pulse_lengths[i]
tpl = target_pulse_lengths[i]
# Adjust target pulse length if it's too small
#if opl > tpl:
# tpl = opl
while opl > tpl:
tpl *= 2
# Use steps as original pulse rhythm ("opr")
opr = [1] * len(original_rhythm)
# Generate target pulse rhythm ("tpr") using Bjorklund's algorithm
tpr = bjorklund.bjorklund(pulses=opl, steps=tpl)
tpr_pulse_lengths = fib.calculate_pulse_lengths(tpr)
tpr_pulse_ratios = fib.calculate_pulse_ratios(opr, tpr)
# Scale the tpr pulse ratios by the corresponding ratio from pulse_ratios_by_step
tpr_pulse_ratios *= pulse_ratios_by_step[i]
step_stretch_ratios.extend(tpr_pulse_ratios)
# Multiply by stretch multiplier to make sure the length is the same as original
stretch_multiplier = 1.0 / (sum(step_stretch_ratios) / len(original_rhythm))
step_stretch_ratios = [r * stretch_multiplier for r in step_stretch_ratios]
assert(round(sum(step_stretch_ratios) / len(original_rhythm), 5) == 1) # Make sure it's *close enough* to original length.
return step_stretch_ratios
step_stretch_ratios = calculate_step_stretch_ratios(original_rhythm, target_rhythm)
step_stretch_ratios
calculate_step_stretch_ratios(original_rhythm, [1,0,1])
# fib.calculate_pulse_ratios(original_rhythm, [1,0,1])
round?
reload(fib)
# import numpy as np
# a = np.array(original_rhythm)
# b = np.zeros(4)
# np.hstack((a, b))
fib.calculate_step_stretch_ratios(original_rhythm, target_rhythm)
Explanation: OK, so revised with stretch_multiplier:
End of explanation |
14,614 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Q Learning and Deep Q Network Examples
Step1: Game design
The game the Q-agents will need to learn is made of a board with 4 cells. The agent will receive a reward of +1 every time it fills a vacant cell, and will receive a penalty of -1 when it tries to fill an already occupied cell. The game ends when the board is full.
Step2: Initialize the game
Step3: Q Learning
Starting with a table-based Q-learning algorithm
Step4: Initializing the Q-table
Step5: Letting the agent play and learn
Step6: Let's verify that the agent indeed learned a correct strategy by seeing what action it will choose in each one of the possible states
Step7: We can see that the agent indeed picked up the right way to play the game. Still, when looking at the predicted Q values, we see that there are some states where it didn't pick up the correct Q values.
Q Learning is a greedy algorithm, and it prefers choosing the best action at each state rather than exploring. We can solve this issue by increasing <math xmlns="http
Step8: Deep Q Network
Let's move on to neural network-based modeling. We'll design the Q network first.
Remember that the output of the network, (self.output) is an array of the predicted Q value for each action taken from the input state (self.states). Comparing with the Q-table algorithm, the output is an entire column of the table
Step9: Designing the Experience Replay memory which will be used, using a cyclic memory buffer
Step10: Setting up the parameters. Note that here we used gamma = 0.99 and not 1 like in the Q-table algorithm, as the literature recommends working with a discount factor of <math xmlns="http
Step11: Initializing the Q-network
Step12: Training the network. Compare this code to the above Q-table training.
Step13: Again, let's verify that the agent indeed learned a correct strategy
Step14: Here too, we see that the agent learned a correct strategy, and again, Q values are not what we would've expectdd.
Let's plot the rewards the agent received
Step15: Zooming in, so we can compare the Q-table algorithm
Step16: Let's plot the cost too. Remember that here the x-axis reflects the number of trainings, and not the number of games. The number of training depends on the number of actions taken, and the agent can take any number of actions during each game. | Python Code:
import random
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from collections import deque
from time import time
seed = 23041974
random.seed(seed)
print('Seed: {}'.format(seed))
Explanation: Q Learning and Deep Q Network Examples
End of explanation
class Game:
board = None
board_size = 0
def __init__(self, board_size = 4):
self.board_size = board_size
self.reset()
def reset(self):
self.board = np.zeros(self.board_size)
def play(self, cell):
# Returns a tuple: (reward, game_over?)
if self.board[cell] == 0:
self.board[cell] = 1
game_over = len(np.where(self.board == 0)[0]) == 0
return (1, game_over)
else:
return (-1, False)
def state_to_str(state):
return str(list(map(int, state.tolist())))
all_states = list()
for i in range(2):
for j in range(2):
for k in range(2):
for l in range(2):
s = np.array([i, j, k, l])
all_states.append(state_to_str(s))
print('All possible states: ')
for s in all_states:
print(s)
Explanation: Game design
The game the Q-agents will need to learn is made of a board with 4 cells. The agent will receive a reward of +1 every time it fills a vacant cell, and will receive a penalty of -1 when it tries to fill an already occupied cell. The game ends when the board is full.
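As a quick check of these rules, here is a small illustrative sketch using the Game class defined above (not part of the original notebook; demo_game is just a throwaway instance):
demo_game = Game()
for cell in range(4):
    print(demo_game.play(cell))   # each vacant cell returns (+1, game_over flag); the fourth play ends the game
print(demo_game.play(0))          # an already occupied cell returns (-1, False)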
End of explanation
game = Game()
Explanation: Initialize the game:
End of explanation
num_of_games = 2000
epsilon = 0.1
gamma = 1
Explanation: Q Learning
Starting with a table-based Q-learning algorithm
End of explanation
q_table = pd.DataFrame(0,
index = np.arange(4),
columns = all_states)
Explanation: Initializing the Q-table:
End of explanation
r_list = [] ## Store the total reward of each game so we could plot it later
for g in range(num_of_games):
game_over = False
game.reset()
total_reward = 0
while not game_over:
state = np.copy(game.board)
if random.random() < epsilon:
action = random.randint(0, 3)
else:
action = q_table[state_to_str(state)].idxmax()
reward, game_over = game.play(action)
total_reward += reward
if np.sum(game.board) == 4: ## Terminal state
next_state_max_q_value = 0
else:
next_state = np.copy(game.board)
next_state_max_q_value = q_table[state_to_str(next_state)].max()
q_table.loc[action, state_to_str(state)] = reward + gamma * next_state_max_q_value
r_list.append(total_reward)
q_table
Explanation: Letting the agent play and learn:
End of explanation
for i in range(2):
for j in range(2):
for k in range(2):
for l in range(2):
b = np.array([i, j, k, l])
if len(np.where(b == 0)[0]) != 0:
action = q_table[state_to_str(b)].idxmax()
pred = q_table[state_to_str(b)].tolist()
print('board: {b}\tpredicted Q values: {p} \tbest action: {a}\t correct action? {s}'
.format(b = b, p = pred, a = action, s = b[action] == 0))
Explanation: Let's verify that the agent indeed learned a correct strategy by seeing what action it will choose in each one of the possible states:
End of explanation
plt.figure(figsize = (14, 7))
plt.plot(range(len(r_list)), r_list)
plt.xlabel('Games played')
plt.ylabel('Reward')
plt.show()
Explanation: We can see that the agent indeed picked up the right way to play the game. Still, when looking at the predicted Q values, we see that there are some states where it didn't pick up the correct Q values.
Q Learning is a greedy algorithm, and it prefers choosing the best action at each state rather than exploring. We can solve this issue by increasing ε (epsilon), which controls the exploration of this algorithm and was set to 0.1, OR by letting the agent play more games.
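One common alternative, added here as an assumption of my own rather than something the notebook does, is to decay epsilon over time instead of keeping it fixed:
def decayed_epsilon(game_index, start=1.0, end=0.1, decay_games=1000):
    # linearly anneal exploration from `start` down to `end` over the first `decay_games` games
    return max(end, start - (start - end) * game_index / decay_games)
print(decayed_epsilon(0), decayed_epsilon(500), decayed_epsilon(2000))  # 1.0 0.55 0.1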
Let's plot the total reward the agent received per game:
End of explanation
class QNetwork:
def __init__(self, hidden_layers_size, gamma, learning_rate,
input_size = 4, output_size = 4):
self.q_target = tf.placeholder(shape = (None, output_size), dtype = tf.float32)
self.r = tf.placeholder(shape = None, dtype = tf.float32)
self.states = tf.placeholder(shape = (None, input_size), dtype = tf.float32)
self.enum_actions = tf.placeholder(shape = (None, 2), dtype = tf.int32)
layer = self.states
for l in hidden_layers_size:
layer = tf.layers.dense(inputs = layer, units = l,
activation = tf.nn.relu,
kernel_initializer = tf.contrib.layers.xavier_initializer(seed = seed))
self.output = tf.layers.dense(inputs = layer, units = output_size,
kernel_initializer = tf.contrib.layers.xavier_initializer(seed = seed))
self.predictions = tf.gather_nd(self.output, indices = self.enum_actions)
self.labels = self.r + gamma * tf.reduce_max(self.q_target, axis = 1)
self.cost = tf.reduce_mean(tf.losses.mean_squared_error(labels = self.labels,
predictions = self.predictions))
self.optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(self.cost)
Explanation: Deep Q Network
Let's move on to neural network-based modeling. We'll design the Q network first.
Remember that the output of the network (self.output) is an array of the predicted Q values for each action taken from the input state (self.states). Compared with the Q-table algorithm, the output is an entire column of the table
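As a quick sanity check of that claim, here is an illustrative sketch (not part of the original notebook; the demo_* names and the batch of five zero states are arbitrary):
tf.reset_default_graph()
demo_qnn = QNetwork(hidden_layers_size=[20], gamma=0.99, learning_rate=0.001)
with tf.Session() as demo_sess:
    demo_sess.run(tf.global_variables_initializer())
    demo_states = np.zeros((5, 4), dtype=np.float32)           # 5 arbitrary board states
    demo_q = demo_sess.run(demo_qnn.output, feed_dict={demo_qnn.states: demo_states})
    print(demo_q.shape)                                        # (5, 4): one Q value per action, like reading a full Q-table column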
End of explanation
class ReplayMemory:
memory = None
counter = 0
def __init__(self, size):
self.memory = deque(maxlen = size)
def append(self, element):
self.memory.append(element)
self.counter += 1
def sample(self, n):
return random.sample(self.memory, n)
Explanation: Designing the Experience Replay memory which will be used, using a cyclic memory buffer:
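A tiny usage sketch of this buffer (illustrative only, with a deliberately small size; demo_memory is a throwaway name):
demo_memory = ReplayMemory(3)
for i in range(5):
    demo_memory.append({'step': i})
print(list(demo_memory.memory))   # only the 3 most recent items are kept
print(demo_memory.counter)        # 5: the counter keeps counting past the buffer size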
End of explanation
num_of_games = 2000
epsilon = 0.1
gamma = 0.99
batch_size = 10
memory_size = 2000
Explanation: Setting up the parameters. Note that here we used gamma = 0.99 and not 1 like in the Q-table algorithm, as the literature recommends working with a discount factor of 0.9 ≤ γ ≤ 0.99. It probably won't matter much in this specific case, but it's good to get used to this.
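A small worked illustration of what the discount factor means (toy arithmetic, not from the notebook): a reward received k steps in the future is weighted by gamma ** k.
for k in (1, 4, 50):
    print(k, round(0.99 ** k, 3))   # 1: 0.99, 4: 0.961, 50: 0.605, so 4-step games are barely discounted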
End of explanation
tf.reset_default_graph()
tf.set_random_seed(seed)
qnn = QNetwork(hidden_layers_size = [20, 20],
gamma = gamma,
learning_rate = 0.001)
memory = ReplayMemory(memory_size)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
Explanation: Initializing the Q-network:
End of explanation
r_list = []
c_list = [] ## Same as r_list, but for storing the cost
counter = 0 ## Will be used to trigger network training
for g in range(num_of_games):
game_over = False
game.reset()
total_reward = 0
while not game_over:
counter += 1
state = np.copy(game.board)
if random.random() < epsilon:
action = random.randint(0, 3)
else:
pred = np.squeeze(sess.run(qnn.output,
feed_dict = {qnn.states: np.expand_dims(game.board, axis = 0)}))
action = np.argmax(pred)
reward, game_over = game.play(action)
total_reward += reward
next_state = np.copy(game.board)
memory.append({'state': state, 'action': action,
'reward': reward, 'next_state': next_state,
'game_over': game_over})
if counter % batch_size == 0:
## Network training
batch = memory.sample(batch_size)
q_target = sess.run(qnn.output, feed_dict = {qnn.states: np.array(list(map(lambda x: x['next_state'], batch)))})
terminals = np.array(list(map(lambda x: x['game_over'], batch)))
for i in range(terminals.size):
if terminals[i]:
## Remember we use the network's own predictions for the next state while calculating loss.
## Terminal states have no Q-value, and so we manually set them to 0, as the network's predictions
## for these states is meaningless
q_target[i] = np.zeros(game.board_size)
_, cost = sess.run([qnn.optimizer, qnn.cost],
feed_dict = {qnn.states: np.array(list(map(lambda x: x['state'], batch))),
qnn.r: np.array(list(map(lambda x: x['reward'], batch))),
qnn.enum_actions: np.array(list(enumerate(map(lambda x: x['action'], batch)))),
qnn.q_target: q_target})
c_list.append(cost)
r_list.append(total_reward)
print('Final cost: {}'.format(c_list[-1]))
Explanation: Training the network. Compare this code to the above Q-table training.
End of explanation
for i in range(2):
for j in range(2):
for k in range(2):
for l in range(2):
b = np.array([i, j, k, l])
if len(np.where(b == 0)[0]) != 0:
pred = np.squeeze(sess.run(qnn.output,
feed_dict = {qnn.states: np.expand_dims(b, axis = 0)}))
pred = list(map(lambda x: round(x, 3), pred))
action = np.argmax(pred)
print('board: {b} \tpredicted Q values: {p} \tbest action: {a} \tcorrect action? {s}'
.format(b = b, p = pred, a = action, s = b[action] == 0))
Explanation: Again, let's verify that the agent indeed learned a correct strategy:
End of explanation
plt.figure(figsize = (14, 7))
plt.plot(range(len(r_list)), r_list)
plt.xlabel('Games played')
plt.ylabel('Reward')
plt.show()
Explanation: Here too, we see that the agent learned a correct strategy, and again, the Q values are not what we would've expected.
Let's plot the rewards the agent received:
End of explanation
plt.figure(figsize = (14, 7))
plt.plot(range(len(r_list)), r_list)
plt.xlabel('Game played')
plt.ylabel('Reward')
plt.ylim(-2, 4.5)
plt.show()
Explanation: Zooming in, so we can compare with the Q-table algorithm:
End of explanation
plt.figure(figsize = (14, 7))
plt.plot(range(len(c_list)), c_list)
plt.xlabel('Trainings')
plt.ylabel('Cost')
plt.show()
Explanation: Let's plot the cost too. Remember that here the x-axis reflects the number of trainings, and not the number of games. The number of trainings depends on the number of actions taken, and the agent can take any number of actions during each game.
End of explanation |
14,615 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: A simple regression model using Keras with Cloud TPUs
Overview
This notebook demonstrates using Cloud TPUs in colab to build a simple regression model using y = sin(x) to predict y for given x.
This model generates huge amounts of data and demonstrates the training performance advantage of Cloud TPU.
The model trains for 10 epochs with 512 steps per epoch on TPU and completes in approximately 2 minutes.
This notebook is hosted on GitHub. To view it in its original repository, after opening the notebook, select File > View on GitHub.
Learning objectives
In this Colab, you will learn how to
Step2: Resolve TPU Address
Step3: Creating data for y = sin(x).
Sine wave data is created using numpy. And to make it more difficult, random noise is added to the sine wave.
Step4: Define model
Step5: Compiling the model with a distribution strategy
To make the model usable by a TPU, we first must create and compile it using a distribution strategy.
Step6: Training of the model on TPU
Step7: Prediction
For predictions, same model weights are being used. | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: <a href="https://colab.research.google.com/github/tensorflow/tpu/blob/0e3cfbdfbcf321681c1ac1c387baf7a1a41d8d21/tools/colab/regression_sine_data_with_keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import math
import os
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
print(tf.__version__)
import distutils
if distutils.version.LooseVersion(tf.__version__) < '2.0':
raise Exception('This notebook is compatible with TensorFlow 2.0 or higher.')
Explanation: A simple regression model using Keras with Cloud TPUs
Overview
This notebook demonstrates using Cloud TPUs in colab to build a simple regression model using y = sin(x) to predict y for given x.
This model generates huge amounts of data and demonstrates the training performance advantage of Cloud TPU.
The model trains for 10 epochs with 512 steps per epoch on TPU and completes in approximately 2 minutes.
This notebook is hosted on GitHub. To view it in its original repository, after opening the notebook, select File > View on GitHub.
Learning objectives
In this Colab, you will learn how to:
* Create and compile a Keras model on TPU with a distribution strategy.
* Train, evaluate, and and generate predictions on Cloud TPU.
* Compare the performance of a TPU versus a CPU.
Instructions
<h3> Train on TPU <a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a></h3>
On the main menu, click Runtime and select Change runtime type. Set "TPU" as the hardware accelerator.
Click Runtime again and select Runtime > Run All. You can also run the cells manually with Shift-ENTER.
Imports
End of explanation
use_tpu = True #@param {type:"boolean"}
if use_tpu:
assert 'COLAB_TPU_ADDR' in os.environ, 'Missing TPU; did you request a TPU in Notebook Settings?'
if 'COLAB_TPU_ADDR' in os.environ:
TPU_ADDRESS = 'grpc://{}'.format(os.environ['COLAB_TPU_ADDR'])
else:
TPU_ADDRESS = ''
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU_ADDRESS)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))
Explanation: Resolve TPU Address
End of explanation
data_size = 2**18
x = np.linspace(0, 6, data_size, dtype=np.float32)
np.random.shuffle(x)
y = -20 * np.sin(x, dtype=np.float32) + 3 + np.random.normal(0, 1, (data_size,)).astype(np.float32)
x = x.reshape(-1, 1)
y = y.reshape(-1, 1)
train_x, test_x = x[:data_size//2], x[data_size//2:]
train_y, test_y = y[:data_size//2], y[data_size//2:]
plt.plot(x, y, 'bo')
Explanation: Creating data for y = sin(x).
Sine wave data is created using numpy. And to make it more difficult, random noise is added to the sine wave.
End of explanation
def get_model():
return tf.keras.models.Sequential([
tf.keras.layers.Dense(1, input_shape=(1,)),
tf.keras.layers.Dense(200, activation='sigmoid'),
tf.keras.layers.Dense(80, activation='sigmoid'),
tf.keras.layers.Dense(1)
])
Explanation: Define model:
Model will have an input layer where it takes in the x coordinate, two densely connected layers with 200 and 80 nodes, and an output layer where it returns the predicted y value.
End of explanation
strategy = tf.distribute.experimental.TPUStrategy(resolver)
with strategy.scope():
model = get_model()
model.compile(optimizer=tf.keras.optimizers.SGD(.01),
loss='mean_squared_error',
metrics=['mean_squared_error'])
Explanation: Compiling the model with a distribution strategy
To make the model usable by a TPU, we first must create and compile it using a distribution strategy.
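For comparison, a sketch added here (not part of the original notebook): the same model can be compiled without any distribution strategy when you just want a CPU/GPU baseline to time against.
cpu_model = get_model()
cpu_model.compile(optimizer=tf.keras.optimizers.SGD(.01),
                  loss='mean_squared_error',
                  metrics=['mean_squared_error'])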
End of explanation
model.fit(train_x, train_y, epochs=10, steps_per_epoch=512)
Explanation: Training of the model on TPU
End of explanation
predictions = model.predict(test_x)
plt.plot(test_x, predictions, 'ro')
Explanation: Prediction
For predictions, same model weights are being used.
End of explanation |
14,616 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TF-Agents Authors.
Step1: DQN C51/Rainbow
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: ハイパーパラメータ
Step3: 環境
前回のように環境を読み込みます。1つはトレーニング用で、もう1つは評価用です。ここでは、最大報酬が200ではなく500であるCartPole-v1(DQNチュートリアルではCartPole-v0を使用)を使用します。
Step4: エージェント
C51は、DQNに基づくQ学習アルゴリズムです。 DQNと同様に、個別の行動空間がある任意の環境で使用できます。
C51とDQNの主な違いは、各状態と行動のペアのQ値を単に予測するのではなく、C51はQ値の確率分布のヒストグラムモデルを予測することです。
単なる推定値ではなく分布を学習することで、アルゴリズムはトレーニング時に安定性を維持でき、最終的なパフォーマンスが向上します。これは、単一の平均では正確な画像が得られない、バイモーダルまたはマルチモーダルの値の分布がある場合に特に有用です。
C51は値ではなく確率分布でトレーニングするために、損失関数を計算するために複雑な分布計算を実行する必要がありますが、これらはすべてTF-Agentで処理されます。
C51 Agentを作成するには、まずCategoricalQNetworkを作成する必要があります。CategoricalQNetworkのAPIは、QNetworkのAPIと同じですが、引数num_atomsが追加されます。これは、確率分布の推定におけるサポートポイントの数を表します。(上の画像には10個のサポートポイントが含まれており、それぞれが青い縦棒で表されています。)名前からわかるように、デフォルトのatom数は51です。
Step5: また、先ほど作成したネットワークをトレーニングするためのoptimizerと、ネットワークが更新された回数を追跡するためのtrain_step_counter変数も必要です。
簡単なDqnAgentとのもう1つの重要な違いは、引数としてmin_q_valueとmax_q_valueを指定する必要があることです。これらは、サポートの最も極端な値(51のatomの最も極端な値)を指定します。特定の環境に合わせてこれらを適切に選択してください。 ここでは、-20と20を使用します。
Step6: 最後に注意すべき点は、$n$ = 2のnステップ更新を使用する引数も追加されていることです。1ステップのQ学習($n$ = 1)では、(ベルマン最適化方程式に基づいて)1ステップの戻り値を使用して、その時点のタイムステップと次のタイムステップでのQ値間の誤差のみを計算します。 1ステップの戻り値は次のように定義されます。
$G_t = R_{t + 1} + \gamma V(s_{t + 1})$
$V(s) = \max_a{Q(s, a)}$と定義します。
nステップ更新では、標準の1ステップの戻り関数は$n$倍に拡張されます。
$G_t^n = R_{t + 1} + \gamma R_{t + 2} + \gamma^2 R_{t + 3} + \dots + \gamma^n V(s_{t + n})$
nステップ更新により、エージェントは将来からブートストラップできるようになり、正しい$n$値を使用することで、多くの場合、学習が迅速になります。
C51とnステップの更新は、多くの場合、優先再生と組み合わされてRainbow agentの中核となっていますが、優先再生の実装による測定可能な改善は見られませんでした。さらに、C51エージェントをnステップの更新のみと組み合わせた場合、エージェントは、テストしたAtari環境のサンプルで他のRainbowエージェントと同様に機能することが明らかになっています。
指標と評価
ポリシーの評価に使用される最も一般的なメトリックは、平均利得です。利得は、エピソードの環境でポリシーを実行中に取得した報酬の総計です。通常、数エピソードが実行され、平均利得が生成されます。
Step7: データ収集
DQNチュートリアルと同様に、ランダムポリシーを使用して再生バッファーと初期データ収集を設定します。
Step8: エージェントのトレーニング
トレーニングループには、環境からのデータ収集とエージェントのネットワークの最適化の両方が含まれます。途中で、エージェントのポリシーを時々評価して、状況を確認します。
以下の実行には7分ほどかかります。
Step9: 可視化
プロット
利得とグローバルステップをプロットして、エージェントのパフォーマンスを確認できます。Cartpole-v1では、棒が立ったままでいるタイムステップごとに環境は+1の報酬を提供します。最大ステップ数は500であるため、可能な最大利得値も500です。
Step11: 動画
各ステップで環境をレンダリングすると、エージェントのパフォーマンスを可視化できます。その前に、このColabに動画を埋め込む関数を作成しましょう。
Step12: 次のコードは、いくつかのエピソードに渡るエージェントのポリシーを可視化します。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
!sudo apt-get update
!sudo apt-get install -y xvfb ffmpeg freeglut3-dev
!pip install 'imageio==2.4.0'
!pip install pyvirtualdisplay
!pip install tf-agents
!pip install pyglet
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.categorical_dqn import categorical_dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import categorical_q_network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
Explanation: DQN C51/Rainbow
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/agents/tutorials/9_c51_tutorial"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/agents/tutorials/9_c51_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a>
</td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/agents/tutorials/9_c51_tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/agents/tutorials/9_c51_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
Introduction
This example shows how to train a Categorical DQN (C51) agent on the Cartpole environment using the TF-Agents library.
Make sure you take a look through the DQN tutorial first. This tutorial assumes familiarity with the DQN tutorial and mainly focuses on the differences between DQN and C51.
Setup
If you haven't installed tf-agents yet, run:
End of explanation
env_name = "CartPole-v1" # @param {type:"string"}
num_iterations = 15000 # @param {type:"integer"}
initial_collect_steps = 1000 # @param {type:"integer"}
collect_steps_per_iteration = 1 # @param {type:"integer"}
replay_buffer_capacity = 100000 # @param {type:"integer"}
fc_layer_params = (100,)
batch_size = 64 # @param {type:"integer"}
learning_rate = 1e-3 # @param {type:"number"}
gamma = 0.99
log_interval = 200 # @param {type:"integer"}
num_atoms = 51 # @param {type:"integer"}
min_q_value = -20 # @param {type:"integer"}
max_q_value = 20 # @param {type:"integer"}
n_step_update = 2 # @param {type:"integer"}
num_eval_episodes = 10 # @param {type:"integer"}
eval_interval = 1000 # @param {type:"integer"}
Explanation: Hyperparameters
End of explanation
train_py_env = suite_gym.load(env_name)
eval_py_env = suite_gym.load(env_name)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
Explanation: Environment
Load the environment as before, with one for training and one for evaluation. Here we use CartPole-v1 (instead of CartPole-v0 as in the DQN tutorial), which has a larger max reward of 500 rather than 200.
End of explanation
categorical_q_net = categorical_q_network.CategoricalQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
num_atoms=num_atoms,
fc_layer_params=fc_layer_params)
Explanation: Agent
C51 is a Q-learning algorithm based on DQN. Like DQN, it can be used on any environment with a discrete action space.
The main difference between C51 and DQN is that rather than simply predicting the Q-value for each state-action pair, C51 predicts a histogram model for the probability distribution of the Q-value.
By learning the distribution rather than just the expected value, the algorithm is able to stay more stable during training, which leads to improved final performance. This is particularly useful for bimodal or multimodal value distributions, where a single average does not provide an accurate picture.
In order to train on probability distributions rather than on values, C51 must perform some complex distributional computations to calculate its loss function, but all of this is taken care of by TF-Agents.
To create a C51 agent, we first need to create a CategoricalQNetwork. The API of CategoricalQNetwork is the same as that of QNetwork, except that there is an additional argument num_atoms. This represents the number of support points in our probability distribution estimates. (The image above includes 10 support points, each represented by a vertical blue bar.) As you can tell from the name, the default number of atoms is 51.
End of explanation
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)
train_step_counter = tf.Variable(0)
agent = categorical_dqn_agent.CategoricalDqnAgent(
train_env.time_step_spec(),
train_env.action_spec(),
categorical_q_network=categorical_q_net,
optimizer=optimizer,
min_q_value=min_q_value,
max_q_value=max_q_value,
n_step_update=n_step_update,
td_errors_loss_fn=common.element_wise_squared_loss,
gamma=gamma,
train_step_counter=train_step_counter)
agent.initialize()
Explanation: We also need an optimizer to train the network we just created, and a train_step_counter variable to keep track of how many times the network has been updated.
One other significant difference from the plain DqnAgent is that we now need to specify min_q_value and max_q_value as arguments. These specify the most extreme values of the support (in other words, the most extreme of the 51 atoms). Make sure to choose these appropriately for your particular environment; here we use -20 and 20.
End of explanation
#@test {"skip": true}
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec())
compute_avg_return(eval_env, random_policy, num_eval_episodes)
# Please also see the metrics module for standard implementations of different
# metrics.
Explanation: One last thing to note is that we also added an argument to use n-step updates with $n$ = 2. In single-step Q-learning ($n$ = 1), we only compute the error between the Q-values at the current time step and the next time step using the single-step return (based on the Bellman optimality equation). The single-step return is defined as:
$G_t = R_{t + 1} + \gamma V(s_{t + 1})$
where we define $V(s) = \max_a{Q(s, a)}$.
N-step updates involve expanding the standard single-step return function $n$ times:
$G_t^n = R_{t + 1} + \gamma R_{t + 2} + \gamma^2 R_{t + 3} + \dots + \gamma^n V(s_{t + n})$
N-step updates enable the agent to bootstrap from further in the future, and with the right value of $n$, this often leads to faster learning.
Although C51 and n-step updates are often combined with prioritized replay to form the core of the Rainbow agent, no measurable improvement was seen from implementing prioritized replay. Moreover, when combining the C51 agent with n-step updates alone, the agent performs as well as other Rainbow agents on the sample of Atari environments that was tested.
Metrics and Evaluation
The most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode; usually several episodes are run and the average return is computed.
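A tiny numeric illustration of the 2-step return defined above (made-up reward and bootstrap values, purely illustrative):
demo_gamma = 0.99
demo_rewards = [1.0, 1.0]        # R_{t+1}, R_{t+2}
demo_bootstrap = 10.0            # stands in for max_a Q(s_{t+2}, a)
print(demo_rewards[0] + demo_gamma * demo_rewards[1] + demo_gamma ** 2 * demo_bootstrap)  # ≈ 11.791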
End of explanation
#@test {"skip": true}
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_capacity)
def collect_step(environment, policy):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
replay_buffer.add_batch(traj)
for _ in range(initial_collect_steps):
collect_step(train_env, random_policy)
# This loop is so common in RL, that we provide standard implementations of
# these. For more details see the drivers module.
# Dataset generates trajectories with shape [BxTx...] where
# T = n_step_update + 1.
dataset = replay_buffer.as_dataset(
num_parallel_calls=3, sample_batch_size=batch_size,
num_steps=n_step_update + 1).prefetch(3)
iterator = iter(dataset)
Explanation: Data Collection
As in the DQN tutorial, set up the replay buffer and the initial data collection with the random policy.
End of explanation
#@test {"skip": true}
try:
%%time
except:
pass
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience)
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss.loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1:.2f}'.format(step, avg_return))
returns.append(avg_return)
Explanation: Training the agent
The training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing.
The following will take around 7 minutes to run.
End of explanation
#@test {"skip": true}
steps = range(0, num_iterations + 1, eval_interval)
plt.plot(steps, returns)
plt.ylabel('Average Return')
plt.xlabel('Step')
plt.ylim(top=550)
Explanation: Visualization
Plots
We can plot return versus global steps to see the performance of our agent. In Cartpole-v1, the environment gives a reward of +1 for every time step the pole stays up, and since the maximum number of steps is 500, the maximum possible return is also 500.
End of explanation
def embed_mp4(filename):
Embeds an mp4 file in the notebook.
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
Explanation: Videos
It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab.
End of explanation
num_episodes = 3
video_filename = 'imageio.mp4'
with imageio.get_writer(video_filename, fps=60) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = agent.policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
embed_mp4(video_filename)
Explanation: The following code visualizes the agent's policy for a few episodes.
End of explanation |
14,617 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Make animations for webpage
Create html versions of some animations to be uploaded to the webpage. Links from the pdf version of the book will go to these versions for readers who are only reading the pdf.
Note that make_html_on_master.py will copy everything from html_animations into build_html when creating webpage version.
Step2: Acoustics
Animation to link from Acoustics.ipynb.
Step5: Burgers
Animations to link from Burgers.ipynb. | Python Code:
%matplotlib inline
from IPython.display import FileLink
Explanation: Make animations for webpage
Create html versions of some animations to be uploaded to the webpage. Links from the pdf version of the book will go to these versions for readers who are only reading the pdf.
Note that make_html_on_master.py will copy everything from html_animations into build_html when creating webpage version.
End of explanation
from exact_solvers import acoustics_demos
def make_bump_animation_html(numframes, file_name):
video_html = acoustics_demos.bump_animation(numframes)
f = open(file_name,'w')
f.write('<html>\n')
file_name = 'acoustics_bump_animation.html'
    descr = """<h1>Acoustics Bump Animation</h1>
This animation is to accompany
<a href="http://www.clawpack.org/riemann_book/html/Acoustics.html">this
notebook</a>,\n from the book <a
href="http://www.clawpack.org/riemann_book/index.html">Riemann Problems and
Jupyter Solutions</a>\n"""
f.write(descr)
f.write("<p>")
f.write(video_html)
print("Created ", file_name)
f.close()
file_name = 'html_animations/acoustics_bump_animation.html'
anim = make_bump_animation_html(numframes=50, file_name=file_name)
FileLink(file_name)
Explanation: Acoustics
Animation to link from Acoustics.ipynb.
End of explanation
from exact_solvers import burgers_demos
from importlib import reload
reload(burgers_demos)
video_html = burgers_demos.bump_animation(numframes = 50)
file_name = 'html_animations/burgers_animation0.html'
f = open(file_name,'w')
f.write('<html>\n')
descr = """<h1>Burgers' Equation Animation</h1>
This animation is to accompany
<a href="http://www.clawpack.org/riemann_book/html/Burgers.html">this
notebook</a>,\n from the book <a
href="http://www.clawpack.org/riemann_book/index.html">Riemann Problems and
Jupyter Solutions</a>\n
<p>
Burgers' equation with hump initial data, evolving into a shock wave
followed by a rarefaction wave."""
f.write(descr)
f.write("<p>")
f.write(video_html)
print("Created ", file_name)
f.close()
FileLink(file_name)
def make_burgers_animation_html(ql, qm, qr, file_name):
video_html = burgers_demos.triplestate_animation(ql,qm,qr,numframes=50)
f = open(file_name,'w')
f.write('<html>\n')
    descr = """<h1>Burgers' Equation Animation</h1>
This animation is to accompany
<a href="http://www.clawpack.org/riemann_book/html/Burgers.html">this
notebook</a>,\n from the book <a
href="http://www.clawpack.org/riemann_book/index.html">Riemann Problems and
Jupyter Solutions</a>\n
<p>
Burgers' equation with three constant states as initial data,\n
ql = %.1f, qm = %.1f, qr = %.1f""" % (ql,qm,qr)
f.write(descr)
f.write("<p>")
f.write(video_html)
print("Created ", file_name)
f.close()
file_name = 'html_animations/burgers_animation1.html'
make_burgers_animation_html(4., 2., 0., file_name)
FileLink(file_name)
file_name = 'html_animations/burgers_animation2.html'
make_burgers_animation_html(4., -1.5, 0.5, file_name)
FileLink(file_name)
file_name = 'html_animations/burgers_animation3.html'
make_burgers_animation_html(-1., 3., -2., file_name)
FileLink(file_name)
Explanation: Burgers
Animations to link from Burgers.ipynb.
End of explanation |
14,618 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with data 2017. Class 4
Contact
Javier Garcia-Bernardo
[email protected]
0. Structure
Stats
Definitions
What's a p-value?
One-tailed test vs two-tailed test
Count vs expected count (binomial test)
Independence between factors
Step1: 3. Read tables from websites
pandas is cool
- Use pd.read_html(url)
- It returns a list of all tables in the website
- It tries to guess the encoding of the website, but with no much success.
Step2: 4. Parse dates
pandas is cool
- Use parse_dates=[columns] when reading the file
- It parses the date
4.1. Use parse_dates when reading the file
Step3: 4.2. You can now filter by date
Step4: 4.3. And still extract columns of year and month
Step5: 4.4. You can resample the data with a specific frequency
Very similar to groupby.
Groups the data with a specific frequency
"A" = End of year
"B" = Business day
others
Step6: 4.5 And of course plot it with a line plot | Python Code:
import pandas as pd
import numpy as np
import pylab as plt
import seaborn as sns
from scipy.stats import chi2_contingency,ttest_ind
#This allows us to use R
%load_ext rpy2.ipython
#Visualize in line
%matplotlib inline
#Be able to plot images saved in the hard drive
from IPython.display import Image,display
#Make the notebook wider
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
Explanation: Working with data 2017. Class 4
Contact
Javier Garcia-Bernardo
[email protected]
0. Structure
Stats
Definitions
What's a p-value?
One-tailed test vs two-tailed test
Count vs expected count (binomial test)
Independence between factors: ($\chi^2$ test)
In-class exercises to melt, pivot, concat, merge, groupby and plot.
Read data from websited
Time series
End of explanation
df = pd.read_html("https://piie.com/summary-economic-sanctions-episodes-1914-2006",encoding="UTF-8")
print(type(df),len(df))
df
df[0].head(10)
df[0].columns
df = pd.read_html("https://piie.com/summary-economic-sanctions-episodes-1914-2006",encoding="UTF-8")
df = df[0]
print(df.columns)
df.columns = ['Year imposed', 'Year ended', 'Principal sender',
'Target country', 'Policy goal',
'Success score (scale 1 to 16)',
'Cost to target (percent of GNP)']
df = df.replace('negligible', 0)
df = df.replace("–","-",regex=True) #the file uses long dashes
df.to_csv("data/economic_sanctions.csv",index=None,sep="\t")
df = pd.read_csv("data/economic_sanctions.csv",sep="\t",na_values=["-","Ongoing"])
df["Duration"] = df["Year ended"] - df["Year imposed"]
df.head()
sns.lmplot(x="Duration",y="Cost to target (percent of GNP)",data=df,fit_reg=False,hue="Year imposed",legend=False,palette="YlOrBr")
plt.ylim((-2,10))
plt.legend(loc="center left", bbox_to_anchor=(1, 0.5),ncol=4)
Explanation: 3. Read tables from websites
pandas is cool
- Use pd.read_html(url)
- It returns a list of all tables in the website
- It tries to guess the encoding of the website, but without much success.
End of explanation
df = pd.read_csv("data/exchange-rate-twi-may-1970-aug-1.tsv",sep="\t",parse_dates=["Month"],skipfooter=2)
df.head()
Explanation: 4. Parse dates
pandas is cool
- Use parse_dates=[columns] when reading the file
- It parses the date
4.1. Use parse_dates when reading the file
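If a file was already read without parse_dates, the same conversion can be done afterwards with a generic pandas idiom (a sketch added here, not part of the original class notes):
df["Month"] = pd.to_datetime(df["Month"])   # convert an existing string column to datetime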
End of explanation
#filter by time
df_after1980 = df.loc[df["Month"] > "1980-05-02"] #year-month-date
df_after1980.columns = ["Date","Rate"]
df_after1980.head()
Explanation: 4.2. You can now filter by date
End of explanation
#make columns with year and month (useful for models)
df_after1980["Year"] = df_after1980["Date"].apply(lambda x: x.year)
df_after1980["Month"] = df_after1980["Date"].apply(lambda x: x.month)
df_after1980.head()
Explanation: 4.3. And still extract columns of year and month
End of explanation
#resample
df_after1980_resampled = df_after1980.resample("A",on="Date").mean()
display(df_after1980_resampled.head())
df_after1980_resampled = df_after1980_resampled.reset_index()
df_after1980_resampled.head()
Explanation: 4.4. You can resample the data with a specific frequency
Very similar to groupby.
Groups the data with a specific frequency
"A" = End of year
"B" = Business day
others: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases
Then you tell pandas to apply a function to the group (mean/max/median...)
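For example, a small variation on the cell above (my own illustration, not in the original material): monthly maxima instead of yearly means.
monthly_max = df_after1980.resample("M", on="Date").max()   # "M" = month end
monthly_max.head()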
End of explanation
#Let's visualize it
plt.figure(figsize=(6,4))
plt.plot(df_after1980["Date"],df_after1980["Rate"],label="Before resampling")
plt.plot(df_after1980_resampled["Date"],df_after1980_resampled["Rate"],label="After resampling")
plt.xlabel("Time")
plt.ylabel("Rate")
plt.legend()
plt.show()
Explanation: 4.5 And of course plot it with a line plot
End of explanation |
14,619 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (80, 100)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# create a set and sort to get a list
sorted_vocabs = sorted(set(text))
vocab_to_int = { vocab:idx for idx,vocab in enumerate(sorted_vocabs)}
int_to_vocab = {idx:vocab for vocab,idx in vocab_to_int.items()}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
return {'.':"||period||",',':"||comma||",'"':"||quotation_mark||",
';':"||semicolon||",'!':"||exclamation_mark||",'?':"||question_mark||",
'(':"||left_parenthese||",')':"||right_parenthese||",'--':"||dash||",'\n':"||return||"}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
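A quick sketch of how this lookup gets applied during preprocessing (illustrative only; the sample line is made up, and the real substitution happens inside helper.preprocess_and_save_data):
sample = "Moe_Szyslak: Hey, what can I do for you?"
for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.lower().split()[:8])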
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
#with tf.name_scope("inputs"):
inputs = tf.placeholder(dtype=tf.int32, shape =(None,None),name="input")
targets = tf.placeholder(dtype= tf.int32, shape= (None,None),name="targets")
learning_rate = tf.placeholder(dtype=tf.float32,name="learning_rate")
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
# just 2 lstm layer
num_layer = 2
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.5)
cell = tf.contrib.rnn.MultiRNNCell([drop]* num_layer)
# no dropout at first
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name="initial_state")
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs,final_state = tf.nn.dynamic_rnn(cell,inputs,dtype = tf.float32)
final_state = tf.identity(final_state,name ="final_state")
return outputs,final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed_dim = 300
embed = get_embed(input_data,vocab_size,embed_dim)
outputs,final_state = build_rnn(cell,embed)
logits = tf.contrib.layers.fully_connected(outputs,num_outputs = vocab_size,activation_fn=None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_batches = len(int_text)//(batch_size*seq_length)
inputs = int_text[:n_batches*batch_size*seq_length]
# assuming the last word has index >n_batches*batch_size*seq_length
targets = int_text[1:n_batches*batch_size*seq_length+1]
inputs = np.array(inputs).reshape([n_batches,1,batch_size,seq_length])
targets = np.array(targets).reshape([n_batches,1,batch_size,seq_length])
return np.concatenate([inputs,targets],axis=1)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 32
# RNN Size
rnn_size = 512  # assumed value; not set in the original cell
# Embedding Dimension Size
embed_dim = 300  # assumed value; build_nn above also hardcodes 300
# Sequence Length
seq_length = 16
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 20
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
inputs= loaded_graph.get_tensor_by_name("input:0")
initial_state=loaded_graph.get_tensor_by_name("initial_state:0")
final_state = loaded_graph.get_tensor_by_name("final_state:0")
probs = loaded_graph.get_tensor_by_name("probs:0")
return inputs, initial_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
#idx = np.argmax(probabilities)
n=10
#get the top n indice of words
top_n = probabilities.argsort()[-n:]
p_top_n = probabilities[top_n]
#normalization
p_top_n = p_top_n / np.sum(p_top_n)
idx = np.random.choice(top_n,1,p=p_top_n)
return int_to_vocab[idx[0]]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
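# Illustrative toy run (not part of the original notebook) of the same top-n
# weighted sampling idea used in pick_word(); the probability values are made up.
toy_probs = np.array([0.05, 0.10, 0.02, 0.40, 0.03, 0.25, 0.15])
toy_top_n = toy_probs.argsort()[-3:]                          # indices of the 3 most likely words
toy_p = toy_probs[toy_top_n] / np.sum(toy_probs[toy_top_n])   # renormalize to sum to 1
print(np.random.choice(toy_top_n, 1, p=toy_p))                # sample one of the top-3 word ids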
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
14,620 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Group by millisecond and average
Step1: The power-consumption spikes we are interested in end somewhere around the 10000th millisecond (there are five of them in a row, since we blinked the flashlight five times).
Step6: Functions for parsing events from the log and finding the synchronization point | Python Code:
df_r1000 = df.groupby(df.index//1000).mean()
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r1000.plot(ax=ax)
Explanation: Group by millisecond and average:
End of explanation
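# Toy illustration (not from the original notebook) of the down-sampling idiom used above:
# integer-dividing the index pools every 5 consecutive samples into one bucket and averages them
# (pd/np are the pandas/numpy aliases used throughout this notebook).
toy = pd.Series(np.arange(10, dtype=float))
print(toy.groupby(toy.index // 5).mean())   # bucket means: 2.0 and 7.0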
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r1000[:12000].plot(ax=ax)
Explanation: The power-consumption spikes we are interested in end somewhere around the 10000th millisecond (there are five of them in a row, since we blinked the flashlight five times).
End of explanation
import numpy as np
import pandas as pd
from scipy import signal
from scipy import interpolate
from scipy.stats import pearsonr
import logging
from datetime import datetime  # needed by torch_status() below
log = logging.getLogger(__name__)
def torch_status(lines):
Parse torch statuses from lines
for line in lines:
if "newStatus=2" in line:
yield (
datetime.strptime(
line.split()[1], "%H:%M:%S.%f"),
1)
elif "newStatus=1" in line:
yield (
datetime.strptime(
line.split()[1], "%H:%M:%S.%f"),
0)
def parse_torch_events(filename, sps=1000):
Parse torch events from file, considering target sample rate.
Offset is the sample number
log.info("Parsing torch events...")
with open(filename) as eventlog:
df = pd.DataFrame.from_records(
torch_status(eventlog), columns=["offset", "status"])
df["offset"] = df["offset"].map(
lambda x: int(np.round((x - df["offset"][0]).total_seconds() * sps)))
return df
def ref_signal(torch, trailing_zeros=1000):
Generate square reference signal with trailing zeroes
log.info("Generating ref signal...")
f = interpolate.interp1d(torch["offset"], torch["status"], kind="zero")
X = np.linspace(0, torch["offset"].values[-1], torch["offset"].values[-1])
return np.append(f(X), np.zeros(trailing_zeros))
def cross_correlate(sig, ref, first=30000):
Calculate cross-correlation with lag. Take only first n lags.
log.info("Calculating cross-correlation...")
lags = np.arange(len(sig) - len(ref))
if len(lags) > first:
lags = lags[:first]
return pd.DataFrame.from_records(
[pearsonr(sig[lag:lag+len(ref)], ref) for lag in lags],
columns=["corr", "p_value"])
def sync(sig, eventlog, sps=1000, trailing_zeros=1000, first=30000):
rs = ref_signal(
parse_torch_events(eventlog, sps=sps),
trailing_zeros=trailing_zeros)
cc = cross_correlate(sig, rs)
sync_point = np.argmax(cc["corr"])
if cc["p_value"][sync_point] > 0.05:
raise RuntimeError("P-value is too big: %d" % cc["p_value"][sync_point])
log.info(
"Pearson's coef: %d, p-value: %d",
cc["corr"][sync_point],
cc["p_value"][sync_point])
return sync_point
te = parse_torch_events("browser_download.log", sps=1000)
rs = ref_signal(te)
cc = cross_correlate(df_r1000[0], rs)
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
sns.plt.plot(df_r1000[0][:20000], label="signal")
sns.plt.plot(cc["corr"][:20000]*1000 + 500, label="cross-correlation")
sync_point = np.argmax(cc["corr"])  # locate the best-correlation lag before it is used in the plot below
sns.plt.plot(np.append(np.zeros(sync_point), rs * 500 + 500), label="reference")
#sns.plt.plot(cc["p_value"][:20000]*1000, label="p-value")
sns.plt.axvline(sync_point)
ax.legend()
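# Hypothetical end-to-end use of the sync() helper defined above, on the same
# signal column and event log already used in this notebook (illustrative only).
offset = sync(df_r1000[0].values, "browser_download.log", sps=1000)
print("reference signal aligns at sample %d" % offset)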
fig = sns.plt.figure(figsize=(10, 6))
ax = sns.plt.subplot()
sns.plt.scatter(np.arange(0, 30, 2), np.zeros(15), label="One")
sns.plt.scatter(np.arange(1, 31, 2), np.zeros(15), label="Other", color="red")
ax.legend()
Explanation: Functions for parsing events from the log and finding the synchronization point:
End of explanation |
14,621 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature
Step1: Config
Automatically discover the paths to various data folders and compose the project structure.
Step2: Identifier for storing these features on disk and referring to them later.
Step3: Download auxiliary NLTK models and corpora.
Step4: Read Data
Preprocessed and tokenized questions.
Step15: Build Features
Step16: Save features | Python Code:
from pygoose import *
import math
import nltk
Explanation: Feature: WordNet Similarity
Compute the aggregate similarity of two question token sets according to ontological graph distances in WordNet.
Based on the implementation of the paper "Sentence Similarity based on Semantic Nets and Corpus Statistics" by Li et al.
Imports
This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.
End of explanation
project = kg.Project.discover()
Explanation: Config
Automatically discover the paths to various data folders and compose the project structure.
End of explanation
feature_list_id = 'wordnet_similarity'
Explanation: Identifier for storing these features on disk and referring to them later.
End of explanation
nltk.download('brown')
nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')
Explanation: Download auxiliary NLTK models and corpora.
End of explanation
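# Quick sanity check (illustrative, not part of the original feature code) of the
# WordNet primitives the similarity measure below relies on, using the corpora downloaded above.
from nltk.corpus import wordnet as wn
dog, cat = wn.synset('dog.n.01'), wn.synset('cat.n.01')
print(wn.path_similarity(dog, cat))       # path similarity in (0, 1]
print(dog.shortest_path_distance(cat))    # raw shortest-path length in the ontology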
tokens_train = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_train.pickle')
tokens_test = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_test.pickle')
tokens = tokens_train + tokens_test
Explanation: Read Data
Preprocessed and tokenized questions.
End of explanation
brown_freqs = dict()
N = 0
def get_batch_similarities(pairs, _):
from nltk.corpus import brown
from nltk.corpus import wordnet as wn
# Parameters to the algorithm. Currently set to values that was reported
# in the paper to produce "best" results.
ALPHA = 0.2
BETA = 0.45
ETA = 0.4
PHI = 0.2
DELTA = 0.85
######################### word similarity ##########################
def get_best_synset_pair(word_1, word_2):
Choose the pair with highest path similarity among all pairs.
Mimics pattern-seeking behavior of humans.
max_sim = -1.0
synsets_1 = wn.synsets(word_1)
synsets_2 = wn.synsets(word_2)
if len(synsets_1) == 0 or len(synsets_2) == 0:
return None, None
else:
max_sim = -1.0
best_pair = None, None
for synset_1 in synsets_1:
for synset_2 in synsets_2:
sim = wn.path_similarity(synset_1, synset_2)
if sim is not None and sim > max_sim:
max_sim = sim
best_pair = synset_1, synset_2
return best_pair
def length_dist(synset_1, synset_2):
Return a measure of the length of the shortest path in the semantic
ontology (Wordnet in our case as well as the paper's) between two
synsets.
l_dist = 1e9
if synset_1 is None or synset_2 is None:
return 0.0
if synset_1 == synset_2:
# if synset_1 and synset_2 are the same synset return 0
l_dist = 0.0
else:
wset_1 = set([str(x.name()) for x in synset_1.lemmas()])
wset_2 = set([str(x.name()) for x in synset_2.lemmas()])
if len(wset_1.intersection(wset_2)) > 0:
# if synset_1 != synset_2 but there is word overlap, return 1.0
l_dist = 1.0
else:
# just compute the shortest path between the two
l_dist = synset_1.shortest_path_distance(synset_2)
if l_dist is None:
l_dist = 0.0
# normalize path length to the range [0,1]
return math.exp(-ALPHA * l_dist)
def hierarchy_dist(synset_1, synset_2):
Return a measure of depth in the ontology to model the fact that
nodes closer to the root are broader and have less semantic similarity
than nodes further away from the root.
h_dist = 1e9
if synset_1 is None or synset_2 is None:
return h_dist
if synset_1 == synset_2:
# return the depth of one of synset_1 or synset_2
h_dist = max([x[1] for x in synset_1.hypernym_distances()])
else:
# find the max depth of least common subsumer
hypernyms_1 = {x[0]:x[1] for x in synset_1.hypernym_distances()}
hypernyms_2 = {x[0]:x[1] for x in synset_2.hypernym_distances()}
lcs_candidates = set(hypernyms_1.keys()).intersection(
set(hypernyms_2.keys()))
if len(lcs_candidates) > 0:
lcs_dists = []
for lcs_candidate in lcs_candidates:
lcs_d1 = 0
if lcs_candidate in hypernyms_1:
lcs_d1 = hypernyms_1[lcs_candidate]
lcs_d2 = 0
if lcs_candidate in hypernyms_2:
lcs_d2 = hypernyms_2[lcs_candidate]
lcs_dists.append(max([lcs_d1, lcs_d2]))
h_dist = max(lcs_dists)
else:
h_dist = 0
return ((math.exp(BETA * h_dist) - math.exp(-BETA * h_dist)) /
(math.exp(BETA * h_dist) + math.exp(-BETA * h_dist)))
def word_similarity(word_1, word_2):
synset_pair = get_best_synset_pair(word_1, word_2)
return (length_dist(synset_pair[0], synset_pair[1]) *
hierarchy_dist(synset_pair[0], synset_pair[1]))
######################### sentence similarity ##########################
def most_similar_word(word, word_set):
Find the word in the joint word set that is most similar to the word
passed in. We use the algorithm above to compute word similarity between
the word and each word in the joint word set, and return the most similar
word and the actual similarity value.
max_sim = -1.0
sim_word = ""
for ref_word in word_set:
sim = word_similarity(word, ref_word)
if sim > max_sim:
max_sim = sim
sim_word = ref_word
return sim_word, max_sim
def info_content(lookup_word):
Uses the Brown corpus available in NLTK to calculate a Laplace
smoothed frequency distribution of words, then uses this information
to compute the information content of the lookup_word.
global N
if N == 0:
# poor man's lazy evaluation
for sent in brown.sents():
for word in sent:
word = word.lower()
if word not in brown_freqs:
brown_freqs[word] = 0
brown_freqs[word] = brown_freqs[word] + 1
N = N + 1
lookup_word = lookup_word.lower()
n = 0 if lookup_word not in brown_freqs else brown_freqs[lookup_word]
return 1.0 - (math.log(n + 1) / math.log(N + 1))
def semantic_vector(words, joint_words, info_content_norm):
Computes the semantic vector of a sentence. The sentence is passed in as
a collection of words. The size of the semantic vector is the same as the
size of the joint word set. The elements are 1 if a word in the sentence
already exists in the joint word set, or the similarity of the word to the
most similar word in the joint word set if it doesn't. Both values are
further normalized by the word's (and similar word's) information content
if info_content_norm is True.
sent_set = set(words)
semvec = np.zeros(len(joint_words))
i = 0
for joint_word in joint_words:
if joint_word in sent_set:
# if word in union exists in the sentence, s(i) = 1 (unnormalized)
semvec[i] = 1.0
if info_content_norm:
semvec[i] = semvec[i] * math.pow(info_content(joint_word), 2)
else:
# find the most similar word in the joint set and set the sim value
sim_word, max_sim = most_similar_word(joint_word, sent_set)
semvec[i] = PHI if max_sim > PHI else 0.0
if info_content_norm:
semvec[i] = semvec[i] * info_content(joint_word) * info_content(sim_word)
i = i + 1
return semvec
def semantic_similarity(words_1, words_2, info_content_norm):
Computes the semantic similarity between two sentences as the cosine
similarity between the semantic vectors computed for each sentence.
joint_words = set(words_1).union(set(words_2))
vec_1 = semantic_vector(words_1, joint_words, info_content_norm)
vec_2 = semantic_vector(words_2, joint_words, info_content_norm)
return np.dot(vec_1, vec_2.T) / (np.linalg.norm(vec_1) * np.linalg.norm(vec_2))
######################### word order similarity ##########################
def word_order_vector(words, joint_words, windex):
Computes the word order vector for a sentence. The sentence is passed
in as a collection of words. The size of the word order vector is the
same as the size of the joint word set. The elements of the word order
vector are the position mapping (from the windex dictionary) of the
word in the joint set if the word exists in the sentence. If the word
does not exist in the sentence, then the value of the element is the
position of the most similar word in the sentence as long as the similarity
is above the threshold ETA.
wovec = np.zeros(len(joint_words))
i = 0
wordset = set(words)
for joint_word in joint_words:
if joint_word in wordset:
# word in joint_words found in sentence, just populate the index
wovec[i] = windex[joint_word]
else:
# word not in joint_words, find most similar word and populate
# word_vector with the thresholded similarity
sim_word, max_sim = most_similar_word(joint_word, wordset)
if max_sim > ETA:
wovec[i] = windex[sim_word]
else:
wovec[i] = 0
i = i + 1
return wovec
def word_order_similarity(words_1, words_2):
Computes the word-order similarity between two sentences as the normalized
difference of word order between the two sentences.
joint_words = list(set(words_1).union(set(words_2)))
windex = {x[1]: x[0] for x in enumerate(joint_words)}
r1 = word_order_vector(words_1, joint_words, windex)
r2 = word_order_vector(words_2, joint_words, windex)
return 1.0 - (np.linalg.norm(r1 - r2) / np.linalg.norm(r1 + r2))
######################### overall similarity ##########################
def similarity(words_1, words_2, info_content_norm):
Calculate the semantic similarity between two sentences. The last
parameter is True or False depending on whether information content
normalization is desired or not.
return DELTA * semantic_similarity(words_1, words_2, info_content_norm) + \
(1.0 - DELTA) * word_order_similarity(words_1, words_2)
######################### main / test ##########################
return [
[similarity(pair[0], pair[1], False), similarity(pair[0], pair[1], True)]
for pair in pairs
]
similarities = kg.jobs.map_batch_parallel(
tokens,
batch_mapper=get_batch_similarities,
batch_size=1000,
)
similarities = np.array(similarities)
X_train = similarities[:len(tokens_train)]
X_test = similarities[len(tokens_train):]
print('X_train:', X_train.shape)
print('X_test: ', X_test.shape)
Explanation: Build Features
End of explanation
feature_names = [
'wordnet_similarity_raw',
'wordnet_similarity_brown',
]
project.save_features(X_train, X_test, feature_names, feature_list_id)
Explanation: Save features
End of explanation |
14,622 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A significant portion of the checking was done in the Excel file 'manual verification.'
In essence, I searched the state's site (https://apps.state.or.us/cf2/spd/facility_complaints/)
Step1: Five facilities did not correspond. Manual checks show inaccurate online data. | Python Code:
import pandas as pd
import numpy as np
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
scraped_comp = pd.read_csv('../data/scraped/scraped_complaints_3_25.csv')
scraped_comp['abuse_number'] = scraped_comp['abuse_number'].apply(lambda x: x.upper())
manual = pd.read_excel('/Users/fzarkhin/OneDrive - Advance Central Services, Inc/fproj/github/database-story/scraper/manual verification.xlsx', sheetname='All manual')
manual = manual.groupby('name').sum().reset_index()
manual['name']= manual['name'].apply(lambda x: x.strip())
scraped_comp['fac_name']= scraped_comp['fac_name'].apply(lambda x: x.strip())
df = scraped_comp.groupby('fac_name').count().reset_index()[['fac_name','abuse_number']]
merge1 = manual.merge(df, how = 'left', left_on = 'name', right_on='fac_name')
Explanation: A significant portion of the checking was done in the Excel file 'manual verification.'
In essence, I searched the state's site (https://apps.state.or.us/cf2/spd/facility_complaints/) using the same criteria as in the scraper, then copy-pasted the resulting list of totals by facility into a spreadsheet. I summed them up and compared them to what the scraper got me.
There were some differences between the two.
The totals were off by 5. I manually checked. The totals didn't correspond to the actual numbers on the facility pages.
The number of complaints per facility type did not match. I don't know why this is, but I don't see it as a problem because the totals are accurate, and we don't need to know the right facility type from the scraped data.
Four facilities didn't join on 'name.' Checked each one. Each had the right number of complaints scraped.
End of explanation
merge1[merge1['count']!=merge1['abuse_number']].sort_values('abuse_number')#.sum()
manual[manual['name']=='AVAMERE AT SANDY']
scraped_comp[scraped_comp['abuse_number']=='BH116622B']
scraped_comp[scraped_comp['fac_name'].str.contains('FLAGSTONE RETIREME')]
merge2 = manual.merge(df, how = 'right', left_on = 'name', right_on='fac_name')
merge2[merge2['count']!=merge2['abuse_number']].sort_values('count')#.sum()
Explanation: Five facilities did not correspond. Manual checks show inaccurate online data.
End of explanation |
14,623 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Analysis for Car Counts
Guillaume Decerprit, PhD - 1 Nov. 2017
This document describes thought process and findings on data exploration & forecasting of car counts.
Overview of my strategy
At the highest level, two strategies are possible with time series forecasting
Step1: A) Data exploration
Data Structure
The data consists of a multi-dimensional (4D) daily time series with one target value (car count) and 3 associated features. Two of those features are categorical (day of week and cloud/clear) and one is not (weather)
To get some sense of the data, let's display a few basic plots.
Step2: A few remarks
Step3: This was a critical step in the exploratory phase
Step5: Note
Step7: We found the multi-year pseudo-periodicity again (about 280 weeks or ~5 years), which is consistent with the previous remarks. The green curve on the bottom plot is the 'un-trended' one where the 3rd-order polynomial fit was substracted from the car count. One can see that all auto-correlations are gone once un-trended.
This means that we need to incorporate two dominant time scales in our forecaster
Step8: Attach the gradients
Step9: Loss utility functions
Step10: Optimizer
Note
Step12: Deep LSTM Net core functions
Step13: Test and visualize predictions
Step14: Time to Train the Deep Net! (about 6 min for 2000 epochs on a 2015 15'' MBP)
Another tweak that I applied to this Net is to compute the loss only on the last predicted value, not on the entire sequence. This is achieved by setting last_element_only to True in the call to average_rmse_loss().
Usually, the loss is averaged over the full sequence but by only using the last value, I force the Net to learn a representation that is entirely trained to predict one target value from a full sequence, as opposed to being trained to predict one full sequence from another full sequence. My rationale was to force the training to be entirely dedicated to our value of interest, and the intermediate value of the output sequence could be use by the Net to serve as some sort of representation (sort of like some additional units.)
During training, a plot of predicted vs true sequence is shown, along another chart that shows the (training) loss and (test) error vs the training epoch.
Step15: Error Analysis
Hurrah! The Deep Net worked and seems to do OK on forecasting. Let's get some error statistics using the test set.
Step16: Conclusions on the Deep Net approach
It is remarkable to see the convergence happening quickly and firmly given the relatively small data set at hand.
Forecasting performance is acceptable | Python Code:
####################
# test libraries
####################
try:
import mxnet
except ImportError:
!pip2 install mxnet
try:
import seaborn
except ImportError:
!pip2 install seaborn
try:
import sklearn
except ImportError:
!pip2 install sklearn
####################
# necessary imports. !! pip install mxnet if mxnet not installed !!
####################
from __future__ import print_function
import mxnet as mx # !! pip install mxnet if mxnet not installed !!
from mxnet import nd, autograd
import numpy as np
from collections import defaultdict
mx.random.seed(1234)
# ctx = mx.gpu(0)
ctx = mx.cpu(0)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import scipy.fftpack
from scipy import stats
from pandas.tools import plotting
from pandas.tools.plotting import autocorrelation_plot
try:
from sklearn.model_selection import train_test_split
except ImportError:
from sklearn.cross_validation import train_test_split
from numpy import polyfit
from datetime import datetime
sns.set_style('whitegrid')
#sns.set_context('notebook')
sns.set_context('poster')
# Make inline plots vector graphics instead of raster graphics
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'png')
Explanation: Time Series Analysis for Car Counts
Guillaume Decerprit, PhD - 1 Nov. 2017
This document describes thought process and findings on data exploration & forecasting of car counts.
Overview of my strategy
At the highest level, two strategies are possible with time series forecasting:
1) we ignore the true order of time and assume that the target value depends only on the features (day of week, weather etc.) We can then do inferences on car count given the features and use standard supervized-learning techniques
2) we assume correlation between time steps not only exists but more importantly plays a critical role in the value of the target output. We may then use standard methods (ARIMA, Kalman Filters etc.) or more complex ones (Recurrent Neural Networks, RNN)
Data Science is largely an empirical science. After giving some exploratory results I decided to go first with strategy 2) using a Deep LSTM Recurrent Neural Net but was not very successful, so I decided to try 1) with a Random Forest that turned out to work. Not willing to give up too easily, I refined my Deep Net and eventually got interesting results.
Below is a table that summarizes a (tiny) fraction of info on most common methodologies for Time Series Forecasting:
| Method | Pros | Cons |
| ------------- |:-------------:| :-----:|
| ARIMA | robust, fast | does not handle multi-dimensional features, hard to forget trends |
| Kalman | better at capturing trends, handles noise easily | Markov model |
| Deep LSTM RNN | can adapt to changing trends, can handle multi-D features from multiple time steps, can learn extremely complex representations | needs enough data |
In this document, I will go in the same order as my findings and start with the Deep Net, followed by the Random Forest (RF).
It is divided into 4 sections:
A) Data Exploration
B) Deep Net Approach
C) Random Forest Approach
D) Conclusions / Extension
End of explanation
######################################
# MAKE SURE data.csv IS IN THE SAME PATH AS THIS JUPYTER NOTEBOOK
######################################
original_data = pd.read_csv("data.csv", index_col=0)
dict_days_to_int = {'Monday': 1, 'Tuesday': 2, 'Wednesday': 3, 'Thursday': 4, 'Friday': 5, 'Saturday': 6, 'Sunday': 7}
original_data.sort_index(inplace=True)
original_data['date_']=original_data.index
original_data['current_month'] = original_data['date_'].apply(lambda x: pd.Timestamp(x).month)
original_data['current_year'] = original_data['date_'].apply(lambda x: pd.Timestamp(x).year-2009)
original_data['day_of_week_int'] = original_data['day.of.week'].apply(lambda x: dict_days_to_int[x])
original_data['cloudy_or_not_cloudy'] = original_data['cloud.indicator'].apply(lambda x: 0 if x=='clear' else 1)
# For the Random Forest, we bin the car # in small bins of 2 cars. This increases training statistics at the cost of a negligible drop in precision
BIN_SIZE = 2
original_data['bucketed_car_count'] = original_data['car.count'].apply(lambda x: np.floor(x/BIN_SIZE))
full_data = original_data.dropna()
# Split data in cloudy data and non-cloudy data
data_no_clouds = original_data[original_data['cloudy_or_not_cloudy']==0].dropna()
data_clouds = original_data[original_data['cloudy_or_not_cloudy']==1].dropna()
data_no_clouds['previous_bucketed_car_count'] = data_no_clouds['bucketed_car_count'].shift(1).dropna()
plotting.scatter_matrix(full_data.sample(300)[['car.count', 'weather']])
plotting.scatter_matrix(full_data.sample(300)[['car.count', 'day_of_week_int']])
plt.show()
data_no_clouds['car.count.previous'] = data_no_clouds['car.count'].shift(1)
# computes correlation coeffs, normalization is done before their computation
data_no_clouds.loc[:, ['car.count', 'car.count.previous' ,'weather', 'day_of_week_int']].corr()
data_no_clouds = data_no_clouds.dropna()
Explanation: A) Data exploration
Data Structure
The data consists of a multi-dimensional (4D) daily time series with one target value (car count) and 3 associated features. Two of those features are categorical (day of week and cloud/clear) and one is not (weather)
To get some sense of the data, let's display a few basic plots.
End of explanation
full_data.index = full_data.index.to_datetime()
data_no_clouds.index = data_no_clouds.index.to_datetime()
data_clouds.index = data_clouds.index.to_datetime()
#######################
# resampling to weekly data for better visibility
# this doesn't change any of the conclusions on the data
#######################
resampled_full_data = full_data.resample('W')
resampled_no_clouds = data_no_clouds.resample('W')
resampled_clouds = data_clouds.resample('W')
resampled_full_data['car.count'].plot(label='all data')
resampled_no_clouds['car.count'].plot(label='no clouds data')
resampled_clouds['car.count'].plot(label='clouds data')
resampled_full_data['ii'] = resampled_full_data.reset_index().index
resampled_no_clouds['ii'] = resampled_no_clouds.reset_index().index
resampled_no_clouds = resampled_no_clouds.dropna()
poly_fit = polyfit(resampled_full_data['ii'], resampled_full_data['car.count'], 3)
a, b, c, d = poly_fit[0], poly_fit[1], poly_fit[2], poly_fit[3]
poly_fit2 = polyfit(resampled_no_clouds['ii'], resampled_no_clouds['car.count'], 3)
a2, b2, c2, d2 = poly_fit2[0], poly_fit2[1], poly_fit2[2], poly_fit2[3]
resampled_no_clouds['car.count_fit'] = resampled_no_clouds['ii'].apply(lambda x: a2*x*x*x + b2*x*x + c2*x + d2)
resampled_no_clouds['untrended_car_count'] = resampled_no_clouds['car.count'] - resampled_no_clouds['car.count_fit']
resampled_no_clouds['car.count_fit'].plot(label='3rd order poly fit (trend)')
resampled_full_data['car.count_fit'] = resampled_full_data['ii'].apply(lambda x: a*x*x*x + b*x*x + c*x + d)
resampled_full_data['untrended_car_count'] = resampled_full_data['car.count'] - resampled_full_data['car.count_fit']
print(full_data.head(3))
plt.legend(loc='upper right')
plt.show()
Explanation: A few remarks:
Car count is roughly between 0 and 250. No missing values were observed. Interestingly, weather doesn't seem to have much influence on car count (no clear correlation). A slight trend is observed in car count vs day of week.
Let's look at the cloud indicator by plotting a few charts of cloudy and non-cloudy data.
End of explanation
fig, axes_fig1 = plt.subplots(1,1, figsize=(8,6))
plt.hist(data_clouds.sample(200)['car.count']/100. + 1.5, 20, alpha=0.5, label='bad data', normed=True)
plt.hist(data_no_clouds.sample(200)['car.count']/128., 20, alpha=0.5, label='good data', normed=True)
plt.legend(loc='upper right')
plt.xlabel('Anomaly score')
axes_fig1.locator_params(nbins=1, axis='y')
axes_fig1.set_ylim([0., 2.5])
plt.show()
print('\n===> p-value for Kolmogorov test: {0}\n'.format(stats.ks_2samp(data_clouds['car.count'], data_no_clouds['car.count'])[1]))
fig, axes_fig1 = plt.subplots(1,1, figsize=(10,8))
plt.hist(data_clouds['car.count'], 20, alpha=0.5, label='clouds')
plt.hist(data_no_clouds['car.count'], 20, alpha=0.5, label='no clouds')
plt.legend(loc='upper right')
plt.xlabel('car count')
fig, axes_fig2 = plt.subplots(1,1, figsize=(10,8))
full_data.sample(500).loc[:, ['cloudy_or_not_cloudy', 'car.count']].boxplot(by='cloudy_or_not_cloudy', ax=axes_fig2)
plt.show()
Explanation: This was a critical step in the exploratory phase: the data README suggests that clouds and weather may affect the visibility of the parking lot. It was unclear to me whether that means the car count would be corrected or not. By breaking down the plots into cloudy/non-cloudy/all, one can interpret the charts with the two following Hypotheses:
Hyp. 1 : clouds introduce a strong variance and a (clearly visible) bias for which we essentially 'miss' cars but clouds do not impact the intrinsic behavior of shoppers, in other words if we could count cars on the ground directly and accurately, clouds would correlate at most marginally with the car count
Hyp. 2 : clouds do actually influence the amount of cars (at least more so than its intrinsic dispersion)
Given the much larger dispersion ($\sigma_{clouds} >> \sigma_{clear}$) and the negligible correlation of car # with 'weather', I picked Hyp. 1. It is important to keep in mind that this was a choice to make, and the entire analysis (especially the RF where the cloud/clear would be an important feature to add) should be revised should Hyp. 2 be true.
The box plot and histograms below display the bias and the different shapes of the cloudy vs non-cloudy car # distributions, further demonstrated by the extremely low p-value of the Kolmogorov test (which strongly indicates that the two samples do not come from the same underlying distribution).
End of explanation
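# Toy check (illustrative, not from the original analysis) of the Kolmogorov-Smirnov
# argument above: two samples drawn from different distributions yield a vanishing p-value.
toy_a = np.random.normal(0.0, 1.0, size=500)
toy_b = np.random.normal(0.5, 2.0, size=500)
print('toy KS p-value: {0}'.format(stats.ks_2samp(toy_a, toy_b)[1]))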
def plot_fft(data):
Plot the Fourier Transform of a time series
# Number of samplepoints
N = len(data)
# sample spacing
T = 1.0
x = np.linspace(0.0, N*T, N)
y = data
yf = scipy.fftpack.fft(y)
xf = np.linspace(0.0, (T*N/2), N/2)
fig, ax = plt.subplots(1,1, figsize=(8,6))
yf = (2.0/N * np.abs(yf[:N//2]))
axes = ax.plot(xf[1:50], yf[1:50]) # point 0 is for T=+inf, i.e. a constant (actually prop. to the mean value)
ax.set_xlabel("Period [day]")
plt.show()
plot_fft(data_no_clouds['car.count'].dropna())
# --> autocorr reveals most info is in previous day, ==> markov style
f1 = autocorrelation_plot(resampled_no_clouds['car.count'])
f2 = autocorrelation_plot(resampled_no_clouds['untrended_car_count'])
_ = f2.set_xlabel("Lag [day]")
Explanation: Note: For the sake of completeness I did this very exercise of including the cloud feature in the RF and found that predictions were less accurate, basically doubling error bars for any day, cloudy or not. Which made me conclude that Hyp. 1 is more likely and clouds, which are very random by nature, introduce a significant noise in the car count.
Some exploration in the Frequency domain
The trend above suggests at least a multi-year pseudo-periodicity in the time series.
It is always interesting and sometimes informative to quickly analyze the time series in the Frequency (Fourier) domain.
Let's plot:
- the Fourier spectrum of this time series. It will tell us what are the most significant (pseudo)periodic time scales in the series and, more importantly, whether the data available actually covers at least a few of those periods. This is especially important for the Deep Net, to ensure a good temporal representation is learnt.
End of explanation
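# Toy check (illustrative, not part of the original analysis): a signal with a known
# 7-day period shows up as a clear peak in the spectrum, which is how a dominant time
# scale is recovered as long as the data covers several periods.
t = np.arange(1000)                                      # 1000 daily samples
toy_signal = np.sin(2 * np.pi * t / 7.0) + 0.3 * np.random.randn(len(t))
toy_spectrum = np.abs(np.fft.rfft(toy_signal))
toy_freqs = np.fft.rfftfreq(len(t), d=1.0)               # cycles per day
peak_freq = toy_freqs[1:][np.argmax(toy_spectrum[1:])]   # skip the DC bin
print('dominant period ~ {0:.1f} days'.format(1.0 / peak_freq))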
#############################
# Utility functions for data preparation
#############################
def get_list_unique_block_indices(len_data=100, seq_length=5, n_samples=10):
returns a list of unique random int that serve as index of the first element of a block of data
args:
len_data (int): length of the data set
seq_length (int): length of the blocks to extract
n_blocks (int): # of blocks to extract
set1 = set(np.random.randint(len_data // seq_length, size=n_samples)*seq_length)
full_set = set1
while len(full_set) < n_samples:
set2 = set(np.random.randint(len_data // seq_length, size=n_samples)*seq_length)
full_set = full_set | set2
returned_list = list(full_set)[0:n_samples]
assert(len(returned_list) == n_samples)
return returned_list
def extract_random_sequence(data, seq_length=5, block_start_index=None):
columns_subset = ['car.count', 'day_of_week_int', 'weather', 'current_month', 'current_year']
if block_start_index is None:
block_start_index = np.random.randint(len(data)-seq_length)
data_subset = data.reset_index().loc[block_start_index:block_start_index+seq_length-1, columns_subset]
assert(len(data_subset) == (seq_length))
out_data = [list(i) for i in data_subset.values]
return out_data
def create_batch_ND_time_series(full_data, seq_length=10, num_samples=4):
out_data = []
# get a list of non-overlapping random sequence start indices
all_samples_start_indices = get_list_unique_block_indices(len(full_data), seq_length, num_samples)
assert(len(all_samples_start_indices) == num_samples)
for one_random_start_index in all_samples_start_indices:
out_data.append(extract_random_sequence(full_data, seq_length, one_random_start_index))
assert(len(out_data[-1]) == (seq_length))
return out_data
#############################
# Data Preparation
#############################
SEQ_LENGTH = 3 # we use the last 3 days as inputs
NUM_FEATURES = 4 # we use 4 features (weather, day of week, month, year)
BATCH_SIZE = 32
BATCH_SIZE_TEST = 1
# let's divide data in train (75%), dev (15%), test (10%)
# in sequences of 5 days (SEQ_LENGTH = 5)
data_no_clouds_length = len(data_no_clouds)
# the actual length of extracted sequence is SEQ_LENGTH + 1 so that we can do the shift of +1 for labels
total_num_of_sequences = data_no_clouds_length // (SEQ_LENGTH+1) - 1
# the length of extracted sequence is SEQ_LENGTH so that we can do the shift of +1 for labels
all_random_sequences = create_batch_ND_time_series(data_no_clouds, seq_length=SEQ_LENGTH+1, num_samples=total_num_of_sequences)
percent_train, percent_dev = 0.9, 0.91 # we used dev set to try a few sets of parameters
n_seq_train = int(total_num_of_sequences*percent_train)
n_seq_dev = int(total_num_of_sequences*percent_dev) - int(total_num_of_sequences*percent_train)
n_seq_test = len(all_random_sequences) - int(total_num_of_sequences*percent_dev)
data_train = np.array(all_random_sequences[0:n_seq_train])
data_dev = np.array(all_random_sequences[n_seq_train:n_seq_train+n_seq_dev])
data_test = np.array(all_random_sequences[n_seq_train+n_seq_dev:])
print('all data_length ', data_no_clouds_length)
print('SHAPES of ALL, TRAIN, DEV, TEST:')
print(np.array(all_random_sequences).shape)
print(np.array(data_train).shape)
print(np.array(data_dev).shape)
print(np.array(data_test).shape)
assert(data_train.shape == (n_seq_train, SEQ_LENGTH+1, NUM_FEATURES+1)) # +1 for the target value (car.count)
assert(data_dev.shape == (n_seq_dev, SEQ_LENGTH+1, NUM_FEATURES+1))
assert(data_test.shape == (n_seq_test, SEQ_LENGTH+1, NUM_FEATURES+1))
num_batches_train = data_train.shape[0] // BATCH_SIZE
num_batches_test = data_test.shape[0] // BATCH_SIZE_TEST
dim_inputs = NUM_FEATURES + 1 # dimension of the input vector for a given time stamp
# for labels, we just keep the time-series prediction which has index of 0, we dont keep the features.
# See Remarks Below
indices_outputs = [0]
dim_outputs = len(indices_outputs) # same comment
# inputs are from t0 to t_seq_length - 1. because the last point is kept for the
# output ("label") of the penultimate point
data_train_inputs = data_train[:, :-1, :]
data_train_labels = data_train[:, 1:, indices_outputs]
data_test_inputs = data_test[:, :-1, :]
data_test_labels = data_test[:, 1:, indices_outputs]
train_data_inputs = nd.array(data_train_inputs).reshape((num_batches_train, BATCH_SIZE, SEQ_LENGTH, dim_inputs))
train_data_labels = nd.array(data_train_labels).reshape((num_batches_train, BATCH_SIZE, SEQ_LENGTH, dim_outputs))
test_data_inputs = nd.array(data_test_inputs).reshape((num_batches_test, BATCH_SIZE_TEST, SEQ_LENGTH, dim_inputs))
test_data_labels = nd.array(data_test_labels).reshape((num_batches_test, BATCH_SIZE_TEST, SEQ_LENGTH, dim_outputs))
train_data_inputs = nd.swapaxes(train_data_inputs, 1, 2)
train_data_labels = nd.swapaxes(train_data_labels, 1, 2)
test_data_inputs = nd.swapaxes(test_data_inputs, 1, 2)
test_data_labels = nd.swapaxes(test_data_labels, 1, 2)
print('num_mini-batches_train={0} | seq_length={2} | mini-batch_size={1} | dim_input={3} | dim_output={4}'.format(num_batches_train, BATCH_SIZE, SEQ_LENGTH, dim_inputs, dim_outputs))
print('train_data_inputs shape: ', train_data_inputs.shape)
print('train_data_labels shape: ', train_data_labels.shape)
num_hidden_units = [64, 16] # num of hidden units in each hidden LSTM layer
num_hidden_layers = len(num_hidden_units) # num of hidden LSTM layers
num_units_layers = [dim_inputs] + num_hidden_units
norm_factor = 0.1 # normalization factor for weight initialization. set to 0.01 for prediction of small values (<1)
########################
# Weights connecting the inputs to the hidden layer
########################
Wxg, Wxi, Wxf, Wxo, Whg, Whi, Whf, Who, bg, bi, bf, bo = {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}
for i_layer in range(1, num_hidden_layers+1):
num_inputs = num_units_layers[i_layer-1]
num_hidden_units = num_units_layers[i_layer]
Wxg[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * norm_factor
Wxi[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * norm_factor
Wxf[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * norm_factor
Wxo[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * norm_factor
########################
# Recurrent weights connecting the hidden layer across time steps
########################
Whg[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * norm_factor
Whi[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * norm_factor
Whf[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * norm_factor
Who[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * norm_factor
########################
# Bias vector for hidden layer
########################
bg[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * norm_factor
bi[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * norm_factor
bf[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * norm_factor
bo[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * norm_factor
########################
# Weights to the output nodes
########################
Why = nd.random_normal(shape=(num_units_layers[-1], dim_outputs), ctx=ctx) * norm_factor
by = nd.random_normal(shape=dim_outputs, ctx=ctx) * 0.01
Explanation: We found the multi-year pseudo-periodicity again (about 280 weeks or ~5 years), which is consistent with the previous remarks. The green curve on the bottom plot is the 'un-trended' one where the 3rd-order polynomial fit was subtracted from the car count. One can see that all auto-correlations are gone once un-trended.
This means that we need to incorporate two dominant time scales in our forecaster:
- one long-term pseudo-periodicity. We will incorporate this one as an integer representing the current year of each data point and use that as a feature in our Deep Net and RF
- one short-term (few days) pseudo-periodicity (can be seen when zooming on the autocorr plot, see also correlation coeffs above) that we can represent by using the last known car count value as a feature itself (last few for the Deep Net)
B) Deep LSTM RNN Net Approach
The next few Cells are code snippets needed to prepare the data and implement the RNN. I re-used some code I had worked on (and the MXNet tutorials helped!).
Here is a brief overview of LSTM Cells (we assume familiarity with RNNs).
Long short-term memory (LSTM) RNNs
An LSTM block has mechanisms to enable "memorizing" information for an extended number of time steps. We use the LSTM block with the following transformations that map inputs to outputs across blocks at consecutive layers and consecutive time steps: $\newcommand{\xb}{\mathbf{x}} \newcommand{\RR}{\mathbb{R}}$
$$f_t = \sigma(X_t W_{xf} + h_{t-1} W_{hf} + b_f) \text{ [forget gate]}$$
$$i_t = \sigma(X_t W_{xi} + h_{t-1} W_{hi} + b_i) \text{ [input gate]}$$
$$g_t = \text{tanh}(X_t W_{xg} + h_{t-1} W_{hg} + b_g) \text{ [candidate gate]}$$
$$o_t = \sigma(X_t W_{xo} + h_{t-1} W_{ho} + b_o) \text{ [output gate]}$$
$$c_t = f_t \odot c_{t-1} + i_t \odot g_t \text{ [cell state]}$$
$$h_t = o_t \odot \text{tanh}(c_t) \text{ [new hidden state]}$$
where $\odot$ is an element-wise multiplication operator, and
for all $\xb = [x_1, x_2, \ldots, x_k]^\top \in \RR^k$ the two activation functions:
$$\sigma(\xb) = \left[\frac{1}{1+\exp(-x_1)}, \ldots, \frac{1}{1+\exp(-x_k)}\right]^\top,$$
$$\text{tanh}(\xb) = \left[\frac{1-\exp(-2x_1)}{1+\exp(-2x_1)}, \ldots, \frac{1-\exp(-2x_k)}{1+\exp(-2x_k)}\right]^\top.$$
In the transformations above, the memory cell $c_t$ stores the "long-term" memory in the vector form.
In other words, the information accumulatively captured and encoded until time step $t$ is stored in $c_t$ and is only passed along the same layer over different time steps.
Given the inputs $c_t$ and $h_t$, the input gate $i_t$ and forget gate $f_t$ will help the memory cell to decide how to overwrite or keep the memory information. The output gate $o_t$ further lets the LSTM block decide how to retrieve the memory information to generate the current state $h_t$ that is passed to both the next layer of the current time step and the next time step of the current layer. Such decisions are made using the hidden-layer parameters $W$ and $b$ with different subscripts: these parameters will be inferred during the training phase.
Data Preparation
End of explanation
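# A minimal NumPy sketch (illustrative only) of a single LSTM step, mirroring the
# equations above; the actual MXNet implementation used for training follows below.
def np_sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def np_lstm_step(X, h_prev, c_prev, W, b):
    # W and b are dicts keyed by gate name, e.g. W['xf'], W['hf'], b['f'] (assumed shapes match)
    f = np_sigmoid(X.dot(W['xf']) + h_prev.dot(W['hf']) + b['f'])   # forget gate
    i = np_sigmoid(X.dot(W['xi']) + h_prev.dot(W['hi']) + b['i'])   # input gate
    g = np.tanh(X.dot(W['xg']) + h_prev.dot(W['hg']) + b['g'])      # candidate gate
    o = np_sigmoid(X.dot(W['xo']) + h_prev.dot(W['ho']) + b['o'])   # output gate
    c = f * c_prev + i * g                                          # new cell state
    h = o * np.tanh(c)                                              # new hidden state
    return h, c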
params = []
for i_layer in range(1, num_hidden_layers+1):
params += [Wxg[i_layer], Wxi[i_layer], Wxf[i_layer], Wxo[i_layer], Whg[i_layer], Whi[i_layer], Whf[i_layer], Who[i_layer], bg[i_layer], bi[i_layer], bf[i_layer], bo[i_layer]]
params += [Why, by] # add the output layer
for param in params:
param.attach_grad()
Explanation: Attach the gradients
End of explanation
def rmse(yhat, y):
return nd.mean(nd.sqrt(nd.sum(nd.power(y - yhat, 2), axis=0, exclude=True)))
def average_rmse_loss(outputs, labels, last_element_only=False):
# assert(len(outputs) == len(labels))
total_loss = 0.
zipped_elements = zip(outputs,labels)
if last_element_only:
output, label = zipped_elements[-1]
total_loss = rmse(output, label)
return total_loss
for (output, label) in zipped_elements:
total_loss = total_loss + rmse(output, label)
return total_loss / len(outputs)
Explanation: Loss utility functions
End of explanation
from exceptions import ValueError
def SGD(params, learning_rate):
for param in params:
param[:] = param - learning_rate * param.grad
def adam(params, learning_rate, M , R, index_adam_call, beta1, beta2, eps):
k = -1
for param in params:
k += 1
M[k] = beta1 * M[k] + (1. - beta1) * param.grad
R[k] = beta2 * R[k] + (1. - beta2) * (param.grad)**2
# bias correction since we initilized M & R to zeros, they're biased toward zero on the first few iterations
m_k_hat = M[k] / (1. - beta1**(index_adam_call))
r_k_hat = R[k] / (1. - beta2**(index_adam_call))
if((np.isnan(M[k].asnumpy())).any() or (np.isnan(R[k].asnumpy())).any()):
raise(ValueError('Nans!!'))
param[:] = param - learning_rate * m_k_hat / (nd.sqrt(r_k_hat) + eps)
return params, M, R
Explanation: Optimizer
Note: we will use the Adam Optimizer. Much faster and much more robust than Standard Gradient Descent.
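For reference, the update implemented in adam() above is the standard Adam rule with bias-corrected moment estimates:
$$M_t = \beta_1 M_{t-1} + (1-\beta_1)\, g_t, \qquad R_t = \beta_2 R_{t-1} + (1-\beta_2)\, g_t^2$$
$$\hat{M}_t = \frac{M_t}{1-\beta_1^t}, \qquad \hat{R}_t = \frac{R_t}{1-\beta_2^t}, \qquad \theta_t = \theta_{t-1} - \eta\, \frac{\hat{M}_t}{\sqrt{\hat{R}_t} + \epsilon}$$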
End of explanation
def single_lstm_unit_calcs(X, c, Wxg, h, Whg, bg, Wxi, Whi, bi, Wxf, Whf, bf, Wxo, Who, bo):
g = nd.tanh(nd.dot(X, Wxg) + nd.dot(h, Whg) + bg)
i = nd.sigmoid(nd.dot(X, Wxi) + nd.dot(h, Whi) + bi)
f = nd.sigmoid(nd.dot(X, Wxf) + nd.dot(h, Whf) + bf)
o = nd.sigmoid(nd.dot(X, Wxo) + nd.dot(h, Who) + bo)
#######################
c = f * c + i * g
h = o * nd.tanh(c)
return c, h
def deep_lstm_rnn(inputs, h, c, temperature=1.0):
h: dict of nd.arrays, each key is the index of a hidden layer (from 1 to whatever).
Index 0, if any, is the input layer
outputs = []
# inputs is one BATCH of sequences so its shape is number_of_seq, seq_length, features_dim
# (latter is 1 for a time series, vocab_size for a character, n for a n different times series)
for X in inputs:
# X is batch of one time stamp. E.g. if each batch has 37 sequences, then the first value of X will be a set of the 37 first values of each of the 37 sequences
# that means each iteration on X corresponds to one time stamp, but it is done in batches of different sequences
h[0] = X # the first hidden layer takes the input X as input
for i_layer in range(1, num_hidden_layers+1):
# lstm units now have the 2 following inputs:
# i) h_t from the previous layer (equivalent to the input X for a non-deep lstm net),
# ii) h_t-1 from the current layer (same as for non-deep lstm nets)
c[i_layer], h[i_layer] = single_lstm_unit_calcs(h[i_layer-1], c[i_layer], Wxg[i_layer], h[i_layer], Whg[i_layer], bg[i_layer], Wxi[i_layer], Whi[i_layer], bi[i_layer], Wxf[i_layer], Whf[i_layer], bf[i_layer], Wxo[i_layer], Who[i_layer], bo[i_layer])
yhat_linear = nd.dot(h[num_hidden_layers], Why) + by
# yhat is a batch of several values of the same time stamp
# this is basically the prediction of the sequence, which overlaps most of the input sequence, plus one point (character or value)
# yhat = nd.sigmoid(yhat_linear)
yhat = yhat_linear # we cant use a 1.0-bounded activation function since outputs (car count) can be greater than 1.0
outputs.append(yhat) # outputs has same shape as inputs, i.e. a list of batches of data points.
return (outputs, h, c)
Explanation: Deep LSTM Net core functions
End of explanation
INDEX_TARGET_VALUE = 0
def test_prediction(one_input_seq, one_label_seq, temperature=1.0):
# WE ASSUME the first value in input vector is the variable of interest
#####################################
# Set the initial state of the hidden representation ($h_0$) to the zero vector
##################################### # some better initialization needed??
h, c = {}, {}
for i_layer in range(1, num_hidden_layers+1):
h[i_layer] = nd.zeros(shape=(BATCH_SIZE_TEST, num_units_layers[i_layer]), ctx=ctx)
c[i_layer] = nd.zeros(shape=(BATCH_SIZE_TEST, num_units_layers[i_layer]), ctx=ctx)
outputs, h, c = deep_lstm_rnn(one_input_seq, h, c, temperature=temperature)
return outputs[-1][0].asnumpy()[INDEX_TARGET_VALUE], one_label_seq.asnumpy()[-1].flatten()[INDEX_TARGET_VALUE], outputs, one_label_seq
def check_prediction(index):
if index >= len(test_data_inputs):
index = np.random.randint(len(test_data_inputs))
o, label, outputs, labels = test_prediction(test_data_inputs[index], test_data_labels[index], temperature=1.0)
prediction = round(o, 3)
true_label = round(label, 3)
outputs = [float(i.asnumpy().flatten()[INDEX_TARGET_VALUE]) for i in outputs] # if batch_size_test=1 then this float() will work, otherwise, nope.
true_labels = list(test_data_labels[index].asnumpy()[:,:,INDEX_TARGET_VALUE].flatten())
df = pd.DataFrame([outputs, true_labels]).transpose()
df.columns = ['predicted', 'true']
return df
Explanation: Test and visualize predictions
End of explanation
epochs = 2000 # one epoch is one pass over the entire training set
moving_loss = 0.
learning_rate = 0.002 # no more than 0.002, even with Adam, otherwise, convergence will likely saturate
# Adam Optimizer stuff
beta1 = .9
beta2 = .999
index_adam_call = 0
# M & R arrays to keep track of momenta in adam optimizer. params is a list that contains all ndarrays of parameters
M = {k: nd.zeros_like(v) for k, v in enumerate(params)}
R = {k: nd.zeros_like(v) for k, v in enumerate(params)}
df_moving_loss = pd.DataFrame(columns=['Loss', 'Error'])
df_moving_loss.index.name = 'Epoch'
# needed to update plots on the fly
%matplotlib notebook
fig, axes_fig1 = plt.subplots(1,1, figsize=(6,3))
fig2, axes_fig2 = plt.subplots(1,1, figsize=(6,3))
for e in range(epochs):
############################
# Attenuate the learning rate by a factor of 2 every 100 epochs
############################
if ((e+1) % 1000 == 0):
learning_rate = learning_rate / 2.0 # TODO check if its ok to adjust learning_rate when using Adam Optimizer
h, c = {}, {}
for i_layer in range(1, num_hidden_layers+1):
h[i_layer] = nd.zeros(shape=(BATCH_SIZE, num_units_layers[i_layer]), ctx=ctx)
c[i_layer] = nd.zeros(shape=(BATCH_SIZE, num_units_layers[i_layer]), ctx=ctx)
for i in range(num_batches_train):
data_one_hot = train_data_inputs[i]
label_one_hot = train_data_labels[i]
with autograd.record():
outputs, h, c = deep_lstm_rnn(data_one_hot, h, c)
loss = average_rmse_loss(outputs, label_one_hot, last_element_only=True)
loss.backward()
# SGD(params, learning_rate)
index_adam_call += 1 # needed for bias correction in Adam optimizer
params, M, R = adam(params, learning_rate, M, R, index_adam_call, beta1, beta2, 1e-8)
##########################
# Keep a moving average of the losses
##########################
if (i == 0) and (e == 0):
moving_loss = nd.mean(loss).asscalar()
else:
moving_loss = .99 * moving_loss + .01 * nd.mean(loss).asscalar()
df_moving_loss.loc[e] = round(moving_loss, 4)
############################
# Predictions and plots
############################
data_prediction_df = check_prediction(index=e)
if not (e%50):
axes_fig1.clear()
data_prediction_df.plot(ax=axes_fig1)
fig.canvas.draw()
prediction = round(data_prediction_df.tail(1)['predicted'].values.flatten()[-1], 3)
true_label = round(data_prediction_df.tail(1)['true'].values.flatten()[-1], 3)
if true_label != 0:
rel_error = round(100. * np.abs(prediction / (true_label+1e-5) - 1.0), 2)
else:
rel_error = moving_rel_error
if not (e%50):
print("Epoch = {0} | Loss = {1} | Prediction = {2} True = {3} Error = {4}".format(e, moving_loss, prediction, true_label, rel_error ))
if not (e%50):
axes_fig2.clear()
if e == 0:
moving_rel_error = rel_error
else:
moving_rel_error = .99 * moving_rel_error + .01 * rel_error
df_moving_loss.loc[e, ['Error']] = moving_rel_error
if not (e%50):
axes_loss_plot = df_moving_loss.plot(ax=axes_fig2, secondary_y='Loss', color=['r','b'])
axes_loss_plot.right_ax.grid(False)
# axes_loss_plot.right_ax.set_yscale('log')
fig2.canvas.draw()
%matplotlib inline
Explanation: Time to Train the Deep Net! (about 6 min for 2000 epochs on a 2015 15'' MBP)
Another tweak that I applied to this Net is to compute the loss only on the last predicted value, not on the entire sequence. This is achieved by setting last_element_only to True in the call to average_rmse_loss().
Usually, the loss is averaged over the full sequence but by only using the last value, I force the Net to learn a representation that is entirely trained to predict one target value from a full sequence, as opposed to being trained to predict one full sequence from another full sequence. My rationale was to force the training to be entirely dedicated to our value of interest, and the intermediate values of the output sequence could be used by the Net to serve as some sort of representation (sort of like some additional units).
During training, a plot of the predicted vs true sequence is shown, along with another chart that shows the (training) loss and (test) error vs the training epoch.
End of explanation
def compute_test_errors(bias_correction=1.):
list_abs_rel_error, list_rel_error, list_abs_error = [], [], []
for e in range(0, data_test_labels.shape[0]):
data_prediction_df = check_prediction(index=e)
prediction = round(data_prediction_df.tail(1)['predicted'].values.flatten()[-1] * bias_correction, 3)
true_label = round(data_prediction_df.tail(1)['true'].values.flatten()[-1], 3)
if true_label != 0:
list_abs_rel_error.append( round(100. * np.abs(prediction / (true_label) - 1.0), 2) )
list_rel_error.append( round(100. * (prediction / (true_label) - 1.0), 2) )
list_abs_error.append( round(prediction - true_label, 0) )
else:
continue
return list_abs_rel_error, list_rel_error, list_abs_error
list_abs_rel_error, list_rel_error, _ = compute_test_errors(bias_correction=1.0)
bias_correction = 100. / (100. + np.mean(list_rel_error)) # estimate of the bias
# recompute errors using "de-biased" model
list_abs_rel_error, list_rel_error, list_abs_error = compute_test_errors(bias_correction=bias_correction)
%matplotlib inline
plt.hist(list_abs_rel_error, 20, alpha=0.4, label='abs percent error')
plt.hist(list_rel_error, 20, alpha=0.4, label='percent error')
plt.hist(list_abs_error, 20, alpha=0.4, label='abs error [in # of cars]')
plt.legend(loc='upper right')
plt.xlabel('unbiased errors')
plt.show()
print('Estimate of the Bias: {0}% | Median Abs Rel Error: {1}%'.format(round(bias_correction,1), round(np.median(list_abs_rel_error), 1)))
Explanation: Error Analysis
Hurrah! The Deep Net worked and seems to do OK on forecasting. Let's get some error statistics using the test set.
End of explanation
columns_subset = ['previous_bucketed_car_count', 'day_of_week_int', 'weather', 'current_month', 'current_year']
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
X_train, X_test, Y_train, Y_test = train_test_split(data_no_clouds[columns_subset], data_no_clouds['bucketed_car_count'],test_size=0.1)
clf = RandomForestClassifier(n_estimators=3500, max_depth=8)
clf.fit(X_train, Y_train)
#####################
# Execute predictions on test set
#####################
predictions = clf.predict(X_test)
#####################
# Compute error metrics on test set
#####################
def get_list_errors(predictions, Y_test, bias_corr=1.00):
list_deviations = []
for p, t in zip(predictions, Y_test):
list_deviations.append(p*bias_corr - t)
return list_deviations
list_deviations = get_list_errors(predictions, Y_test)
med = round(np.mean(list_deviations)*BIN_SIZE, 1)
sigma = round(np.std(list_deviations)*BIN_SIZE, 1)
print('Prediction bias {0} | sigma: +-{1}'.format(med, sigma))
plt.hist(np.array(list_deviations) * BIN_SIZE)  # scale bucket deviations back to car counts (multiplying a bare list would duplicate it, not scale it)
plt.xlabel('Relative Error [car #]')
#####################
# Rank the features
#####################
importances = clf.feature_importances_
std = np.std([tree.feature_importances_ for tree in clf.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
print("Feature ranking:")
for f in range(X_train.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X_train.shape[1]), indices)
plt.xlim([-1, X_train.shape[1]])
plt.show()
Explanation: Conclusions on the Deep Net approach
It is remarkable to see the convergence happening quickly and firmly given the relatively small data set at hand.
Forecasting performance is acceptable: about 15% uncertainty after 2000 epochs on the data. By estimating the bias and correcting our model accordingly, the median absolute relative error falls to about 9.5%.
Forecast accuracy is thus +- 9.5% or roughly +- 15 cars in average (95% CL, see red histogram). This means that if we predict, say, 150 cars on a given day, there's 95% chances that the range [135-165] will cover the true value. Lower Confidence Levels will obviously narrow this range.
C) Random Forest Approach
A Random Forest is an estimator that fits a large number of decision trees on various subsamples of the dataset and uses averaging to improve the prediction accuracy and reduce over-fitting.
As explained in A), we use the previous-day car count as one of the features, as well as weekday, month, year and weather features.
End of explanation |
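The de-biasing idea discussed above can also be applied to the Random Forest through the bias_corr argument of get_list_errors. The sketch below is not part of the original notebook; it assumes predictions, Y_test, BIN_SIZE and get_list_errors from the cells above and simply matches the mean predicted bucket to the mean observed bucket.
import numpy as np
# Sketch (assumption): estimate a multiplicative correction so the mean predicted
# bucket matches the mean observed bucket, then recompute the deviations with it.
rf_bias_corr = np.mean(Y_test) / np.mean(predictions)
corrected = get_list_errors(predictions, Y_test, bias_corr=rf_bias_corr)
print('bias factor: {0:.3f} | corrected mean error [cars]: {1:.1f}'.format(
    rf_bias_corr, np.mean(corrected) * BIN_SIZE))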
14,624 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using LAMMPS with iPython and Jupyter
LAMMPS can be run interactively using iPython easily. This tutorial shows how to set this up.
Installation
Download the latest version of LAMMPS into a folder (we will call this $LAMMPS_DIR from now on)
Compile LAMMPS as a shared library and enable PNG support
bash
cd $LAMMPS_DIR/src
python2 Make.py -m mpi -png -a file
make mode=shlib auto
Create a python virtualenv
bash
virtualenv testing
source testing/bin/activate
Inside the virtualenv install the lammps package
(testing) cd $LAMMPS_DIR/python
(testing) python install.py
(testing) cd # move to your working directory
Install jupyter and ipython in the virtualenv
bash
(testing) pip install ipython jupyter
Run jupyter notebook
bash
(testing) jupyter notebook
Example
Step1: Queries about LAMMPS simulation
Step2: Working with LAMMPS Variables
Step3: Accessing Atom data | Python Code:
from lammps import IPyLammps
L = IPyLammps()
# 3d Lennard-Jones melt
L.units("lj")
L.atom_style("atomic")
L.atom_modify("map array")
L.lattice("fcc", 0.8442)
L.region("box block", 0, 4, 0, 4, 0, 4)
L.create_box(1, "box")
L.create_atoms(1, "box")
L.mass(1, 1.0)
L.velocity("all create", 1.44, 87287, "loop geom")
L.pair_style("lj/cut", 2.5)
L.pair_coeff(1, 1, 1.0, 1.0, 2.5)
L.neighbor(0.3, "bin")
L.neigh_modify("delay 0 every 20 check no")
L.fix("1 all nve")
L.variable("fx atom fx")
L.run(10)
L.image(zoom=1)
Explanation: Using LAMMPS with iPython and Jupyter
LAMMPS can be run interactively using iPython easily. This tutorial shows how to set this up.
Installation
Download the latest version of LAMMPS into a folder (we will call this $LAMMPS_DIR from now on)
Compile LAMMPS as a shared library and enable PNG support
bash
cd $LAMMPS_DIR/src
python2 Make.py -m mpi -png -a file
make mode=shlib auto
Create a python virtualenv
bash
virtualenv testing
source testing/bin/activate
Inside the virtualenv install the lammps package
(testing) cd $LAMMPS_DIR/python
(testing) python install.py
(testing) cd # move to your working directory
Install jupyter and ipython in the virtualenv
bash
(testing) pip install ipython jupyter
Run jupyter notebook
bash
(testing) jupyter notebook
Example
End of explanation
L.system
L.system.natoms
L.communication
L.fixes
L.computes
L.dumps
L.groups
Explanation: Queries about LAMMPS simulation
End of explanation
L.variable("a index 2")
L.variables
L.variable("t equal temp")
L.variables
import sys
if sys.version_info < (3, 0):
# In Python 2 'print' is a restricted keyword, which is why you have to use the lmp_print function instead.
x = float(L.lmp_print('"${a}"'))
else:
# In Python 3 the print function can be redefined.
# x = float(L.print('"${a}"')")
# To avoid a syntax error in Python 2 executions of this notebook, this line is packed into an eval statement
x = float(eval("L.print('\"${a}\"')"))
x
L.variables['t'].value
L.eval("v_t/2.0")
L.variable("b index a b c")
L.variables['b'].value
L.eval("v_b")
L.variables['b'].definition
L.variable("i loop 10")
L.variables['i'].value
L.next("i")
L.variables['i'].value
L.eval("ke")
Explanation: Working with LAMMPS Variables
End of explanation
L.atoms[0]
[x for x in dir(L.atoms[0]) if not x.startswith('__')]
L.atoms[0].position
L.atoms[0].id
L.atoms[0].velocity
L.atoms[0].force
L.atoms[0].type
L.variables['fx'].value
Explanation: Accessing Atom data
End of explanation |
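A small usage sketch (not part of the original tutorial) combining the per-atom attributes shown above; it assumes position, velocity and force come back as 3-component sequences.
import math
# Sketch: summarize atom 0 using only the attributes demonstrated above.
a = L.atoms[0]
speed = math.sqrt(sum(v * v for v in a.velocity))
force_mag = math.sqrt(sum(f * f for f in a.force))
print("atom id={} type={} |v|={:.3f} |F|={:.3f}".format(a.id, a.type, speed, force_mag))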
14,625 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collaborative filtering on Google Analytics data
This notebook demonstrates how to implement a WALS matrix factorization approach to do collaborative filtering.
Step2: Create raw dataset
<p>
For collaborative filtering, we don't need to know anything about either the users or the content. Essentially, all we need to know is userId, itemId, and rating that the particular user gave the particular item.
<p>
In this case, we are working with newspaper articles. The company doesn't ask their users to rate the articles. However, we can use the time-spent on the page as a proxy for rating.
<p>
Normally, we would also add a time filter to this ("latest 7 days"), but our dataset is itself limited to a few days.
Step3: Create dataset for WALS
<p>
The raw dataset (above) won't work for WALS
Step4: Creating rows and columns datasets
Step5: To summarize, we created the following data files from collab_raw.csv
Step6: This code is helpful in developing the input function. You don't need it in production.
Step7: Run as a Python module
Let's run it as a Python module for just a few steps.
Step8: Run on Cloud
Step9: This will take <b>10 minutes</b> to complete. Rerun the above command until the job gets submitted.
Get row and column factors
Once you have a trained WALS model, you can get row and column factors (user and item embeddings) from the checkpoint file. We'll look at how to use these in the section on building a recommendation system using deep neural networks.
Step10: You can visualize the embedding vectors using dimensional reduction techniques such as PCA. | Python Code:
import os
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "1.15"
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
import tensorflow as tf
print(tf.__version__)
Explanation: Collaborative filtering on Google Analytics data
This notebook demonstrates how to implement a WALS matrix factorization approach to do collaborative filtering.
End of explanation
from google.cloud import bigquery
bq = bigquery.Client(project = PROJECT)
sql =
WITH CTE_visitor_page_content AS (
SELECT
# Schema: https://support.google.com/analytics/answer/3437719?hl=en
# For a completely unique visit-session ID, we combine combination of fullVisitorId and visitNumber:
CONCAT(fullVisitorID,'-',CAST(visitNumber AS STRING)) AS visitorId,
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS latestContentId,
(LEAD(hits.time, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) - hits.time) AS session_duration
FROM
`cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
GROUP BY
fullVisitorId,
visitNumber,
latestContentId,
hits.time )
-- Aggregate web stats
SELECT
visitorId,
latestContentId as contentId,
SUM(session_duration) AS session_duration
FROM
CTE_visitor_page_content
WHERE
latestContentId IS NOT NULL
GROUP BY
visitorId,
latestContentId
HAVING
session_duration > 0
df = bq.query(sql).to_dataframe()
df.head()
stats = df.describe()
stats
df[["session_duration"]].plot(kind="hist", logy=True, bins=100, figsize=[8,5])
# The rating is the session_duration scaled to be in the range 0-1. This will help with training.
median = stats.loc["50%", "session_duration"]
df["rating"] = 0.3 * df["session_duration"] / median
df.loc[df["rating"] > 1, "rating"] = 1
df[["rating"]].plot(kind="hist", logy=True, bins=100, figsize=[8,5])
del df["session_duration"]
%%bash
rm -rf data
mkdir data
df.to_csv(path_or_buf = "data/collab_raw.csv", index = False, header = False)
!head data/collab_raw.csv
Explanation: Create raw dataset
<p>
For collaborative filtering, we don't need to know anything about either the users or the content. Essentially, all we need to know is userId, itemId, and rating that the particular user gave the particular item.
<p>
In this case, we are working with newspaper articles. The company doesn't ask their users to rate the articles. However, we can use the time-spent on the page as a proxy for rating.
<p>
Normally, we would also add a time filter to this ("latest 7 days"), but our dataset is itself limited to a few days.
End of explanation
import pandas as pd
import numpy as np
def create_mapping(values, filename):
with open(filename, 'w') as ofp:
value_to_id = {value:idx for idx, value in enumerate(values.unique())}
for value, idx in value_to_id.items():
ofp.write("{},{}\n".format(value, idx))
return value_to_id
df = pd.read_csv(filepath_or_buffer = "data/collab_raw.csv",
header = None,
names = ["visitorId", "contentId", "rating"],
dtype = {"visitorId": str, "contentId": str, "rating": np.float})
df.to_csv(path_or_buf = "data/collab_raw.csv", index = False, header = False)
user_mapping = create_mapping(df["visitorId"], "data/users.csv")
item_mapping = create_mapping(df["contentId"], "data/items.csv")
!head -3 data/*.csv
df["userId"] = df["visitorId"].map(user_mapping.get)
df["itemId"] = df["contentId"].map(item_mapping.get)
mapped_df = df[["userId", "itemId", "rating"]]
mapped_df.to_csv(path_or_buf = "data/collab_mapped.csv", index = False, header = False)
mapped_df.head()
Explanation: Create dataset for WALS
<p>
The raw dataset (above) won't work for WALS:
<ol>
<li> The userId and itemId have to be 0,1,2 ... so we need to create a mapping from visitorId (in the raw data) to userId and contentId (in the raw data) to itemId.
<li> We will need to save the above mapping to a file because at prediction time, we'll need to know how to map the contentId in the table above to the itemId.
<li> We'll need two files: a "rows" dataset where all the items for a particular user are listed; and a "columns" dataset where all the users for a particular item are listed.
</ol>
<p>
### Mapping
End of explanation
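Since these mappings are needed again at prediction time (as noted in the list above), here is a minimal sketch (not part of the original notebook) of reading users.csv and items.csv back into lookup dictionaries.
# Sketch: reload the saved mappings so a raw visitorId/contentId can be
# translated to the enumerated userId/itemId when serving predictions.
def load_mapping(filename):
    mapping = {}
    with open(filename) as ifp:
        for line in ifp:
            value, idx = line.rstrip("\n").rsplit(",", 1)
            mapping[value] = int(idx)
    return mapping
user_mapping_loaded = load_mapping("data/users.csv")
item_mapping_loaded = load_mapping("data/items.csv")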
import pandas as pd
import numpy as np
mapped_df = pd.read_csv(filepath_or_buffer = "data/collab_mapped.csv", header = None, names = ["userId", "itemId", "rating"])
mapped_df.head()
NITEMS = np.max(mapped_df["itemId"]) + 1
NUSERS = np.max(mapped_df["userId"]) + 1
mapped_df["rating"] = np.round(mapped_df["rating"].values, 2)
print("{} items, {} users, {} interactions".format( NITEMS, NUSERS, len(mapped_df) ))
grouped_by_items = mapped_df.groupby("itemId")
iter = 0
for item, grouped in grouped_by_items:
print(item, grouped["userId"].values, grouped["rating"].values)
iter = iter + 1
if iter > 5:
break
import tensorflow as tf
grouped_by_items = mapped_df.groupby("itemId")
with tf.python_io.TFRecordWriter("data/users_for_item") as ofp:
for item, grouped in grouped_by_items:
example = tf.train.Example(features = tf.train.Features(feature = {
"key": tf.train.Feature(int64_list = tf.train.Int64List(value = [item])),
"indices": tf.train.Feature(int64_list = tf.train.Int64List(value = grouped["userId"].values)),
"values": tf.train.Feature(float_list = tf.train.FloatList(value = grouped["rating"].values))
}))
ofp.write(example.SerializeToString())
grouped_by_users = mapped_df.groupby("userId")
with tf.python_io.TFRecordWriter("data/items_for_user") as ofp:
for user, grouped in grouped_by_users:
example = tf.train.Example(features = tf.train.Features(feature = {
"key": tf.train.Feature(int64_list = tf.train.Int64List(value = [user])),
"indices": tf.train.Feature(int64_list = tf.train.Int64List(value = grouped["itemId"].values)),
"values": tf.train.Feature(float_list = tf.train.FloatList(value = grouped["rating"].values))
}))
ofp.write(example.SerializeToString())
!ls -lrt data
Explanation: Creating rows and columns datasets
End of explanation
import os
import tensorflow as tf
from tensorflow.python.lib.io import file_io
from tensorflow.contrib.factorization import WALSMatrixFactorization
def read_dataset(mode, args):
def decode_example(protos, vocab_size):
# TODO
return
def remap_keys(sparse_tensor):
# Current indices of our SparseTensor that we need to fix
bad_indices = sparse_tensor.indices # shape = (current_batch_size * (number_of_items/users[i] + 1), 2)
# Current values of our SparseTensor that we need to fix
bad_values = sparse_tensor.values # shape = (current_batch_size * (number_of_items/users[i] + 1),)
# Since batch is ordered, the last value for a batch index is the user
# Find where the batch index chages to extract the user rows
# 1 where user, else 0
user_mask = tf.concat(values = [bad_indices[1:,0] - bad_indices[:-1,0], tf.constant(value = [1], dtype = tf.int64)], axis = 0) # shape = (current_batch_size * (number_of_items/users[i] + 1), 2)
# Mask out the user rows from the values
good_values = tf.boolean_mask(tensor = bad_values, mask = tf.equal(x = user_mask, y = 0)) # shape = (current_batch_size * number_of_items/users[i],)
item_indices = tf.boolean_mask(tensor = bad_indices, mask = tf.equal(x = user_mask, y = 0)) # shape = (current_batch_size * number_of_items/users[i],)
user_indices = tf.boolean_mask(tensor = bad_indices, mask = tf.equal(x = user_mask, y = 1))[:, 1] # shape = (current_batch_size,)
good_user_indices = tf.gather(params = user_indices, indices = item_indices[:,0]) # shape = (current_batch_size * number_of_items/users[i],)
# User and item indices are rank 1, need to make rank 1 to concat
good_user_indices_expanded = tf.expand_dims(input = good_user_indices, axis = -1) # shape = (current_batch_size * number_of_items/users[i], 1)
good_item_indices_expanded = tf.expand_dims(input = item_indices[:, 1], axis = -1) # shape = (current_batch_size * number_of_items/users[i], 1)
good_indices = tf.concat(values = [good_user_indices_expanded, good_item_indices_expanded], axis = 1) # shape = (current_batch_size * number_of_items/users[i], 2)
remapped_sparse_tensor = tf.SparseTensor(indices = good_indices, values = good_values, dense_shape = sparse_tensor.dense_shape)
return remapped_sparse_tensor
def parse_tfrecords(filename, vocab_size):
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
else:
num_epochs = 1 # end-of-input after this
files = tf.gfile.Glob(filename = os.path.join(args["input_path"], filename))
# Create dataset from file list
dataset = tf.data.TFRecordDataset(files)
dataset = dataset.map(map_func = lambda x: decode_example(x, vocab_size))
dataset = dataset.repeat(count = num_epochs)
dataset = dataset.batch(batch_size = args["batch_size"])
dataset = dataset.map(map_func = lambda x: remap_keys(x))
return dataset.make_one_shot_iterator().get_next()
def _input_fn():
features = {
WALSMatrixFactorization.INPUT_ROWS: parse_tfrecords("items_for_user", args["nitems"]),
WALSMatrixFactorization.INPUT_COLS: parse_tfrecords("users_for_item", args["nusers"]),
WALSMatrixFactorization.PROJECT_ROW: tf.constant(True)
}
return features, None
return _input_fn
def input_cols():
return parse_tfrecords("users_for_item", args["nusers"])
return _input_fn#_subset
Explanation: To summarize, we created the following data files from collab_raw.csv:
<ol>
<li> ```collab_mapped.csv``` is essentially the same data as in ```collab_raw.csv``` except that ```visitorId``` and ```contentId``` which are business-specific have been mapped to ```userId``` and ```itemId``` which are enumerated in 0,1,2,.... The mappings themselves are stored in ```items.csv``` and ```users.csv``` so that they can be used during inference.
<li> ```users_for_item``` contains all the users/ratings for each item in TFExample format
<li> ```items_for_user``` contains all the items/ratings for each user in TFExample format
</ol>
Train with WALS
Once you have the dataset, do matrix factorization with WALS using the WALSMatrixFactorization in the contrib directory.
This is an estimator model, so it should be relatively familiar.
<p>
As usual, we write an input_fn to provide the data to the model, and then create the Estimator to do train_and_evaluate.
Because it is in contrib and hasn't moved over to tf.estimator yet, we use tf.contrib.learn.Experiment to handle the training loop.<p>
Make sure to replace <strong># TODO</strong> in below code. For any help, you can refer [this](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive/10_recommend/wals.ipynb).
End of explanation
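For reference, one possible way to fill in the decode_example TODO above, along the lines of the linked solution notebook but not guaranteed to match it exactly: parse each TFRecord into a SparseTensor and append the key as one extra zero-valued entry so that remap_keys can recover it after batching.
def decode_example(protos, vocab_size):
    # Sketch: parse the key/indices/values written by the TFRecordWriter cells above.
    features = {
        "key": tf.FixedLenFeature(shape=[1], dtype=tf.int64),
        "indices": tf.VarLenFeature(dtype=tf.int64),
        "values": tf.VarLenFeature(dtype=tf.float32)}
    parsed_features = tf.parse_single_example(serialized=protos, features=features)
    values = tf.sparse_merge(parsed_features["indices"], parsed_features["values"],
                             vocab_size=vocab_size)
    # Tack the key on as one extra (zero-valued) element so remap_keys can find it.
    key = parsed_features["key"]
    decoded_sparse_tensor = tf.SparseTensor(
        indices=tf.concat(values=[values.indices, [key]], axis=0),
        values=tf.concat(values=[values.values, [0.0]], axis=0),
        dense_shape=values.dense_shape)
    return decoded_sparse_tensor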
def try_out():
with tf.Session() as sess:
fn = read_dataset(
mode = tf.estimator.ModeKeys.EVAL,
args = {"input_path": "data", "batch_size": 4, "nitems": NITEMS, "nusers": NUSERS})
feats, _ = fn()
print(feats["input_rows"].eval())
print(feats["input_rows"].eval())
try_out()
def find_top_k(user, item_factors, k):
all_items = tf.matmul(a = tf.expand_dims(input = user, axis = 0), b = tf.transpose(a = item_factors))
topk = tf.nn.top_k(input = all_items, k = k)
return tf.cast(x = topk.indices, dtype = tf.int64)
def batch_predict(args):
import numpy as np
with tf.Session() as sess:
estimator = tf.contrib.factorization.WALSMatrixFactorization(
num_rows = args["nusers"],
num_cols = args["nitems"],
embedding_dimension = args["n_embeds"],
model_dir = args["output_dir"])
# This is how you would get the row factors for out-of-vocab user data
# row_factors = list(estimator.get_projections(input_fn=read_dataset(tf.estimator.ModeKeys.EVAL, args)))
# user_factors = tf.convert_to_tensor(np.array(row_factors))
# But for in-vocab data, the row factors are already in the checkpoint
user_factors = tf.convert_to_tensor(value = estimator.get_row_factors()[0]) # (nusers, nembeds)
# In either case, we have to assume catalog doesn"t change, so col_factors are read in
item_factors = tf.convert_to_tensor(value = estimator.get_col_factors()[0])# (nitems, nembeds)
# For each user, find the top K items
topk = tf.squeeze(input = tf.map_fn(fn = lambda user: find_top_k(user, item_factors, args["topk"]), elems = user_factors, dtype = tf.int64))
with file_io.FileIO(os.path.join(args["output_dir"], "batch_pred.txt"), mode = 'w') as f:
for best_items_for_user in topk.eval():
f.write(",".join(str(x) for x in best_items_for_user) + '\n')
def train_and_evaluate(args):
train_steps = int(0.5 + (1.0 * args["num_epochs"] * args["nusers"]) / args["batch_size"])
steps_in_epoch = int(0.5 + args["nusers"] / args["batch_size"])
print("Will train for {} steps, evaluating once every {} steps".format(train_steps, steps_in_epoch))
def experiment_fn(output_dir):
return tf.contrib.learn.Experiment(
tf.contrib.factorization.WALSMatrixFactorization(
num_rows = args["nusers"],
num_cols = args["nitems"],
embedding_dimension = args["n_embeds"],
model_dir = args["output_dir"]),
train_input_fn = read_dataset(tf.estimator.ModeKeys.TRAIN, args),
eval_input_fn = read_dataset(tf.estimator.ModeKeys.EVAL, args),
train_steps = train_steps,
eval_steps = 1,
min_eval_frequency = steps_in_epoch
)
from tensorflow.contrib.learn.python.learn import learn_runner
learn_runner.run(experiment_fn = experiment_fn, output_dir = args["output_dir"])
batch_predict(args)
import shutil
shutil.rmtree(path = "wals_trained", ignore_errors=True)
train_and_evaluate({
"output_dir": "wals_trained",
"input_path": "data/",
"num_epochs": 0.05,
"nitems": NITEMS,
"nusers": NUSERS,
"batch_size": 512,
"n_embeds": 10,
"topk": 3
})
!ls wals_trained
!head wals_trained/batch_pred.txt
Explanation: This code is helpful in developing the input function. You don't need it in production.
End of explanation
os.environ["NITEMS"] = str(NITEMS)
os.environ["NUSERS"] = str(NUSERS)
%%bash
rm -rf wals.tar.gz wals_trained
gcloud ai-platform local train \
--module-name=walsmodel.task \
--package-path=${PWD}/walsmodel \
-- \
--output_dir=${PWD}/wals_trained \
--input_path=${PWD}/data \
--num_epochs=0.01 --nitems=${NITEMS} --nusers=${NUSERS} \
--job-dir=./tmp
Explanation: Run as a Python module
Let's run it as a Python module for just a few steps.
End of explanation
%%bash
gsutil -m cp data/* gs://${BUCKET}/wals/data
%%bash
OUTDIR=gs://${BUCKET}/wals/model_trained
JOBNAME=wals_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=walsmodel.task \
--package-path=${PWD}/walsmodel \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--input_path=gs://${BUCKET}/wals/data \
--num_epochs=10 --nitems=${NITEMS} --nusers=${NUSERS}
Explanation: Run on Cloud
End of explanation
def get_factors(args):
with tf.Session() as sess:
estimator = tf.contrib.factorization.WALSMatrixFactorization(
num_rows = args["nusers"],
num_cols = args["nitems"],
embedding_dimension = args["n_embeds"],
model_dir = args["output_dir"])
row_factors = estimator.get_row_factors()[0]
col_factors = estimator.get_col_factors()[0]
return row_factors, col_factors
args = {
"output_dir": "gs://{}/wals/model_trained".format(BUCKET),
"nitems": NITEMS,
"nusers": NUSERS,
"n_embeds": 10
}
user_embeddings, item_embeddings = get_factors(args)
print(user_embeddings[:3])
print(item_embeddings[:3])
Explanation: This will take <b>10 minutes</b> to complete. Rerun the above command until the job gets submitted.
Get row and column factors
Once you have a trained WALS model, you can get row and column factors (user and item embeddings) from the checkpoint file. We'll look at how to use these in the section on building a recommendation system using deep neural networks.
End of explanation
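As a small, hypothetical illustration of how these embeddings get used later: a user's predicted affinity for every item is just a dot product of factors, and the largest scores are the recommendations. The sketch assumes the user_embeddings and item_embeddings arrays printed above.
import numpy as np
# Sketch: score every item for the first user and list the top 3 item ids.
scores = np.dot(item_embeddings, user_embeddings[0])
top_items = np.argsort(scores)[::-1][:3]
print("top items for user 0:", top_items, "scores:", scores[top_items])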
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.decomposition import PCA
pca = PCA(n_components = 3)
pca.fit(user_embeddings)
user_embeddings_pca = pca.transform(user_embeddings)
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(111, projection = "3d")
xs, ys, zs = user_embeddings_pca[::150].T
ax.scatter(xs, ys, zs)
Explanation: You can visualize the embedding vectors using dimensional reduction techniques such as PCA.
End of explanation |
14,626 | Given the following text description, write Python code to implement the functionality described.
Description:
Modify a given matrix by placing sorted boundary elements in clockwise manner
Function to print the elements of the matrix in row - wise manner ; Function to sort boundary elements of a matrix starting from the outermost to the innermost boundary and place them in a clockwise manner ; k - starting row index m - ending row index l - starting column index n - ending column index i - iterator ; Stores the current boundary elements ; Push the first row ; Push the last column ; Push the last row ; Push the first column ; Sort the boundary elements ; Update the first row ; Update the last column ; Update the last row ; Update the first column ; Print the resultant matrix ; Driver Code ; Given matrix
| Python Code:
def printMatrix(a ) :
    for x in a :
        for y in x :
            print(y , end = " ")
        print()
def sortBoundaryWise(a ) :
k = 0
l = 0
m = len(a )
n = len(a[0 ] )
n_k = 0
n_l = 0
n_m = m
n_n = n
while(k < m and l < n ) :
boundary =[]
for i in range(l , n ) :
boundary . append(a[k ][i ] )
k += 1
for i in range(k , m ) :
boundary . append(a[i ][n - 1 ] )
n -= 1
if(k < m ) :
for i in range(n - 1 , l - 1 , - 1 ) :
boundary . append(a[m - 1 ][i ] )
m -= 1
if(l < n ) :
for i in range(m - 1 , k - 1 , - 1 ) :
boundary . append(a[i ][l ] )
l += 1
boundary . sort()
ind = 0
for i in range(n_l , n_n ) :
a[n_k ][i ] = boundary[ind ]
ind += 1
n_k += 1
for i in range(n_k , n_m ) :
a[i ][n_n - 1 ] = boundary[ind ]
ind += 1
n_n -= 1
if(n_k < n_m ) :
for i in range(n_n - 1 , n_l - 1 , - 1 ) :
a[n_m - 1 ][i ] = boundary[ind ]
ind += 1
n_m -= 1
if(n_l < n_n ) :
for i in range(n_m - 1 , n_k - 1 , - 1 ) :
a[i ][n_l ] = boundary[ind ]
ind += 1
n_l += 1
printMatrix(a )
if __name__ == "__main__":
matrix =[[ 9 , 7 , 4 , 5 ] ,[1 , 6 , 2 , - 6 ] ,[12 , 20 , 2 , 0 ] ,[- 5 , - 6 , 7 , - 2 ] ]
sortBoundaryWise(matrix )
|
14,627 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import TLG
http
Step1: Convert TLG to Unicode
http
Step2: Now the TLG corpus is in now ready to use in Unicode. Some preprocesing is likely still required, as the text still has formatting and linebreaks present in the original printed text. | Python Code:
import datetime as dt
from cltk.corpus.utils.importer import CorpusImporter
corpus_importer = CorpusImporter('greek')
corpus_importer.list_corpora
corpus_importer.import_corpus('tlg', '/root/classics_corpora/TLG_E')
Explanation: Import TLG
http://docs.cltk.org/en/latest/importing_corpora.html
End of explanation
from cltk.corpus.greek.tlgu import TLGU
corpus_importer.import_corpus('greek_software_tlgu')
t = TLGU()
t0 = dt.datetime.utcnow()
t.convert_corpus(corpus='tlg')
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
Explanation: Convert TLG to Unicode
http://docs.cltk.org/en/latest/greek.html#converting-tlg-texts-with-tlgu
End of explanation
with open('/root/cltk_data/greek/text/tlg/plaintext/TLG0007.TXT') as file_open:
text_snippet = file_open.read()[:1500]
print(text_snippet)
Explanation: Now the TLG corpus is ready to use in Unicode. Some preprocessing is likely still required, as the text still has formatting and linebreaks present in the original printed text.
End of explanation |
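As a rough sketch of the kind of preprocessing still needed (not part of the original notebook), the print-era line breaks and runs of whitespace in the snippet above could be collapsed like this:
import re
# Sketch: re-join words split across lines, then collapse remaining whitespace.
cleaned = re.sub(r"-\s*\n\s*", "", text_snippet)
cleaned = re.sub(r"\s+", " ", cleaned).strip()
print(cleaned[:500])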
14,628 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Query for pile-up alignments at region "X"
We can query the API services to obtain reads from a given readgroupset such that we are able to make a pileup for any specified region
NOTE
Step1: Description
Step2: Make reference to the data from the server
We query the server for the dataset, which is the 1k-genomes dataset.
We access the bases of reference, followed by listing the reference sets.
Step3: ReferenceSet Name (chromosome) & ReadGroupSet Reads
We define our contiguous sequence with a chromosome reference, and then make a reference array for our read group sets of read groups.
Step4: Functions to obtain ReadGroupSet ID by name.
We can obtain a set of reads for a given Read-Group. The set of reads is returned in the 'rgs' variable below.
Step5: Function to call multiple ReferenceSets.
Because some calls such as Variants, Reference Bases, and Reads require this field to return the region that wants to be analyzed. Also note, that it is a required input of this service.
Step6: Cigar-Unit interpreter function.
This function can be expanded in the sense that, INDELS are detected in this function. With more specifications this Pile-Up program with this function can be extended to also detect such variants. Also note that only 4 cigar operations are specified, because they were the only operations specified in the reads.
Step7: Variant Call Function
If the pile-up detects that the dominant allele frequency differs from the reference bases, this function will be called to query the server for that variant.
Step8: Pile up function
This function calculates the pile-up for a given region, that is, the position being observed. It takes as input the chromosome reference and the Read-Groups to obtain the needed aligned sequence.
Step9: Function to calculate occurrence frequency
The frequency is obtained from the occurrence of alleles in the observed position for all the reads which are mapped in that region. This function returns an array of occurrence alleles as well as their individualized frequency compared to all the reads detected.
Step10: Precursor function
This function prepares the Read-Group set and does the inner calls, it also calls and obtains the reference bases. Note that only if the calls are correct will the function continue to make the calculations and inner calls.
Step11: Plotting Function
This function plots the information obtained by the others. It obtains the reference base and denotes it. It also obtains the frequencies and plots them in a pie chart.
Step12: Widget Interface Setup
This function calls the previous one, and sets up the interface so that it is an active application. The following one, will begin the query and plotting process. | Python Code:
#Widget()
Explanation: Query for pile-up alignments at region "X"
We can query the API services to obtain reads from a given readgroupset such that we are able to make a pileup for any specified region
NOTE: Under the "Kernel" tab above, do "Restart & Run All" then uncomment the first cell and run it individually
End of explanation
import ga4gh.client as client
c = client.HttpClient("http://1kgenomes.ga4gh.org")
Explanation: Description:
Note that the understanding of 3 terms allows for a complete/useful way to use this notebook. Now, the terminology is adapted to my understanding and therefore the expressions that I present might be incomplete or not strictly as defined by science.
First, our input takes the "position" argument which it is a unique position in the genome. It is necessary to specify which position wants to be observed because when we query the server for reads, it takes a starting and an ending position to return the set of data that spans our region of interest.
Second, the "Read Groupset Names" are specific subjects which had their genome sequenced on the 1k genomes project. The data rests on the server in the form of read sets, which will be defined later on.
Third, the “Reference Set Name” is a group of contiguous defined regions in the genome, which I refer to as chromosomes, but according to the 1k genomes website, there is more than just the 23-regular chromosomal expressions. Therefore I can only assume that other regions or references have been defined.
A set of reads is the data provided by the sequencer in the form of contiguous alleles. It is natural to observe multiple reads which overlap in a particular region, as well as reads which cover the same area. But that only adds on to the certainty of the statistics which determine the allele occurrence in a given position, that is, the purpose of Pileup. Also, variants are a set of alleles that differ from the reference bases and they are known as SNPs (Single Nucleotide Polymorphisms).
Pileup is a set of functions which do inner-mutual calls to obtain the fields previously defined. After the specific set of reads that span the region of interest have been obtained, we proceed to dissect the specific position of interest. We stack them in a counter dictionary which is then passed to a function that does the frequency calculation and finishes by returning the alleles observed as well as their individualized frequency. When the functions detect that the highest frequency allele differs from the reference bases, there is a call to the variant set to obtain the name, the position at which it starts, the alternate bases, and the genotype. Finally, it is plotted in a pie chart with the proper distribution of frequency.
Initialize the client
As seen in the "1kg.ipynb" example, we take the following steps to create the client object that will be used to obtain the information we desire and query the serever
End of explanation
dataset = c.searchDatasets().next()
referenceSet = c.searchReferenceSets().next()
references = [r for r in c.searchReferences(referenceSetId = referenceSet.id)]
Explanation: Make reference to the data from the server
We query the server for the dataset, which is the 1k-genomes dataset.
We access the bases of reference, followed by listing the reference sets.
End of explanation
contig ={}
for i in references:
contig[i.name] = str(i.id)
Explanation: ReferenceSet Name (chromosome) & ReadGroupSet Reads
We define our contiguous sequence with a chromosome reference, and then make a reference array for our read group sets of read groups.
End of explanation
def GetReadsForName(Name):
Name = str(Name)
if type(getReadGroupsByReadGroupSetName(Name)) == str:
return getReadGroupsByReadGroupSetName(Name)
else:
return [i for i in getReadGroupsByReadGroupSetName(Name)]
def readGroupSetByName(name):
result = None
for rgs in c.searchReadGroupSets(name=name, datasetId=dataset.id):
return rgs
return result
def getReadGroupsByReadGroupSetName(readGroupSetName):
if None == readGroupSetByName(readGroupSetName):
return "Sorry, bad request for {}".format(readGroupSetName)
else:
return readGroupSetByName(readGroupSetName).read_groups
Explanation: Functions to obtain ReadGroupSet ID by name.
We can obtain a set of reads for a given Read-Group. The set of reads is returned in the 'rgs' variable below.
End of explanation
def chrfunct(Chromo):
chr1 = filter(lambda x: x.name == str(Chromo), references)[0]
return chr1
Explanation: Function to call multiple ReferenceSets.
Because some calls such as Variants, Reference Bases, and Reads require this field to return the region that wants to be analyzed. Also note, that it is a required input of this service.
End of explanation
def Cigar_Interpreter(Sequence, observe, ReferBase):
Temp = 0
BaseCounter = 0
Variant = ""
AligSeq = Sequence.aligned_sequence
InterpArr = list([])
Iter = 0
for i in Sequence.alignment.cigar:
Length = i.operation_length
if i.Operation.Name(i.operation) == "ALIGNMENT_MATCH":
InterpArr[len(InterpArr):len(InterpArr)+Length] = AligSeq[Temp:Temp+Length]
Temp += Length
BaseCounter += Length
elif i.Operation.Name(i.operation) == "CLIP_SOFT":
Temp += Length
elif i.Operation.Name(i.operation) == "DELETE":
int_iter = 0
for i in range(Length):
InterpArr[len(InterpArr) : len(InterpArr)+1] = "N"
BaseCounter += 1
int_iter += 1
if BaseCounter == observe:
Variant = ReferBase[BaseCounter:BaseCounter+int_iter]
return Variant
elif i.Operation.Name(i.operation) == "INSERT":
for i in range(Length):
InterpArr[len(InterpArr):len(InterpArr)+1] = AligSeq[Temp : Temp+1]
Temp += 1
if (Temp == observe) and (len(InterpArr) >= Temp+Length+1):
Variant = "".join(InterpArr[Temp:Temp+Length+1])
return Variant
Iter += 1
if (Temp >= observe) and (len(Sequence.alignment.cigar) == Iter) :
return InterpArr[observe]
else:
return "N"
Explanation: Cigar-Unit interpreter function.
This function can be expanded in the sense that, INDELS are detected in this function. With more specifications this Pile-Up program with this function can be extended to also detect such variants. Also note that only 4 cigar operations are specified, because they were the only operations specified in the reads.
End of explanation
def find_variants(Start, End, RdGrpSetName, ChromoSm):
for variantSet in c.searchVariantSets(datasetId=dataset.id):
if variantSet.name == "phase3-release":
release = variantSet
for callSet in c.searchCallSets(variantSetId= release.id, name= str(RdGrpSetName)):
mycallset = callSet
for variant in c.searchVariants(release.id, referenceName=ChromoSm, start=Start, end=End, callSetIds=[mycallset.id]):
if len(variant.alternate_bases[0]) == 1 and len(variant.reference_bases) == 1:
print "\nA VARIANT WAS FOUND"
print "Variant Name: {}, Start: {}, End: {} \nAlternate Bases: {} \nGenotypes: {}".format(str(variant.names[0]), str(variant.start), str(variant.end), str(variant.alternate_bases[0]), str(variant.calls[0].genotype))
return
return False
Explanation: Variant Call Function
If the pile-up detects that the dominant allele frequency differs from the reference bases, this function will be called to query the server for that variant.
End of explanation
def pileUp(contig, position, rgset, Chromosm):
alleles = []
rgset = GetReadsForName(rgset)
if type(rgset) != str:
for i in rgset:
for sequence in c.searchReads(readGroupIds=[i.id],start = position, end = position+1, referenceId=contig):
if sequence.alignment != None:
start = sequence.alignment.position.position
observe = position - sequence.alignment.position.position
end = start+len(sequence.aligned_sequence)
if observe > 100 or observe < 0:
continue
if len(sequence.alignment.cigar) > 1:
allele = Cigar_Interpreter(sequence, observe,c.listReferenceBases(chrfunct(Chromosm).id, start=start, end= end))
else:
allele = sequence.aligned_sequence[observe]
alleles.append({"allele": str(allele), "readGroupId":i.id})
return Calc_Freq(alleles)
else:
return rgset
Explanation: Pile up function
This function calculates the pile-up for a given region, that is, the position being observed. It takes as input the chromosome reference and the Read-Groups to obtain the needed aligned sequence.
End of explanation
def Calc_Freq(Test):
tot = len(Test)
AutCalc = {}
Arr = []
for i in range(tot):
if AutCalc.has_key(Test[i]["allele"]) == False and (Test[i]['allele'] != "N"):
AutCalc.setdefault(Test[i]["allele"], 1)
Arr.append(Test[i]['allele'])
else:
if Test[i]['allele'] == "N":
tot -= 1
else:
AutCalc[Test[i]["allele"]] = float(AutCalc.get(Test[i]["allele"]) + 1)
Freq = {}
print "\n{} Reads where used, to determine pile-up".format(tot)
tot = float(tot)
for i in Arr:
Freq.setdefault(i,float(AutCalc.get(i)/tot))
return Freq
Explanation: Function to calculate occurrence frequency
The frequency is obtained from the occurrence of alleles in the observed position for all the reads which are mapped in that region. This function returns an array of occurrence alleles as well as their individualized frequency compared to all the reads detected.
End of explanation
def Variant_Comp(Position, ReadGroupSetName, Chromosm):
RdGrp = GetReadsForName(ReadGroupSetName)
Chrm = contig.get(Chromosm, None)
if (Chrm != None) and type(RdGrp) != (str) :
base = c.listReferenceBases(Chrm, start = Position, end = Position+1)
var = pileUp(Chrm, Position, ReadGroupSetName, Chromosm)
return (str(base), var)
else:
if RdGrp == None:
print"Read Group Set '{}' is not in the API".format(ReadGroupSetName)
else:
print"Chromosome '{}' is not in the API".format(Chromosm)
Explanation: Precursor function
This function prepares the Read-Group set and does the inner calls, it also calls and obtains the reference bases. Note that only if the calls are correct will the function continue to make the calculations and inner calls.
End of explanation
def plot_vars(Position, RdGrpName, Chromo):
%matplotlib inline
import matplotlib.pyplot as plt
Refer, Freqs = Variant_Comp(int(Position), str(RdGrpName),str(Chromo))
labels = Freqs.keys()
sizes = Freqs.values()
colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral']
Expl= {}
Legend = []
print "Reference Bases:", Refer
for i in labels:
if Freqs.get(i) != max(sizes):
find_variants(int(Position), int(Position)+1, str(RdGrpName), str(Chromo))
Expl.setdefault(i, .15)
Legend.append("{}: {} %".format(i, str(Freqs.get(i)*100)[:4]))
elif i == Refer:
Expl.setdefault(i,0.8)
Legend.append("{}: {} %".format(i, str(Freqs.get(i)*100)[:4]))
else:
Expl.setdefault(i,0.0)
Legend.append("{}: {} %".format(i, str(Freqs.get(i)*100)[:4]))
explode = Expl.values()
plt.pie(sizes, explode=explode, labels=labels, colors=colors,autopct='%1.1f%%', shadow=True, startangle=0)
plt.axis('equal')
plt.legend(['%s' % str(x) for x in (Legend)])
plt.show()
Explanation: Plotting Function
This function plots the information obtained by the others. It obtains the reference base and denotes it. It also obtains the frequencies and plots them in a pie chart.
End of explanation
def Widget():
from ipywidgets import widgets
from ipywidgets import interact
from IPython.display import display
t0 = widgets.Text(value="Position Exaple: '120394'", disabled=True)
text0 = widgets.Text()
t1 = widgets.Text(value="ReadGroupName Example: 'NA19102'", disabled=True)
text1 = widgets.Text()
t2 = widgets.Text(value= "ReferenceSets Example: '1'", disabled=True)
text2 = widgets.Text()
display(t0, text0, t1, text1, t2, text2)
button = widgets.Button(description="Submit")
exit = widgets.Button(description="Exit")
display(button, exit)
def exitFunct(c):
import sys
sys.exit(["Thank you, you have exited the function"])
def Submit(sender):
Pos, RgSetNm, Chrom = text0.value, text1.value, text2.value
chr1 = chrfunct(Chrom)
plot_vars(Pos, RgSetNm, Chrom)
def button_clicked(b):
print "Position: {}, ReadGrpSet: {}, Chrom: {}".format(text0.value, text1.value, text2.value)
Submit(b)
button.on_click(button_clicked)
exit.on_click(exitFunct)
Explanation: Widget Interface Setup
This function calls the previous one and sets up the interface so that it is an active application. The following one will begin the query and plotting process.
End of explanation |
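For a non-interactive run, the same pipeline can be driven directly. A sketch (not part of the original notebook) using the example values suggested by the widget placeholders above:
# Sketch: call the plotting pipeline directly with the placeholder example values.
plot_vars(120394, 'NA19102', '1')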
14,629 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing methods
Once we have a mock server, we could already provide an interface to
external services mocking our replies.
This is very helpful to enable
clients to test our API and enable quick feedback on data types and possible responses.
Now that we have the contract, we should start with the implementation!
OperationId
OperationId is the OAS3 field that maps the resource-target to the Python function to call.
paths
Step1: Now run the spec in a terminal using
cd /code/notebooks/oas3/
connexion run /code/notebooks/oas3/ex-03-02-path.yaml
play a bit with the Swagger UI
and try making a request!
Step2: Returning errors.
Our api.get_status implementation always returns 200 OK, but in the real world APIs could
return different kinds of errors.
An interoperable API should | Python Code:
# connexion provides a predefined problem object
from connexion import problem
# Exercise: write a get_status() returning a successful response to problem.
help(problem)
def get_status():
return problem(
status=200,
title="OK",
detail="The application is working properly"
)
Explanation: Implementing methods
Once we have a mock server, we could already provide an interface to
external services mocking our replies.
This is very helpful to enable
clients to test our API and enable quick feedback on data types and possible responses.
Now that we have the contract, we should start with the implementation!
OperationId
OperationId is the OAS3 field that maps the resource-target to the Python function to call.
paths:
/status
get:
...
operationId: api.get_status
...
The method signature should reflect the function's one.
OAS allows to pass parameters to the resource target via:
- query parameters
- http headers
- request body
Implement get_status
At first we'll just implement the get_status in api.py function that:
takes no input parameters;
returns a problem+json
End of explanation
!curl http://localhost:5000/datetime/v1/status -kv
Explanation: Now run the spec in a terminal using
cd /code/notebooks/oas3/
connexion run /code/notebooks/oas3/ex-03-02-path.yaml
play a bit with the Swagger UI
and try making a request!
End of explanation
from random import randint
from connexion import problem
def get_status():
headers = {"Cache-Control": "no-store"}
p = randint(1, 5)
if p == 5:
return problem(
status=503,
title="Service Temporarily Unavailable",
detail="Retry after the number of seconds specified in the the Retry-After header.",
headers=dict(**headers, **{'Retry-After': str(p)})
)
return problem(
status=200,
title="OK",
detail="So far so good.",
headers=headers
)
Explanation: Returning errors.
Our api.get_status implementation always returns 200 OK, but in the real world APIs could
return different kinds of errors.
An interoperable API should:
fail fast, preventing application errors from resulting in stuck connections;
implement a clean error semantic.
In our Service Management framework we expect that:
if the Service is unavailable, we must return 503 Service Unavailable http status
we must return the Retry-After header specifying the number of seconds
when to retry.
TODO: ADD CIRCUIT_BREAKER IMAGE
To implement this we must:
add the returned headers in the OAS3 interface;
pass the headers to the flask Response
Exercise
Modify the OAS3 spec in ex-04-01-headers.yaml and:
add a 503 response to the /status path;
when a 503 is returned, the retry-after header is returned.
Hint: you can define a header in components/headers like that:
```
components:
headers:
Retry-After:
description: |-
Retry contacting the endpoint at least after seconds.
See https://tools.ietf.org/html/rfc7231#section-7.1.3
schema:
format: int32
type: integer
```
Or just $ref the Retry-After defined in https://teamdigitale.github.io/openapi/0.0.5/definitions.yaml#/headers/Retry-After
Modify api.py:get_status such that:
returns a 503 on 20% of the requests;
when a 503 is returned, the retry-after header is returned;
on each response, return the Cache-Control: no-store header to avoid
caching on service status.
Bonus track: Google post on HTTP caching
End of explanation |
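To see the Retry-After semantics from the client side, here is a minimal polling sketch (not part of the exercise) that backs off as instructed by the header; it assumes the service is running locally as in the curl call above.
import time
import requests
# Sketch: poll the status endpoint and honour Retry-After on 503 responses.
url = "http://localhost:5000/datetime/v1/status"
for _ in range(5):
    resp = requests.get(url)
    if resp.status_code != 503:
        print(resp.status_code, resp.json().get("detail"))
        break
    wait = int(resp.headers.get("Retry-After", "1"))
    print("service unavailable, retrying in", wait, "s")
    time.sleep(wait)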
14,630 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer
Step24: Decoding - Training
Create a training decoding layer
Step27: Decoding - Inference
Create inference decoder
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step40: Batch and pad the source and target sequences
Step43: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step45: Save Parameters
Save the batch_size and save_path parameters for inference.
Step47: Checkpoint
Step50: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step52: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text = []
target_id_text = []
for sentence in source_text.split('\n'):
s = []
for word in sentence.split():
s.append(source_vocab_to_int[word])
source_id_text.append(s)
for sentence in target_text.split('\n'):
s = []
for word in sentence.split():
s.append(target_vocab_to_int[word])
s.append(target_vocab_to_int['<EOS>'])
target_id_text.append(s)
return (source_id_text, target_id_text)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
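As a quick sanity check, a toy illustration (hypothetical vocabularies, not part of the project tests) of the output text_to_ids should produce:
# Toy illustration with made-up vocabularies (hypothetical ids).
toy_source_vocab = {'new': 4, 'jersey': 5}
toy_target_vocab = {'<EOS>': 1, 'new': 6, 'jersey': 7}
src_ids, tgt_ids = text_to_ids('new jersey', 'new jersey', toy_source_vocab, toy_target_vocab)
print(src_ids)   # [[4, 5]]
print(tgt_ids)   # [[6, 7, 1]]  <- note the appended <EOS> id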
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learn_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')
return (inputs, targets, learn_rate, keep_prob, target_sequence_length, \
max_target_sequence_length, source_sequence_length)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return decoder_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
End of explanation
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
embed_encoder_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size,
encoding_embedding_size)
def lstm_cell(rnn_size):
cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=1))
cell_dropout = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
return cell_dropout
stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell(rnn_size) for _ in range(num_layers)])
encoder_output, encoder_state = tf.nn.dynamic_rnn(stacked_lstm,
embed_encoder_input,
sequence_length=source_sequence_length, dtype=tf.float32)
return (encoder_output, encoder_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
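A rough shape walkthrough may help (batch size B, T time steps; illustrative comments only, nothing here is executed by the project):
# rnn_inputs:       [B, T]                          integer word ids
# embedded inputs:  [B, T, encoding_embedding_size] after tf.contrib.layers.embed_sequence
# encoder_output:   [B, T, rnn_size]                dynamic_rnn outputs at every time step
# encoder_state:    num_layers LSTMStateTuple(c, h) pairs, each of shape [B, rnn_size]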
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
train_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
train_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, train_helper, encoder_state,
output_layer)
train_decoder_output = tf.contrib.seq2seq.dynamic_decode(train_decoder, impute_finished=True,
maximum_iterations=max_summary_length)[0]
return train_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
start_tokens = tf.tile(tf.constant([start_of_sequence_id],
dtype=tf.int32), [batch_size], name='start_tokens')
infer_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens,
end_of_sequence_id)
infer_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, infer_helper, encoder_state,
output_layer)
infer_decoder_output = tf.contrib.seq2seq.dynamic_decode(infer_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)[0]
return infer_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create an inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
def lstm_cell(rnn_size):
cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=1))
cell_dropout = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
return cell_dropout
stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell(rnn_size) for _ in range(num_layers)])
output_layer = Dense(target_vocab_size, use_bias=False)
with tf.variable_scope('decode'):
train_output = decoding_layer_train(encoder_state,
stacked_lstm,
dec_embed_input,
target_sequence_length,
max_target_sequence_length,
output_layer,
keep_prob)
with tf.variable_scope('decode', reuse=True):
infer_output = decoding_layer_infer(encoder_state,
stacked_lstm,
dec_embeddings,
target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],
max_target_sequence_length,
target_vocab_size,
output_layer,
batch_size,
keep_prob)
return (train_output, infer_output)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
_, enc_state = encoding_layer(input_data,
rnn_size,
num_layers,
keep_prob,
source_sequence_length,
source_vocab_size,
enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
train_dec_output, infer_dec_output = decoding_layer(dec_input,
enc_state,
target_sequence_length,
max_target_sentence_length,
rnn_size,
num_layers,
target_vocab_to_int,
target_vocab_size,
batch_size,
keep_prob,
dec_embedding_size)
return (train_dec_output, infer_dec_output)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
#Number of Epochs
epochs = 3
#Batch Size
batch_size = 256
#RNN Size
rnn_size = 500
#Number of Layers
num_layers = 2
#Embedding Size
encoding_embedding_size = 250
decoding_embedding_size = 250
#Learning Rate
learning_rate = 0.001
#Dropout Keep Probability
keep_probability = 0.7
display_step = 100
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to the number of batches between debug output statements
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
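For intuition, this is what the padding step does to a toy batch (illustrative values only):
toy_batch = [[5, 6], [7, 8, 9, 10], [11]]
print(pad_sentence_batch(toy_batch, pad_int=0))
# [[5, 6, 0, 0], [7, 8, 9, 10], [11, 0, 0, 0]]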
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
s = sentence.lower().split()
word_ids = [vocab_to_int.get(w, vocab_to_int['<UNK>']) for w in s]
return word_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
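For example, with a small hypothetical vocabulary (not the project's real one):
toy_vocab = {'<UNK>': 2, 'he': 10, 'saw': 11, 'a': 12, 'truck': 13}
print(sentence_to_seq('He saw a PURPLE truck', toy_vocab))
# [10, 11, 12, 2, 13]  ('purple' is out of vocabulary, so it maps to <UNK>)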
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
14,631 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Processing in IPython
This notebook shows the ability to use the Processing library (based on Java). There is also a full Processing kernel that does a Java-compile (showing any errors) with additional benefits. This magic does no error checking.
Requirements
Step1: Next, you should enable metakernel magics for IPython
Step2: Now, you are ready to embed Processing sketches in your notebook. Try moving your mouse over the sketch
Step3: This example from https | Python Code:
! pip install metakernel --user
Explanation: Processing in IPython
This notebook shows the ability to use the Processing library (based on Java). There is also a full Processing kernel that does a Java-compile (showing any errors) with additional benefits. This magic does no error checking.
Requirements:
IPython/Jupyter notebook
metakernel
Internet connection
First you need to install metakernel:
End of explanation
from metakernel import register_ipython_magics
register_ipython_magics()
Explanation: Next, you should enable metakernel magics for IPython:
End of explanation
%%processing
void draw() {
background(128);
ellipse(mouseX, mouseY, 10, 10);
}
Explanation: Now, you are ready to embed Processing sketches in your notebook. Try moving your mouse over the sketch:
End of explanation
%%processing
int cx, cy;
float secondsRadius;
float minutesRadius;
float hoursRadius;
float clockDiameter;
void setup() {
size(640, 360);
stroke(255);
int radius = min(width, height) / 2;
secondsRadius = radius * 0.72;
minutesRadius = radius * 0.60;
hoursRadius = radius * 0.50;
clockDiameter = radius * 1.8;
cx = width / 2;
cy = height / 2;
}
void draw() {
background(0);
// Draw the clock background
fill(80);
noStroke();
ellipse(cx, cy, clockDiameter, clockDiameter);
// Angles for sin() and cos() start at 3 o'clock;
// subtract HALF_PI to make them start at the top
float s = map(second(), 0, 60, 0, TWO_PI) - HALF_PI;
float m = map(minute() + norm(second(), 0, 60), 0, 60, 0, TWO_PI) - HALF_PI;
float h = map(hour() + norm(minute(), 0, 60), 0, 24, 0, TWO_PI * 2) - HALF_PI;
// Draw the hands of the clock
stroke(255);
strokeWeight(1);
line(cx, cy, cx + cos(s) * secondsRadius, cy + sin(s) * secondsRadius);
strokeWeight(2);
line(cx, cy, cx + cos(m) * minutesRadius, cy + sin(m) * minutesRadius);
strokeWeight(4);
line(cx, cy, cx + cos(h) * hoursRadius, cy + sin(h) * hoursRadius);
// Draw the minute ticks
strokeWeight(2);
beginShape(POINTS);
for (int a = 0; a < 360; a+=6) {
float angle = radians(a);
float x = cx + cos(angle) * secondsRadius;
float y = cy + sin(angle) * secondsRadius;
vertex(x, y);
}
endShape();
}
Explanation: This example from https://processing.org/examples/clock.html :
End of explanation |
14,632 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: ABC
Abstract base classes are a form of interface checking more strict than individual hasattr() checks for particular methods. By defining an abstract base class, you can define a common API for a set of subclasses. This capability is especially useful in situations where a third-party is going to provide implementations, such as with plugins to an application, but can also aid you when working on a large team or with a large code-base where keeping all classes in your head at the same time is difficult or not possible.
How ABCs Work
Step3: NOTE
issubclass
Step5: Implementation Through Subclassing
Step7: Abstract Method Continued
Step8: Abstract Properties
If your API specification includes attributes in addition to methods, you can require the attributes in concrete classes by defining them with @abstractproperty | Python Code:
from abc import ABCMeta, abstractmethod
class Mammal(metaclass=ABCMeta):
## version 2.x ## __metaclass__=ABCMeta
@abstractmethod
def eyes(self, val):
pass
# @abstractmethod
# def hand(self):
# pass
def hair(self):
print("hair")
def neocortex(self):
a part of the cerebral cortex concerned with sight and hearing in mammals,
regarded as the most recently evolved part of the cortex.
print("neocortex")
class Human(Mammal):
def eyes(self, val):
print("human eyes")
print ('Subclass:', issubclass(Human, Mammal))
print ('Instance:', isinstance(Human(), Mammal))
c = Human()
print ('Instance:', isinstance(c, Mammal))
c.eyes("red")
print(dir(c))
m = Mammal()  # raises TypeError: Mammal has an abstract method (eyes), so it cannot be instantiated
class fish(Mammal):
pass
myfish = fish()  # also raises TypeError: fish does not override the abstract eyes() method
Explanation: ABC
Abstract base classes are a form of interface checking more strict than individual hasattr() checks for particular methods. By defining an abstract base class, you can define a common API for a set of subclasses. This capability is especially useful in situations where a third-party is going to provide implementations, such as with plugins to an application, but can also aid you when working on a large team or with a large code-base where keeping all classes in your head at the same time is difficult or not possible.
How ABCs Work
End of explanation
from abc import ABCMeta, abstractmethod
class Mammal(metaclass=ABCMeta):
@abstractmethod
def eyes(self, val):
raise NotImplementedError()
def hair(self):
print("hair")
def neocortex(self):
a part of the cerebral cortex concerned with sight and hearing in mammals,
regarded as the most recently evolved part of the cortex.
print("neocortex")
class Human:
def eyes(self, val):
print("human eyes: ", val)
Mammal.register(Human)
print ('Subclass:', issubclass(Human, Mammal))
c = Human()
print ('Instance:', isinstance(c, Mammal))
c.eyes("Hazel")
Explanation: NOTE
issubclass: Return true if class is a subclass (direct, indirect or virtual) of classinfo. A class is considered a subclass of itself. classinfo may be a tuple of class objects, in which case every entry in classinfo will be checked. In any other case, a TypeError exception is raised.
isinstance: Return true if the object argument is an instance of the classinfo argument, or of a (direct, indirect or virtual) subclass thereof. Also return true if classinfo is a type object (new-style class) and object is an object of that type or of a (direct, indirect or virtual) subclass thereof. If object is not a class instance or an object of the given type, the function always returns false. If classinfo is a tuple of class or type objects (or recursively, other such tuples), return true if object is an instance of any of the classes or types. If classinfo is not a class, type, or tuple of classes, types, and such tuples, a TypeError exception is raised.
Registering the child class
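One point worth noting (a small sketch, not from the original): register() creates a virtual subclass, so the abstract-method check is not enforced on registered classes.
class Rock:
    pass

Mammal.register(Rock)
print(issubclass(Rock, Mammal))    # True
print(isinstance(Rock(), Mammal))  # True, even though Rock never defines eyes()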
End of explanation
from abc import ABCMeta, abstractmethod
class Mammal(metaclass=ABCMeta):
## version 2.x ## __metaclass__=ABCMeta
@abstractmethod
def eyes(self, val):
raise NotImplementedError()
# @abstractmethod
# def hand(self):
# raise NotImplementedError()
def hair(self):
print("hair")
def neocortex(self):
a part of the cerebral cortex concerned with sight and hearing in mammals,
regarded as the most recently evolved part of the cortex.
print("neocortex")
class Human(Mammal):
def eyes(self, val):
print("human eyes")
print ('Subclass:', issubclass(Human, Mammal))
print ('Instance:', isinstance(Human(), Mammal))
c = Human()
print ('Instance:', isinstance(c, Mammal))
c.eyes("Gray")
Explanation: Implementation Through Subclassing
End of explanation
from abc import ABCMeta, abstractmethod
class Mammal(metaclass=ABCMeta):
## version 2.x ## __metaclass__=ABCMeta
@abstractmethod
def eyes(self,color):
print("Eyes color : " + color)
def hair(self):
print("hair")
def neocortex(self):
a part of the cerebral cortex concerned with sight and hearing in mammals,
regarded as the most recently evolved part of the cortex.
print("neocortex")
class Human(Mammal):
def eyes(self, val):
super(Human, self).eyes(val)
print("human eyes")
print ('Subclass:', issubclass(Human, Mammal))
print ('Instance:', isinstance(Human(), Mammal))
c = Human()
print ('Instance:', isinstance(c, Mammal))
c.eyes("Green")
Explanation: Abstract Method Continued
End of explanation
import abc
class Base(metaclass=abc.ABCMeta):
@abc.abstractproperty
def value(self):
return 'Should never get here'
class Implementation(Base):
@property
def value(self):
return 'concrete property'
if __name__ == '__main__':
# try:
# b = Base()
# print ('Base.value:', b.value)
# except Exception as err:
# print ('ERROR:', str(err))
i = Implementation()
print ('Implementation.value:', i.value)
import abc
class Base(metaclass=abc.ABCMeta):
@abc.abstractproperty
def value(self):
return 'Should never see this'
@value.setter
def value(self, newvalue):
return
class Implementation(Base):
_value = 'Default value'
@property
def value(self):
return self._value
@value.setter
def value(self, newvalue):
self._value = newvalue
i = Implementation()
print ('Implementation.value:', i.value)
i.value = 'New value'
print ('Changed value:', i.value)
Explanation: Abstract Properties
If your API specification includes attributes in addition to methods, you can require the attributes in concrete classes by defining them with @abstractproperty
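On newer Python versions abstractproperty is deprecated; the same requirement is usually spelled by stacking property and abstractmethod (a sketch equivalent in intent to the code above):
import abc

class HasValue(abc.ABC):
    @property
    @abc.abstractmethod
    def value(self):
        ...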
End of explanation |
14,633 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples and Exercises from Think Stats, 2nd Edition
http
Step1: Time series analysis
Load the data from "Price of Weed".
Step3: The following function takes a DataFrame of transactions and computes daily averages.
Step5: The following function returns a map from quality name to a DataFrame of daily averages.
Step6: dailies is the map from quality name to DataFrame.
Step7: The following plots the daily average price for each quality.
Step8: We can use statsmodels to run a linear model of price as a function of time.
Step9: Here's what the results look like.
Step11: Now let's plot the fitted model with the data.
Step13: The following function plots the original data and the fitted curve.
Step14: Here are results for the high quality category
Step15: Moving averages
As a simple example, I'll show the rolling average of the numbers from 1 to 10.
Step16: With a "window" of size 3, we get the average of the previous 3 elements, or nan when there are fewer than 3.
Step18: The following function plots the rolling mean.
Step19: Here's what it looks like for the high quality category.
Step21: The exponentially-weighted moving average gives more weight to more recent points.
Step24: We can use resampling to generate missing values with the right amount of noise.
Step25: Here's what the EWMA model looks like with missing values filled.
Step26: Serial correlation
The following function computes serial correlation with the given lag.
Step27: Before computing correlations, we'll fill missing values.
Step28: Here are the serial correlations for raw price data.
Step29: It's not surprising that there are correlations between consecutive days, because there are obvious trends in the data.
It is more interesting to see whether there are still correlations after we subtract away the trends.
Step30: Even if the correlations between consecutive days are weak, there might be correlations across intervals of one week, one month, or one year.
Step31: The strongest correlation is a weekly cycle in the medium quality category.
Autocorrelation
The autocorrelation function is the serial correlation computed for all lags.
We can use it to replicate the results from the previous section.
Step33: To get a sense of how much autocorrelation we should expect by chance, we can resample the data (which eliminates any actual autocorrelation) and compute the ACF.
Step35: The following function plots the actual autocorrelation for lags up to 40 days.
The flag add_weekly indicates whether we should add a simulated weekly cycle.
Step37: To show what a strong weekly cycle would look like, we have the option of adding a price increase of 1-2 dollars on Fridays and Saturdays.
Step38: Here's what the real ACFs look like. The gray regions indicate the levels we expect by chance.
Step39: Here's what it would look like if there were a weekly cycle.
Step41: Prediction
The simplest way to generate predictions is to use statsmodels to fit a model to the data, then use the predict method from the results.
Step42: Here's what the prediction looks like for the high quality category, using the linear model.
Step44: When we generate predictions, we want to quantify the uncertainty in the prediction. We can do that by resampling. The following function fits a model to the data, computes residuals, then resamples from the residuals to generate fake datasets. It fits the same model to each fake dataset and returns a list of results.
Step46: To generate predictions, we take the list of results fitted to resampled data. For each model, we use the predict method to generate predictions, and return a sequence of predictions.
If add_resid is true, we add resampled residuals to the predicted values, which generates predictions that include predictive uncertainty (due to random noise) as well as modeling uncertainty (due to random sampling).
Step48: To visualize predictions, I show a darker region that quantifies modeling uncertainty and a lighter region that quantifies predictive uncertainty.
Step49: Here are the results for the high quality category.
Step51: But there is one more source of uncertainty
Step53: And this function plots the results.
Step54: Here's what the high quality category looks like if we take into account uncertainty about how much past data to use.
Step55: Exercises
Exercise
Step56: Exercise
Step57: Worked example | Python Code:
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
import numpy as np
import pandas as pd
import random
import thinkstats2
import thinkplot
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
transactions = pd.read_csv('mj-clean.csv', parse_dates=[5])
transactions.head()
Explanation: Time series analysis
Load the data from "Price of Weed".
End of explanation
def GroupByDay(transactions, func=np.mean):
Groups transactions by day and computes the daily mean ppg.
transactions: DataFrame of transactions
returns: DataFrame of daily prices
grouped = transactions[['date', 'ppg']].groupby('date')
daily = grouped.aggregate(func)
daily['date'] = daily.index
start = daily.date[0]
one_year = np.timedelta64(1, 'Y')
daily['years'] = (daily.date - start) / one_year
return daily
Explanation: The following function takes a DataFrame of transactions and computes daily averages.
End of explanation
def GroupByQualityAndDay(transactions):
Divides transactions by quality and computes mean daily price.
transaction: DataFrame of transactions
returns: map from quality to time series of ppg
groups = transactions.groupby('quality')
dailies = {}
for name, group in groups:
dailies[name] = GroupByDay(group)
return dailies
Explanation: The following function returns a map from quality name to a DataFrame of daily averages.
End of explanation
dailies = GroupByQualityAndDay(transactions)
Explanation: dailies is the map from quality name to DataFrame.
End of explanation
import matplotlib.pyplot as plt
thinkplot.PrePlot(rows=3)
for i, (name, daily) in enumerate(dailies.items()):
thinkplot.SubPlot(i+1)
title = 'Price per gram ($)' if i == 0 else ''
thinkplot.Config(ylim=[0, 20], title=title)
thinkplot.Scatter(daily.ppg, s=10, label=name)
if i == 2:
plt.xticks(rotation=30)
thinkplot.Config()
else:
thinkplot.Config(xticks=[])
Explanation: The following plots the daily average price for each quality.
End of explanation
import statsmodels.formula.api as smf
def RunLinearModel(daily):
model = smf.ols('ppg ~ years', data=daily)
results = model.fit()
return model, results
Explanation: We can use statsmodels to run a linear model of price as a function of time.
End of explanation
from IPython.display import display
for name, daily in dailies.items():
model, results = RunLinearModel(daily)
print(name)
display(results.summary())
Explanation: Here's what the results look like.
End of explanation
def PlotFittedValues(model, results, label=''):
Plots original data and fitted values.
model: StatsModel model object
results: StatsModel results object
years = model.exog[:,1]
values = model.endog
thinkplot.Scatter(years, values, s=15, label=label)
thinkplot.Plot(years, results.fittedvalues, label='model', color='#ff7f00')
Explanation: Now let's plot the fitted model with the data.
End of explanation
def PlotLinearModel(daily, name):
Plots a linear fit to a sequence of prices, and the residuals.
daily: DataFrame of daily prices
name: string
model, results = RunLinearModel(daily)
PlotFittedValues(model, results, label=name)
thinkplot.Config(title='Fitted values',
xlabel='Years',
xlim=[-0.1, 3.8],
ylabel='Price per gram ($)')
Explanation: The following function plots the original data and the fitted curve.
End of explanation
name = 'high'
daily = dailies[name]
PlotLinearModel(daily, name)
Explanation: Here are results for the high quality category:
End of explanation
series = np.arange(10)
Explanation: Moving averages
As a simple example, I'll show the rolling average of the numbers from 1 to 10.
End of explanation
pd.rolling_mean(series, 3)
Explanation: With a "window" of size 3, we get the average of the previous 3 elements, or nan when there are fewer than 3.
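Note that pd.rolling_mean was removed in later pandas releases; on current versions the same result comes from the rolling-window API (a sketch, not part of the original notebook):
print(pd.Series(series).rolling(window=3).mean())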
End of explanation
def PlotRollingMean(daily, name):
Plots rolling mean.
daily: DataFrame of daily prices
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)
roll_mean = pd.rolling_mean(reindexed.ppg, 30)
thinkplot.Plot(roll_mean, label='rolling mean', color='#ff7f00')
plt.xticks(rotation=30)
thinkplot.Config(ylabel='price per gram ($)')
Explanation: The following function plots the rolling mean.
End of explanation
PlotRollingMean(daily, name)
Explanation: Here's what it looks like for the high quality category.
End of explanation
def PlotEWMA(daily, name):
Plots rolling mean.
daily: DataFrame of daily prices
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)
roll_mean = pd.ewma(reindexed.ppg, 30)
thinkplot.Plot(roll_mean, label='EWMA', color='#ff7f00')
plt.xticks(rotation=30)
thinkplot.Config(ylabel='price per gram ($)')
PlotEWMA(daily, name)
Explanation: The exponentially-weighted moving average gives more weight to more recent points.
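Similarly, pd.ewma is gone from later pandas releases; on current versions pd.ewma(s, span=30) is written with the ewm accessor (a sketch, not part of the original notebook):
daily.ppg.ewm(span=30).mean()   # replaces pd.ewma(daily.ppg, span=30)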
End of explanation
def FillMissing(daily, span=30):
Fills missing values with an exponentially weighted moving average.
Resulting DataFrame has new columns 'ewma' and 'resid'.
daily: DataFrame of daily prices
span: window size (sort of) passed to ewma
returns: new DataFrame of daily prices
dates = pd.date_range(daily.index.min(), daily.index.max())
reindexed = daily.reindex(dates)
ewma = pd.ewma(reindexed.ppg, span=span)
resid = (reindexed.ppg - ewma).dropna()
fake_data = ewma + thinkstats2.Resample(resid, len(reindexed))
reindexed.ppg.fillna(fake_data, inplace=True)
reindexed['ewma'] = ewma
reindexed['resid'] = reindexed.ppg - ewma
return reindexed
def PlotFilled(daily, name):
Plots the EWMA and filled data.
daily: DataFrame of daily prices
filled = FillMissing(daily, span=30)
thinkplot.Scatter(filled.ppg, s=15, alpha=0.2, label=name)
thinkplot.Plot(filled.ewma, label='EWMA', color='#ff7f00')
plt.xticks(rotation=30)
thinkplot.Config(ylabel='Price per gram ($)')
Explanation: We can use resampling to generate missing values with the right amount of noise.
End of explanation
PlotFilled(daily, name)
Explanation: Here's what the EWMA model looks like with missing values filled.
End of explanation
def SerialCorr(series, lag=1):
xs = series[lag:]
ys = series.shift(lag)[lag:]
corr = thinkstats2.Corr(xs, ys)
return corr
Explanation: Serial correlation
The following function computes serial correlation with the given lag.
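A quick sanity check (illustrative, not from the book): a slowly varying series should show strong lag-1 serial correlation, while white noise should not.
trend = pd.Series(np.linspace(0, 1, 100) + np.random.normal(0, 0.05, 100))
noise = pd.Series(np.random.normal(0, 1, 100))
print(SerialCorr(trend, lag=1), SerialCorr(noise, lag=1))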
End of explanation
filled_dailies = {}
for name, daily in dailies.items():
filled_dailies[name] = FillMissing(daily, span=30)
Explanation: Before computing correlations, we'll fill missing values.
End of explanation
for name, filled in filled_dailies.items():
corr = thinkstats2.SerialCorr(filled.ppg, lag=1)
print(name, corr)
Explanation: Here are the serial correlations for raw price data.
End of explanation
for name, filled in filled_dailies.items():
corr = thinkstats2.SerialCorr(filled.resid, lag=1)
print(name, corr)
Explanation: It's not surprising that there are correlations between consecutive days, because there are obvious trends in the data.
It is more interesting to see whether there are still correlations after we subtract away the trends.
End of explanation
rows = []
for lag in [1, 7, 30, 365]:
print(lag, end='\t')
for name, filled in filled_dailies.items():
corr = SerialCorr(filled.resid, lag)
print('%.2g' % corr, end='\t')
print()
Explanation: Even if the correlations between consecutive days are weak, there might be correlations across intervals of one week, one month, or one year.
End of explanation
import statsmodels.tsa.stattools as smtsa
filled = filled_dailies['high']
acf = smtsa.acf(filled.resid, nlags=365, unbiased=True)
print('%0.2g, %.2g, %0.2g, %0.2g, %0.2g' %
(acf[0], acf[1], acf[7], acf[30], acf[365]))
Explanation: The strongest correlation is a weekly cycle in the medium quality category.
Autocorrelation
The autocorrelation function is the serial correlation computed for all lags.
We can use it to replicate the results from the previous section.
End of explanation
def SimulateAutocorrelation(daily, iters=1001, nlags=40):
Resample residuals, compute autocorrelation, and plot percentiles.
daily: DataFrame
iters: number of simulations to run
nlags: maximum lags to compute autocorrelation
# run simulations
t = []
for _ in range(iters):
filled = FillMissing(daily, span=30)
resid = thinkstats2.Resample(filled.resid)
acf = smtsa.acf(resid, nlags=nlags, unbiased=True)[1:]
t.append(np.abs(acf))
high = thinkstats2.PercentileRows(t, [97.5])[0]
low = -high
lags = range(1, nlags+1)
thinkplot.FillBetween(lags, low, high, alpha=0.2, color='gray')
Explanation: To get a sense of how much autocorrelation we should expect by chance, we can resample the data (which eliminates any actual autocorrelation) and compute the ACF.
End of explanation
def PlotAutoCorrelation(dailies, nlags=40, add_weekly=False):
Plots autocorrelation functions.
dailies: map from category name to DataFrame of daily prices
nlags: number of lags to compute
add_weekly: boolean, whether to add a simulated weekly pattern
thinkplot.PrePlot(3)
daily = dailies['high']
SimulateAutocorrelation(daily)
for name, daily in dailies.items():
if add_weekly:
daily = AddWeeklySeasonality(daily)
filled = FillMissing(daily, span=30)
acf = smtsa.acf(filled.resid, nlags=nlags, unbiased=True)
lags = np.arange(len(acf))
thinkplot.Plot(lags[1:], acf[1:], label=name)
Explanation: The following function plots the actual autocorrelation for lags up to 40 days.
The flag add_weekly indicates whether we should add a simulated weekly cycle.
End of explanation
def AddWeeklySeasonality(daily):
Adds a weekly pattern.
daily: DataFrame of daily prices
returns: new DataFrame of daily prices
fri_or_sat = (daily.index.dayofweek==4) | (daily.index.dayofweek==5)
fake = daily.copy()
fake.ppg.loc[fri_or_sat] += np.random.uniform(0, 2, fri_or_sat.sum())
return fake
Explanation: To show what a strong weekly cycle would look like, we have the option of adding a price increase of 1-2 dollars on Fridays and Saturdays.
End of explanation
axis = [0, 41, -0.2, 0.2]
PlotAutoCorrelation(dailies, add_weekly=False)
thinkplot.Config(axis=axis,
loc='lower right',
ylabel='correlation',
xlabel='lag (day)')
Explanation: Here's what the real ACFs look like. The gray regions indicate the levels we expect by chance.
End of explanation
PlotAutoCorrelation(dailies, add_weekly=True)
thinkplot.Config(axis=axis,
loc='lower right',
xlabel='lag (days)')
Explanation: Here's what it would look like if there were a weekly cycle.
End of explanation
def GenerateSimplePrediction(results, years):
Generates a simple prediction.
results: results object
years: sequence of times (in years) to make predictions for
returns: sequence of predicted values
n = len(years)
inter = np.ones(n)
d = dict(Intercept=inter, years=years, years2=years**2)
predict_df = pd.DataFrame(d)
predict = results.predict(predict_df)
return predict
def PlotSimplePrediction(results, years):
predict = GenerateSimplePrediction(results, years)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.2, label=name)
thinkplot.plot(years, predict, color='#ff7f00')
xlim = years[0]-0.1, years[-1]+0.1
thinkplot.Config(title='Predictions',
xlabel='Years',
xlim=xlim,
ylabel='Price per gram ($)',
loc='upper right')
Explanation: Prediction
The simplest way to generate predictions is to use statsmodels to fit a model to the data, then use the predict method from the results.
End of explanation
name = 'high'
daily = dailies[name]
_, results = RunLinearModel(daily)
years = np.linspace(0, 5, 101)
PlotSimplePrediction(results, years)
Explanation: Here's what the prediction looks like for the high quality category, using the linear model.
End of explanation
def SimulateResults(daily, iters=101, func=RunLinearModel):
Run simulations based on resampling residuals.
daily: DataFrame of daily prices
iters: number of simulations
func: function that fits a model to the data
returns: list of result objects
_, results = func(daily)
fake = daily.copy()
result_seq = []
for _ in range(iters):
fake.ppg = results.fittedvalues + thinkstats2.Resample(results.resid)
_, fake_results = func(fake)
result_seq.append(fake_results)
return result_seq
Explanation: When we generate predictions, we want to quantify the uncertainty in the prediction. We can do that by resampling. The following function fits a model to the data, computes residuals, then resamples from the residuals to generate fake datasets. It fits the same model to each fake dataset and returns a list of results.
End of explanation
def GeneratePredictions(result_seq, years, add_resid=False):
Generates an array of predicted values from a list of model results.
When add_resid is False, predictions represent sampling error only.
When add_resid is True, they also include residual error (which is
more relevant to prediction).
result_seq: list of model results
years: sequence of times (in years) to make predictions for
add_resid: boolean, whether to add in resampled residuals
returns: sequence of predictions
n = len(years)
d = dict(Intercept=np.ones(n), years=years, years2=years**2)
predict_df = pd.DataFrame(d)
predict_seq = []
for fake_results in result_seq:
predict = fake_results.predict(predict_df)
if add_resid:
predict += thinkstats2.Resample(fake_results.resid, n)
predict_seq.append(predict)
return predict_seq
Explanation: To generate predictions, we take the list of results fitted to resampled data. For each model, we use the predict method to generate predictions, and return a sequence of predictions.
If add_resid is true, we add resampled residuals to the predicted values, which generates predictions that include predictive uncertainty (due to random noise) as well as modeling uncertainty (due to random sampling).
End of explanation
def PlotPredictions(daily, years, iters=101, percent=90, func=RunLinearModel):
Plots predictions.
daily: DataFrame of daily prices
years: sequence of times (in years) to make predictions for
iters: number of simulations
percent: what percentile range to show
func: function that fits a model to the data
result_seq = SimulateResults(daily, iters=iters, func=func)
p = (100 - percent) / 2
percents = p, 100-p
predict_seq = GeneratePredictions(result_seq, years, add_resid=True)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.3, color='gray')
predict_seq = GeneratePredictions(result_seq, years, add_resid=False)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.5, color='gray')
Explanation: To visualize predictions, I show a darker region that quantifies modeling uncertainty and a lighter region that quantifies predictive uncertainty.
End of explanation
years = np.linspace(0, 5, 101)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotPredictions(daily, years)
xlim = years[0]-0.1, years[-1]+0.1
thinkplot.Config(title='Predictions',
xlabel='Years',
xlim=xlim,
ylabel='Price per gram ($)')
Explanation: Here are the results for the high quality category.
End of explanation
def SimulateIntervals(daily, iters=101, func=RunLinearModel):
Run simulations based on different subsets of the data.
daily: DataFrame of daily prices
iters: number of simulations
func: function that fits a model to the data
returns: list of result objects
result_seq = []
starts = np.linspace(0, len(daily), iters).astype(int)
for start in starts[:-2]:
subset = daily[start:]
_, results = func(subset)
fake = subset.copy()
for _ in range(iters):
fake.ppg = (results.fittedvalues +
thinkstats2.Resample(results.resid))
_, fake_results = func(fake)
result_seq.append(fake_results)
return result_seq
Explanation: But there is one more source of uncertainty: how much past data should we use to build the model?
The following function generates a sequence of models based on different amounts of past data.
End of explanation
def PlotIntervals(daily, years, iters=101, percent=90, func=RunLinearModel):
Plots predictions based on different intervals.
daily: DataFrame of daily prices
years: sequence of times (in years) to make predictions for
iters: number of simulations
percent: what percentile range to show
func: function that fits a model to the data
result_seq = SimulateIntervals(daily, iters=iters, func=func)
p = (100 - percent) / 2
percents = p, 100-p
predict_seq = GeneratePredictions(result_seq, years, add_resid=True)
low, high = thinkstats2.PercentileRows(predict_seq, percents)
thinkplot.FillBetween(years, low, high, alpha=0.2, color='gray')
Explanation: And this function plots the results.
End of explanation
name = 'high'
daily = dailies[name]
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotIntervals(daily, years)
PlotPredictions(daily, years)
xlim = years[0]-0.1, years[-1]+0.1
thinkplot.Config(title='Predictions',
xlabel='Years',
xlim=xlim,
ylabel='Price per gram ($)')
Explanation: Here's what the high quality category looks like if we take into account uncertainty about how much past data to use.
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Exercises
Exercise: The linear model I used in this chapter has the obvious drawback that it is linear, and there is no reason to expect prices to change linearly over time. We can add flexibility to the model by adding a quadratic term, as we did in Section 11.3.
Use a quadratic model to fit the time series of daily prices, and use the model to generate predictions. You will have to write a version of RunLinearModel that runs that quadratic model, but after that you should be able to reuse code from the chapter to generate predictions.
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Exercise: Write a definition for a class named SerialCorrelationTest that extends HypothesisTest from Section 9.2. It should take a series and a lag as data, compute the serial correlation of the series with the given lag, and then compute the p-value of the observed correlation.
Use this class to test whether the serial correlation in raw price data is statistically significant. Also test the residuals of the linear model and (if you did the previous exercise), the quadratic model.
End of explanation
name = 'high'
daily = dailies[name]
filled = FillMissing(daily)
diffs = filled.ppg.diff()
thinkplot.plot(diffs)
plt.xticks(rotation=30)
thinkplot.Config(ylabel='Daily change in price per gram ($)')
filled['slope'] = pd.ewma(diffs, span=365)
thinkplot.plot(filled.slope[-365:])
plt.xticks(rotation=30)
thinkplot.Config(ylabel='EWMA of diff ($)')
# extract the last inter and the mean of the last 30 slopes
start = filled.index[-1]
inter = filled.ewma[-1]
slope = filled.slope[-30:].mean()
start, inter, slope
# reindex the DataFrame, adding a year to the end
dates = pd.date_range(filled.index.min(),
filled.index.max() + np.timedelta64(365, 'D'))
predicted = filled.reindex(dates)
# generate predicted values and add them to the end
predicted['date'] = predicted.index
one_day = np.timedelta64(1, 'D')
predicted['days'] = (predicted.date - start) / one_day
predict = inter + slope * predicted.days
predicted.ewma.fillna(predict, inplace=True)
# plot the actual values and predictions
thinkplot.Scatter(daily.ppg, alpha=0.1, label=name)
thinkplot.Plot(predicted.ewma, color='#ff7f00')
Explanation: Worked example: There are several ways to extend the EWMA model to generate predictions. One of the simplest is something like this:
Compute the EWMA of the time series and use the last point as an intercept, inter.
Compute the EWMA of differences between successive elements in the time series and use the last point as a slope, slope.
To predict values at future times, compute inter + slope * dt, where dt is the difference between the time of the prediction and the time of the last observation.
End of explanation |
14,634 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Building-an-ANN" data-toc-modified-id="Building-an-ANN-1"><span class="toc-item-num">1 </span>Building an ANN</a></div><div class="lev2 toc-item"><a href="#Installing-packages" data-toc-modified-id="Installing-packages-11"><span class="toc-item-num">1.1 </span>Installing packages</a></div><div class="lev2 toc-item"><a href="#Data-Preprocessing" data-toc-modified-id="Data-Preprocessing-12"><span class="toc-item-num">1.2 </span>Data Preprocessing</a></div><div class="lev2 toc-item"><a href="#Building-an-ANN" data-toc-modified-id="Building-an-ANN-13"><span class="toc-item-num">1.3 </span>Building an ANN</a></div><div class="lev2 toc-item"><a href="#Making-predictions-and-evaluating-the-model" data-toc-modified-id="Making-predictions-and-evaluating-the-model-14"><span class="toc-item-num">1.4 </span>Making predictions and evaluating the model</a></div><div class="lev2 toc-item"><a href="#Evaluating,-Improving-and-Tuning-the-ANN" data-toc-modified-id="Evaluating,-Improving-and-Tuning-the-ANN-15"><span class="toc-item-num">1.5 </span>Evaluating, Improving and Tuning the ANN</a></div>
# Building an ANN
Credit
Step1: Data Preprocessing
Step2: y (actual value)
Step3: Building an ANN
Step4: Tips
Step5: Making predictions and evaluating the model
Step6: Evaluating, Improving and Tuning the ANN
Using K-Fold Cross validation with Keras | Python Code:
# Installing Theano
# pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git
# Installing Tensorflow
# pip install tensorflow
# Installing Keras
# pip install --upgrade keras
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Building-an-ANN" data-toc-modified-id="Building-an-ANN-1"><span class="toc-item-num">1 </span>Building an ANN</a></div><div class="lev2 toc-item"><a href="#Installing-packages" data-toc-modified-id="Installing-packages-11"><span class="toc-item-num">1.1 </span>Installing packages</a></div><div class="lev2 toc-item"><a href="#Data-Preprocessing" data-toc-modified-id="Data-Preprocessing-12"><span class="toc-item-num">1.2 </span>Data Preprocessing</a></div><div class="lev2 toc-item"><a href="#Building-an-ANN" data-toc-modified-id="Building-an-ANN-13"><span class="toc-item-num">1.3 </span>Building an ANN</a></div><div class="lev2 toc-item"><a href="#Making-predictions-and-evaluating-the-model" data-toc-modified-id="Making-predictions-and-evaluating-the-model-14"><span class="toc-item-num">1.4 </span>Making predictions and evaluating the model</a></div><div class="lev2 toc-item"><a href="#Evaluating,-Improving-and-Tuning-the-ANN" data-toc-modified-id="Evaluating,-Improving-and-Tuning-the-ANN-15"><span class="toc-item-num">1.5 </span>Evaluating, Improving and Tuning the ANN</a></div>
# Building an ANN
Credit: [Deep Learning A-Z™: Hands-On Artificial Neural Networks](https://www.udemy.com/deeplearning/learn/v4/content)
- [Getting the dataset](https://www.superdatascience.com/deep-learning/)
## Installing packages
End of explanation
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('./Artificial_Neural_Networks/Churn_Modelling.csv')
X = dataset.iloc[:, 3:13].values
y = dataset.iloc[:, 13].values
Explanation: Data Preprocessing
End of explanation
print (X.shape)
X
print (y.shape)
y
# Encoding categorical data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X_1 = LabelEncoder()
X[:, 1] = labelencoder_X_1.fit_transform(X[:, 1])
labelencoder_X_2 = LabelEncoder()
X[:, 2] = labelencoder_X_2.fit_transform(X[:, 2])
onehotencoder = OneHotEncoder(categorical_features = [1])
X = onehotencoder.fit_transform(X).toarray()
X = X[:, 1:]
print (X.shape)
X
print (y.shape)
y
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
Explanation: y (actual value): Exited, the value we are trying to predict, i.e. whether the customer stays with the bank or leaves it.
End of explanation
# Importing the Keras libraries and packages
import keras
from keras.models import Sequential
from keras.layers import Dense
# Initialising the ANN
classifier = Sequential()
Explanation: Building an ANN
End of explanation
# Adding the input layer and the first hidden layer
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
# Adding the second hidden layer
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
# Adding the output layer
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
# Compiling the ANN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the Training set
classifier.fit(X_train, y_train, batch_size = 10, epochs = 100)
Explanation: Tips:
A common rule of thumb: set the number of nodes in the hidden layer to the average of the number of nodes in the input layer and the output layer.
(11 + 1) / 2 = 6
End of explanation
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
cm
Explanation: Making predictions and evaluating the model
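A quick way to read the confusion matrix (a small sketch, not in the original): rows are true classes, columns are predicted classes, so overall test accuracy is the diagonal sum over the total.
accuracy = (cm[0, 0] + cm[1, 1]) / cm.sum()
print('Test set accuracy: {:.3f}'.format(accuracy))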
End of explanation
# Evaluating the ANN
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense
def build_classifier():
classifier = Sequential()
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
return classifier
classifier = KerasClassifier(build_fn = build_classifier, batch_size = 10, epochs = 100)
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10, n_jobs = -1)
mean = accuracies.mean()
variance = accuracies.std()
# Improving the ANN
# Dropout Regularization to reduce overfitting if needed
# Tuning the ANN
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
def build_classifier(optimizer):
classifier = Sequential()
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
# classifier.add(Dropout(rate = 0.1))
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
# classifier.add(Dropout(rate = 0.1))
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
classifier.compile(optimizer = optimizer, loss = 'binary_crossentropy', metrics = ['accuracy'])
return classifier
classifier = KerasClassifier(build_fn = build_classifier)
parameters = {'batch_size': [25, 32],
'epochs': [100, 500],
'optimizer': ['adam', 'rmsprop']}
grid_search = GridSearchCV(estimator = classifier,
param_grid = parameters,
scoring = 'accuracy',
cv = 10)
grid_search = grid_search.fit(X_train, y_train)
best_parameters = grid_search.best_params_
best_accuracy = grid_search.best_score_
Explanation: Evaluating, Improving and Tuning the ANN
Using K-Fold Cross validation with Keras
End of explanation |
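A minimal follow-up sketch (assuming the grid search above has finished) is simply to print the winning configuration before retraining a final model with it:
# best_parameters is a dict like {'batch_size': ..., 'epochs': ..., 'optimizer': ...}
print('Best parameters:', best_parameters)
print('Best cross-validated accuracy:', best_accuracy)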
14,635 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../artworks/matchzoo-logo.png" alt="logo" style="width
Step1: Define Task
There are two types of tasks available in MatchZoo. mz.tasks.Ranking and mz.tasks.Classification. We will use a ranking task for this demo.
Step2: Prepare Data
Step3: DataPack is a MatchZoo native data structure that most MatchZoo data handling processes build upon. A DataPack is consists of three pandas.DataFrame
Step4: It is also possible to convert a DataPack into a single pandas.DataFrame that holds all information.
Step5: However, using such pandas.DataFrame consumes much more memory if there are many duplicates in the texts, and that is the exact reason why we use DataPack. For more details about data handling, consult matchzoo/tutorials/data_handling.ipynb.
Preprocessing
MatchZoo preprocessors are used to convert a raw DataPack into a DataPack that ready to be fed into a model.
Step6: There are two steps to use a preprocessor. First, fit. Then, transform. fit will only changes the preprocessor's inner state but not the input DataPack.
Step7: fit will gather useful information into its context, which will be used later in a transform or used to set hyper-parameters of a model.
Step8: Once fit, the preprocessor has enough information to transform. transform will not change the preprocessor's inner state and the input DataPack, but return a transformed DataPack.
Step9: As we can see, text_left is already in sequence form that nerual networks love.
Just to make sure we have the correct sequence
Step10: For more details about preprocessing, consult matchzoo/tutorials/data_handling.ipynb.
Build Model
MatchZoo provides many built-in text matching models.
Step11: Let's use mz.models.DenseBaseline for our demo.
Step12: The model is initialized with a hyper parameter table, in which values are partially filled. To view parameters and their values, use print.
Step13: to_frame gives you more informartion in addition to just names and values.
Step14: To set a hyper-parameter
Step15: Notice that we are still missing input_shapes, and that information is store in the preprocessor.
Step16: We may use update to load a preprocessor's context into a model's hyper-parameter table.
Step17: Now we have a completed hyper-parameter table.
Step18: With all parameters filled in, we can now build and compile the model.
Step19: MatchZoo models are wrapped over keras models, and the backend property of a model gives you the actual keras model built.
Step20: For more details about models, consult matchzoo/tutorials/models.ipynb.
Train, Evaluate, Predict
A DataPack can unpack itself into data that can be directly used to train a MatchZoo model.
Step21: An alternative to train a model is to use a DataGenerator. This is useful for delaying expensive preprocessing steps or doing real-time data augmentation. For some models that needs dynamic batch-wise information, using a DataGenerator is required. For more details about DataGenerator, consult matchzoo/tutorials/data_handling.ipynb.
Step22: A Shortcut to Preprocessing and Model Building
Since data preprocessing and model building are laborious and special setups of some models makes this even worse, MatchZoo provides prepare, a unified interface that handles interaction among data, model, and preprocessor automatically.
More specifically, prepare does these following things
Step23: Save and Load the Model | Python Code:
import matchzoo as mz
print(mz.__version__)
Explanation: <img src="../artworks/matchzoo-logo.png" alt="logo" style="width:600px;float: center"/>
MatchZoo Quick Start
End of explanation
task = mz.tasks.Ranking()
print(task)
Explanation: Define Task
There are two types of tasks available in MatchZoo. mz.tasks.Ranking and mz.tasks.Classification. We will use a ranking task for this demo.
End of explanation
train_raw = mz.datasets.toy.load_data(stage='train', task=task)
test_raw = mz.datasets.toy.load_data(stage='test', task=task)
type(train_raw)
Explanation: Prepare Data
End of explanation
train_raw.left.head()
train_raw.right.head()
train_raw.relation.head()
Explanation: DataPack is a MatchZoo native data structure that most MatchZoo data handling processes build upon. A DataPack consists of three pandas.DataFrames:
End of explanation
train_raw.frame().head()
Explanation: It is also possible to convert a DataPack into a single pandas.DataFrame that holds all information.
End of explanation
preprocessor = mz.preprocessors.BasicPreprocessor()
Explanation: However, using such pandas.DataFrame consumes much more memory if there are many duplicates in the texts, and that is the exact reason why we use DataPack. For more details about data handling, consult matchzoo/tutorials/data_handling.ipynb.
Preprocessing
MatchZoo preprocessors are used to convert a raw DataPack into a DataPack that is ready to be fed into a model.
End of explanation
preprocessor.fit(train_raw)
Explanation: There are two steps to use a preprocessor. First, fit. Then, transform. fit will only change the preprocessor's inner state but not the input DataPack.
End of explanation
preprocessor.context
Explanation: fit will gather useful information into its context, which will be used later in a transform or used to set hyper-parameters of a model.
End of explanation
train_processed = preprocessor.transform(train_raw)
test_processed = preprocessor.transform(test_raw)
train_processed.left.head()
Explanation: Once fit, the preprocessor has enough information to transform. transform will not change the preprocessor's inner state and the input DataPack, but return a transformed DataPack.
End of explanation
vocab_unit = preprocessor.context['vocab_unit']
print('Orig Text:', train_processed.left.loc['Q1']['text_left'])
sequence = train_processed.left.loc['Q1']['text_left']
print('Transformed Indices:', sequence)
print('Transformed Indices Meaning:',
'_'.join([vocab_unit.state['index_term'][i] for i in sequence]))
Explanation: As we can see, text_left is already in sequence form that neural networks love.
Just to make sure we have the correct sequence:
End of explanation
mz.models.list_available()
Explanation: For more details about preprocessing, consult matchzoo/tutorials/data_handling.ipynb.
Build Model
MatchZoo provides many built-in text matching models.
End of explanation
model = mz.models.DenseBaseline()
Explanation: Let's use mz.models.DenseBaseline for our demo.
End of explanation
print(model.params)
Explanation: The model is initialized with a hyper parameter table, in which values are partially filled. To view parameters and their values, use print.
End of explanation
model.params.to_frame()[['Name', 'Description', 'Value']]
Explanation: to_frame gives you more information in addition to just names and values.
End of explanation
model.params['task'] = task
model.params['mlp_num_units'] = 3
print(model.params)
Explanation: To set a hyper-parameter:
End of explanation
print(preprocessor.context['input_shapes'])
Explanation: Notice that we are still missing input_shapes, and that information is stored in the preprocessor.
End of explanation
model.params.update(preprocessor.context)
Explanation: We may use update to load a preprocessor's context into a model's hyper-parameter table.
End of explanation
model.params.completed()
Explanation: Now we have a completed hyper-parameter table.
End of explanation
model.build()
model.compile()
Explanation: With all parameters filled in, we can now build and compile the model.
End of explanation
model.backend.summary()
Explanation: MatchZoo models are wrapped over keras models, and the backend property of a model gives you the actual keras model built.
End of explanation
x, y = train_processed.unpack()
test_x, test_y = test_processed.unpack()
model.fit(x, y, batch_size=32, epochs=5)
Explanation: For more details about models, consult matchzoo/tutorials/models.ipynb.
Train, Evaluate, Predict
A DataPack can unpack itself into data that can be directly used to train a MatchZoo model.
End of explanation
data_generator = mz.DataGenerator(train_processed, batch_size=32)
model.fit_generator(data_generator, epochs=5, use_multiprocessing=True, workers=4)
model.evaluate(test_x, test_y)
model.predict(test_x)
Explanation: An alternative to train a model is to use a DataGenerator. This is useful for delaying expensive preprocessing steps or doing real-time data augmentation. For some models that need dynamic batch-wise information, using a DataGenerator is required. For more details about DataGenerator, consult matchzoo/tutorials/data_handling.ipynb.
End of explanation
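As a hedged sketch (reusing the batch size from above; not part of the original quick-start), the test split can be wrapped in a DataGenerator in the same way and evaluated batch-wise:
# Build a generator over the preprocessed test data and evaluate without loading everything at once
test_generator = mz.DataGenerator(test_processed, batch_size=32)
print(model.evaluate_generator(test_generator))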
for model_class in mz.models.list_available():
print(model_class)
model, preprocessor, data_generator_builder, embedding_matrix = mz.auto.prepare(
task=task,
model_class=model_class,
data_pack=train_raw,
)
train_processed = preprocessor.transform(train_raw, verbose=0)
test_processed = preprocessor.transform(test_raw, verbose=0)
train_gen = data_generator_builder.build(train_processed)
test_gen = data_generator_builder.build(test_processed)
model.fit_generator(train_gen, epochs=1)
model.evaluate_generator(test_gen)
print()
Explanation: A Shortcut to Preprocessing and Model Building
Since data preprocessing and model building are laborious, and the special setups of some models make this even worse, MatchZoo provides prepare, a unified interface that handles interaction among data, model, and preprocessor automatically.
More specifically, prepare does the following things:
- create a default preprocessor of the model class (if not given one)
- fit the preprocessor using the raw data
- create an embedding matrix
- instantiate a model and fill in hyper-parameters
- build the model
- instantiate a DataGeneratorBuilder that will build a correctly formed DataGenerator given a DataPack
It also does a lot of special handling for specific models, but we will not go into the details of that here.
End of explanation
model.save('my-model')
loaded_model = mz.load_model('my-model')
Explanation: Save and Load the Model
End of explanation |
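As a quick illustrative check (not in the original tutorial), the reloaded model should give the same predictions as the in-memory one:
# Compare a few predictions from the saved-and-reloaded model against the original
print(model.predict(test_x)[:3])
print(loaded_model.predict(test_x)[:3])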
14,636 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-3', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NERC
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
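Purely as an illustration of how such an ENUM property would be filled in (the value below is a placeholder, not a statement about any real model), one of the choices listed in the cell above is passed to DOC.set_value, e.g.:
# Illustration only -- uncomment and pick the choice that applies to the model being documented
# DOC.set_value("prescribed")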
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
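For BOOLEAN properties like this one, the value is passed to DOC.set_value without quotes; again, the value below is only a formatting illustration, not information about any real model:
# Illustration only -- uncomment and set the true answer for the model being documented
# DOC.set_value(False)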
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaptation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
14,637 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
Download file from <url>
:param url: URL to file
:param file: Local file path
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
# TODO: Implement Min-Max scaling for grayscale image data
x_prime = map(lambda x: 0.1 + ((x*0.8)/(255)), image_data)
return np.array(list(x_prime))
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
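As a quick worked example of the formula with a=0.1, b=0.9, X_min=0 and X_max=255: a mid-gray pixel X=128 maps to 0.1 + (128 * 0.8)/255 ≈ 0.502. The same spot-check in code, using the normalize_grayscale() defined above:
# Single-value sanity check of the Min-Max scaling
print(normalize_grayscale(np.array([128.0])))  # ~[0.5016]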
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32, [None, features_count])
labels = tf.placeholder(tf.float32, [None, labels_count])
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.05
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best acccuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
14,638 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this little demo we'll have a look at using the HoloViews DataFrame support and Bokeh backend to explore some real world data. This demo first appeared on Philipp Rudiger's blog, but this official example will be kept up to date.
Loading data
First we extract shape coordinates for the continents and countries from matplotlib's basemap toolkit and put them inside a Polygons and Contours Element respectively.
Step1: Additionally we can load an satellite image of earth. Unfortunately embedding large images in the notebook using bokeh quickly balloons the size of the notebook so we'll downsample by a factor of 5x here
Step2: Finally we download a few months worth of earthquake data from the US Geological survey (USGS), which provides a convenient web API and read it into a pandas DataFrame. For a full reference of the USGS API look here.
Step3: Let's have a look at what this data looks like
Step4: And get a summary overview of the data
Step5: That's almost 9,000 data points, which should be no problem to load and render in memory. In a future blog post we'll look at loading and dynamically displaying several years worth of data using dask out-of-memory DataFrames.
Styling our plots
Next we define some style options, in particular we map the size and color of our points to the magnitude.
Step6: Explore the data
We'll overlay the earthquake data on top of the 'Blue Marble' image we loaded previous, we'll also enable the hover tool so we can access some more information on each point
Step7: Earthquakes by day
Using groupby we can split our DataFrame up by day and using datetime we can generate date strings which we'll use as keys in a HoloMap, allowing us to visualize earthquakes for each day.
Step8: If you're trying this notebook out in a live notebook you can set
Step9: Using some pandas magic we can also resample the data and smooth it a little bit to see the frequency of earthquakes over time.
Step10: Update
Step11: Linking plots in this way is a very powerful way to explore high-dimensional data. Here we'll add an Overlay split into tabs plotting the magnitude, RMS and depth value against each other. By linking that with the familiar map, we can easily explore how the geographical location relates to these other values. | Python Code:
basemap = Basemap()
kdims = ['Longitude', 'Latitude']
continents = hv.Polygons([poly.get_coords() for poly in basemap.landpolygons],
group='Continents', kdims=kdims)
countries = hv.Contours([np.array(country) for path in basemap._readboundarydata('countries')
for country in path if not isinstance(country, int)],
group='Countries', kdims=kdims)
Explanation: In this little demo we'll have a look at using the HoloViews DataFrame support and Bokeh backend to explore some real world data. This demo first appeared on Philipp Rudiger's blog, but this official example will be kept up to date.
Loading data
First we extract shape coordinates for the continents and countries from matplotlib's basemap toolkit and put them inside a Polygons and Contours Element respectively.
End of explanation
img = basemap.bluemarble()
blue_marble = hv.RGB(np.flipud(img.get_array()[::5, ::5]),
bounds=(-180, -90, 180, 90), kdims=kdims)
Explanation: Additionally we can load a satellite image of earth. Unfortunately, embedding large images in the notebook using bokeh quickly balloons the size of the notebook, so we'll downsample by a factor of 5x here:
End of explanation
# Generate a valid query to the USGS API and let pandas handle the loading and parsing of dates
query = dict(starttime="2014-12-01", endtime="2014-12-31")
query_string = '&'.join('{0}={1}'.format(k, v) for k, v in query.items())
query_url = "http://earthquake.usgs.gov/fdsnws/event/1/query.csv?" + query_string
df = pd.read_csv(BytesIO(urlopen(query_url).read()),
parse_dates=['time'], index_col='time',
infer_datetime_format=True)
df['Date'] = [str(t)[:19] for t in df.index]
# Pass the earthquake dataframe into the HoloViews Element
earthquakes = hv.Points(df, kdims=['longitude', 'latitude'],
vdims=['place', 'Date', 'depth', 'mag', 'rms'],
group='Earthquakes')
Explanation: Finally we download a few months worth of earthquake data from the US Geological survey (USGS), which provides a convenient web API and read it into a pandas DataFrame. For a full reference of the USGS API look here.
End of explanation
df.head(2)
Explanation: Let's have a look at what this data looks like:
End of explanation
df.describe()
Explanation: And get a summary overview of the data:
End of explanation
%output size=150
%opts Overlay [width=800]
%opts Points.Earthquakes [color_index=5 size_index=5 scaling_factor=1.5] (cmap='hot_r' size=1)
%opts Polygons.Continents (color='k')
%opts Contours.Countries (color='white')
Explanation: That's almost 9,000 data points, which should be no problem to load and render in memory. In a future blog post we'll look at loading and dynamically displaying several years worth of data using dask out-of-memory DataFrames.
Styling our plots
Next we define some style options, in particular we map the size and color of our points to the magnitude.
End of explanation
%%opts Points.Earthquakes [tools=['hover']]
blue_marble * earthquakes
Explanation: Explore the data
We'll overlay the earthquake data on top of the 'Blue Marble' image we loaded previously, and we'll also enable the hover tool so we can access more information on each point:
End of explanation
daily_df = df.groupby([df.index.year, df.index.month, df.index.day])
daily_earthquakes = hv.HoloMap(kdims=['Date'])
for date, data in daily_df:
date = str(dt.date(*date))
daily_earthquakes[date] = (continents * countries *
hv.Points(data, kdims=['longitude', 'latitude'],
vdims=['mag'], group='Earthquakes'))
Explanation: Earthquakes by day
Using groupby we can split our DataFrame up by day and using datetime we can generate date strings which we'll use as keys in a HoloMap, allowing us to visualize earthquakes for each day.
End of explanation
%%output holomap='scrubber'
%%opts Overlay [width=800] Points.Earthquakes [color_index=2 size_index=2]
daily_earthquakes[::3]
Explanation: If you're trying this notebook out in a live notebook you can set:
python
%output widgets='live'
here to update the data dynamically. Since we're embedding this data here we'll only display every third date.
End of explanation
%%opts Curve [width=600] Spikes [spike_length=4] (line_width=0.1)
df['count'] = 1
hourly_counts = pd.rolling_mean(df.resample('3H', how='count'), 5).reset_index()
hv.Curve(hourly_counts, kdims=['time'], vdims=['count']) *\
hv.Spikes(df.reset_index(), kdims=['time'], vdims=[])
Explanation: Using some pandas magic we can also resample the data and smooth it a little bit to see the frequency of earthquakes over time.
End of explanation
%%opts Points.Earthquakes [tools=['lasso_select']] Overlay [width=800 height=400] Table [width=800]
(blue_marble * earthquakes + hv.Table(earthquakes.data, kdims=['Date', 'latitude', 'longitude'], vdims=['depth', 'mag'])).cols(1)
Explanation: Update: Linked data and widgets
Another feature I've been playing with is automatic sharing of the data across plots, which automatically allows linked brushing and selecting. Here's a first quick demo of what this can look like. The only thing we need to do when adding a linked Element such as a Table is to ensure it draws from the same DataFrame as the other Elements we want to link it with. Using the 'lasso_select' tool we can select only a subregion of points and watch our selection get highlighted in the Table. In reverse we can also highlight rows in the Table and watch our selection appear in the plot, even editing is allowed.
End of explanation
%%opts Points [height=250 width=400 tools=['lasso_select', 'box_select']] (unselected_color='indianred')
%%opts Overlay [width=500 height=300] Overlay.Combinations [tabs=True]
from itertools import combinations
dim_combos = combinations(['mag', 'depth', 'rms'], 2)
(blue_marble * earthquakes +
hv.Overlay([hv.Points(earthquakes.data, kdims=[c1, c2], group='%s_%s' % (c1, c2))
for c1, c2 in dim_combos], group='Combinations')).cols(2)
Explanation: Linking plots in this way is a very powerful way to explore high-dimensional data. Here we'll add an Overlay split into tabs plotting the magnitude, RMS and depth value against each other. By linking that with the familiar map, we can easily explore how the geographical location relates to these other values.
End of explanation |
14,639 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Imports and configuration
Note
Step1: Specify the path to your JCMsuite installation directory here. You can skip this later if you have a configuration file.
Step2: Prepare
Creating a JCMProject
Step3: Set the keys that are necessary to translate the JCM template files.
Step4: Initialize a SimulationSet. We ignore the configured storage here and save everything in the current working dir.
Step5: Make a schedule
Step6: Define a processing function
Step7: Solve
Is your machine multicore? Than uncomment this statement and allow yourself some more computation power.
Step8: Run your simulations. The results will be appended to the HDF5 store.
Step9: Plot
Step10: Write the data to CSV. | Python Code:
import sys
sys.path.append('..')
import os
import numpy as np
Explanation: Imports and configuration
Note: It is assumed that impatient people do not have a configuration file yet. You can learn on configuration files in the Setting up a configuration file notebook.
Since the parent directory, which contains the pypmj module, is not automatically in our path, we need to append it before.
End of explanation
import pypmj as jpy
jpy.import_jcmwave('/path/to/your/JCMsuite/installation/directory')
Explanation: Specify the path to your JCMsuite installation directory here. You can skip this later if you have a configuration file.
End of explanation
project = jpy.JCMProject('../projects/scattering/mie/mie2D')
Explanation: Prepare
Creating a JCMProject: It is assumed that you run this notebook in a subfolder of the folder structure shipped with pypmj. If this raises an Exception, make sure that the path to the mie2D project is correct.
End of explanation
mie_keys = {'constants' :{}, # <-- can be anything, but is not looped over and not stored in the HDF5 store
'parameters': {}, # <-- everything that needs to be stored, but is not in layout.jcmt
'geometry': {'radius':np.linspace(0.3, 0.5, 40)}} # <-- same as before, but layout.jcmt-relevant
Explanation: Set the keys that are necessary to translate the JCM template files.
End of explanation
simuset = jpy.SimulationSet(project, mie_keys)
Explanation: Initialize a SimulationSet. We ignore the configured storage here and save everything in the current working dir.
End of explanation
simuset.make_simulation_schedule()
Explanation: Make a schedule: this checks which simulations need to be run and sorts them depending on their geometry.
End of explanation
def read_scs(pp):
results = {} #must be a dict
results['SCS'] = pp[0]['ElectromagneticFieldEnergyFlux'][0][0].real
return results
Explanation: Define a processing function: all post process results are passed to this function automatically. It must return a dict with your results.
End of explanation
# simuset.resource_manager.resources['localhost'].set_m_n(4,2)
Explanation: Solve
Is your machine multicore? Than uncomment this statement and allow yourself some more computation power.
End of explanation
simuset.run(processing_func=read_scs)
Explanation: Run your simulations. The results will be appended to the HDF5 store.
End of explanation
%matplotlib inline
data = simuset.get_store_data().sort_values(by='radius')
data.plot(x='radius', y='SCS', title='Results of the simulation')
Explanation: Plot
End of explanation
simuset.write_store_data_to_file() # default is results.csv in the storage folder
Explanation: Write the data to CSV.
End of explanation |
14,640 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting first and last date of tweets for each twitter user
The purpose of this notebook is to extract unique user id, screen name, date user created, date of first tweet in dataset, date of last tweet from a tweets collection (JSON) as a result table shown in Step 3 below.
It was originally written for Program on Etremism data's request, but can be used for any collection by replacing the input file to users' own tweets collection file.
1) Setting Input file(JSON) and Output file(CSV)
Step1: 2) Extracting "UserID, screen name, date created" from the input data
Step2: 3) Getting First_tweet_date and Last_tweet_date for each user
Step3: 4) Export the results to a csv file | Python Code:
# For users: Change the filenames as you like.
INPUTFILE = "POE_json2.json"
OUTPUTFILE = "results.csv"
Explanation: Getting first and last date of tweets for each twitter user
The purpose of this notebook is to extract unique user id, screen name, date user created, date of first tweet in dataset, date of last tweet from a tweets collection (JSON) as a result table shown in Step 3 below.
It was originally written for Program on Etremism data's request, but can be used for any collection by replacing the input file to users' own tweets collection file.
1) Setting Input file(JSON) and Output file(CSV)
End of explanation
# header
!echo "[]" | jq -r '["tweet_created_at","userID", "screen_name", "user_created_at"] | @csv' > "csvdata.csv"
!cat $INPUTFILE | jq -r '[(.created_at | strptime("%A %B %d %T %z %Y") | todate), .user.id_str, .user.screen_name, (.user.created_at | strptime("%A %B %d %T %z %Y") | todate)] | @csv' >> "csvdata.csv"
!head -5 "csvdata.csv"
Explanation: 2) Extracting "UserID, screen name, date created" from the input data
End of explanation
import pandas as pd
data = pd.read_csv("csvdata.csv", encoding = 'ISO-8859-1')
data2 = data.groupby(['userID', 'screen_name', 'user_created_at']).tweet_created_at.agg(['min', 'max'])
data3 = data2.reset_index()
data3.rename(columns={'min': 'first_tweet_date', 'max': 'last_tweet_date'}, inplace=True)
data3.head(5)
# the number of unique users
len(data3)
Explanation: 3) Getting First_tweet_date and Last_tweet_date for each user
End of explanation
# Export the results to a csv file whose filename is OUTPUTFILE set by user in the beginning of thie notebook.
data3.to_csv(OUTPUTFILE, index=False)
Explanation: 4) Export the results to a csv file
End of explanation |
14,641 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: TFP 確率的レイヤー:変分オートエンコーダー
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 迅速に作成
はじめる前に、このデモで GPU を使用していることを確認します。
[ランタイム] -> [ランタイムタイプの変更] -> [ハードウェアアクセラレータ] -> [GPU] を選択します。
次のスニペットは、GPU にアクセスできることを確認します。
Step3: Note
Step4: Note that preprocess() above returns image, image rather than just image, because Keras is set up for (example, label) inputs, i.e. discriminative models of the form $p_\theta(y|x)$. Since the goal of the VAE is to recover the input x from x itself (i.e. $p_\theta(x|x)$), the data pairs are (example, example).
VAE Code Golf
Specify the model.
Step5: Do inference.
Step6: Generate ten digits | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
#@title Import { display-mode: "form" }
import numpy as np
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
tfk = tf.keras
tfkl = tf.keras.layers
tfpl = tfp.layers
tfd = tfp.distributions
Explanation: TFP Probabilistic Layers: Variational Auto-Encoder
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/probability/examples/Probabilistic_Layers_VAE"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Probabilistic_Layers_VAE.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Probabilistic_Layers_VAE.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/probability/examples/Probabilistic_Layers_VAE.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
</table>
In this example we show how to fit a Variational Autoencoder using TFP's "probabilistic layers".
Dependencies & Prerequisites
End of explanation
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
Explanation: Make things fast
Before we dive in, let's make sure we're using a GPU for this demo.
To do so, select [Runtime] -> [Change runtime type] -> [Hardware accelerator] -> [GPU].
The following snippet verifies that we have access to a GPU.
End of explanation
datasets, datasets_info = tfds.load(name='mnist',
with_info=True,
as_supervised=False)
def _preprocess(sample):
image = tf.cast(sample['image'], tf.float32) / 255. # Scale to unit interval.
image = image < tf.random.uniform(tf.shape(image)) # Randomly binarize.
return image, image
train_dataset = (datasets['train']
.map(_preprocess)
.batch(256)
.prefetch(tf.data.AUTOTUNE)
.shuffle(int(10e3)))
eval_dataset = (datasets['test']
.map(_preprocess)
.batch(256)
.prefetch(tf.data.AUTOTUNE))
Explanation: Note: if for some reason you cannot access a GPU, this colab will still work (training will just take longer).
Load the dataset
End of explanation
input_shape = datasets_info.features['image'].shape
encoded_size = 16
base_depth = 32
prior = tfd.Independent(tfd.Normal(loc=tf.zeros(encoded_size), scale=1),
reinterpreted_batch_ndims=1)
encoder = tfk.Sequential([
tfkl.InputLayer(input_shape=input_shape),
tfkl.Lambda(lambda x: tf.cast(x, tf.float32) - 0.5),
tfkl.Conv2D(base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(2 * base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(2 * base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(4 * encoded_size, 7, strides=1,
padding='valid', activation=tf.nn.leaky_relu),
tfkl.Flatten(),
tfkl.Dense(tfpl.MultivariateNormalTriL.params_size(encoded_size),
activation=None),
tfpl.MultivariateNormalTriL(
encoded_size,
activity_regularizer=tfpl.KLDivergenceRegularizer(prior)),
])
decoder = tfk.Sequential([
tfkl.InputLayer(input_shape=[encoded_size]),
tfkl.Reshape([1, 1, encoded_size]),
tfkl.Conv2DTranspose(2 * base_depth, 7, strides=1,
padding='valid', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(2 * base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(2 * base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(filters=1, kernel_size=5, strides=1,
padding='same', activation=None),
tfkl.Flatten(),
tfpl.IndependentBernoulli(input_shape, tfd.Bernoulli.logits),
])
vae = tfk.Model(inputs=encoder.inputs,
outputs=decoder(encoder.outputs[0]))
Explanation: Note that preprocess() above returns image, image rather than just image, because Keras is set up for (example, label) inputs, i.e. discriminative models of the form $p_\theta(y|x)$. Since the goal of the VAE is to recover the input x from x itself (i.e. $p_\theta(x|x)$), the data pairs are (example, example).
VAE Code Golf
Specify the model.
End of explanation
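# Added note (a sketch of the reasoning, not in the original notebook): the
# KLDivergenceRegularizer attached to the encoder's final layer adds the
# KL(q(z|x) || prior) term to the model's losses, so the negative log-likelihood
# defined below is the only loss we need to write down to train on the full ELBO.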
negloglik = lambda x, rv_x: -rv_x.log_prob(x)
vae.compile(optimizer=tf.optimizers.Adam(learning_rate=1e-3),
loss=negloglik)
_ = vae.fit(train_dataset,
epochs=15,
validation_data=eval_dataset)
Explanation: Do inference.
End of explanation
# We'll just examine ten random digits.
x = next(iter(eval_dataset))[0][:10]
xhat = vae(x)
assert isinstance(xhat, tfd.Distribution)
#@title Image Plot Util
import matplotlib.pyplot as plt
def display_imgs(x, y=None):
if not isinstance(x, (np.ndarray, np.generic)):
x = np.array(x)
plt.ioff()
n = x.shape[0]
fig, axs = plt.subplots(1, n, figsize=(n, 1))
if y is not None:
fig.suptitle(np.argmax(y, axis=1))
for i in range(n):
axs.flat[i].imshow(x[i].squeeze(), interpolation='none', cmap='gray')
axs.flat[i].axis('off')
plt.show()
plt.close()
plt.ion()
print('Originals:')
display_imgs(x)
print('Decoded Random Samples:')
display_imgs(xhat.sample())
print('Decoded Modes:')
display_imgs(xhat.mode())
print('Decoded Means:')
display_imgs(xhat.mean())
# Now, let's generate ten never-before-seen digits.
z = prior.sample(10)
xtilde = decoder(z)
assert isinstance(xtilde, tfd.Distribution)
print('Randomly Generated Samples:')
display_imgs(xtilde.sample())
print('Randomly Generated Modes:')
display_imgs(xtilde.mode())
print('Randomly Generated Means:')
display_imgs(xtilde.mean())
Explanation: Generate ten digits
End of explanation |
14,642 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
word_counts = Counter(text) # creates a 'dictionary' word:count
sorted_vocab = sorted(word_counts,key=word_counts.get, reverse=True)
int_to_vocab = {ii: word for ii,word in enumerate(sorted_vocab)}
vocab_to_int = {word:ii for ii,word in enumerate(sorted_vocab)}
return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
Tokenize=dict()
Tokenize['.']='<PERIOD>'
Tokenize[','] = '<COMMA>'
Tokenize['"'] = '<QUOTATION_MARK>'
Tokenize[';'] = '<SEMICOLON>'
Tokenize['!'] = '<EXCLAMATION_MARK>'
Tokenize['?'] = '<QUESTION_MARK>'
Tokenize['('] = '<LEFT_PAREN>'
Tokenize[')'] = '<RIGHT_PAREN>'
Tokenize['--'] = '<DASH>'
Tokenize['\n'] = '<RETURN>'
return Tokenize
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
inputs = tf.placeholder(dtype=tf.int32, shape=(None,None), name='input')
targets = tf.placeholder(dtype=tf.int32, shape=(None,None), name='targets')
learning_rate = tf.placeholder(dtype=tf.float32, name='learning_rate')
return (inputs,targets,learning_rate)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
num_layers = 3
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# todo: consider dropout
# keep_prob = 0.5
# drop = tf.contrib.rnn.DropoutWrapper(lstm,output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([lstm]*num_layers) # if dropout applied then replace 'lstm' with 'drop'
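# Added note: depending on the TF 1.x version, reusing the same BasicLSTMCell object for
# every layer can trigger a variable-reuse error; building a fresh cell per layer, e.g.
# MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)]),
# is the safer pattern.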
initial_state=tf.identity(cell.zero_state(batch_size,tf.float32),name='initial_state')
return (cell,initial_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable(tf.random_uniform((vocab_size,embed_dim),-1,1))
embed = tf.nn.embedding_lookup(embedding,input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)  # dtype must be a float type: dynamic_rnn's zero state does not support int32
final_state = tf.identity(state,name='final_state')
return (outputs,final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embed_words = get_embed(input_data, vocab_size, embed_dim) # shape : [None, None, 300]
rnn_outputs, final_state = build_rnn(cell, embed_words) # shape: [None, None, 256]
logits = tf.layers.dense(rnn_outputs,vocab_size)
return (logits,final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
num_batches = len(int_text)//(batch_size*seq_length)
trimmed_text = int_text[:num_batches*(batch_size*seq_length)]
inputs = trimmed_text
targets = trimmed_text[1:]+[trimmed_text[0]]
# the below code was inspired (copied with modification) from KaRNNa exercise - get_batches)
inputs = np.reshape(inputs,(batch_size,-1)) # now inputs.shape=[batch_size,num_batches*seq_length]
targets = np.reshape(targets, (batch_size, -1))
Batches=np.zeros((num_batches,2,batch_size,seq_length))
for b,n in enumerate(range(0,inputs.shape[1],seq_length)):
inp=np.expand_dims(inputs[:,n:n+seq_length],0)
tar=np.expand_dims(targets[:,n:n+seq_length],0)
Batches[b]=np.vstack((inp,tar))
return Batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 23 # s.t. 100 batches are 1 epoch
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 200
# Sequence Length
seq_length = 30
# Learning Rate
learning_rate = 0.005
# Show stats for every n number of batches
show_every_n_batches = 50
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
InputTensor = loaded_graph.get_tensor_by_name("input:0")
InitialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
FinalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
ProbsTensor = loaded_graph.get_tensor_by_name("probs:0")
return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
sampled_int=np.random.choice(len(int_to_vocab),1,p=probabilities)[0]
return int_to_vocab[sampled_int]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
14,643 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EDA with Teton avalanche observations and hazard forecasts
I've already preprocessed the avalanche events and forecasts, so we'll just load them into dataframes here
Step1: Since we dont't have hazard forecasts for non-Jackson regions, let's filter events in those areas out. I infer the
list of zones from the BTAC obs page.
Step2: Let's take a look at the event data. The first 10 columns contain data on the time, location, and characteristics of the slide path
Step3: These fields are largely self explanatory; elevation is reported in feet above mean sea level.
After that, we get details on the size, type, and trigger for the avalanche, as well as the number of people killed
Step4: A guide to avalanche terminology and codes is available here. Destructive size and relative size describe the magnitude of the slide, depth is the initial thickness in inches of the snow slab that slides. The trigger describes the source of the disturbance causing the slide, while type describes the nature of the slab.
Finally, we get notes and info on the reporting individual, if available
Step5: Let's look at a quick count of the events in our database by year
Step6: What's going on here? It turns out initially, the only events in our data are serious accidents. In the winter of 2009-2010, the ski patrol at Jackson Hole began reporting the results of their avalanche control work, and a broader range of non-accident reports from skiers and snowmobilers are included. The raw observation counts are thus tricky to compare from year to year.
Step7: So in our 16 year record, we have 25 avalanche fatalities in the jackson hole region. With such a small sample, hard to see any particular pattern.
Let's start examining these events in the context of the avalanche forecasts. First, the data (last 5 days shown)
Step8: Each column is for a specific terrain elevation (atl is "above treeline", tl is "treeline", btl is "below treeline"), representing the forecast hazard level from 1-5 (low to extreme). The equivalent elevation bands are 6000-7500ft, 7500-9000ft, and 9000-10500ft. The daily bulletin gives an expected hazard for the morning and afternoon - in late winter and spring, the longer days and warming temperatures can create significant intrady increases in hazard. This is what the hazard looked like over the course of the 2008-2009 season
Step9: Peak hazard occurred in late December to early January. The increased intraday variation in hazard is visible from March forward, with higher hazard in the afternoons.
What forecast conditions have the highest number of fatalities? We don't have a very impressive sample with just one region of a single state, but we can at least see how to approach it. We want to extract the appropriate forecast hazard for the date and elevation where fatalities occurred.
First, we make a function to categorize elevations
Step10: Next we augment the event frame with the elevation category in which the slide occurred
Step11: We next average the morning and afternoon hazard levels, then stack and reindex the hazard frame in preparation for a left outer join with the event frame
Step12: Finally, we merge these frames, then restrict the analysis to fatal accidents. While the sample size is small, we recover the frequently noted result that more fatal accidents occur during "Considerable" forecast hazard than "High" or "Extreme". This is both the result of the underlying hazard frequency (there are more "Considerable" days than "High" or "Extreme" days) and psychology (fewer people choose to recreate in avalanche terrain when the forecast danger is above "Considerable").
Step13: The stacked histogram of forecast avalanche danger by elevation category has a lognormal character.
Step14: The raw count of avalanches by hazard rating is given below
Step15: and here, raw counts of forecasts per hazard rating
Step16: While this is "quick and dirty", the forecast frequence weighted avalanche occurrence suggests there are about three events per fcst at "high" and "extreme" hazard, about one per forecast at "considerable", less than 1/3 per forecast at "moderate", and 1/40 per forecast at "low".
A quick addendum, adding thresholds for destructive size
Step17: Weather
Having examined avalanche forecasts and observations, the next challenge is to address the driving variable
Step18: Let's look at air temp during the winter of 2008-2009
Step19: That looks pretty good. If we look at the whole series though, there are some issues
Step20: At this resolution, we basically see the annual cycle for the period when the temperature sensor is reporting, but there are some obvious issues. Let's look at the period of 2011-2014 to explore this | Python Code:
events_df = pd.read_csv('btac_events.csv.gz', compression='gzip',
index_col=[0], parse_dates = [2])
hzrd_df = pd.read_csv('btac_nowcast_teton.csv.gz', compression='gzip',
index_col=[0], parse_dates=[0])
Explanation: EDA with Teton avalanche observations and hazard forecasts
I've already preprocessed the avalanche events and forecasts, so we'll just load them into dataframes here:
End of explanation
zones = ['101','102','103','104','105','106','JHMR','GT','SK']
df1 = events_df[events_df['zone'].isin(zones)]
Explanation: Since we don't have hazard forecasts for non-Jackson regions, let's filter events in those areas out. I infer the
list of zones from the BTAC obs page.
End of explanation
df1[df1.columns[0:10]].head(10)
Explanation: Let's take a look at the event data. The first 10 columns contain data on the time, location, and characteristics of the slide path:
End of explanation
df1[df1.columns[10:16]].head(10)
Explanation: These fields are largely self explanatory; elevation is reported in feet above mean sea level.
After that, we get details on the size, type, and trigger for the avalanche, as well as the number of people killed:
End of explanation
df1[df1.columns[16:]].head(10)
Explanation: A guide to avalanche terminology and codes is available here. Destructive size and relative size describe the magnitude of the slide, depth is the initial thickness in inches of the snow slab that slides. The trigger describes the source of the disturbance causing the slide, while type describes the nature of the slab.
Finally, we get notes and info on the reporting individual, if available:
End of explanation
per = df1['event_date'].dt.to_period("M");
g = df1.groupby(per);
s1 = g['ID'].count();
fig, ax = plt.subplots(1,1,figsize=(16,6));
s1 = s1.resample('M').sum();
s1.plot(kind='bar', ax=ax, title='Avalanche event counts', rot=65);
ticks = ax.xaxis.get_ticklocs();
ticklabels = [l.get_text() for l in ax.xaxis.get_ticklabels()];
ax.xaxis.set_ticks(ticks[::3]);
ax.xaxis.set_ticklabels(ticklabels[::3]);
ax.set_xlabel('date');
ax.set_ylabel('count');
Explanation: Let's look at a quick count of the events in our database by year:
End of explanation
per = df1['event_date'].dt.to_period("M");
g = df1.groupby(per);
s2 = g['fatality'].sum().astype(int);
fig, ax = plt.subplots(1,1,figsize=(16,6));
s2 = s2.resample('M').sum();
s2.plot(kind='bar', ax=ax, title='Monthly fatal avalanche events', rot=65);
ticks = ax.xaxis.get_ticklocs();
ticklabels = [l.get_text() for l in ax.xaxis.get_ticklabels()];
ax.xaxis.set_ticks(ticks[::3]);
ax.xaxis.set_ticklabels(ticklabels[::3]);
ax.set_xlabel('date');
ax.set_ylabel('count');
print("Total fatalities:",s2.sum())
Explanation: What's going on here? It turns out initially, the only events in our data are serious accidents. In the winter of 2009-2010, the ski patrol at Jackson Hole began reporting the results of their avalanche control work, and a broader range of non-accident reports from skiers and snowmobilers are included. The raw observation counts are thus tricky to compare from year to year.
End of explanation
hzrd_df.tail(10)
Explanation: So in our 16 year record, we have 25 avalanche fatalities in the jackson hole region. With such a small sample, hard to see any particular pattern.
Let's start examining these events in the context of the avalanche forecasts. First, the data (last 5 days shown):
End of explanation
s = ('2008-10-31','2009-5-31')
#s = ('2013-10-31','2014-5-31')
#s =('2016-10-31','2017-5-31')
ax = hzrd_df.loc[s[0]:s[1],['atl','tl','btl']].plot(figsize=(16,6),rot=45);
ax.set_ylim([0,5]);
ax.set_ylabel('Avalanche Danger');
Explanation: Each column is for a specific terrain elevation (atl is "above treeline", tl is "treeline", btl is "below treeline"), representing the forecast hazard level from 1-5 (low to extreme). The equivalent elevation bands are 6000-7500ft, 7500-9000ft, and 9000-10500ft. The daily bulletin gives an expected hazard for the morning and afternoon - in late winter and spring, the longer days and warming temperatures can create significant intrady increases in hazard. This is what the hazard looked like over the course of the 2008-2009 season:
End of explanation
def elevation_category(elevation):
if (6000. < elevation <= 7500.):
return 'btl'
elif (7500. < elevation <= 9000.):
return 'tl'
elif (9000. < elevation <= 10500.):
return 'atl'
else:
return None
Explanation: Peak hazard occurred in late December to early January. The increased intraday variation in hazard is visible from March forward, with higher hazard in the afternoons.
What forecast conditions have the highest number of fatalities? We don't have a very impressive sample with just one region of a single state, but we can at least see how to approach it. We want to extract the appropriate forecast hazard for the date and elevation where fatalities occurred.
First, we make a function to categorize elevations:
End of explanation
df1.is_copy=False
df1['el_cat'] = df1['elevation'].apply(lambda x: elevation_category(x))
Explanation: Next we augment the event frame with the elevation category in which the slide occurred:
End of explanation
df2 = hzrd_df[['atl','tl','btl']].resample('D').mean().stack()
df2 = df2.reset_index()
df2.columns = ['event_date','el_cat','hazard']
df2.head()
Explanation: We next average the morning and afternoon hazard levels, then stack and reindex the hazard frame in preparation for a left outer join with the event frame:
End of explanation
df3 = pd.merge(df1, df2, how='left', left_on=['event_date','el_cat'], right_on=['event_date','el_cat'])
df4 = df3[df3['fatality']>0]
df4['hazard'].plot(kind='hist', title='Fatalities by avalanche forecast hazard',
xlim=[0,5], bins=20, figsize=(6,6));
Explanation: Finally, we merge these frames, then restrict the analysis to fatal accidents. While the sample size is small, we recover the frequently noted result that more fatal accidents occur during "Considerable" forecast hazard than "High" or "Extreme". This is both the result of the underlying hazard frequency (there are more "Considerable" days than "High" or "Extreme" days) and psychology (fewer people choose to recreate in avalanche terrain when the forecast danger is above "Considerable").
End of explanation
hzrd_df[['atl','tl','btl']].plot(kind='hist', stacked=True,
xlim=[0,5], bins=20, figsize=(6,6));
Explanation: The stacked histogram of forecast avalanche danger by elevation category has a lognormal character.
End of explanation
g = df3.groupby(by='hazard');
g['ID'].count()
Explanation: The raw count of avalanches by hazard rating is given below:
End of explanation
atl1, b = np.histogram(hzrd_df['atl'], bins=20);
tl1, _ = np.histogram(hzrd_df['tl'], bins=20);
btl1, _ = np.histogram(hzrd_df['btl'], bins=20);
atl1 + tl1 + btl1
b
Explanation: and here, raw counts of forecasts per hazard rating
End of explanation
g = df3[(1 < df3['hazard']) & (df3['hazard'] <= 2)].groupby(by='destructive_size');
sz_2 = g['ID'].count()
g = df3[(2 < df3['hazard']) & (df3['hazard'] <= 3)].groupby(by='destructive_size');
sz_3 = g['ID'].count()
g = df3[(3 < df3['hazard']) & (df3['hazard'] <= 4)].groupby(by='destructive_size');
sz_4 = g['ID'].count()
g = df3[(4 < df3['hazard']) & (df3['hazard'] <= 5)].groupby(by='destructive_size');
sz_5 = g['ID'].count()
print(sz_2,sz_3,sz_4,sz_5)
Explanation: While this is "quick and dirty", the forecast frequency-weighted avalanche occurrence suggests there are about three events per forecast at "high" and "extreme" hazard, about one per forecast at "considerable", less than 1/3 per forecast at "moderate", and 1/40 per forecast at "low".
A quick addendum, adding thresholds for destructive size:
End of explanation
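# Added sketch (not part of the original analysis): make the "events per forecast"
# statement explicit by binning both the avalanche events and the twice-daily hazard
# forecasts on the same 1-5 scale and taking the ratio. df3 and hzrd_df are defined above.
hazard_bins = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
event_counts, _ = np.histogram(df3['hazard'].dropna(), bins=hazard_bins)
fcst_counts, _ = np.histogram(np.concatenate([hzrd_df['atl'], hzrd_df['tl'], hzrd_df['btl']]),
                              bins=hazard_bins)
print(event_counts / np.maximum(fcst_counts, 1))  # guard against empty hazard bins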
df = select_stn('./wxdata',{'k_nrst': 10, 'lat_lon': (43.572236,-110.8496103)}, return_df=True)
df.head(10)
wx_df = process_stn('./wxdata', 'JHR');
Explanation: Weather
Having examined avalanche forecasts and observations, the next challenge is to address the driving variable: weather. We'll take a quick look at some station data from Jackson Hole.
First, let's look at the data sources available. I've coded a handy utility to help pick out useful stations. Let's find the ten nearest Jackson Hole, with (lat, lon) = (43.572236,-110.8496103):
End of explanation
wx_df.loc['2008-10-31':'2009-05-31','air_temp_set_1'].plot(figsize=(16,6));
Explanation: Let's look at air temp during the winter of 2008-2009:
End of explanation
wx_df['air_temp_set_1'].plot(figsize=(16,6));
Explanation: That looks pretty good. If we look at the whole series though, there are some issues:
End of explanation
wx_df.loc['2011-10-31':'2014-05-31','air_temp_set_1'].plot(figsize=(16,6));
Explanation: At this resolution, we basically see the annual cycle for the period when the temperature sensor is reporting, but there are some obvious issues. Let's look at the period of 2011-2014 to explore this:
End of explanation |
14,644 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculate $f_{sat}$ for the Entire MultiDark Volume
Step1: Note that changing the PBC condition enforce_PBC option does not change the $f_{sat}$ value.
Calculate $f_{sat}$ for MultiDark Subvolumes | Python Code:
# initialize hod model
model = PrebuiltHodModelFactory('zheng07', threshold=-21)
halocat = CachedHaloCatalog(simname='multidark', redshift=0, halo_finder='rockstar')
model.populate_mock(halocat, enforce_PBC=False)
N_sat = len(np.where(model.mock.galaxy_table['gal_type'] == 'satellites')[0])
N_gal = len(model.mock.galaxy_table['gal_type'])
print 'f_sat = ', np.float(N_sat)/np.float(N_gal)
Explanation: Calculate $f_{sat}$ for the Entire MultiDark Volume
End of explanation
sub_model = PrebuiltHodModelFactory('zheng07', threshold=-21)
sub_model.new_haloprop_func_dict = {'sim_subvol': util.mk_id_column}
sub_halocat = CachedHaloCatalog(simname = 'multidark', redshift = 0, halo_finder = 'rockstar')
for rint in range(10):
simsubvol = lambda x: util.mask_func(x, rint)
sub_model.populate_mock(sub_halocat, masking_function=simsubvol, enforce_PBC=False)
sub_N_sat = len(np.where(sub_model.mock.galaxy_table['gal_type'] == 'satellites')[0])
sub_N_gal = len(sub_model.mock.galaxy_table['gal_type'])
print 'f_sat = ', np.float(sub_N_sat)/np.float(sub_N_gal)
Explanation: Note that changing the enforce_PBC (periodic boundary condition) option does not change the $f_{sat}$ value.
Calculate $f_{sat}$ for MultiDark Subvolumes
End of explanation |
14,645 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Residual bias correction of Mean-field GF
This is the full computation of the residual bias correction for our (mean-field) GF solution for the percolation problem (immobile "solute" with vacancy diffusion). It works through all of the matrix averages "analytically" (storing them as polynomials in the concentration of the immobile solute, $c_\text{B}$), and then brings everything together to express the residual bias correction as an analytic function with numerical coefficients for the square lattice.
Step2: Now, we need to expand out our probability factors. Let $x$ be the concentration of solute B; imagine we have $N$ sites possible. Then, if there are $n$ B atoms, the probability factor is
$$P(n;N) = x^n (1-x)^{N-n} = x^n \sum_{j=0}^{N-n} \frac{(N-n)!}{j!(N-n-j)!} (-x)^j
= \sum_{j=0}^{N-n} \frac{(N-n)!}{j!(N-n-j)!} (-1)^j x^{n+j}$$
The factorial term is $N-n$ choose $j$, which is scipy.misc.comb.
We want to construct a probability matrix P[n,c] such that $P(n;N)$ is written as a sum over $x^c$ terms; $c=0\ldots N$.
Step3: Normalization check
Step4: Now, we do some analysis by constructing up to 3 jumps (corresponding to third power of our transition rate matrix $W$). We do this analysis by setting up some bookkeeping
Step5: Some matrices and lists to manage conversion between sites and basis functions.
We also include a matrix that corresponds to "matching" basis functions as a function of endstate $x$. This is used to correct the outer product for "missing" basis functions, for when the missing basis functions map onto identical sites.
Step6: Group operation simplification
For our 8 group operations, corresponding to the point group operations on a square, we're going to make a reduced state list that only contains one symmetry-unique representative. This requires mapping the group operations on Cartesian coordinates into corresponding group operations on our sites, and our basis functions.
Step8: Now, we need symmetrized versions of a lot of our information from above, in order to properly account for all of the symmetrized versions of our basis functions. This includes
Computation of bias function times a basis function
Computation of two basis functions
Inside/inside
Inside/outside
Outside/outside
Outside/outside matching
We can group these in terms of what factor of concentration goes in front.
Step11: Efficient matrix operations
Some jit functions via numba to make operations efficient
Step12: Evaluation of averages
We have a state vector $\chi_i$ = 0 or 1, and for each end position $j$, we'll have the representation of $M_{\chi\chi'}$ as a vector $M_j$, we want the contribution to each basis function $b$.
Let's try some averages; first, without basis functions
Step13: Now, an average involving a single basis function
Step14: Now, let's try a basis / basis vector average
Step18: Output of averages
Some helper functions to make the printing nicer, followed by direct output.
Step19: Write out to HDF5 file
We now store the output in an HDF5 file for later use and analysis.
Step21: Mapping onto vectorStars
We create the simplified symmetry basis functions using vectorStars, to folddown the full representation, and compute proper inverses. We also make our own Monkhorst-Pack mesh that is shifted off of the origin, and symmetrized, for simplicity.
Step22: Now the real conversion begins! We start by mapping all of the bias vectors and local functions onto our vectorBasis.
Step25: Fourier transformation of translationally invariant contributions
Our "far" functions represent the translationally invariant contributions, and this requires Fourier transforms, and Taylor expansions to then be made into local contributions.
Mathematically, we're attempting to compute $\eta_i\cdot M_{ij}\cdot\eta_j$; the issue is that $\eta_i$ does not go to zero in the far-field (it's not local), and $M$ can be written as a local function plus a translationally invariant function $M^0$. Only the latter is problematic. However, as $\eta_i$ comes from a Green function solution (using the Dyson equation), if we multiply by the $w^0$, we produce a local function. Hence, we can rewrite that matrix equation as $(w^0\eta)i\cdot (g^0M^0g^0){ij}\cdot (w^0\eta_j)$. Now, then we "simply" need to evaluate $g^0M^0g^0$, which can be done using Fourier transforms, as it is the product of three translationally invariant functions.
Step26: inverse Fourier transformation
Now we go from the Fourier transformed version to the inverse Fourier transformed version (the final product version).
Step29: Putting it all together
All of the pieces are in place; we can now compute
Step31: Final "analytic" versions
We now produce the analytic (with numerical coefficients) version of our transport coefficients.
Step32: Note | Python Code:
import sys
sys.path.extend(['.','./Vacancy'])
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
%matplotlib inline
import scipy.sparse
import itertools
from numba import jit, njit, prange, guvectorize # faster runtime with update routines
from scipy.misc import comb
# from sympy import *
import onsager.PowerExpansion as PE
import onsager.crystal as crystal
import onsager.crystalStars as stars
import onsager.GFcalc as GFcalc
from tqdm import tnrange, tqdm_notebook
# Turn off or on to run optional testing code in notebook:
# Also turns on / off progress bars
__TESTING__ = False
Explanation: Residual bias correction of Mean-field GF
This is the full computation of the residual bias correction for our (mean-field) GF solution for the percolation problem (immobile "solute" with vacancy diffusion). It works through all of the matrix averages "analytically" (storing them as polynomials in the concentration of the immobile solute, $c_\text{B}$), and then brings everything together to express the residual bias correction as an analytic function with numerical coefficients for the square lattice.
End of explanation
def calc_P(N):
Returns the probability matrix P[n,c] where the probability of seeing `n` atoms
of type B in `N` sites is sum(c=0..N, x^c P[n,c])
:param N: total number of sites
:returns P[n,c]: matrix of probabilities, n=0..N, c=0..N
P = np.zeros((N+1, N+1), dtype=int)
for n in range(N+1):
Nn = N-n
P[n,n:] = comb([Nn]*(Nn+1), [j for j in range(Nn+1)])
for j in range(Nn+1):
P[n,j+n] *= (-1)**j
return P
if __TESTING__:
calc_P(4)
Explanation: Now, we need to expand out our probability factors. Let $x$ be the concentration of solute B; imagine we have $N$ sites possible. Then, if there are $n$ B atoms, the probability factor is
$$P(n;N) = x^n (1-x)^{N-n} = x^n \sum_{j=0}^{N-n} \frac{(N-n)!}{j!(N-n-j)!} (-x)^j
= \sum_{j=0}^{N-n} \frac{(N-n)!}{j!(N-n-j)!} (-1)^j x^{n+j}$$
The factorial term is $N-n$ choose $j$, which is scipy.misc.comb.
We want to construct a probability matrix P[n,c] such that $P(n;N)$ is written as a sum over $x^c$ terms; $c=0\ldots N$.
End of explanation
N = 24
prob = calc_P(N)
states = np.array([(0,) + st for st in itertools.product((0,1), repeat=N)])
nB = np.sum(states, axis=1)
if __TESTING__:
norm = np.zeros(N+1, dtype=int)
for n in tqdm_notebook(nB):
norm += prob[n]
print(norm)
states.shape
Pstates = np.array([prob[n] for n in nB])
Explanation: Normalization check: construct the $2^N$ states, and see if it averages to 1. Each state is a vector of length $N$, with entries that are 0 (A) or 1 (B). Here, we explicitly build our state space, and also do a quick count to determine $n_\text{B}$ for each state. Note: we prepend a value of 0, since this corresponds to the initial location of the vacancy.
New version: we now generate group operations for the square lattice, and take advantage of those to reduce the computational time.
End of explanation
dxlist = [np.array([1,0]), np.array([-1,0]), np.array([0,1]), np.array([0,-1])]
Njump = 3
sites = [np.array([0,0])]
sitedict = {(0,0): 0}
lastsites = sites.copy()
for nj in range(Njump):
newsites = []
for dx in dxlist:
for x in lastsites:
y = x+dx
yt = tuple(y)
if yt not in sitedict:
sitedict[yt] = len(sites)
sites.append(y)
newsites.append(y)
lastsites = newsites
Nsite = len(sites)
Nsite0 = len(sites) - len(lastsites)
sites0 = sites[:Nsite0]
jumplist = []
for x in sites0:
jumplist.append([sitedict[tuple(x+dx)] for dx in dxlist])
if __TESTING__:
print(jumplist)
basisfunc, basisdict = [], {}
for x in sites:
for y in sites:
d = x-y
dt = tuple(d)
if dt not in basisdict:
basisdict[dt] = len(basisfunc)
basisfunc.append(d)
Nbasis = len(basisfunc)
Explanation: Now, we do some analysis by constructing up to 3 jumps (corresponding to third power of our transition rate matrix $W$). We do this analysis by setting up some bookkeeping:
We work with a list of displacement vectors [dx_0, dx_1, dx_2, dx_3]
We construct the list of positions for the vacancy
For each position, we identify the possible jumps (though we only need to do this
for positions that are reachable in 0-2 jumps).
We construct a list of possible basis functions: these are all possible
differences of vacancy positions.
Finally, for each position, we identify which position corresponds to each possible
basis function, as well as a list of all basis functions that are not in the state.
This is all sufficient to construct a sparse version of $W$ (and $\Gamma$) for a given state $\chi$.
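For the square lattice these counts can be checked directly (a quick sketch):
python
print(Nsite)     # 1 + 4 + 8 + 12 = 25 sites reachable in 0-3 jumps
print(Nsite0)    # 13 sites reachable in 0-2 jumps
print(Nbasis)    # number of distinct difference vectors between those sites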
End of explanation
chibasisfound, chibasismiss, chibasismissmatch = [], [], []
chibasisfar = []
for x in sites:
xbdict = {}
xbmiss = []
xbmissmatch = {}
xbfar = {}
for bindex, b in enumerate(basisfunc):
bt = tuple(b)
y = x+b
yt = tuple(y)
if yt in basisdict:
xbfar[bindex] = basisdict[yt]
if yt in sitedict:
xbdict[bindex] = sitedict[yt]
else:
xbmiss.append(bindex)
if bt not in sitedict and yt in basisdict:
xbmissmatch[bindex] = basisdict[yt]
chibasisfound.append(xbdict)
chibasismiss.append(xbmiss)
chibasismissmatch.append(xbmissmatch)
chibasisfar.append(xbfar)
# make a set of "outside" and "inside" basis functions:
basisout = set([tuple(basisfunc[bindex]) for bindex in chibasismiss[0]])
basisin = set([tuple(bv) for bv in basisfunc if tuple(bv) not in basisout])
# converting chibasisfound and chibasismiss into matrices:
chibasisfound_mat = np.zeros((N+1, Nbasis, N+1), dtype=int)
# chibasisfound_sparse = [scipy.sparse.csr_matrix((Nbasis, N+1), dtype=int)
# for n in range(N+1)]
chibasismiss_mat = np.zeros((N+1, Nbasis), dtype=int)
chibasismissmatch_mat = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
chibasisfar_mat = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
for n, cbf, cbm, cbmm, cbfar in zip(itertools.count(), chibasisfound,
chibasismiss, chibasismissmatch, chibasisfar):
for bindex in cbm:
chibasismiss_mat[n, bindex] = 1
for bindex, siteindex in cbf.items():
chibasisfound_mat[n, bindex, siteindex] = 1
# chibasisfound_sparse[n][bindex, siteindex] = 1
for bindex, siteindex in cbmm.items():
chibasismissmatch_mat[bindex, siteindex, n] = 1
for bindex, siteindex in cbfar.items():
chibasisfar_mat[bindex, siteindex, n] = 1
Explanation: Some matrices and lists to manage conversion between sites and basis functions.
We also include a matrix that corresponds to "matching" basis functions as a function of endstate $x$. This is used to correct the outer product for "missing" basis functions, for when the missing basis functions map onto identical sites.
End of explanation
groupops = [np.array([[1,0],[0,1]]), np.array([[0,-1],[1,0]]),
np.array([[-1,0],[0,-1]]), np.array([[0,1],[-1,0]]),
np.array([[-1,0],[0,1]]), np.array([[1,0],[0,-1]]),
np.array([[0,-1],[-1,0]]), np.array([[0,1],[1,0]])]
sitegroupops, basisgroupops = [], []
for g in groupops:
sg = np.zeros([Nsite, Nsite], dtype=int)
bg = np.zeros([Nbasis, Nbasis], dtype=int)
for n, x in enumerate(sites):
yt = tuple(np.dot(g, x))
sg[sitedict[yt], n] = 1
for n, x in enumerate(basisfunc):
yt = tuple(np.dot(g, x))
bg[basisdict[yt], n] = 1
sitegroupops.append(sg)
basisgroupops.append(bg)
foundstates = set([])
binary = np.array([2**n for n in range(Nsite)])
symmstateslist, symmPlist = [], []
for st, P in tqdm_notebook(zip(states, Pstates), total=(2**N), disable=not __TESTING__):
bc = np.dot(st, binary)
if bc not in foundstates:
symmstateslist.append(st)
equivset = set([np.dot(np.dot(g, st), binary) for g in sitegroupops])
foundstates.update(equivset)
symmPlist.append(len(equivset)*P)
symmstates = np.array(symmstateslist)
symmPstates = np.array(symmPlist)
symmstates.shape
if __TESTING__:
np.sum(symmPstates, axis=0)
Explanation: Group operation simplification
For our 8 group operations, corresponding to the point group operations on a square, we're going to make a reduced state list that only contains one symmetry-unique representative. This requires mapping the group operations on Cartesian coordinates into corresponding group operations on our sites, and our basis functions.
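A minimal consistency check (a sketch, not in the original): each site group operation is a permutation matrix, so it must preserve the solute count of any state.
python
st_test = states[12345]
for g in sitegroupops:
    assert np.dot(g, st_test).sum() == st_test.sum()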
End of explanation
biasvec_mat = np.zeros((2, N+1), dtype=int)
for j, dx in enumerate(dxlist):
biasvec_mat[:, j+1] -= dx
if __TESTING__:
print(np.dot(biasvec_mat, states[8388608]), states[8388608])
def symmetrize(mat, groupops0, groupops1):
    """Designed to symmetrize the first two entries of a matrix with the
    corresponding group operations"""
symmmat = np.zeros(mat.shape)
for g0, g1 in zip(groupops0, groupops1):
for i in range(mat.shape[0]):
for j in range(mat.shape[1]):
symmmat[i, j] += np.tensordot(np.tensordot(
mat, g1[j], axes=(1,0)), g0[i], axes=(0,0))
symmmat /= len(groupops0)
return symmmat
Explanation: Now, we need symmetrized versions of a lot of our information from above, in order to properly account for all of the symmetrized versions of our basis functions. This includes
Computation of bias function times a basis function
Computation of two basis functions
Inside/inside
Inside/outside
Outside/outside
Outside/outside matching
We can group these in terms of what factor of concentration goes in front.
End of explanation
@njit(nogil=True, parallel=True)
def tripleouterupdate(summand, A, B, C):
    """Update summand[i,j,k] += A[i]*B[j]*C[k]"""
I, = A.shape
J, = B.shape
K, = C.shape
for i in prange(I):
for j in prange(J):
for k in prange(K):
summand[i, j, k] += A[i]*B[j]*C[k]
@njit(nogil=True, parallel=True)
def matrixouterupdate(summand, A, B):
    """Update summand[i,j,k] += A[i, j]*B[k]"""
I,J = A.shape
K, = B.shape
for i in prange(I):
for j in prange(J):
for k in prange(K):
summand[i, j, k] += A[i,j]*B[k]
Explanation: Efficient matrix operations
Some jit functions via numba to make operations efficient:
End of explanation
resbiasave = np.zeros(N+1, dtype=int)
for st, P in tqdm_notebook(zip(symmstates, symmPstates), total=symmstates.shape[0],
disable=not __TESTING__):
# bv = np.sum(dx for j, dx in enumerate(dxlist) if st[j+1] == 0)
bv = np.dot(biasvec_mat, st)
W = 4-np.sum(st[1:5])
if W>0:
resbiasave += P*(bv[0]*bv[0]+bv[1]*bv[1])*(12//W)
print(resbiasave/12)
Explanation: Evaluation of averages
We have a state vector $\chi_i$ = 0 or 1, and for each end position $j$ we'll have the representation of $M_{\chi\chi'}$ as a vector $M_j$; we want the contribution to each basis function $b$.
Let's try some averages; first, without basis functions:
$\langle \tau_\chi \mathbf{b}_\chi\cdot\mathbf{b}_\chi\rangle_\chi$, the average residual bias.
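For reference, the integer bookkeeping in the loop above (the 12//W factor and the final division by 12) is equivalent to the direct floating-point accumulation below (a sketch):
python
resbias_float = np.zeros(N + 1)
for st, P in zip(symmstates, symmPstates):
    W = 4 - np.sum(st[1:5])
    if W > 0:
        bv = np.dot(biasvec_mat, st)
        resbias_float += P * np.dot(bv, bv) / W
# resbias_float should match resbiasave/12 printed above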
End of explanation
biasvecbar = np.zeros((2, Nbasis, N+1), dtype=int)
Pc = np.zeros(N+1, dtype=int)
for st, P in tqdm_notebook(zip(symmstates, symmPstates), total=symmstates.shape[0],
disable=not __TESTING__):
# bv = np.sum(dx for j, dx in enumerate(dxlist) if st[j+1] == 0)
W = 4-np.sum(st[1:5])
if W==0 or W==4: continue
bv = np.dot(biasvec_mat, st)
Pc[1:] = P[:-1]
tripleouterupdate(biasvecbar, bv, np.dot(chibasisfound_mat[0], st), P)
tripleouterupdate(biasvecbar, bv, chibasismiss_mat[0], Pc)
symmbiasvecbar = symmetrize(biasvecbar, groupops, basisgroupops)
Explanation: Now, an average involving a single basis function: $\langle \mathbf{b}_\chi \phi_{\chi,\mathbf{x}}\rangle_\chi$.
End of explanation
# @njit(nogil=True, parallel=True)
@jit
def matrixupdate(mat_bar, mat_vec, chibasis, chibasis_miss,
chibasismissmatch_mat, P, Pc, Pcc):
chibasis0, chibasis1 = chibasis[0], chibasis_miss[0]
chipbasis0, chipbasis1 = np.dot(mat_vec, chibasis), np.dot(mat_vec, chibasis_miss)
tripleouterupdate(mat_bar, chibasis0, chipbasis0, P)
tripleouterupdate(mat_bar, chibasis1, chipbasis0, Pc)
tripleouterupdate(mat_bar, chibasis0, chipbasis1, Pc)
# note: this is a little confusing; if the two ("missing") basis functions are
# referencing *different* sites, then we pick up a x^2 term; but if they
# reference the same site, it is a factor of x.
tripleouterupdate(mat_bar, chibasis1, chipbasis1, Pcc)
matchouter = np.dot(chibasismissmatch_mat, mat_vec)
matrixouterupdate(mat_bar, matchouter, Pc-Pcc)
# I'm not entirely sure how this is supposed to read; the matching seems to be the key?
# @njit(nogil=True, parallel=True)
@jit
def farmatrixupdate(mat_bar, mat_vec, chibasis_far, Pc, Pcc):
# note: this is a little confusing; if the two ("missing") basis functions are
# referencing *different* sites, then we pick up a x^2 term; but if they
# reference the same site, it is a factor of x.
# tripleouterupdate(mat_bar, chibasis1, chipbasis1, Pcc)
matchouter = np.dot(chibasis_far, mat_vec)
matrixouterupdate(mat_bar, matchouter, Pc-Pcc)
# @njit(nogil=True, parallel=True)
@jit
def vectorupdate(vec_bar, bv, vec, chibasis, chibasis_miss, P, Pc):
# chibasis0, chibasis1 = chibasis[0], chibasis_miss[0]
chipbasis0, chipbasis1 = np.dot(vec, chibasis), np.dot(vec, chibasis_miss)
tripleouterupdate(vec_bar, bv, chipbasis0, P)
tripleouterupdate(vec_bar, bv, chipbasis1, Pc)
tauscale = 12
eye = tauscale*np.pad(np.eye(Nsite0, dtype=int), ((0,0), (0,Nsite-Nsite0)), 'constant')
onevec = np.array([1,] + [0,]*(Nsite-1))
# We don't expect to need c^N+1 or c^N+2 so we ignore those...
# Matrices: <sum_c' W_cc' chi chi'> and higher order (GG, and WGG terms...)
Wbar = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
WGbar = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
WGGbar = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
# far-field versions of the same; the matched versions, followed by the "summed" (baseline) version:
Wbar_far = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
WGbar_far = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
WGGbar_far = np.zeros((Nbasis, Nbasis, N+1), dtype=int)
Wbar_far0 = np.zeros(N+1, dtype=int)
WGbar_far0 = np.zeros(N+1, dtype=int)
WGGbar_far0 = np.zeros(N+1, dtype=int)
# bias vector versions, including products with gamma:
biasvecbar = np.zeros((2, Nbasis, N+1), dtype=int)
biasGvecbar = np.zeros((2, Nbasis, N+1), dtype=int)
biasGGvecbar = np.zeros((2, Nbasis, N+1), dtype=int)
# residual bias vector versions:
resbiasave = np.zeros(N+1, dtype=int)
resbiasGave = np.zeros(N+1, dtype=int)
Pc, Pcc = np.zeros(N+1, dtype=int), np.zeros(N+1, dtype=int)
for st, P in tqdm_notebook(zip(symmstates, symmPstates), total=symmstates.shape[0],
disable=not __TESTING__):
Pc[1:] = P[:-1]
Pcc[2:] = P[:-2]
# basis0: those inside \chi, basis1: those outside \chi
chibasis = np.dot(chibasisfound_mat, st)
# chibasis0, chibasis1 = np.dot(chibasisfound_mat[0], st), chibasismiss_mat[0]
# construct our transition matrix:
W = np.zeros((Nsite0, Nsite), dtype=int)
for n, jumps in enumerate(jumplist):
if st[n] == 1: continue
for m in jumps:
if st[m] == 0:
W[n,n] -= 1
W[n,m] = 1
tau = -np.diag(W) # will be tau multiplied by tauscale = 12 (== -12//W[n,n])
Gam = W.copy() # Gamma matrix multiplied by tauscale = 12.
for n in range(Nsite0):
if tau[n] > 0:
tau[n] = tauscale//tau[n]
Gam[n,n] = 0
Gam[n] *= tau[n]
WG = -W[0,0]*np.dot(Gam[0,:Nsite0], Gam)+tauscale*tauscale*W[0,0]*onevec
WGG = np.dot(W[0,:Nsite0], np.dot(Gam[:,:Nsite0], Gam - 2*eye))
matrixupdate(Wbar, W[0], chibasis, chibasismiss_mat, chibasismissmatch_mat,
P, Pc, Pcc)
matrixupdate(WGbar, WG, chibasis, chibasismiss_mat, chibasismissmatch_mat,
P, Pc, Pcc)
matrixupdate(WGGbar, WGG, chibasis, chibasismiss_mat, chibasismissmatch_mat,
P, Pc, Pcc)
# far-field contributions of same:
farmatrixupdate(Wbar_far, W[0], chibasisfar_mat, Pc, Pcc)
farmatrixupdate(WGbar_far, WG, chibasisfar_mat, Pc, Pcc)
farmatrixupdate(WGGbar_far, WGG, chibasisfar_mat, Pc, Pcc)
Wbar_far0 += np.sum(W[0])*Pcc
WGbar_far0 += np.sum(WG)*Pcc
WGGbar_far0 += np.sum(WGG)*Pcc
# bias contributions (only bother if there's non-zero bias)
if tau[0]==0: continue
bv = np.sum(dx for j, dx in enumerate(dxlist) if st[j+1] == 0)
vectorupdate(biasvecbar, bv, onevec, chibasis, chibasismiss_mat, P, Pc)
vectorupdate(biasGvecbar, bv, Gam[0], chibasis, chibasismiss_mat, P, Pc)
vectorupdate(biasGGvecbar, bv, np.dot(Gam[0,:Nsite0],Gam-2*eye),
chibasis, chibasismiss_mat, P, Pc)
resbiasave += P*(bv[0]*bv[0]+bv[1]*bv[1])*tau[0]
bb = 0
for j, G in enumerate(Gam[0]):
if G>0:
bvp = np.array([0,0])
for k, dx in zip(jumplist[j], dxlist):
if st[k] == 0: bvp += dx
bb += G*np.dot(bv, bvp)*tau[j]
resbiasGave += P*bb
if __TESTING__:
print(Wbar_far0, WGbar_far0, WGGbar_far0)
# scaling and symmetrization
symmWbar = symmetrize(Wbar, basisgroupops, basisgroupops)
symmWGbar = symmetrize(WGbar, basisgroupops, basisgroupops)/(tauscale*tauscale)
symmWGGbar = symmetrize(WGGbar, basisgroupops, basisgroupops)/(tauscale*tauscale)
symmWbar_far = symmetrize(Wbar_far, basisgroupops, basisgroupops)
symmWGbar_far = symmetrize(WGbar_far, basisgroupops, basisgroupops)/(tauscale*tauscale)
symmWGGbar_far = symmetrize(WGGbar_far, basisgroupops, basisgroupops)/(tauscale*tauscale)
symmresbiasave = resbiasave/tauscale
symmresbiasGave = resbiasGave/(tauscale*tauscale)
symmbiasvecbar = symmetrize(biasvecbar, groupops, basisgroupops)
symmbiasGvecbar = symmetrize(biasGvecbar, groupops, basisgroupops)/tauscale
symmbiasGGvecbar = symmetrize(biasGGvecbar, groupops, basisgroupops)/(tauscale*tauscale)
symmresbiasave
symmresbiasGave
Explanation: Now, let's try a basis / basis vector average: $\langle \sum_{\chi'} \phi_{\chi,\mathbf{x}} W_{\chi\chi'} \phi_{\chi',\mathbf{y}}\rangle_\chi$.
This gets a bit complicated with the "missing" basis functions for $\chi$, and especially when we consider those that are missing in both $\chi$ and $\chi'$. We also need to treat the "far" case, where both $\mathbf{x}$ and $\mathbf{y}$ are far away from the origin.
We ignore terms higher than $c^N$ ($N$=25); no contributions are found higher than 10.
End of explanation
def truncate_vec(v):
    """Return a vector that's shortened by truncating the high-order 0 components"""
return v[:(np.max(np.nonzero(v))+1)]
def printvecbasis(VB):
    """Print out the components of a vector-basis matrix"""
for d in range(2):
print("dim {}".format(d+1))
for bv, v in zip(basisfunc, VB[d]):
if np.any(v) != 0:
print(bv, truncate_vec(v))
def printbasisbasis(BB, comp=None):
    """Print out the components of a basis-basis matrix"""
for bv0, BB0 in zip(basisfunc, BB):
if comp is not None and tuple(bv0) not in comp: continue
for bv1, B in zip(basisfunc, BB0):
if np.any(B) != 0:
print(bv0, bv1, truncate_vec(B))
printbasisbasis(symmWbar_far, {(0,0)})
printbasisbasis(symmWbar-symmWbar_far)
printbasisbasis(symmWGbar_far, {(0,0)})
printbasisbasis(symmWGbar-symmWGbar_far)
printbasisbasis(symmWGGbar_far, {(0,0)})
printbasisbasis(symmWGGbar-symmWGGbar_far)
printvecbasis(symmbiasvecbar)
printvecbasis(symmbiasGvecbar)
printvecbasis(symmbiasGGvecbar)
Explanation: Output of averages
Some helper functions to make the printing nicer, followed by direct output.
End of explanation
import h5py
rewriteFile = False
printFile = False
if rewriteFile:
with h5py.File('Neighbor-averaging.hdf5', 'w') as f:
f['dxlist'] = np.array(dxlist)
f['sites'] = np.array(sites)
f['jumplist'] = np.array(jumplist)
f['basisfunc'] = np.array(basisfunc)
f['symmWbar'] = symmWbar
f['symmWGbar'] = symmWGbar
f['symmWGGbar'] = symmWGGbar
f['symmWbar_far'] = symmWbar_far
f['symmWGbar_far'] = symmWGbar_far
f['symmWGGbar_far'] = symmWGGbar_far
f['symmresbias'] = symmresbiasave
f['symmresbiasGave'] = symmresbiasGave
f['symmbiasvecbar'] = symmbiasvecbar
f['symmbiasGvecbar'] = symmbiasGvecbar
f['symmbiasGGvecbar'] = symmbiasGGvecbar
if printFile:
with h5py.File('Neighbor-averaging.hdf5', 'r') as f:
for k, c in f.items():
print(k)
print(c.value)
Explanation: Write out to HDF5 file
We now store the output in an HDF5 file for later use and analysis.
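Reading the stored averages back in later is then straightforward (a sketch):
python
with h5py.File('Neighbor-averaging.hdf5', 'r') as f:
    symmWbar_in = f['symmWbar'][...]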
End of explanation
def mpmesh(Ndiv, pre=np.pi):
    """Generates a Monkhorst-Pack mesh for a square lattice.
    :param Ndiv: number of divisions
    :param pre: prefactor for edge of Brillouin zone (pi/a_0)
    :returns k[Nk,2]: k-points
    :returns w[Nk]: weight
    """
prescale = pre/Ndiv
wscale = 1./(Ndiv*Ndiv)
Nk = (Ndiv*(Ndiv+1))//2
kpt, w = np.zeros((Nk,2)), np.zeros(Nk)
i = 0
for n in range(Ndiv):
for m in range(n+1):
kpt[i,0] = prescale*(n+0.5)
kpt[i,1] = prescale*(m+0.5)
if n==m:
w[i] = wscale
else:
w[i] = 2*wscale
i += 1
return kpt, w
square = crystal.Crystal(np.eye(2), [np.zeros(2)])
chem = 0
sitelist = square.sitelist(chem)
jumpnetwork = square.jumpnetwork(chem, 1.01) # [[((0,0), dx) for dx in dxlist]]
starset = stars.StarSet(jumpnetwork, square, chem, 3)
vecstarset = stars.VectorStarSet(starset)
if __TESTING__:
print(starset)
if __TESTING__:
for vR, vV in zip(vecstarset.vecpos, vecstarset.vecvec):
print('')
for R, v in zip(vR, vV):
print(starset.states[R] , v)
GF = GFcalc.GFCrystalcalc(square, chem, sitelist, jumpnetwork, kptwt = mpmesh(32))
GF.SetRates(np.ones(1), np.zeros(1), np.ones(1), np.zeros(1))
if __TESTING__:
print(GF)
GFmat, GFstarset = vecstarset.GFexpansion()
GF0array = np.array([GF(0,0,GFstarset.states[s[0]].R) for s in GFstarset.stars])
g0 = np.dot(GFmat, GF0array)
print(g0)
basis2state = [starset.stateindex(stars.PairState(0, 0, bv, bv)) for bv in basisfunc]
basis2star = [starset.starindex(stars.PairState(0, 0, bv, bv)) for bv in basisfunc]
if __TESTING__:
for bv, stateind, starind in zip(basisfunc, basis2state, basis2star):
print(bv, stateind, starind)
state2basis = [basis2state.index(n) for n in range(starset.Nstates)]
if __TESTING__:
print(state2basis)
Explanation: Mapping onto vectorStars
We create the simplified symmetry basis functions using vectorStars, to fold down the full representation, and compute proper inverses. We also make our own Monkhorst-Pack mesh that is shifted off of the origin, and symmetrized, for simplicity.
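A quick check on the custom mesh (a sketch): the Monkhorst-Pack weights should sum to one.
python
kchk, wchk = mpmesh(8)
print(wchk.sum())    # expect 1.0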
End of explanation
NVS = vecstarset.Nvstars
symmbiasvecVS = np.zeros((N+1, NVS))
symmbiasGvecVS = np.zeros((N+1, NVS))
symmbiasGGvecVS = np.zeros((N+1, NVS))
for i in range(vecstarset.Nvstars):
for Ri, vi in zip(vecstarset.vecpos[i], vecstarset.vecvec[i]):
bi = state2basis[Ri]
symmbiasvecVS[:, i] += np.dot(vi, symmbiasvecbar[:,bi,:])
symmbiasGvecVS[:, i] += np.dot(vi, symmbiasGvecbar[:,bi,:])
symmbiasGGvecVS[:, i] += np.dot(vi, symmbiasGGvecbar[:,bi,:])
stars.zeroclean(symmbiasvecVS);
stars.zeroclean(symmbiasGvecVS);
stars.zeroclean(symmbiasGGvecVS);
for nv in range(NVS):
if not np.allclose(symmbiasvecVS[:,nv], 0):
print(nv, truncate_vec(symmbiasvecVS[:,nv]))
for nv in range(NVS):
if not np.allclose(symmbiasGvecVS[:,nv], 0):
print(nv, truncate_vec(symmbiasGvecVS[:,nv]))
for nv in range(NVS):
if not np.allclose(symmbiasGGvecVS[:,nv], 0):
print(nv, truncate_vec(symmbiasGGvecVS[:,nv]))
symmWbarVS = np.zeros((N+1, NVS, NVS))
symmWGbarVS = np.zeros((N+1, NVS, NVS))
symmWGGbarVS = np.zeros((N+1, NVS, NVS))
for i in range(vecstarset.Nvstars):
for Ri, vi in zip(vecstarset.vecpos[i], vecstarset.vecvec[i]):
bi = state2basis[Ri]
for j in range(vecstarset.Nvstars):
for Rj, vj in zip(vecstarset.vecpos[j], vecstarset.vecvec[j]):
bj = state2basis[Rj]
vivj = np.dot(vi,vj)
symmWbarVS[:, i, j] += vivj*(symmWbar[bi,bj,:]-symmWbar_far[bi,bj,:])
symmWGbarVS[:, i, j] += vivj*(symmWGbar[bi,bj,:]-symmWGbar_far[bi,bj,:])
symmWGGbarVS[:, i, j] += vivj*(symmWGGbar[bi,bj,:]-symmWGGbar_far[bi,bj,:])
stars.zeroclean(symmWbarVS);
stars.zeroclean(symmWGbarVS);
stars.zeroclean(symmWGGbarVS);
for nv,mv in itertools.product(range(NVS), repeat=2):
if not np.allclose(symmWbarVS[:,nv,mv], 0):
print(nv, mv, truncate_vec(symmWbarVS[:,nv,mv]))
for nv,mv in itertools.product(range(NVS), repeat=2):
if not np.allclose(symmWGbarVS[:,nv,mv], 0):
print(nv, mv, truncate_vec(symmWGbarVS[:,nv,mv]))
for nv,mv in itertools.product(range(NVS), repeat=2):
if not np.allclose(symmWGGbarVS[:,nv,mv], 0):
print(nv, mv, truncate_vec(symmWGGbarVS[:,nv,mv]))
Explanation: Now the real conversion begins! We start by mapping all of the bias vectors and local functions onto our vectorBasis.
End of explanation
def FT(mat, kptwt):
    """(real) Fourier transform of a translationally invariant function.
    :param mat[Nbasis, N+1]: far-field version of matrix;
        each Nbasis is relative to 0
    :param kptwt: tuple of (kpt[Nkpt, 2], wt[Nkpt])
    :returns matFT[Nkpt, N+1]: FT of matrix
    """
kpt = kptwt[0]
matFT = np.zeros((kpt.shape[0], N+1))
for bv, matv in zip(basisfunc, mat):
matFT += np.outer(np.cos(np.dot(kpt, bv)), matv)
return matFT
PE.Taylor2D(Lmax=6); # initialize
def Taylor(mat):
    """(real) Taylor expansion of the Fourier transform of a translationally invariant function.
    :param mat[Nbasis, N+1]: far-field version of matrix;
        each Nbasis is relative to 0
    :returns matTaylor: T2D version of FT Taylor expansion matrix
    """
pre = np.array([1., 0., -1/2, 0., 1/24]) # Taylor coefficients for cos()
matTaylor = PE.Taylor2D()
for bv, matv in zip(basisfunc, mat):
for ve in PE.Taylor2D.constructexpansion([(matv, bv)], pre=pre):
matTaylor += ve
matTaylor.reduce()
return matTaylor
if __TESTING__:
print(FT(symmWbar_far[0], mpmesh(4)))
g0Taylor = (Taylor(symmWbar_far[0])[1]).inv() # extract out the "constant" term
print(g0Taylor)
g0WGbarTaylor = ( (g0Taylor*g0Taylor)*Taylor(symmWGbar_far[0])).reduce().truncate(0)
g0WGGbarTaylor = ( (g0Taylor*g0Taylor)*Taylor(symmWGGbar_far[0])).reduce().truncate(0)
print(g0WGbarTaylor)
print(g0WGGbarTaylor)
kpt, wt = mpmesh(32)
g0FT = 1./FT(symmWbar_far[0], (kpt, wt))[:,1]
WGbarFT = FT(symmWGbar_far[0], (kpt, wt))
WGGbarFT = FT(symmWGGbar_far[0], (kpt, wt))
if __TESTING__:
print(g0FT)
pmax = np.sqrt(min([np.dot(G, G) for G in square.BZG]) / -np.log(1e-11))
prefactor = square.volume
g0Taylor_fnlp = {(n, l): GFcalc.Fnl_p(n, pmax) for (n, l) in g0Taylor.nl()}
g0Taylor_fnlu = {(n, l): GFcalc.Fnl_u(n, l, pmax, prefactor, d=2)
for (n, l) in g0Taylor.nl()}
if __TESTING__:
print(pmax)
if __TESTING__:
print(g0Taylor.nl(), g0WGbarTaylor.nl(), g0WGGbarTaylor.nl())
g0WGbarsc = np.zeros_like(g0WGbarFT)
g0WGGbarsc = np.zeros_like(g0WGGbarFT)
for i, k in enumerate(kpt):
g0WGbarsc[i] = (g0FT[i]**2)*g0WGbarFT[i] - g0WGbarTaylor(k, g0Taylor_fnlp).real
g0WGGbarsc[i] = (g0FT[i]**2)*g0WGGbarFT[i] - g0WGGbarTaylor(k, g0Taylor_fnlp).real
if __TESTING__:
print(truncate_vec(np.dot(wt, g0WGGbarsc)))
Explanation: Fourier transformation of translationally invariant contributions
Our "far" functions represent the translationally invariant contributions, and this requires Fourier transforms, and Taylor expansions to then be made into local contributions.
Mathematically, we're attempting to compute $\eta_i\cdot M_{ij}\cdot\eta_j$; the issue is that $\eta_i$ does not go to zero in the far-field (it's not local), and $M$ can be written as a local function plus a translationally invariant function $M^0$. Only the latter is problematic. However, as $\eta_i$ comes from a Green function solution (using the Dyson equation), multiplying by $w^0$ produces a local function. Hence, we can rewrite that matrix equation as $(w^0\eta)_i\cdot (g^0M^0g^0)_{ij}\cdot (w^0\eta)_j$. Then we "simply" need to evaluate $g^0M^0g^0$, which can be done using Fourier transforms, as it is the product of three translationally invariant functions.
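In symbols, restating the sentence above: the translationally invariant piece is evaluated as
$$\eta\cdot M^0\cdot\eta = (w^0\eta)\cdot\big(g^0 M^0 g^0\big)\cdot(w^0\eta),$$
so only the product $g^0 M^0 g^0$ of three translationally invariant functions needs to be computed in Fourier space.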
End of explanation
# this list is a bit of overkill, but...
veclist = [GFstarset.states[s[0]].dx for s in GFstarset.stars]
g0WGbar, g0WGGbar = [], []
for x in veclist:
coskx = np.sum(np.cos(np.tensordot(kpt, np.dot(g, x), axes=(1, 0)))
for g in groupops) / 8
g0WGbar.append(np.dot(wt*coskx,g0WGbarsc) + g0WGbarTaylor(x, g0Taylor_fnlu).real)
g0WGGbar.append(np.dot(wt*coskx,g0WGGbarsc) + g0WGGbarTaylor(x, g0Taylor_fnlu).real)
for v, g in zip(veclist, g0WGbar):
print(v, truncate_vec(g))
for v, g in zip(veclist, g0WGGbar):
print(v, truncate_vec(g))
Explanation: inverse Fourier transformation
Now we go from the Fourier transformed version to the inverse Fourier transformed version (the final product version).
End of explanation
@njit(nogil=True, parallel=True)
def polymult(p, q):
    """Multiplication of two polynomial coefficient vectors, where
    p(x) = sum_n p[n] * x^n
    :param p: polynomial coefficients for p
    :param q: polynomial coefficients for q
    :returns pq: polynomial coefficients for p*q
    """
P = p.shape[0]-1
Q = q.shape[0]-1
pq = np.zeros(P+Q+1)
for n in range(P+Q+1):
for i in range(max(0,n-Q), min(n,P)+1):
pq[n] += p[i]*q[n-i]
return pq
@njit(nogil=True, parallel=True)
def polydiv(p, a):
    """Division of polynomial p(x) by (x-a)
    :param p: polynomial coefficients for p
    :param a: constant a in the factor (x-a)
    :returns d, r: quotient d(x), and remainder r
    """
P = p.shape[0]-1
d = np.zeros(P)
d[P-1] = p[P]
for n in range(P-2,-1,-1):
d[n] = p[n+1] + a*d[n+1]
return d, p[0] + a*d[0]
divpoly = np.zeros(N+1)
divpoly[0], divpoly[1] = 1+g0[0,0], -(1+3*g0[0,0])
etabar_div = -2*g0[0] # this is etabar*div, so that etabar = etabar_div/div
etaW0_div = np.zeros(N+1)
etaW0_div[0] = -2 # this is W0*etabar*div (for the translational invariant terms)
# unbiased:
L0 = np.zeros(N+1)
L0[0], L0[1] = 1., -1.
# Note: vecstarset.outer[i,j, v1, v2] = 1/2 delta_ij delta_v1v2,
# so we can use dot-products throughout
# SCGF:
L1 = 0.5*np.dot(symmbiasvecVS, etabar_div)
L_SCGF = polymult(L0, divpoly)[:N+1] + L1
polydiv(L_SCGF, 1)
# print(np.dot(GFmat[0,0], g0WGGbar))
PsiB = polymult(polymult(divpoly, divpoly), symmresbiasave)[:N+1] + \
-2*polymult(divpoly, np.dot(symmbiasGvecVS, etabar_div))[:N+1] + \
np.dot(np.dot(symmWGbarVS, etabar_div), etabar_div) + \
4*np.dot(GFmat[0,0], g0WGbar) # far-field; note: etaW0_div == 2, so factor of 4
print(PsiB)
WR = polymult(polymult(divpoly, divpoly), symmresbiasGave)[:N+1] - \
polymult(polymult(divpoly, divpoly), symmresbiasave)[:N+1] + \
-2*polymult(divpoly, L1)[:N+1] + \
-2*polymult(divpoly, np.dot(symmbiasGGvecVS, etabar_div))[:N+1] + \
np.dot(np.dot(symmWGGbarVS, etabar_div), etabar_div) + \
4*np.dot(GFmat[0,0], g0WGGbar)
print(WR)
# Now, to put it together, and do the division...
cBv = np.linspace(0.01,1,num=99,endpoint=False)
D1, D2 = [], []
for cB in cBv:
# print(cB)
cA = 1-cB
cpow = np.array([cB**n for n in range(N+1)])
L0c, divc, L1c = np.dot(cpow, L0), np.dot(cpow, divpoly), np.dot(cpow, L_SCGF)
L1c /= divc
PsiBc, WRc = np.dot(cpow, PsiB)/(divc*divc), np.dot(cpow, WR)/(divc*divc)
L2c = L1c + 0.5*PsiBc*PsiBc/WRc
D0c = L0c/cA
D1c = L1c/cA
D2c = L2c/cA
D1.append(D1c)
D2.append(D2c)
print(cB, D1c, D2c, D2c/D1c) #, PsiBc)
D1v, D2v = np.array(D1), np.array(D2)
plt.rcParams['figure.figsize'] = (8,8)
fig, ax = plt.subplots()
ax.plot(cBv, D1, 'b', label='GF')
ax.plot(cBv, D2, 'r', label='GF+resbias')
ax.set_ylabel('$D^{\\rm A}$', fontsize='x-large')
ax.set_xlabel('$c_{\\rm B}$', fontsize='x-large')
ax.legend(bbox_to_anchor=(0.5,0.5,0.5,0.3), ncol=1, shadow=True,
frameon=True, fontsize='x-large', framealpha=1.)
plt.tight_layout()
plt.show()
Explanation: Putting it all together
All of the pieces are in place; we can now compute:
Transport coefficients using the SCGF approach
Residual bias correction to the latter
Quantities are expressed as polynomials in $c_\text{B}$, the concentration of the immobile species.
The Green function, and the correction $\eta$, end up having particularly simple expressions that we will compute directly (this requires some simplification of the polynomial expressions, which is more difficult to express directly here). Unfortunately, it also introduces a denominator polynomial, which makes some of our expressions more complicated.
We have
$$\eta_i = -2\frac{g^0_{i0}}{1+g^0_{i0} - (1+3g^0_{i0})c_\text{B}}$$
End of explanation
num_SCGF, denom_SCGF = truncate_vec(-polydiv(L_SCGF,1)[0]), truncate_vec(divpoly)
num_SCGFbc, denom_SCGFbc = \
truncate_vec(-polydiv(0.5*polymult(PsiB,PsiB),1)[0]), \
truncate_vec(polymult(polymult(divpoly, divpoly), WR))
# check remainders (should be 0 for both)
if __TESTING__:
print(polydiv(L_SCGF,1)[1], polydiv(0.5*polymult(PsiB,PsiB),1)[1])
def print_fraction(numer, denom, powstring='**'):
    """Returns a string representation of our polynomial ratio"""
def format_pow(n):
if n==0:
return ''
if n==1:
return '*c'
return '*c' + powstring +'{}'.format(n)
# first, "divide" through until lowest order is constant on both:
while np.isclose(numer[0], 0) and np.isclose(denom[0], 0):
numer, denom = numer[1:], denom[1:]
# second, scale everything by lowest order term in denominator
scale = denom[np.min(np.nonzero(denom))]
numer /= scale
denom /= scale
s = '('
for n, coeff in enumerate(numer):
if not np.isclose(coeff, 0):
s += '{:+.10g}'.format(coeff) + format_pow(n)
s += ')/('
for n, coeff in enumerate(denom):
if not np.isclose(coeff, 0):
s += '{:+.10g}'.format(coeff) + format_pow(n)
s += ')'
return s
print(print_fraction(num_SCGF, denom_SCGF))
print(print_fraction(num_SCGF, denom_SCGF) + ' + ' +\
print_fraction(num_SCGFbc, denom_SCGFbc))
Explanation: Final "analytic" versions
We now produce the analytic (with numerical coefficients) version of our transport coefficients.
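Because print_fraction emits a valid Python expression in c, the result can also be turned into a callable for quick checks (a sketch):
python
D_SCGF_func = eval('lambda c: ' + print_fraction(num_SCGF, denom_SCGF))
print(D_SCGF_func(0.25))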
End of explanation
polydiv(polydiv(polydiv(num_SCGFbc,1)[0],1)[0],1)
polydiv(polydiv(denom_SCGFbc,1)[0],1)
SCGFbc_func = print_fraction(num_SCGF, denom_SCGF) + ' + ' +\
print_fraction(polydiv(polydiv(num_SCGFbc,1)[0],1)[0],
polydiv(polydiv(denom_SCGFbc,1)[0],1)[0])
print(SCGFbc_func)
Explanation: Note: both of these polynomials have two factors of $(1-c)$ in them; so we can simplify further...
End of explanation |
14,646 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A final exercise
This exercise puts together many of the topics covered in this session.
1. Load the single cube from the file iris.sample_data_path('SOI_Darwin.nc'). This contains monthly values of the Southern Oscillation Index.
Step1: 2. Add two new coordinates based upon the time coordinate, one categorising the meteorological season, and one the year the season was in (hint
Step2: 3. Compute the seasonal means from this cube (i.e. average together the times within in each individual season). You should end up with a time series of length 593 (hint
Step3: 4. Now compute the seasonal climatology. You should end up with a cube of size 4, one point per season (hint
Step4: 5. Extract the DJF season from both the climatology and seasonal means. Use these to compute a time series of DJF anomalies with respect to the DJF mean (hint
Step5: 6. Finally, give the DJF anomalies cube a sensible name and plot the time-series with labelled axes. | Python Code:
import iris
soi = iris.load_cube(iris.sample_data_path('SOI_Darwin.nc'))
print(soi)
Explanation: A final exercise
This exercise puts together many of the topics covered in this session.
1. Load the single cube from the file iris.sample_data_path('SOI_Darwin.nc'). This contains monthly values of the Southern Oscillation Index.
End of explanation
import iris.coord_categorisation as coord_cat
coord_cat.add_season(soi, 'time')
coord_cat.add_season_year(soi, 'time')
print(soi)
Explanation: 2. Add two new coordinates based upon the time coordinate, one categorising the meteorological season, and one the year the season was in (hint: add_season and add_season_year are pre-defined). Examine the resulting coordinates.
End of explanation
soi_seasons = soi.aggregated_by(['season', 'season_year'],
iris.analysis.MEAN)
print(soi_seasons)
Explanation: 3. Compute the seasonal means from this cube (i.e. average together the times within each individual season). You should end up with a time series of length 593 (hint: you can specify two coordinates to aggregate over).
End of explanation
soi_clims = soi.aggregated_by('season', iris.analysis.MEAN)
print(soi_clims)
Explanation: 4. Now compute the seasonal climatology. You should end up with a cube of size 4, one point per season (hint: you can use aggregated_by again).
End of explanation
winter = iris.Constraint(season='djf')
soi_djf = soi_seasons.extract(winter)
soi_djf_mean = soi_clims.extract(winter)
print(soi_djf)
print(soi_djf_mean)
soi_djf_anoms = soi_djf - soi_djf_mean
print(soi_djf_anoms)
Explanation: 5. Extract the DJF season from both the climatology and seasonal means. Use these to compute a time series of DJF anomalies with respect to the DJF mean (hint: remember you can subtract cubes of different dimensionality).
End of explanation
import iris.quickplot as qplt
soi_djf_anoms.rename('SOI_Darwin_anomalies')
qplt.plot(soi_djf_anoms)
qplt.show()
Explanation: 6. Finally, give the DJF anomalies cube a sensible name and plot the time-series with labelled axes.
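As an optional extension (a sketch; assumes rolling_window is available in this Iris version), the anomaly series can be smoothed before plotting:
python
smoothed = soi_djf_anoms.rolling_window('time', iris.analysis.MEAN, 10)
qplt.plot(smoothed)
qplt.show()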
End of explanation |
14,647 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Data Analysis, 3rd ed
Chapter 4, demo 1
Normal approximation for the Bioassay model.
Step1: Find the mode by minimising negative log posterior. Compute gradients and Hessian analytically, and use Newton's method for optimisation. You may use optimisation routines below for checking your results. See help for scipy.optimize.minimize.
Step2: Compute the normal approximation density on the grid. Note that this is just for illustration; in a real case we would not need to evaluate this, and we would only use the draws from the normal distribution approximation.
Step3: Compute Pareto smoothed importance sampling weights and Pareto diagnostic
Step4: Importance sampling weights could be used to weight different expectations directly, but for visualisation and easy computation of LD50 histogram, we use resampling importance sampling.
Step5: Create figure with all results | Python Code:
import numpy as np
from scipy import optimize, stats
%matplotlib inline
import matplotlib.pyplot as plt
import os, sys
# add utilities directory to path
util_path = os.path.abspath(os.path.join(os.path.pardir, 'utilities_and_data'))
if util_path not in sys.path and os.path.exists(util_path):
sys.path.insert(0, util_path)
# import from utilities
import psis
import plot_tools
# edit default plot settings
plt.rc('font', size=12)
# apply custom background plotting style
plt.style.use(plot_tools.custom_styles['gray_background'])
# Bioassay data, (BDA3 page 86)
x = np.array([-0.86, -0.30, -0.05, 0.73])
n = np.array([5, 5, 5, 5])
y = np.array([0, 1, 3, 5])
# compute the posterior density in grid
# - usually should be computed in logarithms!
# - with alternative prior, check that range and spacing of A and B
# are sensible
ngrid = 100
A = np.linspace(-4, 8, ngrid)
B = np.linspace(-10, 40, ngrid)
ilogit_abx = 1 / (np.exp(-(A[:,None] + B[:,None,None] * x)) + 1)
p = np.prod(ilogit_abx**y * (1 - ilogit_abx)**(n - y), axis=2)
# sample from the grid
nsamp = 1000
samp_indices = np.unravel_index(
np.random.choice(p.size, size=nsamp, p=p.ravel()/np.sum(p)),
p.shape
)
samp_A = A[samp_indices[1]]
samp_B = B[samp_indices[0]]
# add random jitter, see BDA3 p. 76
samp_A += (np.random.rand(nsamp) - 0.5) * (A[1]-A[0])
samp_B += (np.random.rand(nsamp) - 0.5) * (B[1]-B[0])
# samples of LD50
samp_ld50 = -samp_A / samp_B
Explanation: Bayesian Data Analysis, 3rd ed
Chapter 4, demo 1
Normal approximation for the Bioassay model.
End of explanation
# define the optimised function
def bioassayfun(w):
a = w[0]
b = w[1]
et = np.exp(a + b * x)
z = et / (1 + et)
e = - np.sum(y * np.log(z) + (n - y) * np.log(1 - z))
return e
# initial guess
w0 = np.array([0.0, 0.0])
# optimise
optim_res = optimize.minimize(bioassayfun, w0)
# extract desired results
w = optim_res['x']
S = optim_res['hess_inv']
Explanation: Find the mode by minimising negative log posterior. Compute gradients and Hessian analytically, and use Newton's method for optimisation. You may use optimisation routines below for checking your results. See help for scipy.optimize.minimize.
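If you also want to pass the analytic gradient (a sketch, not in the original demo; variable names follow the cell above):
python
def bioassayfun_grad(w):
    a, b = w
    z = 1 / (1 + np.exp(-(a + b * x)))
    # gradient of the negative log posterior (flat prior)
    return np.array([-np.sum(y - n * z), -np.sum((y - n * z) * x)])
optim_res = optimize.minimize(bioassayfun, w0, jac=bioassayfun_grad)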
End of explanation
# Construct a grid array of shape (ngrid, ngrid, 2) from A and B. Although
# Numpy's concatenation functions do not support broadcasting, a clever trick
# can be applied to overcome this without unnecessary memory copies
# (see Numpy's documentation for strides for more information):
A_broadcasted = np.lib.stride_tricks.as_strided(
A, shape=(ngrid,ngrid), strides=(0, A.strides[0]))
B_broadcasted = np.lib.stride_tricks.as_strided(
B, shape=(ngrid,ngrid), strides=(B.strides[0], 0))
grid = np.dstack((A_broadcasted, B_broadcasted))
p_norm = stats.multivariate_normal.pdf(x=grid, mean=w, cov=S)
# draw samples from the distribution
samp_norm = stats.multivariate_normal.rvs(mean=w, cov=S, size=1000)
Explanation: Compute the normal approximation density on the grid. Note that this is just for illustration; in a real case we would not need to evaluate this, and we would only use the draws from the normal distribution approximation.
End of explanation
lg = stats.multivariate_normal.logpdf(x=samp_norm, mean=w, cov=S)
Ar = samp_norm[:,0]
Br = samp_norm[:,1]
ilogit_abx = 1 / (np.exp(-(Ar[:,None] + Br[:,None] * x)) + 1)
lp = np.sum(np.log(ilogit_abx**y * (1 - ilogit_abx)**(n - y)), axis=1)
lw = lp - lg
lw, pk = psis.psislw(lw)
print("Pareto khat is {:.2}".format(pk))
Explanation: Compute Pareto smoothed importance sampling weights and Pareto diagnostic
End of explanation
# resampling importance sampling
pis = np.exp(lw)
nsamp = 1000
samp_indices = np.random.choice(pis.size, size=nsamp, p=pis)
rissamp_A = Ar[samp_indices]
rissamp_B = Br[samp_indices]
# add random jitter, see BDA3 p. 76
rissamp_A += (np.random.rand(nsamp) - 0.5) * (A[1]-A[0])
rissamp_B += (np.random.rand(nsamp) - 0.5) * (B[1]-B[0])
# samples of LD50
rissamp_ld50 = - rissamp_A / rissamp_B
Explanation: Importance sampling weights could be used to weight different expectations directly, but for visualisation and easy computation of LD50 histogram, we use resampling importance sampling.
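The smoothed weights can also be used directly, without resampling; for example (a sketch):
python
# importance-sampling estimate of p(beta > 0)
print('p(beta>0) ~ {:.2f}'.format(np.sum(np.exp(lw)[Br > 0])))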
End of explanation
fig, axes = plt.subplots(3, 3, figsize=(13, 10))
# plot the posterior density
ax = axes[0, 0]
ax.imshow(
p,
origin='lower',
aspect='auto',
extent=(A[0], A[-1], B[0], B[-1])
)
ax.set_xlim([-2, 6])
ax.set_ylim([-10, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.grid('off')
# plot the samples
ax = axes[1, 0]
ax.scatter(samp_A, samp_B, 5)
ax.set_xlim([-2, 6])
ax.set_ylim([-10, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.text(0, -7, 'p(beta>0)={:.2f}'.format(np.mean(samp_B>0)))
# plot the histogram of LD50
ax = axes[2, 0]
ax.hist(samp_ld50, np.linspace(-0.8, 0.8, 31))
ax.set_xlim([-0.8, 0.8])
ax.set_xlabel(r'LD50 = -$\alpha/\beta$')
ax.set_yticks(())
ax.set_xticks(np.linspace(-0.8, 0.8, 5))
# plot the posterior density for normal approx.
ax = axes[0, 1]
ax.imshow(
p_norm,
origin='lower',
aspect='auto',
extent=(A[0], A[-1], B[0], B[-1])
)
ax.set_xlim([-2, 6])
ax.set_ylim([-10, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.grid('off')
# plot the samples from the normal approx.
ax = axes[1, 1]
ax.scatter(samp_norm[:,0], samp_norm[:,1], 5)
ax.set_xlim([-2, 6])
ax.set_ylim([-10, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
# Normal approximation does not take into account that the posterior
# is not symmetric and that there is very low density for negative
# beta values. Based on the samples from the normal approximation
# it is estimated that there is about 4% probability that beta is negative!
ax.text(0, -7, 'p(beta>0)={:.2f}'.format(np.mean(samp_norm[:,1]>0)))
# Plot the histogram of LD50
ax = axes[2, 1]
# Since we have strong prior belief that beta should not be negative we can
# improve our normal approximation by conditioning on beta>0.
bpi = samp_norm[:,1] > 0
samp_ld50_norm = - samp_norm[bpi,0] / samp_norm[bpi,1]
ax.hist(samp_ld50_norm, np.linspace(-0.8, 0.8, 31))
ax.set_xlim([-0.8, 0.8])
ax.set_xlabel(r'LD50 = -$\alpha/\beta$')
ax.set_yticks(())
ax.set_xticks(np.linspace(-0.8, 0.8, 5))
# plot the samples from the resampling importance sampling
ax = axes[1, 2]
ax.scatter(rissamp_A, rissamp_B, 5)
ax.set_xlim([-2, 6])
ax.set_ylim([-10, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
# Importance sampling is able to improve the estimate of p(beta>0)
ax.text(0, -7, 'p(beta>0)={:.2f}'.format(np.mean(rissamp_B>0)))
# Plot the histogram of LD50
ax = axes[2, 2]
ax.hist(rissamp_ld50, np.linspace(-0.8, 0.8, 31))
ax.set_xlim([-0.8, 0.8])
ax.set_xlabel(r'LD50 = -$\alpha/\beta$')
ax.set_yticks(())
ax.set_xticks(np.linspace(-0.8, 0.8, 5))
# hide unused subplot
axes[0, 2].axis('off')
fig.tight_layout()
Explanation: Create figure with all results
End of explanation |
14,648 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating a 2 Layer Neural Network in 30 Lines of Python
Modified from an existing exercise. Credit for the original code to Stanford CS 231n
To demonstrate with code the math we went over earlier, we're going to generate some data that is not linearly separable, train a linear classifier, train a 2 layer neural network with a ReLU activation function, and then compare the results for both... just with plain ol' Python!
Importing Libraries
Step1: Generating a Spiral Training Dataset
We'll be using this 2D dataset because it's easy to visually see the classifier performance, and because it's impossible to linearly separate the classes nicely.
Step2: Quick question, what are the dimensions of X and y?
Let's visualize this. Setting S=20 (size of points) so that the color/label differences are more visible.
Step3: Training a Linear Classifier
Let's start by training a simple y = WX + b linear classifier on this dataset. We need to compute some Weights (W) and a bias vector (b) for all classes.
Step4: We're going to compute the normalized softmax of these scores...
Step5: The array correct_logprobs is a 1D array of the probabilities assigned to the correct classes for each example.
Step6: Updating the Parameters
We update the parameters W and B in the direction of the negative gradient in order to decrease the loss.
Step7: Full Code for the Training the Linear Softmax Classifier
Using gradient descent method for optimization.
Using average cross-entropy for the loss function.
This ought to converge to a loss of around 0.78 after 150 iterations
Step8: Evaluating the Training Accuracy
The training accuracy here ought to be at around 0.5
This is better than chance for 3 classes, where the expected accuracy of randomly selecting one out of 3 labels is 0.33. But not that much better.
Step9: Let's eyeball the decision boundaries to get a better feel for the split.
Step10: Training a 2 Layer Neural Network
Let's see what kind of improvement we'll get with adding a single hidden layer.
Step11: Let's use a ReLU activation function. See how we're passing the scores from one layer into the hidden layer.
Step12: The loss computation and the dscores gradient computation remain the same. The major difference lies in the chaining backpropagation of the dscores all the way back up to the parameters W and b.
Step13: Full Code for Training the 2 Layer NN with ReLU activation
Very similar to the linear classifier!
Step14: Evaluating the Training Set Accuracy
This should be around 0.98, which is hugely better than the 0.50 we were getting from the linear classifier!
Step15: Let's visualize this to get a more dramatic sense of just how good the split is. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
Explanation: Creating a 2 Layer Neural Network in 30 Lines of Python
Modified from an existing exercise. Credit for the original code to Stanford CS 231n
To demonstrate with code the math we went over earlier, we're going to generate some data that is not linearly separable, train a linear classifier, train a 2 layer neural network with a ReLU activation function, and then compare the results for both... just with plain ol' Python!
Importing Libraries
End of explanation
N = 100 # points per class
D = 2 # dimensionality at 2 so we can eyeball it
K = 3 # number of classes
X = np.zeros((N*K, D)) # generate an empty matrix to hold X features
y = np.zeros(N*K, dtype='uint8') # generate an empty vector to hold y labels
# for 3 classes, evenly generates spiral arms
for j in xrange(K):
ix = range(N*j, N*(j+1))
r = np.linspace(0.0,1,N) #radius
t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N)*0.2 # theta
X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
y[ix] = j
Explanation: Generating a Spiral Training Dataset
We'll be using this 2D dataset because it's easy to visually see the classifier performance, and because it's impossible to linearly separate the classes nicely.
End of explanation
plt.scatter(X[:,0], X[:,1], c=y, s=20, cmap=plt.cm.Spectral)
plt.show()
Explanation: Quick question, what are the dimensions of X and y?
Let's visualize this. Setting S=20 (size of points) so that the color/label differences are more visible.
End of explanation
# random initialization of starting params. recall that it's best to randomly initialize at a small value.
# how many parameters should this linear classifier have? remember there are K output classes, and 2 features per observation.
W = 0.01 * np.random.randn(D,K)
b = np.zeros((1,K))
print "W shape", W.shape
print "W values", W
# Here are some hyperparameters that we're not going to worry about too much right now
learning_rate = 1e-0 # the step size in the descent
reg = 1e-3
scores = np.dot(X, W) + b
print scores.shape
Explanation: Training a Linear Classifier
Let's start by training a simple y = WX + b linear classifier on this dataset. We need to compute some Weights (W) and a bias vector (b) for all classes.
End of explanation
num_examples = X.shape[0]
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
# Let's look at one example to verify the softmax transform
print "Score: ", scores[50]
print "Class Probabilities: ", probs[50]
Explanation: We're going to compute the normalized softmax of these scores...
End of explanation
correct_logprobs = -np.log(probs[range(num_examples),y])
# data loss is L1 loss plus regularization loss
data_loss = np.sum(correct_logprobs)/num_examples
reg_loss = 0.5*reg*np.sum(W*W)
loss = data_loss + reg_loss
# this gets the gradient of the scores
# class probabilities minus - divided by num_examples
dscores = probs
dscores[range(num_examples),y] -= 1
dscores /= num_examples
# this backpropagates the gradient into W and b
dW = np.dot(X.T, dscores) # don't forget to transpose! otherwise, you'll be forwarding the gradient
dW += reg*W # regularization gradient of 0.5*reg*sum(W*W)
db = np.sum(dscores, axis=0, keepdims=True)
Explanation: The array correct_logprobs is a 1D array of the probabilities assigned to the correct classes for each example.
End of explanation
# this updates the W and b parameters
W += -learning_rate * dW
b += -learning_rate * db
Explanation: Updating the Parameters
We update the parameters W and B in the direction of the negative gradient in order to decrease the loss.
End of explanation
# initialize parameters randomly
W = 0.01 * np.random.randn(D,K)
b = np.zeros((1,K))
# some hyperparameters
step_size = 1e-0
reg = 1e-3 # regularization strength
# gradient descent loop
num_examples = X.shape[0]
# evaluated for 200 steps
for i in xrange(200):
# evaluate class scores, [N x K]
scores = np.dot(X, W) + b
# compute the class probabilities
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]
# compute the loss: average cross-entropy loss and regularization
corect_logprobs = -np.log(probs[range(num_examples),y])
data_loss = np.sum(corect_logprobs)/num_examples
reg_loss = 0.5*reg*np.sum(W*W)
loss = data_loss + reg_loss
# for every 10 iterations print the loss
if i % 10 == 0:
print "iteration %d: loss %f" % (i, loss)
# compute the gradient on scores
dscores = probs
dscores[range(num_examples),y] -= 1
dscores /= num_examples
  # backpropagate the gradient to the parameters (W,b)
dW = np.dot(X.T, dscores)
db = np.sum(dscores, axis=0, keepdims=True)
dW += reg*W # regularization gradient
# perform a parameter update
W += -step_size * dW
b += -step_size * db
Explanation: Full Code for the Training the Linear Softmax Classifier
Using gradient descent method for optimization.
Using average cross-entropy for the loss function.
This ought to converge to a loss of around 0.78 after 150 iterations
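If you want to see how much of the final loss is regularization, both pieces are still available after the loop (a small sketch, same Python 2 print style as the notebook):
python
print "final data loss %f, regularization loss %f" % (data_loss, reg_loss)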
End of explanation
scores = np.dot(X, W) + b
predicted_class = np.argmax(scores, axis=1)
print 'training accuracy: %.2f' % (np.mean(predicted_class == y))
Explanation: Evaluating the Training Accuracy
The training accuracy here ought to be at around 0.5
This is better than chance for 3 classes, where the expected accuracy of randomly selecting one out of 3 labels is 0.33. But not that much better.
End of explanation
# plot the resulting classifier
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = np.dot(np.c_[xx.ravel(), yy.ravel()], W) + b
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
Explanation: Let's eyeball the decision boundaries to get a better feel for the split.
End of explanation
# init parameters
np.random.seed(100) # so we all have the same numbers
h = 100 # size of hidden layer. a hyperparam in itself; must be set before W and b below
W = 0.01 * np.random.randn(D,h)
b = np.zeros((1,h))
W2 = 0.01 * np.random.randn(h,K)
b2 = np.zeros((1,K))
Explanation: Training a 2 Layer Neural Network
Let's see what kind of improvement we'll get with adding a single hidden layer.
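For reference, the parameter count of this 2-layer network is easy to tally (a quick calculation, not in the original):
python
# D*h weights + h biases in layer 1, h*K weights + K biases in layer 2
print "parameters:", D*h + h + h*K + K   # 2*100 + 100 + 100*3 + 3 = 603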
End of explanation
hidden_layer = np.maximum(0, np.dot(X, W) + b)
scores = np.dot(hidden_layer, W2) + b2
Explanation: Let's use a ReLU activation function. See how we're passing the scores from one layer into the hidden layer.
End of explanation
# backpropagate the gradient to the parameters of the hidden layer
dW2 = np.dot(hidden_layer.T, dscores)
db2 = np.sum(dscores, axis=0, keepdims=True)
# gradient of the outputs of the hidden layer (the local gradient)
dhidden = np.dot(dscores, W2.T)
# backprop through the ReLU function
dhidden[hidden_layer <= 0] = 0
# back right into the parameters W and b
dW = np.dot(X.T, dhidden)
db = np.sum(dhidden, axis=0, keepdims=True)
Explanation: The loss computation and the dscores gradient computation remain the same. The major difference lies in the chaining backpropagation of the dscores all the way back up to the parameters W and b.
End of explanation
# initialize parameters randomly
np.random.seed(100) # so we all have the same numbers
h = 100 # size of hidden layer
W = 0.01 * np.random.randn(D,h)
b = np.zeros((1,h))
W2 = 0.01 * np.random.randn(h,K)
b2 = np.zeros((1,K))
# some hyperparameters
step_size = 1e-0
reg = 1e-3 # regularization strength
# optimization: gradient descent loop
num_examples = X.shape[0]
for i in xrange(10000):
# feed forward
# evaluate class scores, [N x K]
hidden_layer = np.maximum(0, np.dot(X, W) + b) # note, ReLU activation
scores = np.dot(hidden_layer, W2) + b2
# compute the class probabilities
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]
# compute the loss: average cross-entropy loss and regularization
corect_logprobs = -np.log(probs[range(num_examples),y])
data_loss = np.sum(corect_logprobs)/num_examples
reg_loss = 0.5*reg*np.sum(W*W) + 0.5*reg*np.sum(W2*W2)
loss = data_loss + reg_loss
if i % 1000 == 0:
print "iteration %d: loss %f" % (i, loss)
# backprop
# compute the gradient on scores
dscores = probs
dscores[range(num_examples),y] -= 1
dscores /= num_examples
# backpropate the gradient to the parameters
# first backprop into parameters W2 and b2
dW2 = np.dot(hidden_layer.T, dscores)
db2 = np.sum(dscores, axis=0, keepdims=True)
# next backprop into hidden layer
dhidden = np.dot(dscores, W2.T)
# backprop the ReLU non-linearity
dhidden[hidden_layer <= 0] = 0
# finally into W,b
dW = np.dot(X.T, dhidden)
db = np.sum(dhidden, axis=0, keepdims=True)
# add regularization gradient contribution
dW2 += reg * W2
dW += reg * W
# perform a parameter update
W += -step_size * dW
b += -step_size * db
W2 += -step_size * dW2
b2 += -step_size * db2
Explanation: Full Code for Training the 2 Layer NN with ReLU activation
Very similar to the linear classifier!
End of explanation
hidden_layer = np.maximum(0, np.dot(X, W) + b)
scores = np.dot(hidden_layer, W2) + b2
predicted_class = np.argmax(scores, axis=1)
print 'training accuracy: %.2f' % (np.mean(predicted_class == y))
Explanation: Evaluating the Training Set Accuracy
This should be around 0.98, which is hugely better than the 0.50 we were getting from the linear classifier!
End of explanation
# plot the resulting classifier
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = np.dot(np.maximum(0, np.dot(np.c_[xx.ravel(), yy.ravel()], W) + b), W2) + b2
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
#fig.savefig('spiral_net.png')
Explanation: Let's visualize this to get a more dramatic sense of just how good the split is.
End of explanation |
14,649 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Introduction
Step1: 建立 ndarray
Step2: 看 ndarray 的第一件事情: shape , dtype
Step3: 有時候,可以看圖
Step4: 有很多其他建立的方式
Step5: 這是一堆資料
* 資料有什麼資訊?
* 資料有什麼限制?
* 這些限制有什麼意義?好處?
* 以前碰過什麼類似的東西?
* 可以套用在哪些東西上面?
* 可以怎麼用(運算)?
最簡單的計算是 逐項計算
see also np.vectorize
Step6: Q0
Step7: Q1
Step8: Indexing
可以用類似 list 的 indexing
Step9: Q2
給定
python
x = np.arange(30)
a = np.arange(30)
a[1
Step10: ndarray 也可以
Step11: Q3
動手試試看各種情況
比方
python
b = np.random.randint(0,99, size=(10,10))
b[
Step12: 試試看下面的結果
想一下是怎麼一回事(numpy 在想什麼?)
Step13: Q4
把 b 中的偶數都變成 -1
Step14: 用圖形來練習
Step15: Q
將圖片縮小成一半
擷取中間一小塊
圖片上下顛倒
左右鏡射
去掉綠色
將圖片放大兩倍
貼另外一張圖到大圖中
python
from urllib.request import urlopen
url = "https
Step16: Q
挖掉個圓圈? (300,300)中心,半徑 100
旋轉九十度? x,y 互換?
Step17: indexing 的其他用法
Step18: Reshaping
.flatten 拉平看看資料在電腦中如何儲存?
查看
.reshape, .T, np.rot00, .swapaxes .rollaxis
然後再做一下上面的事情
Step19: 堆疊在一起
查看 np.vstack np.hstack np.concatenate 然後試試看
Step20: 作用在整個 array/axis 的函數
Step21: 多重意義的運用, 水平平均,整合垂直平均
Step22: Tensor 乘法
先從點積開始
Step23: 矩陣乘法
如果忘記矩陣乘法是什麼了, 參考這裡 http | Python Code:
# 起手式
import numpy as np
Explanation: Numpy Introduction
End of explanation
np.array([1,2,3,4])
x = _
y = np.array([[1.,2,3],[4,5,6]])
y
Explanation: Creating an ndarray
End of explanation
x.shape
y.shape
x.dtype
y.dtype
Explanation: The first things to look at in an ndarray: shape and dtype
End of explanation
# import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
# 畫圖
plt.plot(x, 'x');
Explanation: Sometimes, it helps to look at a plot
End of explanation
# 建立 0 array
np.zeros_like(y)
np.zeros((10,10))
# 跟 range 差不多
x = np.arange(0, 10, 0.1)
# 亂數
y = np.random.uniform(-1,1, size=x.shape)
plt.plot(x, y)
Explanation: There are many other ways to create arrays
End of explanation
x = np.linspace(0, 2* np.pi, 1000)
plt.plot(x, np.sin(x))
Explanation: This is a pile of data
* What information does the data contain?
* What constraints does it have?
* What do those constraints mean? What are the benefits?
* What similar things have you encountered before?
* What can this be applied to?
* How can it be used (what operations)?
The simplest computation is element-wise computation
see also np.vectorize
End of explanation
#可以用 %run -i 跑參考範例
%run -i q0.py
# 或者看看參考範例
#%load q0.py
Explanation: Q0:
畫出 $y=x^2+1$ 或其他函數的圖形
用
python
plt.plot?
看看 plot 還有什麼參數可以玩
End of explanation
# 參考答案
#%load q1.py
Explanation: Q1:
試試看圖片。
使用
```python
from PIL import Image
讀入 PIL Image (這張圖是從 openclipart 來的 cc0)
img = Image.open('img/Green-Rolling-Hills-Landscape-800px.png')
圖片轉成 ndarray
img_array = np.array(img)
ndarray 轉成 PIL Image
Image.fromarray(img_array)
```
看看這個圖片的內容, dtype 和 shape
End of explanation
a = np.arange(30)
a
a[5]
a[3:7]
# 列出所有奇數項
a[1::2]
# 還可以用來設定值
a[1::2] = -1
a
# 或是
a[1::2] = -a[::2]-1
a
Explanation: Indexing
可以用類似 list 的 indexing
End of explanation
%run -i q2.py
#%load q2.py
Explanation: Q2
給定
python
x = np.arange(30)
a = np.arange(30)
a[1::2] = -a[1::2]
畫出下面的圖
End of explanation
b = np.array([[1,2,3], [4,5,6], [7,8,9]])
b
b[1][2]
b[1,2]
b[1]
Explanation: ndarray 也可以
End of explanation
b = np.random.randint(0,99, size=(5,10))
b
Explanation: Q3
動手試試看各種情況
比方
python
b = np.random.randint(0,99, size=(10,10))
b[::2, 2]
Fancy indexing
End of explanation
b[[1,3]]
b[(1,3)]
b[[1,2], [3,4]]
b[[(1,2),(3,4)]]
b[[True, False, False, True, False]]
Explanation: 試試看下面的結果
想一下是怎麼一回事(numpy 在想什麼?)
End of explanation
#參考範例
%run -i q4.py
Explanation: Q4
把 b 中的偶數都變成 -1
End of explanation
# 還記得剛才的
from PIL import Image
img = Image.open('img/Green-Rolling-Hills-Landscape-800px.png')
img_array = np.array(img)
Image.fromarray(img_array)
# 用來顯示圖片的函數
from IPython.display import display
def show(img_array):
display(Image.fromarray(img_array))
Explanation: 用圖形來練習
End of explanation
# 將圖片縮小成一半
%run -i q_half.py
# 將圖片放大
%run -i q_scale2.py
# 圖片上下顛倒
show(img_array[::-1])
%run -i q_paste.py
%run -i q_grayscale.py
Explanation: Q
將圖片縮小成一半
擷取中間一小塊
圖片上下顛倒
左右鏡射
去掉綠色
將圖片放大兩倍
貼另外一張圖到大圖中
python
from urllib.request import urlopen
url = "https://raw.githubusercontent.com/playcanvas/engine/master/examples/images/animation.png"
simg = Image.open(urlopen(url))
紅綠交換
團片變成黑白 參考 Y=0.299R+0.587G+0.114B
會碰到什麼困難? 要如何解決
End of explanation
# 用迴圈畫圓
%run -i q_slow_circle.py
# 用 fancy index 畫圓
%run -i q_fast_circle.py
Explanation: Q
挖掉個圓圈? (300,300)中心,半徑 100
旋轉九十度? x,y 互換?
End of explanation
# 還可以做模糊化
a = img_array.astype(float)
for i in range(10):
a[1:,1:] = (a[1:,1:]+a[:-1,1:]+a[1:,:-1]+a[:-1,:-1])/4
show(a.astype('uint8'))
# 求邊界
a = img_array.astype(float)
a = a @ [0.299, 0.587, 0.114, 0]
a = np.abs((a[1:]-a[:-1]))*2
show(a.astype('uint8'))
Explanation: indexing 的其他用法
End of explanation
# reshaping 的應用
R,G,B,A = img_array.reshape(-1,4).T
plt.hist((R,G,B,A), color="rgby");
Explanation: Reshaping
Use .flatten to flatten the array and see how the data is stored in memory
Look up
.reshape, .T, np.rot90, .swapaxes, .rollaxis
and then redo the operations above
End of explanation
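A small self-contained sketch of the functions listed above:
```python
m = np.arange(12).reshape(2, 6)   # a flat range reshaped to 2 rows x 6 columns
print(m)
print(m.T)                        # transpose, shape (6, 2)
print(np.rot90(m))                # rotate 90 degrees counter-clockwise
print(m.swapaxes(0, 1).shape)     # (6, 2), same shape change as .T
print(m.flatten())                # back to a flat copy of the data
```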
# example
show(np.hstack([img_array, img_array2]))
# example
np.concatenate([img_array, img_array2], axis=2).shape
Explanation: Stacking arrays together
Look up np.vstack, np.hstack, np.concatenate and try them out
End of explanation
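A minimal sketch of the three stacking functions on small arrays (the cells above use the image arrays instead):
```python
p = np.ones((2, 3))
q = np.zeros((2, 3))
print(np.vstack([p, q]).shape)               # (4, 3): stacked vertically
print(np.hstack([p, q]).shape)               # (2, 6): stacked horizontally
print(np.concatenate([p, q], axis=0).shape)  # (4, 3): same as vstack here
```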
np.max([1,2,3,4])
np.sum([1,2,3,4])
np.mean([1,2,3,4])
np.min([1,2,3,4])
Explanation: Functions that act on a whole array/axis
End of explanation
x_mean = img_array.astype(float).mean(axis=0, keepdims=True)
print(x_mean.dtype, x_mean.shape)
y_mean = img_array.astype(float).mean(axis=1, keepdims=True)
print(y_mean.dtype, y_mean.shape)
# automatic broadcasting
xy_combined = ((x_mean+y_mean)/2).astype('uint8')
show(xy_combined)
Explanation: Using the axis argument in more than one way: a horizontal average combined with a vertical average
End of explanation
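A tiny sketch of the broadcasting rule used above: a (1, n) row and an (m, 1) column expand to an (m, n) result.
```python
row = np.arange(3).reshape(1, 3)        # shape (1, 3)
col = 10 * np.arange(4).reshape(4, 1)   # shape (4, 1)
print((row + col).shape)                # (4, 3) after broadcasting
print(row + col)
```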
# = 1*4 + 2*5 + 4*6
np.dot([1,2,3], [4,5,6])
u=np.array([1,2,3])
v=np.array([4,5,6])
print( u@v )
print( (u*v).sum() )
Explanation: Tensor multiplication
Start from the dot product
End of explanation
A=np.random.randint(0,10, size=(5,3))
A
B=np.random.randint(0,10, size=(3,7))
B
A.dot(B)
Explanation: Matrix multiplication
If you have forgotten what matrix multiplication is, see http://matrixmultiplication.xyz/
or http://eli.thegreenplace.net/2015/visualizing-matrix-multiplication-as-a-linear-combination/
Matrix multiplication can be viewed as:
* inner products (over the shared axis) of every combination of the other axes
* a linear combination of several column vectors
* plugging into a system of linear equations, see A1-矩陣與基本列運算.ipynb
* understanding it with numpy
```python
np.sum(a[:,:, np.newaxis] * b[np.newaxis, : , :], axis=1)
dot(a, b)[i,k] = sum(a[i,:] * b[:, k])
```
Higher dimensions
How should this be generalized?
* tensordot, tensor contraction: with a.shape=(3,4,5), b.shape=(4,5,6), axis = 2 it is equivalent to
```python
np.sum(a[..., np.newaxis] * b[np.newaxis, ...], axis=(1, 2))
tensordot(a,b)[i,k]=sum(a[i, ...]* b[..., k])
```
https://en.wikipedia.org/wiki/Tensor_contraction
dot
```python
dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])
np.tensordot(a,b, axes=(-1,-2))
```
matmul treats the last two indices as a matrix
```python
a=np.random.random(size=(3,4,5))
b=np.random.random(size=(3,5,7))
(a @ b).shape
np.sum(a[..., np.newaxis] * np.moveaxis(b[..., np.newaxis], -1,-3), axis=-2)
```
einsum https://en.wikipedia.org/wiki/Einstein_notation
```python
np.einsum('ii', a) # trace(a)
np.einsum('ii->i', a) #diag(a)
np.einsum('ijk,jkl', a, b) # tensordot(a,b)
np.einsum('ijk,ikl->ijl', a,b ) # matmul(a,b)
```
End of explanation |
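As a quick self-check of the equivalences listed above (a sketch, using the same shapes):
```python
a = np.random.random(size=(3, 4, 5))
b = np.random.random(size=(3, 5, 7))
print(np.allclose(a @ b, np.einsum('ijk,ikl->ijl', a, b)))   # batched matmul two ways

x = np.random.random(size=(4, 5))
y = np.random.random(size=(5, 6))
print(np.allclose(x @ y, np.sum(x[:, :, np.newaxis] * y[np.newaxis, :, :], axis=1)))
```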
14,650 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building word vectors
Setup
Step1: Data
load corpus vocab and wordidx
Step2: load data
Step3: Word Vectors Pre Trained
collecting biolab words
Step4: dont need word to id dict since this is indexed with words
using biolab words for missing corpus words
Step5: for tensorboard
Step6: building word vectors of 200d for model
Step7: fill in biolab vectors available
Step8: total words not updated with training from biolab
Step9: gcloud tensorboard serving
Step10: for http
Step11: write to checkpoint file
Step12: FastText Vectors
fasttext commands used
fasttext skipgram -minCount 1 -dim 200 -epoch 10 -input corpus_text_for_fast_text.txt -output ft_wvs_200d_10e
fasttext cbow -minCount 1 -dim 200 -epoch 10 -input corpus_text_for_fast_text.txt -output ft_wvs_200d_10e
reading ft vectors
Step13: saving all trained fast text vectors
Step14: Viewing word vectors | Python Code:
import sys
import os
import re
import collections
import itertools
import bcolz
import pickle
sys.path.append('../lib')
import gc
import random
import smart_open
import h5py
import csv
import tensorflow as tf
import gensim
import datetime as dt
from tqdm import tqdm_notebook as tqdm
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
random_state_number = 967898
from tensorflow.python.client import device_lib
def get_available_gpus():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos if x.device_type == 'GPU']
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
sess = tf.Session(config=config)
get_available_gpus()
%pylab
%matplotlib inline
%load_ext autoreload
%autoreload
pd.options.mode.chained_assignment = None
pd.options.display.max_columns = 999
color = sns.color_palette()
Explanation: Building word vectors
Setup
End of explanation
corpus_vocab_list, corpus_vocab_wordidx = None, None
with open('processed/stage1/vocab_words_wordidx.pkl', 'rb') as f:
    (corpus_vocab_list, corpus_vocab_wordidx) = pickle.load(f)
# aliases used by the cells below, which refer to vocab_words / vocab_wordidx
vocab_words, vocab_wordidx = corpus_vocab_list, corpus_vocab_wordidx
print(len(corpus_vocab_list), len(corpus_vocab_wordidx))
Explanation: Data
load corpus vocab and wordidx
End of explanation
store = pd.HDFStore('processed/stage1/data_frames.h5')
train_df = store['train_df']
test_df = store['test_df']
Explanation: load data
End of explanation
from gensim.models.keyedvectors import KeyedVectors
biolab_keyed_vectors_pubmed_pmc_wiki = KeyedVectors.load_word2vec_format('external/biolab_wvs/wikipedia-pubmed-and-PMC-w2v.bin', binary=True)
biolab_words_pubmed_pmc_wiki = biolab_keyed_vectors_pubmed_pmc_wiki.vocab.keys()
biolab_words = set(biolab_words_pubmed_pmc_wiki)
len(biolab_words)
vocab_biolab = set(biolab_words) & set(vocab_words)
print (len(vocab_biolab))
vocab_biolab
vocab_not_in_biolab =set(vocab_words) - set(biolab_words)
print(len(vocab_not_in_biolab))
vocab_not_in_biolab
Explanation: Word Vectors Pre Trained
collecting biolab words
End of explanation
undesirable_ascii_characters = list(range(32))
undesirable_ascii_characters.remove(10) #keep new line since this might be used for sentence tokenizer
undesirable_charmap = dict.fromkeys(undesirable_ascii_characters)
from nltk import word_tokenize
from utils import custom_word_tokenizer, apply_custom_regx
custom_tokenized_biolab_pubmed_pmc_wiki_wv = {}
for word in vocab_biolab:
vector = biolab_keyed_vectors_pubmed_pmc_wiki.word_vec(word)
custom_tokenized_biolab_pubmed_pmc_wiki_wv[word.lower()] = vector
word = word.lower().encode('ascii', 'ignore').decode('utf-8', 'ignore')
word = str(word).translate(undesirable_charmap)
word = apply_custom_regx(word)
word = word.replace('\\t', '')
for part in word_tokenize(word):
if part in custom_tokenized_biolab_pubmed_pmc_wiki_wv:
custom_tokenized_biolab_pubmed_pmc_wiki_wv[part] += vector
custom_tokenized_biolab_pubmed_pmc_wiki_wv[part] /= 2
len(custom_tokenized_biolab_pubmed_pmc_wiki_wv)
Explanation: dont need word to id dict since this is indexed with words
using biolab words for missing corpus words
End of explanation
tb_vocab_size=5000
tb_vocab_biolab = list(vocab_biolab)[:tb_vocab_size]
with open("view_wvs_tb/tb_vocab.tsv", "w") as fp:
wr = csv.writer(fp, delimiter='\n')
wr.writerow(tb_vocab_biolab)
tb_word_vectors = np.random.randn(tb_vocab_size, 200)
for i,word in enumerate(tb_vocab_biolab):
tb_word_vectors[i] = custom_tokenized_biolab_pubmed_pmc_wiki_wv[word]
%autoreload
from utils import visualize_embeddings_in_tensorboard
visualize_this_embedding = tb_word_vectors
print(visualize_this_embedding.shape)
metadata_path = "/home/bicepjai/Projects/dsotc/data_prep/view_wvs_tb/tb_vocab.tsv"
visualize_embeddings_in_tensorboard(visualize_this_embedding, metadata_path, "/home/bicepjai/Projects/dsotc/data_prep/view_wvs_tb")
del tb_word_vectors
Explanation: for tensorboard
End of explanation
corpus_word_vectors = np.random.randn(len(vocab_words), 200)
corpus_word_vectors.shape
Explanation: building word vectors of 200d for model
End of explanation
for word in vocab_biolab:
dataset_corpus_word_index = vocab_wordidx[word]
corpus_word_vectors[dataset_corpus_word_index] = custom_tokenized_biolab_pubmed_pmc_wiki_wv[word]
Explanation: fill in biolab vectors available
End of explanation
words_not_updated = set(vocab_words) - vocab_biolab
len(words_not_updated)
words_not_updated
np.save("processed/stage1/biolab_updated_wvs.npy", corpus_word_vectors)
Explanation: total words not updated with training from biolab
End of explanation
dataset_corpus_words_list = np.load("dataset_corpus_words_list.npy")
corpus_word_vectors = np.load("corpus_word_vectors.npy")
tb_vocab_size = 10000
local_tb_dir = "/home/bicepjai/Projects/ml-compete/kaggle/mskrct/data_prep_2_ft/model_wv_visualize/gcloud/"
with open(local_tb_dir+"/vocab.tsv", "wb") as fp:
wr = csv.writer(fp, delimiter='\n')
wr.writerow(dataset_corpus_words_list[:tb_vocab_size])
Explanation: gcloud tensorboard serving
End of explanation
# np.savetxt("model_wv_visualize/word_vectors.tsv",corpus_word_vectors[:tb_vocab_size], delimiter='\t')
Explanation: for http://projector.tensorflow.org/ vectors need to be in tsv form
End of explanation
!rm $local_tb_dir/checkpoint
!ls $local_tb_dir
from word2vec import visualize_embeddings_in_tensorboard
visualize_this_embedding = corpus_word_vectors[:tb_vocab_size]
print(visualize_this_embedding.shape)
# path for gcloud tensorboard
metadata_path = "/home/bicepjai/projects/tb_visual/vocab.tsv"
# metadata_path = "/home/bicepjai/Projects/ml-compete/kaggle/mskrct/data_prep_2_ft/model_wv_visualize/vocab.tsv"
visualize_embeddings_in_tensorboard(visualize_this_embedding, metadata_path, local_tb_dir)
checkpoint_txt = "model_checkpoint_path: \"/home/bicepjai/projects/tb_visual/visual_embed.ckpt-1\"\n\
all_model_checkpoint_paths: \"/home/bicepjai/projects/tb_visual/visual_embed.ckpt-1\""
with open(local_tb_dir+"/checkpoint","w") as f:
f.seek(0)
f.truncate()
f.write(checkpoint_txt)
Explanation: write to checkpoint file
End of explanation
fasttext_vec_file = "processed/stage2/pretrained_word_vectors/ft_sg_200d_10e.vec"
ft_lines = None
with open(fasttext_vec_file,"r") as f:
ft_lines = f.readlines()
print(ft_lines[0])
print(type(ft_lines), len(ft_lines))
ft_shape = tuple([int(i.strip()) for i in ft_lines[0].split()])
ft_shape
print(len(ft_lines[1].split()))
ft_lines[1]
ft_vocab_size=ft_shape[0]
ft_vocab_size
ft_word_vectors = np.random.randn(ft_vocab_size, ft_shape[1])
ft_words = []
for i, line in enumerate(ft_lines[1:]):
str_list =line.split()
ft_words.append(str_list[0].strip())
vec = np.array([np.float(f) for f in str_list[1:]])
ft_word_vectors[i] = vec
ft_word_vectors.shape
a = list(ft_words)
a.sort(key=len, reverse=True)
print(a[:10])
del a
ft_wordidx = {w:i for i,w in enumerate(ft_words)}
ft_vocab_size, len(ft_wordidx)
len(set(vocab_words) - set(ft_words))
set(vocab_words) - set(ft_words)
%autoreload
import global_utils
fasttext_vec_file="/home/bicepjai/Projects/dsotc/data_prep/processed/stage1/pretrained_word_vectors/ft_cbow_200d_20e.vec"
wvs = global_utils.get_corpus_wvs_from_ft(fasttext_vec_file, 200, vocab_words)
wvs.shape
Explanation: FastText Vectors
fasttext commands used
fasttext skipgram -minCount 1 -dim 200 -epoch 10 -input corpus_text_for_fast_text.txt -output ft_wvs_200d_10e
fasttext cbow -minCount 1 -dim 200 -epoch 10 -input corpus_text_for_fast_text.txt -output ft_wvs_200d_10e
reading ft vectors
End of explanation
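The helper global_utils.get_corpus_wvs_from_ft used in the surrounding cells is project-specific and not shown in this notebook; a minimal sketch of what such a helper might do (parse the .vec text format and keep only the corpus vocabulary) is given below, with the name suffixed _sketch to make clear it is only an assumption about the real implementation.
```python
def get_corpus_wvs_from_ft_sketch(vec_path, dim, vocab):
    wordidx = {w: i for i, w in enumerate(vocab)}
    wvs = np.random.randn(len(vocab), dim)   # random init for words missing from the .vec file
    with open(vec_path) as f:
        next(f)                              # header line: "<vocab_size> <dim>"
        for line in f:
            parts = line.rstrip().split(' ')
            word, vec = parts[0], parts[1:]
            if word in wordidx and len(vec) == dim:
                wvs[wordidx[word]] = np.array(vec, dtype=np.float32)
    return wvs
```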
%ll /home/bicepjai/Projects/dsotc/data_prep/processed/stage1/pretrained_word_vectors
len(vocab_words)
%autoreload
import global_utils
ft_vector_files = [
(100,"ft_cbow_100d_20e"),(200,"ft_cbow_200d_20e"),(200,"ft_cbow_300d_20e"),
(100,"ft_sg_100d_20e"),(200,"ft_sg_200d_20e"),(200,"ft_sg_300d_20e"),
(100,"ft_cbow_100d_50e"),(200,"ft_cbow_200d_50e"),(200,"ft_cbow_300d_50e"),
(100,"ft_sg_100d_50e"),(200,"ft_sg_200d_50e"),(200,"ft_sg_300d_50e"),
(100,"ft_cbow_100d_100e"),(200,"ft_cbow_200d_100e"),(200,"ft_cbow_300d_100e"),
(100,"ft_sg_100d_100e"),(200,"ft_sg_200d_100e"),(200,"ft_sg_300d_100e")
]
for dim_file_name in ft_vector_files:
file_path = "/home/bicepjai/Projects/dsotc/data_prep/processed/stage1/pretrained_word_vectors/"+dim_file_name[1]+".vec"
dim = dim_file_name[0]
if not os.path.exists(file_path):
print("file doesnt exist",file_path)
continue
ft_vec = global_utils.get_corpus_wvs_from_ft(file_path, dim, vocab_words)
    print(dim_file_name[1], ft_vec.shape)
np.save("processed/stage1/pretrained_word_vectors/"+dim_file_name[1]+".npy", ft_vec)
Explanation: saving all trained fast text vectors
End of explanation
%autoreload
import global_utils
WORD_EMB_SIZE=200
ft_file_path = "/home/bicepjai/Projects/Deep-Survey-Text-Classification/data_prep/processed/stage1/pretrained_word_vectors/ft_sg_200d_50e.vec"
trained_embeddings = global_utils.get_embeddings_from_ft(ft_file_path, WORD_EMB_SIZE, corpus_vocab_list)
trained_embeddings.shape
tb_vocab_size=5000
tb_vocab_biolab = list(trained_embeddings)[:tb_vocab_size]
with open("view_wvs_tb/tb_vocab.tsv", "w") as fp:
wr = csv.writer(fp, delimiter='\n')
wr.writerow(corpus_vocab_list)
tb_word_vectors = np.random.randn(tb_vocab_size, 200)
for i,word in enumerate(tb_vocab_biolab):
tb_word_vectors[i] = trained_embeddings[i]
%autoreload
from utils import visualize_embeddings_in_tensorboard
visualize_this_embedding = tb_word_vectors
print(visualize_this_embedding.shape)
metadata_path = "/home/bicepjai/Projects/Deep-Survey-Text-Classification/data_prep/view_wvs_tb/tb_vocab.tsv"
visualize_embeddings_in_tensorboard(visualize_this_embedding, metadata_path, "/home/bicepjai/Projects/Deep-Survey-Text-Classification/data_prep/view_wvs_tb")
Explanation: Viewing word vectors
End of explanation |
14,651 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CNTK 206 Part A
Step1: Select the notebook runtime environment devices / settings
Set the device to cpu / gpu for the test environment. If you have both CPU and GPU on your machine, you can optionally switch the devices. By default, we choose the best available device.
Step2: There are two run modes
Step3: Data Reading
The input to the GAN will be a vector of random numbers. At the end of the traning, the GAN "learns" to generate images of hand written digits drawn from the MNIST database. We will be using the same MNIST data generated in tutorial 103A. A more in-depth discussion of the data format and reading methods can be seen in previous tutorials. For our purposes, just know that the following function returns an object that will be used to generate images from the MNIST dataset. Since we are building an unsupervised model, we only need to read in features and ignore the labels.
Step4: The random noise we will use to train the GAN is provided by the noise_sample function to generate random noise samples from a uniform distribution within the interval [-1, 1].
Step5: Model Creation
A GAN network is composed of two sub-networks, one called the Generator ($G$) and the other Discriminator ($D$).
- The Generator takes random noise vector ($z$) as input and strives to output synthetic (fake) image ($x^$) that is indistinguishable from the real image ($x$) from the MNIST dataset.
- The Discriminator strives to differentiate between the real image ($x$) and the fake ($x^$) image.
Step6: In each training iteration, the Generator produces more realistic fake images (in other words minimizes the difference between the real and generated counterpart) and also the Discriminator maximizes the probability of assigning the correct label (real vs. fake) to both real examples (from training set) and the generated fake ones. The two conflicting objectives between the sub-networks ($G$ and $D$) leads to the GAN network (when trained) converge to an equilibrium, where the Generator produces realistic looking fake MNIST images and the Discriminator can at best randomly guess whether images are real or fake. The resulting Generator model once trained produces realistic MNIST image with the input being a random number.
Model config
First, we establish some of the architectural and training hyper-parameters for our model.
The generator network is a fully-connected network with a single hidden layer. The input will be a 10-dimensional random vector and the output will be a 784 dimensional vector, corresponding to a flattened version of a 28 x 28 fake image. The discriminator is also a single layer dense network. It takes as input the 784 dimensional output of the generator or a real MNIST image and outputs a single scalar - the estimated probability that the input image is a real MNIST image.
Model components
We build a computational graph for our model, one each for the generator and the discriminator. First, we establish some of the architectural parameters of our model.
The generator takes a 100-dimensional random vector (for starters) as input ($z$) and the outputs a 784 dimensional vector, corresponding to a flattened version of a 28 x 28 fake (synthetic) image ($x^*$). In this tutorial we simply model the generator with two dense layers. We use a tanh activation on the last layer to make sure that the output of the generator function is confined to the interval [-1, 1]. This is necessary because we also scale the MNIST images to this interval, and the outputs of the generator must be able to emulate the actual images as closely as possible.
The discriminator takes as input ($x^*$) the 784 dimensional output of the generator or a real MNIST image and outputs the estimated probability that the input image is a real MNIST image. We also model this with two dense layers with a sigmoid activation in the last layer ensuring that the discriminator produces a valid probability.
Step7: We use a minibatch size of 1024 and a fixed learning rate of 0.0005 for training. In the fast mode (isFast = True) we verify only functional correctness with 200 iterations.
Note
Step8: Build the graph
The rest of the computational graph is mostly responsible for coordinating the training algorithms and parameter updates, which is particularly tricky with GANs for couple reasons.
First, the discriminator must be used on both the real MNIST images and fake images generated by the generator function. One way to represent this in the computational graph is to create a clone of the output of the discriminator function, but with substituted inputs. Setting method=share in the clone function ensures that both paths through the discriminator model use the same set of parameters.
Second, we need to update the parameters for the generator and discriminator model separately using the gradients from different loss functions. We can get the parameters for a Function in the graph with the parameters attribute. However, when updating the model parameters, update only the parameters of the respective models while keeping the other parameters unchanged. In other words, when updating the generator we will update only the parameters of the $G$ function while keeping the parameters of the $D$ function fixed and vice versa.
Training the Model
The code for training the GAN very closely follows the algorithm as presented in the original NIPS 2014 paper. In this implementation, we train $D$ to maximize the probability of assigning the correct label (fake vs. real) to both training examples and the samples from $G$. In other words, $D$ and $G$ play the following two-player minimax game with the value function $V(G,D)$
Step9: With the value functions defined we proceed to interatively train the GAN model. The training of the model can take significnantly long depending on the hardware especiallly if isFast flag is turned off.
Step10: Generating Fake (Synthetic) Images
Now that we have trained the model, we can create fake images simply by feeding random noise into the generator and displaying the outputs. Below are a few images generated from random samples. To get a new set of samples, you can re-run the last cell.
Step11: Larger number of iterations should generate more realistic looking MNIST images. A sampling of such generated images are shown below. | Python Code:
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import os
import cntk as C
from cntk import Trainer
from cntk.device import try_set_default_device, gpu, cpu
from cntk.initializer import xavier
from cntk.io import (MinibatchSource, CTFDeserializer, StreamDef, StreamDefs,
INFINITELY_REPEAT)
from cntk.layers import Dense, default_options
from cntk.learners import (fsadagrad, UnitType, sgd, learning_rate_schedule,
momentum_as_time_constant_schedule)
from cntk.logging import ProgressPrinter
%matplotlib inline
Explanation: CNTK 206 Part A: Basic GAN with MNIST data
Prerequisites: We assume that you have successfully downloaded the MNIST data by completing the tutorial titled CNTK_103A_MNIST_DataLoader.ipynb.
Introduction
Generative models have gained a lot of attention in deep learning community which has traditionally leveraged discriminative models for (semi-supervised) and unsupervised learning. In generative modeling, the idea is to collect a huge amount of data in a domain of interest (e.g., pictures, audio, words) and come up with a trained model that generates such real world data sets. This is an active area of research needing mechanisms to scale up training and having large datasets. As stated in the OpenAI blog, such approaches may be used to perform computer aided art generation, or morph images to some word descriptions such as "make my smile wider". This approach has found use in image denoising, inpainting, super-resolution, structured prediction, exploration in reinforcement learning, and neural network pretraining in cases where labeled data is expensive.
Generating models that can produce realistic content (images, sounds etc.) mimicking real world observations is challenging. Generative Adversarial Network (GAN) is one of the approaches that holds promise. A quote from Yann LeCun summarizes GAN and its variations as the most important idea in the last 10 years. The original idea was proposed by Goodfellow et al at NIPS 2014. In this tutorial, we show how to use the Cognitive Toolkit to create a basic GAN network for generating synthetic MNIST digits.
End of explanation
# Select the right target device when this notebook is being tested:
if 'TEST_DEVICE' in os.environ:
import cntk
if os.environ['TEST_DEVICE'] == 'cpu':
C.device.try_set_default_device(C.device.cpu())
else:
C.device.try_set_default_device(C.device.gpu(0))
Explanation: Select the notebook runtime environment devices / settings
Set the device to cpu / gpu for the test environment. If you have both CPU and GPU on your machine, you can optionally switch the devices. By default, we choose the best available device.
End of explanation
isFast = True
Explanation: There are two run modes:
- Fast mode: isFast is set to True. This is the default mode for the notebooks, which means we train for fewer iterations or train / test on limited data. This ensures functional correctness of the notebook though the models produced are far from what a completed training would produce.
Slow mode: We recommend the user to set this flag to False once the user has gained familiarity with the notebook content and wants to gain insight from running the notebooks for a longer period with different parameters for training.
Note
If the isFast flag is set to False the notebook will take a few hours on a GPU enabled machine. You can try fewer iterations by setting the num_minibatches to a smaller number say 20,000 which comes at the expense of quality of the generated images.
End of explanation
# Ensure the training data is generated and available for this tutorial
# We search in two locations in the toolkit for the cached MNIST data set.
data_found = False
for data_dir in [os.path.join("..", "Examples", "Image", "DataSets", "MNIST"),
os.path.join("data", "MNIST")]:
train_file = os.path.join(data_dir, "Train-28x28_cntk_text.txt")
if os.path.isfile(train_file):
data_found = True
break
if not data_found:
raise ValueError("Please generate the data by completing CNTK 103 Part A")
print("Data directory is {0}".format(data_dir))
def create_reader(path, is_training, input_dim, label_dim):
deserializer = CTFDeserializer(
filename = path,
streams = StreamDefs(
labels_unused = StreamDef(field = 'labels', shape = label_dim, is_sparse = False),
features = StreamDef(field = 'features', shape = input_dim, is_sparse = False
)
)
)
return MinibatchSource(
deserializers = deserializer,
randomize = is_training,
max_sweeps = INFINITELY_REPEAT if is_training else 1
)
Explanation: Data Reading
The input to the GAN will be a vector of random numbers. At the end of the training, the GAN "learns" to generate images of hand written digits drawn from the MNIST database. We will be using the same MNIST data generated in tutorial 103A. A more in-depth discussion of the data format and reading methods can be seen in previous tutorials. For our purposes, just know that the following function returns an object that will be used to generate images from the MNIST dataset. Since we are building an unsupervised model, we only need to read in features and ignore the labels.
End of explanation
np.random.seed(123)
def noise_sample(num_samples):
return np.random.uniform(
low = -1.0,
high = 1.0,
size = [num_samples, g_input_dim]
).astype(np.float32)
Explanation: The random noise we will use to train the GAN is provided by the noise_sample function to generate random noise samples from a uniform distribution within the interval [-1, 1].
End of explanation
# Figure 1
Image(url="https://www.cntk.ai/jup/GAN_basic_flow.png")
Explanation: Model Creation
A GAN network is composed of two sub-networks, one called the Generator ($G$) and the other Discriminator ($D$).
- The Generator takes a random noise vector ($z$) as input and strives to output a synthetic (fake) image ($x^*$) that is indistinguishable from the real image ($x$) from the MNIST dataset.
- The Discriminator strives to differentiate between the real image ($x$) and the fake ($x^*$) image.
End of explanation
# architectural parameters
g_input_dim = 100
g_hidden_dim = 128
g_output_dim = d_input_dim = 784
d_hidden_dim = 128
d_output_dim = 1
def generator(z):
with default_options(init = xavier()):
h1 = Dense(g_hidden_dim, activation = C.relu)(z)
return Dense(g_output_dim, activation = C.tanh)(h1)
def discriminator(x):
with default_options(init = xavier()):
h1 = Dense(d_hidden_dim, activation = C.relu)(x)
return Dense(d_output_dim, activation = C.sigmoid)(h1)
Explanation: In each training iteration, the Generator produces more realistic fake images (in other words minimizes the difference between the real and generated counterpart) and also the Discriminator maximizes the probability of assigning the correct label (real vs. fake) to both real examples (from training set) and the generated fake ones. The two conflicting objectives between the sub-networks ($G$ and $D$) leads to the GAN network (when trained) converge to an equilibrium, where the Generator produces realistic looking fake MNIST images and the Discriminator can at best randomly guess whether images are real or fake. The resulting Generator model once trained produces realistic MNIST image with the input being a random number.
Model config
First, we establish some of the architectural and training hyper-parameters for our model.
The generator network is a fully-connected network with a single hidden layer. The input will be a 100-dimensional random vector and the output will be a 784 dimensional vector, corresponding to a flattened version of a 28 x 28 fake image. The discriminator is also a single layer dense network. It takes as input the 784 dimensional output of the generator or a real MNIST image and outputs a single scalar - the estimated probability that the input image is a real MNIST image.
Model components
We build a computational graph for our model, one each for the generator and the discriminator. First, we establish some of the architectural parameters of our model.
The generator takes a 100-dimensional random vector (for starters) as input ($z$) and the outputs a 784 dimensional vector, corresponding to a flattened version of a 28 x 28 fake (synthetic) image ($x^*$). In this tutorial we simply model the generator with two dense layers. We use a tanh activation on the last layer to make sure that the output of the generator function is confined to the interval [-1, 1]. This is necessary because we also scale the MNIST images to this interval, and the outputs of the generator must be able to emulate the actual images as closely as possible.
The discriminator takes as input ($x^*$) the 784 dimensional output of the generator or a real MNIST image and outputs the estimated probability that the input image is a real MNIST image. We also model this with two dense layers with a sigmoid activation in the last layer ensuring that the discriminator produces a valid probability.
End of explanation
# training config
minibatch_size = 1024
num_minibatches = 300 if isFast else 40000
lr = 0.00005
Explanation: We use a minibatch size of 1024 and a fixed learning rate of 0.0005 for training. In the fast mode (isFast = True) we verify only functional correctness with 200 iterations.
Note: In the slow mode, the results look a lot better but it requires patient waiting (few hours) depending on your hardware. In general, the more number of minibatches one trains, the better is the fidelity of the generated images.
End of explanation
# Figure 2
Image(url="https://www.cntk.ai/jup/GAN_goodfellow_NIPS2014.png", width = 500)
def build_graph(noise_shape, image_shape,
G_progress_printer, D_progress_printer):
input_dynamic_axes = [C.Axis.default_batch_axis()]
Z = C.input(noise_shape, dynamic_axes=input_dynamic_axes)
X_real = C.input(image_shape, dynamic_axes=input_dynamic_axes)
X_real_scaled = 2*(X_real / 255.0) - 1.0
# Create the model function for the generator and discriminator models
X_fake = generator(Z)
D_real = discriminator(X_real_scaled)
D_fake = D_real.clone(
method = 'share',
substitutions = {X_real_scaled.output: X_fake.output}
)
# Create loss functions and configure optimazation algorithms
G_loss = 1.0 - C.log(D_fake)
D_loss = -(C.log(D_real) + C.log(1.0 - D_fake))
G_learner = fsadagrad(
parameters = X_fake.parameters,
lr = learning_rate_schedule(lr, UnitType.sample),
momentum = momentum_as_time_constant_schedule(700)
)
D_learner = fsadagrad(
parameters = D_real.parameters,
lr = learning_rate_schedule(lr, UnitType.sample),
momentum = momentum_as_time_constant_schedule(700)
)
# Instantiate the trainers
G_trainer = Trainer(
X_fake,
(G_loss, None),
G_learner,
G_progress_printer
)
D_trainer = Trainer(
D_real,
(D_loss, None),
D_learner,
D_progress_printer
)
return X_real, X_fake, Z, G_trainer, D_trainer
Explanation: Build the graph
The rest of the computational graph is mostly responsible for coordinating the training algorithms and parameter updates, which is particularly tricky with GANs for couple reasons.
First, the discriminator must be used on both the real MNIST images and fake images generated by the generator function. One way to represent this in the computational graph is to create a clone of the output of the discriminator function, but with substituted inputs. Setting method=share in the clone function ensures that both paths through the discriminator model use the same set of parameters.
Second, we need to update the parameters for the generator and discriminator model separately using the gradients from different loss functions. We can get the parameters for a Function in the graph with the parameters attribute. However, when updating the model parameters, update only the parameters of the respective models while keeping the other parameters unchanged. In other words, when updating the generator we will update only the parameters of the $G$ function while keeping the parameters of the $D$ function fixed and vice versa.
Training the Model
The code for training the GAN very closely follows the algorithm as presented in the original NIPS 2014 paper. In this implementation, we train $D$ to maximize the probability of assigning the correct label (fake vs. real) to both training examples and the samples from $G$. In other words, $D$ and $G$ play the following two-player minimax game with the value function $V(G,D)$:
$$
\min_G \max_D V(D,G)= \mathbb{E}_{x}[ \log D(x) ] + \mathbb{E}_{z}[ \log(1 - D(G(z))) ]
$$
At the optimal point of this game the generator will produce realistic looking data while the discriminator will predict that the generated image is indeed fake with a probability of 0.5. The algorithm referred below is implemented in this tutorial.
End of explanation
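A tiny numerical illustration of this value function (not part of the original tutorial), using the numpy already imported above: when the discriminator outputs 0.5 for both real and generated samples, the value settles at -log 4, the theoretical equilibrium of the minimax game.
```python
def gan_value(d_real, d_fake):
    # V(D, G) for one pair of discriminator outputs
    return np.log(d_real) + np.log(1.0 - d_fake)

print(gan_value(0.9, 0.1))   # a confident discriminator: value close to 0
print(gan_value(0.5, 0.5))   # at the equilibrium: -log(4), about -1.386
```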
def train(reader_train):
k = 2
# print out loss for each model for upto 50 times
print_frequency_mbsize = num_minibatches // 50
pp_G = ProgressPrinter(print_frequency_mbsize)
pp_D = ProgressPrinter(print_frequency_mbsize * k)
X_real, X_fake, Z, G_trainer, D_trainer = \
build_graph(g_input_dim, d_input_dim, pp_G, pp_D)
input_map = {X_real: reader_train.streams.features}
for train_step in range(num_minibatches):
# train the discriminator model for k steps
for gen_train_step in range(k):
Z_data = noise_sample(minibatch_size)
X_data = reader_train.next_minibatch(minibatch_size, input_map)
if X_data[X_real].num_samples == Z_data.shape[0]:
batch_inputs = {X_real: X_data[X_real].data,
Z: Z_data}
D_trainer.train_minibatch(batch_inputs)
# train the generator model for a single step
Z_data = noise_sample(minibatch_size)
batch_inputs = {Z: Z_data}
G_trainer.train_minibatch(batch_inputs)
G_trainer_loss = G_trainer.previous_minibatch_loss_average
return Z, X_fake, G_trainer_loss
reader_train = create_reader(train_file, True, d_input_dim, label_dim=10)
G_input, G_output, G_trainer_loss = train(reader_train)
# Print the generator loss
print("Training loss of the generator is: {0:.2f}".format(G_trainer_loss))
Explanation: With the value functions defined we proceed to iteratively train the GAN model. The training of the model can take significantly long depending on the hardware, especially if the isFast flag is turned off.
End of explanation
def plot_images(images, subplot_shape):
plt.style.use('ggplot')
fig, axes = plt.subplots(*subplot_shape)
for image, ax in zip(images, axes.flatten()):
ax.imshow(image.reshape(28, 28), vmin = 0, vmax = 1.0, cmap = 'gray')
ax.axis('off')
plt.show()
noise = noise_sample(36)
images = G_output.eval({G_input: noise})
plot_images(images, subplot_shape =[6, 6])
Explanation: Generating Fake (Synthetic) Images
Now that we have trained the model, we can create fake images simply by feeding random noise into the generator and displaying the outputs. Below are a few images generated from random samples. To get a new set of samples, you can re-run the last cell.
End of explanation
# Figure 3
Image(url="http://www.cntk.ai/jup/GAN_basic_slowmode.jpg")
Explanation: Larger number of iterations should generate more realistic looking MNIST images. A sampling of such generated images are shown below.
End of explanation |
14,652 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python
But the undisputed star of Jupyter is Python, which is slowly becoming the de facto language for data analysis, gradually displacing R, SAS, Matlab...
What matters is not the languages themselves, but the enormous ecosystem of tools that has appeared thanks to Python's openness and ease of use.
Jake VanderPlas maintains a collection of very interesting notebooks
Step1: Putting NFQ on a map
How hard can it be to draw an interactive map of Madrid and place the geolocation of the NFQ offices on it?
Step2: Reading an Excel file
There is another tool that also mixes data, logic and presentation | Python Code:
import pandas as pd
import numpy as np
from sklearn import linear_model
from matplotlib import pylab as plt
plt.style.use('bmh')
%matplotlib notebook
wine = pd.read_csv('data/winequality-white.csv',delimiter=';')
wine.describe()
fig = plt.figure(2)
ax = [fig.add_subplot(3,4,i) for i in range(1,12)]
models = [linear_model.LinearRegression() for i in range(11)]
for column, model in zip(wine.columns, models):
model.fit(wine['quality'].reshape(-1,1),
wine[column].as_matrix().reshape(-1,1))
for qual, group in wine.groupby('quality'):
for column, axis in zip(group.columns, ax):
axis.plot(qual, group[column].mean(), 'ob')
axis.set_title(column + ' (avg)', fontsize=10)
qual = np.arange(3,10)
for model, axi in zip(models, ax):
axi.plot(qual, model.coef_[0][0]*qual + model.intercept_,
'r--', linewidth=4, label='Regression')
axi.legend(fontsize=6)
fig.tight_layout()
Explanation: Python
But the undisputed star of Jupyter is Python, which is slowly becoming the de facto language for data analysis, gradually displacing R, SAS, Matlab...
What matters is not the languages themselves, but the enormous ecosystem of tools that has appeared thanks to Python's openness and ease of use.
Jake VanderPlas maintains a collection of very interesting notebooks:
For learning Python
For learning Machine Learning
Can we predict whether a white wine will be good?
There is a database of chemical properties of wines and their tasting scores, which comes from this paper.
P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009.
It covers more than 4000 Portuguese "vinho verde" wines. The goal is to understand the data and provide a simple visual guide to the properties a good white wine should have.
End of explanation
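One optional extra way to summarize the same question (which chemical properties track quality) is a plain correlation; a small sketch, not in the original notebook:
```python
# correlation of every chemical property with the quality score
print(wine.corr()['quality'].sort_values(ascending=False))
```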
import folium
madrid = folium.Map(location=[40.429857, -3.685812], tiles="Stamen toner",
zoom_start=15)
nfqsolutions = folium.Marker([40.429857, -3.685812], popup='NFQ Solutions')
madrid.add_children(nfqsolutions)
madrid.save('madrid.html')
madrid
Explanation: Putting NFQ on a map
How hard can it be to draw an interactive map of Madrid and place the geolocation of the NFQ offices on it?
End of explanation
with pd.ExcelFile('./data/winequality-white.xls') as xls:
wines = pd.read_excel(xls, 'Sheet1')
wines.describe()
Explanation: Reading an Excel file
There is another tool that also mixes data, logic and presentation: a spreadsheet. But Excel (the most common spreadsheet) does not scale with the data.
Poor memory efficiency.
Use of data that cannot be laid out as tables.
Simplicity and appearance vs. efficiency and performance.
The xlsx format is the most successful vendor lock-in strategy in history.
The practical definition of Big Data is anything too big to be selected with the mouse in an Excel sheet.
End of explanation |
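pandas can also write DataFrames back to Excel; a small sketch (the output path is just an illustrative choice, and it assumes an Excel writer engine such as openpyxl is installed):
```python
with pd.ExcelWriter('./data/wines_summary.xlsx') as writer:
    wines.describe().to_excel(writer, sheet_name='summary')
```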
14,653 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Extension types
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step5: Extension types
User-defined types can make projects more readable, modular, maintainable. However, most TensorFlow APIs have very limited support for user-defined Python types. This includes both high-level APIs (such as Keras, tf.function, tf.SavedModel) and lower-level APIs (such as tf.while_loop and tf.concat). TensorFlow extension types can be used to create user-defined object-oriented types that work seamlessly with TensorFlow's APIs. To create an extension type, simply define a Python class with tf.experimental.ExtensionType as its base, and use type annotations to specify the type for each field.
Step6: The tf.experimental.ExtensionType base class works similarly to typing.NamedTuple and @dataclasses.dataclass from the standard Python library. In particular, it automatically adds a constructor and special methods (such as __repr__ and __eq__) based on the field type annotations.
Typically, extension types tend to fall into one of two categories
Step7: Functionality added by ExtensionType
The ExtensionType base class provides the following functionality
Step8: The constructor raises an TypeError if a field value can not be converted to its declared type
Step9: The default value for a field can be specified by setting its value at the class level
Step10: Printable representation
ExtensionType adds a default printable representation method (__repr__) that includes the class name and the value for each field
Step11: Equality operators
ExtensionType adds default equality operators (__eq__ and __ne__) that consider two values equal if they have the same type and all their fields are equal. Tensor fields are considered equal if they have the same shape and are elementwise equal for all elements.
Step13: Note
Step14: Enforced immutability
ExtensionType overrides the __setattr__ and __delattr__ methods to prevent mutation, ensuring that extension type values are immutable.
Step15: Nested TypeSpec
Each ExtensionType class has a corresponding TypeSpec class, which is created automatically and stored as <extension_type_name>.Spec.
This class captures all the information from a value except for the values of any nested tensors. In particular, the TypeSpec for a value is created by replacing any nested Tensor, ExtensionType, or CompositeTensor with its TypeSpec.
Step16: TypeSpec values can be constructed explicitly, or they can be built from an ExtensionType value using tf.type_spec_from_value
Step17: TypeSpecs are used by TensorFlow to divide values into a static component and a dynamic component
Step19: For more information, see the tf.function Guide.
Customizing ExtensionTypes
In addition to simply declaring fields and their types, extension types may
Step20: Defining methods
Extension types may define methods, just like any normal Python class. For example, the MaskedTensor type could define a with_default method that returns a copy of self with masked values replaced by a given default value. Methods may optionally be annotated with the @tf.function decorator.
Step21: Defining classmethods and staticmethods
Extension types may define methods using the @classmethod and @staticmethod decorators. For example, the MaskedTensor type could define a factory method that masks any element with a given value
Step22: Defining properties
Extension types may define properties using the @property decorator, just like any normal Python class. For example, the MaskedTensor type could define a dtype property that's a shorthand for the dtype of the values
Step23: Overriding the default constructor
You can override the default constructor for extension types. Custom constructors must set a value for every declared field; and after the custom constructor returns, all fields will be type-checked, and values will be converted as described above.
Step24: Alternatively, you might consider leaving the default constructor as-is, but adding one or more factory methods. E.g.
Step25: Overriding the default equality operator (__eq__)
You can override the default __eq__ operator for extension types. The follow example updates MaskedTensor to ignore masked elements when comparing for equality.
Step26: Note
Step27: Defining subclasses
Extension types may be subclassed using the standard Python syntax. Extension type subclasses may add new fields, methods, and properties; and may override the constructor, the printable representation, and the equality operator. The following example defines a basic TensorGraph class that uses three Tensor fields to encode a set of edges between nodes. It then defines a subclass that adds a Tensor field to record a "feature value" for each node. The subclass also defines a method to propagage the feature values along the edges.
Step28: Defining private fields
An extension type's fields may be marked private by prefixing them with an underscore (following standard Python conventions). This does not impact the way that TensorFlow treats the fields in any way; but simply serves as a signal to any users of the extension type that those fields are private.
Customizing the ExtensionType's TypeSpec
Each ExtensionType class has a corresponding TypeSpec class, which is created automatically and stored as <extension_type_name>.Spec. For more information, see the section "Nested TypeSpec" above.
To customize the TypeSpec, simply define your own nested class named Spec, and ExtensionType will use that as the basis for the automatically constructed TypeSpec. You can customize the Spec class by
Step29: Note
Step30: This overrides the default implementation for tf.stack whenever it is called with a list of MaskedTensor values (since the values argument is annotated with typing.List[MaskedTensor])
Step31: To allow tf.stack to handle lists of mixed MaskedTensor and Tensor values, you can refine the type annotation for the values parameter and update the body of the function appropriately
Step32: For a list of APIs that can be overridden, see the API documentation for tf.experimental.dispatch_for_api.
Dispatch for all unary elementwise APIs
The tf.experimental.dispatch_for_unary_elementwise_apis decorator overrides the default behavior of all unary elementwise ops (such as tf.math.cos) whenever the value for the first argument (typically named x) matches the type annotation x_type. The decorated function should take two arguments
Step33: This function will now be used whenever a unary elementwise operation is called on a MaskedTensor.
Step34: Dispatch for binary all elementwise APIs
Similarly, tf.experimental.dispatch_for_binary_elementwise_apis can be used to update all binary elementwise operations to handle the MaskedTensor type
Step35: For a list of the elementwise APIs that are overridden, see the API documentation for tf.experimental.dispatch_for_unary_elementwise_apis and tf.experimental.dispatch_for_binary_elementwise_apis.
Batchable ExtensionTypes
An ExtensionType is batchable if a single instance can be used to represent a batch of values. Typically, this is accomplished by adding batch dimensions to all nested Tensors. The following TensorFlow APIs require that any extension type inputs be batchable
Step36: To make this type batchable, change the base type to BatchableExtensionType, and adjust the shape of each field to include optional batch dimensions. The following example also adds a shape field to keept track of the batch shape. This shape field is not required by tf.data.Dataset or tf.map_fn, but it is required by tf.Keras.
Step37: You can then use tf.data.Dataset to iterate through a batch of networks
Step38: And you can also use map_fn to apply a function to each batch element
Step39: TensorFlow APIs that support ExtensionTypes
@tf.function
tf.function is a decorator that precomputes TensorFlow graphs for Python functions, which can substantially improve the performance of your TensorFlow code. Extension type values can be used transparently with @tf.function-decorated functions.
Step40: If you wish to explicitly specify the input_signature for tf.function, then you can do so using the extension type's TypeSpec.
Step41: Concrete functions
Concrete functions encapsulate individual traced graphs that are built by tf.function. Extension types can be used transparently with concrete functions.
Step42: Control flow operations
Extension types are supported by TensorFlow's control-flow operations
Step43: Autograph control flow
Extension types are also supported by control flow statements in tf.function (using autograph). In the following example, the if statement and for statements are automatically converted to tf.cond and tf.while_loop operations, which support extension types.
Step44: Keras
tf.keras is TensorFlow's high-level API for building and training deep learning models. Extension types may be passed as inputs to a Keras model, passed between Keras layers, and returned by Keras models. Keras currently puts two requirements on extension types
Step46: You can define a new Keras layer that processes Networks.
Step47: You can then use this layers to create a simple model. To feed an ExtensionType into a model, you can use a tf.keras.layer.Input layer with type_spec set to the extension type's TypeSpec. If the Keras model will be used to process batches, then the type_spec must include the batch dimension.
Step48: Finally, you can apply the model to a single network and to a batch of networks.
Step49: Keras example
Step50: Next, the dispatch decorators are used to override the default behavior of several TensorFlow APIs. Since these APIs are used by standard Keras layers (such as the Dense layer), overriding these will allow us to use those layers with MaskedTensor. For the purposes of this example, matmul for masked tensors is defined to treat the masked values as zeros (i.e., to not include them in the product).
Step51: You can then construct a Keras model that accepts MaskedTensor inputs, using standard Keras layers
Step52: SavedModel
A SavedModel is a serialized TensorFlow program, including both weights and computation. It can be built from a Keras model or from a custom model. In either case, extension types can be used transparently with the functions and methods defined by a SavedModel.
SavedModel can save models, layers, and functions that process extension types, as long as the extension types have a __name__ field. This name is used to register the extension type, so it can be located when the model is loaded.
Example
Step54: Example
Step57: Loading a SavedModel when the ExtensionType is unavailable
If you load a SavedModel that uses an ExtensionType, but that ExtensionType is not available (i.e., has not been imported), then you will see a warning and TensorFlow will fall back to using an "anonymous extension type" object. This object will have the same fields as the original type, but will lack any further customization you have added for the type, such as custom methods or properties.
Using ExtensionTypes with TensorFlow serving
Currently, TensorFlow serving (and other consumers of the SavedModel "signatures" dictionary) require that all inputs and outputs be raw tensors. If you wish to use TensorFlow serving with a model that uses extension types, then you can add wrapper methods that compose or decompose extension type values from tensors. E.g.
Step58: Datasets
tf.data is an API that enables you to build complex input pipelines from simple, reusable pieces. Its core data structure is tf.data.Dataset, which represents a sequence of elements, in which each element consists of one or more components.
Building Datasets with extension types
Datasets can be built from extension type values using Dataset.from_tensors, Dataset.from_tensor_slices, or Dataset.from_generator
Step59: Batching and unbatching Datasets with extension types
Datasets with extension types can be batchand and unbatched using Dataset.batch adn Dataset.unbatch. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
!pip install -q tf_nightly
import tensorflow as tf
import numpy as np
from typing import Tuple, List, Mapping, Union, Optional
import tempfile
Explanation: Extension types
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/extension_type"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/extension_type.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/extension_type.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/extension_type.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Setup
End of explanation
class TensorGraph(tf.experimental.ExtensionType):
A collection of labeled nodes connected by weighted edges.
edge_weights: tf.Tensor # shape=[num_nodes, num_nodes]
node_labels: Mapping[str, tf.Tensor] # shape=[num_nodes]; dtype=any
class MaskedTensor(tf.experimental.ExtensionType):
A tensor paired with a boolean mask, indicating which values are valid.
values: tf.Tensor
mask: tf.Tensor # shape=values.shape; false for missing/invalid values.
class CSRSparseMatrix(tf.experimental.ExtensionType):
Compressed sparse row matrix (https://en.wikipedia.org/wiki/Sparse_matrix).
values: tf.Tensor # shape=[num_nonzero]; dtype=any
col_index: tf.Tensor # shape=[num_nonzero]; dtype=int64
row_index: tf.Tensor # shape=[num_rows+1]; dtype=int64
Explanation: Extension types
User-defined types can make projects more readable, modular, maintainable. However, most TensorFlow APIs have very limited support for user-defined Python types. This includes both high-level APIs (such as Keras, tf.function, tf.SavedModel) and lower-level APIs (such as tf.while_loop and tf.concat). TensorFlow extension types can be used to create user-defined object-oriented types that work seamlessly with TensorFlow's APIs. To create an extension type, simply define a Python class with tf.experimental.ExtensionType as its base, and use type annotations to specify the type for each field.
End of explanation
class MaskedTensor(tf.experimental.ExtensionType):
values: tf.Tensor
mask: tf.Tensor
def replace_mask(self, new_mask):
self.values.shape.assert_is_compatible_with(new_mask.shape)
return MaskedTensor(self.values, new_mask)
Explanation: The tf.experimental.ExtensionType base class works similarly to typing.NamedTuple and @dataclasses.dataclass from the standard Python library. In particular, it automatically adds a constructor and special methods (such as __repr__ and __eq__) based on the field type annotations.
Typically, extension types tend to fall into one of two categories:
Data structures, which group together a collection of related values, and can provide useful operations based on those values. Data structures may be fairly general (such as the TensorGraph example above); or they may be highly customized to a specific model.
Tensor-like types, which specialize or extend the concept of "Tensor." Types in this category have a rank, a shape, and usually a dtype; and it makes sense to use them with Tensor operations (such as tf.stack, tf.add, or tf.matmul). MaskedTensor and CSRSparseMatrix are examples of tensor-like types.
Supported APIs
Extension types are supported by the following TensorFlow APIs:
Keras: Extension types can be used as inputs and outputs for Keras Models and Layers.
tf.data.Dataset: Extension types can be included in Datasets, and returned by dataset Iterators.
Tensorflow hub: Extension types can be used as inputs and outputs for tf.hub modules.
SavedModel: Extension types can be used as inputs and outputs for SavedModel functions.
tf.function: Extension types can be used as arguments and return values for functions wrapped with the @tf.function decorator.
while loops: Extension types can be used as loop variables in tf.while_loop, and can be used as arguments and return values for the while-loop's body.
conditionals: Extension types can be conditionally selected using tf.cond and tf.case.
py_function: Extension types can be used as arguments and return values for the func argument to tf.py_function.
Tensor ops: Extension types can be extended to support most TensorFlow ops that accept Tensor inputs (e.g., tf.matmul, tf.gather, and tf.reduce_sum). See the "Dispatch" section below for more information.
distribution strategy: Extension types can be used as per-replica values.
For more details, see the section on "TensorFlow APIs that support ExtensionTypes" below.
Requirements
Field types
All fields (aka instance variables) must be declared, and a type annotation must be provided for each field. The following type annotations are supported:
Type | Example
---- | -------
Python integers | i: int
Python floats | f: float
Python strings | s: str
Python booleans | b: bool
Python None | n: None
Tensor shapes | shape: tf.TensorShape
Tensor dtypes | dtype: tf.DType
Tensors | t: tf.Tensor
Extension types | mt: MyMaskedTensor
Ragged Tensors | rt: tf.RaggedTensor
Sparse Tensors | st: tf.SparseTensor
Indexed Slices | s: tf.IndexedSlices
Optional Tensors | o: tf.experimental.Optional
Type unions | int_or_float: typing.Union[int, float]
Tuples | params: typing.Tuple[int, float, tf.Tensor, int]
Var-length tuples | lengths: typing.Tuple[int, ...]
Mappings | tags: typing.Mapping[str, tf.Tensor]
Optional values | weight: typing.Optional[tf.Tensor]
Mutability
Extension types are required to be immutable. This ensures that they can be properly tracked by TensorFlow's graph-tracing mechanisms.
If you find yourself wanting to mutate an extension type value, consider instead defining methods that transform values. For example, rather than defining a set_mask method to mutate a MaskedTensor, you could define a replace_mask method that returns a new MaskedTensor:
End of explanation
class MaskedTensor(tf.experimental.ExtensionType):
values: tf.Tensor
mask: tf.Tensor
# Constructor takes one parameter for each field.
mt = MaskedTensor(values=[[1, 2, 3], [4, 5, 6]],
mask=[[True, True, False], [True, False, True]])
# Fields are type-checked and converted to the declared types.
# E.g., mt.values is converted to a Tensor.
print(mt.values)
Explanation: Functionality added by ExtensionType
The ExtensionType base class provides the following functionality:
A constructor (__init__).
A printable representation method (__repr__).
Equality and inequality operators (__eq__).
A validation method (__validate__).
Enforced immutability.
A nested TypeSpec.
Tensor API dispatch support.
See the "Customizing ExtensionTypes" section below for more information on customizing this functionality.
Constructor
The constructor added by ExtensionType takes each field as a named argument (in the order they were listed in the class definition). This constructor will type-check each parameter, and convert them where necessary. In particular, Tensor fields are converted using tf.convert_to_tensor; Tuple fields are converted to tuples; and Mapping fields are converted to immutable dicts.
End of explanation
try:
MaskedTensor([1, 2, 3], None)
except TypeError as e:
print(f"Got expected TypeError: {e}")
Explanation: The constructor raises a TypeError if a field value cannot be converted to its declared type:
End of explanation
class Pencil(tf.experimental.ExtensionType):
color: str = "black"
has_erasor: bool = True
length: tf.Tensor = 1.0
Pencil()
Pencil(length=0.5, color="blue")
Explanation: The default value for a field can be specified by setting its value at the class level:
End of explanation
print(MaskedTensor(values=[1, 2, 3], mask=[True, True, False]))
Explanation: Printable representation
ExtensionType adds a default printable representation method (__repr__) that includes the class name and the value for each field:
End of explanation
a = MaskedTensor([1, 2], [True, False])
b = MaskedTensor([[3, 4], [5, 6]], [[False, True], [True, True]])
print(f"a == a: {a==a}")
print(f"a == b: {a==b}")
print(f"a == a.values: {a==a.values}")
Explanation: Equality operators
ExtensionType adds default equality operators (__eq__ and __ne__) that consider two values equal if they have the same type and all their fields are equal. Tensor fields are considered equal if they have the same shape and are elementwise equal for all elements.
End of explanation
class MaskedTensor(tf.experimental.ExtensionType):
A tensor paired with a boolean mask, indicating which values are valid.
values: tf.Tensor
mask: tf.Tensor
def __validate__(self):
self.values.shape.assert_is_compatible_with(self.mask.shape)
assert self.mask.dtype.is_bool, 'mask.dtype must be bool'
try:
MaskedTensor([1, 2, 3], [0, 1, 0]) # wrong dtype for mask.
except AssertionError as e:
print(f"Got expected AssertionError: {e}")
try:
MaskedTensor([1, 2, 3], [True, False]) # shapes don't match.
except ValueError as e:
print(f"Got expected ValueError: {e}")
Explanation: Note: if any field contains a Tensor, then __eq__ may return a scalar boolean Tensor (rather than a Python boolean value).
Validation method
ExtensionType adds a __validate__ method, which can be overridden to perform validation checks on fields. It is run after the constructor is called, and after fields have been type-checked and converted to their declared types, so it can assume that all fields have their declared types.
The following example updates MaskedTensor to validate the shapes and dtypes of its fields:
End of explanation
mt = MaskedTensor([1, 2, 3], [True, False, True])
try:
mt.mask = [True, True, True]
except AttributeError as e:
print(f"Got expected AttributeError: {e}")
try:
mt.mask[0] = False
except TypeError as e:
print(f"Got expected TypeError: {e}")
try:
del mt.mask
except AttributeError as e:
print(f"Got expected AttributeError: {e}")
Explanation: Enforced immutability
ExtensionType overrides the __setattr__ and __delattr__ methods to prevent mutation, ensuring that extension type values are immutable.
End of explanation
class Player(tf.experimental.ExtensionType):
name: tf.Tensor
attributes: Mapping[str, tf.Tensor]
anne = Player("Anne", {"height": 8.3, "speed": 28.1})
anne_spec = tf.type_spec_from_value(anne)
print(anne_spec.name) # Records dtype and shape, but not the string value.
print(anne_spec.attributes) # Records keys and TensorSpecs for values.
Explanation: Nested TypeSpec
Each ExtensionType class has a corresponding TypeSpec class, which is created automatically and stored as <extension_type_name>.Spec.
This class captures all the information from a value except for the values of any nested tensors. In particular, the TypeSpec for a value is created by replacing any nested Tensor, ExtensionType, or CompositeTensor with its TypeSpec.
End of explanation
spec1 = Player.Spec(name=tf.TensorSpec([], tf.float32), attributes={})
spec2 = tf.type_spec_from_value(anne)
Explanation: TypeSpec values can be constructed explicitly, or they can be built from an ExtensionType value using tf.type_spec_from_value:
End of explanation
@tf.function
def anonymize_player(player):
print("<<TRACING>>")
return Player("<anonymous>", player.attributes)
# Function gets traced (first time the function has been called):
anonymize_player(Player("Anne", {"height": 8.3, "speed": 28.1}))
# Function does NOT get traced (same TypeSpec: just tensor values changed)
anonymize_player(Player("Bart", {"height": 8.1, "speed": 25.3}))
# Function gets traced (new TypeSpec: keys for attributes changed):
anonymize_player(Player("Chuck", {"height": 11.0, "jump": 5.3}))
Explanation: TypeSpecs are used by TensorFlow to divide values into a static component and a dynamic component:
The static component (which is fixed at graph-construction time) is encoded with a tf.TypeSpec.
The dynamic component (which can vary each time the graph is run) is encoded as a list of tf.Tensors.
For example, tf.function retraces its wrapped function whenever an argument has a previously unseen TypeSpec:
End of explanation
class MaskedTensor(tf.experimental.ExtensionType):
A tensor paired with a boolean mask, indicating which values are valid.
values: tf.Tensor
mask: tf.Tensor # shape=values.shape; false for invalid values.
def __repr__(self):
return masked_tensor_str(self.values, self.mask)
def masked_tensor_str(values, mask):
if isinstance(values, tf.Tensor):
if hasattr(values, 'numpy') and hasattr(mask, 'numpy'):
return f'<MaskedTensor {masked_tensor_str(values.numpy(), mask.numpy())}>'
else:
return f'MaskedTensor(values={values}, mask={mask})'
if len(values.shape) == 1:
items = [repr(v) if m else '_' for (v, m) in zip(values, mask)]
else:
items = [masked_tensor_str(v, m) for (v, m) in zip(values, mask)]
return '[%s]' % ', '.join(items)
mt = MaskedTensor(values=[[1, 2, 3], [4, 5, 6]],
mask=[[True, True, False], [True, False, True]])
print(mt)
Explanation: For more information, see the tf.function Guide.
Customizing ExtensionTypes
In addition to simply declaring fields and their types, extension types may:
Override the default printable representation (__repr__).
Define methods.
Define classmethods and staticmethods.
Define properties.
Override the default constructor (__init__).
Override the default equality operator (__eq__).
Define operators (such as __add__ and __lt__).
Declare default values for fields.
Define subclasses.
Overriding the default printable representation
You can override this default string conversion operator for extension types. The following example updates the MaskedTensor class to generate a more readable string representation when values are printed in Eager mode.
End of explanation
class MaskedTensor(tf.experimental.ExtensionType):
values: tf.Tensor
mask: tf.Tensor
def with_default(self, default):
return tf.where(self.mask, self.values, default)
MaskedTensor([1, 2, 3], [True, False, True]).with_default(0)
Explanation: Defining methods
Extension types may define methods, just like any normal Python class. For example, the MaskedTensor type could define a with_default method that returns a copy of self with masked values replaced by a given default value. Methods may optionally be annotated with the @tf.function decorator.
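The optional @tf.function annotation mentioned above looks like the sketch below (reusing the same tf import as the rest of the guide); apart from the decorator it simply repeats the class definition, so treat it as illustrative rather than required:
class MaskedTensor(tf.experimental.ExtensionType):
  values: tf.Tensor
  mask: tf.Tensor

  @tf.function
  def with_default(self, default):
    # Same behavior as above, but traced into a graph on the first call.
    return tf.where(self.mask, self.values, default)

MaskedTensor([1, 2, 3], [True, False, True]).with_default(0)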
End of explanation
class MaskedTensor(tf.experimental.ExtensionType):
values: tf.Tensor
mask: tf.Tensor
def __repr__(self):
return masked_tensor_str(self.values, self.mask)
@staticmethod
def from_tensor_and_value_to_mask(values, value_to_mask):
return MaskedTensor(values, values == value_to_mask)
x = tf.constant([[1, 0, 2], [3, 0, 0]])
MaskedTensor.from_tensor_and_value_to_mask(x, 0)
Explanation: Defining classmethods and staticmethods
Extension types may define methods using the @classmethod and @staticmethod decorators. For example, the MaskedTensor type could define a factory method that masks any element with a given value:
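The example above uses @staticmethod; a @classmethod factory works the same way, receiving the class as cls. A minimal sketch (the from_dense name is an illustrative assumption, not part of the guide):
class MaskedTensor(tf.experimental.ExtensionType):
  values: tf.Tensor
  mask: tf.Tensor

  @classmethod
  def from_dense(cls, values):
    # Treat every element as valid.
    values = tf.convert_to_tensor(values)
    return cls(values, tf.ones_like(values, tf.bool))

MaskedTensor.from_dense([[1, 0], [3, 4]])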
End of explanation
class MaskedTensor(tf.experimental.ExtensionType):
values: tf.Tensor
mask: tf.Tensor
@property
def dtype(self):
return self.values.dtype
MaskedTensor([1, 2, 3], [True, False, True]).dtype
Explanation: Defining properties
Extension types may define properties using the @property decorator, just like any normal Python class. For example, the MaskedTensor type could define a dtype property that's a shorthand for the dtype of the values:
End of explanation
class Toy(tf.experimental.ExtensionType):
name: str
price: tf.Tensor
def __init__(self, name, price, discount=0):
self.name = name
self.price = price * (1 - discount)
print(Toy("ball", 5.0, discount=0.2)) # On sale -- 20% off!
Explanation: Overriding the default constructor
You can override the default constructor for extension types. Custom constructors must set a value for every declared field; and after the custom constructor returns, all fields will be type-checked, and values will be converted as described above.
End of explanation
class Toy(tf.experimental.ExtensionType):
name: str
price: tf.Tensor
@staticmethod
def new_toy_with_discount(name, price, discount):
return Toy(name, price * (1 - discount))
print(Toy.new_toy_with_discount("ball", 5.0, discount=0.2))
Explanation: Alternatively, you might consider leaving the default constructor as-is, but adding one or more factory methods. E.g.:
End of explanation
class MaskedTensor(tf.experimental.ExtensionType):
values: tf.Tensor
mask: tf.Tensor
def __repr__(self):
return masked_tensor_str(self.values, self.mask)
def __eq__(self, other):
result = tf.math.equal(self.values, other.values)
result = result | ~(self.mask & other.mask)
return tf.reduce_all(result)
x = MaskedTensor([1, 2, 3, 4], [True, True, False, True])
y = MaskedTensor([5, 2, 0, 4], [False, True, False, True])
print(x == y)
Explanation: Overriding the default equality operator (__eq__)
You can override the default __eq__ operator for extension types. The following example updates MaskedTensor to ignore masked elements when comparing for equality.
End of explanation
class Node(tf.experimental.ExtensionType):
value: tf.Tensor
children: Tuple["Node", ...] = ()
Node(3, [Node(5), Node(2)])
Explanation: Note: You generally don't need to override __ne__, since its default implementation simply calls __eq__ and negates the result.
Using forward references
If the type for a field has not been defined yet, you may use a string containing the name of the type instead. In the following example, the string "Node" is used to annotate the children field because the Node type hasn't been (fully) defined yet.
End of explanation
class TensorGraph(tf.experimental.ExtensionType):
num_nodes: tf.Tensor
edge_src: tf.Tensor # edge_src[e] = index of src node for edge e.
edge_dst: tf.Tensor # edge_dst[e] = index of dst node for edge e.
class TensorGraphWithNodeFeature(TensorGraph):
node_features: tf.Tensor # node_features[n] = feature value for node n.
def propagate_features(self, weight=1.0) -> 'TensorGraphWithNodeFeature':
updates = tf.gather(self.node_features, self.edge_src) * weight
new_node_features = tf.tensor_scatter_nd_add(
self.node_features, tf.expand_dims(self.edge_dst, 1), updates)
return TensorGraphWithNodeFeature(
self.num_nodes, self.edge_src, self.edge_dst, new_node_features)
g = TensorGraphWithNodeFeature( # Edges: 0->1, 4->3, 2->2, 2->1
num_nodes=5, edge_src=[0, 4, 2, 2], edge_dst=[1, 3, 2, 1],
node_features=[10.0, 0.0, 2.0, 5.0, -1.0, 0.0])
print("Original features:", g.node_features)
print("After propagating:", g.propagate_features().node_features)
Explanation: Defining subclasses
Extension types may be subclassed using the standard Python syntax. Extension type subclasses may add new fields, methods, and properties; and may override the constructor, the printable representation, and the equality operator. The following example defines a basic TensorGraph class that uses three Tensor fields to encode a set of edges between nodes. It then defines a subclass that adds a Tensor field to record a "feature value" for each node. The subclass also defines a method to propagate the feature values along the edges.
End of explanation
class MaskedTensor(tf.experimental.ExtensionType):
values: tf.Tensor
mask: tf.Tensor
shape = property(lambda self: self.values.shape)
dtype = property(lambda self: self.values.dtype)
def __repr__(self):
return masked_tensor_str(self.values, self.mask)
def with_values(self, new_values):
return MaskedTensor(new_values, self.mask)
class Spec:
def __init__(self, shape, dtype=tf.float32):
self.values = tf.TensorSpec(shape, dtype)
self.mask = tf.TensorSpec(shape, tf.bool)
def __repr__(self):
return f"MaskedTensor.Spec(shape={self.shape}, dtype={self.dtype})"
shape = property(lambda self: self.values.shape)
dtype = property(lambda self: self.values.dtype)
Explanation: Defining private fields
An extension type's fields may be marked private by prefixing them with an underscore (following standard Python conventions). This does not change how TensorFlow treats the fields; it simply signals to users of the extension type that those fields are private.
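A minimal sketch of the convention (class and field names are illustrative):
class Counts(tf.experimental.ExtensionType):
  _raw_counts: tf.Tensor   # "private" by convention only
  total: tf.Tensor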
Customizing the ExtensionType's TypeSpec
Each ExtensionType class has a corresponding TypeSpec class, which is created automatically and stored as <extension_type_name>.Spec. For more information, see the section "Nested TypeSpec" above.
To customize the TypeSpec, simply define your own nested class named Spec, and ExtensionType will use that as the basis for the automatically constructed TypeSpec. You can customize the Spec class by:
Overriding the default printable representation.
Overriding the default constructor.
Defining methods, classmethods, staticmethods, and properties.
The following example customizes the MaskedTensor.Spec class to make it easier to use:
End of explanation
@tf.experimental.dispatch_for_api(tf.stack)
def masked_stack(values: List[MaskedTensor], axis = 0):
return MaskedTensor(tf.stack([v.values for v in values], axis),
tf.stack([v.mask for v in values], axis))
Explanation: Note: The custom Spec class may not use any instance variables that were not declared in the original ExtensionType.
Tensor API dispatch
Extension types can be "tensor-like", in the sense that they specialize or extend the interface defined by the tf.Tensor type. Examples of tensor-like extension types include RaggedTensor, SparseTensor, and MaskedTensor. Dispatch decorators can be used to override the default behavior of TensorFlow operations when applied to tensor-like extension types. TensorFlow currently defines three dispatch decorators:
@tf.experimental.dispatch_for_api(tf_api)
@tf.experimental.dispatch_for_unary_elementwise_api(x_type)
@tf.experimental.dispatch_for_binary_elementwise_apis(x_type, y_type)
Dispatch for a single API
The tf.experimental.dispatch_for_api decorator overrides the default behavior of a specified TensorFlow operation when it is called with the specified signature. For example, you can use this decorator to specify how tf.stack should process MaskedTensor values:
End of explanation
x = MaskedTensor([1, 2, 3], [True, True, False])
y = MaskedTensor([4, 5, 6], [False, True, True])
tf.stack([x, y])
Explanation: This overrides the default implementation for tf.stack whenever it is called with a list of MaskedTensor values (since the values argument is annotated with typing.List[MaskedTensor]):
End of explanation
tf.experimental.unregister_dispatch_for(masked_stack)
def convert_to_masked_tensor(x):
if isinstance(x, MaskedTensor):
return x
else:
return MaskedTensor(x, tf.ones_like(x, tf.bool))
@tf.experimental.dispatch_for_api(tf.stack)
def masked_stack_v2(values: List[Union[MaskedTensor, tf.Tensor]], axis = 0):
values = [convert_to_masked_tensor(v) for v in values]
return MaskedTensor(tf.stack([v.values for v in values], axis),
tf.stack([v.mask for v in values], axis))
x = MaskedTensor([1, 2, 3], [True, True, False])
y = tf.constant([4, 5, 6])
tf.stack([x, y, x])
Explanation: To allow tf.stack to handle lists of mixed MaskedTensor and Tensor values, you can refine the type annotation for the values parameter and update the body of the function appropriately:
End of explanation
@tf.experimental.dispatch_for_unary_elementwise_apis(MaskedTensor)
def masked_tensor_unary_elementwise_api_handler(api_func, x):
return MaskedTensor(api_func(x.values), x.mask)
Explanation: For a list of APIs that can be overridden, see the API documentation for tf.experimental.dispatch_for_api.
Dispatch for all unary elementwise APIs
The tf.experimental.dispatch_for_unary_elementwise_apis decorator overrides the default behavior of all unary elementwise ops (such as tf.math.cos) whenever the value for the first argument (typically named x) matches the type annotation x_type. The decorated function should take two arguments:
api_func: A function that takes a single parameter and performs the elementwise operation (e.g., tf.abs).
x: The first argument to the elementwise operation.
The following example updates all unary elementwise operations to handle the MaskedTensor type:
End of explanation
x = MaskedTensor([1, -2, -3], [True, False, True])
print(tf.abs(x))
print(tf.ones_like(x, dtype=tf.float32))
Explanation: This function will now be used whenever a unary elementwise operation is called on a MaskedTensor.
End of explanation
@tf.experimental.dispatch_for_binary_elementwise_apis(MaskedTensor, MaskedTensor)
def masked_tensor_binary_elementwise_api_handler(api_func, x, y):
return MaskedTensor(api_func(x.values, y.values), x.mask & y.mask)
x = MaskedTensor([1, -2, -3], [True, False, True])
y = MaskedTensor([[4], [5]], [[True], [False]])
tf.math.add(x, y)
Explanation: Dispatch for all binary elementwise APIs
Similarly, tf.experimental.dispatch_for_binary_elementwise_apis can be used to update all binary elementwise operations to handle the MaskedTensor type:
End of explanation
class Network(tf.experimental.ExtensionType): # This version is not batchable.
work: tf.Tensor # work[n] = work left to do at node n
bandwidth: tf.Tensor # bandwidth[n1, n2] = bandwidth from n1->n2
net1 = Network([5., 3, 8], [[0., 2, 0], [2, 0, 3], [0, 3, 0]])
net2 = Network([3., 4, 2], [[0., 2, 2], [2, 0, 2], [2, 2, 0]])
Explanation: For a list of the elementwise APIs that are overridden, see the API documentation for tf.experimental.dispatch_for_unary_elementwise_apis and tf.experimental.dispatch_for_binary_elementwise_apis.
Batchable ExtensionTypes
An ExtensionType is batchable if a single instance can be used to represent a batch of values. Typically, this is accomplished by adding batch dimensions to all nested Tensors. The following TensorFlow APIs require that any extension type inputs be batchable:
tf.data.Dataset (batch, unbatch, from_tensor_slices)
tf.Keras (fit, evaluate, predict)
tf.map_fn
By default, BatchableExtensionType creates batched values by batching any nested Tensors, CompositeTensors, and ExtensionTypes. If this is not appropriate for your class, then you will need to use tf.experimental.ExtensionTypeBatchEncoder to override this default behavior. For example, it would not be appropriate to create a batch of tf.SparseTensor values by simply stacking individual sparse tensors' values, indices, and dense_shape fields -- in most cases, you can't stack these tensors, since they have incompatible shapes; and even if you could, the result would not be a valid SparseTensor.
Note: BatchableExtensionTypes do not automatically define dispatchers for tf.stack, tf.concat, tf.slice, etc. If your class needs to be supported by these APIs, then use the dispatch decorators described above.
BatchableExtensionType example: Network
As an example, consider a simple Network class used for load balancing, which tracks how much work is left to do at each node, and how much bandwidth is available to move work between nodes:
End of explanation
class Network(tf.experimental.BatchableExtensionType):
shape: tf.TensorShape # batch shape. A single network has shape=[].
work: tf.Tensor # work[*shape, n] = work left to do at node n
bandwidth: tf.Tensor # bandwidth[*shape, n1, n2] = bandwidth from n1->n2
def __init__(self, work, bandwidth):
self.work = tf.convert_to_tensor(work)
self.bandwidth = tf.convert_to_tensor(bandwidth)
work_batch_shape = self.work.shape[:-1]
bandwidth_batch_shape = self.bandwidth.shape[:-2]
self.shape = work_batch_shape.merge_with(bandwidth_batch_shape)
def __repr__(self):
return network_repr(self)
def network_repr(network):
work = network.work
bandwidth = network.bandwidth
if hasattr(work, 'numpy'):
work = ' '.join(str(work.numpy()).split())
if hasattr(bandwidth, 'numpy'):
bandwidth = ' '.join(str(bandwidth.numpy()).split())
return (f"<Network shape={network.shape} work={work} bandwidth={bandwidth}>")
net1 = Network([5., 3, 8], [[0., 2, 0], [2, 0, 3], [0, 3, 0]])
net2 = Network([3., 4, 2], [[0., 2, 2], [2, 0, 2], [2, 2, 0]])
batch_of_networks = Network(
work=tf.stack([net1.work, net2.work]),
bandwidth=tf.stack([net1.bandwidth, net2.bandwidth]))
print(f"net1={net1}")
print(f"net2={net2}")
print(f"batch={batch_of_networks}")
Explanation: To make this type batchable, change the base type to BatchableExtensionType, and adjust the shape of each field to include optional batch dimensions. The following example also adds a shape field to keep track of the batch shape. This shape field is not required by tf.data.Dataset or tf.map_fn, but it is required by tf.Keras.
End of explanation
dataset = tf.data.Dataset.from_tensor_slices(batch_of_networks)
for i, network in enumerate(dataset):
print(f"Batch element {i}: {network}")
Explanation: You can then use tf.data.Dataset to iterate through a batch of networks:
End of explanation
def balance_work_greedy(network):
delta = (tf.expand_dims(network.work, -1) - tf.expand_dims(network.work, -2))
delta /= 4
delta = tf.maximum(tf.minimum(delta, network.bandwidth), -network.bandwidth)
new_work = network.work + tf.reduce_sum(delta, -1)
return Network(new_work, network.bandwidth)
tf.map_fn(balance_work_greedy, batch_of_networks)
Explanation: And you can also use map_fn to apply a function to each batch element:
End of explanation
class Pastry(tf.experimental.ExtensionType):
sweetness: tf.Tensor # 2d embedding that encodes sweetness
chewiness: tf.Tensor # 2d embedding that encodes chewiness
@tf.function
def combine_pastry_features(x: Pastry):
return (x.sweetness + x.chewiness) / 2
cookie = Pastry(sweetness=[1.2, 0.4], chewiness=[0.8, 0.2])
combine_pastry_features(cookie)
Explanation: TensorFlow APIs that support ExtensionTypes
@tf.function
tf.function is a decorator that precomputes TensorFlow graphs for Python functions, which can substantially improve the performance of your TensorFlow code. Extension type values can be used transparently with @tf.function-decorated functions.
End of explanation
pastry_spec = Pastry.Spec(tf.TensorSpec([2]), tf.TensorSpec(2))
@tf.function(input_signature=[pastry_spec])
def increase_sweetness(x: Pastry, delta=1.0):
return Pastry(x.sweetness + delta, x.chewiness)
increase_sweetness(cookie)
Explanation: If you wish to explicitly specify the input_signature for tf.function, then you can do so using the extension type's TypeSpec.
End of explanation
cf = combine_pastry_features.get_concrete_function(pastry_spec)
cf(cookie)
Explanation: Concrete functions
Concrete functions encapsulate individual traced graphs that are built by tf.function. Extension types can be used transparently with concrete functions.
End of explanation
# Example: using tf.cond to select between two MaskedTensors. Note that the
# two MaskedTensors don't need to have the same shape.
a = MaskedTensor([1., 2, 3], [True, False, True])
b = MaskedTensor([22., 33, 108, 55], [True, True, True, False])
condition = tf.constant(True)
print(tf.cond(condition, lambda: a, lambda: b))
# Example: using tf.while_loop with MaskedTensor.
cond = lambda i, _: i < 10
def body(i, mt):
return i + 1, mt.with_values(mt.values + 3 / 7)
print(tf.while_loop(cond, body, [0, b])[1])
Explanation: Control flow operations
Extension types are supported by TensorFlow's control-flow operations:
tf.cond
tf.case
tf.while_loop
tf.identity
End of explanation
@tf.function
def fn(x, b):
if b:
x = MaskedTensor(x, tf.less(x, 0))
else:
x = MaskedTensor(x, tf.greater(x, 0))
for i in tf.range(5 if b else 7):
x = x.with_values(x.values + 1 / 2)
return x
print(fn(tf.constant([1., -2, 3]), tf.constant(True)))
print(fn(tf.constant([1., -2, 3]), tf.constant(False)))
Explanation: Autograph control flow
Extension types are also supported by control flow statements in tf.function (using autograph). In the following example, the if statement and for statements are automatically converted to tf.cond and tf.while_loop operations, which support extension types.
End of explanation
class Network(tf.experimental.BatchableExtensionType):
shape: tf.TensorShape # batch shape. A single network has shape=[].
work: tf.Tensor # work[*shape, n] = work left to do at node n
bandwidth: tf.Tensor # bandwidth[*shape, n1, n2] = bandwidth from n1->n2
def __init__(self, work, bandwidth):
self.work = tf.convert_to_tensor(work)
self.bandwidth = tf.convert_to_tensor(bandwidth)
work_batch_shape = self.work.shape[:-1]
bandwidth_batch_shape = self.bandwidth.shape[:-2]
self.shape = work_batch_shape.merge_with(bandwidth_batch_shape)
def __repr__(self):
return network_repr(self)
single_network = Network( # A single network w/ 4 nodes.
work=[8.0, 5, 12, 2],
bandwidth=[[0.0, 1, 2, 2], [1, 0, 0, 2], [2, 0, 0, 1], [2, 2, 1, 0]])
batch_of_networks = Network( # Batch of 2 networks, each w/ 2 nodes.
work=[[8.0, 5], [3, 2]],
bandwidth=[[[0.0, 1], [1, 0]], [[0, 2], [2, 0]]])
Explanation: Keras
tf.keras is TensorFlow's high-level API for building and training deep learning models. Extension types may be passed as inputs to a Keras model, passed between Keras layers, and returned by Keras models. Keras currently puts two requirements on extension types:
They must be batchable (see "Batchable ExtensionTypes" above).
They must have a field or property named shape. shape[0] is assumed to be the batch dimension.
The following two subsections give examples showing how extension types can be used with Keras.
Keras example: Network
For the first example, consider the Network class defined in the "Batchable ExtensionTypes" section above, which can be used for load balancing work between nodes. Its definition is repeated here:
End of explanation
class BalanceNetworkLayer(tf.keras.layers.Layer):
Layer that balances work between nodes in a network.
Shifts work from more busy nodes to less busy nodes, constrained by bandwidth.
def call(self, inputs):
# This function is defined above, in "Batchable ExtensionTypes" section.
return balance_work_greedy(inputs)
Explanation: You can define a new Keras layer that processes Networks.
End of explanation
input_spec = Network.Spec(shape=None,
work=tf.TensorSpec(None, tf.float32),
bandwidth=tf.TensorSpec(None, tf.float32))
model = tf.keras.Sequential([
tf.keras.layers.Input(type_spec=input_spec),
BalanceNetworkLayer(),
])
Explanation: You can then use this layer to create a simple model. To feed an ExtensionType into a model, you can use a tf.keras.layers.Input layer with type_spec set to the extension type's TypeSpec. If the Keras model will be used to process batches, then the type_spec must include the batch dimension.
End of explanation
model(single_network)
model(batch_of_networks)
Explanation: Finally, you can apply the model to a single network and to a batch of networks.
End of explanation
class MaskedTensor(tf.experimental.BatchableExtensionType):
# __name__ is required for serialization in SavedModel; see below for details.
__name__ = 'extension_type_colab.MaskedTensor'
values: tf.Tensor
mask: tf.Tensor
shape = property(lambda self: self.values.shape)
dtype = property(lambda self: self.values.dtype)
def with_default(self, default):
return tf.where(self.mask, self.values, default)
def __repr__(self):
return masked_tensor_str(self.values, self.mask)
class Spec:
def __init__(self, shape, dtype=tf.float32):
self.values = tf.TensorSpec(shape, dtype)
self.mask = tf.TensorSpec(shape, tf.bool)
shape = property(lambda self: self.values.shape)
dtype = property(lambda self: self.values.dtype)
def with_shape(self):
return MaskedTensor.Spec(tf.TensorSpec(shape, self.values.dtype),
tf.TensorSpec(shape, self.mask.dtype))
Explanation: Keras example: MaskedTensor
In this example, MaskedTensor is extended to support Keras. shape is defined as a property that is calculated from the values field. Keras requires that you add this property to both the extension type and its TypeSpec. MaskedTensor also defines a __name__ variable, which will be required for SavedModel serialization (below).
End of explanation
@tf.experimental.dispatch_for_unary_elementwise_apis(MaskedTensor)
def unary_elementwise_op_handler(op, x):
return MaskedTensor(op(x.values), x.mask)
@tf.experimental.dispatch_for_binary_elementwise_apis(
Union[MaskedTensor, tf.Tensor],
Union[MaskedTensor, tf.Tensor])
def binary_elementwise_op_handler(op, x, y):
x = convert_to_masked_tensor(x)
y = convert_to_masked_tensor(y)
return MaskedTensor(op(x.values, y.values), x.mask & y.mask)
@tf.experimental.dispatch_for_api(tf.matmul)
def masked_matmul(a: MaskedTensor, b,
transpose_a=False, transpose_b=False,
adjoint_a=False, adjoint_b=False,
a_is_sparse=False, b_is_sparse=False,
output_type=None):
if isinstance(a, MaskedTensor):
a = a.with_default(0)
if isinstance(b, MaskedTensor):
b = b.with_default(0)
return tf.matmul(a, b, transpose_a, transpose_b, adjoint_a,
adjoint_b, a_is_sparse, b_is_sparse, output_type)
Explanation: Next, the dispatch decorators are used to override the default behavior of several TensorFlow APIs. Since these APIs are used by standard Keras layers (such as the Dense layer), overriding these will allow us to use those layers with MaskedTensor. For the purposes of this example, matmul for masked tensors is defined to treat the masked values as zeros (i.e., to not include them in the product).
End of explanation
input_spec = MaskedTensor.Spec([None, 2], tf.float32)
masked_tensor_model = tf.keras.Sequential([
tf.keras.layers.Input(type_spec=input_spec),
tf.keras.layers.Dense(16, activation="relu"),
tf.keras.layers.Dense(1)])
masked_tensor_model.compile(loss='binary_crossentropy', optimizer='rmsprop')
a = MaskedTensor([[1., 2], [3, 4], [5, 6]],
[[True, False], [False, True], [True, True]])
masked_tensor_model.fit(a, tf.constant([[1], [0], [1]]), epochs=3)
print(masked_tensor_model(a))
Explanation: You can then construct a Keras model that accepts MaskedTensor inputs, using standard Keras layers:
End of explanation
masked_tensor_model_path = tempfile.mkdtemp()
tf.saved_model.save(masked_tensor_model, masked_tensor_model_path)
imported_model = tf.saved_model.load(masked_tensor_model_path)
imported_model(a)
Explanation: SavedModel
A SavedModel is a serialized TensorFlow program, including both weights and computation. It can be built from a Keras model or from a custom model. In either case, extension types can be used transparently with the functions and methods defined by a SavedModel.
SavedModel can save models, layers, and functions that process extension types, as long as the extension types have a __name__ field. This name is used to register the extension type, so it can be located when the model is loaded.
Example: saving a Keras model
Keras models that use extension types may be saved using SavedModel.
End of explanation
class CustomModule(tf.Module):
def __init__(self, variable_value):
super().__init__()
self.v = tf.Variable(variable_value)
@tf.function
def grow(self, x: MaskedTensor):
Increase values in `x` by multiplying them by `self.v`.
return MaskedTensor(x.values * self.v, x.mask)
module = CustomModule(100.0)
module.grow.get_concrete_function(MaskedTensor.Spec(shape=None,
dtype=tf.float32))
custom_module_path = tempfile.mkdtemp()
tf.saved_model.save(module, custom_module_path)
imported_model = tf.saved_model.load(custom_module_path)
imported_model.grow(MaskedTensor([1., 2, 3], [False, True, False]))
Explanation: Example: saving a custom model
SavedModel can also be used to save custom tf.Module subclasses with functions that process extension types.
End of explanation
class CustomModuleWrapper(tf.Module):
def __init__(self, variable_value):
super().__init__()
self.v = tf.Variable(variable_value)
@tf.function
def var_weighted_mean(self, x: MaskedTensor):
Mean value of unmasked values in x, weighted by self.v.
x = MaskedTensor(x.values * self.v, x.mask)
return (tf.reduce_sum(x.with_default(0)) /
tf.reduce_sum(tf.cast(x.mask, x.dtype)))
@tf.function()
def var_weighted_mean_wrapper(self, x_values, x_mask):
Raw tensor wrapper for var_weighted_mean.
return self.var_weighted_mean(MaskedTensor(x_values, x_mask))
module = CustomModuleWrapper([3., 2., 8., 5.])
module.var_weighted_mean_wrapper.get_concrete_function(
tf.TensorSpec(None, tf.float32), tf.TensorSpec(None, tf.bool))
custom_module_path = tempfile.mkdtemp()
tf.saved_model.save(module, custom_module_path)
imported_model = tf.saved_model.load(custom_module_path)
x = MaskedTensor([1., 2., 3., 4.], [False, True, False, True])
imported_model.var_weighted_mean_wrapper(x.values, x.mask)
Explanation: Loading a SavedModel when the ExtensionType is unavailable
If you load a SavedModel that uses an ExtensionType, but that ExtensionType is not available (i.e., has not been imported), then you will see a warning and TensorFlow will fall back to using an "anonymous extension type" object. This object will have the same fields as the original type, but will lack any further customization you have added for the type, such as custom methods or properties.
Using ExtensionTypes with TensorFlow serving
Currently, TensorFlow serving (and other consumers of the SavedModel "signatures" dictionary) require that all inputs and outputs be raw tensors. If you wish to use TensorFlow serving with a model that uses extension types, then you can add wrapper methods that compose or decompose extension type values from tensors. E.g.:
End of explanation
ds = tf.data.Dataset.from_tensors(Pastry(5, 5))
iter(ds).next()
mt = MaskedTensor(tf.reshape(range(20), [5, 4]), tf.ones([5, 4]))
ds = tf.data.Dataset.from_tensor_slices(mt)
for value in ds:
print(value)
def value_gen():
for i in range(2, 7):
yield MaskedTensor(range(10), [j%i != 0 for j in range(10)])
ds = tf.data.Dataset.from_generator(
value_gen, output_signature=MaskedTensor.Spec(shape=[10], dtype=tf.int32))
for value in ds:
print(value)
Explanation: Datasets
tf.data is an API that enables you to build complex input pipelines from simple, reusable pieces. Its core data structure is tf.data.Dataset, which represents a sequence of elements, in which each element consists of one or more components.
Building Datasets with extension types
Datasets can be built from extension type values using Dataset.from_tensors, Dataset.from_tensor_slices, or Dataset.from_generator:
End of explanation
batched_ds = ds.batch(2)
for value in batched_ds:
print(value)
unbatched_ds = batched_ds.unbatch()
for value in unbatched_ds:
print(value)
Explanation: Batching and unbatching Datasets with extension types
Datasets with extension types can be batched and unbatched using Dataset.batch and Dataset.unbatch.
End of explanation |
14,654 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
201
Step1: In this example notebook, we will walk through the creation of a tour mode choice model.
To begin, we'll re-load the tours and skims data from the
data setup example.
Step2: Preprocessing
The Exampville data output contains a set of files similar to what we might
find for a real travel survey
Step3: In Exampville, there are only two kinds of trips
Step4: Model Definition
And then we are ready to create our model.
Step5: We will explicitly define the set of utility functions
we want to use. Because the DataFrames we are using to
serve data to this model contains exclusively idco format
data, we'll use only the utility_co mapping to define
a unique utility function for each alternative.
Step6: To write a nested logit mode, we'll attach some nesting nodes to the
model's graph. Each new_node allows us to define the set of
codes for the child nodes (elemental alternatives, or lower level nests)
as well as giving the new nest a name and assigning a logsum parameter.
The return value of this method is the node code for the newly created
nest, which then can potenially be used as a child code when creating
a higher level nest. We do this below, adding the 'Car' nest into the
'Motor' nest.
Step7: Let's visually check on the nesting structure.
Step8: The tour mode choice model's choice variable is indicated by
the code value in 'TOURMODE', and this can be
defined for the model using choice_co_code.
Step9: We can also give a dictionary of availability conditions based
on values in the idco data, using the availability_co_vars
attribute. Alternatives that are always available can be indicated
by setting the criterion to 1.
Step10: Then let's prepare this data for estimation. Even though the
data is already in memory, the load_data method is used to
pre-process the data, extracting the required values, pre-computing
the values of fixed expressions, and assembling the results into
contiguous arrays suitable for computing the log likelihood values
efficiently.
Model Estimation
We can check on some important statistics of this loaded data even
before we estimate the model.
Step11: If we are satisfied with the statistics we see above, we
can go ahead and estimate the model.
Step12: After we find the best fitting parameters, we can compute
some variance-covariance statistics, incuding standard errors of
the estimates and t statistics, using calculate_parameter_covariance.
Step13: Then we can review the results in a variety of report tables.
Step14: Save and Report Model | Python Code:
# HIDDEN
import larch.numba as lx
from pytest import approx
import os
import numpy as np
import pandas as pd
import larch.numba as lx
from larch import P, X
Explanation: 201: Exampville Mode Choice
Welcome to Exampville, the best simulated town in this here part of the internet!
Exampville is a demonstration provided with Larch that walks through some of the
data and tools that a transportation planner might use when building a travel model.
End of explanation
hh, pp, tour, skims = lx.example(200, ['hh', 'pp', 'tour', 'skims'])
Explanation: In this example notebook, we will walk through the creation of a tour mode choice model.
To begin, we'll re-load the tours and skims data from the
data setup example.
End of explanation
from addicty import Dict
Mode = Dict(
DA = 1,
SR = 2,
Walk = 3,
Bike = 4,
Transit = 5,
).freeze()
tour_dataset = lx.Dataset.construct.from_idco(tour.set_index('TOURID'), alts=Mode)
od_skims = lx.Dataset.construct.from_omx(skims)
dt = lx.DataTree(
tour=tour_dataset,
hh=hh.set_index('HHID'),
person=pp.set_index('PERSONID'),
od=od_skims,
do=od_skims,
relationships=(
"tours.HHID @ hh.HHID",
"tours.PERSONID @ person.PERSONID",
"hh.HOMETAZ @ od.otaz",
"tours.DTAZ @ od.dtaz",
"hh.HOMETAZ @ do.dtaz",
"tours.DTAZ @ do.otaz",
),
)
Explanation: Preprocessing
The Exampville data output contains a set of files similar to what we might
find for a real travel survey: network skims, and tables of households, persons,
and tours. We'll need to connect these tables together to create a composite dataset
for mode choice model estimation, using the DataTree structure.
End of explanation
dt_work = dt.query_cases("TOURPURP == 1")
Explanation: In Exampville, there are only two kinds of trips:
work (purpose=1) and
non-work (purpose=2).
We want to estimate a mode choice model for work trips,
so we’ll begin by excluding all the other trips:
End of explanation
m = lx.Model(datatree = dt_work)
m.title = "Exampville Work Tour Mode Choice v1"
Explanation: Model Definition
And then we are ready to create our model.
End of explanation
m.utility_co[Mode.DA] = (
+ P.InVehTime * X.AUTO_TIME
+ P.Cost * X.AUTO_COST # dollars per mile
)
m.utility_co[Mode.SR] = (
+ P.ASC_SR
+ P.InVehTime * X.AUTO_TIME
+ P.Cost * (X.AUTO_COST * 0.5) # dollars per mile, half share
+ P("LogIncome:SR") * X("log(INCOME)")
)
m.utility_co[Mode.Walk] = (
+ P.ASC_Walk
+ P.NonMotorTime * X.WALK_TIME
+ P("LogIncome:Walk") * X("log(INCOME)")
)
m.utility_co[Mode.Bike] = (
+ P.ASC_Bike
+ P.NonMotorTime * X.BIKE_TIME
+ P("LogIncome:Bike") * X("log(INCOME)")
)
m.utility_co[Mode.Transit] = (
+ P.ASC_Transit
+ P.InVehTime * X.TRANSIT_IVTT
+ P.OutVehTime * X.TRANSIT_OVTT
+ P.Cost * X.TRANSIT_FARE
+ P("LogIncome:Transit") * X('log(INCOME)')
)
Explanation: We will explicitly define the set of utility functions
we want to use. Because the DataFrames we are using to
serve data to this model contain exclusively idco format
data, we'll use only the utility_co mapping to define
a unique utility function for each alternative.
End of explanation
Car = m.graph.new_node(parameter='Mu:Car', children=[Mode.DA, Mode.SR], name='Car')
NonMotor = m.graph.new_node(parameter='Mu:NonMotor', children=[Mode.Walk, Mode.Bike], name='NonMotor')
Motor = m.graph.new_node(parameter='Mu:Motor', children=[Car, Mode.Transit], name='Motor')
Explanation: To write a nested logit model, we'll attach some nesting nodes to the
model's graph. Each new_node allows us to define the set of
codes for the child nodes (elemental alternatives, or lower level nests)
as well as giving the new nest a name and assigning a logsum parameter.
The return value of this method is the node code for the newly created
nest, which can then potentially be used as a child code when creating
a higher level nest. We do this below, adding the 'Car' nest into the
'Motor' nest.
End of explanation
m.graph
Explanation: Let's visually check on the nesting structure.
End of explanation
m.choice_co_code = 'TOURMODE'
Explanation: The tour mode choice model's choice variable is indicated by
the code value in 'TOURMODE', and this can be
defined for the model using choice_co_code.
End of explanation
m.availability_co_vars = {
Mode.DA: 'AGE >= 16',
Mode.SR: 1,
Mode.Walk: 'WALK_TIME < 60',
Mode.Bike: 'BIKE_TIME < 60',
Mode.Transit: 'TRANSIT_FARE>0',
}
Explanation: We can also give a dictionary of availability conditions based
on values in the idco data, using the availability_co_vars
attribute. Alternatives that are always available can be indicated
by setting the criterion to 1.
End of explanation
m.choice_avail_summary()
# TEST
summary = m.choice_avail_summary()
assert (summary.to_markdown()) == '''
| | name | chosen | available | availability condition |
|:---------------------------|:---------|---------:|:------------|:-------------------------|
| 1 | DA | 6052 | 7564 | AGE >= 16 |
| 2 | SR | 810 | 7564 | 1 |
| 3 | Walk | 196 | 4179 | WALK_TIME < 60 |
| 4 | Bike | 72 | 7564 | BIKE_TIME < 60 |
| 5 | Transit | 434 | 4199 | TRANSIT_FARE>0 |
| 6 | Car | 6862 | 7564 | |
| 7 | NonMotor | 268 | 7564 | |
| 8 | Motor | 7296 | 7564 | |
| < Total All Alternatives > | | 7564 | | |
'''[1:-1]
Explanation: Then let's prepare this data for estimation. Even though the
data is already in memory, the load_data method is used to
pre-process the data, extracting the required values, pre-computing
the values of fixed expressions, and assembling the results into
contiguous arrays suitable for computing the log likelihood values
efficiently.
Model Estimation
We can check on some important statistics of this loaded data even
before we estimate the model.
End of explanation
m.set_cap(20) # improves optimization stability
result = m.maximize_loglike()
# TEST
assert result.loglike == approx(-3493.0397298749467)
Explanation: If we are satisfied with the statistics we see above, we
can go ahead and estimate the model.
End of explanation
m.calculate_parameter_covariance()
Explanation: After we find the best fitting parameters, we can compute
some variance-covariance statistics, including standard errors of
the estimates and t statistics, using calculate_parameter_covariance.
End of explanation
m.parameter_summary()
# TEST
assert (m.parameter_summary().data.to_markdown()) == '''
| | Value | Std Err | t Stat | Signif | Null Value |
|:------------------|--------:|----------:|---------:|:---------|-------------:|
| ASC_Bike | -0.258 | 1.34 | -0.19 | | 0 |
| ASC_SR | 1.42 | 1 | 1.42 | | 0 |
| ASC_Transit | 6.75 | 2.06 | 3.27 | ** | 0 |
| ASC_Walk | 8.62 | 1.14 | 7.57 | *** | 0 |
| Cost | -0.176 | 0.12 | -1.47 | | 0 |
| InVehTime | -0.124 | 0.0292 | -4.24 | *** | 0 |
| LogIncome:Bike | -0.197 | 0.124 | -1.59 | | 0 |
| LogIncome:SR | -0.194 | 0.135 | -1.43 | | 0 |
| LogIncome:Transit | -0.557 | 0.169 | -3.29 | *** | 0 |
| LogIncome:Walk | -0.523 | 0.1 | -5.21 | *** | 0 |
| Mu:Car | 0.259 | 0.181 | -4.1 | *** | 1 |
| Mu:Motor | 0.802 | 0.201 | -0.99 | | 1 |
| Mu:NonMotor | 0.854 | 0.112 | -1.3 | | 1 |
| NonMotorTime | -0.266 | 0.0163 | -16.29 | *** | 0 |
| OutVehTime | -0.255 | 0.0646 | -3.95 | *** | 0 |
'''[1:-1]
m.estimation_statistics()
Explanation: Then we can review the results in a variety of report tables.
End of explanation
report = lx.Reporter(title=m.title)
report.append('# Parameter Summary')
report.append(m.parameter_summary())
report
report << "# Estimation Statistics" << m.estimation_statistics()
report << "# Utility Functions" << m.utility_functions()
report.save(
'exampville_mode_choice.html',
overwrite=True,
metadata=m,
)
Explanation: Save and Report Model
End of explanation |
14,655 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
https
Step1: http
Step2: http | Python Code:
import quandl
print quandl.get("SOCSEC/RETIRED")
! apt-get install curl
Explanation: https://www.quandl.com/data/SOCSEC/RETIRED-Social-Security-Beneficiary-Data-Retired-Workers-and-Dependants
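If the anonymous request above is rate-limited, the quandl package can be given an API key first; the key string here is a placeholder, not a real credential:
import quandl
quandl.ApiConfig.api_key = "YOUR_API_KEY"  # placeholder, not a real key
data = quandl.get("SOCSEC/RETIRED")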
End of explanation
! curl http://data.edwardsaquifer.org/csv_j17.php > csv_j17
! curl http://data.edwardsaquifer.org/csv_j27.php > csv_j27
! curl http://data.edwardsaquifer.org/csv_comal.php > csv_comal
! curl http://data.edwardsaquifer.org/csv_san.php > csv_san_marcos
! curl http://data.edwardsaquifer.org/csv_hondo.php > csv_hondd
import csv
with open('csv_j17','rb') as infile:
reader = csv.reader(infile)
j17_list = list(reader)
len(j17_list)
print j17_list[0]
print j17_list[1]
print j17_list[-1]
Explanation: http://www.edwardsaquifer.org/scientific-research-and-data/aquifer-data-and-maps/historical-data/historic-data-downloads
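As an alternative to the csv module used above, the same file could be read with pandas; this is only a sketch, since the column layout of csv_j17 is assumed rather than verified:
import pandas as pd
j17 = pd.read_csv('csv_j17')  # header row inferred by pandas
j17.head()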
End of explanation
for x in range(1, len(j17_list) -1):
if abs(float(j17_list[x][1]) - float(j17_list[x+1][1]))> 3:
print j17_list[x], j17_list[x+1], float(j17_list[x][1]) - float(j17_list[x+1][1])
import csv
with open('csv_j27','rb') as infile:
reader = csv.reader(infile)
j27_list = list(reader)
len(j27_list)
print j27_list[0]
print j27_list[1]
print j27_list[-1]
for x in range(1, len(j27_list) -1):
if abs(float(j27_list[x][1]) - float(j27_list[x+1][1]))> 2:
print j27_list[x], j27_list[x+1], float(j27_list[x][1]) - float(j27_list[x+1][1])
for x in j17_list:
print x
Explanation: http://news.ucsc.edu/2013/09/deep-earthquake.html
End of explanation |
14,656 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Asyncio Examples
All commands are coroutine functions.
Connecting and Disconnecting
Utilizing asyncio Redis requires an explicit disconnect of the connection since there is no asyncio deconstructor magic method. By default, a connection pool is created on redis.Redis() and attached to this Redis instance. The connection pool closes automatically on the call to Redis.close which disconnects all connections.
Step1: If you supply a custom ConnectionPool that is supplied to several Redis instances, you may want to disconnect the connection pool explicitly. Disconnecting the connection pool simply disconnects all connections hosted in the pool.
Step2: Transactions (Multi/Exec)
The aioredis.Redis.pipeline will return a aioredis.Pipeline object, which will buffer all commands in-memory and compile them into batches using the Redis Bulk String protocol. Additionally, each command will return the Pipeline instance, allowing you to chain your commands, i.e., p.set('foo', 1).set('bar', 2).mget('foo', 'bar').
The commands will not be reflected in Redis until execute() is called & awaited.
Usually, when performing a bulk operation, taking advantage of a “transaction” (e.g., Multi/Exec) is to be desired, as it will also add a layer of atomicity to your bulk operation.
Step3: Pub/Sub Mode
Subscribing to specific channels
Step4: Subscribing to channels matching a glob-style pattern
Step5: Sentinel Client
The Sentinel client requires a list of Redis Sentinel addresses to connect to and start discovering services.
Calling aioredis.sentinel.Sentinel.master_for or aioredis.sentinel.Sentinel.slave_for methods will return Redis clients connected to specified services monitored by Sentinel.
Sentinel client will detect failover and reconnect Redis clients automatically. | Python Code:
import redis.asyncio as redis
connection = redis.Redis()
print(f"Ping successful: {await connection.ping()}")
await connection.close()
Explanation: Asyncio Examples
All commands are coroutine functions.
Connecting and Disconnecting
Utilizing asyncio Redis requires an explicit disconnect of the connection since there is no asyncio destructor magic method. By default, a connection pool is created on redis.Redis() and attached to this Redis instance. The connection pool closes automatically on the call to Redis.close, which disconnects all connections.
End of explanation
import redis.asyncio as redis
connection = redis.Redis(auto_close_connection_pool=False)
await connection.close()
# Or: await connection.close(close_connection_pool=False)
await connection.connection_pool.disconnect()
Explanation: If you supply a custom ConnectionPool that is supplied to several Redis instances, you may want to disconnect the connection pool explicitly. Disconnecting the connection pool simply disconnects all connections hosted in the pool.
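A short sketch of that situation, assuming a default local Redis: one pool is created, shared by two clients, and disconnected explicitly at the end.
import redis.asyncio as redis

pool = redis.ConnectionPool.from_url("redis://localhost")
client1 = redis.Redis(connection_pool=pool)
client2 = redis.Redis(connection_pool=pool)
await client1.set("shared_key", "yes")
print(await client2.get("shared_key"))
await client1.close(close_connection_pool=False)
await client2.close(close_connection_pool=False)
await pool.disconnect()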
End of explanation
import redis.asyncio as redis
r = await redis.from_url("redis://localhost")
async with r.pipeline(transaction=True) as pipe:
ok1, ok2 = await (pipe.set("key1", "value1").set("key2", "value2").execute())
assert ok1
assert ok2
Explanation: Transactions (Multi/Exec)
The aioredis.Redis.pipeline will return an aioredis.Pipeline object, which will buffer all commands in-memory and compile them into batches using the Redis Bulk String protocol. Additionally, each command will return the Pipeline instance, allowing you to chain your commands, i.e., p.set('foo', 1).set('bar', 2).mget('foo', 'bar').
The commands will not be reflected in Redis until execute() is called & awaited.
Usually, when performing a bulk operation, taking advantage of a “transaction” (e.g., Multi/Exec) is to be desired, as it will also add a layer of atomicity to your bulk operation.
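A short sketch of the chaining style mentioned above, with hypothetical keys; execute() returns the buffered replies in order, so the mget result arrives last:
import redis.asyncio as redis

r = redis.Redis()
async with r.pipeline(transaction=True) as pipe:
    ok1, ok2, values = await pipe.set("foo", 1).set("bar", 2).mget("foo", "bar").execute()
print(values)  # e.g. [b'1', b'2']
await r.close()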
End of explanation
import asyncio
import async_timeout
import redis.asyncio as redis
STOPWORD = "STOP"
async def reader(channel: redis.client.PubSub):
while True:
try:
async with async_timeout.timeout(1):
message = await channel.get_message(ignore_subscribe_messages=True)
if message is not None:
print(f"(Reader) Message Received: {message}")
if message["data"].decode() == STOPWORD:
print("(Reader) STOP")
break
await asyncio.sleep(0.01)
except asyncio.TimeoutError:
pass
r = redis.from_url("redis://localhost")
pubsub = r.pubsub()
await pubsub.subscribe("channel:1", "channel:2")
future = asyncio.create_task(reader(pubsub))
await r.publish("channel:1", "Hello")
await r.publish("channel:2", "World")
await r.publish("channel:1", STOPWORD)
await future
Explanation: Pub/Sub Mode
Subscribing to specific channels:
End of explanation
import asyncio
import async_timeout
import redis.asyncio as redis
STOPWORD = "STOP"
async def reader(channel: redis.client.PubSub):
while True:
try:
async with async_timeout.timeout(1):
message = await channel.get_message(ignore_subscribe_messages=True)
if message is not None:
print(f"(Reader) Message Received: {message}")
if message["data"].decode() == STOPWORD:
print("(Reader) STOP")
break
await asyncio.sleep(0.01)
except asyncio.TimeoutError:
pass
r = await redis.from_url("redis://localhost")
pubsub = r.pubsub()
await pubsub.psubscribe("channel:*")
future = asyncio.create_task(reader(pubsub))
await r.publish("channel:1", "Hello")
await r.publish("channel:2", "World")
await r.publish("channel:1", STOPWORD)
await future
Explanation: Subscribing to channels matching a glob-style pattern:
End of explanation
import asyncio
from redis.asyncio.sentinel import Sentinel
sentinel = Sentinel([("localhost", 26379), ("sentinel2", 26379)])
r = sentinel.master_for("mymaster")
ok = await r.set("key", "value")
assert ok
val = await r.get("key")
assert val == b"value"
Explanation: Sentinel Client
The Sentinel client requires a list of Redis Sentinel addresses to connect to and start discovering services.
Calling aioredis.sentinel.Sentinel.master_for or aioredis.sentinel.Sentinel.slave_for methods will return Redis clients connected to specified services monitored by Sentinel.
Sentinel client will detect failover and reconnect Redis clients automatically.
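A companion sketch for the read path, reusing the same service name: slave_for returns a client bound to a replica of the monitored service.
from redis.asyncio.sentinel import Sentinel

sentinel = Sentinel([("localhost", 26379), ("sentinel2", 26379)])
replica = sentinel.slave_for("mymaster")
val_from_replica = await replica.get("key")  # may briefly lag the master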
End of explanation |
14,657 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Modeling and Simulation in Python
Case study
Step3: Testing make_system
Step4: Testing slope_func
Step5: Now we can run the simulation.
Step6: Plotting r
Step7: We can also see the relationship between y and r, which I derive analytically in the book.
Step8: And here's the figure from the book.
Step9: We can use interpolation to find the time when y is 47 meters.
Step10: At that point r is 55 mm, which is Rmax, as expected.
Step11: The total amount of rotation is 1253 rad.
Step12: Unrolling
For unrolling the paper, we need more units
Step13: And a few more parameters in the Condition object.
Step15: make_system computes rho_h, which we'll need to compute moment of inertia, and k, which we'll use to compute r.
Step16: Testing make_system
Step18: Here's how we compute I as a function of r
Step19: When r is Rmin, I is small.
Step20: As r increases, so does I.
Step22: Here's the slope function.
Step23: Testing slope_func
Step24: Now we can run the simulation.
Step25: And look at the results.
Step26: Extrating the time series
Step27: Plotting theta
Step28: Plotting omega
Step29: Plotting y
Step30: Here's the figure from the book.
Step31: Yo-yo
Exercise
Step33: Here's a make_system function that computes I and k based on the system parameters.
I estimated I by modeling the yo-yo as a solid cylinder with uniform density (see here). In reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.
Step34: Testing make_system
Step35: Write a slope function for this system, using these results from the book
Step36: Test your slope function with the initial conditions.
Step37: Then run the simulation.
Step38: Check the final conditions. If things have gone according to plan, the final value of y should be close to 0.
Step39: Plot the results.
Step40: theta should increase and accelerate.
Step41: y should decrease and accelerate down. | Python Code:
# If you want the figures to appear in the notebook,
# and you want to interact with them, use
# %matplotlib notebook
# If you want the figures to appear in the notebook,
# and you don't want to interact with them, use
# %matplotlib inline
# If you want the figures to appear in separate windows, use
# %matplotlib qt5
# To switch from one to another, you have to select Kernel->Restart
%matplotlib inline
from modsim import *
kg = UNITS.kilogram
m = UNITS.meter
s = UNITS.second
N = UNITS.newton
condition = Condition(mass = 0.03 * kg,
fraction = 1 / 3,
k = 9810.0 * N / m,
duration = 0.3 * s,
L = 0.05 * m,
d = 0.005 * m,
v1 = 0 * m / s,
v2 = 0 * m / s,
g = 9.8 * m / s**2)
condition = Condition(mass = 0.03,
fraction = 1 / 3,
k = 9810.0,
duration = 0.3,
L = 0.05,
d = 0.005,
v1 = 0,
v2 = 0,
g = 9.8)
def make_system(condition):
Make a system object.
condition: Condition with mass, fraction, k, duration, L, d, v1, v2, g
returns: System with init, m1, m2, k, L, ts
unpack(condition)
x1 = L - d # upper mass
x2 = 0 # lower mass
init = State(x1=x1, x2=x2, v1=v1, v2=v2)
m1, m2 = fraction*mass, (1-fraction)*mass
ts = linspace(0, duration, 1001)
return System(init=init, m1=m1, m2=m2, k=k, L=L, ts=ts)
Explanation: Modeling and Simulation in Python
Case study: Hopper optimization
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
system = make_system(condition)
system
system.init
def slope_func(state, t, system):
Computes the derivatives of the state variables.
state: State object with x1, x2, v1, v2
t: time
system: System object with m1, m2, k, L
returns: sequence of derivatives
x1, x2, v1, v2 = state
unpack(system)
dx = x1 - x2
f_spring = k * (L - dx)
a1 = f_spring/m1 - g
a2 = -f_spring/m2 - g
if t < 0.003 and a2 < 0:
a2 = 0
return v1, v2, a1, a2
Explanation: Testing make_system
End of explanation
slope_func(system.init, 0, system)
Explanation: Testing slope_func
End of explanation
run_odeint(system, slope_func)
system.results.tail()
plot(system.results.x1)
plot(system.results.x2)
plot(system.results.x1 - system.results.x2)
plot(ys, color='green', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
Explanation: Now we can run the simulation.
End of explanation
plot(rs, color='red', label='r')
decorate(xlabel='Time (s)',
ylabel='Radius (mm)')
Explanation: Plotting r
End of explanation
plot(rs, ys, color='purple')
decorate(xlabel='Radius (mm)',
ylabel='Length (m)',
legend=False)
Explanation: We can also see the relationship between y and r, which I derive analytically in the book.
End of explanation
subplot(3, 1, 1)
plot(thetas, label='theta')
decorate(ylabel='Angle (rad)')
subplot(3, 1, 2)
plot(ys, color='green', label='y')
decorate(ylabel='Length (m)')
subplot(3, 1, 3)
plot(rs, color='red', label='r')
decorate(xlabel='Time(s)',
ylabel='Radius (mm)')
savefig('chap11-fig01.pdf')
Explanation: And here's the figure from the book.
End of explanation
T = interp_inverse(ys, kind='cubic')
t_end = T(47)
t_end
Explanation: We can use interpolation to find the time when y is 47 meters.
End of explanation
R = interpolate(rs, kind='cubic')
R(t_end)
Explanation: At that point r is 55 mm, which is Rmax, as expected.
End of explanation
THETA = interpolate(thetas, kind='cubic')
THETA(t_end)
Explanation: The total amount of rotation is 1253 rad.
End of explanation
kg = UNITS.kilogram
N = UNITS.newton
Explanation: Unrolling
For unrolling the paper, we need more units:
End of explanation
condition = Condition(Rmin = 0.02 * m,
Rmax = 0.055 * m,
Mcore = 15e-3 * kg,
Mroll = 215e-3 * kg,
L = 47 * m,
tension = 2e-4 * N,
duration = 180 * s)
Explanation: And a few more parameters in the Condition object.
End of explanation
def make_system(condition):
Make a system object.
condition: Condition with Rmin, Rmax, Mcore, Mroll,
L, tension, and duration
returns: System with init, k, rho_h, Rmin, Rmax,
Mcore, Mroll, ts
unpack(condition)
init = State(theta = 0 * radian,
omega = 0 * radian/s,
y = L)
area = pi * (Rmax**2 - Rmin**2)
rho_h = Mroll / area
k = (Rmax**2 - Rmin**2) / 2 / L / radian
ts = linspace(0, duration, 101)
return System(init=init, k=k, rho_h=rho_h,
Rmin=Rmin, Rmax=Rmax,
Mcore=Mcore, Mroll=Mroll,
ts=ts)
Explanation: make_system computes rho_h, which we'll need to compute moment of inertia, and k, which we'll use to compute r.
End of explanation
system = make_system(condition)
system
system.init
Explanation: Testing make_system
End of explanation
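As a quick sanity check of k (a sketch using the system and condition defined above): the slope function later computes r = sqrt(2*k*y + Rmin**2), so a full roll (y = L) should come out to roughly Rmax, while an empty roll (y = 0) reduces to Rmin by construction.
r_full = sqrt(2 * system.k * condition.L + system.Rmin**2)   # expect ~Rmax (0.055 m)
r_full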
def moment_of_inertia(r, system):
Moment of inertia for a roll of toilet paper.
r: current radius of roll in meters
system: System object with Mcore, rho, Rmin, Rmax
returns: moment of inertia in kg m**2
unpack(system)
Icore = Mcore * Rmin**2
Iroll = pi * rho_h / 2 * (r**4 - Rmin**4)
return Icore + Iroll
Explanation: Here's how we compute I as a function of r:
End of explanation
moment_of_inertia(system.Rmin, system)
Explanation: When r is Rmin, I is small.
End of explanation
moment_of_inertia(system.Rmax, system)
Explanation: As r increases, so does I.
End of explanation
def slope_func(state, t, system):
Computes the derivatives of the state variables.
state: State object with theta, omega, y
t: time
system: System object with Rmin, k, Mcore, rho_h, tension
returns: sequence of derivatives
theta, omega, y = state
unpack(system)
r = sqrt(2*k*y + Rmin**2)
I = moment_of_inertia(r, system)
tau = r * tension
alpha = tau / I
dydt = -r * omega
return omega, alpha, dydt
Explanation: Here's the slope function.
End of explanation
slope_func(system.init, 0*s, system)
Explanation: Testing slope_func
End of explanation
run_odeint(system, slope_func)
Explanation: Now we can run the simulation.
End of explanation
system.results.tail()
Explanation: And look at the results.
End of explanation
thetas = system.results.theta
omegas = system.results.omega
ys = system.results.y
Explanation: Extrating the time series
End of explanation
plot(thetas, label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
Explanation: Plotting theta
End of explanation
plot(omegas, color='orange', label='omega')
decorate(xlabel='Time (s)',
ylabel='Angular velocity (rad/s)')
Explanation: Plotting omega
End of explanation
plot(ys, color='green', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
Explanation: Plotting y
End of explanation
subplot(3, 1, 1)
plot(thetas, label='theta')
decorate(ylabel='Angle (rad)')
subplot(3, 1, 2)
plot(omegas, color='orange', label='omega')
decorate(ylabel='Angular velocity (rad/s)')
subplot(3, 1, 3)
plot(ys, color='green', label='y')
decorate(xlabel='Time(s)',
ylabel='Length (m)')
savefig('chap11-fig02.pdf')
Explanation: Here's the figure from the book.
End of explanation
condition = Condition(Rmin = 8e-3 * m,
Rmax = 16e-3 * m,
Rout = 35e-3 * m,
mass = 50e-3 * kg,
L = 1 * m,
g = 9.8 * m / s**2,
duration = 1 * s)
Explanation: Yo-yo
Exercise: Simulate the descent of a yo-yo. How long does it take to reach the end of the string?
Rmin is the radius of the axle. Rmax is the radius of the axle plus rolled string.
Rout is the radius of the yo-yo body. mass is the total mass of the yo-yo, ignoring the string.
L is the length of the string.
g is the acceleration of gravity.
End of explanation
def make_system(condition):
Make a system object.
condition: Condition with Rmin, Rmax, Rout,
mass, L, g, duration
returns: System with init, k, Rmin, Rmax, mass,
I, g, ts
unpack(condition)
init = State(theta = 0 * radian,
omega = 0 * radian/s,
y = L,
v = 0 * m / s)
I = mass * Rout**2 / 2
k = (Rmax**2 - Rmin**2) / 2 / L / radian
ts = linspace(0, duration, 101)
return System(init=init, k=k,
Rmin=Rmin, Rmax=Rmax,
mass=mass, I=I, g=g,
ts=ts)
Explanation: Here's a make_system function that computes I and k based on the system parameters.
I estimated I by modeling the yo-yo as a solid cylinder with uniform density (see here). In reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.
End of explanation
system = make_system(condition)
system
system.init
Explanation: Testing make_system
End of explanation
# Solution goes here
Explanation: Write a slope function for this system, using these results from the book:
$ r = \sqrt{2 k y + R_{min}^2} $
$ T = m g I / I^* $
$ a = -m g r^2 / I^* $
$ \alpha = m g r / I^* $
where $I^*$ is the augmented moment of inertia, $I + m r^2$.
Hint: If y is less than 0, it means you have reached the end of the string, so the equation for r is no longer valid. In this case, the simplest thing to do is return the sequence of derivatives 0, 0, 0, 0
End of explanation
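One way to fill in the # Solution goes here cell, sketched directly from the equations and the hint above (not necessarily the book's official solution):
def slope_func(state, t, system):
    """Computes the derivatives of the state variables.

    state: State object with theta, omega, y, v
    t: time
    system: System object with Rmin, Rmax, mass, I, k, g
    returns: sequence of derivatives
    """
    theta, omega, y, v = state
    unpack(system)

    if y < 0:
        # the yo-yo has reached the end of the string
        return 0, 0, 0, 0

    r = sqrt(2*k*y + Rmin**2)
    I_star = I + mass * r**2        # augmented moment of inertia

    alpha = mass * g * r / I_star
    a = -mass * g * r**2 / I_star

    return omega, alpha, v, a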
slope_func(system.init, 0*s, system)
Explanation: Test your slope function with the initial conditions.
End of explanation
run_odeint(system, slope_func)
Explanation: Then run the simulation.
End of explanation
system.results.tail()
Explanation: Check the final conditions. If things have gone according to plan, the final value of y should be close to 0.
End of explanation
thetas = system.results.theta
ys = system.results.y
Explanation: Plot the results.
End of explanation
plot(thetas, label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
Explanation: theta should increase and accelerate.
End of explanation
plot(ys, color='green', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
Explanation: y should decrease and accelerate down.
End of explanation |
14,658 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Branching GP Regression
Step1: Create the tree
Specify where the branching point is
Step2: Specify where to evaluate the kernel
Step3: Specify the kernel and its hyperparameters
These determine how smooth and variable the branching functions are
Step4: Sample the kernel
Step5: Plot the sample
Step6: You can rerun the same code as many times as you want and get different sample paths
We can also sample independent functions. This is the assumption in the overlapping mixtures of GPs model (OMGP) discussed in the paper. | Python Code:
import pickle
import gpflow
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from BranchedGP import BranchingTree as bt
from BranchedGP import VBHelperFunctions as bplot
from BranchedGP import branch_kernParamGPflow as bk
plt.style.use("ggplot")
%matplotlib inline
Explanation: Branching GP Regression: Sampling from the model
Alexis Boukouvalas, 2017
This notebook shows how to sample from a BGP model
End of explanation
branchingPoint = 0.5
tree = bt.BinaryBranchingTree(
0, 10, fDebug=False
) # set to true to print debug messages
tree.add(None, 1, branchingPoint) # single branching point
(fm, fmb) = tree.GetFunctionBranchTensor()
Explanation: Create the tree
Specify where the branching point is
End of explanation
t = np.linspace(0.01, 1, 10)
(XForKernel, indicesBranch, Xtrue) = tree.GetFunctionIndexList(t, fReturnXtrue=True)
Explanation: Specify where to evaluate the kernel
End of explanation
Bvalues = np.expand_dims(np.asarray(tree.GetBranchValues()), 1)
KbranchParam = bk.BranchKernelParam(gpflow.kernels.RBF(1), fm, b=Bvalues)
KbranchParam.kern.lengthscales = 2
KbranchParam.kern.variance = 1
Explanation: Specify the kernel and its hyperparameters
These determine how smooth and variable the branching functions are
End of explanation
samples = bk.SampleKernel(KbranchParam, XForKernel)
Explanation: Sample the kernel
End of explanation
bk.PlotSample(XForKernel, samples)
Explanation: Plot the sample
End of explanation
indKernel = bk.IndKern(gpflow.kernels.RBF(1))
samples = bk.SampleKernel(indKernel, XForKernel)
bk.PlotSample(XForKernel, samples)
Explanation: You can rerun the same code as many times as you want and get different sample paths
We can also sample independent functions. This is the assumption in the overlapping mixtures of GPs model (OMGP) discussed in the paper.
End of explanation |
14,659 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Language Model Reports
Since my local machine does not have GPU support and thus can't perform many model training and evaluation tasks in a reasonable amount of time, I have created a script evaluation.lua in this repository which generates reports on a language model and serializes them in JSON. This notebook will consume these reports and explore them. It will also include some information about the models these reports were made for that is not included in the serialized report.
Step1: 25K Shallower, Broader Network Trained With Adam
I created a model with 1 LSTM layer, a dropout of 0.1, and a hidden size of 300. Here we can look at it's structure
Step2: Loss vs Epoch
Loss is charted vs. current epoch, with labels of the learning rate used at each epoch
<b> NOTE
Step3: Notably, this model has a loss below 6 for sequences that are ~10 words or less.
Generation Samples
We can also look at examples of how it generates text. Below are side by side comparisons of the labels from the training/validation/test set and the sentence the model generated. A Special <G> token will be placed in the generated sequence to illustrate where the model's input ends and it's generation begins. I chose to look at only short sequences, as the models each have lower loss for these, and might stand a chance of answering correctly.
Step4: Conclusion
This model has lower loss and doesn't seem to make quite as many gibberish mistakes in generation (double periods, long strings of <UNK>, etc.) This is perhaps too small of a sample to make a real conclusion though. Like the previous model, it tends to favor abrupt endings, as it likely is being punished less for only getting a couple tokens wrong instead of a long sequence of wrong answers. It is also leaves an idea hanging, ending sentences with "the", etc.
25K Deeper, Thinner Network
I created a model with 2 LSTM layers, a dropout of 0.1, and a hidden size of 300. Here we can look at it's structure
Step5: Loss Versus Sequence Length
We can examine the relationship between loss and sequence length. We can expect higher losses with increasing sequence length as more information must be remembered by the model as it generates, and the model is only trained on examples of sequence length 30 or less. We can generate a scatter plot of batch loss v. sequence length of batch (all batches are same size)
Step6: Generation Samples
We can also look at examples of how it generates text. Below are side by side comparisons of the labels from the training/validation/test set and the sentence the model generated. A Special <G> token will be placed in the generated sequence to illustrate where the model's input ends and it's generation begins.
Training Set Generation Examples
Step7: Conclusion
While we can see this model has the expected distribution of losses over each set, and does not over fit, it doesn't generate coherent conclusions to the input sentence fragments. In terms of generation quality, it leaves a lot to be desired.
Same Network, Earlier Epoch
I created a model with 2 LSTM layers, a dropout of 0.1, and a hidden size of 300. Here we can look at it's structure
Step8: Loss Versus Sequence Length
We can examine the relationship between loss and sequence length. We can expect higher losses with increasing sequence length as more information must be remembered by the model as it generates, and the model is only trained on examples of sequence length 30 or less. We can generate a scatter plot of batch loss v. sequence length of batch (all batches are same size)
Step9: Generation Samples
We can also look at examples of how it generates text. Below are side by side comparisons of the labels from the training/validation/test set and the sentence the model generated. A Special <G> token will be placed in the generated sequence to illustrate where the model's input ends and it's generation begins.
Training Set Generation Examples | Python Code:
# load some requirements
import json
import matplotlib.pyplot as plt
with open('reports/unweightednoavg_one_layer_12.json', 'r') as f:
first_report = json.loads(f.read())
with open('reports/unweightednoavg_7.json', 'r') as f:
second_report = json.loads(f.read())
with open('reports/unweightednoavg_4.json', 'r') as f:
third_report = json.loads(f.read())
Explanation: Language Model Reports
Since my local machine does not have GPU support and thus can't perform many model training and evaluation tasks in a reasonable amount of time, I have created a script evaluation.lua in this repository which generates reports on a language model and serializes them in JSON. This notebook will consume these reports and explore them. It will also include some information about the models these reports were made for that is not included in the serialized report.
End of explanation
# print out the losses from the report
print 'Training set perplexity:', first_report['train_perplexity']
print 'Validation set perplexity:', first_report['valid_perplexity']
print 'Test set perplexity:', first_report['test_perplexity']
Explanation: 25K Shallower, Broader Network Trained With Adam
I created a model with 1 LSTM layer, a dropout of 0.1, and a hidden size of 300. Here we can look at its structure:
nn.Sequential {
[input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> output]
(1): nn.LookupTable
(2): nn.LSTM(100 -> 512)
(3): nn.Dropout(0.10000)
(4): nn.DynamicView
(5): nn.Linear(300 -> 25000)
(6): nn.LogSoftMax
}
Notably, this one is a layer shallower and has a larger hidden size, with slightly reduced dropout. While it is not captured in the report, this model converged to its final loss more quickly than the previous model. The use of Adam also led to considerably lower loss.
Perplexity on the Datasets
This model experienced a reduced perplexity across each of the datasets:
End of explanation
with open('logs/log_series.json', 'r') as f:
logs = json.loads(f.read())
for k in logs.keys():
plt.plot(logs[k][0], logs[k][1], label=str(k))
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
# function for turning report data into scatter plot
def scatterize_batch_loss(report_batch_loss):
x = []
y = []
for i, v in enumerate(report_batch_loss):
if i > 50:
break # We'll only consider ones of length 50 and below to get a better view of the data in the chart.
if isinstance(v, list):
x.extend([i + 1 for j in v]) # same batch size for all losses in v
y.extend([j for j in v])
else:
if v is not None:
x.append(i)
y.append(v)
return x, y
%matplotlib inline
x, y = scatterize_batch_loss(first_report['train_batch_perplexities'])
plt.scatter(x, y)
plt.title('Training Perplexity v. Sequence Length')
plt.xlabel('Sequence Length')
plt.ylabel('Perplexity')
plt.show()
%matplotlib inline
x, y = scatterize_batch_loss(first_report['valid_batch_perplexities'])
plt.scatter(x, y)
plt.title('Validation Perplexity v. Sequence Length')
plt.xlabel('Sequence Length')
plt.ylabel('Perplexity')
plt.show()
%matplotlib inline
x, y = scatterize_batch_loss(first_report['test_batch_perplexities'])
plt.scatter(x, y)
plt.title('Test Perplexity v. Sequence Length')
plt.xlabel('Sequence Length')
plt.ylabel('Perplexity')
plt.show()
Explanation: Loss vs Epoch
Loss is charted vs. current epoch, with labels of the learning rate used at each epoch
<b> NOTE: In the first several series, loss is on the last training example. Current implementation calculates average loss, but this is not reflected in early series </b>
End of explanation
def print_sample(sample):
seq = sample['generated'].split(' ')
seq.insert(sample['supplied_length'] + 1, '<G>')
gold = sample['gold'].split(' ')
gold.insert(sample['supplied_length'], '<G>')
print('Gend: ' + ' '.join(seq))
print('True: ' + seq[1] + ' ' + ' '.join(gold) + '\n')
for sample in first_report['train_samples'][5:]:
print_sample(sample)
for sample in first_report['valid_samples'][0:5]:
print_sample(sample)
for sample in first_report['test_samples'][0:5]:
print_sample(sample)
Explanation: Notably, this model has a loss below 6 for sequences that are ~10 words or less.
Generation Samples
We can also look at examples of how it generates text. Below are side by side comparisons of the labels from the training/validation/test set and the sentence the model generated. A Special <G> token will be placed in the generated sequence to illustrate where the model's input ends and it's generation begins. I chose to look at only short sequences, as the models each have lower loss for these, and might stand a chance of answering correctly.
End of explanation
# print out the losses from the report
print 'Training set loss:', second_report['train_perplexity']
print 'Validation set loss:', second_report['valid_perplexity']
print 'Test set loss:', second_report['test_perplexity']
with open('logs/log_series_2_layer.json', 'r') as f:
logs = json.loads(f.read())
for k in logs.keys():
plt.plot(logs[k][0], logs[k][1], label=str(k))
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: Conclusion
This model has lower loss and doesn't seem to make quite as many gibberish mistakes in generation (double periods, long strings of <UNK>, etc.) This is perhaps too small of a sample to make a real conclusion though. Like the previous model, it tends to favor abrupt endings, as it likely is being punished less for only getting a couple tokens wrong instead of a long sequence of wrong answers. It is also leaves an idea hanging, ending sentences with "the", etc.
25K Deeper, Thinner Network
I created a model with 2 LSTM layers, a dropout of 0.1, and a hidden size of 300. Here we can look at it's structure:
nn.Sequential {
[input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> output]
(1): nn.LookupTable
(2): nn.LSTM(100 -> 300)
(3): nn.Dropout(0.100000)
(4): nn.LSTM(300 -> 300)
(5): nn.Dropout(0.100000)
(6): nn.DynamicView
(7): nn.Linear(300 -> 25000)
(8): nn.LogSoftMax
}
Losses on the Datasets
I have created 3 datasets, built from the Google Billion Words data set. I trained on a version of the train_small data set with a reduced vocabulary of 25000, in batches of size 50, with a sequence length cutoff of 30. I did not tune any hyperparameters with the validation set, but this could be future work. There is also a small test set.
End of explanation
%matplotlib inline
x, y = scatterize_batch_loss(second_report['train_batch_perplexities'])
plt.scatter(x, y)
plt.title('Training Perplexity v. Sequence Length')
plt.xlabel('Sequence Length')
plt.ylabel('Perplexity')
plt.show()
%matplotlib inline
x, y = scatterize_batch_loss(second_report['valid_batch_perplexities'])
plt.scatter(x, y)
plt.title('Validation Perplexity v. Sequence Length')
plt.xlabel('Sequence Length')
plt.ylabel('Perplexity')
plt.show()
%matplotlib inline
x, y = scatterize_batch_loss(second_report['test_batch_perplexities'])
plt.scatter(x, y)
plt.title('Test Perplexity v. Sequence Length')
plt.xlabel('Sequence Length')
plt.ylabel('Perplexity')
plt.show()
Explanation: Loss Versus Sequence Length
We can examine the relationship between loss and sequence length. We can expect higher losses with increasing sequence length as more information must be remembered by the model as it generates, and the model is only trained on examples of sequence length 30 or less. We can generate a scatter plot of batch loss v. sequence length of batch (all batches are same size):
End of explanation
for sample in second_report['train_samples']:
print_sample(sample)
for sample in second_report['valid_samples'][0:5]:
print_sample(sample)
for sample in second_report['test_samples'][0:5]:
print_sample(sample)
Explanation: Generation Samples
We can also look at examples of how it generates text. Below are side by side comparisons of the labels from the training/validation/test set and the sentence the model generated. A Special <G> token will be placed in the generated sequence to illustrate where the model's input ends and it's generation begins.
Training Set Generation Examples
End of explanation
# print out the losses from the report
print 'Training set loss:', third_report['train_perplexity']
print 'Validation set loss:', third_report['valid_perplexity']
print 'Test set loss:', third_report['test_perplexity']
with open('logs/log_series_2_layer.json', 'r') as f:
logs = json.loads(f.read())
for k in logs.keys():
plt.plot(logs[k][0], logs[k][1], label=str(k))
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: Conclusion
While we can see this model has the expected distribution of losses over each set, and does not over fit, it doesn't generate coherent conclusions to the input sentence fragments. In terms of generation quality, it leaves a lot to be desired.
Same Network, Earlier Epoch
I created a model with 2 LSTM layers, a dropout of 0.1, and a hidden size of 300. Here we can look at its structure:
nn.Sequential {
[input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> output]
(1): nn.LookupTable
(2): nn.LSTM(100 -> 300)
(3): nn.Dropout(0.100000)
(4): nn.LSTM(300 -> 300)
(5): nn.Dropout(0.100000)
(6): nn.DynamicView
(7): nn.Linear(300 -> 25000)
(8): nn.LogSoftMax
}
Losses on the Datasets
I have created 3 datasets, built from the Google Billion Words data set. I trained on a version of the train_small data set with a reduced vocabulary of 25000, in batches of size 50, with a sequence length cutoff of 30. I did not tune any hyperparameters with the validation set, but this could be future work. There is also a small test set.
End of explanation
%matplotlib inline
x, y = scatterize_batch_loss(third_report['train_batch_perplexities'])
plt.scatter(x, y)
plt.title('Training Perplexity v. Sequence Length')
plt.xlabel('Sequence Length')
plt.ylabel('Perplexity')
plt.show()
%matplotlib inline
x, y = scatterize_batch_loss(third_report['valid_batch_perplexities'])
plt.scatter(x, y)
plt.title('Validation Perplexity v. Sequence Length')
plt.xlabel('Sequence Length')
plt.ylabel('Perplexity')
plt.show()
%matplotlib inline
x, y = scatterize_batch_loss(third_report['test_batch_perplexities'])
plt.scatter(x, y)
plt.title('Test Perplexity v. Sequence Length')
plt.xlabel('Sequence Length')
plt.ylabel('Perplexity')
plt.show()
Explanation: Loss Versus Sequence Length
We can examine the relationship between loss and sequence length. We can expect higher losses with increasing sequence length as more information must be remembered by the model as it generates, and the model is only trained on examples of sequence length 30 or less. We can generate a scatter plot of batch loss v. sequence length of batch (all batches are same size):
End of explanation
for sample in third_report['train_samples']:
print_sample(sample)
for sample in third_report['valid_samples'][0:5]:
print_sample(sample)
for sample in third_report['test_samples'][0:5]:
print_sample(sample)
Explanation: Generation Samples
We can also look at examples of how it generates text. Below are side by side comparisons of the labels from the training/validation/test set and the sentence the model generated. A Special <G> token will be placed in the generated sequence to illustrate where the model's input ends and it's generation begins.
Training Set Generation Examples
End of explanation |
14,660 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sebastian Raschka, 2015
https
Step1: <br>
<br>
Overview
Unsupervised dimensionality reduction via principal component analysis
Total and explained variance
Feature transformation
Principal component analysis in scikit-learn
Supervised data compression via linear discriminant analysis
Computing the scatter matrices
Selecting linear discriminants for the new feature subspace
Projecting samples onto the new feature space
LDA via scikit-learn
Using kernel principal component analysis for nonlinear mappings
Kernel functions and the kernel trick
Implementing a kernel principal component analysis in Python
Example 1 – separating half-moon shapes
Example 2 – separating concentric circles
Projecting new data points
Kernel principal component analysis in scikit-learn
Summary
<br>
<br>
Step2: Unsupervised dimensionality reduction via principal component analysis
Step3: <hr>
Note
Step4: <hr>
Splitting the data into 70% training and 30% test subsets.
Step5: Standardizing the data.
Step6: Note
Accidentally, I wrote X_test_std = sc.fit_transform(X_test) instead of X_test_std = sc.transform(X_test). In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to the training set. However, as remember from Chapter 3, the correct way is to re-use parameters from the training set if we are doing any kind of transformation -- the test set should basically stand for "new, unseen" data.
My initial typo reflects a common mistake is that some people are not re-using these parameters from the model training/building and standardize the new data "from scratch." Here's simple example to explain why this is a problem.
Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length")
Step7: Note
Step8: <br>
<br>
Feature transformation
Step9: Note
Depending on which version of NumPy and LAPACK you are using, you may obtain the the Matrix W with its signs flipped. E.g., the matrix shown in the book was printed as
Step10: <br>
<br>
Principal component analysis in scikit-learn
Step11: Training logistic regression classifier using the first 2 principal components.
Step12: <br>
<br>
Supervised data compression via linear discriminant analysis
Step13: <br>
<br>
Computing the scatter matrices
Calculate the mean vectors for each class
Step14: Compute the within-class scatter matrix
Step15: Better
Step16: Compute the between-class scatter matrix
Step17: <br>
<br>
Selecting linear discriminants for the new feature subspace
Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$
Step18: Note
Step19: <br>
<br>
Projecting samples onto the new feature space
Step20: <br>
<br>
LDA via scikit-learn
Step21: <br>
<br>
Using kernel principal component analysis for nonlinear mappings
Step23: <br>
<br>
Implementing a kernel principal component analysis in Python
Step24: <br>
Example 1
Step25: <br>
Example 2
Step27: <br>
<br>
Projecting new data points
Step28: <br>
<br>
Kernel principal component analysis in scikit-learn | Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,scipy,matplotlib,scikit-learn
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
Explanation: Sebastian Raschka, 2015
https://github.com/rasbt/python-machine-learning-book
Python Machine Learning - Code Examples
Chapter 5 - Compressing Data via Dimensionality Reduction
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
from IPython.display import Image
%matplotlib inline
Explanation: <br>
<br>
Overview
Unsupervised dimensionality reduction via principal component analysis
Total and explained variance
Feature transformation
Principal component analysis in scikit-learn
Supervised data compression via linear discriminant analysis
Computing the scatter matrices
Selecting linear discriminants for the new feature subspace
Projecting samples onto the new feature space
LDA via scikit-learn
Using kernel principal component analysis for nonlinear mappings
Kernel functions and the kernel trick
Implementing a kernel principal component analysis in Python
Example 1 – separating half-moon shapes
Example 2 – separating concentric circles
Projecting new data points
Kernel principal component analysis in scikit-learn
Summary
<br>
<br>
End of explanation
Image(filename='./images/05_01.png', width=400)
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
Explanation: Unsupervised dimensionality reduction via principal component analysis
End of explanation
df_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
Explanation: <hr>
Note:
If the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at ./../datasets/wine/wine.data.
Or you could fetch it via
End of explanation
from sklearn.cross_validation import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
Explanation: <hr>
Splitting the data into 70% training and 30% test subsets.
End of explanation
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
Explanation: Standardizing the data.
End of explanation
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
Explanation: Note
Accidentally, I wrote X_test_std = sc.fit_transform(X_test) instead of X_test_std = sc.transform(X_test). In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to the training set. However, as you may remember from Chapter 3, the correct way is to re-use parameters from the training set if we are doing any kind of transformation -- the test set should basically stand for "new, unseen" data.
My initial typo reflects a common mistake: some people do not re-use these parameters from the model training/building and standardize the new data "from scratch." Here's a simple example to explain why this is a problem.
Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):
train_1: 10 cm -> class_2
train_2: 20 cm -> class_2
train_3: 30 cm -> class_1
mean: 20, std.: 8.2
After standardization, the transformed feature values are
train_std_1: -1.21 -> class_2
train_std_2: 0 -> class_2
train_std_3: 1.21 -> class_1
Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:
new_4: 5 cm -> class ?
new_5: 6 cm -> class ?
new_6: 7 cm -> class ?
If we look at the unstandardized "length" values in our training dataset, it is intuitive to say that all of these samples likely belong to class_2. However, if we standardize these by re-computing the standard deviation and mean from the new data, you would get similar values as before in the training set, and your classifier would (probably incorrectly) classify only samples 4 and 5 as class 2:
new_std_4: -1.21 -> class 2
new_std_5: 0 -> class 2
new_std_6: 1.21 -> class 1
However, if we use the parameters from the "training set standardization," we'd get the values:
new_std_4: -1.84 -> class 2
new_std_5: -1.72 -> class 2
new_std_6: -1.59 -> class 2
The values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously. Thus, it only makes sense that the standardized features of the "new samples" are much lower than every standardized feature in the training set.
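The same point in a few lines of code (a sketch using the toy numbers above):
<pre>>>> import numpy as np
>>> from sklearn.preprocessing import StandardScaler
>>> train = np.array([[10.], [20.], [30.]])
>>> new = np.array([[5.], [6.], [7.]])
>>> sc = StandardScaler().fit(train)
>>> sc.transform(new)                    # correct: re-uses the training mean/std
>>> StandardScaler().fit_transform(new)  # wrong: standardizes the new data "from scratch"</pre>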
Eigendecomposition of the covariance matrix.
End of explanation
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/pca1.png', dpi=300)
plt.show()
Explanation: Note:
Above, I used the numpy.linalg.eig function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors.
<pre>>>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)</pre>
This is not really a "mistake," but probably suboptimal. It would be better to use numpy.linalg.eigh in such cases, which has been designed for Hermetian matrices. The latter always returns real eigenvalues; whereas the numerically less stable np.linalg.eig can decompose nonsymmetric square matrices, you may find that it returns complex eigenvalues in certain cases. (S.R.)
<br>
<br>
Total and explained variance
End of explanation
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(reverse=True)
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
Explanation: <br>
<br>
Feature transformation
End of explanation
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
X_train_std[0].dot(w)
Explanation: Note
Depending on which version of NumPy and LAPACK you are using, you may obtain the matrix W with its signs flipped. E.g., the matrix shown in the book was printed as:
[[ 0.14669811 0.50417079]
[-0.24224554 0.24216889]
[-0.02993442 0.28698484]
[-0.25519002 -0.06468718]
[ 0.12079772 0.22995385]
[ 0.38934455 0.09363991]
[ 0.42326486 0.01088622]
[-0.30634956 0.01870216]
[ 0.30572219 0.03040352]
[-0.09869191 0.54527081]
Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have
$$\Sigma v = \lambda v,$$
where $\lambda$ is our eigenvalue,
then $-v$ is also an eigenvector that has the same eigenvalue, since
$$\Sigma(-v) = -\Sigma v = -\lambda v = \lambda(-v).$$
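You can confirm this numerically with the eigenpairs computed above (a quick sketch):
<pre>>>> v, lam = eigen_pairs[0][1], eigen_pairs[0][0]
>>> np.allclose(cov_mat.dot(-v), lam * (-v))  # -v satisfies the same eigenvalue equation</pre>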
End of explanation
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
Explanation: <br>
<br>
Principal component analysis in scikit-learn
End of explanation
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca3.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca4.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
Explanation: Training logistic regression classifier using the first 2 principal components.
End of explanation
Image(filename='./images/05_06.png', width=400)
Explanation: <br>
<br>
Supervised data compression via linear discriminant analysis
End of explanation
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
Explanation: <br>
<br>
Computing the scatter matrices
Calculate the mean vectors for each class:
End of explanation
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
Explanation: Compute the within-class scatter matrix:
End of explanation
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
Explanation: Better: covariance matrix since classes are not equally distributed:
End of explanation
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
Explanation: Compute the between-class scatter matrix:
End of explanation
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
Explanation: <br>
<br>
Selecting linear discriminants for the new feature subspace
Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
End of explanation
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/lda1.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
Explanation: Note:
Above, I used the numpy.linalg.eig function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors.
<pre>>>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)</pre>
This is not really a "mistake," but probably suboptimal. It would be better to use numpy.linalg.eigh in such cases, which has been designed for Hermitian matrices. The latter always returns real eigenvalues, whereas the numerically less stable np.linalg.eig (which can also decompose nonsymmetric square matrices) may return complex eigenvalues in certain cases. (S.R.)
Sort eigenvectors in decreasing order of the eigenvalues:
End of explanation
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0] * (-1),
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/lda2.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Projecting samples onto the new feature space
End of explanation
from sklearn.lda import LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda3.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda4.png', dpi=300)
plt.show()
Explanation: <br>
<br>
LDA via scikit-learn
End of explanation
Image(filename='./images/05_11.png', width=500)
Explanation: <br>
<br>
Using kernel principal component analysis for nonlinear mappings
End of explanation
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# numpy.eigh returns them in sorted order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
X_pc = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
return X_pc
Explanation: <br>
<br>
Implementing a kernel principal component analysis in Python
End of explanation
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/half_moon_1.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/half_moon_2.png', dpi=300)
plt.show()
from matplotlib.ticker import FormatStrFormatter
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.tight_layout()
# plt.savefig('./figures/half_moon_3.png', dpi=300)
plt.show()
Explanation: <br>
Example 1: Separating half-moon shapes
End of explanation
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/circles_1.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_2.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_3.png', dpi=300)
plt.show()
Explanation: <br>
Example 2: Separating concentric circles
End of explanation
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# numpy.eigh returns them in sorted order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
alphas = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
# Collect the corresponding eigenvalues
lambdas = [eigvals[-i] for i in range(1, n_components + 1)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[25]
x_new
x_proj = alphas[25] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Projecting new data points
End of explanation
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('./figures/scikit_kpca.png', dpi=300)
plt.show()
Explanation: <br>
<br>
Kernel principal component analysis in scikit-learn
End of explanation |
14,661 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: 시계열 예측
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 날씨 데이터세트
이 튜토리얼은 <a class="external" href="https
Step3: 이 튜토리얼은 시간별 예측만 다루므로 10분 간격부터 1시간까지 데이터를 서브 샘플링하는 것으로 시작합니다.
Step4: 데이터를 살펴보겠습니다. 다음은 처음 몇 개의 행입니다.
Step5: 시간이 지남에 따라 몇 가지 특성이 전개됩니다.
Step6: 검사 및 정리하기
다음으로 데이터세트의 통계를 살펴봅니다.
Step7: 풍속
한 가지 주목할 점은 풍속의 min 값, wv (m/s) 및 max. wv (m/s) 열입니다. 이 -9999는 문제가 있는 것으로 보입니다. 별도의 풍향 열이 있으므로 속도는 >=0여야 합니다. 값을 0으로 대체합니다.
Step8: 특성 엔지니어링
모델을 본격적으로 빌드하기 전에 데이터를 이해하고 모델에 적절한 형식의 데이터를 전달하는 것이 중요합니다.
바람
데이터의 마지막 열인 wd (deg)는 도 단위로 바람의 방향을 나타냅니다. 각도가 있으면 모델 입력으로 좋지 않으므로 360°와 0°는 서로 가까워야 하며 부드럽게 휘어져야 합니다. 바람이 불지 않으면 방향은 중요하지 않습니다.
현재, 바람 데이터의 분포는 다음과 같습니다.
Step9: 그러나 풍향과 속도 열을 바람 벡터로 변환하면 모델이 해석하기가 더 쉽습니다.
Step10: 바람 벡터의 분포는 모델이 올바르게 해석하기에 훨씬 더 간단합니다.
Step11: 시간
마찬가지로 Date Time 열은 매우 유용하지만 이 문자열 형식으로는 유용하지 않습니다. 우선 초로 변환합니다.
Step12: 풍향과 유사하게 초 단위의 시간은 유용한 모델 입력이 아닙니다. 날씨 데이터이므로 하루 및 연 단위의 주기성이 명확합니다. 주기성을 처리할 수 있는 방법에는 여러 가지가 있습니다.
사용 가능한 신호로 변환하는 간단한 방법은 sin 및 cos를 사용하여 시간을 명확한 "하루 중 시간" 및 "연중 시간" 신호로 변환하는 것입니다.
Step13: 그러면 모델이 가장 중요한 빈도 특성에 액세스할 수 있습니다. 이 경우 어떤 빈도가 중요한지 미리 알고 있었습니다.
모르는 경우 fft를 사용하여 중요한 빈도를 결정할 수 있습니다. 시간에 따른 온도의 tf.signal.rfft를 보면 여기서 가정한 내용이 확인됩니다. 1/year 및 1/day 근처에서 빈도 피크가 확실하다는 것을 알 수 있습니다.
Step14: 데이터 분할
훈련, 검증 및 테스트 세트에 (70%, 20%, 10%) 분할을 사용합니다. 분할하기 전에 데이터가 임의로 셔플되지 않습니다. 이것은 두 가지 이유 때문입니다.
데이터를 연속된 샘플의 창으로 자르는 것이 여전히 가능합니다.
모델을 훈련한 후 수집된 데이터를 바탕으로 평가하므로 검증/테스트 결과가 보다 현실적입니다.
Step15: 데이터 정규화
신경망을 훈련하기 전에 특성의 크기를 정하는 것이 중요합니다. 정규화는 이 크기 조정을 수행하는 일반적인 방법입니다. 평균을 빼고 각 특성의 표준 편차로 나눕니다.
모델이 검증 및 테스트 세트의 값에 액세스할 수 없도록 훈련 데이터를 사용해서만 평균 및 표준 편차를 계산해야 합니다.
또한 모델이 훈련할 때 훈련 세트의 미래 값에 액세스할 수 없어야 하고 이 정규화가 이동 평균을 사용하여 수행되어야 한다고 말할 수도 있습니다. 이 내용은 본 튜토리얼의 중점 사항이 아니며, 검증 및 테스트 세트가 있기 때문에 (다소) 정직한 메트릭을 얻을 수 있습니다. 따라서 단순화를 위해 이 튜토리얼에서는 단순 평균을 사용합니다.
Step16: 이제 특성의 분포를 살펴봅니다. 일부 특성은 꼬리가 길지만 -9999 풍속 값과 같은 명백한 오류는 없습니다.
Step17: 데이터 창 작업
이 튜토리얼의 모델은 데이터의 연속된 샘플 창을 기반으로 일련의 예측을 수행합니다.
입력 창의 주요 특성은 다음과 같습니다.
입력 및 레이블 창의 너비(타임스텝 수)
각 사이의 시간 오프셋
입력, 레이블 또는 둘 모두로 사용되는 특성
이 튜토리얼은 다양한 모델(선형, DNN, CNN 및 RNN 모델 포함)을 빌드하고 다음 두 가지 목적으로 이 모델을 사용합니다.
단일 출력 및 다중 출력 예측
단일 타임스텝 및 다중 타임스텝 예측
이 섹션에서는 모든 모델에 재사용할 수 있도록 데이터 창 작업을 구현하는 부분에 중점을 둡니다.
작업 및 모델 유형에 따라 다양한 데이터 창을 생성할 수 있습니다. 다음은 몇 가지 예입니다.
예를 들어, 24시간의 기록이 주어졌을 때 앞으로 24시간의 미래를 단일 예측하려면 다음과 같은 창을 정의할 수 있습니다.
6시간의 기록이 주어졌을 때 앞으로 1시간의 미래를 예측하는 모델에는 다음과 같은 창이 필요합니다.
이 섹션의 나머지 부분에서는 WindowGenerator 클래스를 정의합니다. 이 클래스는 다음을 수행할 수 있습니다.
위의 다이어그램과 같이 인덱스와 오프셋을 처리합니다.
특성 창을 (features, labels) 쌍으로 분할합니다.
결과 창의 내용을 플롯합니다.
tf.data.Dataset를 사용하여 훈련, 평가 및 테스트 데이터로부터 이러한 창을 여러 배치로 효율적으로 생성합니다.
1. 인덱스 및 오프셋
우선 WindowGenerator 클래스를 만듭니다. __init__ 메서드에는 입력 및 레이블 인덱스에 필요한 모든 논리가 포함됩니다.
또한 train, eval 및 test 데이터 프레임을 입력으로 사용합니다. 이러한 데이터 프레임은 나중에 창의 tf.data.Dataset로 변환됩니다.
Step18: 이 섹션의 시작 부분에서 다이어그램에 나타낸 두 개의 창을 만드는 코드는 다음과 같습니다.
Step19: 2. 분할
연속적인 입력 목록이 주어지면 split_window 메서드는 이 목록을 입력 창과 레이블 창으로 변환합니다.
위의 예제 w2는 다음과 같이 분할됩니다.
이 다이어그램에는 데이터의 features 축이 나와 있지 않지만 이 split_window 함수는 단일 출력과 다중 출력 예에서 모두 사용될 수 있도록 label_columns를 처리합니다.
Step20: 다음을 사용해 보세요.
Step21: 일반적으로 TensorFlow의 데이터는 가장 바깥 쪽 인덱스가 여러 예제("배치" 차원)에 걸쳐 있는 배열로 구성됩니다. 중간 인덱스는 "시간" 또는 "공간"(너비, 높이) 차원입니다. 가장 안쪽 인덱스는 특성입니다.
위의 코드는 두 배치의 7-타임스텝 창을 사용하며 각 타임스텝에는 19개의 특성이 있습니다. 그러면 이것을 한 배치의 6-타임스텝과 19개의 특성 입력 및 1-타임스텝 1-특성 레이블로 분할합니다. 레이블에는 하나의 특성만 있는데, WindowGenerator가 label_columns=['T (degC)']로 초기화되었기 때문입니다. 우선 이 튜토리얼에서는 단일 출력 레이블을 예측하는 모델을 빌드합니다.
3. 플롯하기
다음은 분할 창을 간단하게 시각화할 수 있는 플롯 메서드입니다.
Step22: 이 플롯은 항목이 참조하는 시간을 기준으로 입력, 레이블 및 (나중에) 예측값을 정렬합니다.
Step23: 다른 열을 플롯할 수 있지만 예제 창 w2 구성에는 T (degC) 열에 대한 레이블만 있습니다.
Step24: 4. tf.data.Dataset 만들기
마지막으로, 이 make_dataset 메서드는 시계열 DataFrame을 가져와 preprocessing.timeseries_dataset_from_array 함수를 이용해 (input_window, label_window) 쌍의 tf.data.Dataset로 변환합니다.
Step26: WindowGenerator 객체는 훈련, 검증 및 테스트 데이터를 보유합니다. 위의 make_dataset 메서드를 사용하여 tf.data.Datasets로 여기에 액세스하기 위한 특성을 추가합니다. 또한 간편한 액세스와 플롯을 위한 표준 예제 배치를 추가합니다.
Step27: 이제 WindowGenerator 객체가 tf.data.Dataset 객체에 대한 액세스 권한을 부여하므로 데이터를 쉽게 반복할 수 있습니다.
Dataset.element_spec 속성은 데이터세트 요소의 구조, dtypes 및 형상을 알려줍니다.
Step28: Dataset를 반복하면 구체적인 배치가 생성됩니다.
Step29: 단일 스텝 모델
이러한 종류의 데이터를 기반으로 빌드할 수 있는 가장 간단한 모델은 현재 조건에만 기초하여 미래로 1 타임스텝(1시간) 진행된 단일 특성 값을 예측하는 모델입니다.
따라서 1시간 미래의 T (degC) 값을 예측하는 모델을 빌드하는 것으로 시작하겠습니다.
다음과 같은 단일 스텝 (input, label) 쌍을 생성하도록 WindowGenerator 객체를 구성합니다.
Step30: window 객체는 훈련, 검증 및 테스트 세트로부터 tf.data.Datasets를 생성하므로 데이터 배치를 쉽게 반복할 수 있습니다.
Step31: 기준
훈련 가능한 모델을 빌드하기 전에 나중에 더 복잡한 모델과 비교하기 위한 포인트로 성능 기준을 갖는 것이 좋습니다.
첫 번째 작업은 모든 특성의 현재 값을 고려하여 1시간 미래의 온도를 예측하는 것입니다. 현재 값에는 현재 온도가 포함됩니다.
따라서 예측으로 현재 온도를 반환하여 "변화 없음"을 예측하는 모델로 시작하겠습니다. 온도는 천천히 변하기 때문에 이것은 합리적인 기준입니다. 물론, 더 미래로 들어가면 이 기준의 예측 효과는 떨어질 것입니다.
Step32: 이 모델을 인스턴스화하고 평가합니다.
Step33: 몇 가지 성능 메트릭을 출력했지만 모델이 얼마나 잘 동작하는지에 대한 느낌은 주지 않습니다.
WindowGenerator에는 플롯 메서드가 있지만 단일 샘플만으로는 플롯이 그다지 흥미롭지 않습니다. 따라서 한 번에 24시간 범위의 연속 입력과 레이블을 생성하는 더 넓은 WindowGenerator를 만듭니다.
wide_window는 모델이 동작하는 방식을 변화시키지 않습니다. 이 모델은 단일 입력 타임스텝을 기반으로 1시간 미래를 예측합니다. 여기서 time 축은 batch 축과 같은 역할을 합니다. 각 예측은 타임스텝 사이의 상호 작용 없이 독립적으로 이루어집니다.
Step34: 이 확장된 창은 어떠한 코드 변경 없이 동일한 baseline 모델에 직접 전달할 수 있습니다. 이는 입력과 레이블이 동일한 수의 타임스텝을 가지며 기준이 입력을 출력으로 전달하기 때문에 가능합니다.
Step35: 기준 모델의 예측값을 플롯하면 1시간씩 오른쪽으로 이동한 단순한 레이블임을 알 수 있습니다.
Step36: 위의 세 가지 예제 플롯에서 단일 스텝 모델은 24시간 동안 실행됩니다. 이에 관해 몇 가지 설명이 필요합니다.
파란색 "입력" 라인은 각 타임스텝의 입력 온도를 보여줍니다. 이 모델은 모든 특성을 수신하며 이 플롯은 온도만 표시합니다.
녹색 "레이블" 점은 목표 예측값을 나타냅니다. 이러한 점은 입력 시간이 아니라 예측 시간에 표시됩니다. 레이블의 범위가 입력에 상대적으로 한 스텝 이동하는 이유가 여기에 있습니다.
주황색 "예측" 십자는 각 출력 타임스텝에 대한 모델의 예측입니다. 모델이 완벽하게 예측하는 경우 예측값은 "레이블" 바로 위에 놓여집니다.
선형 모델
이 작업에 적용할 수 있는 가장 간단한 훈련 가능한 모델은 입력과 출력 사이에 선형 변환을 삽입하는 것입니다. 이 경우 타임스텝의 출력은 해당 스텝에만 의존합니다.
activation 세트가 없는 layers.Dense는 선형 모델입니다. 레이어는 데이터의 마지막 축을 (batch, time, inputs)에서 (batch, time, units)로만 변환하며, batch 및 time 축의 모든 항목에 독립적으로 적용됩니다.
Step37: 이 튜토리얼은 많은 모델을 훈련하므로 훈련 절차를 하나의 함수 패키지로 만듭니다.
Step38: 모델을 훈련하고 성능을 평가합니다.
Step39: baseline 모델과 마찬가지로 선형 모델은 넓은 범위의 배치에서 호출할 수 있습니다. 이러한 방식으로 모델은 연속적인 타임스텝에 대해 일련의 독립적인 예측을 수행합니다. time 축은 다른 batch 축처럼 작동합니다. 각 타임스텝에서 예측 사이에 상호 작용은 없습니다.
Step40: 다음은 wide_widow에 대한 예제 예측값을 플롯한 내용입니다. 많은 경우 예측이 단순히 입력 온도를 반환하는 것보다는 분명히 더 낮지만 몇 가지 경우에는 더 나쁘다는 사실에 주목하세요.
Step41: 선형 모델의 한 가지 장점은 해석하기가 상대적으로 간단하다는 것입니다. 레이어의 가중치를 가져와 각 입력에 할당된 가중치를 볼 수 있습니다.
Step42: 때로 모델은 입력 T (degC)에 가장 많은 가중치를 두지 않습니다. 이것은 무작위 초기화의 위험 중 하나입니다.
밀집
실제로 여러 타임스텝에서 동작하는 모델을 적용하기 전에 더 깊고 강력한 단일 입력 스텝 모델의 성능을 확인하는 것이 좋습니다.
다음 모델은 입력과 출력 사이에 몇 개의 Dense 레이어를 쌓는다는 점을 제외하면 linear 모델과 유사합니다.
Step43: 다중 스텝 밀집
단일 타임스텝 모델에는 입력의 현재 값에 대한 컨텍스트가 없습니다. 시간에 따라 입력 특성이 어떻게 변하는지 볼 수 없습니다. 이 문제를 해결하려면 모델이 예측을 수행할 때 여러 타임스텝에 액세스해야 합니다.
baseline , linear 및 dense 모델은 각 타임스텝을 독립적으로 처리했습니다. 여기서 모델은 단일 출력을 생성하기 위해 여러 타임스텝을 입력으로 사용합니다.
3시간의 입력과 1시간의 레이블 배치를 생성하는 WindowGenerator를 만듭니다.
Window의 shift 매개변수는 두 창의 끝에 상대적입니다.
Step44: layers.Flatten을 모델의 첫 번째 레이어로 추가하여 다중 입력 스텝 창에서 dense 모델을 훈련할 수 있습니다.
Step45: 이 접근법의 주된 단점은 결과적인 모델이 정확히 이 형상의 입력 창에서만 실행될 수 있다는 것입니다.
Step46: 다음 섹션의 컨볼루셔널 모델은 이 문제를 해결합니다.
컨볼루션 신경망
컨볼루션 레이어(layers.Conv1D)도 각 예측에 대한 입력으로 여러 타임스텝을 사용합니다.
다음은 컨볼루션으로 다시 작성한 multi_step_dense와 동일한 모델입니다.
다음 변경 사항에 주목하세요.
layers.Flatten과 첫 번째 layers.Dense는 layers.Conv1D로 대체됩니다.
컨볼루션이 출력에서 시간 축을 유지하므로 layers.Reshape는 이 더 이상 필요하지 않습니다.
Step47: 예제 배치에서 실행하여 모델이 예상된 형상으로 출력을 생성하는지 확인합니다.
Step48: conv_window에서 훈련하고 평가하면 multi_step_dense 모델과 유사한 성능을 제공해야 합니다.
Step49: 이 conv_model과 multi_step_dense 모델의 차이점은 conv_model은 모든 길이의 입력에서 실행될 수 있다는 것입니다. 컨볼루셔널 레이어는 입력의 슬라이딩 윈도우에 적용됩니다.
더 넓은 입력에서 실행하면 더 넓은 출력이 생성됩니다.
Step50: 출력은 입력보다 짧습니다. 훈련 또는 플롯 작업을 수행하려면 레이블과 예상의 길이가 동일해야 합니다. 따라서 레이블과 예측 길이가 일치하도록 몇 개의 추가 입력 타임스텝으로 넓은 창을 생성하는 WindowGenerator를 빌드합니다.
Step51: 이제 더 넓은 창에 모델의 예측값을 플롯할 수 있습니다. 첫 번째 예측 전 3개의 입력 타임스텝에 주목하세요. 여기서 모든 예측은 이전 3개의 타임스텝에 기초합니다.
Step52: 순환 신경망
Recurrent Neural Network(RNN)는 시계열 데이터에 적합한 신경망 유형입니다. RNN은 시계열을 단계별로 처리하여 타임스텝 사이에서 내부 상태를 유지합니다.
자세한 내용은 텍스트 생성 튜토리얼 또는 RNN 가이드를 읽어보세요.
이 튜토리얼에서는 Long Short Term Memory(LSTM)이라는 RNN 레이어를 사용합니다.
모든 keras RNN 레이어에 대한 중요한 생성자 인수는 return_sequences 인수입니다. 이 설정은 다음 두 가지 방법 중 하나로 레이어를 구성할 수 있습니다.
기본값인 False인 경우 레이어는 최종 타임스텝의 출력만 반환하여 단일 예측을 수행하기 전에 모델이 내부 상태를 준비할 시간을 줍니다.
True이면 레이어가 각 입력에 대한 출력을 반환합니다. 다음과 같은 경우에 유용합니다.
RNN 레이어 쌓기
여러 타임스텝에서 동시에 모델 훈련
Step53: return_sequences=True이면 모델을 한 번에 24시간 분량 데이터에 대해 훈련할 수 있습니다.
참고
Step54: 성능
이 데이터세트를 사용하면 일반적으로 각 모델의 성능이 이전 모델보다 약간 더 좋습니다.
Step55: 다중 출력 모델
지금까지 모델은 모두 단일 타임스텝에 대해 단일 출력 특성 T (degC)를 예측했습니다.
이러한 모든 모델은 간단히 출력 레이어의 단위 수를 변경하고 labels에 모든 특성을 포함하도록 훈련 창을 조정하여 여러 특성을 예측하도록 변환할 수 있습니다.
Step56: 레이블의 features 축은 이제 1이 아닌 입력과 동일한 깊이를 갖습니다.
기준
여기서는 동일한 기준 모델을 사용할 수 있지만 이번에는 특정 label_index를 선택하는 대신 모든 특성을 반복합니다.
Step57: 밀집
Step58: RNN
Step59: <a id="residual"></a>
고급
Step60: 성능
다음은 이러한 다중 출력 모델의 전반적인 성능입니다.
Step61: 위의 성능은 모든 모델 출력에 대한 평균입니다.
다중 스텝 모델
이전 섹션의 단일 출력 및 다중 출력 모델은 모두 미래 1시간의 단일 타임스텝 예측을 수행했습니다.
이 섹션에서는 이러한 모델을 확장하여 다중 타임스텝 예측을 수행하는 방법을 살펴봅니다.
다중 스텝 예측에서 모델은 일정 범위의 미래 값을 예측하는 방법을 학습해야 합니다. 따라서 한 미래 시점만 예측하는 단일 스텝 모델과 달리 다중 스텝 모델은 미래 값의 시퀀스를 예측합니다.
대략적으로 두 가지 접근 방식이 있습니다.
전체 시계열이 한 번에 예측되는 싱글샷 예측
모델이 단일 스텝 예측만 수행하고 출력이 입력으로 피드백되는 자기 회귀적 예측
이 섹션에서는 모든 모델이 모든 출력 타임스텝에 걸쳐 모든 특성을 예측합니다.
다중 스텝 모델의 경우, 훈련 데이터는 다시 시간별 샘플로 구성됩니다. 그러나 여기에서 모델은 과거의 24시간을 고려하여 미래 24시간을 예측하는 방법을 학습합니다.
다음은 데이터세트로부터 이러한 조각을 생성하는 Window 객체입니다.
Step62: 기준
이 작업의 간단한 기준은 필요한 출력 타임스텝 수에 대해 마지막 입력 타임스텝을 반복하는 것입니다.
Step63: 이 작업은 24시간이 주어졌을 때 24시간을 예측하는 것이므로 또 다른 간단한 접근 방법은 내일도 비슷하다는 가정 하에 전날을 반복하는 것입니다.
Step64: 싱글샷 모델
이 문제에 대한 한 가지 높은 수준의 접근 방법은 모델이 한 번에 전체 시퀀스 예측을 수행하는 "싱글샷" 모델을 사용하는 것입니다.
이 모델은 OUT_STEPS*features 출력 단위를 이용해 layers.Dense로 효율적으로 구현할 수 있습니다. 이 모델은 이 출력의 형상을 필요한 (OUTPUT_STEPS, features)로 바꾸기만 하면 됩니다.
선형
마지막 입력 타임스텝을 기반으로 하는 단순한 선형 모델은 기준 모델보다 성능이 더 좋지만 강력하지 못합니다. 이 모델은 선형 프로젝션을 이용해 단일 입력 타임스텝으로부터 OUTPUT_STEPS 타임스텝을 예측해야 합니다. 주로 하루 중 시간과 연중 시간을 기반으로 하는 행동의 저차원 조각만 캡처할 수 있습니다.
Step65: 밀집
입력과 출력 사이에 layers.Dense를 추가하면 선현 모델이 더 강력해지지만 여전히 단일 입력에 기반합니다.
Step66: CNN
컨볼루션 모델은 고정 너비 기록을 기반으로 예측을 수행하므로 시간에 따라 상황이 어떻게 변하는지 볼 수 있어 밀집 모델보다 성능을 높일 수 있습니다.
Step67: RNN
반복 모델은 모델이 수행하는 예측과 관련이 있는 경우 긴 입력 기록을 사용하는 방법을 학습할 수 있습니다. 여기서 모델은 다음 24시간에 대한 단일 예측을 수행하기 전에 24시간 동안 내부 상태를 축적합니다.
이 싱글샷 형식에서 LSTM은 마지막 타임스텝에서만 출력을 생성하면 되므로 return_sequences=False를 설정합니다.
Step68: 고급
Step69: 이 모델에 필요한 첫 번째 메서드는 입력을 기반으로 내부 상태를 초기화하는 warmup 메서드입니다. 일단 훈련되면 이 상태는 입력 기록의 관련 부분을 캡처합니다. 이는 앞서 알아본 단일 스텝 LSTM 모델과 동일합니다.
Step70: 이 메서드는 단일 타임스텝 예측과 LSTM의 내부 상태를 반환합니다.
Step71: RNN의 상태 및 초기 예측을 사용하여 이제 이전의 각 스텝에서 수행한 예측을 입력으로 제공하여 모델을 계속 반복할 수 있습니다.
출력 예측을 수집하는 가장 간단한 방법은 Python 목록을 사용하고 루프 후에 tf.stack을 사용하는 것입니다.
참고
Step72: 예제 입력에서 이 모델을 테스트 실행합니다.
Step73: 이제 모델을 훈련합니다.
Step74: 성능
이 문제에 대해 모델 복잡성이 증가함에 따라 분명히 이득이 감소합니다.
Step75: 이 튜토리얼의 전반부에서 소개한 다중 출력 모델에 대한 메트릭은 모든 출력 특성에 평균화된 성능을 보여줍니다. 이러한 성능은 유사하지만 출력 타임스텝에서도 평균화됩니다. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import os
import datetime
import IPython
import IPython.display
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
mpl.rcParams['figure.figsize'] = (8, 6)
mpl.rcParams['axes.grid'] = False
Explanation: 시계열 예측
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/structured_data/time_series"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/structured_data/time_series.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Google Colab에서 실행하기</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/structured_data/time_series.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/structured_data/time_series.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
이 튜토리얼에서는 TensorFlow를 사용한 시계열 예측을 소개합니다. Convolutional/Recurrent Neural Network(CNN 및 RNN)를 포함하여 몇 가지 다른 스타일의 모델을 빌드합니다.
이 내용은 각각 하위 항목이 있는 두 부분으로 나누어 생각합니다.
단일 타임스텝 예측:
단일 특성
모든 특성
다중 스텝 예측:
싱글샷: 모두 한 번에 예측합니다.
자가 회귀: 한 번에 하나의 예측을 수행하고 결과를 모델로 피드백합니다.
설정
End of explanation
zip_path = tf.keras.utils.get_file(
origin='https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip',
fname='jena_climate_2009_2016.csv.zip',
extract=True)
csv_path, _ = os.path.splitext(zip_path)
Explanation: 날씨 데이터세트
이 튜토리얼은 <a class="external" href="https://www.bgc-jena.mpg.de/wetter/">막스 플랑크 생물 지구화학 연구소</a>에서 기록한 <a class="external" href="https://www.bgc-jena.mpg.de">날씨 시계열 데이터세트</a>를 사용합니다.
이 데이터세트에는 온도, 대기압 및 습도와 같은 14가지 특성이 있습니다. 이러한 데이터는 2003년부터 시작해 10분 간격으로 수집되었습니다. 효율성을 위해 2009년과 2016년 사이에 수집된 데이터만 사용하겠습니다. 이 데이터세트 부분은 François Chollet이 자신이 저술한 책 Deep Learning with Python을 위해 준비했습니다.
End of explanation
df = pd.read_csv(csv_path)
# slice [start:stop:step], starting from index 5 take every 6th record.
df = df[5::6]
date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')
Explanation: 이 튜토리얼은 시간별 예측만 다루므로 10분 간격부터 1시간까지 데이터를 서브 샘플링하는 것으로 시작합니다.
End of explanation
df.head()
Explanation: 데이터를 살펴보겠습니다. 다음은 처음 몇 개의 행입니다.
End of explanation
plot_cols = ['T (degC)', 'p (mbar)', 'rho (g/m**3)']
plot_features = df[plot_cols]
plot_features.index = date_time
_ = plot_features.plot(subplots=True)
plot_features = df[plot_cols][:480]
plot_features.index = date_time[:480]
_ = plot_features.plot(subplots=True)
Explanation: 시간이 지남에 따라 몇 가지 특성이 전개됩니다.
End of explanation
df.describe().transpose()
Explanation: 검사 및 정리하기
다음으로 데이터세트의 통계를 살펴봅니다.
End of explanation
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0
max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0
# The above inplace edits are reflected in the DataFrame
df['wv (m/s)'].min()
Explanation: 풍속
한 가지 주목할 점은 풍속의 min 값, wv (m/s) 및 max. wv (m/s) 열입니다. 이 -9999는 문제가 있는 것으로 보입니다. 별도의 풍향 열이 있으므로 속도는 >=0여야 합니다. 값을 0으로 대체합니다.
End of explanation
plt.hist2d(df['wd (deg)'], df['wv (m/s)'], bins=(50, 50), vmax=400)
plt.colorbar()
plt.xlabel('Wind Direction [deg]')
plt.ylabel('Wind Velocity [m/s]')
Explanation: 특성 엔지니어링
모델을 본격적으로 빌드하기 전에 데이터를 이해하고 모델에 적절한 형식의 데이터를 전달하는 것이 중요합니다.
바람
데이터의 마지막 열인 wd (deg)는 도 단위로 바람의 방향을 나타냅니다. 각도가 있으면 모델 입력으로 좋지 않으므로 360°와 0°는 서로 가까워야 하며 부드럽게 휘어져야 합니다. 바람이 불지 않으면 방향은 중요하지 않습니다.
현재, 바람 데이터의 분포는 다음과 같습니다.
End of explanation
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')
# Convert to radians.
wd_rad = df.pop('wd (deg)')*np.pi / 180
# Calculate the wind x and y components.
df['Wx'] = wv*np.cos(wd_rad)
df['Wy'] = wv*np.sin(wd_rad)
# Calculate the max wind x and y components.
df['max Wx'] = max_wv*np.cos(wd_rad)
df['max Wy'] = max_wv*np.sin(wd_rad)
Explanation: 그러나 풍향과 속도 열을 바람 벡터로 변환하면 모델이 해석하기가 더 쉽습니다.
End of explanation
plt.hist2d(df['Wx'], df['Wy'], bins=(50, 50), vmax=400)
plt.colorbar()
plt.xlabel('Wind X [m/s]')
plt.ylabel('Wind Y [m/s]')
ax = plt.gca()
ax.axis('tight')
Explanation: 바람 벡터의 분포는 모델이 올바르게 해석하기에 훨씬 더 간단합니다.
End of explanation
timestamp_s = date_time.map(datetime.datetime.timestamp)
Explanation: 시간
마찬가지로 Date Time 열은 매우 유용하지만 이 문자열 형식으로는 유용하지 않습니다. 우선 초로 변환합니다.
End of explanation
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
plt.plot(np.array(df['Day sin'])[:25])
plt.plot(np.array(df['Day cos'])[:25])
plt.xlabel('Time [h]')
plt.title('Time of day signal')
Explanation: 풍향과 유사하게 초 단위의 시간은 유용한 모델 입력이 아닙니다. 날씨 데이터이므로 하루 및 연 단위의 주기성이 명확합니다. 주기성을 처리할 수 있는 방법에는 여러 가지가 있습니다.
사용 가능한 신호로 변환하는 간단한 방법은 sin 및 cos를 사용하여 시간을 명확한 "하루 중 시간" 및 "연중 시간" 신호로 변환하는 것입니다.
End of explanation
fft = tf.signal.rfft(df['T (degC)'])
f_per_dataset = np.arange(0, len(fft))
n_samples_h = len(df['T (degC)'])
hours_per_year = 24*365.2524
years_per_dataset = n_samples_h/(hours_per_year)
f_per_year = f_per_dataset/years_per_dataset
plt.step(f_per_year, np.abs(fft))
plt.xscale('log')
plt.ylim(0, 400000)
plt.xlim([0.1, max(plt.xlim())])
plt.xticks([1, 365.2524], labels=['1/Year', '1/day'])
_ = plt.xlabel('Frequency (log scale)')
Explanation: 그러면 모델이 가장 중요한 빈도 특성에 액세스할 수 있습니다. 이 경우 어떤 빈도가 중요한지 미리 알고 있었습니다.
모르는 경우 fft를 사용하여 중요한 빈도를 결정할 수 있습니다. 시간에 따른 온도의 tf.signal.rfft를 보면 여기서 가정한 내용이 확인됩니다. 1/year 및 1/day 근처에서 빈도 피크가 확실하다는 것을 알 수 있습니다.
End of explanation
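As a small aside (not part of the original tutorial), the dominant frequencies can also be picked out numerically rather than read off the plot; a sketch, assuming fft and f_per_year are defined as in the cell above:
# Largest-magnitude FFT coefficients (index 0, the mean, is skipped).
magnitude = np.abs(fft.numpy())
top = np.argsort(magnitude[1:])[::-1][:5] + 1
for idx in top:
    print(f'{f_per_year[idx]:10.2f} cycles/year  (magnitude {magnitude[idx]:.0f})')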
column_indices = {name: i for i, name in enumerate(df.columns)}
n = len(df)
train_df = df[0:int(n*0.7)]
val_df = df[int(n*0.7):int(n*0.9)]
test_df = df[int(n*0.9):]
num_features = df.shape[1]
Explanation: 데이터 분할
훈련, 검증 및 테스트 세트에 (70%, 20%, 10%) 분할을 사용합니다. 분할하기 전에 데이터가 임의로 셔플되지 않습니다. 이것은 두 가지 이유 때문입니다.
데이터를 연속된 샘플의 창으로 자르는 것이 여전히 가능합니다.
모델을 훈련한 후 수집된 데이터를 바탕으로 평가하므로 검증/테스트 결과가 보다 현실적입니다.
End of explanation
train_mean = train_df.mean()
train_std = train_df.std()
train_df = (train_df - train_mean) / train_std
val_df = (val_df - train_mean) / train_std
test_df = (test_df - train_mean) / train_std
Explanation: 데이터 정규화
신경망을 훈련하기 전에 특성의 크기를 정하는 것이 중요합니다. 정규화는 이 크기 조정을 수행하는 일반적인 방법입니다. 평균을 빼고 각 특성의 표준 편차로 나눕니다.
모델이 검증 및 테스트 세트의 값에 액세스할 수 없도록 훈련 데이터를 사용해서만 평균 및 표준 편차를 계산해야 합니다.
또한 모델이 훈련할 때 훈련 세트의 미래 값에 액세스할 수 없어야 하고 이 정규화가 이동 평균을 사용하여 수행되어야 한다고 말할 수도 있습니다. 이 내용은 본 튜토리얼의 중점 사항이 아니며, 검증 및 테스트 세트가 있기 때문에 (다소) 정직한 메트릭을 얻을 수 있습니다. 따라서 단순화를 위해 이 튜토리얼에서는 단순 평균을 사용합니다.
End of explanation
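For completeness, a sketch of the moving-average alternative mentioned above (not used in this tutorial); raw_train_df is a hypothetical name standing for the un-normalized training split, and the window length is an assumed value:
# Trailing rolling statistics avoid leaking future values into the normalization.
window = 24 * 30  # roughly 30 days of hourly data (illustrative choice)
rolling_mean = raw_train_df.rolling(window, min_periods=1).mean()
rolling_std = raw_train_df.rolling(window, min_periods=1).std()
train_df_rolling = (raw_train_df - rolling_mean) / (rolling_std + 1e-6)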
df_std = (df - train_mean) / train_std
df_std = df_std.melt(var_name='Column', value_name='Normalized')
plt.figure(figsize=(12, 6))
ax = sns.violinplot(x='Column', y='Normalized', data=df_std)
_ = ax.set_xticklabels(df.keys(), rotation=90)
Explanation: 이제 특성의 분포를 살펴봅니다. 일부 특성은 꼬리가 길지만 -9999 풍속 값과 같은 명백한 오류는 없습니다.
End of explanation
class WindowGenerator():
def __init__(self, input_width, label_width, shift,
train_df=train_df, val_df=val_df, test_df=test_df,
label_columns=None):
# Store the raw data.
self.train_df = train_df
self.val_df = val_df
self.test_df = test_df
# Work out the label column indices.
self.label_columns = label_columns
if label_columns is not None:
self.label_columns_indices = {name: i for i, name in
enumerate(label_columns)}
self.column_indices = {name: i for i, name in
enumerate(train_df.columns)}
# Work out the window parameters.
self.input_width = input_width
self.label_width = label_width
self.shift = shift
self.total_window_size = input_width + shift
self.input_slice = slice(0, input_width)
self.input_indices = np.arange(self.total_window_size)[self.input_slice]
self.label_start = self.total_window_size - self.label_width
self.labels_slice = slice(self.label_start, None)
self.label_indices = np.arange(self.total_window_size)[self.labels_slice]
def __repr__(self):
return '\n'.join([
f'Total window size: {self.total_window_size}',
f'Input indices: {self.input_indices}',
f'Label indices: {self.label_indices}',
f'Label column name(s): {self.label_columns}'])
Explanation: 데이터 창 작업
이 튜토리얼의 모델은 데이터의 연속된 샘플 창을 기반으로 일련의 예측을 수행합니다.
입력 창의 주요 특성은 다음과 같습니다.
입력 및 레이블 창의 너비(타임스텝 수)
각 사이의 시간 오프셋
입력, 레이블 또는 둘 모두로 사용되는 특성
이 튜토리얼은 다양한 모델(선형, DNN, CNN 및 RNN 모델 포함)을 빌드하고 다음 두 가지 목적으로 이 모델을 사용합니다.
단일 출력 및 다중 출력 예측
단일 타임스텝 및 다중 타임스텝 예측
이 섹션에서는 모든 모델에 재사용할 수 있도록 데이터 창 작업을 구현하는 부분에 중점을 둡니다.
작업 및 모델 유형에 따라 다양한 데이터 창을 생성할 수 있습니다. 다음은 몇 가지 예입니다.
예를 들어, 24시간의 기록이 주어졌을 때 앞으로 24시간의 미래를 단일 예측하려면 다음과 같은 창을 정의할 수 있습니다.
6시간의 기록이 주어졌을 때 앞으로 1시간의 미래를 예측하는 모델에는 다음과 같은 창이 필요합니다.
이 섹션의 나머지 부분에서는 WindowGenerator 클래스를 정의합니다. 이 클래스는 다음을 수행할 수 있습니다.
위의 다이어그램과 같이 인덱스와 오프셋을 처리합니다.
특성 창을 (features, labels) 쌍으로 분할합니다.
결과 창의 내용을 플롯합니다.
tf.data.Dataset를 사용하여 훈련, 평가 및 테스트 데이터로부터 이러한 창을 여러 배치로 효율적으로 생성합니다.
1. 인덱스 및 오프셋
우선 WindowGenerator 클래스를 만듭니다. __init__ 메서드에는 입력 및 레이블 인덱스에 필요한 모든 논리가 포함됩니다.
또한 train, eval 및 test 데이터 프레임을 입력으로 사용합니다. 이러한 데이터 프레임은 나중에 창의 tf.data.Dataset로 변환됩니다.
End of explanation
w1 = WindowGenerator(input_width=24, label_width=1, shift=24,
label_columns=['T (degC)'])
w1
w2 = WindowGenerator(input_width=6, label_width=1, shift=1,
label_columns=['T (degC)'])
w2
Explanation: 이 섹션의 시작 부분에서 다이어그램에 나타낸 두 개의 창을 만드는 코드는 다음과 같습니다.
End of explanation
def split_window(self, features):
inputs = features[:, self.input_slice, :]
labels = features[:, self.labels_slice, :]
if self.label_columns is not None:
labels = tf.stack(
[labels[:, :, self.column_indices[name]] for name in self.label_columns],
axis=-1)
# Slicing doesn't preserve static shape information, so set the shapes
# manually. This way the `tf.data.Datasets` are easier to inspect.
inputs.set_shape([None, self.input_width, None])
labels.set_shape([None, self.label_width, None])
return inputs, labels
WindowGenerator.split_window = split_window
Explanation: 2. 분할
연속적인 입력 목록이 주어지면 split_window 메서드는 이 목록을 입력 창과 레이블 창으로 변환합니다.
위의 예제 w2는 다음과 같이 분할됩니다.
이 다이어그램에는 데이터의 features 축이 나와 있지 않지만 이 split_window 함수는 단일 출력과 다중 출력 예에서 모두 사용될 수 있도록 label_columns를 처리합니다.
End of explanation
# Stack three slices, the length of the total window:
example_window = tf.stack([np.array(train_df[:w2.total_window_size]),
np.array(train_df[100:100+w2.total_window_size]),
np.array(train_df[200:200+w2.total_window_size])])
example_inputs, example_labels = w2.split_window(example_window)
print('All shapes are: (batch, time, features)')
print(f'Window shape: {example_window.shape}')
print(f'Inputs shape: {example_inputs.shape}')
print(f'labels shape: {example_labels.shape}')
Explanation: 다음을 사용해 보세요.
End of explanation
w2.example = example_inputs, example_labels
def plot(self, model=None, plot_col='T (degC)', max_subplots=3):
inputs, labels = self.example
plt.figure(figsize=(12, 8))
plot_col_index = self.column_indices[plot_col]
max_n = min(max_subplots, len(inputs))
for n in range(max_n):
plt.subplot(3, 1, n+1)
plt.ylabel(f'{plot_col} [normed]')
plt.plot(self.input_indices, inputs[n, :, plot_col_index],
label='Inputs', marker='.', zorder=-10)
if self.label_columns:
label_col_index = self.label_columns_indices.get(plot_col, None)
else:
label_col_index = plot_col_index
if label_col_index is None:
continue
plt.scatter(self.label_indices, labels[n, :, label_col_index],
edgecolors='k', label='Labels', c='#2ca02c', s=64)
if model is not None:
predictions = model(inputs)
plt.scatter(self.label_indices, predictions[n, :, label_col_index],
marker='X', edgecolors='k', label='Predictions',
c='#ff7f0e', s=64)
if n == 0:
plt.legend()
plt.xlabel('Time [h]')
WindowGenerator.plot = plot
Explanation: 일반적으로 TensorFlow의 데이터는 가장 바깥 쪽 인덱스가 여러 예제("배치" 차원)에 걸쳐 있는 배열로 구성됩니다. 중간 인덱스는 "시간" 또는 "공간"(너비, 높이) 차원입니다. 가장 안쪽 인덱스는 특성입니다.
위의 코드는 두 배치의 7-타임스텝 창을 사용하며 각 타임스텝에는 19개의 특성이 있습니다. 그러면 이것을 한 배치의 6-타임스텝과 19개의 특성 입력 및 1-타임스텝 1-특성 레이블로 분할합니다. 레이블에는 하나의 특성만 있는데, WindowGenerator가 label_columns=['T (degC)']로 초기화되었기 때문입니다. 우선 이 튜토리얼에서는 단일 출력 레이블을 예측하는 모델을 빌드합니다.
3. 플롯하기
다음은 분할 창을 간단하게 시각화할 수 있는 플롯 메서드입니다.
End of explanation
w2.plot()
Explanation: 이 플롯은 항목이 참조하는 시간을 기준으로 입력, 레이블 및 (나중에) 예측값을 정렬합니다.
End of explanation
w2.plot(plot_col='p (mbar)')
Explanation: 다른 열을 플롯할 수 있지만 예제 창 w2 구성에는 T (degC) 열에 대한 레이블만 있습니다.
End of explanation
def make_dataset(self, data):
data = np.array(data, dtype=np.float32)
ds = tf.keras.preprocessing.timeseries_dataset_from_array(
data=data,
targets=None,
sequence_length=self.total_window_size,
sequence_stride=1,
shuffle=True,
batch_size=32,)
ds = ds.map(self.split_window)
return ds
WindowGenerator.make_dataset = make_dataset
Explanation: 4. tf.data.Dataset 만들기
마지막으로, 이 make_dataset 메서드는 시계열 DataFrame을 가져와 preprocessing.timeseries_dataset_from_array 함수를 이용해 (input_window, label_window) 쌍의 tf.data.Dataset로 변환합니다.
End of explanation
@property
def train(self):
return self.make_dataset(self.train_df)
@property
def val(self):
return self.make_dataset(self.val_df)
@property
def test(self):
return self.make_dataset(self.test_df)
@property
def example(self):
"""Get and cache an example batch of `inputs, labels` for plotting."""
result = getattr(self, '_example', None)
if result is None:
# No example batch was found, so get one from the `.train` dataset
result = next(iter(self.train))
# And cache it for next time
self._example = result
return result
WindowGenerator.train = train
WindowGenerator.val = val
WindowGenerator.test = test
WindowGenerator.example = example
Explanation: WindowGenerator 객체는 훈련, 검증 및 테스트 데이터를 보유합니다. 위의 make_dataset 메서드를 사용하여 tf.data.Datasets로 여기에 액세스하기 위한 특성을 추가합니다. 또한 간편한 액세스와 플롯을 위한 표준 예제 배치를 추가합니다.
End of explanation
# Each element is an (inputs, label) pair
w2.train.element_spec
Explanation: 이제 WindowGenerator 객체가 tf.data.Dataset 객체에 대한 액세스 권한을 부여하므로 데이터를 쉽게 반복할 수 있습니다.
Dataset.element_spec 속성은 데이터세트 요소의 구조, dtypes 및 형상을 알려줍니다.
End of explanation
for example_inputs, example_labels in w2.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
Explanation: Dataset를 반복하면 구체적인 배치가 생성됩니다.
End of explanation
single_step_window = WindowGenerator(
input_width=1, label_width=1, shift=1,
label_columns=['T (degC)'])
single_step_window
Explanation: 단일 스텝 모델
이러한 종류의 데이터를 기반으로 빌드할 수 있는 가장 간단한 모델은 현재 조건에만 기초하여 미래로 1 타임스텝(1시간) 진행된 단일 특성 값을 예측하는 모델입니다.
따라서 1시간 미래의 T (degC) 값을 예측하는 모델을 빌드하는 것으로 시작하겠습니다.
다음과 같은 단일 스텝 (input, label) 쌍을 생성하도록 WindowGenerator 객체를 구성합니다.
End of explanation
for example_inputs, example_labels in single_step_window.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
Explanation: window 객체는 훈련, 검증 및 테스트 세트로부터 tf.data.Datasets를 생성하므로 데이터 배치를 쉽게 반복할 수 있습니다.
End of explanation
class Baseline(tf.keras.Model):
def __init__(self, label_index=None):
super().__init__()
self.label_index = label_index
def call(self, inputs):
if self.label_index is None:
return inputs
result = inputs[:, :, self.label_index]
return result[:, :, tf.newaxis]
Explanation: 기준
훈련 가능한 모델을 빌드하기 전에 나중에 더 복잡한 모델과 비교하기 위한 포인트로 성능 기준을 갖는 것이 좋습니다.
첫 번째 작업은 모든 특성의 현재 값을 고려하여 1시간 미래의 온도를 예측하는 것입니다. 현재 값에는 현재 온도가 포함됩니다.
따라서 예측으로 현재 온도를 반환하여 "변화 없음"을 예측하는 모델로 시작하겠습니다. 온도는 천천히 변하기 때문에 이것은 합리적인 기준입니다. 물론, 더 미래로 들어가면 이 기준의 예측 효과는 떨어질 것입니다.
End of explanation
baseline = Baseline(label_index=column_indices['T (degC)'])
baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(single_step_window.val)
performance['Baseline'] = baseline.evaluate(single_step_window.test, verbose=0)
Explanation: 이 모델을 인스턴스화하고 평가합니다.
End of explanation
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1,
label_columns=['T (degC)'])
wide_window
Explanation: 몇 가지 성능 메트릭을 출력했지만 모델이 얼마나 잘 동작하는지에 대한 느낌은 주지 않습니다.
WindowGenerator에는 플롯 메서드가 있지만 단일 샘플만으로는 플롯이 그다지 흥미롭지 않습니다. 따라서 한 번에 24시간 범위의 연속 입력과 레이블을 생성하는 더 넓은 WindowGenerator를 만듭니다.
wide_window는 모델이 동작하는 방식을 변화시키지 않습니다. 이 모델은 단일 입력 타임스텝을 기반으로 1시간 미래를 예측합니다. 여기서 time 축은 batch 축과 같은 역할을 합니다. 각 예측은 타임스텝 사이의 상호 작용 없이 독립적으로 이루어집니다.
End of explanation
print('Input shape:', single_step_window.example[0].shape)
print('Output shape:', baseline(single_step_window.example[0]).shape)
Explanation: 이 확장된 창은 어떠한 코드 변경 없이 동일한 baseline 모델에 직접 전달할 수 있습니다. 이는 입력과 레이블이 동일한 수의 타임스텝을 가지며 기준이 입력을 출력으로 전달하기 때문에 가능합니다.
End of explanation
wide_window.plot(baseline)
Explanation: 기준 모델의 예측값을 플롯하면 1시간씩 오른쪽으로 이동한 단순한 레이블임을 알 수 있습니다.
End of explanation
linear = tf.keras.Sequential([
tf.keras.layers.Dense(units=1)
])
print('Input shape:', single_step_window.example[0].shape)
print('Output shape:', linear(single_step_window.example[0]).shape)
Explanation: 위의 세 가지 예제 플롯에서 단일 스텝 모델은 24시간 동안 실행됩니다. 이에 관해 몇 가지 설명이 필요합니다.
파란색 "입력" 라인은 각 타임스텝의 입력 온도를 보여줍니다. 이 모델은 모든 특성을 수신하며 이 플롯은 온도만 표시합니다.
녹색 "레이블" 점은 목표 예측값을 나타냅니다. 이러한 점은 입력 시간이 아니라 예측 시간에 표시됩니다. 레이블의 범위가 입력에 상대적으로 한 스텝 이동하는 이유가 여기에 있습니다.
주황색 "예측" 십자는 각 출력 타임스텝에 대한 모델의 예측입니다. 모델이 완벽하게 예측하는 경우 예측값은 "레이블" 바로 위에 놓여집니다.
선형 모델
이 작업에 적용할 수 있는 가장 간단한 훈련 가능한 모델은 입력과 출력 사이에 선형 변환을 삽입하는 것입니다. 이 경우 타임스텝의 출력은 해당 스텝에만 의존합니다.
activation 세트가 없는 layers.Dense는 선형 모델입니다. 레이어는 데이터의 마지막 축을 (batch, time, inputs)에서 (batch, time, units)로만 변환하며, batch 및 time 축의 모든 항목에 독립적으로 적용됩니다.
End of explanation
MAX_EPOCHS = 20
def compile_and_fit(model, window, patience=2):
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
patience=patience,
mode='min')
model.compile(loss=tf.losses.MeanSquaredError(),
optimizer=tf.optimizers.Adam(),
metrics=[tf.metrics.MeanAbsoluteError()])
history = model.fit(window.train, epochs=MAX_EPOCHS,
validation_data=window.val,
callbacks=[early_stopping])
return history
Explanation: 이 튜토리얼은 많은 모델을 훈련하므로 훈련 절차를 하나의 함수 패키지로 만듭니다.
End of explanation
history = compile_and_fit(linear, single_step_window)
val_performance['Linear'] = linear.evaluate(single_step_window.val)
performance['Linear'] = linear.evaluate(single_step_window.test, verbose=0)
Explanation: 모델을 훈련하고 성능을 평가합니다.
End of explanation
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', baseline(wide_window.example[0]).shape)
Explanation: baseline 모델과 마찬가지로 선형 모델은 넓은 범위의 배치에서 호출할 수 있습니다. 이러한 방식으로 모델은 연속적인 타임스텝에 대해 일련의 독립적인 예측을 수행합니다. time 축은 다른 batch 축처럼 작동합니다. 각 타임스텝에서 예측 사이에 상호 작용은 없습니다.
End of explanation
wide_window.plot(linear)
Explanation: 다음은 wide_widow에 대한 예제 예측값을 플롯한 내용입니다. 많은 경우 예측이 단순히 입력 온도를 반환하는 것보다는 분명히 더 낮지만 몇 가지 경우에는 더 나쁘다는 사실에 주목하세요.
End of explanation
plt.bar(x = range(len(train_df.columns)),
height=linear.layers[0].kernel[:,0].numpy())
axis = plt.gca()
axis.set_xticks(range(len(train_df.columns)))
_ = axis.set_xticklabels(train_df.columns, rotation=90)
Explanation: 선형 모델의 한 가지 장점은 해석하기가 상대적으로 간단하다는 것입니다. 레이어의 가중치를 가져와 각 입력에 할당된 가중치를 볼 수 있습니다.
End of explanation
dense = tf.keras.Sequential([
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=1)
])
history = compile_and_fit(dense, single_step_window)
val_performance['Dense'] = dense.evaluate(single_step_window.val)
performance['Dense'] = dense.evaluate(single_step_window.test, verbose=0)
Explanation: 때로 모델은 입력 T (degC)에 가장 많은 가중치를 두지 않습니다. 이것은 무작위 초기화의 위험 중 하나입니다.
밀집
실제로 여러 타임스텝에서 동작하는 모델을 적용하기 전에 더 깊고 강력한 단일 입력 스텝 모델의 성능을 확인하는 것이 좋습니다.
다음 모델은 입력과 출력 사이에 몇 개의 Dense 레이어를 쌓는다는 점을 제외하면 linear 모델과 유사합니다.
End of explanation
CONV_WIDTH = 3
conv_window = WindowGenerator(
input_width=CONV_WIDTH,
label_width=1,
shift=1,
label_columns=['T (degC)'])
conv_window
conv_window.plot()
plt.title("Given 3h as input, predict 1h into the future.")
Explanation: 다중 스텝 밀집
단일 타임스텝 모델에는 입력의 현재 값에 대한 컨텍스트가 없습니다. 시간에 따라 입력 특성이 어떻게 변하는지 볼 수 없습니다. 이 문제를 해결하려면 모델이 예측을 수행할 때 여러 타임스텝에 액세스해야 합니다.
baseline , linear 및 dense 모델은 각 타임스텝을 독립적으로 처리했습니다. 여기서 모델은 단일 출력을 생성하기 위해 여러 타임스텝을 입력으로 사용합니다.
3시간의 입력과 1시간의 레이블 배치를 생성하는 WindowGenerator를 만듭니다.
Window의 shift 매개변수는 두 창의 끝에 상대적입니다.
End of explanation
multi_step_dense = tf.keras.Sequential([
# Shape: (time, features) => (time*features)
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=1),
# Add back the time dimension.
# Shape: (outputs) => (1, outputs)
tf.keras.layers.Reshape([1, -1]),
])
print('Input shape:', conv_window.example[0].shape)
print('Output shape:', multi_step_dense(conv_window.example[0]).shape)
history = compile_and_fit(multi_step_dense, conv_window)
IPython.display.clear_output()
val_performance['Multi step dense'] = multi_step_dense.evaluate(conv_window.val)
performance['Multi step dense'] = multi_step_dense.evaluate(conv_window.test, verbose=0)
conv_window.plot(multi_step_dense)
Explanation: layers.Flatten을 모델의 첫 번째 레이어로 추가하여 다중 입력 스텝 창에서 dense 모델을 훈련할 수 있습니다.
End of explanation
print('Input shape:', wide_window.example[0].shape)
try:
print('Output shape:', multi_step_dense(wide_window.example[0]).shape)
except Exception as e:
print(f'\n{type(e).__name__}:{e}')
Explanation: 이 접근법의 주된 단점은 결과적인 모델이 정확히 이 형상의 입력 창에서만 실행될 수 있다는 것입니다.
End of explanation
conv_model = tf.keras.Sequential([
tf.keras.layers.Conv1D(filters=32,
kernel_size=(CONV_WIDTH,),
activation='relu'),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=1),
])
Explanation: 다음 섹션의 컨볼루셔널 모델은 이 문제를 해결합니다.
컨볼루션 신경망
컨볼루션 레이어(layers.Conv1D)도 각 예측에 대한 입력으로 여러 타임스텝을 사용합니다.
다음은 컨볼루션으로 다시 작성한 multi_step_dense와 동일한 모델입니다.
다음 변경 사항에 주목하세요.
layers.Flatten과 첫 번째 layers.Dense는 layers.Conv1D로 대체됩니다.
컨볼루션이 출력에서 시간 축을 유지하므로 layers.Reshape는 이 더 이상 필요하지 않습니다.
End of explanation
print("Conv model on `conv_window`")
print('Input shape:', conv_window.example[0].shape)
print('Output shape:', conv_model(conv_window.example[0]).shape)
Explanation: 예제 배치에서 실행하여 모델이 예상된 형상으로 출력을 생성하는지 확인합니다.
End of explanation
history = compile_and_fit(conv_model, conv_window)
IPython.display.clear_output()
val_performance['Conv'] = conv_model.evaluate(conv_window.val)
performance['Conv'] = conv_model.evaluate(conv_window.test, verbose=0)
Explanation: conv_window에서 훈련하고 평가하면 multi_step_dense 모델과 유사한 성능을 제공해야 합니다.
End of explanation
print("Wide window")
print('Input shape:', wide_window.example[0].shape)
print('Labels shape:', wide_window.example[1].shape)
print('Output shape:', conv_model(wide_window.example[0]).shape)
Explanation: 이 conv_model과 multi_step_dense 모델의 차이점은 conv_model은 모든 길이의 입력에서 실행될 수 있다는 것입니다. 컨볼루셔널 레이어는 입력의 슬라이딩 윈도우에 적용됩니다.
더 넓은 입력에서 실행하면 더 넓은 출력이 생성됩니다.
End of explanation
LABEL_WIDTH = 24
INPUT_WIDTH = LABEL_WIDTH + (CONV_WIDTH - 1)
wide_conv_window = WindowGenerator(
input_width=INPUT_WIDTH,
label_width=LABEL_WIDTH,
shift=1,
label_columns=['T (degC)'])
wide_conv_window
print("Wide conv window")
print('Input shape:', wide_conv_window.example[0].shape)
print('Labels shape:', wide_conv_window.example[1].shape)
print('Output shape:', conv_model(wide_conv_window.example[0]).shape)
Explanation: 출력은 입력보다 짧습니다. 훈련 또는 플롯 작업을 수행하려면 레이블과 예상의 길이가 동일해야 합니다. 따라서 레이블과 예측 길이가 일치하도록 몇 개의 추가 입력 타임스텝으로 넓은 창을 생성하는 WindowGenerator를 빌드합니다.
End of explanation
wide_conv_window.plot(conv_model)
Explanation: 이제 더 넓은 창에 모델의 예측값을 플롯할 수 있습니다. 첫 번째 예측 전 3개의 입력 타임스텝에 주목하세요. 여기서 모든 예측은 이전 3개의 타임스텝에 기초합니다.
End of explanation
lstm_model = tf.keras.models.Sequential([
# Shape [batch, time, features] => [batch, time, lstm_units]
tf.keras.layers.LSTM(32, return_sequences=True),
# Shape => [batch, time, features]
tf.keras.layers.Dense(units=1)
])
Explanation: 순환 신경망
Recurrent Neural Network(RNN)는 시계열 데이터에 적합한 신경망 유형입니다. RNN은 시계열을 단계별로 처리하여 타임스텝 사이에서 내부 상태를 유지합니다.
자세한 내용은 텍스트 생성 튜토리얼 또는 RNN 가이드를 읽어보세요.
이 튜토리얼에서는 Long Short Term Memory(LSTM)이라는 RNN 레이어를 사용합니다.
모든 keras RNN 레이어에 대한 중요한 생성자 인수는 return_sequences 인수입니다. 이 설정은 다음 두 가지 방법 중 하나로 레이어를 구성할 수 있습니다.
기본값인 False인 경우 레이어는 최종 타임스텝의 출력만 반환하여 단일 예측을 수행하기 전에 모델이 내부 상태를 준비할 시간을 줍니다.
True이면 레이어가 각 입력에 대한 출력을 반환합니다. 다음과 같은 경우에 유용합니다.
RNN 레이어 쌓기
여러 타임스텝에서 동시에 모델 훈련
End of explanation
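To make the two return_sequences configurations concrete, a quick shape check on a dummy batch (32 examples x 24 time steps x 19 features, chosen here only for illustration):
dummy = tf.zeros([32, 24, 19])
print(tf.keras.layers.LSTM(8, return_sequences=True)(dummy).shape)   # (32, 24, 8)
print(tf.keras.layers.LSTM(8, return_sequences=False)(dummy).shape)  # (32, 8)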
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', lstm_model(wide_window.example[0]).shape)
history = compile_and_fit(lstm_model, wide_window)
IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate(wide_window.val)
performance['LSTM'] = lstm_model.evaluate(wide_window.test, verbose=0)
wide_window.plot(lstm_model)
Explanation: return_sequences=True이면 모델을 한 번에 24시간 분량 데이터에 대해 훈련할 수 있습니다.
참고: 이 경우에는 모델 성능의 관점에서 기대할 것이 없습니다. 첫 번째 타임스텝에서 모델이 이전 스텝에 액세스할 수 없으므로 이전에 표시한 단순한 linear 및 dense 모델보다 더 나을 것이 없기 때문입니다.
End of explanation
x = np.arange(len(performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in val_performance.values()]
test_mae = [v[metric_index] for v in performance.values()]
plt.ylabel('mean_absolute_error [T (degC), normalized]')
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=performance.keys(),
rotation=45)
_ = plt.legend()
for name, value in performance.items():
print(f'{name:12s}: {value[1]:0.4f}')
Explanation: 성능
이 데이터세트를 사용하면 일반적으로 각 모델의 성능이 이전 모델보다 약간 더 좋습니다.
End of explanation
single_step_window = WindowGenerator(
# `WindowGenerator` returns all features as labels if you
# don't set the `label_columns` argument.
input_width=1, label_width=1, shift=1)
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1)
for example_inputs, example_labels in wide_window.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
Explanation: 다중 출력 모델
지금까지 모델은 모두 단일 타임스텝에 대해 단일 출력 특성 T (degC)를 예측했습니다.
이러한 모든 모델은 간단히 출력 레이어의 단위 수를 변경하고 labels에 모든 특성을 포함하도록 훈련 창을 조정하여 여러 특성을 예측하도록 변환할 수 있습니다.
End of explanation
baseline = Baseline()
baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(wide_window.val)
performance['Baseline'] = baseline.evaluate(wide_window.test, verbose=0)
Explanation: 레이블의 features 축은 이제 1이 아닌 입력과 동일한 깊이를 갖습니다.
기준
여기서는 동일한 기준 모델을 사용할 수 있지만 이번에는 특정 label_index를 선택하는 대신 모든 특성을 반복합니다.
End of explanation
dense = tf.keras.Sequential([
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=num_features)
])
history = compile_and_fit(dense, single_step_window)
IPython.display.clear_output()
val_performance['Dense'] = dense.evaluate(single_step_window.val)
performance['Dense'] = dense.evaluate(single_step_window.test, verbose=0)
Explanation: 밀집
End of explanation
%%time
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1)
lstm_model = tf.keras.models.Sequential([
# Shape [batch, time, features] => [batch, time, lstm_units]
tf.keras.layers.LSTM(32, return_sequences=True),
# Shape => [batch, time, features]
tf.keras.layers.Dense(units=num_features)
])
history = compile_and_fit(lstm_model, wide_window)
IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate( wide_window.val)
performance['LSTM'] = lstm_model.evaluate( wide_window.test, verbose=0)
print()
Explanation: RNN
End of explanation
class ResidualWrapper(tf.keras.Model):
def __init__(self, model):
super().__init__()
self.model = model
def call(self, inputs, *args, **kwargs):
delta = self.model(inputs, *args, **kwargs)
# The prediction for each timestep is the input
# from the previous time step plus the delta
# calculated by the model.
return inputs + delta
%%time
residual_lstm = ResidualWrapper(
tf.keras.Sequential([
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.Dense(
num_features,
# The predicted deltas should start small
# So initialize the output layer with zeros
kernel_initializer=tf.initializers.zeros)
]))
history = compile_and_fit(residual_lstm, wide_window)
IPython.display.clear_output()
val_performance['Residual LSTM'] = residual_lstm.evaluate(wide_window.val)
performance['Residual LSTM'] = residual_lstm.evaluate(wide_window.test, verbose=0)
print()
Explanation: <a id="residual"></a>
고급: 잔여 연결
이전의 Baseline 모델은 시퀀스가 타임스텝 사이에서 크게 변하지 않는다는 사실을 이용했습니다. 지금까지 이 튜토리얼에서 훈련한 모든 모델은 무작위로 초기화된 다음, 출력이 이전 타임스텝에서 약간 변경된다는 사실을 학습해야 했습니다.
신중한 초기화로 이 문제를 해결할 수 있지만 모델 구조로 빌드하는 것이 더 간단합니다.
시계열 분석에서는 다음 값을 예측하는 대신 다음 타임스텝에서 값이 어떻게 달라지는 지를 예측하는 모델을 빌드하는 것이 일반적입니다. 마찬가지로 딥러닝에서 "잔여 네트워크(Residual networks)" 또는 "ResNets"는 각 레이어가 모델의 누적 결과에 추가되는 아키텍처를 나타냅니다.
이것은 변화가 작아야 한다는 사실을 이용하는 방법입니다.
기본적으로, Baseline과 일치하도록 모델을 초기화합니다. 그러면 이 작업에서 약간 더 나은 성능으로 모델이 더 빨리 수렴하는 데 도움이 됩니다.
이 접근 방식은 이 튜토리얼에서 설명하는 모든 모델과 연계하여 사용할 수 있습니다.
여기서는 LSTM 모델에 적용합니다. tf.initializers.zeros를 사용하여 초기 예측하는 변경이 작고 잔류 연결을 억제하지 않도록 한다는 점에 주목하세요. zeros가 마지막 레이어에서만 사용되기 때문에 여기에서 그래디언트에 대한 대칭성이 깨질 우려는 없습니다.
End of explanation
x = np.arange(len(performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in val_performance.values()]
test_mae = [v[metric_index] for v in performance.values()]
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=performance.keys(),
rotation=45)
plt.ylabel('MAE (average over all outputs)')
_ = plt.legend()
for name, value in performance.items():
print(f'{name:15s}: {value[1]:0.4f}')
Explanation: 성능
다음은 이러한 다중 출력 모델의 전반적인 성능입니다.
End of explanation
OUT_STEPS = 24
multi_window = WindowGenerator(input_width=24,
label_width=OUT_STEPS,
shift=OUT_STEPS)
multi_window.plot()
multi_window
Explanation: 위의 성능은 모든 모델 출력에 대한 평균입니다.
다중 스텝 모델
이전 섹션의 단일 출력 및 다중 출력 모델은 모두 미래 1시간의 단일 타임스텝 예측을 수행했습니다.
이 섹션에서는 이러한 모델을 확장하여 다중 타임스텝 예측을 수행하는 방법을 살펴봅니다.
다중 스텝 예측에서 모델은 일정 범위의 미래 값을 예측하는 방법을 학습해야 합니다. 따라서 한 미래 시점만 예측하는 단일 스텝 모델과 달리 다중 스텝 모델은 미래 값의 시퀀스를 예측합니다.
대략적으로 두 가지 접근 방식이 있습니다.
전체 시계열이 한 번에 예측되는 싱글샷 예측
모델이 단일 스텝 예측만 수행하고 출력이 입력으로 피드백되는 자기 회귀적 예측
이 섹션에서는 모든 모델이 모든 출력 타임스텝에 걸쳐 모든 특성을 예측합니다.
다중 스텝 모델의 경우, 훈련 데이터는 다시 시간별 샘플로 구성됩니다. 그러나 여기에서 모델은 과거의 24시간을 고려하여 미래 24시간을 예측하는 방법을 학습합니다.
다음은 데이터세트로부터 이러한 조각을 생성하는 Window 객체입니다.
End of explanation
class MultiStepLastBaseline(tf.keras.Model):
def call(self, inputs):
return tf.tile(inputs[:, -1:, :], [1, OUT_STEPS, 1])
last_baseline = MultiStepLastBaseline()
last_baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
multi_val_performance = {}
multi_performance = {}
multi_val_performance['Last'] = last_baseline.evaluate(multi_window.val)
multi_performance['Last'] = last_baseline.evaluate(multi_window.test, verbose=0)
multi_window.plot(last_baseline)
Explanation: 기준
이 작업의 간단한 기준은 필요한 출력 타임스텝 수에 대해 마지막 입력 타임스텝을 반복하는 것입니다.
End of explanation
class RepeatBaseline(tf.keras.Model):
def call(self, inputs):
return inputs
repeat_baseline = RepeatBaseline()
repeat_baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
multi_val_performance['Repeat'] = repeat_baseline.evaluate(multi_window.val)
multi_performance['Repeat'] = repeat_baseline.evaluate(multi_window.test, verbose=0)
multi_window.plot(repeat_baseline)
Explanation: 이 작업은 24시간이 주어졌을 때 24시간을 예측하는 것이므로 또 다른 간단한 접근 방법은 내일도 비슷하다는 가정 하에 전날을 반복하는 것입니다.
End of explanation
multi_linear_model = tf.keras.Sequential([
# Take the last time-step.
# Shape [batch, time, features] => [batch, 1, features]
tf.keras.layers.Lambda(lambda x: x[:, -1:, :]),
# Shape => [batch, 1, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_linear_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Linear'] = multi_linear_model.evaluate(multi_window.val)
multi_performance['Linear'] = multi_linear_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_linear_model)
Explanation: 싱글샷 모델
이 문제에 대한 한 가지 높은 수준의 접근 방법은 모델이 한 번에 전체 시퀀스 예측을 수행하는 "싱글샷" 모델을 사용하는 것입니다.
이 모델은 OUT_STEPS*features 출력 단위를 이용해 layers.Dense로 효율적으로 구현할 수 있습니다. 이 모델은 이 출력의 형상을 필요한 (OUTPUT_STEPS, features)로 바꾸기만 하면 됩니다.
선형
마지막 입력 타임스텝을 기반으로 하는 단순한 선형 모델은 기준 모델보다 성능이 더 좋지만 강력하지 못합니다. 이 모델은 선형 프로젝션을 이용해 단일 입력 타임스텝으로부터 OUTPUT_STEPS 타임스텝을 예측해야 합니다. 주로 하루 중 시간과 연중 시간을 기반으로 하는 행동의 저차원 조각만 캡처할 수 있습니다.
End of explanation
multi_dense_model = tf.keras.Sequential([
# Take the last time step.
# Shape [batch, time, features] => [batch, 1, features]
tf.keras.layers.Lambda(lambda x: x[:, -1:, :]),
# Shape => [batch, 1, dense_units]
tf.keras.layers.Dense(512, activation='relu'),
# Shape => [batch, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_dense_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Dense'] = multi_dense_model.evaluate(multi_window.val)
multi_performance['Dense'] = multi_dense_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_dense_model)
Explanation: 밀집
입력과 출력 사이에 layers.Dense를 추가하면 선현 모델이 더 강력해지지만 여전히 단일 입력에 기반합니다.
End of explanation
CONV_WIDTH = 3
multi_conv_model = tf.keras.Sequential([
# Shape [batch, time, features] => [batch, CONV_WIDTH, features]
tf.keras.layers.Lambda(lambda x: x[:, -CONV_WIDTH:, :]),
# Shape => [batch, 1, conv_units]
tf.keras.layers.Conv1D(256, activation='relu', kernel_size=(CONV_WIDTH)),
# Shape => [batch, 1, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_conv_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Conv'] = multi_conv_model.evaluate(multi_window.val)
multi_performance['Conv'] = multi_conv_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_conv_model)
Explanation: CNN
컨볼루션 모델은 고정 너비 기록을 기반으로 예측을 수행하므로 시간에 따라 상황이 어떻게 변하는지 볼 수 있어 밀집 모델보다 성능을 높일 수 있습니다.
End of explanation
multi_lstm_model = tf.keras.Sequential([
# Shape [batch, time, features] => [batch, lstm_units]
# Adding more `lstm_units` just overfits more quickly.
tf.keras.layers.LSTM(32, return_sequences=False),
# Shape => [batch, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_lstm_model, multi_window)
IPython.display.clear_output()
multi_val_performance['LSTM'] = multi_lstm_model.evaluate(multi_window.val)
multi_performance['LSTM'] = multi_lstm_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_lstm_model)
Explanation: RNN
반복 모델은 모델이 수행하는 예측과 관련이 있는 경우 긴 입력 기록을 사용하는 방법을 학습할 수 있습니다. 여기서 모델은 다음 24시간에 대한 단일 예측을 수행하기 전에 24시간 동안 내부 상태를 축적합니다.
이 싱글샷 형식에서 LSTM은 마지막 타임스텝에서만 출력을 생성하면 되므로 return_sequences=False를 설정합니다.
End of explanation
class FeedBack(tf.keras.Model):
def __init__(self, units, out_steps):
super().__init__()
self.out_steps = out_steps
self.units = units
self.lstm_cell = tf.keras.layers.LSTMCell(units)
# Also wrap the LSTMCell in an RNN to simplify the `warmup` method.
self.lstm_rnn = tf.keras.layers.RNN(self.lstm_cell, return_state=True)
self.dense = tf.keras.layers.Dense(num_features)
feedback_model = FeedBack(units=32, out_steps=OUT_STEPS)
Explanation: 고급: 자기 회귀 모델
위의 모델은 모두 한 번에 전체 출력 시퀀스를 예측합니다.
경우에 따라 모델이 이 예측을 여러 타임스텝으로 분해하는 것이 도움이 될 수 있습니다. 그러면 이전의 RNN(Recurrent Neural Networks)을 이용한 시퀀스 생성에서와 같이 각 모델의 출력을 각 스텝에서 자체 피드백할 수 있어 이전 예측을 조건부로 예측을 수행할 수 있습니다.
이 형태의 모델이 갖는 한 가지 분명한 장점은 다양한 길이의 출력을 생성하도록 설정할 수 있다는 것입니다.
이 튜토리얼의 전반부에서 훈련한 단일 스텝 다중 출력 모델 중 하나를 가져와 자기 회귀 피드백 루프에서 실행할 수 있지만 여기서는 이를 수행하도록 명시적으로 훈련된 모델을 빌드하는 데 중점을 둘 것입니다.
RNN
이 튜토리얼에서는 자기 회귀 RNN 모델만 빌드하지만 이 패턴은 단일 타임스텝을 출력하도록 설계된 모든 모델에 적용할 수 있습니다.
이 모델은 단일 스텝 LSTM 모델과 기본 형태가 동일하여 LSTM 다음에 LSTM 출력을 모델 예측으로 변환하는 layers.Dense가 이어집니다.
layers.LSTM은 상태와 시퀀스 결과를 자동으로 관리하는 더 높은 수준의 layers.RNN에서 래핑된 layers.LSTMCell입니다(자세한 내용은 Keras RNN 참조).
이 경우 모델은 각 스텝에 대한 입력을 수동으로 관리해야 하므로 더 낮은 수준의 단일 타임스텝 인터페이스에 대해 layers.LSTMCell를 직접 사용합니다.
End of explanation
def warmup(self, inputs):
# inputs.shape => (batch, time, features)
# x.shape => (batch, lstm_units)
x, *state = self.lstm_rnn(inputs)
# predictions.shape => (batch, features)
prediction = self.dense(x)
return prediction, state
FeedBack.warmup = warmup
Explanation: 이 모델에 필요한 첫 번째 메서드는 입력을 기반으로 내부 상태를 초기화하는 warmup 메서드입니다. 일단 훈련되면 이 상태는 입력 기록의 관련 부분을 캡처합니다. 이는 앞서 알아본 단일 스텝 LSTM 모델과 동일합니다.
End of explanation
prediction, state = feedback_model.warmup(multi_window.example[0])
prediction.shape
Explanation: 이 메서드는 단일 타임스텝 예측과 LSTM의 내부 상태를 반환합니다.
End of explanation
def call(self, inputs, training=None):
# Collect the unrolled outputs in a Python list; tf.TensorArray is only needed for dynamic output lengths.
predictions = []
# Initialize the lstm state
prediction, state = self.warmup(inputs)
# Insert the first prediction
predictions.append(prediction)
# Run the rest of the prediction steps
for n in range(1, self.out_steps):
# Use the last prediction as input.
x = prediction
# Execute one lstm step.
x, state = self.lstm_cell(x, states=state,
training=training)
# Convert the lstm output to a prediction.
prediction = self.dense(x)
# Add the prediction to the output
predictions.append(prediction)
# predictions.shape => (time, batch, features)
predictions = tf.stack(predictions)
# predictions.shape => (batch, time, features)
predictions = tf.transpose(predictions, [1, 0, 2])
return predictions
FeedBack.call = call
Explanation: RNN의 상태 및 초기 예측을 사용하여 이제 이전의 각 스텝에서 수행한 예측을 입력으로 제공하여 모델을 계속 반복할 수 있습니다.
출력 예측을 수집하는 가장 간단한 방법은 Python 목록을 사용하고 루프 후에 tf.stack을 사용하는 것입니다.
참고: 이와 같이 Python 목록을 쌓는 것은 훈련을 위해 Model.compile(..., run_eagerly=True)를 사용하거나 고정 길이의 출력을 통해 즉시 실행하는 경우에만 효과가 있습니다. 동적 출력 길이의 경우 Python 목록 대신 tf.TensorArray를 사용하고 Python range 대신 tf.range를 사용해야 합니다.
End of explanation
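A sketch of the tf.TensorArray variant mentioned in the note above; it is illustrative only, the tutorial keeps the Python-list version, and call_with_tensorarray is a hypothetical name that is not attached to the FeedBack class here.
def call_with_tensorarray(self, inputs, training=None):
    # Same unrolling loop, but accumulated in a tf.TensorArray so it also
    # works with a dynamic number of output steps inside a tf.function.
    prediction, state = self.warmup(inputs)
    predictions = tf.TensorArray(tf.float32, size=self.out_steps)
    predictions = predictions.write(0, prediction)
    for n in tf.range(1, self.out_steps):
        x, state = self.lstm_cell(prediction, states=state, training=training)
        prediction = self.dense(x)
        predictions = predictions.write(n, prediction)
    # (time, batch, features) -> (batch, time, features)
    return tf.transpose(predictions.stack(), [1, 0, 2])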
print('Output shape (batch, time, features): ', feedback_model(multi_window.example[0]).shape)
Explanation: 예제 입력에서 이 모델을 테스트 실행합니다.
End of explanation
history = compile_and_fit(feedback_model, multi_window)
IPython.display.clear_output()
multi_val_performance['AR LSTM'] = feedback_model.evaluate(multi_window.val)
multi_performance['AR LSTM'] = feedback_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(feedback_model)
Explanation: 이제 모델을 훈련합니다.
End of explanation
x = np.arange(len(multi_performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in multi_val_performance.values()]
test_mae = [v[metric_index] for v in multi_performance.values()]
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=multi_performance.keys(),
rotation=45)
plt.ylabel(f'MAE (average over all times and outputs)')
_ = plt.legend()
Explanation: 성능
이 문제에 대해 모델 복잡성이 증가함에 따라 분명히 이득이 감소합니다.
End of explanation
for name, value in multi_performance.items():
print(f'{name:8s}: {value[1]:0.4f}')
Explanation: 이 튜토리얼의 전반부에서 소개한 다중 출력 모델에 대한 메트릭은 모든 출력 특성에 평균화된 성능을 보여줍니다. 이러한 성능은 유사하지만 출력 타임스텝에서도 평균화됩니다.
End of explanation |
14,662 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading and plotting LMA data
In a previous module we learned about LMA VHF source files - their structure, and principles of data quality control using the station mask and $\chi_{\nu}^2$.
We'll use the xlma-python package to read, process, and visualize the data in these files.
We'll start by importing some libraries and reading a file. In fact, we can read more than Replace the filename below with a file of interest to you, if you want.
Step1: Investigating the pyxlma data structure
We just read in the data from a list of filenames, and got back an lma_data variable, and a starttime. What's in these variables?
starttime is a Python datetime object, with helps us do math operations on times instead of multiplying by 60 and 24 ourselves.
Step2: lma_data is an xarray object. If we print it, we see that it looks much like a NetCDF file, with dimensions and variables that allow us to store whole arrays of data and give them names. xarray is the best way to look at NetCDF data in Python.
(If you'd like to learn more about xarray and NetCDF, you can check out Unidata's lessons on xarray and NetCDF.)
Step3: There are a few things to notice above.
- The dimensions tell us how many events and stations we had in our data file.
- Some other header info about the network location and the lma_analysis command is included.
- There are varaibles with dimension number_of_stations that give the data from the station data tables.
- There are varaibles with dimension number_of_events, including many we'd expect.
- Event location and time
- event_chi2 for the $\chi_{\nu}^2$ as given in the file.
- event_stations has been calculated for us from event_mask. That's helpful!
- Each event has been tagged with a unique event_id
These variables are easy to access by name - for example, we can get a nice display of
Step4: Notice that this xarray DataArray variable is not only the data values, but some other metadata, such as the units and the standard variable name from the Climate and Forecast Metadata Conventions. We can access those attributes if we want them
Step5: Simple plotting and filtering
xarray has some nice, built-in plotting functions for quickly looking at data. We'll make even nicer plots in a bit!
Step6: We have lots of data in the file; let's grab the first 10000 points to make the plotting faster.
Step7: Looks pretty noisy. Let's try again, but filter to lower chi2 and greater event_contributing_stations.
Suggested activity | Python Code:
# We could tediously build a list …
# filenames = ['/data/Houston/realtime-tracer/LYLOUT_200524_210000_0600.dat.gz',]
# Instead, let's read a couple hours at the same time.
import sys, glob
filenames = glob.glob('/data/Houston/130619/LYLOUT_130619_2[0-1]*.dat.gz')
for filename in filenames:
print(filename)
import glob
import numpy as np
import datetime
import xarray as xr
import pyproj as proj4
from pyxlma.lmalib.io import read as lma_read
lma_data, starttime = lma_read.dataset(filenames)
Explanation: Reading and plotting LMA data
In a previous module we learned about LMA VHF source files - their structure, and principles of data quality control using the station mask and $\chi_{\nu}^2$.
We'll use the xlma-python package to read, process, and visualize the data in these files.
We'll start by importing some libraries and reading a file. In fact, we can read more than Replace the filename below with a file of interest to you, if you want.
End of explanation
# Should match what we expect from the filenames
print(starttime)
print(type(starttime))
Explanation: Investigating the pyxlma data structure
We just read in the data from a list of filenames, and got back an lma_data variable, and a starttime. What's in these variables?
starttime is a Python datetime object, with helps us do math operations on times instead of multiplying by 60 and 24 ourselves.
End of explanation
print(lma_data)
Explanation: lma_data is an xarray object. If we print it, we see that it looks much like a NetCDF file, with dimensions and variables that allow us to store whole arrays of data and give them names. xarray is the best way to look at NetCDF data in Python.
(If you'd like to learn more about xarray and NetCDF, you can check out Unidata's lessons on xarray and NetCDF.)
End of explanation
print(type(lma_data.event_longitude))
lma_data.event_longitude
Explanation: There are a few things to notice above.
- The dimensions tell us how many events and stations we had in our data file.
- Some other header info about the network location and the lma_analysis command is included.
- There are varaibles with dimension number_of_stations that give the data from the station data tables.
- There are varaibles with dimension number_of_events, including many we'd expect.
- Event location and time
- event_chi2 for the $\chi_{\nu}^2$ as given in the file.
- event_stations has been calculated for us from event_mask. That's helpful!
- Each event has been tagged with a unique event_id
These variables are easy to access by name - for example, we can get a nice display of
End of explanation
print(lma_data.event_longitude.attrs['standard_name'])
Explanation: Notice that this xarray DataArray variable is not only the data values, but some other metadata, such as the units and the standard variable name from the Climate and Forecast Metadata Conventions. We can access those attributes if we want them:
End of explanation
%matplotlib widget
import matplotlib.pyplot as plt
lma_data.plot.scatter?
Explanation: Simple plotting and filtering
xarray has some nice, built-in plotting functions for quickly looking at data. We'll make even nicer plots in a bit!
End of explanation
fig, axes = plt.subplots(1,1,figsize=(10,10))
count_subset = {'number_of_events':slice(0,10000)}
art = lma_data[count_subset].plot.scatter('event_longitude', 'event_latitude', ax=axes,
s=4, marker='s', #hue='event_time',
)
Explanation: We have lots of data in the file; let's grab the first 10000 points to make the plotting faster.
End of explanation
fig, axes = plt.subplots(1,1,figsize=(10,10))
count_subset = {'number_of_events':slice(0,10000)}
station_filter = (lma_data.event_stations >= 6)
chi_filter = (lma_data.event_chi2 <= 1.0)
filter_subset = {'number_of_events':(chi_filter & station_filter)}
# note that we first filter on all the data select 10000 points, and then on that dataset we further filter
art = lma_data[filter_subset][count_subset].plot.scatter('event_longitude', 'event_latitude', ax=axes,
s=4, marker='s', #hue='event_time',
)
Explanation: Looks pretty noisy. Let's try again, but filter to lower chi2 and greater event_contributing_stations.
Suggested activity: adjust the station and $\chi^2$ criteria to see how the noisiness changes. As noted in earlier lessons, some experimentation to choose the level of filtering is a very common process when working with LMA data.
You can also filter on event latitude and longitude in the same way to further zoom in, or plot other combinations of data variables.
End of explanation |
14,663 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
W12 lab assignment
Step1: Choropleth map
Let's make a choropleth map with Pokemon statistics. The color of a county should correspond to the number of Pokemons found there. You can download the data from Canvas (pokemon.csv). The data is a subset of the pokemon data from Kaggle.
We'll also need an SVG map. You can download it from Wikipedia.
If you open the SVG with a text editor, you'll see many <path> tags. Each of these is a county. We want to change their style tags, namely the fill color. We want the darkness of fill to correspond to the number of Pokemons in each county.
In the SVG, there is also an id tag for each path, which is actually something called a FIPS code. FIPS stands for Federal Information Processing Standard. Every county has a unique FIPS code, and it’s how we are going to associate each path with our pokemon data.
For this we first need to do some data cleaning.
Step2: The data only has the latitude and longitude data. To convert this to an FIPS code, we need some reverse-geocoding. The Federal Communications Commission provides an API for such tasks.
The API works through an HTTP request, so we can use Python's urllib library to handle it. For example
Step3: The result comes as a json object, so we need to parse it with Python's json decoder.
Step4: Now we can access it as a dictionary and get the county's FIPS code.
Step5: We can do this to all data in the dataframe. Pandas's apply is a very nice feature that you may want to use, it allows you to write a function and apply it to the dataframe.
Step6: We want to color the counties by the number of pokemons appearing in them, so now all we need is a table with the counties' FIPS and number of pokemons in them.
Step7: Now we can turn to our SVG file. We want to find the paths for each county
Step8: Read in the svg
Step9: Load it with BeautifulSoup
Step10: BeautifulSoup has a findAll() function that finds all given tags.
Step11: We should also decide on the colors. ColorBrewer provides some nice palettes. Pick one of the sequential colors and make the hexadecimal encodings into a list.
Step12: Now we’re going to change the style attribute for each path in the SVG. We’re just interested in fill color, but to make things easier we’re going to replace the entire style instead of parsing to replace only the color. Define the style as the following
Step13: Based on the number of pokemons, we want to assign the county to a color class. For example, if number > 50, use color1, if 40 < number <= 50, use color 2, etc.
Remember that we saved the svg in the soup object. Now that we have changed the svg to fill with colors, we can just write it out as a new file. | Python Code:
import pandas as pd
from urllib.request import urlopen
import json
import warnings
warnings.filterwarnings("ignore")
Explanation: W12 lab assignment
End of explanation
pokemon = pd.read_csv('pokemon.csv')
pokemon.head()
Explanation: Choropleth map
Let's make a choropleth map with Pokemon statistics. The color of a county should correspond to the number of Pokemons found there. You can download the data from Canvas (pokemon.csv). The data is a subset of the pokemon data from Kaggle.
We'll also need an SVG map. You can download it from Wikipedia.
If you open the SVG with a text editor, you'll see many <path> tags. Each of these is a county. We want to change their style tags, namely the fill color. We want the darkness of fill to correspond to the number of Pokemons in each county.
In the SVG, there is also an id tag for each path, which is actually something called a FIPS code. FIPS stands for Federal Information Processing Standard. Every county has a unique FIPS code, and it’s how we are going to associate each path with our pokemon data.
For this we first need to do some data cleaning.
End of explanation
res = urlopen("http://data.fcc.gov/api/block/find?format=json&latitude=28.35975&longitude=-81.421988").read().decode('utf-8')
res
Explanation: The data only has the latitude and longitude data. To convert this to an FIPS code, we need some reverse-geocoding. The Federal Communications Commission provides an API for such tasks.
The API works through an HTTP request, so we can use Python's urllib library to handle it. For example:
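A slightly tidier way to build the same request is to let urllib encode the query string for you (a sketch using the same FCC endpoint; note the extra import):
from urllib.parse import urlencode
params = {'format': 'json', 'latitude': 28.35975, 'longitude': -81.421988}
res = urlopen("http://data.fcc.gov/api/block/find?" + urlencode(params)).read().decode('utf-8')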
End of explanation
json.loads(res)
Explanation: The result comes as a json object, so we need to parse it with Python's json decoder.
End of explanation
json.loads(res)['County']['FIPS']
Explanation: Now we can access it as a dictionary and get the county's FIPS code.
End of explanation
# TODO: create a column in the dataframe called 'FIPS' for the FIPS codes.
# You should have the dataframe look like the following.
# Note that looking up all the lat-lon pairs may take some time.
def get_fips(row):
res = urlopen("http://data.fcc.gov/api/block/find?format=json&latitude="+str(row['latitude'])+"&longitude="+str(row['longitude'])).read().decode('utf-8')
return json.loads(res)['County']['FIPS']
pokemon['FIPS'] = pokemon.apply(get_fips, axis=1)
pokemon.head()
Explanation: We can do this to all data in the dataframe. Pandas's apply is a very nice feature that you may want to use, it allows you to write a function and apply it to the dataframe.
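Because apply issues one HTTP request per row, it can help to cache results for coordinates you have already looked up. A minimal sketch (the cache dict and wrapper name are ours, not part of the assignment):
fips_cache = {}
def get_fips_cached(row):
    # reuse the result if this exact coordinate pair was already geocoded
    key = (row['latitude'], row['longitude'])
    if key not in fips_cache:
        fips_cache[key] = get_fips(row)
    return fips_cache[key]
pokemon['FIPS'] = pokemon.apply(get_fips_cached, axis=1)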
End of explanation
pokemon_density = pd.DataFrame(pokemon.groupby('FIPS').size().reset_index())
pokemon_density.columns = ['FIPS', 'Count']
pokemon_density.head()
Explanation: We want to color the counties by the number of pokemons appearing in them, so now all we need is a table with the counties' FIPS and number of pokemons in them.
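An equivalent shortcut is value_counts, which returns the same per-county totals as a Series (a sketch; reset_index turns it back into a two-column frame):
counts = pokemon['FIPS'].value_counts().reset_index()
counts.columns = ['FIPS', 'Count']
counts.head()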
End of explanation
from bs4 import BeautifulSoup
Explanation: Now we can turn to our SVG file. We want to find the paths for each county: there are over 3000 counties, so we need a systematic way to locate them. For this, we can use the BeautifulSoup package, which specializes in parsing XML. SVGs are essentially XML files, so they can be handled the same way as HTML and other XML documents.
End of explanation
svg = open('USA_Counties_with_FIPS_and_names.svg', 'r').read()
Explanation: Read in the svg
End of explanation
soup = BeautifulSoup(svg)
Explanation: Load it with BeautifulSoup
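BeautifulSoup may print a warning that no parser was specified; you can silence it by naming one explicitly (assuming lxml is installed for the 'xml' option):
soup = BeautifulSoup(svg, 'xml')   # or BeautifulSoup(svg, 'html.parser') if lxml is not available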
End of explanation
paths = soup.findAll('path')
paths[0]
Explanation: BeautifulSoup has a findAll() function that finds all given tags.
End of explanation
colors = ['#fef0d9', '#fdd49e', '#fdbb84','#fc8d59','#e34a33','#b30000']
# TODO: substitute the above with a palette of your choice.
colors = ['#f0f9e8','#bae4bc','#7bccc4','#43a2ca','#0868ac']
Explanation: We should also decide on the colors. ColorBrewer provides some nice palettes. Pick one of the sequential colors and make the hexadecimal encodings into a list.
End of explanation
path_style = 'font-size:12px;fill-rule:nonzero;stroke:#000000;stroke-opacity:1;\
stroke-width:0.1;stroke-miterlimit:4;stroke-dasharray:none;stroke-linecap:butt;\
marker-start:none;stroke-linejoin:bevel'
for p in paths:
try:
cnt = int(pokemon_density[pokemon_density['FIPS'] == p['id']]['Count'])
if cnt > 20: color_class = 4
elif (cnt> 15 and cnt <= 20):color_class = 3
elif (cnt > 10 and cnt <= 15):color_class = 2
elif (cnt > 5 and cnt <= 10):color_class = 1
else: color_class = 0
except:
continue
# TODO: decide color classes
color = colors[color_class]
p['style'] = path_style +";fill:"+ color
Explanation: Now we’re going to change the style attribute for each path in the SVG. We’re just interested in fill color, but to make things easier we’re going to replace the entire style instead of parsing to replace only the color. Define the style as the following:
End of explanation
with open ('svg_colored.svg', 'w') as g:
g.write(soup.prettify())
Explanation: Based on the number of pokemons, we want to assign the county to a color class. For example, if number > 50, use color1, if 40 < number <= 50, use color 2, etc.
Remember that we saved the svg in the soup object. Now that we have changed the svg to fill with colors, we can just write it out as a new file.
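If you want to preview the recolored map inline in the notebook, IPython's display tools can render the file directly (a small sketch, assuming you are running in Jupyter):
from IPython.display import SVG, display
display(SVG(filename='svg_colored.svg'))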
End of explanation |
14,664 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook plots the number of origins of viviparity and reversions to oviparity, broken down by tree. The result is a grouped histogram.
Step1: Read data using pandas.
Step2: Pivot the table to group the data by tree.
Step3: Plot using native pandas plotting. | Python Code:
import pandas as pd
from pandas import *
import matplotlib.pyplot as plt
%matplotlib inline
from ggplot import *
from numpy import random
plt.style.use('ggplot')
Explanation: This notebook plots the number of origins of viviparity and reversions to oviparity, broken down by tree. The result is a grouped histogram.
End of explanation
data = pd.read_csv("../Data/Histogram/pared_down.csv")
data
data.columns
Explanation: Read data using pandas.
End of explanation
table = pivot_table(data, index=['Tree'], columns=['Parameter'])
table
Explanation: Pivot the table to group the data by tree.
End of explanation
table.plot(kind='bar', width=.7, sort_columns=True).set_ylim(0,75)
plt.tight_layout()
plt.savefig('exTotal.svg', bbox_inches='tight', dpi=300)
Explanation: Plot using native pandas plotting.
End of explanation |
14,665 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to the PyGraphML Documentation
PyGraphML is a Python library designed to parse GraphML file.
Overview
GraphML
GraphML is a comprehensive and easy-to-use file format for graphs. It consists of a language core to describe the structural properties of a graph and a flexible extension mechanism to add application-specific data. Its main features include support of
Step1: Usage
Create a graph
Let's create a simple graph with 5 nodes and some edges between this nodes
Step2: Graph search
You can use breadth-first search and depth-first search
Step3: Visualize a graph with NetworkX
If you have matplotlib and NetworkX installed, you can visualize the graph. Note that Visualization is very basic and serves only to quickly check if graph is consistent
Step4: Write a graph into GraphML file
Now you may want to write your graph into a GraphML file. This is a
way
Step5: Read a graph from GraphML file
Now let's learn how to read a graph from a GraphML file. We will take the previous generated GraphML file, load it in Python and display it with NetworkX
Step6: Nodes and edges attributes management
GraphML format has a flexible attributes management as PyGraphML. To add an attribute to a node or an item, simply use Python power | Python Code:
%matplotlib inline
import tempfile
import os
import sys
sys.path.append("../")
from pygraphml import GraphMLParser
from pygraphml import Graph
Explanation: Welcome to the PyGraphML Documentation
PyGraphML is a Python library designed to parse GraphML files.
Overview
GraphML
GraphML is a comprehensive and easy-to-use file format for graphs. It consists of a language core to describe the structural properties of a graph and a flexible extension mechanism to add application-specific data. Its main features include support of:
directed, undirected, and mixed graphs,
hypergraphs,
hierarchical graphs,
graphical representations,
references to external data,
application-specific attribute data, and light-weight parsers.
Unlike many other file formats for graphs, GraphML does not use a
custom syntax. Instead, it is based on XML and hence ideally suited as
a common denominator for all kinds of services generating, archiving,
or processing graphs.
Note: Above description is coming from GraphML official website: http://graphml.graphdrawing.org/.
PyGraphML
PyGraphML is a small library designed to parse GraphML files. This
library has been written in Python. Its main features are:
reading a GraphML file and getting a consistent graph accessible in Python.
write a graph object to a GraphML file.
flexible attributes management.
graph visualization using NetworkX
End of explanation
g = Graph()
n1 = g.add_node("A")
n2 = g.add_node("B")
n3 = g.add_node("C")
n4 = g.add_node("D")
n5 = g.add_node("E")
g.add_edge(n1, n3)
g.add_edge(n2, n3)
g.add_edge(n3, n4)
g.add_edge(n3, n5)
Explanation: Usage
Create a graph
Let's create a simple graph with 5 nodes and some edges between these nodes:
End of explanation
# Set a root
g.set_root(n1)
nodes = g.BFS()
for node in nodes:
print(node)
nodes = g.DFS_prefix()
for node in nodes:
print(node)
Explanation: Graph search
You can use breadth-first search and depth-first search:
End of explanation
g.show()
Explanation: Visualize a graph with NetworkX
If you have matplotlib and NetworkX installed, you can visualize the graph. Note that the visualization is very basic and serves only to quickly check whether the graph is consistent:
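If you want more control over the figure than g.show() gives, one option is to rebuild the same edges directly in NetworkX and use its drawing helpers (a sketch with the edge list written out by hand for the five-node graph above):
import networkx as nx
G = nx.Graph()
G.add_edges_from([('A', 'C'), ('B', 'C'), ('C', 'D'), ('C', 'E')])
nx.draw(G, with_labels=True)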
End of explanation
# Create graph
g = Graph()
n1 = g.add_node("A")
n2 = g.add_node("B")
n3 = g.add_node("C")
n4 = g.add_node("D")
n5 = g.add_node("E")
g.add_edge(n1, n3)
g.add_edge(n2, n3)
g.add_edge(n3, n4)
g.add_edge(n3, n5)
fname = tempfile.mktemp()
parser = GraphMLParser()
parser.write(g, fname)
# Visualize the GraphML file
with open(fname) as f:
print(f.read())
Explanation: Write a graph into GraphML file
Now you may want to write your graph into a GraphML file. Here is one way to do it:
End of explanation
parser = GraphMLParser()
g = parser.parse(fname)
g.show()
Explanation: Read a graph from GraphML file
Now let's learn how to read a graph from a GraphML file. We will take the previously generated GraphML file, load it in Python, and display it with NetworkX:
End of explanation
g = Graph()
n = g.add_node('label')
# Add attribute
n['color'] = 'red'
# Read attribute
print(n['color'])
Explanation: Nodes and edges attributes management
The GraphML format has flexible attribute management, and so does PyGraphML. To add an attribute to a node or an item, simply use Python's dictionary-style syntax:
End of explanation |
14,666 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p style="text-align
Step2: 1. Implementar o algoritmo K-means
Nesta etapa você irá implementar as funções que compõe o algoritmo do KMeans uma a uma. É importante entender e ler a documentação de cada função, principalmente as dimensões dos dados esperados na saída.
1.1 Inicializar os centróides
A primeira etapa do algoritmo consiste em inicializar os centróides de maneira aleatória. Essa etapa é uma das mais importantes do algoritmo e uma boa inicialização pode diminuir bastante o tempo de convergência.
Para inicializar os centróides você pode considerar o conhecimento prévio sobre os dados, mesmo sem saber a quantidade de grupos ou sua distribuição.
Dica
Step3: Teste a função criada e visualize os centróides que foram calculados.
Step5: 1.2 Definir os clusters
Na segunda etapa do algoritmo serão definidos o grupo de cada dado, de acordo com os centróides calculados.
1.2.1 Função de distância
Codifique a função de distância euclidiana entre dois pontos (a, b).
Definido pela equação
Step6: Teste a função criada.
Step8: 1.2.2 Calcular o centroide mais próximo
Utilizando a função de distância codificada anteriormente, complete a função abaixo para calcular o centroid mais próximo de um ponto qualquer.
Dica
Step9: Teste a função criada
Step11: 1.2.3 Calcular centroid mais próximo de cada dado do dataset
Utilizando a função anterior que retorna o índice do centroid mais próximo, calcule o centroid mais próximo de cada dado do dataset.
Step12: Teste a função criada visualizando os cluster formados.
Step14: 1.3 Métrica de avaliação
Após formar os clusters, como sabemos se o resultado gerado é bom? Para isso, precisamos definir uma métrica de avaliação.
O algoritmo K-means tem como objetivo escolher centróides que minimizem a soma quadrática das distância entre os dados de um cluster e seu centróide. Essa métrica é conhecida como inertia.
$$\sum_{i=0}^{n}\min_{c_j \in C}(||x_i - c_j||^2)$$
A inertia, ou o critério de soma dos quadrados dentro do cluster, pode ser reconhecido como uma medida de o quão internamente coerentes são os clusters, porém ela sofre de alguns inconvenientes
Step15: Teste a função codificada executando o código abaixo.
Step17: 1.4 Atualizar os clusters
Nessa etapa, os centróides são recomputados. O novo valor de cada centróide será a media de todos os dados atribuídos ao cluster.
Step18: Visualize os clusters formados
Step19: Execute a função de atualização e visualize novamente os cluster formados
Step20: 2. K-means
2.1 Algoritmo completo
Utilizando as funções codificadas anteriormente, complete a classe do algoritmo K-means!
Step21: Verifique o resultado do algoritmo abaixo!
Step22: 2.2 Comparar com algoritmo do Scikit-Learn
Use a implementação do algoritmo do scikit-learn do K-means para o mesmo conjunto de dados. Mostre o valor da inércia e os conjuntos gerados pelo modelo. Você pode usar a mesma estrutura da célula de código anterior.
Dica
Step23: 3. Método do cotovelo
Implemete o método do cotovelo e mostre o melhor K para o conjunto de dados.
Step24: 4. Dataset Real
Exercícios
1 - Aplique o algoritmo do K-means desenvolvido por você no datatse iris [1]. Mostre os resultados obtidos utilizando pelo menos duas métricas de avaliação de clusteres [2].
[1] http | Python Code:
# import libraries
# linear algebra
import numpy as np
# data processing
import pandas as pd
# data visualization
from matplotlib import pyplot as plt
# sys - to get maximum float value
import sys
# load the data with pandas
url = 'https://raw.githubusercontent.com/InsightLab/data-science-cookbook/master/2019/09-clustering/dataset.csv'
dataset = pd.read_csv(url, header=None)
dataset = np.array(dataset)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.show()
Explanation: <p style="text-align: center;">Clusterização e algoritmo K-means</p>
Organizar dados em agrupamentos é um dos modos mais fundamentais de compreensão e aprendizado. Como por exemplo, os organismos em um sistema biologico são classificados em domínio, reino, filo, classe, etc. A análise de agrupamento é o estudo formal de métodos e algoritmos para agrupar objetos de acordo com medidas ou características semelhantes. A análise de cluster, em sua essência, não utiliza rótulos de categoria que marcam objetos com identificadores anteriores, ou seja, rótulos de classe. A ausência de informação de categoria distingue o agrupamento de dados (aprendizagem não supervisionada) da classificação ou análise discriminante (aprendizagem supervisionada). O objetivo da clusterização é encontrar estruturas em dados e, portanto, é de natureza exploratória.
A técnica de Clustering tem uma longa e rica história em uma variedade de campos científicos. Um dos algoritmos de clusterização mais populares e simples, o K-means, foi publicado pela primeira vez em 1955. Apesar do K-means ter sido proposto há mais de 50 anos e milhares de algoritmos de clustering terem sido publicados desde então, o K-means é ainda amplamente utilizado.
Fonte: Anil K. Jain, Data clustering: 50 years beyond K-means, Pattern Recognition Letters, Volume 31, Issue 8, 2010
Objetivo
Implementar as funções do algoritmo KMeans passo-a-passo
Comparar a implementação com o algoritmo do Scikit-Learn
Entender e codificar o Método do Cotovelo
Utilizar o K-means em um dataset real
Carregando os dados de teste
Carregue os dados disponibilizados, e identifique visualmente em quantos grupos os dados parecem estar distribuídos.
End of explanation
def calculate_initial_centers(dataset, k):
Inicializa os centróides iniciais de maneira arbitrária
Argumentos:
dataset -- Conjunto de dados - [m,n]
k -- Número de centróides desejados
Retornos:
centroids -- Lista com os centróides calculados - [k,n]
#### CODE HERE ####
m = dataset.shape[0]
centroids = list(dataset[np.random.randint(0, m - 1, 1)])
for it1 in range(k - 1):
max_dist = -1
for it2 in range(m):
nrst_cent_dist = sys.float_info.max
for it3 in range(len(centroids)):
dist = np.linalg.norm(dataset[it2] - centroids[it3])
# Get the distance to the nearest centroid
if (dist < nrst_cent_dist):
nrst_cent_dist = dist
nrst_cent = dataset[it2]
if (nrst_cent_dist > max_dist):
max_dist = nrst_cent_dist
new_cent = nrst_cent
centroids.append(new_cent)
centroids = np.array(centroids)
### END OF CODE ###
return centroids
Explanation: 1. Implementar o algoritmo K-means
Nesta etapa você irá implementar as funções que compõe o algoritmo do KMeans uma a uma. É importante entender e ler a documentação de cada função, principalmente as dimensões dos dados esperados na saída.
1.1 Inicializar os centróides
A primeira etapa do algoritmo consiste em inicializar os centróides de maneira aleatória. Essa etapa é uma das mais importantes do algoritmo e uma boa inicialização pode diminuir bastante o tempo de convergência.
Para inicializar os centróides você pode considerar o conhecimento prévio sobre os dados, mesmo sem saber a quantidade de grupos ou sua distribuição.
Dica: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html
End of explanation
k = 3
centroids = calculate_initial_centers(dataset, k)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red',s=100)
plt.show()
Explanation: Teste a função criada e visualize os centróides que foram calculados.
End of explanation
def euclidean_distance(a, b):
Calcula a distância euclidiana entre os pontos a e b
Argumentos:
a -- Um ponto no espaço - [1,n]
b -- Um ponto no espaço - [1,n]
Retornos:
distance -- Distância euclidiana entre os pontos
#### CODE HERE ####
n = len(a)
distance = 0
for i in range(n):
distance = distance + (a[i] - b[i])**2
distance = distance**0.5
### END OF CODE ###
return distance
Explanation: 1.2 Definir os clusters
Na segunda etapa do algoritmo serão definidos o grupo de cada dado, de acordo com os centróides calculados.
1.2.1 Função de distância
Codifique a função de distância euclidiana entre dois pontos (a, b).
Definido pela equação:
$$ dist(a, b) = \sqrt{(a_1-b_1)^{2}+(a_2-b_2)^{2}+ ... + (a_n-b_n)^{2}} $$
$$ dist(a, b) = \sqrt{\sum_{i=1}^{n}(a_i-b_i)^{2}} $$
End of explanation
a = np.array([1, 5, 9])
b = np.array([3, 7, 8])
if (euclidean_distance(a,b) == 3):
print("Distância calculada corretamente!")
else:
print("Função de distância incorreta")
Explanation: Teste a função criada.
End of explanation
def nearest_centroid(a, centroids):
Calcula o índice do centroid mais próximo ao ponto a
Argumentos:
a -- Um ponto no espaço - [1,n]
centroids -- Lista com os centróides - [k,n]
Retornos:
nearest_index -- Índice do centróide mais próximo
#### CODE HERE ####
# Check if centroids has two dimensions and, if not, convert to
if len(centroids.shape) == 1:
centroids = np.array([centroids])
nrst_cent_dist = sys.float_info.max
for j in range(len(centroids)):
dist = euclidean_distance(a, centroids[j])
if (dist < nrst_cent_dist):
nrst_cent_dist = dist
nearest_index = j
### END OF CODE ###
return nearest_index
Explanation: 1.2.2 Calcular o centroide mais próximo
Utilizando a função de distância codificada anteriormente, complete a função abaixo para calcular o centroid mais próximo de um ponto qualquer.
Dica: https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmin.html
End of explanation
# Seleciona um ponto aleatório no dataset
index = np.random.randint(dataset.shape[0])
a = dataset[index,:]
# Usa a função para descobrir o centroid mais próximo
idx_nearest_centroid = nearest_centroid(a, centroids)
# Plota os dados ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], s=10)
# Plota o ponto aleatório escolhido em uma cor diferente
plt.scatter(a[0], a[1], c='magenta', s=30)
# Plota os centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
# Plota o centroid mais próximo com uma cor diferente
plt.scatter(centroids[idx_nearest_centroid,0],
centroids[idx_nearest_centroid,1],
marker='^', c='springgreen', s=100)
# Cria uma linha do ponto escolhido para o centroid selecionado
plt.plot([a[0], centroids[idx_nearest_centroid,0]],
[a[1], centroids[idx_nearest_centroid,1]],c='orange')
plt.annotate('CENTROID', (centroids[idx_nearest_centroid,0],
centroids[idx_nearest_centroid,1],))
plt.show()
Explanation: Teste a função criada
End of explanation
def all_nearest_centroids(dataset, centroids):
Calcula o índice do centroid mais próximo para cada
ponto do dataset
Argumentos:
dataset -- Conjunto de dados - [m,n]
centroids -- Lista com os centróides - [k,n]
Retornos:
nearest_indexes -- Índices do centróides mais próximos - [m,1]
#### CODE HERE ####
# Check if centroids has two dimensions and, if not, convert to
if len(centroids.shape) == 1:
centroids = np.array([centroids])
nearest_indexes = np.zeros(len(dataset))
for i in range(len(dataset)):
nearest_indexes[i] = nearest_centroid(dataset[i], centroids)
### END OF CODE ###
return nearest_indexes
Explanation: 1.2.3 Calcular centroid mais próximo de cada dado do dataset
Utilizando a função anterior que retorna o índice do centroid mais próximo, calcule o centroid mais próximo de cada dado do dataset.
End of explanation
nearest_indexes = all_nearest_centroids(dataset, centroids)
plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
plt.show()
Explanation: Teste a função criada visualizando os cluster formados.
End of explanation
def inertia(dataset, centroids, nearest_indexes):
Soma das distâncias quadradas das amostras para o
centro do cluster mais próximo.
Argumentos:
dataset -- Conjunto de dados - [m,n]
centroids -- Lista com os centróides - [k,n]
nearest_indexes -- Índices do centróides mais próximos - [m,1]
Retornos:
inertia -- Soma total do quadrado da distância entre
os dados de um cluster e seu centróide
#### CODE HERE ####
# Check if centroids has two dimensions and, if not, convert to
if len(centroids.shape) == 1:
centroids = np.array([centroids])
inertia = 0
for i in range(len(dataset)):
inertia = inertia + euclidean_distance(dataset[i], centroids[int(nearest_indexes[i])])**2
### END OF CODE ###
return inertia
Explanation: 1.3 Métrica de avaliação
Após formar os clusters, como sabemos se o resultado gerado é bom? Para isso, precisamos definir uma métrica de avaliação.
O algoritmo K-means tem como objetivo escolher centróides que minimizem a soma quadrática das distância entre os dados de um cluster e seu centróide. Essa métrica é conhecida como inertia.
$$\sum_{i=0}^{n}\min_{c_j \in C}(||x_i - c_j||^2)$$
A inertia, ou o critério de soma dos quadrados dentro do cluster, pode ser reconhecido como uma medida de o quão internamente coerentes são os clusters, porém ela sofre de alguns inconvenientes:
A inertia pressupõe que os clusters são convexos e isotrópicos, o que nem sempre é o caso. Desta forma, pode não representar bem em aglomerados alongados ou variedades com formas irregulares.
A inertia não é uma métrica normalizada: sabemos apenas que valores mais baixos são melhores e zero é o valor ótimo. Mas em espaços de dimensões muito altas, as distâncias euclidianas tendem a se tornar infladas (este é um exemplo da chamada “maldição da dimensionalidade”). A execução de um algoritmo de redução de dimensionalidade, como o PCA, pode aliviar esse problema e acelerar os cálculos.
Fonte: https://scikit-learn.org/stable/modules/clustering.html
Para podermos avaliar os nosso clusters, codifique a métrica da inertia abaixo, para isso você pode utilizar a função de distância euclidiana construída anteriormente.
$$inertia = \sum_{i=0}^{n}\min_{c_j \in C} (dist(x_i, c_j))^2$$
End of explanation
tmp_data = np.array([[1,2,3],[3,6,5],[4,5,6]])
tmp_centroide = np.array([[2,3,4]])
tmp_nearest_indexes = all_nearest_centroids(tmp_data, tmp_centroide)
if inertia(tmp_data, tmp_centroide, tmp_nearest_indexes) == 26:
print("Inertia calculada corretamente!")
else:
print("Função de inertia incorreta!")
# Use a função para verificar a inertia dos seus clusters
inertia(dataset, centroids, nearest_indexes)
Explanation: Teste a função codificada executando o código abaixo.
End of explanation
def update_centroids(dataset, centroids, nearest_indexes):
Atualiza os centroids
Argumentos:
dataset -- Conjunto de dados - [m,n]
centroids -- Lista com os centróides - [k,n]
nearest_indexes -- Índices do centróides mais próximos - [m,1]
Retornos:
centroids -- Lista com centróides atualizados - [k,n]
#### CODE HERE ####
# Check if centroids has two dimensions and, if not, convert to
if len(centroids.shape) == 1:
centroids = np.array([centroids])
sum_data_inCentroids = np.zeros((len(centroids), len(centroids[0])))
num_data_inCentroids = np.zeros(len(centroids))
for i in range(len(dataset)):
cent_idx = int(nearest_indexes[i])
sum_data_inCentroids[cent_idx] += dataset[i]
num_data_inCentroids[cent_idx] += 1
for i in range(len(centroids)):
centroids[i] = sum_data_inCentroids[i]/num_data_inCentroids[i]
### END OF CODE ###
return centroids
Explanation: 1.4 Atualizar os clusters
Nessa etapa, os centróides são recomputados. O novo valor de cada centróide será a media de todos os dados atribuídos ao cluster.
End of explanation
nearest_indexes = all_nearest_centroids(dataset, centroids)
# Plota os os cluster ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
# Plota os centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
for index, centroid in enumerate(centroids):
dataframe = dataset[nearest_indexes == index,:]
for data in dataframe:
plt.plot([centroid[0], data[0]], [centroid[1], data[1]],
c='lightgray', alpha=0.3)
plt.show()
Explanation: Visualize os clusters formados
End of explanation
centroids = update_centroids(dataset, centroids, nearest_indexes)
Explanation: Execute a função de atualização e visualize novamente os cluster formados
End of explanation
class KMeans():
def __init__(self, n_clusters=8, max_iter=300):
self.n_clusters = n_clusters
self.max_iter = max_iter
def fit(self,X):
# Inicializa os centróides
self.cluster_centers_ = calculate_initial_centers(X, self.n_clusters)
# Computa o cluster de cada amostra
self.labels_ = all_nearest_centroids(X, self.cluster_centers_)
# Calcula a inércia inicial
old_inertia = inertia(X, self.cluster_centers_, self.labels_)
self.inertia_ = old_inertia
for index in range(self.max_iter):
#### CODE HERE ####
self.cluster_centers_ = update_centroids(X, self.cluster_centers_, self.labels_)
self.labels_ = all_nearest_centroids(X, self.cluster_centers_)
self.inertia_ = inertia(X, self.cluster_centers_, self.labels_)
if (self.inertia_ == old_inertia):
break
else:
old_inertia = self.inertia_
### END OF CODE ###
return self
def predict(self, X):
return all_nearest_centroids(X, self.cluster_centers_)
Explanation: 2. K-means
2.1 Algoritmo completo
Utilizando as funções codificadas anteriormente, complete a classe do algoritmo K-means!
End of explanation
kmeans = KMeans(n_clusters=3)
kmeans.fit(dataset)
print("Inércia = ", kmeans.inertia_)
plt.scatter(dataset[:,0], dataset[:,1], c=kmeans.labels_)
plt.scatter(kmeans.cluster_centers_[:,0],
kmeans.cluster_centers_[:,1], marker='^', c='red', s=100)
plt.show()
Explanation: Verifique o resultado do algoritmo abaixo!
End of explanation
#### CODE HERE ####
from sklearn.cluster import KMeans as sk_KMeans
skkmeans = sk_KMeans(n_clusters=3).fit(dataset)
print("Scikit-Learn KMeans' inertia: ", skkmeans.inertia_)
print("My KMeans inertia: ", kmeans.inertia_)
Explanation: 2.2 Comparar com algoritmo do Scikit-Learn
Use a implementação do algoritmo do scikit-learn do K-means para o mesmo conjunto de dados. Mostre o valor da inércia e os conjuntos gerados pelo modelo. Você pode usar a mesma estrutura da célula de código anterior.
Dica: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans
End of explanation
#### CODE HERE ####
# Initialize array of Ks
ks = np.array(range(1, 11))
# Create array to receive the inertias for each K
inertias = np.zeros(len(ks))
for i in range(len(ks)):
# Compute inertia for K
kmeans = KMeans(ks[i]).fit(dataset)
inertias[i] = kmeans.inertia_
# Best K is the last one to improve the inertia in 30%
if (i > 0 and (inertias[i - 1] - inertias[i])/inertias[i] > 0.3):
best_k_idx = i
print("Best K: {}\n".format(ks[best_k_idx]))
plt.plot(ks, inertias, marker='o')
plt.plot(ks[best_k_idx], inertias[best_k_idx], 'ro')
Explanation: 3. Método do cotovelo
Implemete o método do cotovelo e mostre o melhor K para o conjunto de dados.
End of explanation
#### CODE HERE ####
Explanation: 4. Dataset Real
Exercícios
1 - Aplique o algoritmo do K-means desenvolvido por você no datatse iris [1]. Mostre os resultados obtidos utilizando pelo menos duas métricas de avaliação de clusteres [2].
[1] http://archive.ics.uci.edu/ml/datasets/iris
[2] http://scikit-learn.org/stable/modules/clustering.html#clustering-evaluation
Dica: você pode utilizar as métricas completeness e homogeneity.
2 - Tente melhorar o resultado obtido na questão anterior utilizando uma técnica de mineração de dados. Explique a diferença obtida.
Dica: você pode tentar normalizar os dados [3].
- [3] https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html
3 - Qual o número de clusteres (K) você escolheu na questão anterior? Desenvolva o Método do Cotovelo sem usar biblioteca e descubra o valor de K mais adequado. Após descobrir, utilize o valor obtido no algoritmo do K-means.
4 - Utilizando os resultados da questão anterior, refaça o cálculo das métricas e comente os resultados obtidos. Houve uma melhoria? Explique.
End of explanation |
14,667 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image data
The goal of this notebook is to detail how to interact with, and compute statistics on the images associated to the set of ads provided for the CP1 during the MEMEX Winter QPR 2017.
Data to download
Data posted on HDFS, see Wiki page.
Training images
Training images info
Test images
Test images info
Plus the data available on the Wiki
Info files
adjusted_images.json
image_url_sha1.csv
faces.jl
images_faces_stats.jl
Step1: Analyze image stats
Step2: Images distribution
Step3: Faces distribution
Step4: Show images and faces of one ad | Python Code:
import os
import csv
import json
# set some parameters
data_dir = "../data"
prefix = "test"
if prefix=="train":
input_file = "train_adjusted.json"
else:
input_file = "test_adjusted_unlabelled.json"
images_dir = os.path.join(data_dir,prefix+"_images")
url_sha1_file = os.path.join(data_dir,prefix+"_image_url_sha1.csv")
faces_file = os.path.join(data_dir,prefix+"_faces.jl")
stats_file = os.path.join(data_dir,prefix+"_images_faces_stats.jl")
images_file = os.path.join(data_dir,prefix+"_adjusted_images.json")
# parse faces_file
def parse_faces(faces_file):
faces_dict = {}
with open(faces_file, "rt") as faces:
for line in faces:
one_face_dict = json.loads(line)
img_sha1 = one_face_dict.keys()[0]
nb_faces = len(one_face_dict[img_sha1].keys())
#print nb_faces
faces_dict[img_sha1] = dict()
faces_dict[img_sha1]['count'] = nb_faces
faces_dict[img_sha1]['detections'] = one_face_dict[img_sha1]
return faces_dict
faces_dict = parse_faces(faces_file)
print len(faces_dict)
i = 3
print faces_dict.keys()[i], faces_dict[faces_dict.keys()[i]]
# parse images_file
def parse_images_file(images_file):
ads_images_dict = {}
with open(images_file, "rt") as images:
for line in images:
one_image_dict = json.loads(line)
ad_id_list = one_image_dict['obj_parent']
img_url = one_image_dict['obj_stored_url']
if type(ad_id_list) is not list:
ad_id_list = [ad_id_list]
for ad_id in ad_id_list:
if ad_id not in ads_images_dict:
ads_images_dict[ad_id] = [img_url]
else:
ads_images_dict[ad_id].append(img_url)
return ads_images_dict
ads_images_dict = parse_images_file(images_file)
print len(ads_images_dict)
print ads_images_dict.keys()[0],ads_images_dict[ads_images_dict.keys()[0]]
# parse image_url_sha1_file
def parse_url_sha1_file(url_sha1_file):
url_sha1_dict = {}
with open(url_sha1_file,"rt") as img_url_sha1:
for line in img_url_sha1:
url, sha1 = line.split(',')
url_sha1_dict[url] = sha1
return url_sha1_dict
url_sha1_dict = parse_url_sha1_file(url_sha1_file)
print len(url_sha1_dict)
print url_sha1_dict.keys()[0],url_sha1_dict[url_sha1_dict.keys()[0]]
Explanation: Image data
The goal of this notebook is to detail how to interact with, and compute statistics on the images associated to the set of ads provided for the CP1 during the MEMEX Winter QPR 2017.
Data to download
Data posted on HDFS, see Wiki page.
Training images
Training images info
Test images
Test images info
Plus the data available on the Wiki
Info files
adjusted_images.json
image_url_sha1.csv
faces.jl
images_faces_stats.jl
End of explanation
import matplotlib
from numpy.random import randn
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
%matplotlib inline
def to_percent(y, position):
# Ignore the passed in position. This has the effect of scaling the default
# tick locations.
s = str(100 * y)
# The percent symbol needs escaping in latex
if matplotlib.rcParams['text.usetex'] is True:
return s + r'$\%$'
else:
return s + '%'
Explanation: Analyze image stats
End of explanation
def get_ad_images(ad_id, ads_images_dict, url_sha1_dict, verbose=False):
images_url_list = ads_images_dict[ad_id]
images_sha1s = []
for image_url in images_url_list:
if image_url is None or not image_url:
continue
try:
images_sha1s.append(url_sha1_dict[image_url.strip()].strip())
except:
if verbose:
print 'Cannot find sha1 for: {}.'.format(image_url)
return images_sha1s
# Analyze distribution of images in ads_images_dict
images_count = []
for ad_id in ads_images_dict:
images_count.append(len(get_ad_images(ad_id, ads_images_dict, url_sha1_dict)))
def print_stats(np_img_count):
print np.min(np_img_count), np.mean(np_img_count), np.max(np_img_count)
# Normed histogram seems to be broken,
# using weights as suggested in http://stackoverflow.com/questions/5498008/pylab-histdata-normed-1-normalization-seems-to-work-incorrect
weights = np.ones_like(np_img_count)/float(len(np_img_count))
res = plt.hist(np_img_count, bins=100, weights=weights)
print np.sum(res[0])
# Create the formatter using the function to_percent. This multiplies all the
# default labels by 100, making them all percentages
formatter = FuncFormatter(to_percent)
# Set the formatter
plt.gca().yaxis.set_major_formatter(formatter)
plt.show()
print_stats(np.asarray(images_count))
Explanation: Images distribution
End of explanation
def get_faces_images(images_sha1s, faces_dict):
faces_out = {}
for sha1 in images_sha1s:
img_notfound = False
try:
tmp_faces = faces_dict[sha1]
except:
img_notfound = True
if img_notfound or tmp_faces['count']==0:
faces_out[sha1] = []
continue
bboxes = []
for face in tmp_faces['detections']:
bbox = [float(x) for x in tmp_faces['detections'][face]['bbox'].split(',')]
bbox.append(float(tmp_faces['detections'][face]['score']))
bboxes.append(bbox)
#print bboxes
faces_out[sha1] = bboxes
return faces_out
def show_faces(faces, images_dir):
from matplotlib.pyplot import imshow
from IPython.display import display
import numpy as np
%matplotlib inline
imgs = []
for face in faces:
if faces[face]:
img = open_image(face, images_dir)
draw_face_bbox(img, faces[face])
imgs.append(img)
if not imgs:
print 'No face images'
display(*imgs)
# get all faces ads from each ad
faces_in_images_percent = []
for ad_id in ads_images_dict:
images_sha1s = get_ad_images(ad_id, ads_images_dict, url_sha1_dict)
faces_images = get_faces_images(images_sha1s, faces_dict)
if len(faces_images)==0:
continue
nb_faces = 0
for face in faces_images:
if faces_images[face]:
nb_faces += 1
faces_in_images_percent.append(float(nb_faces)/len(faces_images))
np_faces_in_images_percent = np.asarray(faces_in_images_percent)
print_stats(np_faces_in_images_percent)
no_faces = np.where(np_faces_in_images_percent==0.0)
print no_faces[0].shape
print np_faces_in_images_percent.shape
percent_noface = float(no_faces[0].shape[0])/np_faces_in_images_percent.shape[0]
print 1-percent_noface
# get all faces scores from each ad
faces_scores = []
all_faces = []
for ad_id in ads_images_dict:
images_sha1s = get_ad_images(ad_id, ads_images_dict, url_sha1_dict)
faces_images = get_faces_images(images_sha1s, faces_dict)
if len(faces_images)==0:
continue
nb_faces = 0
for face in faces_images:
if faces_images[face]:
for one_face in faces_images[face]:
all_faces.append([face, one_face])
faces_scores.append(float(one_face[4]))
np_faces_scores = np.asarray(faces_scores)
print_stats(faces_scores)
low_scores_faces = np.where(np_faces_scores<0.90)[0]
print float(len(low_scores_faces))/len(np_faces_scores)
very_low_scores_faces = np.where(np_faces_scores<0.80)[0]
print float(len(very_low_scores_faces))/len(np_faces_scores)
#all_faces
print len(np_faces_scores)
nb_faces_to_show = 10
np.random.shuffle(very_low_scores_faces)
faces_to_show = [all_faces[x] for x in very_low_scores_faces[:nb_faces_to_show]]
print faces_to_show
for face_id, face in faces_to_show:
print face_id, face
face_dict = {}
face_dict[face_id] = [face]
show_faces(face_dict, images_dir)
Explanation: Faces distribution
End of explanation
def get_fnt(img, txt):
from PIL import ImageFont
# portion of image width you want text width to be
img_fraction = 0.20
fontsize = 2
font = ImageFont.truetype("arial.ttf", fontsize)
while font.getsize(txt)[0] < img_fraction*img.size[0]:
# iterate until the text size is just larger than the criteria
fontsize += 1
font = ImageFont.truetype("arial.ttf", fontsize)
return font, font.getsize(txt)[0]
def draw_face_bbox(img, bboxes, width=4):
from PIL import ImageDraw
import numpy as np
draw = ImageDraw.Draw(img)
for bbox in bboxes:
for i in range(width):
rect_start = (int(np.round(bbox[0] + width/2 - i)), int(np.round(bbox[1] + width/2 - i)))
rect_end = (int(np.round(bbox[2] - width/2 + i)), int(np.round(bbox[3] - width/2 + i)))
draw.rectangle((rect_start, rect_end), outline=(0, 255, 0))
# print score?
if len(bbox)==5:
score = str(bbox[4])
fnt, text_size = get_fnt(img, score[:5])
draw.text((np.round((bbox[0]+bbox[2])/2-text_size/2),np.round(bbox[1])), score[:5], font=fnt, fill=(255,255,255,64))
def open_image(sha1, images_dir):
from PIL import Image
img = Image.open(os.path.join(images_dir, sha1[:3], sha1))
return img
#face images of ad '84FC37A4E38F7DE2B9FCAAB902332ED60A344B8DF90893A5A8BE3FC1139FCD5A' are blurred but detected
# image '20893a926fbf50d1a5994f70ec64dbf33dd67e2a' highly pixelated
# male strippers '20E4597A6DA11BC07BB7578FFFCE07027F885AF02265FD663C0911D2699E0A79'
all_ads_id = range(len(ads_images_dict.keys()))
import numpy as np
np.random.shuffle(all_ads_id)
ad_id = ads_images_dict.keys()[all_ads_id[0]]
print ad_id
images_sha1s = get_ad_images(ad_id, ads_images_dict, url_sha1_dict)
print images_sha1s
faces = get_faces_images(images_sha1s, faces_dict)
print faces
show_faces(faces, images_dir)
Explanation: Show images and faces of one ad
End of explanation |
14,668 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executing JavaScript Code Directly in SQL Queries Using the jseval Function Tutorial
MLDB provides a complete implementation of the SQL SELECT statement. Most of the functions you are used to using are available in your queries.
MLDB also supports additional functions that extend standard SQL in very interesting ways. One of those function is the jseval function that can be used to execute arbitrary JavaScript code inline in an SQL query.
In this tutorial, we will show some basic usage example followed by two different use-cases for the jseval function
Step2: Basic usage examples
Let's start by writing a simple SQL query that will multiply an input number by 2 in JavaScript
Step4: The variable val takes the input value 5 and the code is then evaluated.
Our function can also take in multiple parameters as input, and return different output values
Step5: In the above example, the string val,str_val mean that the function takes 2 input variables. Those values will be 5 and the string Bonjour!. Since we return a JavaScript object, we essentially return a row where the keys are the objects' keys and the cell values are the object's values.
Now that we have the basics in place, let's continue to a real use-case below.
Formatting data during the import step
In the Loading Data From An HTTP Server Tutorial tutorial, we loaded a specific file from an archive that was located on the Stanford Network Analysis Project (SNAP) website.
The dataset contains all the circles of friends in which user no. 3980 is part of. Each row represents a circle of
friends, and all the users that are part of that circle will be enumerated on the line.
Let's check's out the unformated version of the data first, by running the import.text procedure
Step7: We see that each line contains the circle number followed by user ids. This type of data is an ideal candidate for MLDB, since we can store it as bags of words, or rather, bags of friends. A dataset of type sparse.mutable can store sparse representations like this one very efficiently.
Normally, we could use the tokenize function to deal with data like this. However, since splitting the data on the <TAB> character yields a variable number of columns, the standard way of importing this won't work very nicely in the import.text procedure.
In the code below, we will use the jseval function to do the following in JavaScript
Step9: We can now run a SELECT query on the resulting dataset and get a nice sparse representation
Step11: We can now answer a simple question like
Step12: Since the maximum value is 1, we now know that the answer to the above question is no.
Although there are other ways to obtain the same result, using jseval and the dataset of type sparse.mutable allowed us to transform our data in a single step, without knowing its characteristics in advance. This shows how much added flexibility is added by such a function.
Designing custom feature generators
Another very powerful way the jseval function can be used is as a feature generator. When trying to prototype and iterate quickly, this can be a very efficient way to try out new ideas.
Let's start by creating a toy dataset using the description of machine learning concepts from Wikipedia
Step13: Taking a peek at our data, we see there is a single column called Text that contains a textual description of an ML concept
Step15: Let's now create a function of type sql.expression containing a jseval function that calculates different statistics about the string it is given. It calculates things like the number of words in the string, the number of capital letters, etc.
Putting it in an sql.expression allows us to reuse it easily later on.
Step16: Now that we have created our getStats function, we can call it on a single string
Step17: Looks like it works! We can also call it on the Text column of our ml_concepts dataset to get the statistics for all the rows of our dataset | Python Code:
from pymldb import Connection
mldb = Connection("http://localhost")
Explanation: Executing JavaScript Code Directly in SQL Queries Using the jseval Function Tutorial
MLDB provides a complete implementation of the SQL SELECT statement. Most of the functions you are used to using are available in your queries.
MLDB also supports additional functions that extend standard SQL in very interesting ways. One of those function is the jseval function that can be used to execute arbitrary JavaScript code inline in an SQL query.
In this tutorial, we will show some basic usage example followed by two different use-cases for the jseval function:
Formatting data during the import step
Designing custom feature generators
Setting up
Before we begin, let's start by importing the pymldb library so we can make REST API calls to MLDB. You can check out the Using pymldb Tutorial for more details.
End of explanation
mldb.query(
SELECT
jseval('
return val * 2;
','val', 5) AS output
)
Explanation: Basic usage examples
Let's start by writing a simple SQL query that will multiply an input number by 2 in JavaScript:
End of explanation
mldb.query(
SELECT
jseval('
var output = {};
output["mult"] = val * 2;
output["name"] = str_val + " Hello!";
return output;
','val,str_val', 5, 'Bonjour!') AS output
)
Explanation: The variable val takes the input value 5 and the code is then evaluated.
Our function can also take in multiple parameters as input, and return different output values:
End of explanation
dataUrl = "http://snap.stanford.edu/data/facebook.tar.gz"
mldb.put("/v1/procedures/import_data", {
"type": "import.text",
"params": {
"dataFileUrl": "archive+" + dataUrl + "#facebook/3980.circles",
"delimiter": " ",
"quoteChar": "",
"outputDataset": "import_URL2",
"runOnCreation": True
}
})
mldb.query("SELECT * NAMED rowName() FROM import_URL2 LIMIT 10")
Explanation: In the above example, the string val,str_val mean that the function takes 2 input variables. Those values will be 5 and the string Bonjour!. Since we return a JavaScript object, we essentially return a row where the keys are the objects' keys and the cell values are the object's values.
Now that we have the basics in place, let's continue to a real use-case below.
Formatting data during the import step
In the Loading Data From An HTTP Server Tutorial tutorial, we loaded a specific file from an archive that was located on the Stanford Network Analysis Project (SNAP) website.
The dataset contains all the circles of friends in which user no. 3980 is part of. Each row represents a circle of
friends, and all the users that are part of that circle will be enumerated on the line.
Let's check's out the unformated version of the data first, by running the import.text procedure:
End of explanation
dataUrl = "http://snap.stanford.edu/data/facebook.tar.gz"
print mldb.put("/v1/procedures/import_non_formated", {
"type": "import.text",
"params": {
"dataFileUrl": "archive+" + dataUrl + "#facebook/3980.circles",
"headers": ["circles"],
"select":
jseval('
var row_val = val.split("\t");
var rtn = {};
rtn["rowName"] = row_val[0];
for(i=1; i<row_val.length; i++) {
rtn[row_val[i]] = 1;
}
return rtn;
','val', circles) AS *
,
"outputDataset": {
"id": "import_non_formated",
"type": "sparse.mutable"
},
"runOnCreation": True
}
})
Explanation: We see that each line contains the circle number followed by user ids. This type of data is an ideal candidate for MLDB, since we can store it as bags of words, or rather, bags of friends. A dataset of type sparse.mutable can store sparse representations like this one very efficiently.
Normally, we could use the tokenize function to deal with data like this. However, since splitting the data on the <TAB> character yields a variable number of columns, the standard way of importing this won't work very nicely in the import.text procedure.
In the code below, we will use the jseval function to do the following in JavaScript:
- create an empty object
- split each line on the <TAB> character
- store the first element of each line under the key rowName in the object (circle0, circle1, etc...)
- store all remaining elements of the line using the element's name as the key, and the number 1 as the value
End of explanation
mldb.query(
SELECT * EXCLUDING(rowName)
NAMED rowName
FROM import_non_formated
ORDER BY CAST(rowName() AS integer)
LIMIT 5
)
Explanation: We can now run a SELECT query on the resulting dataset and get a nice sparse representation:
End of explanation
mldb.query(
SELECT *
FROM transpose(
(
SELECT sum({* EXCLUDING(rowName)}) as *
NAMED 'result'
FROM import_non_formated
)
)
ORDER BY result DESC
LIMIT 5
)
Explanation: We can now answer a simple question like: Is there any friend of user 3980 that appears in more than one of his circle of friends? It can be answered with the following query:
End of explanation
print mldb.put('/v1/procedures/import_ML_concepts', {
"type":"import.text",
"params": {
"dataFileUrl":"file://mldb/mldb_test_data/MachineLearningConcepts.csv",
"outputDataset": "ml_concepts",
"named": "Concepts",
"select": "Text",
"runOnCreation": True
}
}
)
Explanation: Since the maximum value is 1, we now know that the answer to the above question is no.
Although there are other ways to obtain the same result, using jseval and the dataset of type sparse.mutable allowed us to transform our data in a single step, without knowing its characteristics in advance. This shows how much added flexibility is added by such a function.
Designing custom feature generators
Another very powerful way the jseval function can be used is as a feature generator. When trying to prototype and iterate quickly, this can be a very efficient way to try out new ideas.
Let's start by creating a toy dataset using the description of machine learning concepts from Wikipedia:
End of explanation
mldb.query("SELECT * FROM ml_concepts")
Explanation: Taking a peek at our data, we see there is a single column called Text that contains a textual description of an ML concept:
End of explanation
print mldb.put("/v1/functions/getStats", {
"type": "sql.expression",
"params": {
"expression":
jseval('
var result = {};
result["len"] = txt.length;
result["numWords"] = txt.split(" ").length;
result["numCapital"] = txt.replace(/[^A-Z]/g, "").length;
result["numExpl"] = txt.replace(/[^!]/g, "").length;
result["numQst"] = txt.replace(/[^?]/g, "").length;
result["containsHashSign"] = txt.replace(/[^#]/g, "").length >= 1;
result["numNumbers"] = txt.replace(/[^0-9]/g, "").length;
result["capitalProportion"] = result["numCapital"] / result["len"];
result["explProportion"] = result["numExpl"] / result["len"];
result["qstProportion"] = result["numQst"] / result["len"];
result["numberProportion"] = result["numNumbers"] / result["len"];
return result;
', 'txt', text) as stats
}
})
Explanation: Let's now create a function of type sql.expression containing a jseval function that calculates different statistics about the string it is given. It calculates things like the number of words in the string, the number of capital letters, etc.
Putting it in an sql.expression allows us to reuse it easily later on.
End of explanation
mldb.query("SELECT getStats({text: 'This is a test #hopethisworks #mldb'}) as *")
Explanation: Now that we have created our getStats function, we can call it on a single string:
End of explanation
mldb.query("SELECT getStats({text: Text}) as * FROM ml_concepts")
Explanation: Looks like it works! We can also call it on the Text column of our ml_concepts dataset to get the statistics for all the rows of our dataset:
End of explanation |
14,669 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
机器学习工程师纳米学位
模型评价与验证
项目 1
Step1: 分析数据
在项目的第一个部分,你会对波士顿房地产数据进行初步的观察并给出你的分析。通过对数据的探索来熟悉数据可以让你更好地理解和解释你的结果。
由于这个项目的最终目标是建立一个预测房屋价值的模型,我们需要将数据集分为特征(features)和目标变量(target variable)。特征 'RM', 'LSTAT',和 'PTRATIO',给我们提供了每个数据点的数量相关的信息。目标变量:'MEDV',是我们希望预测的变量。他们分别被存在features和prices两个变量名中。
练习:基础统计运算
你的第一个编程练习是计算有关波士顿房价的描述统计数据。我们已为你导入了numpy,你需要使用这个库来执行必要的计算。这些统计数据对于分析模型的预测结果非常重要的。
在下面的代码中,你要做的是:
- 计算prices中的'MEDV'的最小值、最大值、均值、中值和标准差;
- 将运算结果储存在相应的变量中。
Step3: 问题1 - 特征观察
如前文所述,本项目中我们关注的是其中三个值
Step4: 问题2 - 拟合程度
假设一个数据集有五个数据且一个模型做出下列目标变量的预测:
| 真实数值 | 预测数值 |
|
Step5: 回答
Step6: 问题 3- 训练及测试
将数据集按一定比例分为训练用的数据集和测试用的数据集对学习算法有什么好处?
提示: 如果没有数据来对模型进行测试,会出现什么问题?
答案
Step7: 问题 4 - 学习数据
选择上述图像中的其中一个,并给出其最大深度。随着训练数据量的增加,训练曲线的评分有怎样的变化?测试曲线呢?如果有更多的训练数据,是否能有效提升模型的表现呢?
提示:学习曲线的评分是否最终会收敛到特定的值?
答案
Step9: 问题 5- 偏差与方差之间的权衡取舍
当模型以最大深度 1训练时,模型的预测是出现很大的偏差还是出现了很大的方差?当模型以最大深度10训练时,情形又如何呢?图形中的哪些特征能够支持你的结论?
提示: 你如何得知模型是否出现了偏差很大或者方差很大的问题?
答案
Step10: 做出预测
当我们用数据训练出一个模型,它现在就可用于对新的数据进行预测。在决策树回归函数中,模型已经学会对新输入的数据提问,并返回对目标变量的预测值。你可以用这个预测来获取数据未知目标变量的信息,这些数据必须是不包含在训练数据之内的。
问题 9- 最优模型
最优模型的最大深度(maximum depth)是多少?此答案与你在问题 6所做的猜测是否相同?
运行下方区域内的代码,将决策树回归函数代入训练数据的集合,以得到最优化的模型。
Step11: Answer
Step12: 答案
Step15: 问题 11 - 实用性探讨
简单地讨论一下你建构的模型能否在现实世界中使用?
提示: 回答几个问题,并给出相应结论的理由:
- 1978年所采集的数据,在今天是否仍然适用?
- 数据中呈现的特征是否足够描述一个房屋?
- 模型是否足够健壮来保证预测的一致性?
- 在波士顿这样的大都市采集的数据,能否应用在其它乡镇地区?
答案 | Python Code:
# Import libraries necessary for this project
# 载入此项目所需要的库
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.model_selection import ShuffleSplit
# Pretty display for notebooks
# 让结果在notebook中显示
%matplotlib inline
# Load the Boston housing dataset
# 载入波士顿房屋的数据集
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
# 完成
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
Explanation: 机器学习工程师纳米学位
模型评价与验证
项目 1: 预测波士顿房价
欢迎来到机器学习工程师纳米学位的第一个项目!在此文件中,有些示例代码已经提供给你,但你还需要实现更多的功能来让项目成功运行。除非有明确要求,你无须修改任何已给出的代码。以'练习'开始的标题表示接下来的内容中有需要你必须实现的功能。每一部分都会有详细的指导,需要实现的部分也会在注释中以'TODO'标出。请仔细阅读所有的提示!
除了实现代码外,你还必须回答一些与项目和实现有关的问题。每一个需要你回答的问题都会以'问题 X'为标题。请仔细阅读每个问题,并且在问题后的'回答'文字框中写出完整的答案。你的项目将会根据你对问题的回答和撰写代码所实现的功能来进行评分。
提示:Code 和 Markdown 区域可通过 Shift + Enter 快捷键运行。此外,Markdown可以通过双击进入编辑模式。
开始
在这个项目中,你将利用马萨诸塞州波士顿郊区的房屋信息数据训练和测试一个模型,并对模型的性能和预测能力进行测试。通过该数据训练后的好的模型可以被用来对房屋做特定预测---尤其是对房屋的价值。对于房地产经纪等人的日常工作来说,这样的预测模型被证明非常有价值。
此项目的数据集来自UCI机器学习知识库。波士顿房屋这些数据于1978年开始统计,共506个数据点,涵盖了麻省波士顿不同郊区房屋14种特征的信息。本项目对原始数据集做了以下处理:
- 有16个'MEDV' 值为50.0的数据点被移除。 这很可能是由于这些数据点包含遗失或看不到的值。
- 有1个数据点的 'RM' 值为8.78. 这是一个异常值,已经被移除。
- 对于本项目,房屋的'RM', 'LSTAT','PTRATIO'以及'MEDV'特征是必要的,其余不相关特征已经被移除。
- 'MEDV'特征的值已经过必要的数学转换,可以反映35年来市场的通货膨胀效应。
运行下面区域的代码以载入波士顿房屋数据集,以及一些此项目所需的Python库。如果成功返回数据集的大小,表示数据集已载入成功。
End of explanation
# TODO: Minimum price of the data
#目标:计算价值的最小值
minimum_price = np.min(prices)
# TODO: Maximum price of the data
#目标:计算价值的最大值
maximum_price = np.max(prices)
# TODO: Mean price of the data
#目标:计算价值的平均值
mean_price = np.mean(prices)
# TODO: Median price of the data
#目标:计算价值的中值
median_price = np.median(prices)
# TODO: Standard deviation of prices of the data
#目标:计算价值的标准差
std_price = np.std(prices)
# Show the calculated statistics
#目标:输出计算的结果
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
Explanation: Data Exploration
In the first section of this project, you will make a cursory investigation of the Boston housing data and provide your analysis. Familiarizing yourself with the data through exploration will help you better understand and interpret your results.
Since the ultimate goal of this project is to build a model that predicts house values, we need to separate the dataset into features and the target variable. The features 'RM', 'LSTAT' and 'PTRATIO' give us quantitative information about each data point. The target variable, 'MEDV', is the variable we want to predict. They are stored in the variables features and prices, respectively.
Exercise: Calculate Statistics
Your first coding exercise is to compute descriptive statistics about the Boston housing prices. numpy has already been imported for you; use it to perform the necessary calculations. These statistics are essential later on for analysing the model's predictions.
In the code below you need to:
- compute the minimum, maximum, mean, median and standard deviation of 'MEDV' in prices;
- store the results in the corresponding variables.
End of explanation
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
Calculates and returns the performance score between
true and predicted values based on the metric chosen.
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
Explanation: Question 1 - Feature Observation
As mentioned above, this project focuses on three of the values: 'RM', 'LSTAT' and 'PTRATIO'. For each data point:
- 'RM' is the average number of rooms per home in the neighborhood;
- 'LSTAT' is the percentage of homeowners in the neighborhood considered "lower class" (working poor);
- 'PTRATIO' is the ratio of students to teachers in the primary and secondary schools of the neighborhood (students per teacher).
Using your intuition, for each of the three features above, do you think that increasing its value would lead to an increase or a decrease of 'MEDV'? Justify each answer.
Hint: Would you expect a home with an 'RM' value of 6 to be worth more or less than a home with an 'RM' value of 7?
Answer:
As RM increases, MEDV increases, because the home is larger;
As LSTAT increases, MEDV decreases, because there are more low-income residents;
As PTRATIO increases, MEDV increases, because educational resources become more plentiful
Developing a Model
In the second section of the project, you will get familiar with the tools and techniques needed for your model to make predictions. Being able to measure each model's performance accurately with these tools greatly reinforces the confidence in your predictions.
Exercise: Define a Performance Metric
It is difficult to gauge the quality of a model without quantifying its performance on training and testing data. This is usually done with a metric computed from some kind of error or goodness-of-fit measure. For this project, you will quantify performance using the coefficient of determination, R<sup>2</sup>. The coefficient of determination is a very common statistic in regression analysis and is often treated as a measure of how well a model predicts.
The values of R<sup>2</sup> range from 0 to 1, capturing the percentage of the squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than always predicting the mean of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable's variation can be explained by the features. A model can also have a negative R<sup>2</sup>, in which case its predictions can be far worse than simply using the mean of the target variable.
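A quick, purely illustrative check of the two endpoints (the numbers below are made up and are not part of the project data):
from sklearn.metrics import r2_score
y_true = [1.0, 2.0, 3.0, 4.0]
print(r2_score(y_true, y_true))                # 1.0 - perfect predictions
print(r2_score(y_true, [2.5, 2.5, 2.5, 2.5]))  # 0.0 - no better than always predicting the mean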
In the performance_metric function in the code below, you will need to:
- use r2_score from sklearn.metrics to compute the R<sup>2</sup> value between y_true and y_predict as the judgement of performance;
- store the performance score in the score variable.
End of explanation
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
Explanation: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model makes the following predictions of the target variable:
| True Value | Prediction |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
Would you consider this model to have successfully captured the variation of the target variable? Why or why not?
Run the code below to use the performance_metric function and calculate this model's coefficient of determination.
End of explanation
# TODO: Import 'train_test_split'
from sklearn.model_selection import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=42)
# Success
print "Training and testing split was successful."
Explanation: Answer: I would say it captured the variation successfully. The coefficient of determination ranges from 0 to 1, and the closer it is to 1, the better the model predicts the target variable. The computed coefficient of determination is 0.923, which indicates that the model describes the variation of the target variable well.
Exercise: Shuffle and Split Data
Next, you need to split the Boston housing dataset into training and testing subsets. Typically, the data is also shuffled in this process to remove any bias caused by the ordering of the dataset.
In the code below, you will need to:
- use train_test_split from sklearn.model_selection to split both features and prices into training and testing subsets;
- use a split of 80% of the data for training and 20% for testing;
- set the random_state of train_test_split to a chosen value, which ensures consistent results;
- assign the resulting splits to X_train, X_test, y_train, and y_test.
End of explanation
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
Explanation: Question 3 - Training and Testing
What is the benefit to a learning algorithm of splitting a dataset into training and testing subsets at some ratio?
Hint: What could go wrong if there is no data left for testing the model?
Answer: Doing so allows us to estimate the model's generalization error on the testing subset, and therefore to check how good the model really is.
Analyzing Model Performance
In this third section of the project, you will look at several models' learning and testing performances on various subsets of the data. In addition, you will focus on one particular algorithm trained on the full training set with an increasing 'max_depth' parameter, and observe how this parameter affects the model's performance. Graphing your model's performance is very beneficial during the analysis: visualization reveals behaviour that cannot be seen from the final numbers alone.
Learning Curves
The code cell below produces four graphs for a decision tree model trained at different maximum depths. Each curve shows how the training and testing scores of the learning curve change as the amount of training data increases. Note that the shaded region of a curve denotes its uncertainty (measured by the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination.
Run the code cell below and use the resulting graphs to answer the following question.
End of explanation
vs.ModelComplexity(X_train, y_train)
Explanation: Question 4 - Learning the Data
Choose one of the graphs above and state its maximum depth. How does the training-curve score change as more training points are added? What about the testing curve? Would more training data effectively improve the model?
Hint: Do the learning-curve scores eventually converge to particular values?
Answer: The second graph, with a maximum depth of 3. The training curve gradually decreases while the testing curve gradually increases, but both eventually level off, so additional training data would not effectively improve the model's performance.
Complexity Curves
The following code cell produces a graph of a decision tree model that has been trained and validated on the training data at various maximum depths. The graph contains two curves, one for training and one for testing. Similar to the learning curves, the shaded regions denote the uncertainty of the curves, and the model is scored on both the training and validation sets using the performance_metric function.
Run the code cell below and use the resulting graph to answer the two following questions.
End of explanation
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
def fit_model(X, y):
Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y].
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor(random_state=0)
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth': range(1, 11)}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search object
grid = GridSearchCV(regressor, params, scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
Explanation: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does its prediction suffer from high bias or from high variance? What about a maximum depth of 10? Which features of the graphs support your conclusions?
Hint: How do you tell whether a model suffers from high bias or from high variance?
Answer: At a maximum depth of 1 the model suffers from high bias: both the testing and training scores are low and the gap between them is small, which means the model cannot predict the data well.
At a maximum depth of 10 the model suffers from high variance: the gap between the testing and training scores is large, which indicates overfitting.
Question 6 - Best-Guess Optimal Model
Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition led you to this answer?
Answer: 3, because at this depth the gap between the testing and training scores is smallest and the testing score is at its highest.
Evaluating Model Performance
In this final section of the project, you will construct a model yourself and use the optimized fit_model function to predict a home's value based on the client's house features.
Question 7 - Grid Search
What is the grid search technique and how can it be applied to optimize a learning algorithm?
Answer:
It is an algorithm that lays the candidate parameter values out on a grid.
It automatically generates a "grid" of the different parameter values:
===================================
('param1', param3) | ('param1', param4)
('param2', param3) | ('param2', param4)
==================================
By trying every parameter combination in the "grid" and picking the best combination, it optimizes the learning algorithm.
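As a small, purely illustrative sketch (the parameter names above are placeholders, not values used in this project), the same "grid" can be enumerated with itertools:
from itertools import product
grid_demo = list(product(['param1', 'param2'], ['param3', 'param4']))
print(grid_demo)  # four tuples - exactly the four cells drawn above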
Question 8 - Cross-Validation
What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model? How does grid search combine with cross-validation to select the best parameter combination?
Hint: Much like the reasoning behind having a separate testing set, what could go wrong when running grid search without cross-validation? What can the 'cv_results' attribute of GridSearchCV tell us?
Answer:
K-fold cross-validation splits the training data evenly into K bins, uses one bin as validation data and the remaining bins as training data, repeats this K times and averages the results, which yields a more accurate estimate.
It lets grid search obtain more accurate results: without cross-validation, not all of the data can be used for training, since part of it has to be held out as a test set; the estimate of the generalization error then becomes less reliable, which hurts the effectiveness of the grid search.
Grid search has the fitting function try every parameter combination and returns a suitable estimator, automatically tuned to the best parameter combination.
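A minimal sketch of the K-fold splitting described above (the fold count of 5 is an arbitrary choice for illustration; the project itself uses ShuffleSplit, as discussed in the next exercise):
from sklearn.model_selection import KFold
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in kf.split(X_train):
    # every sample lands in the validation fold exactly once across the 5 iterations
    print(len(train_idx), len(val_idx))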
Exercise: Fitting a Model
In this final exercise, you will bring everything together and train a model using the decision tree algorithm. To make sure you end up with an optimized model, you will train it with the grid search technique in order to find the best 'max_depth' parameter. You can think of 'max_depth' as the number of questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms.
In addition, you will find that your implementation uses ShuffleSplit(), which is yet another form of cross-validation (see the 'cv_sets' variable). While it is not the K-Fold cross-validation described in Question 8, this type of validation is just as useful! Here, ShuffleSplit() creates 10 ('n_splits') shuffled sets, and for each shuffle 20% ('test_size') of the data is used as the validation set. While you are working on your implementation, think about how it contrasts to and compares with K-Fold cross-validation.
In the fit_model function below, you will need to:
- use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object;
- assign this regressor to the 'regressor' variable;
- create a dictionary for 'max_depth' with the values 1 to 10 and assign it to the 'params' variable;
- use make_scorer from sklearn.metrics to create a scoring function;
- pass performance_metric as a parameter to this function;
- assign the scoring function to the 'scoring_fnc' variable;
- use GridSearchCV from sklearn.model_selection to create a grid search object;
- pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to this object;
- assign the GridSearchCV object to the 'grid' variable.
If you are not familiar with how Python functions take multiple parameters, you may want to refer to this MIT course video.
End of explanation
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
Explanation: Making Predictions
Once a model has been trained on data, it can be used to make predictions on new data. For a decision-tree regressor, the model has learned what questions to ask about the new input in order to return a prediction of the target variable. You can use these predictions to gain information about data whose target variable is unknown, provided the data was not part of the training set.
Question 9 - Optimal Model
What maximum depth does the optimal model have? Does it match your guess from Question 6?
Run the code block below to fit the decision-tree regressor to the training data and obtain the optimal model.
End of explanation
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
data['MEDV'].describe()
Explanation: Answer: 4. This differs from my guess in Question 6, which was 3.
Question 10 - Predicting Selling Prices
Imagine that you are a real-estate agent in the Boston area looking to use this model to help price homes your clients wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15:1 | 22:1 | 12:1 |
What price would you recommend each client sell his or her home at? Do these prices seem reasonable given the values of the respective features?
Hint: Use the statistics you calculated in the Data Exploration section to help justify your response.
Run the code block below to have your optimized model make predictions for each client's home.
End of explanation
vs.PredictTrials(features, prices, fit_model, client_data)
Explanation: Answer:
Client 1: $403,025.00.
Client 2: $237,478.72.
Client 3: $931,636.36.
These prices are reasonable. Take Client 3 as an example: the home has the most rooms, the lowest neighborhood poverty level and the richest educational resources, so it is the most expensive. By the same reasoning, the predictions for Clients 1 and 2 are also reasonable. In addition, compared with the output of `data['MEDV'].describe()`, all three prices fall within a sensible range, so the prices are reasonable.
Sensitivity
An optimal model is not necessarily a robust model. Sometimes a model is too complex or too simple to generalize well to new data. Sometimes a model uses a learning algorithm that is not appropriate for the structure of the given data. Other times, the data itself may be too noisy or contain too few samples to allow the model to capture the target variable accurately; in these cases we say the model is underfitted. Run the code cell below to execute the fit_model function ten times with different training and testing sets. Note how the prediction for a specific client changes with respect to the data the model is trained on.
End of explanation
### Your code
# Import libraries necessary for this project
# 载入此项目所需要的库
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
# Pretty display for notebooks
# 让结果在notebook中显示
%matplotlib inline
# Load the Boston housing dataset
# 载入波士顿房屋的数据集
data = pd.read_csv('bj_housing.csv')
prices = data['Value']
features = data.drop('Value', axis = 1)
print features.head()
print prices.head()
# Success
# 完成
# print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
def performance_metric(y_true, y_predict):
Calculates and returns the performance score between
true and predicted values based on the metric chosen.
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=42)
# Success
print "Training and testing split was successful."
def fit_model(X, y):
Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y].
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor(random_state=0)
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth': range(1, 11)}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search object
grid = GridSearchCV(regressor, params, scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
client_data = [[128, 3, 2, 0, 2005, 13], [150, 3, 2, 0, 2005, 13]]
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ¥{:,.2f}".format(i+1, price)
Explanation: Question 11 - Applicability
Briefly discuss whether the model you built can be used in the real world.
Hint: Answer a few questions and give reasons for your conclusions:
- Is data collected in 1978 still relevant today?
- Are the features in the data sufficient to describe a house?
- Is the model robust enough to make consistent predictions?
- Can data collected in a big city like Boston be applied to rural towns?
Answer: No. First, this is only Boston housing data and is not representative; the data is also very old, and a home's price depends on other characteristics as well, such as the quality of its renovation.
Optional Question - Predicting Beijing Housing Prices
(The result of this question does not affect whether the project passes.) Through the practice above, you should now have a good grasp of several common machine-learning concepts. Modelling Boston housing data from the 1970s, however, is of limited practical interest, so you can now apply what you have learned to the Beijing housing dataset bj_housing.csv.
Disclaimer: since Beijing housing prices are directly affected by many factors such as the macro-economy and policy adjustments, the predictions are for reference only.
The features of this dataset are:
- Area: floor area of the home, in square metres
- Room: number of bedrooms
- Living: number of living rooms
- School: whether the home is in a school district, 0 or 1
- Year: year of construction
- Floor: floor the home is on
Target variable:
- Value: selling price of the home, in units of 10,000 RMB
Using what you learned above, you can practise on this dataset: shuffle and split the data, define a performance metric, train a model, evaluate its performance, tune the parameters with grid search combined with cross-validation and select the best parameters, compare the results, and finally report the best model's prediction score on the validation set.
End of explanation |
14,670 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
Step1: 1D viscoelastic SH modelling
After deriving the equations of motion for 1D wave propagation in viscoelastic SH media, we can now solve the problem using a staggered grid FD scheme. Furthermore, the anelastic coefficients of the Generalized Maxwell body (GMB) are optimized by a global Differential Evolution (DE) algorithm to achieve a constant Q-spectrum. Finally, elastic and viscoelastic modelling results for a homogeneous model are compared.
FD solution of 1D isotropic viscoelastic SH problem
As derived in the last lesson, we can describe the 1D viscoelastic SH problem by the partial differential equations
Step2: Optimizing Yl coefficients
To achieve a constant Q-spectrum, we have to optimize the anelastic coefficients $Y_l$. This can be achieved by minimizing the objective function
Step3: Next, we define the objective function to optimize the $Y_l$ values ...
Step4: A function to evaluate the Q-spectrum for given $Y_l$ values might be also useful ...
Step5: To optimize the $Y_l$ values, we define their bounds necessary for the DE-algorithm, minimize the objective function, store the resulting $Y_l$ values and plot the optimized Q-spectrum
Step6: As usual, we define the modelling parameters ...
Step7: Next, we first test the elastic part of the 1D SH code and compare it with the analytical solution and finally run the viscoelastic SH code.
Comparison of 2D finite difference with analytical solution for homogeneous Vs model
In a previous exercise you proved that the analytical solutions for the homogeneous 1D acoustic and 1D elastic SH problem, apart from a density factor $1/\rho_0$, are actually identical. In the function below we solve the homogeneous 1D SH problem by centered 2nd order spatial/temporal difference operators and compare the numerical results with the analytical solution
Step8: ... update the shear stress component $\sigma_{yx}$ for the elastic medium ...
Step9: ... and a function for the shear stress update $\sigma_{yx}$ for the viscoelastic case ...
Step10: ... and harmonically averaging the (unrelaxed) shear modulus ...
Step11: Finally, we can assemble the main FD code ...
Step12: ... run the elastic FD code and compare the result with the analytical solution
Step13: Finally, we run the viscoelastic modelling run and compare it with the elastic analytical solution | Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
# Import Libraries
# ----------------
import numpy as np
from numba import jit
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
from scipy.optimize import differential_evolution
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
Explanation: 1D viscoelastic SH modelling
After deriving the equations of motion for 1D wave propagation in viscoelastic SH media, we can now solve the problem using a staggered grid FD scheme. Furthermore, the anelastic coefficients of the Generalized Maxwell body (GMB) are optimized by a global Differential Evolution (DE) algorithm to achieve a constant Q-spectrum. Finally, elastic and viscoelastic modelling results for a homogeneous model are compared.
FD solution of 1D isotropic viscoelastic SH problem
As derived in the last lesson, the 1D viscoelastic SH problem is described by a first-order system of partial differential equations for the particle velocity $v_y$, the shear stress $\sigma_{yx}$ and the memory variables $\xi_l$ of the GMB rheology, which we discretize below.
The 2nd order space/time FD scheme on a staggered grid
<img src="images/SG_1D_SH-Cart.png" width="50%">
using explicit time-stepping with the Leapfrog method to solve the 1D viscoelastic SH problem can be written as
\begin{align}
\scriptsize
\frac{v_y(i,n+1/2) - v_y(i,n-1/2)}{dt} &\scriptsize= \frac{1}{\rho}\biggl{\biggl(\frac{\partial \sigma_{yx}}{\partial x}\biggr)^c(i,n) + f_y(i,n)\biggr}, \notag\
\scriptsize\frac{\sigma_{yx}(i+1/2,n+1) - \sigma_{yx}(i+1/2,n)}{dt} &\scriptsize= \mu_{x,u} \biggl(\frac{\partial v_{y}}{\partial x}\biggr)^c(i+1/2,n+1/2) - \sum_{l=1}^L Y_l\xi_l(i+1/2,n),\notag\
\scriptsize \frac{\xi_l(i+1/2,n+1/2) - \xi_l(i+1/2,n-1/2)}{dt} &\scriptsize= \omega_l \biggl(\frac{\partial v_{y}}{\partial x}\biggr)^c(i+1/2,n+1/2) - \omega_l \xi_l(i+1/2,n-1/2)\notag
\end{align}
with the unrelaxed shear modulus
\begin{equation}
\mu_u = \frac{\mu}{1-\sum_{l=1}^L Y_l}\notag
\end{equation}
and the harmonically averaged unrelaxed shear modulus
\begin{equation}
\mu_{x,u} = 2 \biggl(\frac{1}{\mu_u(i)}+\frac{1}{\mu_u(i+1)}\biggr)^{-1}\notag
\end{equation}
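As a small illustrative check of this scaling (the $Y_l$ values below are made-up placeholders; the density and S-wave speed match the model parameters defined later in this notebook):
import numpy as np
Yl_demo = np.array([0.05, 0.04, 0.03, 0.03])   # assumed anelastic coefficients, for illustration only
mu_relaxed = 1000. * 580.**2                   # rho0 * vs0**2
mu_unrelaxed = mu_relaxed / (1 - np.sum(Yl_demo))
print(mu_unrelaxed / mu_relaxed)               # > 1: the unrelaxed modulus is stiffer than the relaxed one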
Notice that we have to evaluate the memory variables $\xi_l$ in the stress update at a full time step, which has to be estimated from the $\xi_l$-values at half-time steps by arithmetic averaging:
\begin{equation}
\xi_l(i+1/2,n) = \frac{\xi_l(i+1/2,n+1/2)+\xi_l(i+1/2,n-1/2)}{2}\notag
\end{equation}
Initial and boundary conditions
Because we have analytical solutions for wave propagation in homogeneous elastic media, we should test our code implementation for a similar medium, by setting density $\rho$ and shear modulus $\mu$ to constant values $\rho_0,\; \mu_0$
\begin{align}
\rho(i) &= \rho_0 \notag \
\mu(i) &= \mu_0 = \rho_0 V_{s0}^2\notag
\end{align}
at each spatial grid point $i = 0, 1, 2, ..., nx$ and staggered grid points $i+1/2$ in order to compare the numerical with the analytical solution. For a complete description of the problem we also have to define initial and boundary conditions. The initial condition is
\begin{equation}
v_y(i,-1/2) = \sigma_{yx}(i+1/2,0) = \xi_{l}(i+1/2,-1/2) = 0, \nonumber
\end{equation}
so the modelling starts with zero particle velocity, shear stress and memory variable amplitudes at each spatial grid point. As boundary conditions, we assume
\begin{align}
v_y(0,n) &= \sigma_{yx}(1/2,n) = \xi_{l}(1/2,n) = 0, \nonumber\
v_y(nx,n) &= \sigma_{yx}(nx+1/2,n) = \xi_{l}(nx+1/2,n) = 0, \nonumber\
\end{align}
for all full and staggered time steps n, n+1/2. This Dirichlet boundary condition leads to artificial boundary reflections, which would obviously not describe a homogeneous medium. For now, we simply extend the model so that boundary reflections are not recorded at the receiver positions.
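A quick sanity check of that claim, sketched with the acquisition parameters defined further below in this notebook (this snippet is illustrative and not part of the original code):
vs0, tmax = 580., 0.502            # S-wave speed (m/s) and recording time (s)
xsrc, xr, xmax = 250., 330., 500.  # source position, receiver position, model size (m)
t_left = (xsrc + xr) / vs0                     # source -> boundary at x=0 -> receiver
t_right = ((xmax - xsrc) + (xmax - xr)) / vs0  # source -> boundary at x=xmax -> receiver
print(t_left, t_right)             # approx. 1.0 s and 0.72 s, both later than tmax = 0.502 s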
Let's implement it ...
End of explanation
# Define GMB models
# -----------------
Qopt = 20 # target Q-value
L = 4 # number of Maxwell bodies
nfreq = 50 # number of frequencies to estimate Q-model
fmin = 5. # minimum frequency
fmax = 100.0 # maximum frequency
f = np.linspace(fmin,fmax,num=nfreq) # calculate frequencies to estimate Q-model
w = 2 * np.pi * f # circular frequencies to estimate Q-model
fl = np.linspace(fmin,fmax,num=L) # calculate relaxation frequencies
wl = 2 * np.pi * fl # circular relaxation frequencies
Explanation: Optimizing Yl coefficients
To achieve a constant Q-spectrum, we have to optimize the anelastic coefficients $Y_l$. This can be achieved by minimizing the objective function:
\begin{equation}
\chi(Y_l) = \int_{\omega_{min}}^{\omega_{max}} \biggl(Q^{-1}(Y_l,\omega) - Q^{-1}_{opt}\biggr)^2 d\omega\notag
\end{equation}
where $Q^{-1}_{opt}$ denotes the target constant inverse Q-value within the frequency range of the source wavelet and
\begin{equation}
Q^{-1}(Y_l,\omega) = \sum_{l=1}^L Y_l \frac{\omega_l\omega}{\omega^2+\omega_l^2}\notag
\end{equation}
is an approximation of the GMB Q-spectrum for $Q>>1$, following Blanch et al. (1995), Bohlen (2002) and Yang et al. (2016). The objective function can be minimized using local or global optimization methods. In this case, we will use the global Differential Evolution (DE) optimization algorithm available from SciPy's optimization library.
In the following example, we want to solve the viscoelastic SH problem with the (2,2)-FD algorithm defined above for a homogeneous model using 4 Maxwell bodies, targeting a constant Q=20 value in a frequency band between 5 and 100 Hz:
End of explanation
# Objective function to optimize Yl values
# ----------------------------------------
def obj_Yl(Yl):
# Calculate Qs model based on GMB
# -------------------------------
Qinv_GMB = np.zeros(nfreq)
Qinv_const = (1/Qopt) * np.ones(nfreq)
for l in range (0,L):
Qinv_GMB += Yl[l] * (wl[l] * w) / (w**2+wl[l]**2)
# Calculate objective function
obj_Qinv = np.sum((Qinv_GMB - Qinv_const)**2)
return obj_Qinv
Explanation: Next, we define the objective function to optimize the $Y_l$ values ...
End of explanation
# Calculate Q-spectrum for given Yl parameters and circular relaxation frequencies
# --------------------------------------------------------------------------------
def calc_Q(Yl):
# Calculate Qs model based on GMB
# -------------------------------
Qinv_GMB = np.zeros(nfreq)
for l in range (0,L):
Qinv_GMB += Yl[l] * (wl[l] * w) / (w**2+wl[l]**2)
return f, Qinv_GMB
Explanation: A function to evaluate the Q-spectrum for given $Y_l$ values might also be useful ...
End of explanation
# Optimize dimensionless, anelastic coefficients Yl
# -------------------------------------------------
# Define bound constraints for DE algorithm
bounds = [(0.0, 1), (0.0, 1), (0.0, 1), (0.0, 1)]
# Optimize Q-model by Differential Evolution
DE_result = differential_evolution(obj_Yl, bounds)
print(' Final obj_Qinv = ', DE_result.fun)
# Calculate optimum Q model
f, Qinv_GMB = calc_Q(DE_result.x)
# Store and display optimized Yl, wl values
Yl = np.zeros(L)
Yl = DE_result.x
print('Yl = ', Yl)
print('wl = ', wl)
# Calculate Q(omega)
Q = 1 / Qinv_GMB
# Define figure size
rcParams['figure.figsize'] = 7, 5
# plot stress-strain relation
plt.plot(f, Q, 'r-',lw=3,label="Generalized Maxwell model")
plt.title(r'$Q(\omega)$ for Generalized Maxwell model')
plt.xlabel('Frequency f [Hz]')
plt.ylabel(r'$Q$ []')
plt.ylim(0,2*Qopt)
plt.grid()
plt.show()
Explanation: To optimize the $Y_l$ values, we define their bounds necessary for the DE-algorithm, minimize the objective function, store the resulting $Y_l$ values and plot the optimized Q-spectrum
End of explanation
# Definition of modelling parameters
# ----------------------------------
xmax = 500.0 # maximum spatial extension of the 1D model in x-direction (m)
tmax = 0.502 # maximum recording time of the seismogram (s)
vs0 = 580. # S-wave speed in medium (m/s)
rho0 = 1000. # Density in medium (kg/m^3)
# acquisition geometry
xr = 330.0 # x-receiver position (m)
xsrc = 250.0 # x-source position (m)
f0 = 40. # dominant frequency of the source (Hz)
t0 = 4. / f0 # source time shift (s)
Explanation: As usual, we define the modelling parameters ...
End of explanation
# Particle velocity vy update
# ---------------------------
@jit(nopython=True) # use JIT for C-performance
def update_vel(vy, syx, dx, dt, nx, rho):
for i in range(1, nx - 1):
# Calculate spatial derivatives
syx_x = (syx[i] - syx[i - 1]) / dx
# Update particle velocities
vy[i] = vy[i] + (dt/rho[i]) * syx_x
return vy
Explanation: Next, we first test the elastic part of the 1D SH code and compare it with the analytical solution and finally run the viscoelastic SH code.
Comparison of 1D finite difference with analytical solution for homogeneous Vs model
In a previous exercise you proved that the analytical solutions for the homogeneous 1D acoustic and 1D elastic SH problem, apart from a density factor $1/\rho_0$, are actually identical. In the function below we solve the homogeneous 1D SH problem by centered 2nd order spatial/temporal difference operators and compare the numerical results with the analytical solution:
\begin{equation}
u_{y,analy}(x,t) = G_{1D} * S \nonumber
\end{equation}
with the 1D Green's function:
\begin{equation}
G_{1D}(x,t) = \dfrac{1}{2 \rho_0 V_{s0}}H\biggl((t-t_s)-\dfrac{|r|}{V_{s0}}\biggr), \nonumber
\end{equation}
where $H$ denotes the Heaviside function, $r = \sqrt{(x-x_s)^2}$ the source-receiver distance (offset) and $S$ the source wavelet. Keep in mind that the stress-velocity code computes the particle velocities $\mathbf{v_{y,analy}}$, while the analytical solution is expressed in terms of the displacement $\mathbf{u_{y,analy}}$. Therefore, we have to take the first derivative of the analytical solution, before comparing the numerical with the analytical solution:
\begin{equation}
v_{y,analy}(x,t) = \frac{\partial u_{y,analy}}{\partial t} \nonumber
\end{equation}
To implement the 2D SH code, we first introduce functions to update the particle velocity $v_y$ ...
End of explanation
# Shear stress syx updates (elastic)
# ----------------------------------
@jit(nopython=True) # use JIT for C-performance
def update_stress(vy, syx, dx, dt, nx, mux):
for i in range(1, nx - 1):
# Calculate spatial derivatives
vy_x = (vy[i + 1] - vy[i]) / dx
# Update shear stress
syx[i] = syx[i] + dt * mux[i] * vy_x
return syx
Explanation: ... update the shear stress component $\sigma_{yx}$ for the elastic medium ...
End of explanation
# Shear stress syx updates (viscoelastic)
# ---------------------------------------
@jit(nopython=True) # use JIT for C-performance
def update_stress_visc(vy, syx, xi, Yl, wl, L, dx, dt, nx, mux):
for i in range(1, nx - 1):
# Calculate spatial derivatives
vy_x = (vy[i + 1] - vy[i]) / dx
# Calculate sum over memory variables
xi_sum = 0.0
for l in range(0, L):
xi_sum += Yl[l] * xi[i,l]
# Update shear stress
# Note that the factor 0.5 in front of the memory variables sum
# is due to the arithmetic averaging of the xi variables at
# the staggered time steps
syx[i] = syx[i] + dt * mux[i] * (vy_x - 0.5 * xi_sum)
# Update memory variables
xi_sum = 0.0
for l in range(0, L):
xi[i,l] += dt * wl[l] * (vy_x - xi[i,l])
# After xi update calculate new xi_sum ...
xi_sum += Yl[l] * xi[i,l]
# ... and finish the stress update with the second half of the arithmetic average
syx[i] = syx[i] - dt * mux[i] * 0.5 * xi_sum
return syx, xi
Explanation: ... and a function for the shear stress update $\sigma_{yx}$ for the viscoelastic case ...
End of explanation
# Harmonic averages of shear modulus
# ----------------------------------
@jit(nopython=True) # use JIT for C-performance
def shear_avg(mu, nx, mux):
for i in range(1, nx - 1):
# Calculate harmonic averages of shear moduli
mux[i] = 2 / (1 / mu[i + 1] + 1 / mu[i])
return mux
Explanation: ... and harmonically averaging the (unrelaxed) shear modulus ...
End of explanation
# 2D SH viscoelastic wave propagation (Finite Difference Solution)
# ----------------------------------------------------------------
def FD_1D_visc_SH_JIT(dt,dx,f0,xsrc,Yl,wl,L,mode):
nx = (int)(xmax/dx) # number of grid points in x-direction
print('nx = ',nx)
nt = (int)(tmax/dt) # maximum number of time steps
print('nt = ',nt)
ir = (int)(xr/dx) # receiver location in grid in x-direction
isrc = (int)(xsrc/dx) # source location in grid in x-direction
# Source time function (Gaussian)
# -------------------------------
src = np.zeros(nt + 1)
time = np.linspace(0 * dt, nt * dt, nt)
# 1st derivative of a Gaussian
src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))
# Analytical solution
# -------------------
G = time * 0.
vy_analy = time * 0.
# Initialize coordinates
# ----------------------
x = np.arange(nx)
x = x * dx # coordinates in x-direction (m)
# calculate source-receiver distance
r = np.sqrt((x[ir] - x[isrc])**2)
for it in range(nt): # Calculate Green's function (Heaviside function)
if (time[it] - r / vs0) >= 0:
G[it] = 1. / (2 * rho0 * vs0)
Gc = np.convolve(G, src * dt)
Gc = Gc[0:nt]
# compute vy_analy from uy_analy
for i in range(1, nt - 1):
vy_analy[i] = (Gc[i+1] - Gc[i-1]) / (2.0 * dt)
# Initialize empty wavefield arrays
# ---------------------------------
vy = np.zeros(nx) # particle velocity vy
syx = np.zeros(nx) # shear stress syx
# Initialize model (assume homogeneous model)
# -------------------------------------------
vs = np.zeros(nx)
vs = vs + vs0 # initialize wave velocity in model
rho = np.zeros(nx)
rho = rho + rho0 # initialize wave velocity in model
# calculate shear modulus
# -----------------------
mu = np.zeros(nx)
mu = rho * vs ** 2
# Estimate unrelaxed shear modulus in viscoelastic case
# -----------------------------------------------------
if(mode=='visc'):
mu1 = mu / (1 - np.sum(Yl))
# harmonic average of shear moduli
# --------------------------------
if(mode=='elast'):
mux = mu.copy() # initialize harmonic average mux
mux = shear_avg(mu, nx, mux)
if(mode=='visc'):
mux = mu1.copy() # initialize harmonic average mux
mux = shear_avg(mu1, nx, mux) # average the unrelaxed moduli, consistent with the definition above
# Initialize memory variables
# ---------------------------
if(mode=='visc'):
xi = np.zeros((nx,L))
# Initialize empty seismogram
# ---------------------------
seis = np.zeros(nt)
# Time looping
# ------------
for it in range(nt):
# Update particle velocity vy
# ---------------------------
vy = update_vel(vy, syx, dx, dt, nx, rho)
# Add Source Term at isrc
# ------------------------------
# Absolute particle velocity w.r.t analytical solution
vy[isrc] = vy[isrc] + (dt * src[it] / (rho[isrc] * dx))
# Update shear stress syx, syz
# ----------------------------
if(mode=='elast'):
syx = update_stress(vy, syx, dx, dt, nx, mux)
if(mode=='visc'):
syx, xi = update_stress_visc(vy, syx, xi, Yl, wl, L, dx, dt, nx, mux)
# Output of Seismogram
# -----------------
seis[it] = vy[ir]
# Compare FD Seismogram with analytical solution
# ----------------------------------------------
# Define figure size
rcParams['figure.figsize'] = 12, 5
if(mode=='elast'):
label = "Elastic FD solution"
if(mode=='visc'):
label = "Viscoelastic FD solution (Q = " + str(Qopt) + ")"
plt.plot(time, seis, 'b-',lw=3,label=label) # plot FD seismogram
Analy_seis = plt.plot(time,vy_analy,'r--',lw=3,label="Elastic analytical solution") # plot analytical solution
plt.xlim(time[0], time[-1])
plt.title('Seismogram')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.legend()
plt.grid()
plt.show()
Explanation: Finally, we can assemble the main FD code ...
End of explanation
%%time
# FD modelling of homogeneous elastic medium
# ------------------------------------------
dx = 1.0 # grid point distance in x-direction (m)
dt = 0.001 # time step (s)
FD_1D_visc_SH_JIT(dt,dx,f0,xsrc,Yl,wl,L,'elast')
Explanation: ... run the elastic FD code and compare the result with the analytical solution:
End of explanation
%%time
# FD modelling of homogeneous viscoelastic medium
# -----------------------------------------------
dx = 1.0 # grid point distance in x-direction (m)
dt = 0.001 # time step (s)
FD_1D_visc_SH_JIT(dt,dx,f0,xsrc,Yl,wl,L,'visc')
Explanation: Finally, we run the viscoelastic modelling run and compare it with the elastic analytical solution:
End of explanation |
14,671 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visual Comparison Between Different Classification Methods in Shogun
Notebook by Youssef Emad El-Din (Github ID
Step1: <a id = "section1">Data Generation and Visualization</a>
Transformation of features to Shogun format using <a href="http
Step5: Data visualization methods.
Step6: <a id="section2" href="http
Step7: SVM - Kernels
Shogun provides many options for using kernel functions. Kernels in Shogun are based on two classes which are <a href="http
Step8: <a id ="section2c" href="http
Step9: <a id ="section2d" href="http
Step10: <a id ="section3" href="http
Step11: <a id ="section4" href="http
Step12: <a id ="section5" href="http
Step13: <a id ="section6" href="http
Step14: <a id ="section7" href="http
Step15: <a id ="section7b">Probit Likelihood model</a>
Shogun's <a href="http
Step16: <a id="section8">Putting It All Together</a> | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import os
import shogun as sg
%matplotlib inline
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
#Needed lists for the final plot
classifiers_linear = []*10
classifiers_non_linear = []*10
classifiers_names = []*10
fadings = []*10
Explanation: Visual Comparison Between Different Classification Methods in Shogun
Notebook by Youssef Emad El-Din (Github ID: <a href="https://github.com/youssef-emad/">youssef-emad</a>)
This notebook demonstrates different classification methods in Shogun. The point is to compare and visualize the decision boundaries of different classifiers on two different datasets, where one is linearly separable and one is not.
<a href ="#section1">Data Generation and Visualization</a>
<a href ="#section2">Support Vector Machine</a>
<a href ="#section2a">Linear SVM</a>
<a href ="#section2b">Gaussian Kernel</a>
<a href ="#section2c">Sigmoid Kernel</a>
<a href ="#section2d">Polynomial Kernel</a>
<a href ="#section3">Naive Bayes</a>
<a href ="#section4">Nearest Neighbors</a>
<a href ="#section5">Linear Discriminant Analysis</a>
<a href ="#section6">Quadratic Discriminat Analysis</a>
<a href ="#section7">Gaussian Process</a>
<a href ="#section7a">Logit Likelihood model</a>
<a href ="#section7b">Probit Likelihood model</a>
<a href ="#section8">Putting It All Together</a>
End of explanation
shogun_feats_linear = sg.create_features(sg.read_csv(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_features_train.dat')))
shogun_labels_linear = sg.create_labels(sg.read_csv(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_labels_train.dat')))
shogun_feats_non_linear = sg.create_features(sg.read_csv(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_features_train.dat')))
shogun_labels_non_linear = sg.create_labels(sg.read_csv(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_labels_train.dat')))
feats_linear = shogun_feats_linear.get('feature_matrix')
labels_linear = shogun_labels_linear.get('labels')
feats_non_linear = shogun_feats_non_linear.get('feature_matrix')
labels_non_linear = shogun_labels_non_linear.get('labels')
Explanation: <a id = "section1">Data Generation and Visualization</a>
Transformation of features to Shogun format using <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1DenseFeatures.html">RealFeatures</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1BinaryLabels.html">BinaryLables</a> classes.
End of explanation
def plot_binary_data(plot,X_train, y_train):
This function plots 2D binary data with different colors for different labels.
plot.xlabel(r"$x$")
plot.ylabel(r"$y$")
plot.plot(X_train[0, np.argwhere(y_train == 1)], X_train[1, np.argwhere(y_train == 1)], 'ro')
plot.plot(X_train[0, np.argwhere(y_train == -1)], X_train[1, np.argwhere(y_train == -1)], 'bo')
def compute_plot_isolines(classifier,feats,size=200,fading=True):
This function computes the classification of points on the grid
to get the decision boundaries used in plotting
x1 = np.linspace(1.2*min(feats[0]), 1.2*max(feats[0]), size)
x2 = np.linspace(1.2*min(feats[1]), 1.2*max(feats[1]), size)
x, y = np.meshgrid(x1, x2)
plot_features = sg.create_features(np.array((np.ravel(x), np.ravel(y))))
if fading == True:
plot_labels = classifier.apply_binary(plot_features).get('current_values')
else:
plot_labels = classifier.apply(plot_features).get('labels')
z = plot_labels.reshape((size, size))
return x,y,z
def plot_model(plot,classifier,features,labels,fading=True):
This function plots an input classification model
x,y,z = compute_plot_isolines(classifier,features,fading=fading)
plot.pcolor(x,y,z,cmap='RdBu_r')
plot.contour(x, y, z, linewidths=1, colors='black')
plot_binary_data(plot,features, labels)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Linear Features")
plot_binary_data(plt,feats_linear, labels_linear)
plt.subplot(122)
plt.title("Non Linear Features")
plot_binary_data(plt,feats_non_linear, labels_non_linear)
Explanation: Data visualization methods.
End of explanation
plt.figure(figsize=(15,5))
c = 0.5
epsilon = 1e-3
svm_linear = sg.create_machine("LibLinear", C1=c, C2=c,
labels=shogun_labels_linear,
epsilon=epsilon,
liblinear_solver_type="L2R_L2LOSS_SVC")
svm_linear.train(shogun_feats_linear)
classifiers_linear.append(svm_linear)
classifiers_names.append("SVM Linear")
fadings.append(True)
plt.subplot(121)
plt.title("Linear SVM - Linear Features")
plot_model(plt,svm_linear,feats_linear,labels_linear)
svm_non_linear = sg.create_machine("LibLinear", C1=c, C2=c,
labels=shogun_labels_non_linear,
epsilon=epsilon,
liblinear_solver_type="L2R_L2LOSS_SVC")
svm_non_linear.train(shogun_feats_non_linear)
classifiers_non_linear.append(svm_non_linear)
plt.subplot(122)
plt.title("Linear SVM - Non Linear Features")
plot_model(plt,svm_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id="section2" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1SVM.html">Support Vector Machine</a>
<a id="section2a" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LibLinear.html">Linear SVM</a>
Shogun provides <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LibLinear.html">LibLinear</a>, a library for large-scale linear learning focused on SVMs, which is used here for classification.
End of explanation
gaussian_c = 0.7
gaussian_kernel_linear = sg.create_kernel("GaussianKernel", width=20)
gaussian_svm_linear = sg.create_machine('LibSVM', C1=gaussian_c, C2=gaussian_c,
kernel=gaussian_kernel_linear, labels=shogun_labels_linear)
gaussian_svm_linear.train(shogun_feats_linear)
classifiers_linear.append(gaussian_svm_linear)
fadings.append(True)
gaussian_kernel_non_linear = sg.create_kernel("GaussianKernel", width=10)
gaussian_svm_non_linear=sg.create_machine('LibSVM', C1=gaussian_c, C2=gaussian_c,
kernel=gaussian_kernel_non_linear, labels=shogun_labels_non_linear)
gaussian_svm_non_linear.train(shogun_feats_non_linear)
classifiers_non_linear.append(gaussian_svm_non_linear)
classifiers_names.append("SVM Gaussian Kernel")
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Gaussian Kernel - Linear Features")
plot_model(plt,gaussian_svm_linear,feats_linear,labels_linear)
plt.subplot(122)
plt.title("SVM Gaussian Kernel - Non Linear Features")
plot_model(plt,gaussian_svm_non_linear,feats_non_linear,labels_non_linear)
Explanation: SVM - Kernels
Shogun provides many options for using kernel functions. Kernels in Shogun are built on the two base classes <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Kernel.html">Kernel</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1KernelMachine.html">KernelMachine</a>.
<a id ="section2b" href = "http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GaussianKernel.html">Gaussian Kernel</a>
End of explanation
sigmoid_c = 0.9
sigmoid_kernel_linear = sg.create_kernel("SigmoidKernel", cache_size=200, gamma=1, coef0=0.5)
sigmoid_kernel_linear.init(shogun_feats_linear, shogun_feats_linear)
sigmoid_svm_linear = sg.create_machine('LibSVM', C1=sigmoid_c, C2=sigmoid_c,
kernel=sigmoid_kernel_linear, labels=shogun_labels_linear)
sigmoid_svm_linear.train()
classifiers_linear.append(sigmoid_svm_linear)
classifiers_names.append("SVM Sigmoid Kernel")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Sigmoid Kernel - Linear Features")
plot_model(plt,sigmoid_svm_linear,feats_linear,labels_linear)
sigmoid_kernel_non_linear = sg.create_kernel("SigmoidKernel", cache_size=400, gamma=2.5, coef0=2)
sigmoid_kernel_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear)
sigmoid_svm_non_linear = sg.create_machine('LibSVM', C1=sigmoid_c, C2=sigmoid_c,
kernel=sigmoid_kernel_non_linear, labels=shogun_labels_non_linear)
sigmoid_svm_non_linear.train()
classifiers_non_linear.append(sigmoid_svm_non_linear)
plt.subplot(122)
plt.title("SVM Sigmoid Kernel - Non Linear Features")
plot_model(plt,sigmoid_svm_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section2c" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSigmoidKernel.html">Sigmoid Kernel</a>
End of explanation
poly_c = 0.5
degree = 4
poly_kernel_linear = sg.create_kernel('PolyKernel', degree=degree, c=1.0)
poly_kernel_linear.init(shogun_feats_linear, shogun_feats_linear)
poly_svm_linear = sg.create_machine('LibSVM', C1=poly_c, C2=poly_c,
kernel=poly_kernel_linear, labels=shogun_labels_linear)
poly_svm_linear.train()
classifiers_linear.append(poly_svm_linear)
classifiers_names.append("SVM Polynomial kernel")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Polynomial Kernel - Linear Features")
plot_model(plt,poly_svm_linear,feats_linear,labels_linear)
poly_kernel_non_linear = sg.create_kernel('PolyKernel', degree=degree, c=1.0)
poly_kernel_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear)
poly_svm_non_linear = sg.create_machine('LibSVM', C1=poly_c, C2=poly_c,
kernel=poly_kernel_non_linear, labels=shogun_labels_non_linear)
poly_svm_non_linear.train()
classifiers_non_linear.append(poly_svm_non_linear)
plt.subplot(122)
plt.title("SVM Polynomial Kernel - Non Linear Features")
plot_model(plt,poly_svm_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section2d" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CPolyKernel.html">Polynomial Kernel</a>
End of explanation
multiclass_labels_linear = shogun_labels_linear.get('labels')
for i in range(0,len(multiclass_labels_linear)):
if multiclass_labels_linear[i] == -1:
multiclass_labels_linear[i] = 0
multiclass_labels_non_linear = shogun_labels_non_linear.get('labels')
for i in range(0,len(multiclass_labels_non_linear)):
if multiclass_labels_non_linear[i] == -1:
multiclass_labels_non_linear[i] = 0
shogun_multiclass_labels_linear = sg.MulticlassLabels(multiclass_labels_linear)
shogun_multiclass_labels_non_linear = sg.MulticlassLabels(multiclass_labels_non_linear)
naive_bayes_linear = sg.create_machine("GaussianNaiveBayes")
naive_bayes_linear.put('features', shogun_feats_linear)
naive_bayes_linear.put('labels', shogun_multiclass_labels_linear)
naive_bayes_linear.train()
classifiers_linear.append(naive_bayes_linear)
classifiers_names.append("Naive Bayes")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Naive Bayes - Linear Features")
plot_model(plt,naive_bayes_linear,feats_linear,labels_linear,fading=False)
naive_bayes_non_linear = sg.create_machine("GaussianNaiveBayes")
naive_bayes_non_linear.put('features', shogun_feats_non_linear)
naive_bayes_non_linear.put('labels', shogun_multiclass_labels_non_linear)
naive_bayes_non_linear.train()
classifiers_non_linear.append(naive_bayes_non_linear)
plt.subplot(122)
plt.title("Naive Bayes - Non Linear Features")
plot_model(plt,naive_bayes_non_linear,feats_non_linear,labels_non_linear,fading=False)
Explanation: <a id ="section3" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GaussianNaiveBayes.html">Naive Bayes</a>
End of explanation
number_of_neighbors = 10
distances_linear = sg.create_distance('EuclideanDistance')
distances_linear.init(shogun_feats_linear, shogun_feats_linear)
knn_linear = sg.create_machine("KNN", k=number_of_neighbors, distance=distances_linear,
labels=shogun_labels_linear)
knn_linear.train()
classifiers_linear.append(knn_linear)
classifiers_names.append("Nearest Neighbors")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Nearest Neighbors - Linear Features")
plot_model(plt,knn_linear,feats_linear,labels_linear,fading=False)
distances_non_linear = sg.create_distance('EuclideanDistance')
distances_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear)
knn_non_linear = sg.create_machine("KNN", k=number_of_neighbors, distance=distances_non_linear,
labels=shogun_labels_non_linear)
knn_non_linear.train()
classifiers_non_linear.append(knn_non_linear)
plt.subplot(122)
plt.title("Nearest Neighbors - Non Linear Features")
plot_model(plt,knn_non_linear,feats_non_linear,labels_non_linear,fading=False)
Explanation: <a id ="section4" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1KNN.html">Nearest Neighbors</a>
End of explanation
gamma = 0.1
lda_linear = sg.create_machine('LDA', gamma=gamma, labels=shogun_labels_linear)
lda_linear.train(shogun_feats_linear)
classifiers_linear.append(lda_linear)
classifiers_names.append("LDA")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("LDA - Linear Features")
plot_model(plt,lda_linear,feats_linear,labels_linear)
lda_non_linear = sg.create_machine('LDA', gamma=gamma, labels=shogun_labels_non_linear)
lda_non_linear.train(shogun_feats_non_linear)
classifiers_non_linear.append(lda_non_linear)
plt.subplot(122)
plt.title("LDA - Non Linear Features")
plot_model(plt,lda_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section5" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CLDA.html">Linear Discriminant Analysis</a>
End of explanation
qda_linear = sg.create_machine("QDA", labels=shogun_multiclass_labels_linear)
qda_linear.train(shogun_feats_linear)
classifiers_linear.append(qda_linear)
classifiers_names.append("QDA")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("QDA - Linear Features")
plot_model(plt,qda_linear,feats_linear,labels_linear,fading=False)
qda_non_linear = sg.create_machine("QDA", labels=shogun_multiclass_labels_non_linear)
qda_non_linear.train(shogun_feats_non_linear)
classifiers_non_linear.append(qda_non_linear)
plt.subplot(122)
plt.title("QDA - Non Linear Features")
plot_model(plt,qda_non_linear,feats_non_linear,labels_non_linear,fading=False)
Explanation: <a id ="section6" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1QDA.html">Quadratic Discriminant Analysis</a>
End of explanation
# create Gaussian kernel with width = 5.0
kernel = sg.create_kernel("GaussianKernel", width=5.0)
# create zero mean function
zero_mean = sg.create_gp_mean("ZeroMean")
# create logit likelihood model
likelihood = sg.create_gp_likelihood("LogitLikelihood")
# specify EP approximation inference method
inference_model_linear = sg.create_gp_inference("EPInferenceMethod",kernel=kernel,
features=shogun_feats_linear,
mean_function=zero_mean,
labels=shogun_labels_linear,
likelihood_model=likelihood)
# create and train GP classifier, which uses Laplace approximation
gaussian_logit_linear = sg.create_gaussian_process("GaussianProcessClassification", inference_method=inference_model_linear)
gaussian_logit_linear.train()
classifiers_linear.append(gaussian_logit_linear)
classifiers_names.append("Gaussian Process Logit")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Gaussian Process - Logit - Linear Features")
plot_model(plt,gaussian_logit_linear,feats_linear,labels_linear)
inference_model_non_linear = sg.create_gp_inference("EPInferenceMethod", kernel=kernel,
features=shogun_feats_non_linear,
mean_function=zero_mean,
labels=shogun_labels_non_linear,
likelihood_model=likelihood)
gaussian_logit_non_linear = sg.create_gaussian_process("GaussianProcessClassification",
inference_method=inference_model_non_linear)
gaussian_logit_non_linear.train()
classifiers_non_linear.append(gaussian_logit_non_linear)
plt.subplot(122)
plt.title("Gaussian Process - Logit - Non Linear Features")
plot_model(plt,gaussian_logit_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section7" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1GaussianProcessBinaryClassification.html">Gaussian Process</a>
<a id ="section7a">Logit Likelihood model</a>
Shogun's <a href= "http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1LogitLikelihood.html">LogitLikelihood</a> and <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1EPInferenceMethod.html">EPInferenceMethod</a> classes are used.
End of explanation
likelihood = sg.create_gp_likelihood("ProbitLikelihood")
inference_model_linear = sg.create_gp_inference("EPInferenceMethod", kernel=kernel,
features=shogun_feats_linear,
mean_function=zero_mean,
labels=shogun_labels_linear,
likelihood_model=likelihood)
gaussian_probit_linear = sg.create_gaussian_process("GaussianProcessClassification",
inference_method=inference_model_linear)
gaussian_probit_linear.train()
classifiers_linear.append(gaussian_probit_linear)
classifiers_names.append("Gaussian Process Probit")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Gaussian Process - Probit - Linear Features")
plot_model(plt,gaussian_probit_linear,feats_linear,labels_linear)
inference_model_non_linear = sg.create_gp_inference("EPInferenceMethod", kernel=kernel,
features=shogun_feats_non_linear,
mean_function=zero_mean,
labels=shogun_labels_non_linear,
likelihood_model=likelihood)
gaussian_probit_non_linear = sg.create_gaussian_process("GaussianProcessClassification",
inference_method=inference_model_non_linear)
gaussian_probit_non_linear.train()
classifiers_non_linear.append(gaussian_probit_non_linear)
plt.subplot(122)
plt.title("Gaussian Process - Probit - Non Linear Features")
plot_model(plt,gaussian_probit_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section7b">Probit Likelihood model</a>
Shogun's <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1ProbitLikelihood.html">ProbitLikelihood</a> class is used.
End of explanation
figure = plt.figure(figsize=(30,9))
plt.subplot(2,11,1)
plot_binary_data(plt,feats_linear, labels_linear)
for i in range(0,10):
plt.subplot(2,11,i+2)
plt.title(classifiers_names[i])
plot_model(plt,classifiers_linear[i],feats_linear,labels_linear,fading=fadings[i])
plt.subplot(2,11,12)
plot_binary_data(plt,feats_non_linear, labels_non_linear)
for i in range(0,10):
plt.subplot(2,11,13+i)
plot_model(plt,classifiers_non_linear[i],feats_non_linear,labels_non_linear,fading=fadings[i])
Explanation: <a id="section8">Putting It All Together</a>
End of explanation |
14,672 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Mini-Assignment 3
Step4: Stemming
As we could see from the results of the last assignment, our simple index doesn't handle punctuation and the difference between singular and plural versions of the same word very well. Fortunately, Python's NLTK package provides implementations of these algorithms we can use. If your Python package doesn't already come with it, you have to install NLTK by following these instructions.
Step5: Ranking
Another way to improve our search results is to rank them. A possible way to do this is to calculate a score for each document based on the matching terms from the query. One such scoring method is tf-idf, which comes with several variants, as explained in the lecture slides.
In order to quickly calculate the scores for a term/document combination, we'll need quick access to a couple of things
Step6: Let's test these functions with some examples
Step7: Using these three helper functions, we can now easily calculate the tf-idf weights of a term in a document by implementing the weighting formula from the slides, which you will do in the assignments below.
Assignments
Your name
Step8: Now we can make a smarter index based on these functions. For practical purposes, the code below generates the smarter index on a subset of the data, as generating an index with stemming on the entire set would take too much time. (You don't need to change or add anything here.)
Step9: Now implement the smarter_and_query function based on the two functions above. You can start from the code for and_query from the last assignment.
Step10: Task 2
Run the queries "red blood cell" and "pfemp1" with the new smarter_and_query function from task 1. Do they return our exemplary paper 24130474? For each of these examples, what do our new smarter functions specifically contribute to the result (as compared to our previous naive implementations for tokenization and preprocessing)?
Step11: Answer
Step12: Task 4
Create a function query_ntn(query_string), which accepts as input a single query string that could consist of one or more words, and returns or prints a list of (up to) 10 best matching documents, along with their score. Use tf-idf to calculate document scores based on the query, applying variant ntn, as above (see the formula for score_ntn on the lecture slides). Use an auxiliary function score_ntn to calculate the score. The results should be shown in descending order by score.
You can start by copying your or_query function from mini-assignment 2, then expand that to rank the results, making use of the tfidf(t,d) function you created above.
Step13: Task 5
Create a second version of the query function from Task 4, and call it query_nnc. This second version should use, as its name suggests, variant nnc instead of ntn, and therefore apply the cosine similarity measure. (See the formula for score_nnc on the lecture slides. You can drop the square root of |q| in the formula, as indicated on the slides.) You can use the length_tf function defined above. Use again an auxiliary function called score_nnc. You can start by copy-pasting the code from Task 4.
Use the provided display_summary function to make the output a bit more like the results page of a search engine, and demonstrate your query_nnc function with two or three interesting example queries. | Python Code:
import pickle, bz2, re
from collections import namedtuple, defaultdict, Counter
from IPython.display import display, HTML
from math import log10, sqrt
Summaries_file = 'data/malaria__Summaries.pkl.bz2'
Abstracts_file = 'data/malaria__Abstracts.pkl.bz2'
Summaries = pickle.load( bz2.BZ2File( Summaries_file, 'rb' ) )
Abstracts = pickle.load( bz2.BZ2File( Abstracts_file, 'rb' ) )
paper = namedtuple( 'paper', ['title', 'authors', 'year', 'doi'] )
for (id, paper_info) in Summaries.items():
Summaries[id] = paper( *paper_info )
def tokenize(text):
Function that tokenizes a string in a rather naive way. Can be extended later.
return text.split(' ')
def preprocess(tokens):
Perform linguistic preprocessing on a list of tokens. Can be extended later.
result = []
for token in tokens:
result.append(token.lower())
return result
def display_summary( id, show_abstract=False, show_id=True, extra_text='' ):
Function for printing a paper's summary through IPython's Rich Display System.
Trims long author lists, and adds a link to the paper's DOI (when available).
s = Summaries[id]
lines = []
title = s.title
if s.doi != '':
title = '<a href=http://dx.doi.org/%s>%s</a>' % (s.doi, title)
title = '<strong>' + title + '</strong>'
lines.append(title)
authors = ', '.join( s.authors[:20] ) + ('' if len(s.authors) <= 20 else ', ...')
lines.append(str(s.year) + '. ' + authors)
if (show_abstract):
lines.append('<small><strong>Abstract:</strong> <em>%s</em></small>' % Abstracts[id])
if (show_id):
lines.append('[ID: %d]' % id)
if (extra_text != ''):
lines.append(extra_text)
display( HTML('<br>'.join(lines)) )
inverted_index = defaultdict(set)
for (id, abstract) in Abstracts.items():
for term in preprocess(tokenize(abstract)):
inverted_index[term].add(id)
Explanation: Mini-Assignment 3: Improving the Search Index
In this mini-assignment, we will improve the search index and query functions from the previous mini-assignment.
Loading the Data and Defining Auxiliary Functions
This section is copied from the previous notebook.
End of explanation
from nltk.tokenize import word_tokenize
from nltk.stem.snowball import EnglishStemmer
import nltk
nltk.download('punkt')
stemmer = EnglishStemmer()
s = '''Good muffins cost $3.88\nin New York. Please buy me two of them.\n\nThanks.'''
print(tokenize(s))
print(word_tokenize(s))
print(stemmer.stem("processes"))
Explanation: Stemming
As we could see from the results of the last assignment, our simple index doesn't handle punctuation and the difference between singular and plural versions of the same word very well. Fortunately, Python's NLTK package provides implementations of these algorithms we can use. If your Python package doesn't already come with it, you have to install NLTK by following these instructions.
End of explanation
tf_matrix = defaultdict(Counter)
length_values = defaultdict(int)
for (doc_id, abstract) in Abstracts.items():
tokens = preprocess(tokenize(abstract))
tf_matrix[doc_id] = Counter(tokens)
l = 0
for t in tf_matrix[doc_id].keys():
l += tf_matrix[doc_id][t] ** 2
length_values[doc_id] = sqrt(l)
def tf(t,d):
return float(tf_matrix[d][t])
def df(t):
return float(len(inverted_index[t]))
def num_documents():
return float(len(Abstracts))
def length_tf(d):
return length_values[d]
Explanation: Ranking
Another way to improve our search results is to rank them. A possible way to do this is to calculate a score for each document based on the matching terms from the query. One such scoring method is tf-idf, which comes with several variants, as explained in the lecture slides.
In order to quickly calculate the scores for a term/document combination, we'll need quick access to a couple of things:
tf(t,d): How often does a term occur in a document
df(t): In how many documents does a term occur
N: The number of documents in our index
length_tf(d): The length of the document vector (with vectors of plain tf values)
End of explanation
print(tf('network', 24130474))
print(df('network'))
print(num_documents())
print(length_tf(24130474))
Explanation: Let's test these functions with some examples:
End of explanation
# Smarter linguistic processing
# Your code here:
#def smarter_tokenize(text):
# ...
#def smarter_preprocess(tokens):
# ...
# To test it:
print(smarter_preprocess(smarter_tokenize("Mary had a little group of processes worth $5.4")))
Explanation: Using these three helper functions, we can now easily calculate the tf-idf weights of a term in a document by implementing the weighting formula from the slides, which you will do in the assignments below.
Assignments
Your name: ...
Task 1
Implement in the code block below the smarter_tokenize function using NLTK's function for tokenization, and the smarter_preprocess function to perform stemming in addition to case normalization.
End of explanation
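One possible sketch for Task 1, relying on the word_tokenize function and the stemmer object imported above (an illustration, not the only valid answer):
def smarter_tokenize(text):
    return word_tokenize(text)
def smarter_preprocess(tokens):
    result = []
    for token in tokens:
        result.append(stemmer.stem(token.lower()))
    return result
print(smarter_preprocess(smarter_tokenize("Mary had a little group of processes worth $5.4")))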
# Below, we create our smarter index (based on a subset of the documents for demonstration purposes)
smarter_index = defaultdict(set)
# Here we define the subset (somewhat arbitrary):
subset_of_ids = set(key for key in Abstracts.keys() if 24100000 <= key < 24200000)
subset_of_abstracts = ((key, Abstracts[key]) for key in subset_of_ids)
# Uncomment this line to process the whole corpus (might take a long time):
#subset_of_abstracts = Abstracts.items()
# Building our smarter index:
for (id, abstract) in subset_of_abstracts:
for term in smarter_preprocess(smarter_tokenize(abstract)):
smarter_index[term].add(id)
Explanation: Now we can make a smarter index based on these functions. For practical purposes, the code below generates the smarter index on a subset of the data, as generating an index with stemming on the entire set would take too much time. (You don't need to change or add anything here.)
End of explanation
# Smarter and_query based on the smarter tokenize and preprocess functions
# Your code here:
#def smarter_and_query(query_string):
# ...
Explanation: Now implement the smarter_and_query function based on the two functions above. You can start from the code for and_query from the last assignment.
End of explanation
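A possible sketch, assuming the Task 1 functions above are in place; it simply intersects the posting sets of the processed query terms:
def smarter_and_query(query_string):
    terms = smarter_preprocess(smarter_tokenize(query_string))
    result = smarter_index[terms[0]].copy()
    for term in terms[1:]:
        result = result & smarter_index[term]
    return result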
# Add your code here
Explanation: Task 2
Run the queries "red blood cell" and "pfemp1" with the new smarter_and_query function from task 1. Do they return our exemplary paper 24130474? For each of these examples, what do our new smarter functions specifically contribute to the result (as compared to our previous naive implementations for tokenization and preprocessing)?
End of explanation
#def tfidf(t,d):
# ...
#print(tfidf('network', 24130474))
#print(tfidf('var', 24130474))
#print(tfidf('surface', 24130474))
Explanation: Answer: [Write your answer text here]
Task 3
Create a function tfidf(t,d) that returns the tf-idf score of term t in document d by using the tf(t,d), df(t) and num_documents() functions we defined above. The tf-idf formula can be found on the lecture slides. Use tf-idf with plain (non-logarithmic) term frequency, as applied by scoring variant ntn. Test your function with the examples shown below.
You can use our old index for this task and the tasks below: You do not need to include the results from above with the smarter tokenization and preprocessing functions.
You can use the log10(n) function to calculate the base 10 logarithm.
End of explanation
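A possible sketch of the ntn weight (plain term frequency times idf); it assumes the term occurs in at least one document, so df(t) is non-zero:
def tfidf(t, d):
    return tf(t, d) * log10(num_documents() / df(t))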
#score_ntn(query_words, doc_id)
# ...
#query_ntn(query_string)
# ...
Explanation: Task 4
Create a function query_ntn(query_string), which accepts as input a single query string that could consist of one or more words, and returns or prints a list of (up to) 10 best matching documents, along with their score. Use tf-idf to calculate document scores based on the query, applying variant ntn, as above (see the formula for score_ntn on the lecture slides). Use an auxiliary function score_ntn to calculate the score. The results should be shown in descending order by score.
You can start by copying your or_query function from mini-assignment 2, then expand that to rank the results, making use of the tfidf(t,d) function you created above.
End of explanation
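One way to approach this task (a sketch, not the only possible solution): collect every document that contains at least one query term, score each one with the summed tf-idf weights of the query terms, and show the top 10. Terms that do not occur in the collection are skipped so that df(t) is never zero.
def score_ntn(query_words, doc_id):
    return sum(tfidf(t, doc_id) for t in query_words if df(t) > 0)
def query_ntn(query_string):
    query_words = preprocess(tokenize(query_string))
    candidates = set()
    for t in query_words:
        candidates = candidates | inverted_index[t]
    ranked = sorted(candidates, key=lambda d: score_ntn(query_words, d), reverse=True)
    for doc_id in ranked[:10]:
        display_summary(doc_id, extra_text='score: %.2f' % score_ntn(query_words, doc_id))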
#score_nnc(query_words, doc_id)
# ...
#query_nnc(query_string)
# ...
Explanation: Task 5
Create a second version of the query function from Task 4, and call it query_nnc. This second version should use, as its name suggests, variant nnc instead of ntn, and therefore apply the cosine similarity measure. (See the formula for score_nnc on the lecture slides. You can drop the square root of |q| in the formula, as indicated on the slides.) You can use the length_tf function defined above. Use again an auxiliary function called score_nnc. You can start by copy-pasting the code from Task 4.
Use the provided display_summary function to make the output a bit more like the results page of a search engine, and demonstrate your query_nnc function with two or three interesting example queries.
End of explanation |
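A possible sketch for the nnc variant: a dot product of plain tf vectors, normalized by the document vector length from length_tf (with |q| dropped, as allowed):
def score_nnc(query_words, doc_id):
    length = length_tf(doc_id)
    if length == 0:
        return 0.0
    dot = sum(query_words.count(t) * tf(t, doc_id) for t in set(query_words))
    return dot / length
def query_nnc(query_string):
    query_words = preprocess(tokenize(query_string))
    candidates = set()
    for t in query_words:
        candidates = candidates | inverted_index[t]
    ranked = sorted(candidates, key=lambda d: score_nnc(query_words, d), reverse=True)
    for doc_id in ranked[:10]:
        display_summary(doc_id, extra_text='score: %.4f' % score_nnc(query_words, doc_id))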
14,673 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: The Data
Let's work with the cancer data set again since it had so many features.
Step2: PCA Visualization
As we've noticed before it is difficult to visualize high dimensional data, we can use PCA to find the first two principal components, and visualize the data in this new, two-dimensional space, with a single scatter-plot. Before we do this though, we'll need to scale our data so that each feature has a single unit variance.
Step3: PCA with Scikit Learn uses a very similar process to other preprocessing functions that come with SciKit Learn. We instantiate a PCA object, find the principal components using the fit method, then apply the rotation and dimensionality reduction by calling transform().
We can also specify how many components we want to keep when creating the PCA object.
Step4: Now we can transform this data to its first 2 principal components.
Step5: Great! We've reduced 30 dimensions to just 2! Let's plot these two dimensions out!
Step6: Clearly by using these two components we can easily separate these two classes.
Interpreting the components
Unfortunately, this great power of dimensionality reduction comes at a cost: it is harder to understand what these components represent.
The components correspond to combinations of the original features; the components themselves are stored as an attribute of the fitted PCA object
Step7: In this numpy matrix array, each row represents a principal component, and each column relates back to the original features. we can visualize this relationship with a heatmap | Python Code:
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
%matplotlib inline
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Principal Component Analysis
Let's discuss PCA! Since this isn't exactly a full machine learning algorithm, but instead an unsupervised learning algorithm, we will just have a lecture on this topic, but no full machine learning project (although we will walk through the cancer set with PCA).
PCA Review
Make sure to watch the video lecture and theory presentation for a full overview of PCA!
Remember that PCA is just a transformation of your data and attempts to find out what features explain the most variance in your data. For example:
<img src='PCA.png' />
Libraries
End of explanation
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
cancer.keys()
print(cancer['DESCR'])
df = pd.DataFrame(cancer['data'],columns=cancer['feature_names'])
#(['DESCR', 'data', 'feature_names', 'target_names', 'target'])
df.head()
Explanation: The Data
Let's work with the cancer data set again since it had so many features.
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(df)
scaled_data = scaler.transform(df)
Explanation: PCA Visualization
As we've noticed before it is difficult to visualize high dimensional data, we can use PCA to find the first two principal components, and visualize the data in this new, two-dimensional space, with a single scatter-plot. Before we do this though, we'll need to scale our data so that each feature has a single unit variance.
End of explanation
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(scaled_data)
Explanation: PCA with Scikit Learn uses a very similar process to other preprocessing functions that come with SciKit Learn. We instantiate a PCA object, find the principal components using the fit method, then apply the rotation and dimensionality reduction by calling transform().
We can also specify how many components we want to keep when creating the PCA object.
End of explanation
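Before transforming the data, it can also be worth checking how much of the total variance the two retained components capture; explained_variance_ratio_ is a standard attribute of a fitted sklearn PCA object:
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())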
x_pca = pca.transform(scaled_data)
scaled_data.shape
x_pca.shape
Explanation: Now we can transform this data to its first 2 principal components.
End of explanation
plt.figure(figsize=(8,6))
plt.scatter(x_pca[:,0],x_pca[:,1],c=cancer['target'],cmap='plasma')
plt.xlabel('First principal component')
plt.ylabel('Second Principal Component')
Explanation: Great! We've reduced 30 dimensions to just 2! Let's plot these two dimensions out!
End of explanation
pca.components_
Explanation: Clearly by using these two components we can easily separate these two classes.
Interpreting the components
Unfortunately, this great power of dimensionality reduction comes at a cost: it is harder to understand what these components represent.
The components correspond to combinations of the original features; the components themselves are stored as an attribute of the fitted PCA object:
End of explanation
df_comp = pd.DataFrame(pca.components_,columns=cancer['feature_names'])
plt.figure(figsize=(12,6))
sns.heatmap(df_comp,cmap='plasma',)
Explanation: In this numpy matrix array, each row represents a principal component, and each column relates back to the original features. we can visualize this relationship with a heatmap:
End of explanation |
14,674 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We followed the PSO algorithm described in the following paper.
http
Step1: We run the algorithm with 10 cities. The parameters are, from left to right: (number of cities, number of particles, influence rate of the previous velocity, influence rate of the local best, influence rate of the global best).
Step2: The city coordinates and a plot of them.
Step3: Initialize the particles.
Step4: 100回シミュレーションした。 | Python Code:
%matplotlib inline
import numpy as np
import pylab as pl
import math
from sympy import *
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from mpl_toolkits.mplot3d import Axes3D
def TSP_map(N): # place N points at random positions on a 100x100 square grid
TSP_map = []
X = [i for i in range(100)]
Y = [i for i in range(100)]
x = np.array([])
y = np.array([])
for i in range(N):
x = np.append(x, np.random.choice(X))
y = np.append(y, np.random.choice(Y))
for i in range(N):
TSP_map.append([x[i], y[i]])
return TSP_map
class PSO:
def __init__(self, N, pN, omega, alpha, beta):
self.N = N
self.pN = pN
self.omega = omega
self.alpha = alpha
self.beta = beta
self.city = TSP_map(N)
def initialize(self):
ptcl = np.array([])
for i in range(self.pN):
a = np.random.choice([i for i in range(self.N - 1)])
b = np.random.choice([i for i in range(a, self.N)])
            V = [[a, b]]  # initial velocity: a single swap operator (a pair of positions to exchange)
ptcl = np.append(ptcl, particle(i, V, self.N, self.omega, self.alpha, self.beta))
self.ptcl = ptcl
return self.ptcl
def one_simulate(self):
for i in range(self.pN):
self.ptcl[i].SS_id()
self.ptcl[i].SS_gd(self.p_gd_X)
self.ptcl[i].new_V()
self.ptcl[i].new_X()
self.ptcl[i].P_id(self.city)
def simulate(self, sim_num):
for i in range(self.pN):
self.ptcl[i].initial(self.city)
self.p_gd_X = self.P_gd()
for i in range(sim_num):
self.one_simulate()
self.p_gd_X = self.P_gd()
self.p_gd_X = self.P_gd()
return self.p_gd_X
    def P_gd(self):
        # global best: the particle whose personal-best tour is the shortest found so far
        P_gd = self.ptcl[0].p_id
        self.no = 0
        for i in range(self.pN):
            if self.ptcl[i].p_id < P_gd:
                P_gd = self.ptcl[i].p_id
                self.no = i
        return self.ptcl[self.no].p_id_X
class particle:
    def __init__(self, No, V, num_city, omega, alpha, beta): # No is the particle's index (number)
self.No = No
self.V = V
self.num_city = num_city
self.omega = omega
self.alpha = alpha
self.beta = beta
self.X = self.init_X()
def initial(self, city):
self.ss_id = []
self.ss_gd = []
self.P_id(city)
def init_X(self):
c = np.array([i for i in range(self.num_city)])
np.random.shuffle(c)
return c
def SO(self, V, P):
SO = []
for i in range(len(V)):
if V[i] != P[i]:
t = np.where(V == P[i])
t = int(t[0])
a = V[i]
b = V[t]
V[i] = b
V[t] = a
SO.append([i, t])
return SO
def SS_id(self):
self.ss_id = self.SO(self.X, self.p_id_X)
def SS_gd(self, p_gd_X):
self.ss_gd = self.SO(self.X, p_gd_X)
    def select(self, V, v, p):
        # keep each swap operator of v with probability p and append it to V
        for i in range(len(v)):
            x = np.random.choice([1, 0], p=[p, 1-p])
            if x == 1:
                V.append(v[i])
        return V
    def new_V(self):
        # new velocity = kept part of the old velocity + kept swaps towards the local and global bests
        V = []
        V = self.select(V, self.V, self.omega)
        V = self.select(V, self.ss_id, self.alpha)
        V = self.select(V, self.ss_gd, self.beta)
        self.V = V
        return self.V
def new_X(self):
for i in range(len(self.V)):
j = self.V[i][0]
k = self.V[i][1]
a = self.X[j]
b = self.X[k]
self.X[j] = b
self.X[k] = a
return self.X
    def P_id(self, city): # sum the distances between consecutive cities to get the tour length P_id
P_id = 0
for i in range(self.num_city):
if i != self.num_city-1:
x1 = city[self.X[i]][0]
y1 = city[self.X[i]][1]
x2 = city[self.X[i+1]][0]
y2 = city[self.X[i+1]][1]
else:
x1 = city[self.X[i]][0]
y1 = city[self.X[i]][1]
x2 = city[self.X[0]][0]
y2 = city[self.X[0]][1]
a = np.array([x1, y1])
b = np.array([x2, y2])
u = b - a
p = np.linalg.norm(u)
P_id += p
        if not hasattr(self, 'p_id') or P_id < self.p_id:
            self.p_id = P_id
            self.p_id_X = self.X.copy()
return self.p_id
Explanation: We followed the PSO algorithm described in the following paper.
http://ci.nii.ac.jp/els/110006977755.pdf?id=ART0008887051&type=pdf&lang=en&host=cinii&order_no=&ppv_type=0&lang_sw=&no=1452683083&cp=
End of explanation
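To make the swap-sequence idea used in SO and new_X above more concrete, here is a small stand-alone illustration with hypothetical values (not part of the original notebook): each pair in a swap sequence exchanges two positions of the tour permutation.
tour = np.array([3, 1, 0, 2])
for i, j in [[0, 2], [1, 3]]:
    tour[i], tour[j] = tour[j], tour[i]
print(tour)  # -> [0 2 3 1]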
pso = PSO(10, 10, 0.3, 0.1, 0.3)
Explanation: We run the algorithm with 10 cities. The parameters are, from left to right: (number of cities, number of particles, influence rate of the previous velocity, influence rate of the local best, influence rate of the global best).
End of explanation
pso.city
x = []
y = []
for i in range(len(pso.city)):
x.append(pso.city[i][0])
y.append(pso.city[i][1])
plt.scatter(x, y)
Explanation: The city coordinates and a plot of them.
End of explanation
pso.initialize()
Explanation: Initialize the particles.
End of explanation
pso.simulate(100)
x = []
y = []
for i in pso.p_gd_X:
x.append(pso.city[i][0])
y.append(pso.city[i][1])
plt.plot(x, y)
pso.no
Explanation: We ran the simulation 100 times.
End of explanation |
14,675 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started with concise
Become familiar with Keras
In order to successfully use Concise, please make sure you are familiar with Keras. I strongly advise everyone to read the excellent Keras documentation first. As a Keras extension, Concise closely follows the Keras API.
Modules overview
Pre-processing functions
Encoding different objects into modeling-ready numpy arrays
concise.preprocessing
Custom Keras components
concise.layers
concise.initializers
concise.regularizers
concise.losses
concise.metrics
Hyper-parameter tuning
concise.hyopt
concise.eval_metrics
SNP-effect prediction
concise.effects
Other utilities
concise.utils
Example
Step1: Concise is fully compatible with Keras; we can save and load the Keras models (note | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import concise.layers as cl
import keras.layers as kl
import concise.initializers as ci
import concise.regularizers as cr
from keras.callbacks import EarlyStopping
from concise.preprocessing import encodeDNA
from keras.models import Model, load_model
# get the data
def load(split="train", st=None):
dt = pd.read_csv("../data/RBP/PUM2_{0}.csv".format(split))
# DNA/RNA sequence
xseq = encodeDNA(dt.seq) # list of sequences -> np.ndarray
# response variable
y = dt.binding_site.as_matrix().reshape((-1, 1)).astype("float")
return {"seq": xseq}, y
train, valid, test = load("train"), load("valid"), load("test")
# extract sequence length
seq_length = train[0]["seq"].shape[1]
# get the PWM list for initialization
from concise.data import attract
dfa = attract.get_metadata() # table with PWM meta-info
dfa_pum2 = dfa[dfa.Gene_name.str.match("PUM2") & \
dfa.Organism.str.match("Homo_sapiens") & \
(dfa.Experiment_description == "genome-wide in vivo immunoprecipitation")]
pwm_list = attract.get_pwm_list(dfa_pum2.PWM_id.unique()) # retrieve the PWM by id
print(pwm_list)
# specify the model
in_dna = cl.InputDNA(seq_length=seq_length, name="seq") # Convenience wrapper around keras.layers.Input()
x = cl.ConvDNA(filters=4, # Convenience wrapper around keras.layers.Conv1D()
kernel_size=8,
kernel_initializer=ci.PSSMKernelInitializer(pwm_list), # intialize the filters on the PWM values
activation="relu",
name="conv1")(in_dna)
x = kl.AveragePooling1D(pool_size=4)(x)
x = kl.Flatten()(x)
x = kl.Dense(units=1)(x)
m = Model(in_dna, x)
m.compile("adam", loss="binary_crossentropy", metrics=["acc"])
# train the model
m.fit(train[0], train[1], epochs=5);
Explanation: Getting started with concise
Become familiar with Keras
In order to successfully use Concise, please make sure you are familiar with Keras. I strongly advise everyone to read the excellent Keras documentation first. As a Keras extension, Concise closely follows the Keras API.
Modules overview
Pre-processing functions
Encoding different objects into modeling-ready numpy arrays
concise.preprocessing
Custom Keras components
concise.layers
concise.initializers
concise.regularizers
concise.losses
concise.metrics
Hyper-parameter tuning
concise.hyopt
concise.eval_metrics
SNP-effect prediction
concise.effects
Other utilities
concise.utils
Example: RBP binding model in concise
Here we will show a simple use-case with Concise. We will predict the eCLIP binding peaks of the RNA-binding protein (RBP) PUM2.
eCLIP raw data ENCODE
eCLIP paper Van Nostrand et al, Nature Meth. 2016
Paper also using this data: Avsec et al., bioRxiv 2017
End of explanation
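Since the loader above also produced a held-out test split, a quick sanity check with the standard Keras evaluation call could look like this (a small addition, not part of the original walkthrough):
test_loss, test_acc = m.evaluate(test[0], test[1])
print("test accuracy:", test_acc)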
# save the model
m.save("/tmp/model.h5")
# load the model
m2 = load_model("/tmp/model.h5")
# Convenience layers extend the base class (here keras.layers.Conv1D) with .plot_weights for filter visualization
m.get_layer("conv1").plot_weights(plot_type="motif_pwm_info", figsize=(4, 6));
Explanation: Concise is fully compatible with Keras; we can save and load the Keras models (note: concise package needs to be imported before loading: import concise...).
End of explanation |
14,676 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab Exercise
Step1: Complete the TODOs before executing the next cell to train the model in your local environment
Modify the model.py file containing the convolutional neural network layer definitions in the cnn_model method per the instructions in the TODOs. Make sure to use the hyperparameter values specified by the nfil2 and ksize2 variables. Open <a href="fashionmodel/trainer">fashionmodel/trainer</a> to find the model.py file.
Run as a Python module
Step2: Make sure that local training completed successfully before training using Cloud ML Engine
Note that the GPU speed-up depends on the model type. You'll notice that more complex models train substantially faster on GPUs. When you are working with simple models that take just seconds to minutes to train on a single node, keep in mind that Cloud ML Engine introduces a few minutes of overhead for training job setup & teardown.
Step3: Monitoring training with TensorBoard
Use this cell to launch tensorboard
Step4: Deploying and predicting with model
Deploy the model
Step5: The previous step of deploying the model can take a few minutes. If it is successful, you should see an output similar to this one
Step6: To predict with the model, save one of the test images as a JavaScript Object Notation (JSON) file. Also, take a look at it as a graphic and notice the expected class value in the title.
Step7: Here's how the same image looks when it is saved in the test.json file for use with the prediction API.
Step8: Send the file to the prediction service and check whether the model you trained returns the correct prediction. | Python Code:
import os
PROJECT = 'my-project-id' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'my-bucket-name' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE='cnn' # 'dnn' or 'cnn'
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['MODEL_TYPE'] = MODEL_TYPE
os.environ['TFVERSION'] = '1.8' # Tensorflow version
%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
Explanation: Lab Exercise: Implement Convolutional and Maxpooling Layers for Fashion MNIST Image Classification
In this notebook you will modify an implementation of a convolutional neural network to prepare it for training, validate the implementation by training it in your local environment, and then train and deploy the model using Cloud ML Engine.
End of explanation
%bash
rm -rf fashionmodel.tar.gz fashion_trained
gcloud ml-engine local train \
--module-name=trainer.task \
--package-path=${PWD}/fashionmodel/trainer \
-- \
--output_dir=${PWD}/fashion_trained \
--train_steps=1 \
--learning_rate=0.01 \
--model=$MODEL_TYPE
Explanation: Complete the TODOs before executing the next cell to train the model in your local environment
Modify the model.py file containing the convolutional neural network layer definitions in the cnn_model method per the instructions in the TODOs. Make sure to use the hyperparameter values specified by the nfil2 and ksize2 variables. Open <a href="fashionmodel/trainer">fashionmodel/trainer</a> to find the model.py file.
Run as a Python module
End of explanation
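For reference, a conv/maxpool pair written with the TF 1.x layers API typically looks like the sketch below; the filter count and kernel size correspond to the nfil2 and ksize2 hyperparameters mentioned above, the default values shown here are hypothetical, and the actual TODO lines in model.py may differ.
import tensorflow as tf
def conv_maxpool_pair(inputs, nfil2=10, ksize2=5):
    # one convolutional layer followed by 2x2 max pooling (TF 1.x layers API)
    c2 = tf.layers.conv2d(inputs, filters=nfil2, kernel_size=ksize2,
                          strides=1, padding='same', activation=tf.nn.relu)
    p2 = tf.layers.max_pooling2d(c2, pool_size=2, strides=2)
    return p2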
%bash
OUTDIR=gs://${BUCKET}/fashion/trained_${MODEL_TYPE}
JOBNAME=fashion_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/fashionmodel/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--train_steps=10000 --learning_rate=0.01 --train_batch_size=512 \
--model=$MODEL_TYPE
Explanation: Make sure that local training completed successfully before training using Cloud ML Engine
Note that the GPU speed-up depends on the model type. You'll notice that more complex models train substantially faster on GPUs. When you are working with simple models that take just seconds to minutes to train on a single node, keep in mind that Cloud ML Engine introduces a few minutes of overhead for training job setup & teardown.
End of explanation
from google.datalab.ml import TensorBoard
TensorBoard().start('gs://{}/fashion/trained_{}'.format(BUCKET, MODEL_TYPE))
for pid in TensorBoard.list()['pid']:
TensorBoard().stop(pid)
print 'Stopped TensorBoard with pid {}'.format(pid)
Explanation: Monitoring training with TensorBoard
Use this cell to launch tensorboard
End of explanation
%bash
MODEL_NAME="fashion"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/fashion/trained_${MODEL_TYPE}/export/exporter | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
Explanation: Deploying and predicting with model
Deploy the model:
End of explanation
import tensorflow as tf
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()
LABELS = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
Explanation: The previous step of deploying the model can take a few minutes. If it is successful, you should see an output similar to this one:
<pre>
Created ml engine model [projects/qwiklabs-gcp-27eb45524d98e9a5/models/fashion].
Creating version (this might take a few minutes)......
...................................................................................................................done.
</pre>
Next, download a local copy of the Fashion MNIST dataset to use with Cloud ML Engine for predictions.
End of explanation
HEIGHT=28
WIDTH=28
IMGNO=12 #CHANGE THIS to get different images
#Convert raw image data to a test.json file and persist it to disk
import json, codecs
jsondata = {'image': test_images[IMGNO].reshape(HEIGHT, WIDTH).tolist()}
json.dump(jsondata, codecs.open('test.json', 'w', encoding='utf-8'))
#Take a look at a sample image and the correct label from the test dataset
import matplotlib.pyplot as plt
plt.imshow(test_images[IMGNO].reshape(HEIGHT, WIDTH))
title = plt.title('{} / Class #{}'.format(LABELS[test_labels[IMGNO]], test_labels[IMGNO]))
Explanation: To predict with the model, save one of the test images as a JavaScript Object Notation (JSON) file. Also, take a look at it as a graphic and notice the expected class value in the title.
End of explanation
%bash
cat test.json
Explanation: Here's how the same image looks when it is saved in the test.json file for use with the prediction API.
End of explanation
%bash
gcloud ml-engine predict \
--model=fashion \
--version=${MODEL_TYPE} \
--json-instances=./test.json
Explanation: Send the file to the prediction service and check whether the model you trained returns the correct prediction.
End of explanation |
14,677 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this exercise, you'll apply what you learned in the Scaling and normalization tutorial.
Setup
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
Step1: Get our environment set up
To practice scaling and normalization, we're going to use a dataset of Kickstarter campaigns. (Kickstarter is a website where people can ask people to invest in various projects and concept products.)
The next code cell loads in the libraries and dataset we'll be using.
Step2: Let's start by scaling the goals of each campaign, which is how much money they were asking for. After scaling, all values lie between 0 and 1.
Step3: 1) Practice scaling
We just scaled the "usd_goal_real" column. What about the "goal" column?
Begin by running the code cell below to create a DataFrame original_goal_data containing the "goal" column.
Step4: Use original_goal_data to create a new DataFrame scaled_goal_data with values scaled between 0 and 1. You must use the minmax_scaling() function.
Step5: 2) Practice normalization
Now you'll practice normalization. We begin by normalizing the amount of money pledged to each campaign.
Step6: The values have changed significantly with normalization!
In the next code cell, you'll take a look at the distribution of the normalized data, where it should now resemble a normal distribution.
Step7: We used the "usd_pledged_real" column. Follow the same process to normalize the "pledged" column.
Step8: How does the normalized "usd_pledged_real" column look different from when we normalized the "pledged" column? Or, do they look mostly the same?
Once you have an answer, run the code cell below. | Python Code:
from learntools.core import binder
binder.bind(globals())
from learntools.data_cleaning.ex2 import *
print("Setup Complete")
Explanation: In this exercise, you'll apply what you learned in the Scaling and normalization tutorial.
Setup
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
End of explanation
# modules we'll use
import pandas as pd
import numpy as np
# for Box-Cox Transformation
from scipy import stats
# for min_max scaling
from mlxtend.preprocessing import minmax_scaling
# plotting modules
import seaborn as sns
import matplotlib.pyplot as plt
# read in all our data
kickstarters_2017 = pd.read_csv("../input/kickstarter-projects/ks-projects-201801.csv")
# set seed for reproducibility
np.random.seed(0)
Explanation: Get our environment set up
To practice scaling and normalization, we're going to use a dataset of Kickstarter campaigns. (Kickstarter is a website where people can ask people to invest in various projects and concept products.)
The next code cell loads in the libraries and dataset we'll be using.
End of explanation
# select the usd_goal_real column
original_data = pd.DataFrame(kickstarters_2017.usd_goal_real)
# scale the goals from 0 to 1
scaled_data = minmax_scaling(original_data, columns=['usd_goal_real'])
print('Original data\nPreview:\n', original_data.head())
print('Minimum value:', float(original_data.min()),
'\nMaximum value:', float(original_data.max()))
print('_'*30)
print('\nScaled data\nPreview:\n', scaled_data.head())
print('Minimum value:', float(scaled_data.min()),
'\nMaximum value:', float(scaled_data.max()))
Explanation: Let's start by scaling the goals of each campaign, which is how much money they were asking for. After scaling, all values lie between 0 and 1.
End of explanation
# select the usd_goal_real column
original_goal_data = pd.DataFrame(kickstarters_2017.goal)
Explanation: 1) Practice scaling
We just scaled the "usd_goal_real" column. What about the "goal" column?
Begin by running the code cell below to create a DataFrame original_goal_data containing the "goal" column.
End of explanation
# TODO: Your code here
scaled_goal_data = ____
# Check your answer
q1.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q1.hint()
#_COMMENT_IF(PROD)_
q1.solution()
#%%RM_IF(PROD)%%
# scale the goals from 0 to 1
scaled_goal_data = minmax_scaling(original_goal_data, columns=['goal'])
q1.assert_check_passed()
Explanation: Use original_goal_data to create a new DataFrame scaled_goal_data with values scaled between 0 and 1. You must use the minmax_scaling() function.
End of explanation
# get the index of all positive pledges (Box-Cox only takes positive values)
index_of_positive_pledges = kickstarters_2017.usd_pledged_real > 0
# get only positive pledges (using their indexes)
positive_pledges = kickstarters_2017.usd_pledged_real.loc[index_of_positive_pledges]
# normalize the pledges (w/ Box-Cox)
normalized_pledges = pd.Series(stats.boxcox(positive_pledges)[0],
name='usd_pledged_real', index=positive_pledges.index)
print('Original data\nPreview:\n', positive_pledges.head())
print('Minimum value:', float(positive_pledges.min()),
'\nMaximum value:', float(positive_pledges.max()))
print('_'*30)
print('\nNormalized data\nPreview:\n', normalized_pledges.head())
print('Minimum value:', float(normalized_pledges.min()),
'\nMaximum value:', float(normalized_pledges.max()))
Explanation: 2) Practice normalization
Now you'll practice normalization. We begin by normalizing the amount of money pledged to each campaign.
End of explanation
# plot normalized data
ax = sns.histplot(normalized_pledges, kde=True)
ax.set_title("Normalized data")
plt.show()
Explanation: The values have changed significantly with normalization!
In the next code cell, you'll take a look at the distribution of the normalized data, where it should now resemble a normal distribution.
End of explanation
# TODO: Your code here!
Explanation: We used the "usd_pledged_real" column. Follow the same process to normalize the "pledged" column.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q2.check()
# Line below will give you a hint
#_COMMENT_IF(PROD)_
q2.hint()
#%%RM_IF(PROD)%%
# get the index of all positive pledges (Box-Cox only takes positive values)
index_positive_pledges = kickstarters_2017.pledged > 0
# get only positive pledges (using their indexes)
positive_pledges_only = kickstarters_2017.pledged.loc[index_positive_pledges]
# normalize the pledges (w/ Box-Cox)
normalized_values = pd.Series(stats.boxcox(positive_pledges_only)[0],
name='pledged', index=positive_pledges_only.index)
# plot normalized data
ax = sns.histplot(normalized_values, kde=True)
ax.set_title("Normalized data")
Explanation: How does the normalized "usd_pledged_real" column look different from when we normalized the "pledged" column? Or, do they look mostly the same?
Once you have an answer, run the code cell below.
End of explanation |
14,678 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing
EQTransformer comes with a few visualization tools to get a better sense of the data being processed and of the results.
1) continuity of the seismic data being processed
Step1: Check your current directory for 'data_chart.png'
2) continuous waveforms
Step2: Now you can mark the events that you have detected on your helicorder plot to check whether you have caught most of them or are missing too many (a high false-negative rate). This, together with the event plots (in the Figure subfolders in your station_output folders), can give you a sense of whether your threshold levels are too high or too low.
Step3: 3) map plot
Step4: check for 'station_map.png'
4) histograms | Python Code:
from EQTransformer.utils.plot import plot_data_chart
plot_data_chart('preproc/time_tracks.pkl', time_interval=10)
Explanation: Visualizing
EQTransformer comes with a few visualization tools to get a better sense of the data being processed and of the results.
1) continuity of the seismic data being processed:
Both the preprocessor and mseed_predictor output a "time_tracks.pkl" file that contains the time info of data slices and their number of components. You can use this file to visualize the continuity and type of your data using the following module:
End of explanation
from EQTransformer.utils.plot import plot_detections, plot_helicorder
plot_helicorder(input_mseed='downloads_mseeds/CA06/GS.CA06.00.HHZ__20190901T000000Z__20190902T000000Z.mseed',
input_csv=None, save_plot=True)
Explanation: Check your current directory for 'data_chart.png'
2) continuous waveforms:
It is always a good idea to check the raw data:
End of explanation
plot_helicorder(input_mseed='downloads_mseeds/CA06/GS.CA06.00.HH2__20190901T000000Z__20190902T000000Z.mseed',
input_csv='detections1/CA06_outputs/X_prediction_results.csv', save_plot=True)
Explanation: Now you can mark the events that you have detected on your helicorder plot to check whether you have caught most of them or are missing too many (a high false-negative rate). This, together with the event plots (in the Figure subfolders in your station_output folders), can give you a sense of whether your threshold levels are too high or too low.
End of explanation
plot_detections(input_dir ="detections1",
input_json="json/station_list.json",
plot_type='station_map',
marker_size=50)
Explanation: 3) map plot:
You can also visualize the number of detections over your network using this:
End of explanation
plot_detections(input_dir ="detections1",
input_json="json/station_list.json",
plot_type='hist',
time_window=120)
Explanation: check for 'station_map.png'
4) histograms:
And this command will automatically generate detection histograms for each station in your detection folder:
End of explanation |
14,679 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: GRU RNNs
Step2: How does this work on anything that is not a real movie review? | Python Code:
# Based on
# https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/6.2-understanding-recurrent-neural-networks.ipynb
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
from tensorflow import keras
# https://keras.io/datasets/#imdb-movie-reviews-sentiment-classification
max_features = 10000 # number of words to consider as features
maxlen = 500 # cut texts after this number of words (among top max_features most common words)
# each review is encoded as a sequence of word indexes
# indexed by overall frequency in the dataset
# output is 0 (negative) or 1 (positive)
imdb = tf.keras.datasets.imdb.load_data(num_words=max_features)
(raw_input_train, y_train), (raw_input_test, y_test) = imdb
# https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences
input_train = tf.keras.preprocessing.sequence.pad_sequences(raw_input_train, maxlen=maxlen)
input_test = tf.keras.preprocessing.sequence.pad_sequences(raw_input_test, maxlen=maxlen)
Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/tensorflow/sentiment-gru-reg.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
# Batch Normalization:
# https://towardsdatascience.com/batch-normalization-in-neural-networks-1ac91516821c
# https://www.quora.com/Why-does-batch-normalization-help
from tensorflow.keras.layers import GRU, Embedding, Bidirectional, BatchNormalization, Dropout
embedding_dim = 32
dropout = 0.6
recurrent_dropout = 0.4
model = keras.Sequential()
# encoder
model.add(Embedding(input_dim=max_features, output_dim=embedding_dim, input_length=maxlen))
# https://arxiv.org/ftp/arxiv/papers/1701/1701.05923.pdf
# n = output dimension
# m = input dimension
# Total number of parameters for
# RNN = n**2 + nm + n
# GRU = 3 × (n**2 + nm + n)
# LSTM = 4 × (n**2 + nm + n)
# return_sequences passes all outputs of all timesteps (not only the last one) to the next layer
model.add(GRU(name='gru1', units=32, dropout=dropout, recurrent_dropout=recurrent_dropout, return_sequences=True))
# for embedding: 32*2 (“standard deviation” parameter (gamma), “mean” parameter (beta)) trainable parameters
# and 32*2 (moving_mean and moving_variance) non-trainable parameters
model.add(BatchNormalization())
model.add(Dropout(dropout))
# stack recurrent layers like with fc
model.add(GRU(name='gru2', units=32))
model.add(BatchNormalization())
model.add(Dropout(dropout))
# latent space
model.add(tf.keras.layers.Dense(name='fc', units=32, activation='relu'))
# binary classifier as decoder
model.add(tf.keras.layers.Dense(name='classifier', units=1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
batch_size = 1000
%time history = model.fit(input_train, y_train, epochs=15, batch_size=batch_size, validation_split=0.2)
train_loss, train_accuracy = model.evaluate(input_train, y_train, batch_size=batch_size)
train_accuracy
test_loss, test_accuracy = model.evaluate(input_test, y_test, batch_size=batch_size)
test_accuracy
def plot_history(history, samples=10, init_phase_samples=None):
epochs = history.params['epochs']
acc = history.history['acc']
val_acc = history.history['val_acc']
every_sample = int(epochs / samples)
acc = pd.DataFrame(acc).iloc[::every_sample, :]
val_acc = pd.DataFrame(val_acc).iloc[::every_sample, :]
fig, ax = plt.subplots(figsize=(20,5))
ax.plot(acc, 'bo', label='Training acc')
ax.plot(val_acc, 'b', label='Validation acc')
ax.set_title('Training and validation accuracy')
ax.legend()
plot_history(history)
# precition
model.predict(input_test[0:5])
# ground truth
y_test[0:5]
Explanation: GRU RNNs
End of explanation
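As a quick sanity check of the parameter formula quoted in the comments above, plugging in the dimensions used here gives the rough size of the first GRU layer (the exact count reported by model.summary() can differ slightly depending on the GRU bias configuration):
n, m = 32, 32  # units and embedding dimension
print(3 * (n**2 + n*m + n))  # 6240 by the formula in the comments above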
word_to_id = keras.datasets.imdb.get_word_index()
def encode_text(text):
input_words = text.lower().split()
input_tokens = np.array([word_to_id[word] for word in input_words])
padded_input_tokens = keras.preprocessing.sequence.pad_sequences([input_tokens], maxlen=maxlen)
return padded_input_tokens
def predict_text(model, text):
input_sequence = encode_text(text)
embeddings = model.predict(input_sequence)
return embeddings
predict_text(model, "don't watch this movie")
predict_text(model, "lovely")
predict_text(model, "pathetic shit")
predict_text(model, "this is not a shit movie")
predict_text(model, "such a bad movie")
Explanation: How does this work on anything that is not a real movie review?
End of explanation |
14,680 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
101
Step1: This example is a mode choice model built using the Swissmetro example dataset.
First we can create a Model object
Step2: We can attach a title to the model. The title does not affect the calculations
at all; it is merely used in various output report styles.
Step3: We need to identify the availability and choice variables.
The Swissmetro dataset, as with all Biogeme data, is only
in co format, so we must define alternative
availability as an expression for each alternative, using a
dictionary to map alternative codes and expressions.
Step4: In the Swissmetro example dataset, as in many discrete choice
modeling applications, there is one and only one chosen
alternative for each case, so the choices can be described
as a single expression that evaluates to the code of the
chosen alternative.
Step5: We will also write utility functions for each alternative.
Since the data is only in co format, we must use only the
utility_co form for the utility functions.
Step6: Larch will find all the parameters in the model, but we'd like to output them in
a rational order. We can use the ordering method to do this
Step7: Now we can prepare the data, which is available in the data warehouse that
comes with Larch.
Step8: The swissmetro example models exclude some observations. We can use pandas
to identify the observations we would like to keep.
Step9: When you've created the data you need, you can pass the dataframe to
the larch.DataFrames constructor. Since the swissmetro data is in
idco format, we'll need to explicitly identify the alternative
codes as well.
Step10: You might notice we have not carefully constructed this object to
include only the relevant data or the various simple transformations
used in the utility definition above. Larch can do this itself, if
you assign this DataFrames not as the actual set of data used in model
estimation, but rather as the dataservice that can be used as the
source to create these computational arrays.
Step12: We can estimate the models and check the results match up with those given by Biogeme | Python Code:
# TEST
import os
import pandas as pd
pd.set_option("display.max_columns", 999)
pd.set_option('expand_frame_repr', False)
pd.set_option('display.precision', 3)
import larch
larch._doctest_mode_ = True
from pytest import approx
import larch.numba as lx
import larch.numba as lx
Explanation: 101: Swissmetro MNL Mode Choice
End of explanation
m = lx.Model()
Explanation: This example is a mode choice model built using the Swissmetro example dataset.
First we can create a Model object:
End of explanation
m.title = "swissmetro example 01 (simple logit)"
Explanation: We can attach a title to the model. The title does not affect the calculations
at all; it is merely used in various output report styles.
End of explanation
m.availability_co_vars = {
1: "TRAIN_AV * (SP!=0)",
2: "SM_AV",
3: "CAR_AV * (SP!=0)",
}
Explanation: We need to identify the availability and choice variables.
The Swissmetro dataset, as with all Biogeme data, is only
in co format, so we must define alternative
availability as an expression for each alternative, using a
dictionary to map alternative codes and expressions.
End of explanation
m.choice_co_code = 'CHOICE'
Explanation: In the Swissmetro example dataset, as in many discrete choice
modeling applications, there is one and only one chosen
alternative for each case, so the choices can be described
as a single expression that evaluates to the code of the
chosen alternative.
End of explanation
from larch.roles import P,X
m.utility_co[1] = P("ASC_TRAIN")
m.utility_co[2] = 0
m.utility_co[3] = P("ASC_CAR")
m.utility_co[1] += X("TRAIN_TT") * P("B_TIME")
m.utility_co[2] += X("SM_TT") * P("B_TIME")
m.utility_co[3] += X("CAR_TT") * P("B_TIME")
m.utility_co[1] += X("TRAIN_CO*(GA==0)") * P("B_COST")
m.utility_co[2] += X("SM_CO*(GA==0)") * P("B_COST")
m.utility_co[3] += X("CAR_CO") * P("B_COST")
Explanation: We will also write utility functions for each alternative.
Since the data is only in co format, we must use only the
utility_co form for the utility functions.
End of explanation
m.ordering = [
("ASCs", 'ASC.*',),
("LOS", 'B_.*',),
]
Explanation: Larch will find all the parameters in the model, but we'd like to output them in
a rational order. We can use the ordering method to do this:
End of explanation
import pandas as pd
raw_data = pd.read_csv(lx.example_file('swissmetro.csv.gz')).rename_axis(index='CASEID')
raw_data.head()
Explanation: Now we can prepare the data, which is available in the data warehouse that
comes with Larch.
End of explanation
keep = raw_data.eval("PURPOSE in (1,3) and CHOICE != 0")
selected_data = raw_data[keep]
Explanation: The swissmetro example models exclude some observations. We can use pandas
to identify the observations we would like to keep.
End of explanation
ds = lx.Dataset.construct.from_idco(selected_data, alts={1:'Train', 2:'SM', 3:'Car'})
ds
Explanation: When you've created the data you need, you can pass the dataframe to
the larch.DataFrames constructor. Since the swissmetro data is in
idco format, we'll need to explicitly identify the alternative
codes as well.
End of explanation
m.datatree = ds
Explanation: You might notice we have not carefully constructed this object to
include only the relevant data or the various simple transformations
used in the utility definition above. Larch can do this itself, if
you assign this DataFrames not as the actual set of data used in model
estimation, but rather as the dataservice that can be used as the
source to create these computational arrays.
End of explanation
m.set_cap(15)
m.maximize_loglike(method='SLSQP')
# TEST
r = _
assert r.loglike == approx(-5331.252006971916)
m.calculate_parameter_covariance()
m.parameter_summary()
# TEST
assert m.parameter_summary().data.to_markdown() ==
| | Value | Std Err | t Stat | Signif | Null Value |
|:----------------------|--------:|----------:|---------:|:---------|-------------:|
| ('ASCs', 'ASC_CAR') | -0.155 | 0.0432 | -3.58 | *** | 0 |
| ('ASCs', 'ASC_TRAIN') | -0.701 | 0.0549 | -12.78 | *** | 0 |
| ('LOS', 'B_COST') | -0.0108 | 0.000518 | -20.91 | *** | 0 |
| ('LOS', 'B_TIME') | -0.0128 | 0.000569 | -22.46 | *** | 0 |
[1:-1]
Explanation: We can estimate the models and check the results match up with those given by Biogeme:
End of explanation |
14,681 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Set weather data datetime
This notebook formats a date and a time column for weather data measurements with a unix timestamp. Each measurement is then inserted into a pumilio database.
Required packages
<a href="https
Step1: Import statements
Step2: Create and format a 'WeatherDate' and 'WeatherTime' column
Step3: Connect to database
Step5: Insert weather measurements into a pumilio database
Step6: Optionally export dataframe to a csv file | Python Code:
weather_filepath = ""
Explanation: Set weather data datetime
This notebook formats a date and a time column for weather data measurements with a unix timestamp. Each measurement is then inserted into a pumilio database.
Required packages
<a href="https://github.com/pydata/pandas">pandas</a> <br />
<a href="https://github.com/rasbt/pyprind">pyprind</a> <br />
<a href="https://github.com/jacobdein/pymilio">pymilio</a>
Variable declarations
weather_filepath – path to excel containing weather measurements, each with a unix timestamp
End of explanation
import pandas
import pyprind
from datetime import datetime
from Pymilio import database
Explanation: Import statements
End of explanation
weather_data = pandas.read_excel(weather_filepath)
weather_data['WeatherDate'] = weather_data['WeatherDate'].astype('str')
weather_data['WeatherTime'] = weather_data['WeatherTime'].astype('str')
for index, row in weather_data.iterrows():
timestamp = row['timestamp']
dt = datetime.fromtimestamp(timestamp)
date = datetime.strftime(dt, "%Y-%m-%d")
time = datetime.strftime(dt, "%H:%M:%S")
weather_data.set_value(index, 'WeatherDate', date)
weather_data.set_value(index, 'WeatherTime', time)
weather_data = weather_data.drop('timestamp', axis=1)
weather_data = weather_data.drop('LightIntensity', axis=1)
Explanation: Create and format a 'WeatherDate' and 'WeatherTime' column
End of explanation
pumilio_db = database.Pymilio_db_connection(user='pumilio',
                                            database='pumilio',
                                            read_default_file='~/.my.cnf.pumilio')
Explanation: Connect to database
End of explanation
table_name = 'WeatherData'
column_list = [ n for n in weather_data.columns ]
column_names = ", ".join(column_list)
progress_bar = pyprind.ProgBar(len(weather_data), bar_char='█', title='Progress', monitor=True, stream=1, width=50)
for index, row in weather_data.iterrows():
progress_bar.update(item_id=str(index))
value_list = [ str(v) for v in row.as_matrix() ]
value_strings = "'"
value_strings = value_strings + "', '".join(value_list)
value_strings = value_strings + "'"
#value_strings = value_strings.replace('nan', 'NULL')
    statement = "INSERT INTO {0} ({1}) VALUES ({2})".format(table_name, column_names, value_strings)
db = pumilio_db._connect()
c = db.cursor()
c.execute(statement)
c.close()
db.close()
Explanation: Insert weather measurements into a pumilio database
End of explanation
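As a side note, building the SQL statement by concatenating quoted values is fragile (quoting issues, injection risk). If the underlying cursor supports parameter placeholders, as pymysql-style cursors do, a safer variant could look roughly like the sketch below; this is an untested assumption about the Pymilio wrapper, not part of the original notebook.
# Hypothetical parameterized version of the per-row insert
placeholders = ", ".join(["%s"] * len(column_list))
insert_sql = "INSERT INTO {0} ({1}) VALUES ({2})".format(table_name, column_names, placeholders)
conn = pumilio_db._connect()
c = conn.cursor()
for index, row in weather_data.iterrows():
    c.execute(insert_sql, tuple(str(v) for v in row))
conn.commit()
c.close()
conn.close()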
#weather_data.to_csv("~/Desktop/weather_db.csv", index=False, header=False)
Explanation: Optionally export dataframe to a csv file
End of explanation |
14,682 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sam0-unicon', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: SNU
Source ID: SAM0-UNICON
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
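# Hypothetical usage sketch (placeholder name/email only, not the real author):
# DOC.set_author("Jane Doe", "jane.doe@example.org")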
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
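# Hypothetical usage sketch (placeholder name/email only, not a real contributor):
# DOC.set_contributor("John Smith", "john.smith@example.org")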
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
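# For reference, marking the document ready for publication would presumably be:
# DOC.set_publication_status(1)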
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
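# Illustrative sketch of a free-text (STRING) entry; the text below is a
# placeholder, not a description of SAM0-UNICON:
# DOC.set_value("Land surface scheme with multi-layer soil, snow and vegetation.")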
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
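# Illustrative sketch for a multi-valued ENUM (cardinality 0.N); the notebook
# convention appears to be one DOC.set_value() call per selected choice:
# DOC.set_value("water")
# DOC.set_value("energy")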
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
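# Illustrative sketch for a BOOLEAN property (placeholder answer, not the
# documented SAM0-UNICON setting):
# DOC.set_value(True)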
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
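# Illustrative sketch for an INTEGER property; the number below is a
# placeholder, not the actual model time step:
# DOC.set_value(1800)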
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of the soil hydrology scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
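# Illustrative sketch for a single-valued ENUM (cardinality 1.1); the string
# must match one of the valid choices listed above exactly:
# DOC.set_value("Explicit diffusion")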
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
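# Illustrative sketch (placeholder choices, not the documented configuration);
# for cardinality 1.N, repeat DOC.set_value() for each applicable process:
# DOC.set_value("snow interception")
# DOC.set_value("snow melting")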
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
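# Illustrative sketch (placeholder choice); as this property is optional
# (Is Required: FALSE), it can presumably also be left unset:
# DOC.set_value("snow age")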
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
14,683 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working from Remote Geophysical data
Examples of accessing NetCDF data via THREDDS/OPENDAP services in Python, and plotting with Basemap
First, import libraries
Important note: it looks like this demo will not work for users on Windows with Python 3.x. It will work on Windows with Python 2.7, however. If you are on Linux or Mac (or running 2.7 on Windows) you can add the Basemap package with conda using the command conda install Basemap.
Step1: Next, link to a remote dataset
For example, ocean data from NOAA's catalogue
Step2: Project these data onto a map
Step3: Complex example
Step4: These have just been one time-step in a bigger dataset...
Interact with a dataset via a widget interface
Use a local dataset for speed - Met Office CRUTEM4 (http | Python Code:
from mpl_toolkits.basemap import Basemap, shiftgrid
from netCDF4 import Dataset, date2index
import time
import sys
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from datetime import datetime, timedelta
%matplotlib notebook
Explanation: Working from Remote Geophysical data
Examples of accessing NetCDF data via THREDDS/OPENDAP services in Python, and plotting with Basemap
First, import libraries
Important note: it looks like this demo will not work for users on Windows with Python 3.x. It will work on Windows with Python 2.7, however. If you are on Linux or Mac (or running 2.7 on Windows) you can add the Basemap package with conda using the command conda install Basemap.
End of explanation
nc = Dataset('http://coastwatch.pfeg.noaa.gov/erddap/griddap/ncdcOisst2Agg')
nc.variables # See the metadata via this
# Grab a slice of the SST to play with
# nc.variables['sst']
sst = nc.variables['sst'][0,:].squeeze()
# preview the data with an array plotting function from matplotlib
fig, ax = plt.subplots(1,1)
ax.imshow(np.flipud(sst))
lon, lat = sst.shape
print("Number of (floating point value) pixels of AVHRR data retrieved: {0:10,}".format(lon * lat))
print("Size in memory: {0:3.1f} mb".format(16 * (lon*lat)/1000000)) # 16 bytes in a float, 1 million bytes in a megabyte
Explanation: Next, link to a remote dataset
For example, ocean data from NOAA's catalogue
End of explanation
%%time
# based on example at http://matplotlib.org/basemap/users/examples.html
date = datetime(2007,12,15,0) # Specify date to plot.
dataset = Dataset('http://coastwatch.pfeg.noaa.gov/erddap/griddap/ncdcOisst2Agg')
timevar = dataset.variables['time']
timeindex = date2index(date, timevar) # find time index for desired date.
# read sst. Will automatically create a masked array using
# missing_value variable attribute. 'squeeze out' singleton dimensions.
sst = dataset.variables['sst'][timeindex,:].squeeze()
# read ice.
ice = dataset.variables['ice'][timeindex,:].squeeze()
# read lats and lons (representing centers of grid boxes).
lats = dataset.variables['latitude'][:]
lons = dataset.variables['longitude'][:]
lons, lats = np.meshgrid(lons,lats)
# create figure, axes instances.
fig = plt.figure()
ax = fig.add_axes([0.05,0.05,0.9,0.9])
# create Basemap instance.
# coastlines not used, so resolution set to None to skip
# continent processing (this speeds things up a bit)
m = Basemap(projection='kav7',lon_0=0,resolution=None)
m.drawmapboundary(fill_color='0.3') # color map background
# plot sst, then ice with pcolor
im1 = m.pcolormesh(lons, lats, sst, shading='flat', cmap=plt.cm.jet, latlon=True)
im2 = m.pcolormesh(lons, lats, ice, shading='flat', cmap=plt.cm.gist_gray, latlon=True)
# draw parallels and meridians, but don't bother labelling them.
#m.drawparallels(np.arange(-90.,99.,30.))
#m.drawmeridians(np.arange(-180.,180.,60.))
# add colorbar
cb = m.colorbar(im1,"bottom", size="5%", pad="2%")
# add a title.
ax.set_title('SST and ICE location on {0}'.format(date.date()))
plt.show()
Explanation: Project these data onto a map
End of explanation
# specify date to plot.
date = datetime(1993, 3, 14, 0)
yyyy = date.year
mm = date.month
dd = date.day
hh = date.hour
# set OpenDAP server URL.
URLbase="http://nomads.ncdc.noaa.gov/thredds/dodsC/modeldata/cmd_pgbh/"
URL=URLbase+"%04i/%04i%02i/%04i%02i%02i/pgbh00.gdas.%04i%02i%02i%02i.grb2" %\
(yyyy,yyyy,mm,yyyy,mm,dd,yyyy,mm,dd,hh)
data = Dataset(URL)
latitudes = data.variables['lat'][::-1]
longitudes = data.variables['lon'][:].tolist()
# Get pressure and 10-m wind data
slpin = 0.01*data.variables['Pressure_msl'][:].squeeze() # 0.01* to convert to hPa
uin = data.variables['U-component_of_wind_height_above_ground'][:].squeeze()
vin = data.variables['V-component_of_wind_height_above_ground'][:].squeeze()
# add cyclic points manually (could use addcyclic function)
slp = np.zeros((slpin.shape[0],slpin.shape[1]+1),np.float64)
slp[:,0:-1] = slpin[::-1]; slp[:,-1] = slpin[::-1,0]
u = np.zeros((uin.shape[0],uin.shape[1]+1),np.float64)
u[:,0:-1] = uin[::-1]; u[:,-1] = uin[::-1,0]
v = np.zeros((vin.shape[0],vin.shape[1]+1),np.float64)
v[:,0:-1] = vin[::-1]; v[:,-1] = vin[::-1,0]
longitudes.append(360.)
longitudes = np.array(longitudes)
lons, lats = np.meshgrid(longitudes,latitudes) # make 2-d grid of lons, lats
m = Basemap(resolution='c',projection='ortho',lat_0=60.,lon_0=-60.)
# create figure, add axes
fig1 = plt.figure(figsize=(8,10))
ax = fig1.add_axes([0.1,0.1,0.8,0.8])
clevs = np.arange(960,1061,5)
x, y = m(lons, lats)
parallels = np.arange(-80.,90,20.)
meridians = np.arange(0.,360.,20.)
# plot SLP contours.
CS1 = m.contour(x,y,slp,clevs,linewidths=0.5,colors='k',animated=True)
CS2 = m.contourf(x,y,slp,clevs,cmap=plt.cm.RdBu_r,animated=True)
ugrid,newlons = shiftgrid(180.,u,longitudes,start=False)
vgrid,newlons = shiftgrid(180.,v,longitudes,start=False)
uproj,vproj,xx,yy = m.transform_vector(ugrid,vgrid,newlons,latitudes,31,31,returnxy=True,masked=True)
Q = m.quiver(xx,yy,uproj,vproj,scale=700)
qk = plt.quiverkey(Q, 0.1, 0.1, 20, '20 m/s', labelpos='W')
m.drawcoastlines(linewidth=1.5)
m.drawparallels(parallels)
m.drawmeridians(meridians)
cb = m.colorbar(CS2,"bottom", size="5%", pad="2%")
cb.set_label('hPa')
ax.set_title('SLP and Wind Vectors '+str(date.date()))
plt.show()
Explanation: Complex example: multiple datasets accessed and overlaid onto a map
Overlay datasets, project onto a map, calculate vectors, add map features (like coastlines)
End of explanation
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
%matplotlib inline
nc = Dataset('Data/CRUTEM.4.4.0.0.anomalies.nc')
lats = nc.variables['latitude'][:]
lons = nc.variables['longitude'][:]
lons, lats = np.meshgrid(lons,lats)
tind = nc.variables['time'][:]
@interact(index=(0, len(tind)))
def ftest(index):
basetime = datetime(1850, 1, 1)
date = basetime + timedelta(days=int(tind[index]))
tanom = nc.variables['temperature_anomaly'][index,:].squeeze()
fig = plt.figure(figsize=(10,10))
ax = fig.add_axes([0.05,0.05,0.9,0.9])
m = Basemap(projection='moll',lon_0=0,resolution='c')
m.drawcoastlines()
im1 = m.pcolormesh(lons, lats, tanom, shading='flat', cmap=cm.RdBu_r, latlon=True, vmin=-10, vmax=10)
m.drawparallels(np.arange(-90., 99., 30.))
m.drawmeridians(np.arange(-180., 180., 60.))
cb = m.colorbar(im1,"bottom", size="5%", pad="2%")
ax.set_title('{0} CRUTEM4 anom. (°C)'.format(date.date()))
plt.show()
Explanation: These have just been one time-step in a bigger dataset...
Interact with a dataset via a widget interface
Use a local dataset for speed - Met Office CRUTEM4 (http://www.metoffice.gov.uk/hadobs/crutem4/data/gridded_fields/CRUTEM.4.4.0.0.anomalies.nc.gz)
First step is to turn a plot into a function call, then decorate it with a widget to scroll through the time-steps.
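A stripped-down sketch of that pattern, with toy bounds and a print instead of the map redraw, just to show the decorator shape:
from ipywidgets import interact
@interact(index=(0, 10))
def preview(index):
    # the real cell above redraws the CRUTEM4 map for the selected time index
    print('selected time step:', index)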
End of explanation |
14,684 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiment for the paper "Identification of singleton mentions in Russian"
Replication of CLLS-2016 paper (Ionov and Toldova 2016)
To reproduce this experiment you will need
Step1: Reading the texts from GS and matching them to actual texts
Loading chains and GS
Step2: Loading special lists
Special lists are loaded from the directory stored in lists_dir
Step3: Building indices and dictionaries
Building additional indices (of all words and all groups)
Step4: Creating a classifier
Step5: Training and testing
Step6: Baseline
Baseline condition
Step7: String features
Step8: String + Struct features
Step9: String + Struct + List features
Step10: All features
Step11: Calculating feature importances
Step12: Additional actions
Getting feature distributions | Python Code:
%cd '/Users/max/Projects/Coreference/'
%cd 'rucoref'
from anaphoralib.corpora import rueval
from anaphoralib.tagsets import multeast
from anaphoralib.experiments.base import BaseClassifier
from anaphoralib import utils
from anaphoralib.experiments import utils as exp_utils
%cd '..'
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import BorderlineSMOTE
import numpy as np
%matplotlib inline
lists_dir = 'CLLS-2016/wordlists'
texts_dir = 'Corpus-2015/Tokens.txt'
gs_dir = 'Corpus-2015/Groups.txt'
tagset = multeast
random_state = 42
Explanation: Experiment for the paper "Identification of singleton mentions in Russian"
Replication of CLLS-2016 paper (Ionov and Toldova 2016)
To reproduce this experiment you will need:
1. RuCor corpus (from 2015-10-29)
2. Python modules:
* scikit-learn (v. 0.22.1)
* imbalanced-learn (v. 0.6.2)
* matplotlib (v. 3.1.3)
3. anaphoralib Python module
Since anaphoralib is in an early stage of development, there is no way to install it yet, so in order to import it, you should cd to the folder with the module. Paths to the corpus should be updated accordingly.
End of explanation
rucoref = rueval.RuCorefCorpus(multeast, rueval)
exp_utils.load_corpus(rucoref, texts_dir, gs_dir)
rucoref.groups[0][:10]
rucoref.print_stats()
rucoref.create_indices()
Explanation: Reading the texts from GS and matching them to actual texts
Loading chains and GS
End of explanation
import codecs
def load_list(filename):
data = set()
with codecs.open(filename, encoding='utf-8') as inp_file:
for line in inp_file:
data.add(line.strip('\r\n'))
return data
import os
wordlists = {}
for filename in os.listdir(lists_dir):
wordlists[filename.replace('.txt', '')] = load_list(os.path.join(lists_dir, filename))
print(wordlists.keys())
Explanation: Loading special lists
Special lists are loaded from the directory stored in lists_dir
End of explanation
import collections
word_index = []
group_index = []
for i, text in enumerate(rucoref.texts):
word_index.append(collections.defaultdict(set))
group_index.append(collections.defaultdict(set))
for word in text:
word_index[-1]['_'.join(word.lemma)].add(word.offset)
for group in rucoref.groups[i]:
for g in group.iter_groups():
group_index[-1]['_'.join(g.lemma)].add(g.offset)
print('\n'.join(list(group_index[0].keys())[:15]))
Explanation: Building indices and dictionaries
Building additional indices (of all words and all groups):
End of explanation
import re
class SingletonClassifier(BaseClassifier):
def __init__(self):
super(SingletonClassifier, self).__init__()
self.feat_zones_ = ('struct', 'string', 'lists', 'synt')
self.stats = {'str_matches_before', 'head_matches_before', 'n_adj', 'len_np', 'is_genitive'}
self.stats.update('in_list_{}'.format(l) for l in wordlists)
self.rx_lat = re.compile('[A-Za-z]')
self.pronouns = {u"его", u"ее", u"её", u"ей", u"ему", u"ею", u"им", u"ими", u"их", u"которая",
u"которого", u"которое", u"которой", u"котором", u"которому", u"которую", u"которые",
u"который", u"которым", u"которыми", u"которых", u"него", u"нее", u"неё", u"ней", u"нем",
u"нём", u"нему", u"нею", u"ним", u"ними", u"них", u"он", u"она", u"они", u"оно", u"свое",
u"своё", u"своего", u"своей", u"своем", u"своём", u"своему", u"своею", u"свой", u"свои",
u"своим", u"своими", u"своих", u"свою", u"своя", u"себе", u"себя", u"собой", u"собою"}
self.clear_stats()
def get_feature_vector(self, corpus, group, i_text, save_feature_names=False):
if save_feature_names:
self.feature_names_ = []
vctr = []
group_lemma = '_'.join(group.lemma)
group_occurrences = group_index[i_text][group_lemma] if group_lemma in group_index[i_text] else []
head_index = group.head
head_lemma = group.lemma[group.head]
head_occurrences = word_index[i_text][head_lemma] if head_lemma in word_index[i_text] else []
head_offset = group.head_offset
group_words = group.words if group.type != 'word' else [group]
str_matches_before = sum(1 for occ in group_occurrences if occ < group.offset)
head_matches_before = sum(1 for occ in head_occurrences if occ < group.offset)
adj_in_group = [word for word in group_words[:head_index+1] if tagset.pos_filters['adj'](word)]
self.stats['str_matches_before'].append(str_matches_before)
self.stats['head_matches_before'].append(head_matches_before)
self.stats['n_adj'].append(len(adj_in_group))
self.stats['len_np'].append(len(group_words))
if 'string' in self.feat_zones_:
vctr.append(('str_match_before=0', str_matches_before == 0))
vctr.append(('head_match_before=0', head_matches_before == 0))
#vctr.append(('uppercase', all(word.isupper() and len(word) > 1 for word in group.wordform)))
#vctr.append(('capitalized', any(word[0].isupper() and len(group.wordform) > 1 for word in group.wordform[1:])))
vctr.append(('latin', any(self.rx_lat.search(word) for word in group.wordform)))
vctr.append(('is_proper_noun', corpus.tagset.pos_filters['properNoun'](group)))
vctr.append(('is_animate', corpus.tagset.extract_feature('animate', group) == u'y'))
vctr.append(('is_pronoun', group.wordform[0] in self.pronouns and len(group_words) == 1))
i_word = corpus.words_index[i_text][group.offset]
left_word = corpus.texts[i_text][i_word - 1] if i_word > 0 else None
right_word = corpus.texts[i_text][i_word + len(group.wordform) + 1] \
if i_word + len(group.wordform) + 1 < len(corpus.texts[i_text]) else None
if 'struct' in self.feat_zones_:
#vctr.append(('conj', bool((left_word and corpus.tagset.pos_filters['conj'](left_word))
# or (right_word and corpus.tagset.pos_filters['conj'](right_word)))))
vctr.append(('len_np==1', len(group.tags) == 1))
vctr.append(('1<len_np<4', 1 < len(group.tags) < 4))
vctr.append(('len_np>=4', 1 < len(group.tags) >= 4))
vctr.append(('n_adj=0', len(adj_in_group) == 0))
#vctr.append(('n_adj>1', len(adj_in_group) > 1))
vctr.append(('n_adj>2', len(adj_in_group) > 2))
vctr.append(('is_genitive', corpus.tagset.extract_feature('case', group) == u'g'))
self.stats['is_genitive'].append(vctr[-1][1])
sent_begin = left_word is None or left_word.tag == 'SENT'
sent_end = right_word is None or right_word.tag == 'SENT'
nomin = corpus.tagset.extract_feature('case', group) == u'n'
accus = corpus.tagset.extract_feature('case', group) == u'a'
if 'synt' in self.feat_zones_:
vctr.append(('is_subject', nomin or sent_begin))
#vctr.append(('is_object', accus or sent_end))
if 'lists' in self.feat_zones_:
for l in wordlists:
feat_name = 'in_list_{}'.format(l)
vctr.append((feat_name, any(lemma in wordlists[l] for lemma in group.lemma[:head_index+1])))
self.stats[feat_name].append(vctr[-1][1])
if save_feature_names:
self.feature_names_ = [feat[0] for feat in vctr]
return [int(feat[1]) for feat in vctr]
def prepare_data(self, corpus, random_state=42, test_size=0.3, feature_zones=None):
if feature_zones:
self.feat_zones_ = feature_zones
self.groups = []
self.x_data = []
self.y_data = []
self.stats['class'] = []
self.cur_data_ = 'Singletons'
self.class_names_ = ('non-singleton', 'singleton')
save_features = True
exceptions = {u'и', u'в', u'а', u'к', u'у', u'по', u'где', u'ведь', u'с'}
for i_text, text in enumerate(corpus.texts):
for i, mention in enumerate(corpus.mentions[i_text]):
group = corpus.heads_index[i_text][mention.offset]
if group.lemma[0] in exceptions and group.tags[0].startswith('N'):
continue
if i not in rucoref.gs_index[i_text]:
self.y_data.append(self.class_names_.index('singleton'))
else:
self.y_data.append(self.class_names_.index('non-singleton'))
self.x_data.append(self.get_feature_vector(corpus, group, i_text, save_features))
self.groups.append(group)
self.stats['class'].append(self.class_names_[self.y_data[-1]])
save_features = False
#pronoun_index = self.feature_names_.index('is_pronoun')
#if self.x_data[-1][pronoun_index]:
# self.x_data.pop()
# self.y_data.pop()
# continue
#del self.x_data[-1][pronoun_index]
super(SingletonClassifier, self).prepare_data(corpus, random_state, test_size)
#del self.feature_names_[pronoun_index]
class_numbers = [sum(1 for item in self.y_data if item == cur_class) for cur_class in range(len(self.class_names_))]
self.ratio = float(min(class_numbers) / float(max(class_numbers)))
Explanation: Creating a classifier
End of explanation
singleton_clf = SingletonClassifier()
singleton_clf.prepare_data(rucoref, random_state=random_state)
Explanation: Training and testing
End of explanation
def baseline_predict(data):
y_pred = np.zeros(len(data))
for i, row in enumerate(data):
y_pred[i] = (row[0] == 1 and row[1] == 1)
return y_pred
singleton_clf.test(y_pred=baseline_predict(singleton_clf.x_data_test), test_name='baseline')
Explanation: Baseline
Baseline condition: NP is a singleton if there is no such exact string or its head in the text before
End of explanation
singleton_clf.prepare_data(rucoref, random_state=random_state, feature_zones=('string',))
clf = RandomForestClassifier(n_estimators=200, random_state=random_state)
sampler = BorderlineSMOTE(sampling_strategy='auto', kind='borderline-1', random_state=random_state)
singleton_clf.fit(clf, sampler)
singleton_clf.print_stats()
len(singleton_clf.x_data_train)
singleton_clf.test(test_name='string features')
Explanation: String features
End of explanation
singleton_clf = SingletonClassifier()
singleton_clf.prepare_data(rucoref, random_state=random_state, feature_zones=('string', 'struct'))
clf = RandomForestClassifier(n_estimators=200, random_state=random_state)
sampler = BorderlineSMOTE(sampling_strategy='auto', kind='borderline-1', random_state=random_state)
singleton_clf.fit(clf, sampler)
singleton_clf.test(test_name='string+struct features')
Explanation: String + Struct features
End of explanation
singleton_clf = SingletonClassifier()
singleton_clf.prepare_data(rucoref, random_state=random_state, feature_zones=('string', 'struct', 'lists'))
clf = RandomForestClassifier(n_estimators=200, random_state=random_state)
sampler = BorderlineSMOTE(sampling_strategy='auto', kind='borderline-1', random_state=random_state)
singleton_clf.fit(clf, sampler)
singleton_clf.test(test_name='string+struct+lists')
Explanation: String + Struct + List features
End of explanation
singleton_clf = SingletonClassifier()
singleton_clf.prepare_data(rucoref, random_state=random_state, feature_zones=('string', 'struct', 'lists', 'synt'))
clf = RandomForestClassifier(n_estimators=200, random_state=random_state)
sampler = BorderlineSMOTE(sampling_strategy='auto', kind='borderline-1', random_state=random_state)
singleton_clf.fit(clf, sampler)
singleton_clf.test(test_name='all features')
for i, feat_val in enumerate(singleton_clf.clf_.feature_importances_):
print('{}: {:.4f}'.format(singleton_clf.feature_names_[i], feat_val))
out_singletons = open('singletons.all.txt', 'w', encoding='utf-8')
out_non_singletons = open('non-singletons.all.txt', 'w', encoding='utf-8')
for i, item in enumerate(singleton_clf.groups_train):
if singleton_clf.y_data_train[i] == 1:
out_singletons.write(str(singleton_clf.groups_train[i]))
out_singletons.write('\n')
else:
out_non_singletons.write(str(singleton_clf.groups_train[i]))
out_non_singletons.write('\n')
out_fp = open('singletons.fp.txt', 'w', encoding='utf-8')
out_fn = open('singletons.fn.txt', 'w', encoding='utf-8')
y_pred = singleton_clf.clf_.predict(singleton_clf.x_data_test)
for i, item in enumerate(singleton_clf.groups_test):
if singleton_clf.y_data_test[i] == 0 and y_pred[i] != singleton_clf.y_data_test[i]:
out_fp.write(str(singleton_clf.groups_test[i]))
out_fp.write('\n')
if singleton_clf.y_data_test[i] == 1 and y_pred[i] != singleton_clf.y_data_test[i]:
out_fn.write(str(singleton_clf.groups_test[i]))
out_fn.write('\n')
Explanation: All features
End of explanation
regr = LogisticRegression(random_state=random_state, max_iter=200)
sampler = BorderlineSMOTE(sampling_strategy='auto', kind='borderline-1', random_state=random_state)
singleton_clf.fit(regr, sampler)
for i, feat_name in enumerate(singleton_clf.feature_names_):
print('{}: {:.4f}'.format(feat_name, regr.coef_[0,i]))
Explanation: Calculating feature importances
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
import anaphoralib.experiments.utils
singleton_clf.stats.keys()
singleton_clf = SingletonClassifier()
singleton_clf.prepare_data(rucoref, random_state=random_state, feature_zones=('string', 'struct', 'lists', 'synt'))
feature_distributions = {}
for feat_name in singleton_clf.stats:
feature_distributions[feat_name] = {cls: [] for cls in singleton_clf.class_names_ + ('total',)}
for i, elem in enumerate(singleton_clf.stats['class']):
feature_distributions[feat_name][elem].append(singleton_clf.stats[feat_name][i])
feature_distributions[feat_name]['total'].append(singleton_clf.stats[feat_name][i])
anaphoralib.experiments.utils.latexify()
def plot_feat_distribution(distribution, bins, class_names, x_label='Feature value', filename='plot.pdf'):
bins = range(7)
ax = plt.gca()
ax.set_xlabel(x_label)
ax.set_ylabel("Density")
#ax.set_title("Distribution of feature")
plt.tight_layout()
format_axes(ax)
normed = True
true_hist = np.histogram(distribution[class_names[1]], bins, density=normed)
false_hist = np.histogram(distribution[class_names[0]], bins, density=normed)
w = 0.3
true_x = [item for item in range(len(true_hist[0]))]
false_x = [item+w for item in range(len(false_hist[0]))]
ax.set_xticks([item + float(w) for item in true_x])
ax.set_xticklabels(true_x)
rects1 = plt.bar(false_x, false_hist[0], w, color='0.3')
rects2 = plt.bar(true_x, true_hist[0], w, color='0.7')
plt.legend((rects1, rects2), class_names, loc='upper right')
plt.savefig("{}.pdf".format(filename))
plt.show()
plt.close()
import os
anaphoralib.experiments.utils.latexify(columns=2)
for feat_name in feature_distributions:
if feat_name == 'class':
continue
anaphoralib.experiments.utils.plot_feature_distribution(feature_distributions[feat_name], range(7),
singleton_clf.class_names_,
x_label=feat_name.replace('_', '\\_'), filename=os.path.join('CLLS-2016', feat_name))
from sklearn.model_selection import learning_curve
from sklearn.metrics import make_scorer, f1_score
from sklearn.utils import shuffle
singleton_clf = SingletonClassifier()
singleton_clf.prepare_data(rucoref, random_state=random_state, feature_zones=('string', 'struct', 'lists', 'synt'))
clf = RandomForestClassifier(n_estimators=200, random_state=random_state)
shuffled_x_data, shuffled_y_data = shuffle(singleton_clf.x_data, singleton_clf.y_data, random_state=random_state)
train_sizes_abs, train_scores, test_scores = learning_curve(clf,
shuffled_x_data,
shuffled_y_data,
cv=3,
scoring=make_scorer(f1_score, pos_label=0))
anaphoralib.experiments.utils.latexify(columns=2)
anaphoralib.experiments.utils.plot_learning_curve(train_sizes_abs,
train_scores, test_scores,
score_name='f1',
filename=os.path.join('CLLS-2016', 'learning_curve_plot'))
Explanation: Additional actions
Getting feature distributions
End of explanation |
14,685 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hashing
Hashing can be useful in speeding up the search process for a specific item that is part of a larger collection of items. Depending on the implementation of the hashing algorithm, this can turn the computational complexity of our search algorithm from $O(n)$ to $O(1)$. We do this by building a specific data structure, which we'll dive into next.
Hash Table
A hash table is a collection of items, stored in such a way as to make it easier to find them later. The table consists of slots that hold items and are named by a specific integer value, starting with 0.
Example of a hash table (sorry for the poor formatting because markdown
Step2: Once the hash values have been computed, we inset each item into the hash table at the designated position(s). We can now see that there are entries with corresponding hash values stored in a python dictionary. This is obviously a very simple implementation of a hash table.
There is something interesting to note here, though, when using a simple hashing algorithm like the remainder method. We have items, in our case integers, which hash to the same value. Specifically, we can see that there are 2 items that hash to each of the 1, 9, and 10 slots. These are what are known as collisions.
Clearly these collisions can cause problems, as out of the 8 initial items that we'd started with, we only have 5 items actually stored in our hash table. This leads us into the next section we'll discuss, and that is hash functions that can help alleviate this collision problem.
Hash Functions
A hash function that maps every item into its own unique slot in a hash table is known as a perfect hash function. If we knew the collection of items and that it would never change, it would be possible to construct a perfect hash function specific to this collection, but we know that the dynamics of the real world tend not to allow something so simple.
Dynamically growing the hash table size so each possible item in the item range can be accommodated is one way to construct a perfect hash function. This guarantees each item will have its own slot. But this isn't feasible, as something as simple as tracking social security numbers would require over one billion slots within the hash table. And if we're only tracking a small subset of the full set of social security numbers, this would become horribly inefficient with respect to the hardware resources available on the machine our code is running on.
With the goal of constructing a hash function that will minimize the number of collisions, has low computational complexity, and evenly distributes our items within the hash table, we can take a look at some common ways to extend this remainder method.
Folding Method
The folding method for hashing an item begins by dividing the item into equal-size pieces (though the last piece may not be of equal size to the rest). These pieces are then added together to create the resulting hash value. A good example of this is a phone number, such as 456-555-1234. We can break the digits up into groups of 2, add them up, and use the resulting value as an input to our hashing function.
Step3: Ordinal Hash
When dealing with strings, we can use the ordinal values of the constituent characters of a given word, to create a hash.
It's important to notice, however, that anagrams can produce hash collisions, as shown below.
Step4: Weighted ordinal hashing
In the case above, just using ordinal values can cause hash collisions. We can actually use the positional structure of the word as a set of weights for generating a given hash, as seen below.
A simple multiplication by the positional value of each character will cause anagrams to evaluate to different hash values.
Step5: Collision Resolution
When there are hash collisions, like we've seen previously, it's important to understand ways that we can alleviate the collisions.
One simple way to handle the collision, should there already be an entry in our hash table with the same hash value, is to search sequentially through all slots near the original hash, for an empty slot. This may require us to circularly traverse the entire hash table to allow us to cover all possible slots. This process is known as open addressing and the technique within this process that we're using is called linear probing.
In the following code examples, we'll reuse the simple remainder method hash function that we've defined above. Along with the original set of integers we were hashing, as there were some collisions that occured.
Step6: We can see there were multiple collisions within this dataset. Specifically hashes of 1, 9, and 10. And we can see in the resulting table that only the last computed hashes are stored in the respective table slots.
Below we'll implement an lp_hash function that will perform linear probing over the slots available within the table for any collisions that occur.
Step7: Used a little more interestingly, we can use the weighted ordinal hash function that we've defined above, combined with the lp_hash function that we've just defined, to store string(s) for later lookup. | Python Code:
items = [25,54,34,67,75,21,77,31]
def hash(item_list, table_size):
hash_table = dict([(i,None) for i,x in enumerate(range(table_size))])
for item in item_list:
i = item % table_size
print("The hash for %s is %s" % (item, i))
hash_table[i] = item
return hash_table
# Execute the hash function
# Create table with 11 entries to match example above
hash_table = hash(items, 11)
# Print the resulting hash table
print(hash_table)
Explanation: Hashing
Hashing can be useful in speeding up the search process for a specific item that is part of a larger collection of items. Depending on the implementation of the hashing algorithm, this can turn the computational complexity of our search algorithm from $O(n)$ to $O(1)$. We do this by building a specific data structure, which we'll dive into next.
Hash Table
A hash table is a collection of items, stored in such a way as to make it easier to find them later. The table consists of slots that hold items and are named by a specific integer value, starting with 0.
Example of a hash table (sorry for the poor formatting because markdown :
| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| None | None | None | None | None | None | None | None | None | None | None |
Each entry in this hash table is currently set to a value of None.
A hash function is used when mapping values into the slots available within a Hash table. The hash function typically takes, as input, an item from a collection, and will return an integer in the range of slot names, between $0$ and $m-1$. There are many different hash functions, but the first we can discuss is the "remainder method" hash function.
Remainder Hash Function
The remainder hash function takes an item from a collection, divides it by the table size, and returns the remainder as its hash value: $h(item) = item \% \text{table_size}$. Typically modulo arithmetic is present in some form for all hash functions, as the result must be in the range of the total number of slots within the table.
Assuming we have a set of integer items ${25,54,34,67,75,21,77,31}$, we can use our hash function to find slots for our values, accordingly.
End of explanation
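As a quick by-hand check of the remainder arithmetic for the items above (table size 11):
for item in [25, 54, 34, 67, 75, 21, 77, 31]:
    print(item, '->', item % 11)
# 25->3, 54->10, 34->1, 67->1, 75->9, 21->10, 77->0, 31->9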
def stringify(item_list):
Method to convert integer values into array of component integers
string_items = []
while len(item_list) > 0:
for item in item_list:
chars = [int(c) for c in str(item)]
item_list.remove(item)
string_items.append(chars)
return string_items
def folding_hash(item_list):
'''
Quick hack at a folding hash algorithm
'''
hashes = []
while len(item_list) > 0:
hash_val = 0
for item in item_list:
while len(item) > 1:
str_1 = str(item[0])
str_2 = str(item[1])
str_concat = str_1 + str_2
bifold = int(str_concat)
hash_val += bifold
item.pop(0)
item.pop(0)
else:
if len(item) > 0:
hash_val += item[0]
else:
pass
hashes.append(hash_val)
return hashes
# Example phone numbers
phone_number = [4565551234, 4565557714, 9871542544, 4365554601]
# String/Character-fy the phone numbers
str_pn = stringify(phone_number)
# Hash the phone numbers
folded_hash = folding_hash(str_pn)
# Input values into hash table
folding_hash_table = hash(folded_hash, 11)
# Print the results
print(folding_hash_table)
Explanation: Once the hash values have been computed, we insert each item into the hash table at the designated position(s). We can now see that there are entries with corresponding hash values stored in a Python dictionary. This is obviously a very simple implementation of a hash table.
There is something interesting to note here, though, when using a simple hashing algorithm like the remainder method. We have items, in our case integers, which hash to the same value. Specifically, we can see that there are 2 items that hash to each of the 1, 9, and 10 slots. These are what are known as collisions.
Clearly these collisions can cause problems: out of the 8 initial items that we started with, we only have 5 items actually stored in our hash table. This leads us into the next section we'll discuss, and that is hash functions that can help alleviate this collision problem.
Hash Functions
A hash function that maps every item into its own unique slot in a hash table is known as a perfect hash function. If we knew the collection of items and that it would never change, it would be possible to construct a perfect hash function specific to this collection, but we know that the dynamics of the real world tend not to allow something so simple.
Dynamically growing the hash table size so each possible item in the item range can be accommodated is one way to construct a perfect hash function. This guarantees each item will have its own slot. But this isn't feasible, as something as simple as tracking social security numbers would require over one billion slots within the hash table. And if we're only tracking a small subset of the full set of social security numbers, this would become horribly inefficient with respect to the hardware resources available on the machine our code is running on.
With the goal of constructing a hash function that minimizes the number of collisions, has low computational complexity, and evenly distributes our items within the hash table, we can take a look at some common ways to extend this remainder method.
Folding Method
The folding method for hashing an item begins by dividing the item into equal-size pieces (though the last piece may not be of equal size to the rest). These pieces are then added together to create the resulting hash value. A good example of this is a phone number, such as 456-555-1234. We can break the digits up into groups of 2, add them up, and use the resulting value as an input to our hashing function.
End of explanation
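Worked example of the folding idea for 456-555-1234: the digit pairs are 45, 65, 55, 12 and 34, which sum to 211, and 211 % 11 puts it in slot 2.
pairs = [45, 65, 55, 12, 34]
print(sum(pairs), sum(pairs) % 11)   # 211 2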
def ord_hash(string, table_size):
hash_val = 0
for position in range(len(string)):
hash_val = hash_val + ord(string[position])
return hash_val % table_size
print(ord_hash("cat", 11))
print(ord_hash("tac", 11))
Explanation: Ordinal Hash
When dealing with strings, we can use the ordinal values of the constituent characters of a given word, to create a hash.
It's important to notice, however, that anagrams can produce hash collisions, as shown below.
End of explanation
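Working the ordinal sum out by hand: ord('c') + ord('a') + ord('t') = 99 + 97 + 116 = 312, and 312 % 11 = 4. Because 'tac' is built from the same characters, it produces the same sum and therefore the same slot, which is the collision shown above.
print(sum(ord(c) for c in 'cat'), 312 % 11)   # 312 4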
def weighted_ord_hash(string, table_size):
hash_val = 0
for position in range(len(string)):
hash_val = hash_val + (ord(string[position]) * position)
return hash_val % table_size
# ord_hash
print(ord_hash("cat", 11))
# weighted_ord_hash
print(weighted_ord_hash("tac", 11))
Explanation: Weighted ordinal hashing
In the case above, just using ordinal values can cause hash collisions. We can actually use the positional structure of the word as a set of weights for generating a given hash, as seen below.
A simple multiplication by the positional value of each character will cause anagrams to evaluate to different hash values.
End of explanation
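By hand, with position weights 0, 1, 2: 'cat' gives 99*0 + 97*1 + 116*2 = 329 (slot 329 % 11 = 10), while 'tac' gives 116*0 + 97*1 + 99*2 = 295 (slot 295 % 11 = 9), so the two anagrams now land in different slots.
print(weighted_ord_hash('cat', 11), weighted_ord_hash('tac', 11))   # 10 9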
items = [25,54,34,67,75,21,77,31]
# Execute the hash function
# Create table with 11 entries to match example above
hash_table = hash(items, 11)
# Print the resulting hash table
print(hash_table)
Explanation: Collision Resolution
When there are hash collisions, like we've seen previously, it's important to understand ways that we can alleviate the collisions.
One simple way to handle the collision, should there already be an entry in our hash table with the same hash value, is to search sequentially through all slots near the original hash, for an empty slot. This may require us to circularly traverse the entire hash table to allow us to cover all possible slots. This process is known as open addressing and the technique within this process that we're using is called linear probing.
In the following code examples, we'll reuse the simple remainder method hash function that we've defined above. Along with the original set of integers we were hashing, as there were some collisions that occured.
End of explanation
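The circular traversal mentioned above comes from the modulo in the rehash step: probing past the last slot of an 11-slot table wraps back around to slot 0.
print((10 + 1) % 11)   # 0, i.e. the probe wraps to the front of the table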
items = [25,54,34,67,75,21,77,31]
def rehash(oldhash, table_size):
return (oldhash+1) % table_size
def lp_hash(item_list, table_size):
lp_hash_table = dict([(i,None) for i,x in enumerate(range(table_size))])
for item in item_list:
i = item % table_size
print("%s hashed == %s \n" %(item, i))
if lp_hash_table[i] == None:
lp_hash_table[i] = item
elif lp_hash_table[i] != None:
print("Collision, attempting linear probe \n")
next_slot = rehash(i, table_size)
print("Setting next slot to %s \n" % next_slot)
while lp_hash_table[next_slot] != None:
next_slot = rehash(next_slot, len(lp_hash_table.keys()))
print("Next slot was not empty, trying next slot %s \n" % next_slot)
if lp_hash_table[next_slot] == None:
lp_hash_table[next_slot] = item
return lp_hash_table
print(lp_hash(items, 11))
Explanation: We can see there were multiple collisions within this dataset. Specifically hashes of 1, 9, and 10. And we can see in the resulting table that only the last computed hashes are stored in the respective table slots.
Below we'll implement an lp_hash function that will perform linear probing over the slots available within the table for any collisions that occur.
End of explanation
animal_items = ["cat", "dog", "goat",
"chicken", "pig", "horse",
"ostrich", "lion", "puma"]
def rehash(oldhash, table_size):
return (oldhash+1) % table_size
def weighted_ord_hash(string, table_size):
hash_val = 0
for position in range(len(string)):
hash_val = hash_val + (ord(string[position]) * position)
return hash_val % table_size
def lp_hash(item_list, table_size):
lp_hash_table = dict([(i,None) for i,x in enumerate(range(table_size))])
for item in item_list:
i = weighted_ord_hash(item, table_size)
print("%s hashed == %s \n" %(item, i))
if lp_hash_table[i] == None:
lp_hash_table[i] = item
elif lp_hash_table[i] != None:
print("Collision, attempting linear probe \n")
next_slot = rehash(i, table_size)
print("Setting next slot to %s \n" % next_slot)
while lp_hash_table[next_slot] != None:
next_slot = rehash(next_slot, len(lp_hash_table.keys()))
print("Next slot was not empty, trying next slot %s \n" % next_slot)
if lp_hash_table[next_slot] == None:
lp_hash_table[next_slot] = item
return lp_hash_table
print(lp_hash(animal_items, 11))
Explanation: Used a little more interestingly, we can use the weighted ordinal hash function that we've defined above, combined with the lp_hash function that we've just defined, to store string(s) for later lookup.
End of explanation |
14,686 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook will demonstrate how to do basic SuperDARN data plotting.
Step1: Remote File RTI Plots
Step2: Local File RTI Plot
You can also plot data stored in a local file. Just change the variables in the cell below.
Step3: Fan Plots
Right now we don't have plotFan set up to accept local files, but we will add that in shortly.
Geographic Coordinates
Step4: Magnetic Coordinates
Magnetic coordinates still need a little work. For instance, high latitude continent lines don't always plot. Also, we are working on getting Simon's new AACGM system in place (http
Step5: Convection Plotting | Python Code:
%pylab inline
import datetime
import os
import matplotlib.pyplot as plt
from davitpy import pydarn
sTime = datetime.datetime(2008,2,22)
eTime = datetime.datetime(2008,2,23)
radar = 'bks'
beam = 7
Explanation: This notebook will demonstrate how to do basic SuperDARN data plotting.
End of explanation
#The following command will print the docstring for the plotRti routine:
#pydarn.plotting.rti.plotRti?
figs = pydarn.plotting.rti.plot_rti(sTime, radar, eTime=eTime, bmnum=beam)
fig = figs[0]
fig.show()
#Now save as a PNG to your home folder...
home = os.getenv('HOME')
filename = os.path.join(home,'rti.png')
fig.savefig(filename)
fig.clear() #Clear the figure from memory.
Explanation: Remote File RTI Plots
End of explanation
fileName = '/tmp/sd/20080222.000000.20080223.000000.bks.fitex'
fileType = 'fitex'
radar = 'bks'
beam = 7
sTime = datetime.datetime(2008,2,22)
eTime = datetime.datetime(2008,2,23)
figs = pydarn.plotting.rti.plot_rti(sTime, radar, eTime=eTime, bmnum=beam, fileName=fileName,fileType=fileType)
fig = figs[0]
fig.show()
fig.clear() #Clear the figure from memory.
Explanation: Local File RTI Plot
You can also plot data stored in a local file. Just change the variables in the cell below.
End of explanation
import datetime
import os
import matplotlib.pyplot as plt
from davitpy import pydarn
pydarn.plotting.fan.plotFan(datetime.datetime(2013,3,16,16,30),['fhe','fhw'],param='power',gsct=False)
Explanation: Fan Plots
Right now we don't have plotFan set up to accept local files, but we will add that in shortly.
Geographic Coordinates
End of explanation
import datetime
import os
import matplotlib.pyplot as plt
from davitpy import pydarn
pydarn.plotting.fan.plotFan(datetime.datetime(2013,3,16,16,30),['fhe','fhw'],param='power',gsct=False,coords='mag')
Explanation: Magnetic Coordinates
Magnetic coordinates still need a little work. For instance, high latitude continent lines don't always plot. Also, we are working on getting Simon's new AACGM system in place (http://dx.doi.org/doi/10.1002/2014JA020264). Not there yet...
End of explanation
import datetime
import matplotlib.pyplot as plt
import davitpy.pydarn.plotting.plotMapGrd
from davitpy.utils import *
fig = plt.figure(figsize=(15,15))
ax = fig.add_subplot(111)
sdate = datetime.datetime(2011,4,3,4,0)
mObj = plotUtils.mapObj(boundinglat=50.,gridLabels=True, coords='mag')
mapDatObj = davitpy.pydarn.plotting.plotMapGrd.MapConv(sdate, mObj, ax)
mapDatObj.overlayMapFitVel()
mapDatObj.overlayCnvCntrs()
mapDatObj.overlayHMB()
fig.show()
Explanation: Convection Plotting
End of explanation |
14,687 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Euler Equations
The Euler equations in primitive variable form, $q = (\rho, u, p)^\intercal$ appear as
Step1: The eigenvalues are the speeds at which information propagates with. SymPy returns them as a
dictionary, giving the multiplicity for each eigenvalue.
Step2: The right eigenvectors are what SymPy gives natively. For a given eigenvalue, $\lambda$, these
satisfy
Step3: 0-th right eigenvector
Step4: this corresponds to the eigenvalue
Step5: 1-st right eigenvector
Step6: this corresponds to the eigenvalue
Step7: 2-nd right eigenvector
Step8: this corresponds to the eigenvalue
Step9: Here they are as a matrix, $R$, in order from smallest to largest eigenvalue
Step10: Left Eigenvectors
The left eigenvectors satisfy
Step11: Traditionally, we normalize these such that $l^{(\mu)} \cdot r^{(\nu)} = \delta_{\mu\nu}$
Step12: 0-th left eigenvector
Step13: 1-st left eigenvector
Step14: 2-nd left eigenvector
Step15: Entropy formulation
here we write the system in terms of $q_s = (\rho, u, s)^\intercal$, where the system is
$${q_s}_t + A_s(q_s) {q_s}_x = 0$$
and
$$
A_s = \left (\begin{matrix}u & \rho & 0\
\frac{c^{2}}{\rho} & u & \frac{p_{s}}{\rho}\
0 & 0 & u\end{matrix}\right)
$$
Step16: left eigenvectors
Step17: normalization
Step18: 2-d system | Python Code:
from sympy.abc import rho
rho, u, c = symbols('rho u c')
A = Matrix([[u, rho, 0], [0, u, rho**-1], [0, c**2 * rho, u]])
A
Explanation: Euler Equations
The Euler equations in primitive variable form, $q = (\rho, u, p)^\intercal$ appear as:
$$q_t + A(q) q_x = 0$$
with the matrix $A(q)$:
$$A(q) = \left ( \begin{array}{ccc} u & \rho & 0 \\
0 & u & 1/\rho \\
0 & \gamma p & u \end{array} \right )
$$
The sound speed is related to the adiabatic index, $\gamma$, as $c^2 = \gamma p /\rho$.
We can represent this matrix symbolically in SymPy and explore its eigensystem.
End of explanation
A.eigenvals()
Explanation: The eigenvalues are the speeds at which information propagates. SymPy returns them as a
dictionary, giving the multiplicity for each eigenvalue.
End of explanation
R = A.eigenvects() # this returns a tuple for each eigenvector with multiplicity -- unpack it
r = []
lam = []
for (ev, _, rtmp) in R:
r.append(rtmp[0])
lam.append(ev)
# we can normalize them anyway we want, so let's make the first entry 1
for n in range(len(r)):
v = r[n]
r[n] = v/v[0]
Explanation: The right eigenvectors are what SymPy gives natively. For a given eigenvalue, $\lambda$, these
satisfy:
$$A r = \lambda r$$
Right Eigenvectors
End of explanation
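As a quick sanity check (reusing A, r and lam from above), each pair should satisfy the eigenvalue relation, so the residuals below simplify to zero vectors.
[simplify(A * r[n] - lam[n] * r[n]) for n in range(3)]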
r[0]
Explanation: 0-th right eigenvector
End of explanation
lam[0]
Explanation: this corresponds to the eigenvalue
End of explanation
r[1]
Explanation: 1-st right eigenvector
End of explanation
lam[1]
Explanation: this corresponds to the eigenvalue
End of explanation
r[2]
Explanation: 2-nd right eigenvector
End of explanation
lam[2]
Explanation: this corresponds to the eigenvalue
End of explanation
R = zeros(3,3)
R[:,0] = r[1]
R[:,1] = r[0]
R[:,2] = r[2]
R
Explanation: Here they are as a matrix, $R$, in order from smallest to largest eigenvalue
End of explanation
B = A.transpose()
B
L = B.eigenvects()
l = []
laml = []
for (ev, _, ltmp) in L:
l.append(ltmp[0].transpose())
laml.append(ev)
Explanation: Left Eigenvectors
The left eigenvectors satisfy:
$$l A = \lambda l$$
SymPy doesn't have a method to get left eigenvectors directly, so we take the transpose of this expression:
$$(l A)^\intercal = A^\intercal l^\intercal = \lambda l^\intercal$$
Therefore, the transpose of the left eigenvectors, $l^\intercal$, are the right eigenvectors of the transpose of $A$
End of explanation
for n in range(len(l)):
if lam[n] == laml[n]:
ltmp = l[n]
p = ltmp.dot(r[n])
l[n] = ltmp/p
Explanation: Traditionally, we normalize these such that $l^{(\mu)} \cdot r^{(\nu)} = \delta_{\mu\nu}$
End of explanation
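As a check on that normalization (assuming the eigenvalue orderings lined up in the loop above), the matrix of dot products $l^{(\mu)} \cdot r^{(\nu)}$ should simplify to the 3x3 identity.
simplify(Matrix([[l[i].dot(r[j]) for j in range(3)] for i in range(3)]))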
l[0]
Explanation: 0-th left eigenvector
End of explanation
l[1]
Explanation: 1-st left eigenvector
End of explanation
l[2]
Explanation: 2-nd left eigenvector
End of explanation
ps = symbols('p_s')
As = Matrix([[u, rho, 0], [c**2/rho, u, ps/rho], [0, 0, u]])
As
As.eigenvals()
R = As.eigenvects() # this returns a tuple for each eigenvector with multiplicity -- unpack it
r = []
lam = []
for (ev, _, rtmp) in R:
r.append(rtmp[0])
lam.append(ev)
# we can normalize them anyway we want, so let's make the first entry 1
for n in range(len(r)):
v = r[n]
r[n] = v/v[0]
r[0], lam[0]
r[1], lam[1]
r[2], lam[2]
Explanation: Entropy formulation
here we write the system in terms of $q_s = (\rho, u, s)^\intercal$, where the system is
$${q_s}_t + A_s(q_s) {q_s}_x = 0$$
and
$$
A_s = \left (\begin{matrix}u & \rho & 0\\
\frac{c^{2}}{\rho} & u & \frac{p_{s}}{\rho}\\
0 & 0 & u\end{matrix}\right)
$$
End of explanation
Bs = As.transpose()
L = Bs.eigenvects()  # left eigenvectors of the entropy-variable system come from the transpose of As
l = []
laml = []
for (ev, _, ltmp) in L:
l.append(ltmp[0].transpose())
laml.append(ev)
Explanation: left eigenvectors
End of explanation
for n in range(len(l)):
if lam[n] == laml[n]:
ltmp = l[n]
p = ltmp.dot(r[n])
l[n] = ltmp/p
simplify(l[0])
l[1]
l[2]
Explanation: normalization
End of explanation
rho, u, v, c = symbols('rho u v c')
A = Matrix([[u, rho, 0, 0], [0, u, 0, rho**-1], [0,0, u, 0], [0, c**2 * rho, 0, u]])
A
A.eigenvals()
R = A.eigenvects() # this returns a tuple for each eigenvector with multiplicity -- unpack it
r = []
lam = []
for (ev, _, rtmp) in R:
for rv in rtmp:
r.append(rv)
lam.append(ev)
# we can normalize them anyway we want, so let's make the first entry 1
for n in range(len(r)):
v = r[n]
if not v[0] == 0:
r[n] = v/v[0]
r[0], lam[0]
r[1], lam[1]
r[2], lam[2]
r[3], lam[3]
Explanation: 2-d system
End of explanation |
14,688 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You are almost done with the course. Nice job!
We have a couple more interesting problems for you before you go.
As always, run the setup code below before working on the questions.
Step1: Let's start with a string lightning round to warm up. What are the lengths of the strings below?
For each of the five strings below, predict what len() would return when passed that string. Use the variable length to record your answer, then run the cell to check whether you were right.
0a.
Step2: 0b.
Step3: 0c.
Step5: 0d.
Step6: 0e.
Step10: 1.
There is a saying that "Data scientists spend 80% of their time cleaning data, and 20% of their time complaining about cleaning data." Let's see if you can write a function to help clean US zip code data. Given a string, it should return whether or not that string represents a valid zip code. For our purposes, a valid zip code is any string consisting of exactly 5 digits.
HINT
Step12: 2.
A researcher has gathered thousands of news articles. But she wants to focus her attention on articles including a specific word. Complete the function below to help her filter her list of articles.
Your function should meet the following criteria
Step14: 3.
Now the researcher wants to supply multiple keywords to search for. Complete the function below to help her.
(You're encouraged to use the word_search function you just wrote when implementing this function. Reusing code in this way makes your programs more robust and readable - and it saves typing!) | Python Code:
from learntools.core import binder; binder.bind(globals())
from learntools.python.ex6 import *
print('Setup complete.')
Explanation: You are almost done with the course. Nice job!
We have a couple more interesting problems for you before you go.
As always, run the setup code below before working on the questions.
End of explanation
a = ""
length = ____
q0.a.check()
Explanation: Let's start with a string lightning round to warm up. What are the lengths of the strings below?
For each of the five strings below, predict what len() would return when passed that string. Use the variable length to record your answer, then run the cell to check whether you were right.
0a.
End of explanation
b = "it's ok"
length = ____
q0.b.check()
Explanation: 0b.
End of explanation
c = 'it\'s ok'
length = ____
q0.c.check()
Explanation: 0c.
End of explanation
d = hey
length = ____
q0.d.check()
Explanation: 0d.
End of explanation
e = '\n'
length = ____
q0.e.check()
Explanation: 0e.
End of explanation
def is_valid_zip(zip_code):
Returns whether the input string is a valid (5 digit) zip code
pass
# Check your answer
q1.check()
#%%RM_IF(PROD)%%
def is_valid_zip(zip_code):
Returns whether the input string is a valid (5 digit) zip code
return len(zip_code) == 5 and zip_code.isdigit()
q1.assert_check_passed()
#%%RM_IF(PROD)%%
def is_valid_zip(zip_code):
Returns whether the input string is a valid (5 digit) zip code
return len(zip_code) == 5
q1.assert_check_failed()
#_COMMENT_IF(PROD)_
q1.hint()
#_COMMENT_IF(PROD)_
q1.solution()
Explanation: 1.
There is a saying that "Data scientists spend 80% of their time cleaning data, and 20% of their time complaining about cleaning data." Let's see if you can write a function to help clean US zip code data. Given a string, it should return whether or not that string represents a valid zip code. For our purposes, a valid zip code is any string consisting of exactly 5 digits.
HINT: str has a method that will be useful here. Use help(str) to review a list of string methods.
End of explanation
def word_search(doc_list, keyword):
    """
    Takes a list of documents (each document is a string) and a keyword.
    Returns list of the index values into the original list for all documents
    containing the keyword.

    Example:
    doc_list = ["The Learn Python Challenge Casino.", "They bought a car", "Casinoville"]
    >>> word_search(doc_list, 'casino')
    >>> [0]
    """
pass
# Check your answer
q2.check()
#_COMMENT_IF(PROD)_
q2.hint()
#_COMMENT_IF(PROD)_
q2.solution()
Explanation: 2.
A researcher has gathered thousands of news articles. But she wants to focus her attention on articles including a specific word. Complete the function below to help her filter her list of articles.
Your function should meet the following criteria:
Do not include documents where the keyword string shows up only as a part of a larger word. For example, if she were looking for the keyword “closed”, you would not include the string “enclosed.”
She does not want you to distinguish upper case from lower case letters. So the phrase “Closed the case.” would be included when the keyword is “closed”
Do not let periods or commas affect what is matched. “It is closed.” would be included when the keyword is “closed”. But you can assume there are no other types of punctuation.
End of explanation
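# One possible sketch that satisfies the three criteria above (illustrative only;
# q2.solution() shows the official answer): lowercase every token, strip trailing
# periods/commas, and compare whole tokens.
def word_search_sketch(doc_list, keyword):
    indices = []
    for i, doc in enumerate(doc_list):
        # split on whitespace, strip trailing '.'/',' and ignore case
        tokens = [token.rstrip('.,').lower() for token in doc.split()]
        if keyword.lower() in tokens:
            indices.append(i)
    return indices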
def multi_word_search(doc_list, keywords):
    """
    Takes list of documents (each document is a string) and a list of keywords.
    Returns a dictionary where each key is a keyword, and the value is a list of indices
    (from doc_list) of the documents containing that keyword

    >>> doc_list = ["The Learn Python Challenge Casino.", "They bought a car and a casino", "Casinoville"]
    >>> keywords = ['casino', 'they']
    >>> multi_word_search(doc_list, keywords)
    {'casino': [0, 1], 'they': [1]}
    """
pass
# Check your answer
q3.check()
#_COMMENT_IF(PROD)_
q3.solution()
Explanation: 3.
Now the researcher wants to supply multiple keywords to search for. Complete the function below to help her.
(You're encouraged to use the word_search function you just wrote when implementing this function. Reusing code in this way makes your programs more robust and readable - and it saves typing!)
End of explanation |
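# A matching sketch for the multi-keyword version, reusing the single-keyword
# sketch above (again illustrative, not the graded solution):
def multi_word_search_sketch(doc_list, keywords):
    # one dictionary entry per keyword, each holding the matching document indices
    return {keyword: word_search_sketch(doc_list, keyword) for keyword in keywords}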
14,689 | Given the following text description, write Python code to implement the functionality described.
Description:
Minimize cost to split an array into K subsets such that the cost of each element is its product with its position in the subset
Function to find the minimum cost to split array into K subsets ; Sort the array in descending order ; Stores minimum cost to split the array into K subsets ; Stores position of elements of a subset ; Iterate over the range [ 1 , N ] ; Calculate the cost to select X - th element of every subset ; Update min_cost ; Update X ; Driver code ; Function call
| Python Code:
def getMinCost(arr , n , k ) :
arr . sort(reverse = True )
min_cost = 0 ;
X = 0 ;
for i in range(0 , n , k ) :
for j in range(i , n , 1 ) :
if(j < i + k ) :
min_cost += arr[j ] *(X + 1 ) ;
X += 1 ;
return min_cost ;
if __name__ == '__main__' :
arr =[9 , 20 , 7 , 8 ] ;
K = 2 ;
N = len(arr ) ;
print(getMinCost(arr , N , K ) ) ;
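# Worked example (added for illustration): for arr = [9, 20, 7, 8] and K = 2 the
# descending order is [20, 9, 8, 7]; the first pass places 20 and 9 at position 1
# (cost 20 + 9 = 29) and the second pass places 8 and 7 at position 2
# (cost 16 + 14 = 30), so the call above prints 59.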
|
14,690 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import all the necessary libraries
Step1: Load our data and take a look at its state
Step2: It is easy to see that the training dataset is missing data on age, cabin and port of embarkation.
The test dataset is missing data on age, cabin and fare.
Let's start with the Embarked field in the training dataset, which holds the port of embarkation.
Let's check which rows are missing this value.
Step3: Let's look at the overall relationship between survival chance and port of embarkation.
Step4: Let's look at other possible relationships that could tell us where these passengers boarded the ship.
Step5: Now let's fix the empty fare field in the test dataset.
Step6: Let's look at all passengers with similar values of the other features.
Step7: We conclude that the fare was most likely around this value.
Step8: Now let's deal with the Age field in the training dataset. It deserves extra attention, since it is a very important feature that strongly affects passenger survival.
Step9: The names contain titles, and we can make use of them too, since social status may be an important survival feature.
Step10: Instead of the two separate fields for relatives aboard, Parch (parents/children) and SibSp (siblings/spouses), let's build a single FamilySize field
Step11: Sex is also a very important feature, but if you've seen the movie Titanic you probably remember "Women and children first." So let's create a new feature that takes both sex and age into account
Step12: We've confirmed that our data is now in order and move on to dropping the columns we don't need.
Step13: We have categorical variables and we should encode them. Pandas already provides the get_dummies function for this
Step14: Let's create a function that will plot the learning curve as a function of the number of training examples.
Step15: We split our training dataset in two so that, before submitting our model, we can check that it does not overfit our data (so-called cross-validation)
Step16: Let's look at a random forest model. We start with ordinary parameter values, then use GridSearchCV to pick the optimal ones, and at the end take a look at what we got
Step17: Let's repeat all of the steps described above, which we did for the random forest, now for logistic regression.
Step18: We choose the model we like best and submit it to Kaggle. | Python Code:
# pandas
import pandas as pd
from pandas import DataFrame
import re
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve, train_test_split, GridSearchCV
from sklearn.metrics import make_scorer, accuracy_score
Explanation: Import all the necessary libraries
End of explanation
train_df = pd.read_csv("titanic/train.csv")
test_df = pd.read_csv("titanic/test.csv")
test_df.head()
train_df.info()
print("----------------------------")
test_df.info()
Explanation: Load our data and take a look at its state
End of explanation
# Embarked
train_df[train_df.Embarked.isnull()]
Explanation: It is easy to see that the training dataset is missing data on age, cabin and port of embarkation.
The test dataset is missing data on age, cabin and fare.
Let's start with the Embarked field in the training dataset, which holds the port of embarkation.
Let's check which rows are missing this value.
End of explanation
# plot
#sns.factorplot('Embarked','Survived', data=train_df,size=4,aspect=3)
fig, (axis1,axis2,axis3) = plt.subplots(1,3,figsize=(15,5))
sns.countplot(x='Embarked', data=train_df, ax=axis1)
sns.countplot(x='Survived', hue="Embarked", data=train_df, order=[1,0], ax=axis2)
# group by embarked, and get the mean for survived passengers for each value in Embarked
embark_perc = train_df[["Embarked", "Survived"]].groupby(['Embarked'],as_index=False).mean()
sns.barplot(x='Embarked', y='Survived', data=embark_perc,order=['S','C','Q'],ax=axis3)
Explanation: Let's look at the overall relationship between survival chance and port of embarkation.
End of explanation
train_df.loc[train_df.Ticket == '113572']
print( 'C == ' + str( len(train_df.loc[train_df.Pclass == 1].loc[train_df.Fare > 75].loc[train_df.Fare < 85].loc[train_df.Embarked == 'C']) ) )
print( 'S == ' + str( len(train_df.loc[train_df.Pclass == 1].loc[train_df.Fare > 75].loc[train_df.Fare < 85].loc[train_df.Embarked == 'S']) ) )
train_df = train_df.set_value(train_df.Embarked.isnull(), 'Embarked', 'C')
train_df.loc[train_df.Embarked.isnull()]
Explanation: Let's look at other possible relationships that could tell us where these passengers boarded the ship.
End of explanation
test_df[test_df.Fare.isnull()]
Explanation: Now let's fix the empty fare field in the test dataset.
End of explanation
fig = plt.figure(figsize=(8, 5))
ax = fig.add_subplot(111)
test_df[(test_df.Pclass==3)&(test_df.Embarked=='S')].Fare.hist(bins=100, ax=ax)
plt.xlabel('Fare')
plt.ylabel('Frequency')
plt.title('Histogram of Fare, Plcass 3 and Embarked S')
print ("The top 5 most common value of Fare")
test_df[(test_df.Pclass==3)&(test_df.Embarked=='S')].Fare.value_counts().head()
Explanation: Let's look at all passengers with similar values of the other features.
End of explanation
test_df.set_value(test_df.Fare.isnull(), 'Fare', 8.05)
test_df.loc[test_df.Fare.isnull()]
Explanation: We conclude that the fare was most likely around this value.
End of explanation
test_df.loc[test_df.Age.isnull()].head()
fig, (axis1,axis2) = plt.subplots(1,2,figsize=(15,4))
axis1.set_title('Original Age values')
axis2.set_title('New Age values')
# mean, std and number of missing Age values in the training dataset
average_age_train = train_df["Age"].mean()
std_age_train = train_df["Age"].std()
count_nan_age_train = train_df["Age"].isnull().sum()
# mean, std and number of missing Age values in the test dataset
average_age_test = test_df["Age"].mean()
std_age_test = test_df["Age"].std()
count_nan_age_test = test_df["Age"].isnull().sum()
# generate random values between (mean - std) and (mean + std)
rand_1 = np.random.randint(average_age_train - std_age_train, average_age_train + std_age_train, size = count_nan_age_train)
rand_2 = np.random.randint(average_age_test - std_age_test, average_age_test + std_age_test, size = count_nan_age_test)
# plot the original Age histogram (dropping NaNs and converting to int)
train_df['Age'].dropna().astype(int).hist(bins=70, ax=axis1)
test_df['Age'].dropna().astype(int).hist(bins=70, ax=axis1)
# fill the missing Age values with the random values
train_df["Age"][np.isnan(train_df["Age"])] = rand_1
test_df["Age"][np.isnan(test_df["Age"])] = rand_2
# convert floats to ints
train_df['Age'] = train_df['Age'].astype(int)
test_df['Age'] = test_df['Age'].astype(int)
# histogram of the new Age values
train_df['Age'].hist(bins=70, ax=axis2)
test_df['Age'].hist(bins=70, ax=axis2)
# A few more plots
# survival peaks as a function of age
facet = sns.FacetGrid(train_df, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train_df['Age'].max()))
facet.add_legend()
# average survival rate by age
fig, axis1 = plt.subplots(1,1,figsize=(18,4))
average_age = train_df[["Age", "Survived"]].groupby(['Age'],as_index=False).mean()
sns.barplot(x='Age', y='Survived', data=average_age)
train_df.info()
test_df.info()
Explanation: Now let's deal with the Age field in the training dataset. It deserves extra attention, since it is a very important feature that strongly affects passenger survival.
End of explanation
Title_Dictionary = {
"Capt": "Officer",
"Col": "Officer",
"Major": "Officer",
"Jonkheer": "Nobel",
"Don": "Nobel",
"Sir" : "Nobel",
"Dr": "Officer",
"Rev": "Officer",
"the Countess":"Nobel",
"Dona": "Nobel",
"Mme": "Mrs",
"Mlle": "Miss",
"Ms": "Mrs",
"Mr" : "Mr",
"Mrs" : "Mrs",
"Miss" : "Miss",
"Master" : "Master",
"Lady" : "Nobel"
}
train_df['Title'] = train_df['Name'].apply(lambda x: Title_Dictionary[x.split(',')[1].split('.')[0].strip()])
test_df['Title'] = test_df['Name'].apply(lambda x: Title_Dictionary[x.split(',')[1].split('.')[0].strip()])
train_df.head(100)
Explanation: The names contain titles, and we can make use of them too, since social status may be an important survival feature.
End of explanation
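# Quick illustration (not in the original notebook) of the split-and-strip chain
# used above, applied to a single name:
sample_name = "Braund, Mr. Owen Harris"
print(sample_name.split(',')[1].split('.')[0].strip())  # -> "Mr", which Title_Dictionary maps to "Mr"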
train_df['FamilySize'] = train_df['SibSp'] + train_df['Parch']
test_df['FamilySize'] = test_df['SibSp'] + test_df['Parch']
train_df.head()
Explanation: Instead of the two separate fields for relatives aboard, Parch (parents/children) and SibSp (siblings/spouses), let's build a single FamilySize field
End of explanation
def get_person(passenger):
age,sex = passenger
return 'child' if age < 16 else sex
train_df['Person'] = train_df[['Age','Sex']].apply(get_person,axis=1)
test_df['Person'] = test_df[['Age','Sex']].apply(get_person,axis=1)
train_df.head()
train_df.info()
print("----------------------------")
train_df.info()
Explanation: Sex is also a very important feature, but if you've seen the movie Titanic you probably remember "Women and children first." So let's create a new feature that takes both sex and age into account
End of explanation
train_df.drop(labels=['PassengerId', 'Name', 'Cabin', 'Ticket', 'SibSp', 'Parch', 'Sex'], axis=1, inplace=True)
test_df.drop(labels=['Name', 'Cabin', 'Ticket', 'SibSp', 'Parch', 'Sex'], axis=1, inplace=True)
train_df.head()
Explanation: We've confirmed that our data is now in order, and we move on to dropping the columns we don't need.
End of explanation
dummies_person_train = pd.get_dummies(train_df['Person'],prefix='Person')
dummies_embarked_train = pd.get_dummies(train_df['Embarked'], prefix= 'Embarked')
dummies_title_train = pd.get_dummies(train_df['Title'], prefix= 'Title')
dummies_pclass_train = pd.get_dummies(train_df['Pclass'], prefix= 'Pclass')
train_df = pd.concat([train_df, dummies_person_train, dummies_embarked_train, dummies_title_train, dummies_pclass_train], axis=1)
train_df = train_df.drop(['Person','Embarked','Title', 'Pclass'], axis=1)
train_df.head()
dummies_person_test = pd.get_dummies(test_df['Person'],prefix='Person')
dummies_embarked_test = pd.get_dummies(test_df['Embarked'], prefix= 'Embarked')
dummies_title_test = pd.get_dummies(test_df['Title'], prefix= 'Title')
dummies_pclass_test = pd.get_dummies(test_df['Pclass'], prefix= 'Pclass')
test_df = pd.concat([test_df, dummies_person_test, dummies_embarked_test, dummies_title_test, dummies_pclass_test], axis=1)
test_df = test_df.drop(['Person','Embarked','Title', 'Pclass'], axis=1)
test_df.head()
Explanation: We have categorical variables and we should encode them. Pandas already provides the get_dummies function for this
End of explanation
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5), scoring='accuracy'):
plt.figure(figsize=(10,6))
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel(scoring)
train_sizes, train_scores, test_scores = learning_curve(estimator, X, y, cv=cv, scoring=scoring,
n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
Explanation: Let's create a function that will plot the learning curve as a function of the number of training examples.
End of explanation
X = train_df.drop(['Survived'], axis=1)
y = train_df.Survived
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size = 0.3)
Explanation: We split our training dataset in two so that, before submitting our model, we can check that it does not overfit our data (so-called cross-validation)
End of explanation
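# Aside (not in the original notebook): the hold-out split above is a single
# validation split rather than k-fold cross-validation; a quick k-fold accuracy
# estimate could be sketched as follows.
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(RandomForestClassifier(n_estimators=10), X, y, cv=5)
print(cv_scores.mean())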
# Choose the type of classifier.
clf = RandomForestClassifier()
# Choose some parameter combinations to try
parameters = {'n_estimators': [4, 6, 9],
'max_features': ['log2', 'sqrt','auto'],
'criterion': ['entropy', 'gini'],
'max_depth': [2, 3, 5, 10],
'min_samples_split': [2, 3, 5],
'min_samples_leaf': [1,5,8]
}
# Type of scoring used to compare parameter combinations
acc_scorer = make_scorer(accuracy_score)
# Run the grid search
grid_obj = GridSearchCV(clf, parameters, scoring=acc_scorer)
grid_obj = grid_obj.fit(X_train, y_train)
# Set the clf to the best combination of parameters
clf = grid_obj.best_estimator_
# Fit the best algorithm to the data.
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
print(accuracy_score(y_test, predictions))
plot_learning_curve(clf, 'Random Forest', X, y, cv=4);
from sklearn.cross_validation import KFold
def run_kfold(clf):
kf = KFold(891, n_folds=10)
outcomes = []
fold = 0
for train_index, test_index in kf:
fold += 1
X_train, X_test = X.values[train_index], X.values[test_index]
y_train, y_test = y.values[train_index], y.values[test_index]
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
outcomes.append(accuracy)
print("Fold {0} accuracy: {1}".format(fold, accuracy))
mean_outcome = np.mean(outcomes)
print("Mean Accuracy: {0}".format(mean_outcome))
run_kfold(clf)
Explanation: Let's look at a random forest model. We start with ordinary parameter values, then use GridSearchCV to pick the optimal ones, and at the end take a look at what we got
End of explanation
from sklearn.linear_model import LogisticRegression
lg = LogisticRegression(random_state=42, penalty='l1')
parameters = {'C':[0.5]}
# Type of scoring used to compare parameter combinations
acc_scorer_lg = make_scorer(accuracy_score)
# Run the grid search
grid_obj_lg = GridSearchCV(lg, parameters, scoring=acc_scorer_lg)
grid_obj_lg = grid_obj_lg.fit(X_train, y_train)
# Set the clf to the best combination of parameters
lg = grid_obj_lg.best_estimator_
# Fit the best algorithm to the data.
lg.fit(X_train, y_train)
predictions_lg = lg.predict(X_test)
print(accuracy_score(y_test, predictions_lg))
plot_learning_curve(lg, 'Logistic Regression', X, y, cv=4);
Explanation: Let's repeat all of the steps described above, which we did for the random forest, now for logistic regression.
End of explanation
ids = test_df['PassengerId']
predictions = clf.predict(test_df.drop('PassengerId', axis=1))
output = pd.DataFrame({ 'PassengerId' : ids, 'Survived': predictions })
output.to_csv('titanic-predictions.csv', index = False)
output.head()
Explanation: We choose the model we like best and submit it to Kaggle.
End of explanation |
14,691 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An old FiPy solution to 1D BL
I'm not really sure if the result is correct.
The result is indeed incorrect. See the updated code a few cells below that might work slightly better.
Step1: Updated solution on October 8th, 2019
Step2: Visualize the relative permeability and fractional flow curves
Step3: Some tests on fipy numerix | Python Code:
from fipy import *
u = 1.e-3
L = 100.
nx = 200
dt = 200.
dx = L/nx
muo = 0.002
muw = 0.001
mesh = Grid1D(dx = L/nx, nx = nx)
x = mesh.cellCenters
sw = CellVariable(mesh=mesh, name="saturation", hasOld=True, value = 0.)
sw.setValue(1,where = x<=dx)
sw.constrain(1.,mesh.facesLeft)
#sw.constrain(0., mesh.facesRight)
sw.faceGrad.constrain([0], mesh.facesRight)
eq = TransientTerm(coeff=1) + UpwindConvectionTerm(coeff = u
*(sw**2./muw)/(sw**2./muw+(1-sw)**2./muo) * [[1]]) == 0
sw.constrain(1.,mesh.facesLeft)
#sw.constrain(0., mesh.facesRight)
sw.faceGrad.constrain([0], mesh.facesRight)
steps = 100
viewer = Viewer(vars = sw, datamax=1.1, datamin=-0.1)
for step in range(steps):
sw.updateOld()
swres = 1.0e6
while swres > 1e-5:
swres = eq.sweep(dt = dt, var = sw)
print(swres)
viewer.plot()
Explanation: An old FiPy solution to 1D BL
I'm not really sure if the result is correct.
The result is indeed incorrect. See the updated code a few cells below that might work slightly better.
End of explanation
from fipy import *
# relperm parameters
swc = 0.1
sor = 0.1
krw0 = 0.3
kro0 = 1.0
nw = 2.0
no = 2.0
# domain and boundaries
u = 1.e-3
L = 100.
nx = 50
dt = 200.
dx = L/nx
# fluid properties
muo = 0.002
muw = 0.001
# define the fractional flow functions
def krw(sw):
res = krw0*((sw-swc)/(1-swc-sor))**nw
return res
def kro(sw):
res = kro0*((1-sw-sor)/(1-swc-sor))**no
return res
def fw(sw):
res = krw(sw)/muw/(krw(sw)/muw+kro(sw)/muo)
return res
import matplotlib.pyplot as plt
import numpy as np
sw_plot = np.linspace(swc, 1-sor, 50)
Explanation: Updated solution on October 8th, 2019
End of explanation
krw_plot = [krw(sw) for sw in sw_plot]
kro_plot = [kro(sw) for sw in sw_plot]
fw_plot = [fw(sw) for sw in sw_plot]
plt.figure(1)
plt.plot(sw_plot, krw_plot, sw_plot, kro_plot)
plt.show()
plt.figure(2)
plt.plot(sw_plot, fw_plot)
plt.show()
# create the grid
mesh = Grid1D(dx = L/nx, nx = nx)
x = mesh.cellCenters
# create the cell variables and boundary conditions
sw = CellVariable(mesh=mesh, name="saturation", hasOld=True, value = 0.)
# sw.setValue(1,where = x<=dx)
sw.constrain(1-sor,mesh.facesLeft)
#sw.constrain(0., mesh.facesRight)
sw.faceGrad.constrain([0], mesh.facesRight)
Explanation: Visualize the relative permeability and fractional flow curves
End of explanation
krw_cell = krw(sw)
krw_cell.value
# It works fine
eq = TransientTerm(coeff=1) + UpwindConvectionTerm(coeff = u
*(sw**2./muw)/(sw**2./muw+(1-sw)**2./muo) * [[1]]) == 0
sw.constrain(1.,mesh.facesLeft)
#sw.constrain(0., mesh.facesRight)
sw.faceGrad.constrain([0], mesh.facesRight)
steps = 100
viewer = Viewer(vars = sw, datamax=1.1, datamin=-0.1)
for step in range(steps):
sw.updateOld()
swres = 1.0e6
while swres > 1e-5:
swres = eq.sweep(dt = dt, var = sw)
print(swres)
viewer.plot()
Explanation: Some tests on fipy numerix
End of explanation |
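# Sketch (not from the original notebook): the transport equation above still hard-codes
# the quadratic fractional flow, so one way to make the updated cells self-consistent
# would be to build the convection coefficient from the Corey-type fw() defined earlier:
#   eq = TransientTerm(coeff=1) + UpwindConvectionTerm(coeff=u * fw(sw) * [[1]]) == 0
# (krw(sw) was already shown to evaluate on the CellVariable above; sw would also need to
# stay inside [swc, 1 - sor] for the relperm expressions to remain physical).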
14,692 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Default Behavior of BTE
Step1: As you can see above, by default, BTE will query all APIs integrated.
Remove a specific API or a list of APIs from BTE
Step2: The Registry class stores all APIs used in BTE.
To remove one or more API from the registry, use the method below
Step3: check the current api list
Step4: So now, the "biolink" and "dgidb" are removed from the registry
We then pass the registry as a parameter to the FindConnection Class.
Step5: If you look through the query log above, the API "biolink" and "dgidb" are no longer querid.
Specify a list of APIs to include | Python Code:
from biothings_explorer.user_query_dispatcher import FindConnection
from biothings_explorer.hint import Hint
ht = Hint()
# find all potential representations of CML
cml_hint = ht.query("MONDO:0011996")
# select the correct representation of CML
cml = cml_hint['Disease'][0]
cml
# find all potential representations of imatinib
imatinib_hint = ht.query("imatinib")
# select the correct representation of imatinib
imatinib = imatinib_hint['ChemicalSubstance'][0]
imatinib
fc = FindConnection(input_obj=cml, output_obj=imatinib, intermediate_nodes='Gene')
fc.connect(verbose=True)
Explanation: Default Behavior of BTE
End of explanation
from biothings_explorer.registry import Registry
reg = Registry()
reg.show_all_apis()
Explanation: As you can see above, by default, BTE will query all APIs integrated.
Remove a specific API or a list of APIs from BTE
End of explanation
reg.remove_apis(['biolink', 'dgidb'])
Explanation: The Registry class stores all APIs used in BTE.
To remove one or more API from the registry, use the method below:
End of explanation
reg.show_all_apis()
Explanation: Check the current API list
End of explanation
fc = FindConnection(input_obj=cml, output_obj=imatinib, intermediate_nodes='Gene', registry=reg)
fc.connect(verbose=True)
Explanation: So now the "biolink" and "dgidb" APIs are removed from the registry
We then pass the registry as a parameter to the FindConnection Class.
End of explanation
reg.refine_api_list(["semmed_chemical", "semmed_disease"])
reg.show_all_apis()
fc = FindConnection(input_obj=cml, output_obj=imatinib, intermediate_nodes='Gene', registry=reg)
fc.connect(verbose=True)
Explanation: If you look through the query log above, the "biolink" and "dgidb" APIs are no longer queried.
Specify a list of APIs to include
End of explanation |
14,693 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'sandbox-2', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: SANDBOX-2
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:35
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
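Note that for a boolean property like this one the template passes the value unquoted; an illustrative (hypothetical) entry would be:
# DOC.set_value(True)   # or False, depending on the model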
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
14,694 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graph Analyses
Here, we'll perform various analysis by constructing graphs and measure properties of those graphs to learn more about the data
Step1: We'll start with just looking at analysis in euclidian space, then thinking about weighing by synaptic density later. Since we hypothesize that our data will show that tissue varies as we move down the y-axis (z-axis in brain) through cortical layers, an interesting thing to do would be compare properties of the graphs on each layer (ie how does graph connectivity vary as we move through layers).
Let's start by triangulating our data. We'll use Delaunay on each y layer first. Putting our data in the right format for doing graph analysis...
Step2: Now that our data is in the right format, we'll create 52 delaunay graphs. Then we'll perform analyses on these graphs. A simple but useful metric would be to analyze edge length distributions in each layer.
Step3: We're going to need a method to get edge lengths from 2D centroid pairs
Step4: Realizing after all this that simply location is useless. We know the voxels are evenly spaced, which means our edge length data will be all the same. See that the "centroids" are no different
Step5: There is no distance between the two. Therefore it is perhaps more useful to consider a graph that considers node weights. Voronoi is dual to Delaunay, so that's not much of an option. We want something that considers both spacial location and density similarity.
Region Adjacency Graph (RAG)
In the past we've considered density information alone (ie analysis density histogram distribution) and above we are considering only spacial information, which totally doesn't say anything. To construct a graph that consider both spacial and density information, we'll use a Region Adjacency Graph (RAG).
In RAGs, two nodes are considered as neighbor if they are close in proximity (separated by a small number of pixels/voxels) in the horizontal or vertical direction.
<img src="../docs/figures/plot_rag_1.png" width="400">
Since our data includes density values at each node (voxel, or pixel since we're looking at y-layers), we can weight by the inverse of density difference between two nodes. Inverse because strongly connected nodes should be close in weight.
We have number of synapses S<sub>i</sub> at nodes $i$ and define weights $w$ between the nodes
Step6: Handrolling Own RAG generator
Step7: The following method is for later
Step8: Testing the RAG generator
Step9: Creating RAGs for each layer
Step10: OK, great! Now we have a list of 52 region adjacency graphs for each y-layer. Now we want to measure properties of those graphs and see how the properties vary in the y direction - through what we hypothesize are the cortical layers.
Number of Edges
This is just a sanity check. They should all be the same if we did it right!
Step11: Drawing Graphs
First we look at the default networkx graph plotting
Step12: This is using the spring layout, so we're losing positional information. We can improve the plot by adding position information.
Edge Weight Stats
Step13: Full edge weight histograms down y-axis
Step14: Mean edge weights
Step15: Edge Weight Variances
Step16: Edge Weight Third Moments (Skewness)
Step17: Edge Weight Fourth Moments (Kurtosis)
Step18: Hmmm...very interesting
Linear graph weights
We're now going to change our weighting function to be linear and scaled by the max and min difference in each layer. This might help eliminates some of the edge effect behavior I suspect is causing that rapid change in statistics in deeper y-layers.
Step19: Linear Edge Weight Means
Step20: Linear Edge Weight Variance
Step21: Linear Edge Weight Skewness
Step22: Linear Edge Weight Kurtosis
Step23: Number of Self Loops
Step24: Interesting. There are no self loops. Why would this be? Let's come back to this. In the meantime, I want to give some though to what it means to have a self loop, whether it should be theoretically possible given our data, and whether our graphs are formed properly.
The answer to this question is very simple. In a RAG, there are no self-loops by definition. Self loops are edges that form a connection between a node and itself.
<img src="../docs/figures/selfloop.png" width="100">
To see whether the graphs are formed properly, let's look at an adjacency lists
Step25: Compare that to the test data
Step26: X-Layers
Step27: We can see here the number of edges is low in that area that does not have many synapses. It, as expected, mirrors the distribution of synapses. It appears to be approximately uniform at the top, with buffers of very few synapses on the sides. Remember from here | Python Code:
import csv
from scipy.stats import kurtosis
from scipy.stats import skew
from scipy.spatial import Delaunay
import numpy as np
import math
import skimage
import matplotlib.pyplot as plt
import seaborn as sns
from skimage import future
import networkx as nx
%matplotlib inline
# Read in the data
data = open('../data/data.csv', 'r').readlines()
fieldnames = ['x', 'y', 'z', 'unmasked', 'synapses']
reader = csv.reader(data)
reader.next()
rows = [[int(col) for col in row] for row in reader]
# These will come in handy later
sorted_x = sorted(list(set([r[0] for r in rows])))
sorted_y = sorted(list(set([r[1] for r in rows])))
sorted_z = sorted(list(set([r[2] for r in rows])))
Explanation: Graph Analyses
Here, we'll perform various analysis by constructing graphs and measure properties of those graphs to learn more about the data
End of explanation
a = np.array(rows)
b = np.delete(a, np.s_[3::],1)
# Separate layers - have to do some wonky stuff to get this to work
b = sorted(b, key=lambda e: e[1])
b = np.array([v.tolist() for v in b])
b = np.split(b, np.where(np.diff(b[:,1]))[0]+1)
Explanation: We'll start with just looking at analysis in euclidian space, then thinking about weighing by synaptic density later. Since we hypothesize that our data will show that tissue varies as we move down the y-axis (z-axis in brain) through cortical layers, an interesting thing to do would be compare properties of the graphs on each layer (ie how does graph connectivity vary as we move through layers).
Let's start by triangulating our data. We'll use Delaunay on each y layer first. Putting our data in the right format for doing graph analysis...
End of explanation
graphs = []
centroid_list = []
for layer in b:
centroids = np.array(layer)
# get rid of the y value - not relevant anymore
centroids = np.delete(centroids, 1, 1)
centroid_list.append(centroids)
graph = Delaunay(centroids)
graphs.append(graph)
Explanation: Now that our data is in the right format, we'll create 52 delaunay graphs. Then we'll perform analyses on these graphs. A simple but useful metric would be to analyze edge length distributions in each layer.
End of explanation
def get_d_edge_length(edge):
(x1, y1), (x2, y2) = edge
return math.sqrt((x2-x1)**2 + (y2-y1)**2)
edge_length_list = [[]]
tri_area_list = [[]]
for del_graph in graphs:
tri_areas = []
edge_lengths = []
triangles = []
for t in centroids[del_graph.simplices]:
triangles.append(t)
a, b, c = [tuple(map(int,list(v))) for v in t]
edge_lengths.append(get_d_edge_length((a,b)))
edge_lengths.append(get_d_edge_length((a,c)))
edge_lengths.append(get_d_edge_length((b,c)))
try:
tri_areas.append(float(Triangle(a,b,c).area))
except:
continue
edge_length_list.append(edge_lengths)
tri_area_list.append(tri_areas)
Explanation: We're going to need a method to get edge lengths from 2D centroid pairs
End of explanation
np.subtract(centroid_list[0], centroid_list[1])
Explanation: Realizing after all this that simply location is useless. We know the voxels are evenly spaced, which means our edge length data will be all the same. See that the "centroids" are no different:
End of explanation
real_volume = np.zeros((len(sorted_y), len(sorted_x), len(sorted_z)))
for r in rows:
real_volume[sorted_y.index(r[1]), sorted_x.index(r[0]), sorted_z.index(r[2])] = r[-1]
np.shape(real_volume)
Explanation: There is no distance between the two. Therefore it is perhaps more useful to consider a graph that considers node weights. Voronoi is dual to Delaunay, so that's not much of an option. We want something that considers both spacial location and density similarity.
Region Adjacency Graph (RAG)
In the past we've considered density information alone (ie analysis density histogram distribution) and above we are considering only spacial information, which totally doesn't say anything. To construct a graph that consider both spacial and density information, we'll use a Region Adjacency Graph (RAG).
In RAGs, two nodes are considered as neighbor if they are close in proximity (separated by a small number of pixels/voxels) in the horizontal or vertical direction.
<img src="../docs/figures/plot_rag_1.png" width="400">
Since our data includes density values at each node (voxel, or pixel since we're looking at y-layers), we can weight by the inverse of density difference between two nodes. Inverse because strongly connected nodes should be close in weight.
We have number of synapses S<sub>i</sub> at nodes $i$ and define weights $w$ between the nodes:
$$w = \dfrac{1}{S_i - S_{i+1}}$$
RAGs are used largely in image processing, and it makes sense for our data to look more like an image. Since the data is evenly spaced, the absolute locations of the voxels don't matter. We can use the index in the matrix to represent spacial location, with the value at each pixel being the synapse density at that voxel. We've done this before in "real volume"
End of explanation
# point = tuple containing index of point (position)
# returns list of neighbors in [north, east, south, west]
def get_neighbors(point, image):
shape = np.shape(image)
neighbors = []
# North
neighbors.append((point[0], point[1]-1)) if point[1]>0 else neighbors.append(None)
# East
neighbors.append((point[0]+1, point[1])) if point[0]<shape[0]-1 else neighbors.append(None)
# South
neighbors.append((point[0], point[1]+1)) if point[1]<shape[1]-1 else neighbors.append(None)
# West
neighbors.append((point[0]-1, point[1])) if point[0]>0 else neighbors.append(None)
return neighbors
# calculates weights between nodes
# weight defined as inverse absolute distance
def get_weights_nonlinear(image, point, neighbors):
weights = []
for neigh in neighbors:
if neigh != None:
weight = 1/(abs(image[point] - image[neigh])+1)
weights.append(weight)
return weights
Explanation: Handrolling Own RAG generator
End of explanation
# calculates weights between nodes
# weight scaled and linear
# TODO: Explain weighting difference with math
def get_weights_linear(image, point, neighbors, scale_factor):
weights = []
for neigh in neighbors:
if neigh != None:
diff = abs(image[point] - image[neigh])
weight = 1 - (scale_factor*diff)
weights.append(weight)
return weights
image = real_volume[1]
# print image
point = (1,1)
neighbors = get_neighbors(point, image)
# print neighbors
ws = get_weights_nonlinear(image, point, neighbors)
def populate_graph(G, im):
nodes_to_add = []
for x in range(np.shape(im)[0]):
for y in range(np.shape(im)[1]):
nodes_to_add.append((x,y))
G.add_nodes_from(nodes_to_add)
def get_diff_range(image):
diffs = []
x = 0
for col in image:
y = 0
for pix in col:
point = (x,y)
neighs = get_neighbors(point, image)
for neigh in neighbors:
if neigh != None:
diffs.append(abs(image[point] - image[neigh]))
y+=1
x+=1
return (max(diffs), min(diffs))
def generate_rag(im, linear):
G=nx.Graph()
if linear == True:
(max_diff, min_diff) = get_diff_range(im)
scale_factor = 1/(max_diff - min_diff)
x = 0
for col in im:
y = 0
for pix in col:
point = (x,y)
neighs = get_neighbors(point, im)
if linear == True:
weights = get_weights_linear(im, point, neighs, scale_factor)
else:
weights = get_weights_nonlinear(im, point, neighs)
to_add = []
which = 0
for neigh in neighs:
if neigh != None:
to_add.append((point, neigh, weights[which]))
which+=1
# print to_add
G.add_weighted_edges_from(to_add)
y+=1
x+=1
return G
Explanation: The following method is for later
End of explanation
test_im = real_volume[1]
shape = np.shape(test_im)
ragu = generate_rag(test_im, False)
ragu.number_of_edges()
# ragu.adjacency_list()
Explanation: Testing the RAG generator
End of explanation
y_rags = []
for layer in real_volume:
y_rags.append(generate_rag(layer, False))
Explanation: Creating RAGs for each layer
End of explanation
num_edges = []
for rag in y_rags:
num_edges.append(rag.number_of_edges())
sns.barplot(x=range(len(num_edges)), y=num_edges)
sns.plt.show()
Explanation: OK, great! Now we have a list of 52 region adjacency graphs for each y-layer. Now we want to measure properties of those graphs and see how the properties vary in the y direction - through what we hypothesize are the cortical layers.
Number of Edges
This is just a sanity check. They should all be the same if we did it right!
End of explanation
# for rag in y_rags:
# plt.figure()
# nx.draw(rag, node_size=100)
Explanation: Drawing Graphs
First we look at the default networkx graph plotting:
We're gonna need to massage the drawing function a bit in order to get the custom positional graph to work.
End of explanation
def get_edge_weight_distributions(rags):
distributions = []
for rag in rags:
itty = rag.edges_iter()
weight_list = []
for index in range(rag.number_of_edges()):
eddy = itty.next()
weight_list.append(rag.get_edge_data(eddy[0], eddy[1])['weight'])
distributions.append(weight_list)
return distributions
distributions = get_edge_weight_distributions(y_rags)
Explanation: This is using the spring layout, so we're losing positional information. We can improve the plot by adding position information.
Edge Weight Stats
End of explanation
count = 0
for distr in distributions:
plt.hist(distr, bins=150)
plt.title("Layer " + str(count) + " Edge Weight Histogram")
plt.show()
count+=1
Explanation: Full edge weight histograms down y-axis
End of explanation
y_edge_means = []
for distrib in distributions:
y_edge_means.append(np.mean(distrib))
print y_edge_means
sns.barplot(x=range(len(y_edge_means)), y=y_edge_means)
sns.plt.show()
Explanation: Mean edge weights
End of explanation
y_edge_vars = []
for distrib in distributions:
y_edge_vars.append(np.var(distrib))
print y_edge_vars
sns.barplot(x=range(len(y_edge_vars)), y=y_edge_vars)
sns.plt.show()
Explanation: Edge Weight Variances
End of explanation
y_edge_skews = []
for distrib in distributions:
y_edge_skews.append(skew(distrib))
print y_edge_skews
sns.barplot(x=range(len(y_edge_skews)), y=y_edge_skews)
sns.plt.show()
Explanation: Edge Weight Third Moments (Skewness)
End of explanation
y_edge_kurts = []
for distrib in distributions:
y_edge_kurts.append(kurtosis(distrib))
print y_edge_kurts
sns.barplot(x=range(len(y_edge_kurts)), y=y_edge_kurts)
sns.plt.show()
Explanation: Edge Weight Fourth Moments (Kurtosis)
End of explanation
y_rags_linear_weight = []
for layer in real_volume:
y_rags_linear_weight.append(generate_rag(layer, True))
test_rag = generate_rag(real_volume[4], True)
itty = test_rag.edges_iter()
weight_list = []
for index in range(test_rag.number_of_edges()):
eddy = itty.next()
weight_list.append(test_rag.get_edge_data(eddy[0], eddy[1])['weight'])
distributions_lin = get_edge_weight_distributions(y_rags_linear_weight)
Explanation: Hmmm...very interesting
Linear graph weights
We're now going to change our weighting function to be linear and scaled by the max and min difference in each layer. This might help eliminates some of the edge effect behavior I suspect is causing that rapid change in statistics in deeper y-layers.
End of explanation
y_edge_linear_means = []
for distrib in distributions_lin:
y_edge_linear_means.append(np.mean(distrib))
sns.barplot(x=range(len(y_edge_linear_means)), y=y_edge_linear_means)
sns.plt.show()
Explanation: Linear Edge Weight Means
End of explanation
y_edge_linear_vars = []
for distrib in distributions_lin:
y_edge_linear_vars.append(np.var(distrib))
sns.barplot(x=range(len(y_edge_linear_vars)), y=y_edge_linear_vars)
sns.plt.show()
Explanation: Linear Edge Weight Variance
End of explanation
y_edge_linear_skews = []
for distrib in distributions_lin:
y_edge_linear_skews.append(skew(distrib))
sns.barplot(x=range(len(y_edge_linear_skews)), y=y_edge_linear_skews)
sns.plt.show()
Explanation: Linear Edge Weight Skewness
End of explanation
y_edge_linear_kurts = []
for distrib in distributions_lin:
y_edge_linear_kurts.append(kurtosis(distrib))
sns.barplot(x=range(len(y_edge_linear_kurts)), y=y_edge_linear_kurts)
sns.plt.show()
Explanation: Linear Edge Weight Kurtosis
End of explanation
num_self_loops = []
for rag in y_rags:
num_self_loops.append(rag.number_of_selfloops())
num_self_loops
Explanation: Number of Self Loops
End of explanation
# y_rags[0].adjacency_list()
Explanation: Interesting. There are no self loops. Why would this be? Let's come back to this. In the meantime, I want to give some though to what it means to have a self loop, whether it should be theoretically possible given our data, and whether our graphs are formed properly.
The answer to this question is very simple. In a RAG, there are no self-loops by definition. Self loops are edges that form a connection between a node and itself.
<img src="../docs/figures/selfloop.png" width="100">
To see whether the graphs are formed properly, let's look at an adjacency lists:
End of explanation
# Test Data
test = np.array([[1,2],[3,4]])
test_rag = skimage.future.graph.RAG(test)
test_rag.adjacency_list()
Explanation: Compare that to the test data:
End of explanation
real_volume_x = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))
for r in rows:
real_volume_x[ sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]
x_rags = []
count = 0;
for layer in real_volume_x:
count = count + 1
x_rags.append(skimage.future.graph.RAG(layer))
num_edges_x = []
for rag in x_rags:
num_edges_x.append(rag.number_of_edges())
sns.barplot(x=range(len(num_edges_x)), y=num_edges_x)
sns.plt.show()
Explanation: X-Layers
End of explanation
plt.imshow(np.amax(real_volume, axis=2), interpolation='nearest')
plt.show()
# edge_length_list[3]
# tri_area_list[3]
# triangles
# Note for future
# del_features['d_edge_length_mean'] = np.mean(edge_lengths)
# del_features['d_edge_length_std'] = np.std(edge_lengths)
# del_features['d_edge_length_skew'] = scipy.stats.skew(edge_lengths)
# del_features['d_edge_length_kurtosis'] = scipy.stats.kurtosis(edge_lengths)
Explanation: We can see here the number of edges is low in that area that does not have many synapses. It, as expected, mirrors the distribution of synapses. It appears to be approximately uniform at the top, with buffers of very few synapses on the sides. Remember from here:
End of explanation |
14,695 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Forecast Tutorial
This tutorial will walk through forecast data from Unidata forecast model data using the forecast.py module within pvlib.
Table of contents
Step1: GFS (0.5 deg)
Step2: GFS (0.25 deg)
Step3: NAM
Step4: NDFD
Step5: RAP
Step6: HRRR
Step7: HRRR (ESRL)
Step8: Quick power calculation | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
# built in python modules
import datetime
import os
# python add-ons
import numpy as np
import pandas as pd
# for accessing UNIDATA THREDD servers
from siphon.catalog import TDSCatalog
from siphon.ncss import NCSS
import pvlib
from pvlib.forecast import GFS, HRRR_ESRL, NAM, NDFD, HRRR, RAP
# Choose a location and time.
# Tucson, AZ
latitude = 32.2
longitude = -110.9
tz = 'America/Phoenix'
start = pd.Timestamp(datetime.date.today(), tz=tz) # today's date
end = start + pd.Timedelta(days=7) # 7 days from today
print(start, end)
Explanation: Forecast Tutorial
This tutorial will walk through forecast data from Unidata forecast model data using the forecast.py module within pvlib.
Table of contents:
1. Setup
2. Intialize and Test Each Forecast Model
This tutorial has been tested against the following package versions:
* Python 3.5.2
* IPython 5.0.0
* pandas 0.18.0
* matplotlib 1.5.1
* netcdf4 1.2.1
* siphon 0.4.0
It should work with other Python and Pandas versions. It requires pvlib >= 0.3.0 and IPython >= 3.0.
Authors:
* Derek Groenendyk (@moonraker), University of Arizona, November 2015
* Will Holmgren (@wholmgren), University of Arizona, November 2015, January 2016, April 2016, July 2016
Setup
End of explanation
from pvlib.forecast import GFS, HRRR_ESRL, NAM, NDFD, HRRR, RAP
# GFS model, defaults to 0.5 degree resolution
fm = GFS()
# retrieve data
data = fm.get_data(latitude, longitude, start, end)
data[sorted(data.columns)]
data = fm.process_data(data)
data[['ghi', 'dni', 'dhi']].plot();
cs = fm.location.get_clearsky(data.index)
fig, ax = plt.subplots()
cs['ghi'].plot(ax=ax, label='ineichen')
data['ghi'].plot(ax=ax, label='gfs+larson')
ax.set_ylabel('ghi')
ax.legend();
fig, ax = plt.subplots()
cs['dni'].plot(ax=ax, label='ineichen')
data['dni'].plot(ax=ax, label='gfs+larson')
ax.set_ylabel('ghi')
ax.legend();
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
data[sorted(data.columns)]
data['temp_air'].plot()
plt.ylabel('temperature (%s)' % fm.units['temp_air']);
cloud_vars = ['total_clouds', 'low_clouds', 'mid_clouds', 'high_clouds']
for varname in cloud_vars:
data[varname].plot()
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('GFS 0.5 deg')
plt.legend(bbox_to_anchor=(1.18,1.0));
total_cloud_cover = data['total_clouds']
total_cloud_cover.plot(color='r', linewidth=2)
plt.ylabel('Total cloud cover' + ' (%s)' % fm.units['total_clouds'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('GFS 0.5 deg');
Explanation: GFS (0.5 deg)
End of explanation
# GFS model at 0.25 degree resolution
fm = GFS(resolution='quarter')
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('GFS 0.25 deg')
plt.legend(bbox_to_anchor=(1.18,1.0));
data[sorted(data.columns)]
Explanation: GFS (0.25 deg)
End of explanation
fm = NAM()
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('NAM')
plt.legend(bbox_to_anchor=(1.18,1.0));
data['ghi'].plot(linewidth=2, ls='-')
plt.ylabel('GHI W/m**2')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')');
data[sorted(data.columns)]
Explanation: NAM
End of explanation
fm = NDFD()
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
total_cloud_cover = data['total_clouds']
temp = data['temp_air']
wind = data['wind_speed']
total_cloud_cover.plot(color='r', linewidth=2)
plt.ylabel('Total cloud cover' + ' (%s)' % fm.units['total_clouds'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('NDFD')
plt.ylim(0,100);
temp.plot(color='r', linewidth=2)
plt.ylabel('Temperature' + ' (%s)' % fm.units['temp_air'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
wind.plot(color='r', linewidth=2)
plt.ylabel('Wind Speed' + ' (%s)' % fm.units['wind_speed'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
data[sorted(data.columns)]
Explanation: NDFD
End of explanation
fm = RAP(resolution=20)
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
cloud_vars = ['total_clouds', 'high_clouds', 'mid_clouds', 'low_clouds']
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('RAP')
plt.legend(bbox_to_anchor=(1.18,1.0));
data[sorted(data.columns)]
Explanation: RAP
End of explanation
fm = HRRR()
data_raw = fm.get_data(latitude, longitude, start, end)
# The HRRR model pulls in u, v winds for 2 layers above ground (10 m, 80 m)
# They are labeled as _0, _1 in the raw data
data_raw[sorted(data_raw.columns)]
data = fm.get_processed_data(latitude, longitude, start, end)
cloud_vars = ['total_clouds', 'high_clouds', 'mid_clouds', 'low_clouds']
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('RAP')
plt.legend(bbox_to_anchor=(1.18,1.0));
data['temp_air'].plot(color='r', linewidth=2)
plt.ylabel('Temperature' + ' (%s)' % fm.units['temp_air'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')');
data['wind_speed'].plot(color='r', linewidth=2)
plt.ylabel('Wind Speed' + ' (%s)' % fm.units['wind_speed'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')');
data[sorted(data.columns)]
Explanation: HRRR
End of explanation
# NBVAL_SKIP
fm = HRRR_ESRL()
# retrieve data
# NBVAL_SKIP
data = fm.get_processed_data(latitude, longitude, start, end)
# NBVAL_SKIP
cloud_vars = ['total_clouds','high_clouds','mid_clouds','low_clouds']
# NBVAL_SKIP
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('HRRR_ESRL')
plt.legend(bbox_to_anchor=(1.18,1.0));
# NBVAL_SKIP
data['ghi'].plot(linewidth=2, ls='-')
plt.ylabel('GHI W/m**2')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')');
Explanation: HRRR (ESRL)
End of explanation
from pvlib.pvsystem import PVSystem, retrieve_sam
from pvlib.modelchain import ModelChain
sandia_modules = retrieve_sam('SandiaMod')
sapm_inverters = retrieve_sam('cecinverter')
module = sandia_modules['Canadian_Solar_CS5P_220M___2009_']
inverter = sapm_inverters['ABB__MICRO_0_25_I_OUTD_US_208__208V_']
system = PVSystem(module_parameters=module,
inverter_parameters=inverter,
surface_tilt=latitude,
surface_azimuth=180)
# fx is a common abbreviation for forecast
fx_model = GFS()
fx_data = fx_model.get_processed_data(latitude, longitude, start, end)
# use a ModelChain object to calculate modeling intermediates
mc = ModelChain(system, fx_model.location)
# extract relevant data for model chain
mc.run_model(weather=fx_data)
mc.total_irrad.plot();
mc.cell_temperature.plot();
mc.ac.plot();
Explanation: Quick power calculation
End of explanation |
14,696 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VecLib
A Python library for playing with and visualizing vectors in Jupyter notebooks. For personal learning purposes.
Step3: Roadmap
<s>Addition and subtraction</s>
<s>Scaling (multiplication)</s>
<s>Visualizing in 2D</s>
<s>Visualizing in 3D</s>
Visualization legends
Visualize dot products as projections
Compute determinant
Cross products
Step4: Basic plots
Plotting single vector in 2D
Step5: Plotting multiple vectors in 2D
Step6: Plotting vectors in 3D
Step7: Operations
Dot product
Step8: Cross product
Step9: Other
Changing basis | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from IPython.display import display_png
%matplotlib inline
plt.style.use('seaborn-whitegrid')
Explanation: VecLib
A Python library for playing with and visualizing vectors in Jupyter notebooks. For personal learning purposes.
End of explanation
class Vector():
The base class for all vector operations
def __init__(self, arr, base=np.array([1, 1])):
self._arr = arr
self.base = base
def dot(self, other):
return np.dot(self._arr, other._arr)
def cross(self, other):
return Vector(np.cross(self._arr, other._arr))
def plot(self, ax=None):
dims = len(self._arr)
if dims > 3:
raise Exception('Cannot plot over 3 dimensions')
if not ax:
fig = plt.figure()
proj = '3d' if dims == 3 else None
ax = fig.add_subplot(111, projection=proj)
if dims == 1:
self._plot1d(ax)
elif dims == 2:
self._plot2d(ax)
elif dims == 3:
self._plot3d(ax)
def _plot2d(self, ax):
x, y = self._arr * self.base
ax.plot([0, x], [0, y])
min_, max_ = min(x, y), max(x, y)
ax.set_xlim([min(0, min_), max_])
ax.set_ylim([min(0, min_), max_])
def _plot2d_quiver(self, ax):
Work in progress.
x, y = self._arr
ax.quiver(0, 0, x, y, angles='xy', scale_units='xy', scale=1)
xmin = 0 if x >= 0 else x
xmax = 0 if x <= 0 else x
ymin = 0 if y >= 0 else y
ymax = 0 if y <= 0 else y
ax.set_xlim([xmin, xmax])
ax.set_ylim([ymin, ymax])
return ax
def _plot3d(self, ax):
x, y, z = self._arr
ax.plot([0, x], [0, y], [0, z])
def __add__(self, other):
return Vector(self._arr + other._arr)
def __sub__(self, other):
return Vector(self._arr - other._arr)
def __mul__(self, scalar):
return self._arr * scalar
def __eq__(self, other):
return np.all(self._arr == other._arr)
def _repr_png_(self):
return display_png(self.plot())
def __repr__(self):
return 'vector({})'.format([x for x in self._arr])
Explanation: Roadmap
<s>Addition and subtraction</s>
<s>Scaling (multiplication)</s>
<s>Visualizing in 2D</s>
<s>Visualizing in 3D</s>
Visualization legends
Visualize dot products as projections
Compute determinant
Cross products
End of explanation
v1 = Vector([2, 2, 3])
v1
Explanation: Basic plots
Plotting single vector in 2D
End of explanation
fig, ax = plt.subplots()
v1 = Vector(np.array([1,2]))
v2 = Vector(np.array([5,-1]))
v1.plot(ax)
v2.plot(ax)
Explanation: Plotting multiple vectors in 2D
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
v1 = Vector(np.array([5,3,4]))
v2 = Vector(np.array([1,-2,5]))
v1.plot(ax)
v2.plot(ax)
Explanation: Plotting vectors in 3D
End of explanation
v1 = Vector(np.array([1,3]))
v2 = Vector(np.array([2,1]))
v1.dot(v2)
Explanation: Operations
Dot product
End of explanation
v1 = Vector(np.array([1,0,0]))
v2 = Vector(np.array([0,1,0]))
v3 = v1.cross(v2)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
v1.plot(ax)
v2.plot(ax)
v3.plot(ax)
Explanation: Cross product
End of explanation
fig, ax = plt.subplots()
v1 = Vector(np.array([1, 1]), base=np.array([5, 2]))
v2 = Vector(np.array([1, 1]), base=np.array([-2, 3]))
v1.plot(ax)
v2.plot(ax)
Explanation: Other
Changing basis
End of explanation |
14,697 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy
Step1: 1.2 Creating Arrays
Step2: 1.3 Basic Data Types
Step3: 1.4 Basic Visualization
Step4: 1.5 Indexing and Slicing
Step5: 1.6 Copies and views
Step6: 1.7 Fancy Indexing
Step7: 2. Numerical Operations on Arrays
2.1 Elementwise Operations
Step8: 2.2 Basic Reductions
Step9: 2.3 Broadcasting
Step10: 2.4 Array Shape Manipulation
Step11: 2.5 Sorting Data
Step12: 3. Exercises (See session 1.3.5.1 in the Python Scientific Lecture Notes)
Step13: 4. Efficiency
Step14: Additional Discussions
Python for Data Analysis
Lecture Notes
rpy2
Python Data Analysis Library
StatsModels
Matplotlib Gallery
Formatting | Python Code:
# import numpy by following the convention
import numpy as np
Explanation: Numpy: Creating and Manipulating Numerical Data
1. The Numpy Array Object
1.1 What are Numpy and Numpy Arrays?
Numpy: the core tool for performance numerical computing with Python
Numpy arrays: multi-dimentional data structures in Numpy (e.g. 1-D vector, 2-D matrix, 3-D data object, etc)
End of explanation
# manual construction of arrays
# 1-D
a = np.array([0,1,2,3])
a
# 2-D
b = np.array([[1,2,3],[5,6,7]])
b
# check for array dimension
a.ndim, b.ndim
# check for shape of the array
a.shape, b.shape
# functions for creating arrays
a = np.arange(1,9,2) # start, end(exclusive), step
a = np.arange(10)
a
a = np.linspace(0,1,6) # start, end, num-points
a
a = np.ones((3,2)) # a matrix of ones
a
a= np.zeros((2,3)) # a matrix of zeros
a
a = np.eye(3) # an identify matrix
a
a = np.diag(np.array([1,2,3,4])) # a diagonal matrix
a
# generating random numbers
# set seed
np.random.seed(1234)
# generate a vector of length 4, in which elements are iid draws from UNIF(0,1)
a = np.random.rand(4)
print(a)
# generate a vector of length 4, in which elements are iid draws from standard normal
a = np.random.randn(4)
print(a)
Explanation: 1.2 Creating Arrays
End of explanation
a = np.array([1,2,3],dtype=float)
a.dtype
a = np.array([True, False, True])
a.dtype
Explanation: 1.3 Basic Data Types
End of explanation
import matplotlib.pyplot as plt
# to display plots in the notebook
%pylab inline
x = np.linspace(0,3,20)
y = np.linspace(0,9,20)
plt.plot(x,y) # line plot
plt.plot(x,y,'o') # dot plot
Explanation: 1.4 Basic Visualization
End of explanation
# indices begin at 0
a = np.arange(10)
a[0], a[1], a[-1]
# slicing
a[2:5:2] #[start:end:step]
a[::]
a[::-1]
# matrices
a = np.diag(np.arange(3))
a
# slice an element in matrix
a[1,1], a[1,2]
# numpy array is mutable, and thus we could assign new values to it
a[1,1] = 10
a
# the second column of a
a[:,1]
# the first row of a
a[0,:]
Explanation: 1.5 Indexing and Slicing
End of explanation
# a slicing operation creates a view on the original array
a = np.arange(10)
b = a[::2]
b[0] = 100
a
# force a copy
a = np.arange(10)
b = a[::2].copy()
b[0] = 100
a
Explanation: 1.6 Copies and views
End of explanation
# indexing with booleans
a = np.arange(10)
ind = (a>5)
a[ind]
# indexing with an array of integers
a = np.arange(10,100,10)
a[[2,3,4,2,1]]
Explanation: 1.7 Fancy Indexing
End of explanation
# with scalars
a = np.array([1,2,3,4])
a + 1
2**a
# arithmetic operations are elementwise
b = np.ones(4)
a - b
a*b
# array multiplications
c = np.ones((3,3))
c*c
# matrix multiplication
c.dot(c)
# comparisons
a == b
a > b
# array-wise comparison
np.array_equal(a,b)
# transcendental functions
np.sin(a)
np.log(a)
np.exp(a)
# shape mismatches (this will cause an error)
b = np.array([1,2])
a + b
# transposition
a = np.triu(np.ones((3,3)),1)
a.T
Explanation: 2. Numerical Operations on Arrays
2.1 Elementwise Operations
End of explanation
# computing sums
a = np.array([1,2,3,4])
a.sum()
a = np.array([[1,2],[3,4]])
a.sum()
a.sum(axis=0) # column sum
a.sum(axis=1) # row sum
# other reductions
a = np.array([1,2,3,4])
a.min()
a.max()
a.argmin()
a.argmax()
a.mean()
a.std()
Explanation: 2.2 Basic Reductions
End of explanation
a = np.arange(0,40,10)
a = a[:,np.newaxis] # add a new axis -> 2D array
b = np.array([0,1,2])
a + b
# create a matrix indicating the difference between any two observations
x = np.linspace(0,10,5)
y = x[:,np.newaxis]
np.abs(x-y)
Explanation: 2.3 Broadcasting
End of explanation
# flattening
a = np.array([[1,2],[3,4]])
b= a.ravel()
b
c = b.reshape((2,2))
c
Explanation: 2.4 Array Shape Manipulation
End of explanation
a = np.array([[6,3,1],[9,1,4]]) # sort each row
b = np.sort(a,axis=1)
b
c = np.sort(a,axis=0) # sort each column
c
# sorting with fancy indexing
a = np.array([14,13,11,12])
j = np.argsort(a)
j
a[j]
# finding minima and maxima
a = np.array([4,22,3,9])
np.argmax(a)
np.argmin(a)
Explanation: 2.5 Sorting Data
End of explanation
# Q1
# For the 2-D array (without typing it explicityly)
x = np.arange(1,12,5)
y = np.arange(5)[:,np.newaxis]
z = x + y
z
# generate a new array containing its 2nd and 4th row
m = z[(1,3),:]
m
# Q2
# divide each column of the array elementwise
a = np.arange(25).reshape(5,5)
a
b = np.array([1,5,10,15,20])
b = b[:,np.newaxis]
a / b
# Q3
# generate a 10 by 3 array of random numbers
np.random.seed(1234)
a = np.random.rand(30).reshape(10,3)
a
# for each row, pick the number closest to 0.5
b = np.abs(a - 0.5)
ind = np.argmin(b,axis=1)
c = a[np.arange(10),ind]
c
Explanation: 3. Exercises (See session 1.3.5.1 in the Python Scientific Lecture Notes)
End of explanation
a = range(1000)
%timeit [i**2 for i in a]
b = np.arange(1000)
%timeit b**2
a = range(10000)
%timeit [i+1 for i in a]
c = np.arange(10000)
%timeit c+1
Explanation: 4. Efficiency
End of explanation
import urllib; from IPython.core.display import HTML
HTML(urllib.urlopen('http://bit.ly/1Ki3iXw').read())
Explanation: Additional Discussions
Python for Data Analysis
Lecture Notes
rpy2
Python Data Analysis Library
StatsModels
Matplotlib Gallery
Formatting
End of explanation |
14,698 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: The sqlite3 module implements a Python DB-API 2.0 compliant interface to SQLite, an in-process relational database. SQLite is designed to be embedded in applications, instead of using a separate database server program such as MySQL, PostgreSQL, or Oracle. It is fast, rigorously tested, and flexible, making it suitable for prototyping and production deployment for some applications.
Creating a Database
Step5: Retriving the Data
Step7: Query Metadata
The DB-API 2.0 specification says that after execute() has been called, the Cursor should set its description attribute to hold information about the data that will be returned by the fetch methods. The API specification say that the description value is a sequence of tuples containing the column name, type, display size, internal size, precision, scale, and a flag that says whether null values are accepted.
Step10: Row Object
Step12: Using Variables and Queries
Positional Parameters
Step14: Named Parameters
Step16: Bulk Loading
Step18: Transactions
Step20: Discarding Changes
Step22: Custom Aggregation
An aggregation function collects many pieces of individual data and summarizes it in some way. Examples of built-in aggregation functions are avg() (average), min(), max(), and count().
The API for aggregators used by sqlite3 is defined in terms of a class with two methods. The step() method is called once for each data value as the query is processed. The finalize() method is called one time at the end of the query and should return the aggregate value. This example implements an aggregator for the arithmetic mode. It returns the value that appears most frequently in the input. | Python Code:
import os
import sqlite3
db_filename = 'todo.db'
db_is_new = not os.path.exists(db_filename)
conn = sqlite3.connect(db_filename)
if db_is_new:
print('Need to create schema')
else:
print('Database exists, assume schme does, too.')
conn.close()
%ls *.db
%rm -rf todo.db
import os
import sqlite3
db_filename= 'todo.db'
scheme_filename = 'todo_schema.sql'
db_is_new = not os.path.exists(db_filename)
with sqlite3.connect(db_filename) as conn:
if db_is_new:
print('Creating scheme')
with open(scheme_filename, 'rt') as f:
scheme = f.read()
conn.executescript(scheme)
print('Inserting inital data')
conn.executescript(
insert into project (name, description, deadline)
values ('pymotw', 'Python Module of the Week',
'2016-11-01');
insert into task (details, status, deadline, project)
values ('write about select', 'done', '2016-04-25',
'pymotw');
insert into task (details, status, deadline, project)
values ('write about random', 'waiting', '2016-08-22',
'pymotw');
insert into task (details, status, deadline, project)
values ('write about sqlite3', 'active', '2017-07-31',
'pymotw');
)
else:
print('Database exists, assume scheme does, too.')
Explanation: The sqlite3 module implements a Python DB-API 2.0 compliant interface to SQLite, an in-process relational database. SQLite is designed to be embedded in applications, instead of using a separate database server program such as MySQL, PostgreSQL, or Oracle. It is fast, rigorously tested, and flexible, making it suitable for prototyping and production deployment for some applications.
Creating a Database
End of explanation
import sqlite3
db_filename = 'todo.db'
with sqlite3.connect(db_filename) as conn:
cursor = conn.cursor()
    cursor.execute("""
    select id, priority, details, status, deadline from task
    where project = 'pymotw'
    """)
for row in cursor.fetchall():
task_id, priority, details, status, deadline = row
print('{:2d} [{:d}] {:<25} [{:<8}] ({})'.format(
task_id, priority, details, status, deadline))
import sqlite3
db_filename = 'todo.db'
with sqlite3.connect(db_filename) as conn:
cursor = conn.cursor()
    cursor.execute("""
    select name, description, deadline from project
    where name = 'pymotw'
    """)
name, description, deadline = cursor.fetchone()
print('Project details for {} ({})\n due {}'.format(
description, name, deadline))
    cursor.execute("""
    select id, priority, details, status, deadline from task
    where project = 'pymotw' order by deadline
    """)
print('\nNext 5 tasks:')
for row in cursor.fetchmany(5):
task_id, priority, details, status, deadline = row
print('{:2d} [{:d}] {:<25} [{:<8}] ({})'.format(
task_id, priority, details, status, deadline))
Explanation: Retrieving the Data
End of explanation
import sqlite3
db_filename = 'todo.db'
with sqlite3.connect(db_filename) as conn:
cursor = conn.cursor()
    cursor.execute("""
    select * from task where project = 'pymotw'
    """)
print('Task table has these columns:')
for colinfo in cursor.description:
print(colinfo)
Explanation: Query Metadata
The DB-API 2.0 specification says that after execute() has been called, the Cursor should set its description attribute to hold information about the data that will be returned by the fetch methods. The API specification says that the description value is a sequence of tuples containing the column name, type, display size, internal size, precision, scale, and a flag that says whether null values are accepted.
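In sqlite3 only the first item of each tuple (the column name) is populated and the remaining six are None, so a common pattern is to pull out just the names. A minimal sketch, reusing the cursor from the code above:
# Just the column names from cursor.description; the other fields are None.
column_names = [col[0] for col in cursor.description]
print(column_names)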
End of explanation
import sqlite3
db_filename = 'todo.db'
with sqlite3.connect(db_filename) as conn:
# Change the row factory to use Row
conn.row_factory = sqlite3.Row
cursor = conn.cursor()
    cursor.execute("""
    select name, description, deadline from project
    where name = 'pymotw'
    """)
name, description, deadline = cursor.fetchone()
print('Project details for {} ({})\n due {}'.format(
description, name, deadline))
    cursor.execute("""
    select id, priority, status, deadline, details from task
    where project = 'pymotw' order by deadline
    """)
print('\nNext 5 tasks:')
for row in cursor.fetchmany(5):
print('{:2d} [{:d}] {:<25} [{:<8}] ({})'.format(
row['id'], row['priority'], row['details'],
row['status'], row['deadline'],
))
Explanation: Row Object
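A Row also supports positional indexing and exposes a keys() method, so a result can be inspected without hard-coding the column order. A small sketch, assuming the same todo.db database and row_factory setting as above:
import sqlite3

with sqlite3.connect('todo.db') as conn:
    conn.row_factory = sqlite3.Row
    row = conn.execute('select id, details from task').fetchone()
    print(row.keys())              # column names, e.g. ['id', 'details']
    print(row[0], row['details'])  # positional and keyed access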
End of explanation
import sqlite3
import sys
db_filename = 'todo.db'
project_name = "pymotw"
with sqlite3.connect(db_filename) as conn:
cursor = conn.cursor()
    query = """
    select id, priority, details, status, deadline from task
    where project = ?
    """
cursor.execute(query, (project_name,))
for row in cursor.fetchall():
task_id, priority, details, status, deadline = row
print('{:2d} [{:d}] {:<25} [{:<8}] ({})'.format(
task_id, priority, details, status, deadline))
Explanation: Using Variables and Queries
Positional Parameters
End of explanation
import sqlite3
import sys
db_filename = 'todo.db'
project_name = "pymotw"
with sqlite3.connect(db_filename) as conn:
cursor = conn.cursor()
    query = """
    select id, priority, details, status, deadline from task
    where project = :project_name
    order by deadline, priority
    """
cursor.execute(query, {'project_name': project_name})
for row in cursor.fetchall():
task_id, priority, details, status, deadline = row
print('{:2d} [{:d}] {:<25} [{:<8}] ({})'.format(
task_id, priority, details, status, deadline))
Explanation: Named Parameters
End of explanation
import csv
import sqlite3
import sys
db_filename = 'todo.db'
data_filename = 'task.csv'
SQL = """
insert into task (details, priority, status, deadline, project)
values (:details, :priority, 'active', :deadline, :project)
"""
with open(data_filename, 'rt') as csv_file:
csv_reader = csv.DictReader(csv_file)
with sqlite3.connect(db_filename) as conn:
cursor = conn.cursor()
cursor.executemany(SQL, csv_reader)
Explanation: Bulk Loading
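The loader above expects a task.csv file that is not included here. A hypothetical file with the columns the insert statement binds to could be generated like this; the row values are made-up examples:
import csv

rows = [
    {'deadline': '2016-06-01', 'project': 'pymotw',
     'priority': 2, 'details': 'write about executemany'},
    {'deadline': '2016-06-15', 'project': 'pymotw',
     'priority': 1, 'details': 'write about DictReader'},
]

with open('task.csv', 'wt', newline='') as f:
    writer = csv.DictWriter(
        f, fieldnames=['deadline', 'project', 'priority', 'details'])
    writer.writeheader()
    writer.writerows(rows)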
End of explanation
import sqlite3
db_filename = 'todo.db'
def show_projects(conn):
cursor = conn.cursor()
cursor.execute('select name, description from project')
for name, desc in cursor.fetchall():
print(' ', name)
with sqlite3.connect(db_filename) as conn1:
print('Before changes:')
show_projects(conn1)
# Insert in one cursor
cursor1 = conn1.cursor()
    cursor1.execute("""
    insert into project (name, description, deadline)
    values ('virtualenvwrapper', 'Virtualenv Extensions',
            '2011-01-01')
    """)
print('\nAfter changes in conn1:')
show_projects(conn1)
# Select from another connection, without committing first
print('\nBefore commit:')
with sqlite3.connect(db_filename) as conn2:
show_projects(conn2)
# Commit then select from another connection
conn1.commit()
print('\nAfter commit:')
with sqlite3.connect(db_filename) as conn3:
show_projects(conn3)
Explanation: Transactions
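By default the connection opens an implicit transaction before data-modifying statements, which is why conn2 cannot see the insert until conn1.commit() runs. As a hedged aside, passing isolation_level=None switches the connection to autocommit mode, in which case no explicit commit() is needed; a minimal sketch:
import sqlite3

# Autocommit mode: each statement is committed as soon as it executes.
with sqlite3.connect('todo.db', isolation_level=None) as conn:
    conn.execute(
        "update task set status = 'done' where details = 'write about select'"
    )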
End of explanation
import sqlite3
db_filename = 'todo.db'
def show_projects(conn):
cursor = conn.cursor()
cursor.execute('select name, description from project')
for name, desc in cursor.fetchall():
print(' ', name)
with sqlite3.connect(db_filename) as conn:
print('Before changes:')
show_projects(conn)
try:
# Insert
cursor = conn.cursor()
        cursor.execute("""
        delete from project
        where name = 'virtualenvwrapper'
        """)
# Show the settings
print('\nAfter delete:')
show_projects(conn)
# Pretend the processing caused an error
raise RuntimeError('simulated error')
except Exception as err:
# Discard the changes
print('ERROR:', err)
conn.rollback()
else:
# Save the changes
conn.commit()
# Show the results
print('\nAfter rollback:')
show_projects(conn)
Explanation: Discarding Changes
End of explanation
import sqlite3
import collections
db_filename = 'todo.db'
class Mode:
def __init__(self):
self.counter = collections.Counter()
def step(self, value):
print('step({!r})'.format(value))
self.counter[value] += 1
def finalize(self):
result, count = self.counter.most_common(1)[0]
print('finalize() -> {!r} ({} times)'.format(
result, count))
return result
with sqlite3.connect(db_filename) as conn:
conn.create_aggregate('mode', 1, Mode)
cursor = conn.cursor()
    cursor.execute("""
    select mode(deadline) from task where project = 'pymotw'
    """)
row = cursor.fetchone()
print('mode(deadline) is:', row[0])
Explanation: Custom Aggregation
An aggregation function collects many pieces of individual data and summarizes it in some way. Examples of built-in aggregation functions are avg() (average), min(), max(), and count().
The API for aggregators used by sqlite3 is defined in terms of a class with two methods. The step() method is called once for each data value as the query is processed. The finalize() method is called one time at the end of the query and should return the aggregate value. This example implements an aggregator for the arithmetic mode. It returns the value that appears most frequently in the input.
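The same two-method protocol works for any reduction. As an additional, hypothetical illustration (not part of the original example), an aggregate that returns the spread between the largest and smallest priority could look like this:
import sqlite3

class Spread:
    """Collects values with step() and reports max - min in finalize()."""

    def __init__(self):
        self.values = []

    def step(self, value):
        self.values.append(value)

    def finalize(self):
        if not self.values:
            return None
        return max(self.values) - min(self.values)

with sqlite3.connect('todo.db') as conn:
    conn.create_aggregate('spread', 1, Spread)
    cursor = conn.cursor()
    cursor.execute(
        "select spread(priority) from task where project = 'pymotw'")
    print('spread(priority) is:', cursor.fetchone()[0])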
End of explanation |
14,699 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Name
Data preparation using Apache Pig on YARN with Cloud Dataproc
Label
Cloud Dataproc, GCP, Cloud Storage, YARN, Pig, Apache, Kubeflow, pipelines, components
Summary
A Kubeflow Pipeline component to prepare data by submitting an Apache Pig job on YARN to Cloud Dataproc.
Details
Intended use
Use the component to run an Apache Pig job as one preprocessing step in a Kubeflow Pipeline.
Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|----------|-------------|----------|-----------|-----------------|---------|
| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | |
| region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | |
| cluster_name | The name of the cluster to run the job. | No | String | | |
| queries | The queries to execute the Pig job. Specify multiple queries in one string by separating them with semicolons. You do not need to terminate queries with semicolons. | Yes | List | | None |
| query_file_uri | The HCFS URI of the script that contains the Pig queries. | Yes | GCSPath | | None |
| script_variables | Mapping of the query’s variable names to their values (equivalent to the Pig command
Step1: Load the component using KFP SDK
Step2: Sample
Note
Step3: Example pipeline that uses the component
Step4: Compile the pipeline
Step5: Submit the pipeline for execution | Python Code:
%%capture --no-stderr
!pip3 install kfp --upgrade
Explanation: Name
Data preparation using Apache Pig on YARN with Cloud Dataproc
Label
Cloud Dataproc, GCP, Cloud Storage, YARN, Pig, Apache, Kubeflow, pipelines, components
Summary
A Kubeflow Pipeline component to prepare data by submitting an Apache Pig job on YARN to Cloud Dataproc.
Details
Intended use
Use the component to run an Apache Pig job as one preprocessing step in a Kubeflow Pipeline.
Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|----------|-------------|----------|-----------|-----------------|---------|
| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | |
| region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | |
| cluster_name | The name of the cluster to run the job. | No | String | | |
| queries | The queries to execute the Pig job. Specify multiple queries in one string by separating them with semicolons. You do not need to terminate queries with semicolons. | Yes | List | | None |
| query_file_uri | The HCFS URI of the script that contains the Pig queries. | Yes | GCSPath | | None |
| script_variables | Mapping of the query’s variable names to their values (equivalent to the Pig command: SET name="value";). | Yes | Dict | | None |
| pig_job | The payload of a PigJob. | Yes | Dict | | None |
| job | The payload of a Dataproc job. | Yes | Dict | | None |
| wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 |
Output
Name | Description | Type
:--- | :---------- | :---
job_id | The ID of the created job. | String
Cautions & requirements
To use the component, you must:
* Set up a GCP project by following this guide.
* Create a new cluster.
* The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
* Grant the Kubeflow user service account the role roles/dataproc.editor on the project.
Detailed description
This component creates a Pig job from Dataproc submit job REST API.
Follow these steps to use the component in a pipeline:
1. Install the Kubeflow Pipeline SDK:
End of explanation
import kfp.components as comp
dataproc_submit_pig_job_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataproc/submit_pig_job/component.yaml')
help(dataproc_submit_pig_job_op)
Explanation: Load the component using KFP SDK
End of explanation
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
QUERY = '''
natality_csv = load 'gs://public-datasets/natality/csv' using PigStorage(':');
top_natality_csv = LIMIT natality_csv 10;
dump top_natality_csv;'''
EXPERIMENT_NAME = 'Dataproc - Submit Pig Job'
Explanation: Sample
Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template.
Set up a Dataproc cluster
Create a new Dataproc cluster (or reuse an existing one) before running the sample code.
Prepare a Pig query
Either put your Pig queries in the queries list, or upload your Pig queries to a file in a Cloud Storage bucket and then enter that file's Cloud Storage path in query_file_uri. In this sample, we use a hard-coded query in the queries list that reads the public natality CSV data from Cloud Storage.
For more details on Apache Pig, see the Pig documentation.
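If the Pig script is kept in Cloud Storage instead, the same pipeline can be pointed at it through query_file_uri while queries is left empty. The bucket path in the sketch below is a placeholder, not a real object:
import json

# Hypothetical alternative to the inline QUERY above: a Pig script stored
# in Cloud Storage. Replace the placeholder path with a real gs:// URI.
arguments_with_script = {
    'queries': json.dumps([]),
    'query_file_uri': 'gs://your-bucket/pig/natality_top10.pig',
}
# This dict could be passed as the `arguments` value when submitting the
# pipeline run further below.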
Set sample parameters
End of explanation
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc submit Pig job pipeline',
description='Dataproc submit Pig job pipeline'
)
def dataproc_submit_pig_job_pipeline(
project_id = PROJECT_ID,
region = REGION,
cluster_name = CLUSTER_NAME,
queries = json.dumps([QUERY]),
query_file_uri = '',
script_variables = '',
pig_job='',
job='',
wait_interval='30'
):
dataproc_submit_pig_job_op(
project_id=project_id,
region=region,
cluster_name=cluster_name,
queries=queries,
query_file_uri=query_file_uri,
script_variables=script_variables,
pig_job=pig_job,
job=job,
wait_interval=wait_interval)
Explanation: Example pipeline that uses the component
End of explanation
pipeline_func = dataproc_submit_pig_job_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
Explanation: Compile the pipeline
End of explanation
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
Explanation: Submit the pipeline for execution
End of explanation |