Dataset columns: Unnamed: 0 (int64, values 0 to 16k), text_prompt (string, lengths 110 to 62.1k), code_prompt (string, lengths 37 to 152k).
12,000
Given the following text description, write Python code to implement the functionality described below step by step Description: Rossiter-McLaughlin Effect Setup Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details. Step2: Let's make a significant mass ratio and radius ratio... Step3: Make sure the primary star is spinning quickly... Step4: Adding Datasets Now we'll add radial velocity and mesh datasets. We'll add two identical datasets for RVs so that we can have one computed dynamically and the other computed numerically (these options will need to be set later). Step5: Storing the mesh at every timestep is overkill, and will be both computationally and memory intensive. So let's just sample at the times we care about. Step6: Running Compute Now let's set the rv_method so that one dataset uses the dynamical method and the other uses the flux-weighted (numerical) method. Note that here we have to use set_value_all or loop over the components, as technically there are parameters for each component-dataset pair. Step7: Let's check to make sure that rv_method is set as we'd expect. Step8: Plotting Now let's plot the radial velocities. First we'll plot the dynamical RVs. Note that dynamical RVs show the true radial velocity of the center of mass of each star, and so we do not see the Rossiter-McLaughlin effect. Step9: But the numerical method integrates over the visible surface elements, giving us what we'd observe if deriving RVs from observed spectra of the binary. Here we do see the Rossiter-McLaughlin effect. You'll also notice that RVs are not available for the secondary star when it's completely occulted (they're NaNs in the array). Step10: To visualize what is happening, we can plot the radial velocities of each surface element in the mesh at one of these times. Here we just plot on the mesh@model parameterset - the mesh will automatically get coordinates from mesh01 and then we point to rvs@numericalrvs for the facecolors. Step11: Here you can see that the secondary star is blocking part of the "red" RVs of the primary star. This is essentially the same as plotting the negative z-component of the velocities (for convention - our system is in a right-handed system with +z towards the viewer, but RV convention has negative RVs for blue shifts). We could also plot the RV per triangle by plotting 'vzs'. Note that this is actually defaulting to an inverted colormap to show you the same colorscheme ('RdBu_r' vs 'RdBu').
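Before the PHOEBE code below, here is a minimal toy sketch (independent of PHOEBE, using only NumPy) of why a flux-weighted RV shows the Rossiter-McLaughlin anomaly while a centre-of-mass RV does not. The disc sampling, rotation speed, and companion size are arbitrary assumptions chosen only for illustration.

import numpy as np

# Sample the visible disc of the primary on a square grid (units of stellar radii).
x, y = np.meshgrid(np.linspace(-1, 1, 401), np.linspace(-1, 1, 401))
on_disc = x**2 + y**2 <= 1.0
v_z = 50.0 * x                                    # rigid rotation: line-of-sight velocity per element (km/s, assumed)
flux = np.where(on_disc, 1.0, 0.0)                # uniform surface brightness, no limb darkening

rv_unocculted = np.sum(flux * v_z) / np.sum(flux)         # blue and red halves cancel, so ~0

# Hide part of the blue-shifted limb behind a companion of radius 0.3 centred at x = -0.6.
blocked = (x + 0.6)**2 + y**2 <= 0.3**2
flux_occulted = np.where(blocked, 0.0, flux)
rv_occulted = np.sum(flux_occulted * v_z) / np.sum(flux_occulted)  # net redshift: the RM anomaly

print(rv_unocculted, rv_occulted)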
Python Code: !pip install -I "phoebe>=2.0,<2.1" %matplotlib inline Explanation: Rossiter-McLaughlin Effect Setup Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). End of explanation import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() b = phoebe.default_binary() Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details. End of explanation b['q'] = 0.7 b['rpole@primary'] = 1.0 b['rpole@secondary'] = 0.5 b['teff@secondary@component'] = 5000 Explanation: Let's make a significant mass ratio and radius ratio... End of explanation b['syncpar@primary@component'] = 2 Explanation: Make sure the primary star is spinning quickly... End of explanation b.add_dataset('rv', times=np.linspace(0,2,201), dataset='dynamicalrvs') # TODO: can't set rv_method here because compute options don't exist yet... and that's kind of annoying b.add_dataset('rv', times=np.linspace(0,2,201), dataset='numericalrvs') Explanation: Adding Datasets Now we'll add radial velocity and mesh datasets. We'll add two identical datasets for RVs so that we can have one computed dynamically and the other computed numerically (these options will need to be set later). End of explanation times = b.get_value('times@primary@numericalrvs@dataset') times = times[times<0.1] print times b.add_dataset('mesh', dataset='mesh01', times=times) Explanation: Storing the mesh at every timestep is overkill, and will be both computationally and memory intensive. So let's just sample at the times we care about. End of explanation b.set_value_all('rv_method@dynamicalrvs@compute', 'dynamical') b.set_value_all('rv_method@numericalrvs@compute', 'flux-weighted') Explanation: Running Compute Now let's set the rv_method so that one dataset uses the dynamical method and the other uses the flux-weighted (numerical) method. Note that here we have to use set_value_all or loop over the components, as technically there are parameters for each component-dataset pair. End of explanation print b['rv_method'] b.run_compute(irrad_method='none') Explanation: Let's check to make sure that rv_method is set as we'd expect. End of explanation axs, artists = b['dynamicalrvs@model'].plot(component='primary', color='b') axs, artists = b['dynamicalrvs@model'].plot(component='secondary', color='r') Explanation: Plotting Now let's plot the radial velocities. First we'll plot the dynamical RVs. Note that dynamical RVs show the true radial velocity of the center of mass of each star, and so we do not see the Rossiter McLaughlin effect. End of explanation axs, artists = b['numericalrvs@model'].plot(component='primary', color='b') axs, artists = b['numericalrvs@model'].plot(component='secondary', color='r') Explanation: But the numerical method integrates over the visible surface elements, giving us what we'd observe if deriving RVs from observed spectra of the binary. Here we do see the Rossiter McLaughlin effect. You'll also notice that RVs are not available for the secondary star when its completely occulted (they're nans in the array). 
End of explanation fig = plt.figure(figsize=(12,12)) axs, artists = b['mesh@model'].plot(time=0.03, facecolor='rvs@numericalrvs', edgecolor=None) Explanation: To visualize what is happening, we can plot the radial velocities of each surface element in the mesh at one of these times. Here just plot on the mesh@model parameterset - the mesh will automatically get coordinates from mesh01 and then we point to rvs@numericalrvs for the facecolors. End of explanation fig = plt.figure(figsize=(12,12)) axs, artists = b['mesh01@model'].plot(time=0.09, facecolor='vzs', edgecolor=None) Explanation: Here you can see that the secondary star is blocking part of the "red" RVs of the primary star. This is essentially the same as plotting the negative z-component of the velocities (for convention - our system is in a right handed system with +z towards the viewer, but RV convention has negative RVs for blue shifts). We could also plot the RV per triangle by plotting 'vzs'. Note that this is actually defaulting to an inverted colormap to show you the same colorscheme ('RdBu_r' vs 'RdBu'). End of explanation
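As a small follow-up to the note above that the flux-weighted RVs contain NaNs while the secondary is fully occulted, here is a hedged sketch of masking those gaps before further analysis. It uses plain NumPy, and the arrays are made-up stand-ins for values extracted from the bundle, not a PHOEBE call.

import numpy as np

times = np.linspace(0, 2, 201)                      # same sampling as the RV datasets
rvs = 30.0 * np.sin(2 * np.pi * times)              # placeholder RV curve (km/s)
rvs[(times > 0.02) & (times < 0.08)] = np.nan       # pretend these epochs were fully occulted

usable = ~np.isnan(rvs)                             # boolean mask of epochs with a measurable RV
print(usable.sum(), "of", rvs.size, "epochs have an RV")
print("mean of the measured RVs:", np.nanmean(rvs)) # NaN-aware statistics skip the gaps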
12,001
Given the following text description, write Python code to implement the functionality described below step by step Description: Spectrum Continuum Normalization Aim Step1: The observations were originally automatically continuum normalized in the iraf extraction pipeline. I believe the continuum is not quite at 1 here anymore due to the division by the telluric spectra. Step2: The two PHOENIX ACES spectra here are the first best guess of the two spectral components. Step5: Current Normalization I then continuum normalize the Phoenix spectrum locally around my observations by fitting an exponential to the continuum like so. Split the spectrum into 50 bins. Take the median of the 20 highest points in each bin. Fit an exponential. Evaluate it at the original wavelength values. Divide the original by the fit. Step6: Above, the top panel is the unnormalized spectrum, with the median points in orange and the green line the continuum fit. The bottom plot is the continuum-normalized result. Step7: Combining Spectra I then mix the models using a combination of the two spectra. In this case with NO RV shifts. Step8: The companion is cooler, so there are many more, deeper lines present in the spectra. Even a small contribution of the companion spectrum reduces the continuum of the mixed spectra considerably. When I compare these mixed spectra to my observations Step9: As you can see here, my observations are above the continuum most of the time. What I have noticed is that this drastically affects the chi-squared result, as the mixed model selected is the one with the least amount of alpha. I am thinking of renormalizing my observations by implementing equation (1) from Passegger 2016 (Fundamental M-dwarf parameters from high-resolution spectra using PHOENIX ACES models): F_obs = F_obs * (continuum_fit model / continuum_fit observation). They fit a linear function to the continuum of the observed and computed spectra to account for "slight differences in the continuum level and possible linear trends between the already normalized spectra." One difference is that they say they normalize the average flux of the spectra to unity. Would this make a difference in this method? Questions Would this be the correct approach to take to solve this? Should I renormalize the observations first as well? Am I treating the cooler M-dwarf spectra correctly in this approach? Attempting the Passegger method Step10: In this example, for the 5% companion spectrum there is a bit of difference between the linear and scalar normalizations, with a larger difference at the longer wavelengths (more orange visible above the red). Faint blue is the spectrum before the renormalization. Range of Phoenix spectra
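To make equation (1) concrete before the full notebook code below, here is a minimal sketch of the Passegger-style renormalization using plain NumPy linear fits. It assumes the observed and model fluxes are already on a common wavelength grid; the notebook handles that separately with interpolation.

import numpy as np

def linear_continuum(wav, flux):
    # First-order polynomial fit used as a stand-in for the continuum level and slope.
    slope, intercept = np.polyfit(wav, flux, 1)
    return slope * wav + intercept

def renormalize_obs(wav, obs_flux, model_flux):
    # F_obs_new = F_obs * (continuum_fit(model) / continuum_fit(observation))
    return obs_flux * linear_continuum(wav, model_flux) / linear_continuum(wav, obs_flux)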
Python Code: import copy import numpy as np from astropy.io import fits import matplotlib.pyplot as plt % matplotlib inline #%matplotlib auto Explanation: Spectrum Continuum Normalization Aim: To perform Chi^2 comparision between PHOENIX ACES spectra and my CRIRES observations. Problem: The nomalization of the observed spectra Differences in the continuum normalization affect the chi^2 comparison when using mixed models of two different spectra. Proposed Solution: equation (1) from Passegger 2016 Fobs = F obs * (cont_fit model / cont_fit observation) where con_fit is a linear fit to the spectra. To take out and linear trends in the continuums and correct the amplitude of the continuum. In this notebook I outline what I do currently showing an example. End of explanation # Observation obs = fits.getdata("/home/jneal/.handy_spectra/HD211847-1-mixavg-tellcorr_1.fits") plt.plot(obs["wavelength"], obs["flux"]) plt.hlines(1, 2111, 2124, linestyle="--") plt.title("CRIRES spectra") plt.xlabel("Wavelength (nm)") plt.show() Explanation: The obeservatios were originally automatically continuum normalized in the iraf extraction pipeline. I believe the continuum is not quite at 1 here anymore due to the divsion by the telluric spectra. End of explanation # Models wav_model = fits.getdata("/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/WAVE_PHOENIX-ACES-AGSS-COND-2011.fits") wav_model /= 10 # nm host = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte05700-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits" old_companion = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte02600-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits" companion = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte02300-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits" host_f = fits.getdata(host) comp_f = fits.getdata(companion) plt.plot(wav_model, host_f, label="Host") plt.plot(wav_model, comp_f, label="Companion") plt.title("Phoenix spectra") plt.xlabel("Wavelength (nm)") plt.legend() plt.show() mask = (2000 < wav_model) & (wav_model < 2200) wav_model = wav_model[mask] host_f = host_f[mask] comp_f = comp_f[mask] plt.plot(wav_model, host_f, label="Host") plt.plot(wav_model, comp_f, label="Companion") plt.title("Phoenix spectra") plt.legend() plt.xlabel("Wavelength (nm)") plt.show() Explanation: The two PHOENIX ACES spectra here are the first best guess of the two spectral components. End of explanation def get_continuum_points(wave, flux, splits=50, top=20): Get continuum points along a spectrum. This splits a spectrum into "splits" number of bins and calculates the medain wavelength and flux of the upper "top" number of flux values. # Shorten array until can be evenly split up. remainder = len(flux) % splits if remainder: # Nozero reainder needs this slicing wave = wave[:-remainder] flux = flux[:-remainder] wave_shaped = wave.reshape((splits, -1)) flux_shaped = flux.reshape((splits, -1)) s = np.argsort(flux_shaped, axis=-1)[:, -top:] s_flux = np.array([ar1[s1] for ar1, s1 in zip(flux_shaped, s)]) s_wave = np.array([ar1[s1] for ar1, s1 in zip(wave_shaped, s)]) wave_points = np.median(s_wave, axis=-1) flux_points = np.median(s_flux, axis=-1) assert len(flux_points) == splits return wave_points, flux_points def continuum(wave, flux, splits=50, method='scalar', plot=False, top=20): Fit continuum of flux. top: is number of top points to take median of continuum. org_wave = wave[:] org_flux = flux[:] # Get continuum value in chunked sections of spectrum. 
wave_points, flux_points = get_continuum_points(wave, flux, splits=splits, top=top) poly_num = {"scalar": 0, "linear": 1, "quadratic": 2, "cubic": 3} if method == "exponential": z = np.polyfit(wave_points, np.log(flux_points), deg=1, w=np.sqrt(flux_points)) p = np.poly1d(z) norm_flux = np.exp(p(org_wave)) # Un-log the y values. else: z = np.polyfit(wave_points, flux_points, poly_num[method]) p = np.poly1d(z) norm_flux = p(org_wave) if plot: plt.subplot(211) plt.plot(wave, flux) plt.plot(wave_points, flux_points, "x-", label="points") plt.plot(org_wave, norm_flux, label='norm_flux') plt.legend() plt.subplot(212) plt.plot(org_wave, org_flux / norm_flux) plt.title("Normalization") plt.xlabel("Wavelength (nm)") plt.show() return norm_flux #host_cont = local_normalization(wav_model, host_f, splits=50, method="exponential", plot=True) host_continuum = continuum(wav_model, host_f, splits=50, method="exponential", plot=True) host_cont = host_f / host_continuum #comp_cont = local_normalization(wav_model, comp_f, splits=50, method="exponential", plot=True) comp_continuum = continuum(wav_model, comp_f, splits=50, method="exponential", plot=True) comp_cont = comp_f / comp_continuum Explanation: Current Normalization I then continuum normalize the Phoenix spectrum locally around my observations by fitting an exponenital to the continuum like so. Split the spectrum into 50 bins Take median of 20 highest points in each bin. Fix an exponetial Evaulate at the orginal wavelength values Divide original by the fit End of explanation plt.plot(wav_model, comp_cont, label="Companion") plt.plot(wav_model, host_cont-0.3, label="Host") plt.title("Continuum Normalized (with -0.3 offset)") plt.xlabel("Wavelength (nm)") plt.legend() plt.show() plt.plot(wav_model[20:200], comp_cont[20:200], label="Companion") plt.plot(wav_model[20:200], host_cont[20:200], label="Host") plt.title("Continuum Normalized - close up") plt.xlabel("Wavelength (nm)") ax = plt.gca() ax.get_xaxis().get_major_formatter().set_useOffset(False) plt.legend() plt.show() Explanation: Above the top is the unnormalize spectra, with the median points in orangeand the green line the continuum fit. The bottom plot is the contiuum normalized result End of explanation def mix(h, c, alpha): return (h + c * alpha) / (1 + alpha) mix1 = mix(host_cont, comp_cont, 0.01) # 1% of the companion spectra mix2 = mix(host_cont, comp_cont, 0.05) # 5% of the companion spectra # plt.plot(wav_model[20:100], comp_cont[20:100], label="comp") plt.plot(wav_model[20:100], host_cont[20:100], label="host") plt.plot(wav_model[20:100], mix1[20:100], label="mix 1%") plt.plot(wav_model[20:100], mix2[20:100], label="mix 5%") plt.xlabel("Wavelength (nm)") plt.legend() plt.show() Explanation: Combining Spectra I then mix the models using a combination of the two spectra. In this case with NO RV shifts. 
End of explanation mask = (wav_model > np.min(obs["wavelength"])) & (wav_model < np.max(obs["wavelength"])) plt.plot(wav_model[mask], mix1[mask], label="mix 1%") plt.plot(wav_model[mask], mix2[mask], label="mix 5%") plt.plot(obs["wavelength"], obs["flux"], label="obs") #plt.xlabel("Wavelength (nm)") plt.legend() plt.show() # Zoomed in plt.plot(wav_model[mask], mix2[mask], label="mix 5%") plt.plot(wav_model[mask], mix1[mask], label="mix 1%") plt.plot(obs["wavelength"], obs["flux"], label="obs") plt.xlabel("Wavelength (nm)") plt.legend() plt.xlim([2112, 2117]) plt.ylim([0.9, 1.1]) plt.title("Zoomed") plt.show() Explanation: The companion is cooler there are many more deeper lines present in the spectra. Even a small contribution of the companion spectra reduce the continuum of the mixed spectra considerably. When I compare these mixed spectra to my observations End of explanation from scipy.interpolate import interp1d # mix1_norm = continuum(wav_model, mix1, splits=50, method="linear", plot=False) # mix2_norm = local_normalization(wav_model, mix2, splits=50, method="linear", plot=False) obs_continuum = continuum(obs["wavelength"], obs["flux"], splits=20, method="linear", plot=True) linear1 = continuum(wav_model, mix1, splits=50, method="linear", plot=True) linear2 = continuum(wav_model, mix2, splits=50, method="linear", plot=False) obs_renorm1 = obs["flux"] * (interp1d(wav_model, linear1)(obs["wavelength"]) / obs_continuum) obs_renorm2 = obs["flux"] * (interp1d(wav_model, linear2)(obs["wavelength"]) / obs_continuum) # Just a scalar # mix1_norm = local_normalization(wav_model, mix1, splits=50, method="scalar", plot=False) # mix2_norm = local_normalization(wav_model, mix2, splits=50, method="scalar", plot=False) obs_scalar = continuum(obs["wavelength"], obs["flux"], splits=20, method="scalar", plot=False) scalar1 = continuum(wav_model, mix1, splits=50, method="scalar", plot=True) scalar2 = continuum(wav_model, mix2, splits=50, method="scalar", plot=False) print(scalar2) obs_renorm_scalar1 = obs["flux"] * (interp1d(wav_model, scalar1)(obs["wavelength"]) / obs_scalar) obs_renorm_scalar2 = obs["flux"] * (interp1d(wav_model, scalar2)(obs["wavelength"]) / obs_scalar) plt.plot(obs["wavelength"], obs_scalar, label="scalar observed") plt.plot(obs["wavelength"], obs_continuum, label="linear observed") plt.plot(obs["wavelength"], interp1d(wav_model, scalar1)(obs["wavelength"]), label="scalar 1%") plt.plot(obs["wavelength"], interp1d(wav_model, linear1)(obs["wavelength"]), label="linear 1%") plt.plot(obs["wavelength"], interp1d(wav_model, scalar2)(obs["wavelength"]), label="scalar 5%") plt.plot(obs["wavelength"], interp1d(wav_model, linear2)(obs["wavelength"]), label="linear 5%") plt.title("Linear and Scalar continuum renormalizations.") plt.legend() plt.show() plt.plot(obs["wavelength"], obs["flux"], label="obs", alpha =0.6) plt.plot(obs["wavelength"], obs_renorm1, label="linear norm") plt.plot(obs["wavelength"], obs_renorm_scalar1, label="scalar norm") plt.plot(wav_model[mask], mix1[mask], label="mix 1%") plt.legend() plt.title("1% model") plt.hlines(1, 2111, 2124, linestyle="--", alpha=0.2) plt.show() plt.plot(obs["wavelength"], obs["flux"], label="obs", alpha =0.6) plt.plot(obs["wavelength"], obs_renorm1, label="linear norm") plt.plot(obs["wavelength"], obs_renorm_scalar1, label="scalar norm") plt.plot(wav_model[mask], mix1[mask], label="mix 1%") plt.legend() plt.title("1% model, zoom") plt.xlim([2120, 2122]) plt.hlines(1, 2111, 2124, linestyle="--", alpha=0.2) plt.show() plt.plot(obs["wavelength"], 
obs["flux"], label="obs", alpha =0.6) plt.plot(obs["wavelength"], obs_renorm2, label="linear norm") plt.plot(obs["wavelength"], obs_renorm_scalar2, label="scalar norm") plt.plot(wav_model[mask], mix2[mask], label="mix 5%") plt.legend() plt.title("5% model") plt.hlines(1, 2111, 2124, linestyle="--", alpha=0.2) plt.show() plt.plot(obs["wavelength"], obs["flux"], label="obs", alpha =0.6) plt.plot(obs["wavelength"], obs_renorm2, label="linear norm") plt.plot(obs["wavelength"], obs_renorm_scalar2, label="scalar norm") plt.plot(wav_model[mask], mix2[mask], label="mix 5%") plt.legend() plt.title("5% model zoomed") plt.xlim([2120, 2122]) plt.hlines(1, 2111, 2124, linestyle="--", alpha=0.2) plt.show() Explanation: As you can see here my observations are above the continuum most of the time. What I have noticed is this drastically affects the chisquared result as the mix model is the one with the least amount of alpha. I am thinking of renormalizing my observations by implementing equation (1) from Passegger 2016 (Fundamental M-dwarf parameters from high-resolution spectra using PHOENIX ACES modesl) F_obs = F_obs * (continuum_fit model / continuum_fit observation) They fit a linear function to the continuum of the observation and computed spectra to account for "slight differences in the continuum level and possible linear trends between the already noramlized spectra." One difference is that they say they normalize the average flux of the spectra to unity. Would this make a difference in this method. Questions Would this be the correct approach to take to solve this? Should I renomalize the observations first as well? Am I treating the cooler M-dwarf spectra correctly in this approach? Attempting the Passegger method End of explanation wav_model = fits.getdata("/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/WAVE_PHOENIX-ACES-AGSS-COND-2011.fits") wav_model /= 10 # nm temps = [2300, 3000, 4000, 5000] mask1 = (1000 < wav_model) & (wav_model < 3300) masked_wav1 = wav_model[mask1] for temp in temps[::-1]: file = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte0{0}-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits".format(temp) host_f = fits.getdata(file) plt.plot(masked_wav1, host_f[mask1], label="Teff={}".format(temp)) plt.title("Phoenix spectra") plt.xlabel("Wavelength (nm)") plt.legend() plt.show() mask = (2000 < wav_model) & (wav_model < 2300) masked_wav = wav_model[mask] for temp in temps[::-1]: file = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte0{0}-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits".format(temp) host_f = fits.getdata(file) host_f = host_f[mask] plt.plot(masked_wav, host_f, label="Teff={}".format(temp)) plt.title("Phoenix spectra") plt.xlabel("Wavelength (nm)") plt.legend() plt.show() # Observations for chip in range(1,5): obs = fits.getdata("/home/jneal/.handy_spectra/HD211847-1-mixavg-tellcorr_{}.fits".format(chip)) plt.plot(obs["wavelength"], obs["flux"], label="chip {}".format(chip)) plt.hlines(1, 2111, 2165, linestyle="--") plt.title("CRIRES spectrum HD211847") plt.xlabel("Wavelength (nm)") plt.legend() plt.show() # Observations for chip in range(1,5): obs = fits.getdata("/home/jneal/.handy_spectra/HD30501-1-mixavg-tellcorr_{}.fits".format(chip)) plt.plot(obs["wavelength"], obs["flux"], label="chip {}".format(chip)) plt.hlines(1, 2111, 2165, linestyle="--") plt.title("CRIRES spectrum HD30501") plt.xlabel("Wavelength (nm)") plt.legend() plt.show() Explanation: In this example for the 5% companion spectra there is a bit of difference between the linear and scalar 
normalizations, with a larger difference at the longer wavelengths (more orange visible above the red). Faint blue is the spectrum before the renormalization. Range of Phoenix spectra End of explanation
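Since the stated aim of this record is a chi-squared comparison that the excerpt never reaches, here is a hedged sketch of that final step: scanning companion contributions alpha and scoring each mixed model against the renormalized observation. The constant flux uncertainty and the alpha grid are assumptions, and the model fluxes are assumed to be interpolated onto the observed wavelengths already; the mixing formula matches the notebook's mix() function.

import numpy as np

def chi_squared(obs_flux, model_flux, sigma=0.01):
    # Simple chi^2 with a constant flux uncertainty (assumed value).
    return np.sum(((obs_flux - model_flux) / sigma) ** 2)

def best_alpha(obs_flux, host_flux, comp_flux, alphas=np.linspace(0.0, 0.2, 41)):
    # Mixed model: (host + alpha * companion) / (1 + alpha), as in the notebook's mix().
    chi2 = np.array([chi_squared(obs_flux, (host_flux + a * comp_flux) / (1 + a)) for a in alphas])
    return alphas[np.argmin(chi2)], chi2.min()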
12,002
Given the following text description, write Python code to implement the functionality described below step by step Description: 数据结构的内置方法 这一节介绍常用的pandas数据结构内置方法。很重要的一节。 创建本节要用到的数据结构。 Step1: Head() Tail() 想要预览Series或DataFrame对象,可以使用head()和tail()方法。默认显示5行数据,你也可以自己设置显示的行数。 Step2: 属性和 ndarray pandas对象有很多属性,你可以通过这些属性访问数据。 shape Step3: 只想得到对象中的数据而忽略index和columns,使用values属性就可以 Step4: 如果DataFrame或Panel对象的数据类型相同(比如都是 int64),修改object.values相当于直接修改原对象的值。如果数据类型不相同,则根本不能对values属性返回值进行赋值。 注意 Step5: 填充缺失值 在Series和DataFrame中,算术运算方法(比如add())有一个fill_value参数,含义很明显,计算前用一个值来代替缺失值,然后再参与运算。注意,如果参与运算的两个object同一位置(同行同列)都是NaN,fill_value不起作用,计算结果还是NaN。 看例子: Step6: 灵活的比较操作 pandas引入了二元比较运算方法:eq, ne, lt, gt, le。 Step7: 意思操作返回一个和输入对象同类型的对象,值类型为bool,返回结果可以用于检索。 布尔降维 Boolean Reductions pandas提供了三个方法(any(), all(), bool())和一个empty属性来对布尔结果进行降维。 Step8: 同样可以对降维后的结果再进行降维。 Step9: 使用empty属性检测一个pandas对象是否为空。 Step10: 对于只含有一个元素的pandas对象,对其进行布尔检测,使用bool() Step11: 比较对象是否相等 一个问题通常有多种解法。一个最简单的例子:df+df和df*2。为了检测两个计算结果是否相等,你可能想到:(df+df == df*2).all(),然而,这样计算得到的结果是False: Step12: 为什么df + df == df*2 返回的结果含有False?因为NaN和NaN比较厚结果为False! Step13: 还好pandas提供了equals()方法解决上面NaN之间不想等的问题。 Step14: 注意: 在使用equals()方法进行比较时,两个对象如果数据不一致必为False。 Step15: 不同类型的对象之间 逐元素比较 你可以直接对pandas对象和一个常量值进行逐元素比较: Step16: 不同类型的对象(比如pandas数据结构、numpy数组)之间进行逐元素的比较也是没有问题的,前提是两个对象的shape要相同。 Step17: 但要知道不同shape的numpy数组之间是可以直接比较的!因为广播!即使无法广播,也不会Error而是返回False。 Step18: combine_first() 看一下例子: Step19: 解释: 对于df1中NaN的元素,用df2中对应位置的元素替换! DataFrame.combine() DataFrame.combine()方法接收一个DF对象和一个combiner方法。 Step20: 统计相关 的方法 Series, DataFrame和Panel内置了许多计算统计相关指标的方法。这些方法大致分为两类: * 返回低维结果,比如sum(),mean(),quantile() * 返回同原对象同样大小的对象,比如cumsum(), cumprod() 总体来说,这些方法接收一个坐标轴参数 Step21: 所有的这些方法都有skipna参数,含义是计算过程中是否剔除缺失值,skipna默认值为True。 Step22: 这些函数可以参与算术和广播运算。 比如: Step23: 注意cumsum() cumprod()方法 保留NA值的位置。 Step24: 下面列出常用的方法及其描述。提醒每一个方法都有一个level参数用于具有层次索引的对象。 | 方法 | 描述 | | ------------- | Step25: descrieb(), 数据摘要 describe()方法非常有用,它计算数据的各种常用统计指标(比如平均值、标准差等),计算时不包括NA。拿到数据首先要有大概的了解,使用describe()方法就对了。 Step26: 默认describe()只包含25%, 50%, 75%, 也可以通过percentiles参数进行指定。 Step27: 如果Series内数据是非数值类型,describe()也能给出一定的统计结果 Step28: 如果DataFrame对象有的列是数值类型,有的列不是数值类型,describe()仅对数值类型的列进行计算。 Step29: 如果非要知道非数值列的统计指标呢?describe提供了include参数,取值范围{'object', 'number', 'all'}。 看一下例子, 注意'object'和'number'都是在列表中,而'all'不需要放在列表中: Step30: 最大/最小值对应的索引值 Series和DataFrame内置的idxmin() idxmax()方法求得 最小值、最大值对应的索引值,看一下例子: Step31: 如果多个数值都是最大值或最小值,idxmax() idxmin()返回最大值、最小值第一次出现对应的索引值 Step32: 实际上,idxmin和idxmax就是NumPy中的argmin和argmax。 value_counts() 数值计数 value_counts()计算一维度数据结构的直方图。 Step33: 虽然前面介绍过mode()方法了,看两个例子吧: Step34: 区间离散化 cut() qcut()方法可以对连续数据进行离散化 Step35: qcut()方法计算样本的分位数,比如我们可以将正态分布的数据 进行四分位数离散化: Step36: 离散区间也可以用极限定义 Step37: 函数应用 如果你想用自己写的方法或其他库方法操作pandas对象,你应该知道下面的三种方式。 具体选择哪种方式取决于你是想操作整个DataFrame对象还是DataFrame对象的某几行或某几列,或者逐元素操作。 管道 pipe() 基于列或行的函数引用 apply() 对DataFrame对象逐元素计算 applymap() 对管道 DataFrame和Series当然能够作为参数传入方法。然而,如果涉及到多个方法的序列调用,推荐使用pipe()。看一下例子: Step38: 上面一行代码推荐用下面的等价写法 Step39: 注意 f g h三个方法中DataFrame都是作为第一个参数。如果DataFrame作为第二个参数呢?方法是为pipe提供(callable, data_keyword),pipe会自动调用DataFrame对象。 比如,使用statsmodels处理回归问题,他们的API期望第一个参数是公式,第二个参数是DataFrame对象data。我们使用pipe传递(sm.poisson, 'data') Step40: 灵活运用apply()方法可以统计出数据集的很多特性。比如,假设我们希望从数据中抽取每一列最大值的索引值。 Step41: apply()方法当然支持接收其他参数了,比如下面的例子: Step42: 另一个有用的特性是对DataFrame对象传递Series方法,然后针对DF对象的每一列或每一行执行 Series内置的方法! 
Step43: 应用逐元素操作的Python方法 既然不是所有的方法都能被向量化(接收NumPy数组,返回另一个数组或者值),但是DataFrame内置的applymap()和Series的map()方法能够接收任意的接收一个值且返回一个值的Python方法。 Step44: Series.map()还有一个功能是模仿merge(), join() Step45: 重新索引和改变label reindex()是pandas中基本的数据对其方法。其他所有依赖label对齐的方法基本都要靠reindex()实现。reindex(重新索引)意味着是沿着某条轴转换数据以匹配新设定的label。具体来说,reindex()做了三件事情: * 对数据进行排序以匹配新的labels * 如果新label对应的位置没有数据,插入缺失值NA * 可以指定调用fill填充数据。 下面是一个简单的例子: Step46: 对于DataFrame来说,你可以同时改变列名和索引值。 Step47: 如果只想改变列或者索引的label,DataFrame也提供了reindex_axis()方法,接收label和axis。 Step48: 上面一行代码顺便说明了Series的索引和DataFrame的索引是同一类的实例。 重新索引来和另一个对象对齐 reindex_like() 你可能想传递一个对象,使得原来对象的label和传入的对象一样,使用reindex_like()即可。 Step49: 使用align() 是两个对象相互对齐 align()方法是让两个对象同事对齐的最快方法。它含有join参数, * join='outer' Step50: 对于DataFrame来说,join方法默认会应用到索引和列名。 Step51: align()也含有一个axis参数,指定仅对于某一坐标轴进行对齐。 Step52: DataFrame.align()同样能接收Series对象,此时axis指的是DataFrame对象的索引或列。 Step53: 重索引时顺便填充数值 reindex()方法还有一个method参数,用于填充数值,method取值如下: * pad/ffill Step54: method参数要求索引必须是有序的:递增或递减。 除了method='nearest',其他method取值也能用fillna()方法实现: Step55: 二者的区别是:如果索引不是有序的,reindex()会报错,而fillna()和interpolate()不会检查索引是否有序。 重索引时 有条件地填充NaN limit和tolerance参数会对填充操作进行条件限制,通常限制填充的次数。 Step56: 移除某些索引值 和reindex()方法很相似的是drop(),用于移除索引的某些取值。 Step57: 重命名索引值 rename() 方法可以对索引值重新命名,命名方式可以是字典或Series,也可以是任意的方法。 Step58: 唯一的要求是传入的函数调用索引值时必须有一个返回值,如果你传入的是字典或Series,要求是索引值必须是其键值。这点很好理解。 Step59: 默认情况下修改的仅仅是副本,如果想对原对象索引值修改,inplace=True. 在0.18.0版本中,rename方法也能修改Series.name Step60: 迭代操作 Iteration pandas中迭代操作依赖于具体的对象。迭代Series对象时类似迭代数组,产生的是值,迭代DataFrame或Panel对象时类似迭代字典的键值。 一句话,(for i in object)产生 Step61: pandas对象也有类似字典的iteritems()方法来迭代(key, value)。 为了每一行迭代DataFrame对象,有两种方法: * iterrows() Step62: iterrows() iterrows()方法用于迭代DataFrame的每一行,返回的是索引和Series的迭代器,但要注意Series的dtype可能和原来每一行的dtype不同。 Step63: itertuples() itertuples()方法迭代DataFrame每一行,返回的是namedtuple。因为返回的不是Series,所以会保留DataFrame中值的dtype。 Step64: .dt 访问器 Series对象如果索引是datetime/period,可以用自带的.dt访问器返回日期、小时、分钟。 Step65: 改变时间的格式也很方便,Series.dt.strftime() Step66: ## 字符串处理方法 Series带有一系列的字符串处理, 默认不对NaN处理。 Step67: 排序 排序方法可以分为两大类: 按照实际的值排序和按照label排序。 按照索引排序 sort_index() Series.sort_index(), DataFrame.sort_index(), 参数是ascending, axis。 Step68: 按照值排序 Series.sort_values(), DataFrame.sort_values()用于按照值进行排序,参数有by Step69: 通过na_position参数处理NA值。 Step70: searchsorted() Series.searchsorted()类似numpy.ndarray.searchsorted()。 找到元素在排好序后的位置(下标)。 Step71: 最小/最大值 Series有nsmallest() nlargest()方法能够返回最小或最大的n个值。如果Series对象很大,这两种方法会比先排序后使用head()方法快很多。 Step72: 从v0.17.0开始,DataFrame也有了以上两个方法。 Step73: 多索引列 排序 如果一列是多索引,你必须指定全部每一级的索引。 Step74: 复制 copy()方法复制数据结构的值并返回一个新的对象。记住复制操作不到万不得已不使用。 比如,改变DataFrame对象值的几种方法: * inserting, deleting, modifying a column * 为索引、列 赋值 * 对于同构数据,直接使用values属性修改值。 几乎所有的方法都不对原对象进行直接修改,而是返回修改后的一个新对象!如果原对象数据被修改,肯定是你显示指定的修改操作。 ## dtypes属性 pandas对象的主要数据类型包括 Step75: Series同样有dtypes属性 Step76: 如果pandas对象的一列中有多种数据类型,dtype返回的是能兼容所有数据类型的类型,object范围最大的。 Step77: get_dtype_counts()方法返回DataFrame中每一种数据类型的列数。 Step78: 数值数据类型可以在ndarray,Series和DataFrame中传播。 Step79: 数据类型的默认值 整型的默认值蕾西int64,浮点型的默认类型是float64,和你用的是32位还是64位的系统无关。 Step80: Numpy中数值的具体类型则要依赖于平台。 Step81: upcasting 不同类型结合时会upcast,即得到更通用的类型,看例子吧: Step82: astype()方法 使用astype()显示的进行转型。默认会返回原对象的副本,即使数据类型不变。当然可以传递copy=False参数直接对原对象转型。 Step83: 对object类型进行转型 convert_objects()方法能对object类型进行转型。如果想转为数字,参数是convert_numeric=True。 Step84: 基于dtype 选择列 select_dtypes()方法实现了基于列dtype的构造子集方法。 Step85: select_dtypes()有两个参数:include, exclude。含义是要选择的列的dtype和不选择列的dtype。 Step86: 如果要选择字符串类型的列,必须使用object类型。 Step87: 如果想要知道某种数据类型的所有子类型,比如numpy.number类型,你可以定义如下的方法:
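Alongside the recursive subtype helper that the description refers to (and that appears at the end of the code below), here is a hedged complementary sketch: in practice np.issubdtype is often enough to test whether a column's dtype belongs to a family such as np.number. The example DataFrame is made up for illustration only.

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [1.5, 2.5], "c": ["x", "y"]})
# Keep only the columns whose dtype is a subtype of np.number (here: 'a' and 'b').
numeric_cols = [col for col in df.columns if np.issubdtype(df[col].dtype, np.number)]
print(numeric_cols)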
Python Code: import numpy as np import pandas as pd index = pd.date_range('1/1/2000', periods=8) s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e']) df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=['A', 'B', 'C']) wp = pd.Panel(np.random.randn(2,5,4), items=['Item1', 'Item2'], major_axis=pd.date_range('1/1/2000',periods=5), minor_axis=['A', 'B', 'C', 'D']) Explanation: 数据结构的内置方法 这一节介绍常用的pandas数据结构内置方法。很重要的一节。 创建本节要用到的数据结构。 End of explanation long_series = pd.Series(np.random.randn(1000)) long_series.head() long_series.tail(3) Explanation: Head() Tail() 想要预览Series或DataFrame对象,可以使用head()和tail()方法。默认显示5行数据,你也可以自己设置显示的行数。 End of explanation df[:2] df.columns = [x.lower() for x in df.columns] #将列名重置为小写 df Explanation: 属性和 ndarray pandas对象有很多属性,你可以通过这些属性访问数据。 shape: 显示对象的维度,同ndarray 坐标label Series: index DataFrame: index(行)和columns Panel: items, major_axis and minor_axis 可以通过属性进行安全赋值。 End of explanation s.values df.values type(df.values) wp.values Explanation: 只想得到对象中的数据而忽略index和columns,使用values属性就可以 End of explanation df = pd.DataFrame({'one' : pd.Series(np.random.randn(3), index=['a', 'b', 'c']), 'two' : pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd']), 'three' : pd.Series(np.random.randn(3), index=['b', 'c', 'd'])}) df row = df.ix[1] row column = df['two'] column df.sub(row, axis='columns') df.sub(row, axis=1) df.sub(row, axis='index') df.sub(row, axis=0) Explanation: 如果DataFrame或Panel对象的数据类型相同(比如都是 int64),修改object.values相当于直接修改原对象的值。如果数据类型不相同,则根本不能对values属性返回值进行赋值。 注意: 如果对象内数据类型不同,values返回的ndarray的dtype将是能够兼容所有数据类型的类型。比如,有的列数据是int,有的列数据是float,.values返回的ndarray的dtype将是float。 加速的操作 pandas从0.11.0版本开始使用numexpr库对二值数值类型操作加速,用bottleneck库对布尔操作加速。 加速效果对大数据尤其明显。 这里有一个速度的简单对比,使用100,000行* 100列的DataFrame: 所以,在安装pandas后也要顺便安装numexpr, bottleneck。 灵活的二元运算 在所有的pandas对象之间的二元运算中,大家最感兴趣的一般是下面两个: * 高维数据结构(比如DataFrame)和低维数据结构(比如Series)之间计算时的广播(broadcasting)行为 * 计算时有缺失值 广播 DataFrame对象内置add(),sub(),mul(),div()以及radd(), rsub(),...等方法。 至于广播计算,Series的输入是最有意思的。 End of explanation df df2 = pd.DataFrame({'one' : pd.Series(np.random.randn(3), index=['a', 'b', 'c']), 'two' : pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd']), 'three' : pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd'])}) df2 df + df2 df.add(df2, fill_value=0) #注意['a', 'three']不是NaN Explanation: 填充缺失值 在Series和DataFrame中,算术运算方法(比如add())有一个fill_value参数,含义很明显,计算前用一个值来代替缺失值,然后再参与运算。注意,如果参与运算的两个object同一位置(同行同列)都是NaN,fill_value不起作用,计算结果还是NaN。 看例子: End of explanation df.gt(df2) df2.ne(df) Explanation: 灵活的比较操作 pandas引入了二元比较运算方法:eq, ne, lt, gt, le。 End of explanation df>0 (df>0).all() #与操作 (df > 0).any()#或操作 Explanation: 意思操作返回一个和输入对象同类型的对象,值类型为bool,返回结果可以用于检索。 布尔降维 Boolean Reductions pandas提供了三个方法(any(), all(), bool())和一个empty属性来对布尔结果进行降维。 End of explanation (df > 0).any().any() Explanation: 同样可以对降维后的结果再进行降维。 End of explanation df.empty pd.DataFrame(columns=list('ABC')).empty Explanation: 使用empty属性检测一个pandas对象是否为空。 End of explanation pd.Series([True]).bool() pd.Series([False]).bool() pd.DataFrame([[True]]).bool() pd.DataFrame([[False]]).bool() Explanation: 对于只含有一个元素的pandas对象,对其进行布尔检测,使用bool(): End of explanation df + df == df*2 (df+df == df*2).all() Explanation: 比较对象是否相等 一个问题通常有多种解法。一个最简单的例子:df+df和df*2。为了检测两个计算结果是否相等,你可能想到:(df+df == df*2).all(),然而,这样计算得到的结果是False: End of explanation np.nan == np.nan Explanation: 为什么df + df == df*2 返回的结果含有False?因为NaN和NaN比较厚结果为False! 
End of explanation (df+df).equals(df*2) Explanation: 还好pandas提供了equals()方法解决上面NaN之间不想等的问题。 End of explanation df1 = pd.DataFrame({'c':['f',0,np.nan]}) df1 df2 = pd.DataFrame({'c':[np.nan, 0, 'f']}, index=[2,1,0]) df2 df1.equals(df2) df1.equals(df2.sort_index()) #对df2的索引排序,然后再比较 Explanation: 注意: 在使用equals()方法进行比较时,两个对象如果数据不一致必为False。 End of explanation pd.Series(['foo', 'bar', 'baz']) == 'foo' pd.Index(['foo', 'bar', 'baz']) == 'foo' Explanation: 不同类型的对象之间 逐元素比较 你可以直接对pandas对象和一个常量值进行逐元素比较: End of explanation pd.Series(['foo', 'bar', 'baz']) == pd.Index(['foo', 'bar', 'qux']) pd.Series(['foo', 'bar', 'baz']) == np.array(['foo', 'bar', 'qux']) pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo', 'bar']) #长度不相同 Explanation: 不同类型的对象(比如pandas数据结构、numpy数组)之间进行逐元素的比较也是没有问题的,前提是两个对象的shape要相同。 End of explanation np.array([1,2,3]) == np.array([2]) np.array([1, 2, 3]) == np.array([1, 2]) Explanation: 但要知道不同shape的numpy数组之间是可以直接比较的!因为广播!即使无法广播,也不会Error而是返回False。 End of explanation df1 = pd.DataFrame({'A' : [1., np.nan, 3., 5., np.nan], 'B' : [np.nan, 2., 3., np.nan, 6.]}) df1 df2 = pd.DataFrame({'A' : [5., 2., 4., np.nan, 3., 7.], 'B' : [np.nan, np.nan, 3., 4., 6., 8.]}) df2 df1.combine_first(df2) Explanation: combine_first() 看一下例子: End of explanation combiner = lambda x,y: np.where(pd.isnull(x), y,x) df1.combine(df2, combiner) Explanation: 解释: 对于df1中NaN的元素,用df2中对应位置的元素替换! DataFrame.combine() DataFrame.combine()方法接收一个DF对象和一个combiner方法。 End of explanation df df.mean() #axis=0, 计算每一列的平均值 df.mean(1) #计算每一行的平均值 Explanation: 统计相关 的方法 Series, DataFrame和Panel内置了许多计算统计相关指标的方法。这些方法大致分为两类: * 返回低维结果,比如sum(),mean(),quantile() * 返回同原对象同样大小的对象,比如cumsum(), cumprod() 总体来说,这些方法接收一个坐标轴参数: * Series不需要坐标轴参数 * DataFrame 默认axis=0(index), axis=1(columns) * Panel 默认axis=1(major), axis=0(items), axis=2(minor) End of explanation df.sum(0, skipna=False) df.sum(axis=1, skipna=True) Explanation: 所有的这些方法都有skipna参数,含义是计算过程中是否剔除缺失值,skipna默认值为True。 End of explanation ts_stand = (df-df.mean())/df.std() ts_stand.std() xs_stand = df.sub(df.mean(1), axis=0).div(df.std(1), axis=0) xs_stand.std(1) Explanation: 这些函数可以参与算术和广播运算。 比如: End of explanation df.cumsum() Explanation: 注意cumsum() cumprod()方法 保留NA值的位置。 End of explanation series = pd.Series(np.random.randn(500)) series[20:500]=np.nan series[10:20]=5 series.nunique() series Explanation: 下面列出常用的方法及其描述。提醒每一个方法都有一个level参数用于具有层次索引的对象。 | 方法 | 描述 | | ------------- |:-------------:| | count | 沿着坐标轴统计非空的行数| | sum | 沿着坐标轴取加和| | mean | 沿着坐标轴求均值| |mad|沿着坐标轴计算平均绝对偏差| |median|沿着坐标轴计算中位数| |min|沿着坐标轴取最小值| |max|沿着坐标轴取最大值| |mode|沿着坐标轴取众数| |abs|计算每一个值的绝对值| |prod|沿着坐标轴求乘积| |std|沿着坐标轴计算标准差| |var|沿着坐标轴计算无偏方差| |sem|沿着坐标轴计算标准差| |skew|沿着坐标轴计算样本偏斜| |kurt|沿着坐标轴计算样本峰度| |quantile|沿着坐标轴计算样本分位数,单位%| |cumsum|沿着坐标轴计算累加和| |cumprod|沿着坐标轴计算累积乘| |cummax|沿着坐标轴计算累计最大| |cummin|沿着坐标轴计算累计最小| Note:所有需要沿着坐标轴计算的方法,默认axis=0,即将方法应用到每一列数据上。 Series还有一个nunique()方法返回非空数值 组成的集合的大小。 End of explanation series = pd.Series(np.random.randn(1000)) series[::2]=np.nan series.describe() frame = pd.DataFrame(np.random.randn(1000, 5), columns=['a', 'b', 'c', 'd', 'e']) frame.ix[::2]=np.nan frame.describe() Explanation: descrieb(), 数据摘要 describe()方法非常有用,它计算数据的各种常用统计指标(比如平均值、标准差等),计算时不包括NA。拿到数据首先要有大概的了解,使用describe()方法就对了。 End of explanation series.describe(percentiles=[.05, .25, .75, .95]) Explanation: 默认describe()只包含25%, 50%, 75%, 也可以通过percentiles参数进行指定。 End of explanation s = pd.Series(['a', 'a', 'b', 'b', 'a', 'a', np.nan, 'c', 'd', 'a']) s.describe() Explanation: 如果Series内数据是非数值类型,describe()也能给出一定的统计结果 End of explanation frame = 
pd.DataFrame({'a':['Yes', 'Yes', 'NO', 'No'], 'b':range(4)}) frame.describe() Explanation: 如果DataFrame对象有的列是数值类型,有的列不是数值类型,describe()仅对数值类型的列进行计算。 End of explanation frame.describe(include=['object']) #只对非数值列进行统计计算 frame.describe(include=['number']) frame.describe(include='all')#'all'不是列表 Explanation: 如果非要知道非数值列的统计指标呢?describe提供了include参数,取值范围{'object', 'number', 'all'}。 看一下例子, 注意'object'和'number'都是在列表中,而'all'不需要放在列表中: End of explanation s1 = pd.Series(np.random.randn(5)) s1 s1.idxmin(), s1.idxmax() #最小值:-0.296405, 最大值:1.735420 df1 = pd.DataFrame(np.random.randn(5,3), columns=list('ABC')) df1 df1.idxmin(axis=0) df1.idxmax(axis=1) Explanation: 最大/最小值对应的索引值 Series和DataFrame内置的idxmin() idxmax()方法求得 最小值、最大值对应的索引值,看一下例子: End of explanation df3 = pd.DataFrame([2, 1, 1, 3, np.nan], columns=['A'], index=list('edcba')) df3 df3['A'].idxmin() Explanation: 如果多个数值都是最大值或最小值,idxmax() idxmin()返回最大值、最小值第一次出现对应的索引值 End of explanation data = np.random.randint(0, 7, size=50) data s = pd.Series(data) s.value_counts() pd.value_counts(data) #也是全局方法 Explanation: 实际上,idxmin和idxmax就是NumPy中的argmin和argmax。 value_counts() 数值计数 value_counts()计算一维度数据结构的直方图。 End of explanation s5 = pd.Series([1,1,3,3,3,5,5,7,7,7]) s5.mode() df5 = pd.DataFrame({"A": np.random.randint(0, 7, size=50), "B": np.random.randint(-10, 15, size=50)}) df5 df5.mode() Explanation: 虽然前面介绍过mode()方法了,看两个例子吧: End of explanation arr = np.random.randn(20) factor = pd.cut(arr, 4) factor factor = pd.cut(arr, [-5, -1, 0, 1, 5]) #输入 离散区间 factor Explanation: 区间离散化 cut() qcut()方法可以对连续数据进行离散化 End of explanation arr = np.random.randn(30) factor = pd.qcut(arr, [0, .25, .5, .75, 1]) factor pd.value_counts(factor) Explanation: qcut()方法计算样本的分位数,比如我们可以将正态分布的数据 进行四分位数离散化: End of explanation arr = np.random.randn(20) factor = pd.cut(arr, [-np.inf, 0, np.inf]) factor Explanation: 离散区间也可以用极限定义 End of explanation #f, g 和h是三个方法,接收DataFrame对象,返回DataFrame对象 f(g(h(df), arg1=1), arg2=2, arg3=3) Explanation: 函数应用 如果你想用自己写的方法或其他库方法操作pandas对象,你应该知道下面的三种方式。 具体选择哪种方式取决于你是想操作整个DataFrame对象还是DataFrame对象的某几行或某几列,或者逐元素操作。 管道 pipe() 基于列或行的函数引用 apply() 对DataFrame对象逐元素计算 applymap() 对管道 DataFrame和Series当然能够作为参数传入方法。然而,如果涉及到多个方法的序列调用,推荐使用pipe()。看一下例子: End of explanation (df.pipe(h).pipe(g, arg1=1).pipe(f, arg2=2, arg3=3)) Explanation: 上面一行代码推荐用下面的等价写法: End of explanation df.apply(np.mean) df.apply(np.mean, axis=1) df.apply(lambda x: x.max() - x.min()) df.apply(np.cumsum) df.apply(np.exp) Explanation: 注意 f g h三个方法中DataFrame都是作为第一个参数。如果DataFrame作为第二个参数呢?方法是为pipe提供(callable, data_keyword),pipe会自动调用DataFrame对象。 比如,使用statsmodels处理回归问题,他们的API期望第一个参数是公式,第二个参数是DataFrame对象data。我们使用pipe传递(sm.poisson, 'data'): pipe灵感来自于Unix中伟大的艺术:管道。pandas中pipe()的实现很简洁,推荐阅读源代码pd.DataFrame.pipe 基于行或者列的函数应用 任意函数都可以直接对DataFrame或Panel某一坐标轴进行直接操纵,只需要使用apply()方法即可,同描述性统计方法一样,apply()方法接收axis参数。 End of explanation tsdf = pd.DataFrame(np.random.randn(1000, 3), columns=['A', 'B', 'C'], index=pd.date_range('1/1/2000', periods=1000)) tsdf tsdf.apply(lambda x:x.idxmax()) Explanation: 灵活运用apply()方法可以统计出数据集的很多特性。比如,假设我们希望从数据中抽取每一列最大值的索引值。 End of explanation def subtract_and_divide(x, sub, divide=1): return (x - sub)/divide df.apply(subtract_and_divide, args=(5,),divide=3) Explanation: apply()方法当然支持接收其他参数了,比如下面的例子: End of explanation df df.apply(pd.Series.interpolate) Explanation: 另一个有用的特性是对DataFrame对象传递Series方法,然后针对DF对象的每一列或每一行执行 Series内置的方法! 
End of explanation df4 = pd.DataFrame(np.random.randn(4, 3),index=['a','b','c','d'],columns=['one', 'two', 'three']) df4 f = lambda x:len(str(x)) df4['one'].map(f) df4.applymap(f) Explanation: 应用逐元素操作的Python方法 既然不是所有的方法都能被向量化(接收NumPy数组,返回另一个数组或者值),但是DataFrame内置的applymap()和Series的map()方法能够接收任意的接收一个值且返回一个值的Python方法。 End of explanation s = pd.Series(['six', 'seven', 'six', 'seven', 'six'], index=['a', 'b', 'c', 'd', 'e']) t = pd.Series({'six':6., 'seven':7.}) s t s.map(t) Explanation: Series.map()还有一个功能是模仿merge(), join() End of explanation s = pd.Series(np.random.randn(5), index=['a','b','c','d','e']) s s.reindex(['e', 'b', 'f', 'd']) Explanation: 重新索引和改变label reindex()是pandas中基本的数据对其方法。其他所有依赖label对齐的方法基本都要靠reindex()实现。reindex(重新索引)意味着是沿着某条轴转换数据以匹配新设定的label。具体来说,reindex()做了三件事情: * 对数据进行排序以匹配新的labels * 如果新label对应的位置没有数据,插入缺失值NA * 可以指定调用fill填充数据。 下面是一个简单的例子: End of explanation df df.reindex(index=['c', 'f', 'b'], columns=['three', 'two', 'one']) Explanation: 对于DataFrame来说,你可以同时改变列名和索引值。 End of explanation rs = s.reindex(df.index) rs rs.index is df.index Explanation: 如果只想改变列或者索引的label,DataFrame也提供了reindex_axis()方法,接收label和axis。 End of explanation df2 = pd.DataFrame(np.random.randn(3, 2),index=['a','b','c'],columns=['one', 'two']) df2 df df.reindex_like(df2) Explanation: 上面一行代码顺便说明了Series的索引和DataFrame的索引是同一类的实例。 重新索引来和另一个对象对齐 reindex_like() 你可能想传递一个对象,使得原来对象的label和传入的对象一样,使用reindex_like()即可。 End of explanation s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e']) s1 = s[:4] s2 = s[1:] s1 s2 s1.align(s2) s1.align(s2, join='inner') #交集是 'b', 'c', 'd' Explanation: 使用align() 是两个对象相互对齐 align()方法是让两个对象同事对齐的最快方法。它含有join参数, * join='outer': 取得两个对象的索引并集,这也是join的默认值。 * join='left': 使用调用对象的索引值 * join='right':使用被调用对象的索引值 * join='inner': 使用两个对象的索引交集 align()方法返回一个元组,元素元素是重新索引的Series对象。 End of explanation df df2 = df.iloc[:5,:2] df2 df.align(df2, join='inner') df.align(df2) Explanation: 对于DataFrame来说,join方法默认会应用到索引和列名。 End of explanation df.align(df2, join='inner', axis=0) Explanation: align()也含有一个axis参数,指定仅对于某一坐标轴进行对齐。 End of explanation df.align(df2.ix[0], axis=1) Explanation: DataFrame.align()同样能接收Series对象,此时axis指的是DataFrame对象的索引或列。 End of explanation rng = pd.date_range('1/3/2000', periods=8) ts = pd.Series(np.random.randn(8), index=rng) ts2 = ts[[0, 3, 6]] ts ts2 ts2.reindex(ts.index) ts2.reindex(ts.index, method='ffill') #索引小那一行的数值填充NaN ts2.reindex(ts.index, method='bfill') #索引大的非NaN的数值填充NaN ts2.reindex(ts.index, method='nearest') Explanation: 重索引时顺便填充数值 reindex()方法还有一个method参数,用于填充数值,method取值如下: * pad/ffill: 使用后面的值填充数值 * bfill/backfill: 使用前面的值填充数值 * nearest: 使用最近的索引值进行填充 以Series为例,看一下: End of explanation ts2.reindex(ts.index).fillna(method='ffill') Explanation: method参数要求索引必须是有序的:递增或递减。 除了method='nearest',其他method取值也能用fillna()方法实现: End of explanation ts2.reindex(ts.index, method='ffill', limit=1) ts2 ts ts2.reindex(ts.index, method='ffill', tolerance='1 day') Explanation: 二者的区别是:如果索引不是有序的,reindex()会报错,而fillna()和interpolate()不会检查索引是否有序。 重索引时 有条件地填充NaN limit和tolerance参数会对填充操作进行条件限制,通常限制填充的次数。 End of explanation df df.drop(['A'], axis=1) Explanation: 移除某些索引值 和reindex()方法很相似的是drop(),用于移除索引的某些取值。 End of explanation s s.rename(str.upper) Explanation: 重命名索引值 rename() 方法可以对索引值重新命名,命名方式可以是字典或Series,也可以是任意的方法。 End of explanation df = pd.DataFrame({'one' : pd.Series(np.random.randn(3), index=['a', 'b', 'c']), 'two' : pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd']), 'three' : pd.Series(np.random.randn(3), index=['b', 'c', 'd'])}) df df.rename(columns={'one' : 'foo', 'two' : 'bar'}, index={'a' : 
'apple', 'b' : 'banana', 'd' : 'durian'}) Explanation: 唯一的要求是传入的函数调用索引值时必须有一个返回值,如果你传入的是字典或Series,要求是索引值必须是其键值。这点很好理解。 End of explanation s.rename('sclar-name') Explanation: 默认情况下修改的仅仅是副本,如果想对原对象索引值修改,inplace=True. 在0.18.0版本中,rename方法也能修改Series.name End of explanation df = pd.DataFrame({'col1' : np.random.randn(3), 'col2' : np.random.randn(3)}, index=['a', 'b', 'c']) df for col in df: #产生的是列名 print col Explanation: 迭代操作 Iteration pandas中迭代操作依赖于具体的对象。迭代Series对象时类似迭代数组,产生的是值,迭代DataFrame或Panel对象时类似迭代字典的键值。 一句话,(for i in object)产生: * Series: 值 * DataFrame: 列名 * Panel: item名 看一下例子吧: End of explanation for item, frame in df.iteritems(): print item, frame Explanation: pandas对象也有类似字典的iteritems()方法来迭代(key, value)。 为了每一行迭代DataFrame对象,有两种方法: * iterrows(): 按照行来迭代(index, Series)。会把每一行转为Series对象。 * itertuples(): 按照行来迭代namedtuples. 这种方法比iterrows()快,大多数情况下推荐使用此方法。 警告: 迭代pandas对象通常会比较慢。所以尽量避免迭代操作,可以用以下方法替换迭代: * 数据结构的内置方法,索引或numpy方法等 * apply() * 使用cython写内循环 警告: 当迭代进行时永远不要有修改操作。 iteritems() 类似字典的接口,iteritems()对键值对进行迭代操作: * Series: (index, scalar value) * DataFrame: (column, Series) * Panel: (item, DataFrame) 看一下例子: End of explanation for row_index, row in df.iterrows(): print row_index, row Explanation: iterrows() iterrows()方法用于迭代DataFrame的每一行,返回的是索引和Series的迭代器,但要注意Series的dtype可能和原来每一行的dtype不同。 End of explanation for row in df.itertuples(): print row Explanation: itertuples() itertuples()方法迭代DataFrame每一行,返回的是namedtuple。因为返回的不是Series,所以会保留DataFrame中值的dtype。 End of explanation s = pd.Series(pd.date_range('20160101 09:10:12', periods=4)) s s.dt.hour s.dt.second s.dt.day Explanation: .dt 访问器 Series对象如果索引是datetime/period,可以用自带的.dt访问器返回日期、小时、分钟。 End of explanation s = pd.Series(pd.date_range('20130101', periods=4)) s s.dt.strftime('%Y/%m/%d') Explanation: 改变时间的格式也很方便,Series.dt.strftime() End of explanation s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat']) s s.str.lower() Explanation: ## 字符串处理方法 Series带有一系列的字符串处理, 默认不对NaN处理。 End of explanation unsorted_df = df.reindex(index=['a', 'd', 'c', 'b'], columns=['three', 'two', 'one']) unsorted_df unsorted_df.sort_index() unsorted_df.sort_index(ascending=False) unsorted_df.sort_index(axis=1) unsorted_df['three'].sort_index() # Series Explanation: 排序 排序方法可以分为两大类: 按照实际的值排序和按照label排序。 按照索引排序 sort_index() Series.sort_index(), DataFrame.sort_index(), 参数是ascending, axis。 End of explanation df1 = pd.DataFrame({'one':[2,1,1,1],'two':[1,3,2,4],'three':[5,4,3,2]}) df1 df1.sort_values(by='two') df1.sort_values(by=['one', 'two']) Explanation: 按照值排序 Series.sort_values(), DataFrame.sort_values()用于按照值进行排序,参数有by End of explanation s[2] = np.nan s.sort_values() s.sort_values(na_position='first') #将NA放在前面 Explanation: 通过na_position参数处理NA值。 End of explanation ser = pd.Series([1,2,3]) ser.searchsorted([0, 3]) #元素0下标是0, 元素3下标是2。注意不同元素之间是独立的,所以元素3的位置是2而不是插入0后的3. 
ser.searchsorted([0, 4]) ser.searchsorted([1, 3], side='right') ser.searchsorted([1, 3], side='left') ser = pd.Series([3, 1, 2]) ser.searchsorted([0, 3], sorter=np.argsort(ser)) Explanation: searchsorted() Series.searchsorted()类似numpy.ndarray.searchsorted()。 找到元素在排好序后的位置(下标)。 End of explanation s = pd.Series(np.random.permutation(10)) s s.sort_values() s.nsmallest(3) s.nlargest(3) Explanation: 最小/最大值 Series有nsmallest() nlargest()方法能够返回最小或最大的n个值。如果Series对象很大,这两种方法会比先排序后使用head()方法快很多。 End of explanation df = pd.DataFrame({'a': [-2, -1, 1, 10, 8, 11, -1], 'b': list('abdceff'), 'c': [1.0, 2.0, 4.0, 3.2, np.nan, 3.0, 4.0]}) df df.nlargest(5, 'a') #列 'a'最大的3个值 df.nlargest(5, ['a', 'c']) df.nsmallest(3, 'a') df.nsmallest(5, ['a', 'c']) Explanation: 从v0.17.0开始,DataFrame也有了以上两个方法。 End of explanation df1 = pd.DataFrame({'one':[2,1,1,1],'two':[1,3,2,4],'three':[5,4,3,2]}) df1.columns = pd.MultiIndex.from_tuples([('a','one'),('a','two'),('b','three')]) df1 df1.sort_values(by=('a','two')) Explanation: 多索引列 排序 如果一列是多索引,你必须指定全部每一级的索引。 End of explanation df = pd.DataFrame(dict(A = np.random.rand(3), B = 1, C = 'foo', D = pd.Timestamp('20010102'), E = pd.Series([1.0]*3).astype('float32'), F = False, G = pd.Series([1]*3,dtype='int8'))) df df.dtypes Explanation: 复制 copy()方法复制数据结构的值并返回一个新的对象。记住复制操作不到万不得已不使用。 比如,改变DataFrame对象值的几种方法: * inserting, deleting, modifying a column * 为索引、列 赋值 * 对于同构数据,直接使用values属性修改值。 几乎所有的方法都不对原对象进行直接修改,而是返回修改后的一个新对象!如果原对象数据被修改,肯定是你显示指定的修改操作。 ## dtypes属性 pandas对象的主要数据类型包括: float, int, bool, datetime64[ns], datetime64[ns, tz], timedelta[ns], category, object. 除此之外还有更具体的说明存储比特数的数据类型比如int64, int32。 DataFrame的dtypes属性返回一个Series对象,Series值是DF中每一列的数据类型。 End of explanation df['A'].dtype Explanation: Series同样有dtypes属性 End of explanation pd.Series([1,2,3,4,5,6.]) pd.Series([1,2,3,6.,'foo']) Explanation: 如果pandas对象的一列中有多种数据类型,dtype返回的是能兼容所有数据类型的类型,object范围最大的。 End of explanation df.get_dtype_counts() Explanation: get_dtype_counts()方法返回DataFrame中每一种数据类型的列数。 End of explanation df1 = pd.DataFrame(np.random.randn(8, 1), columns=['A'], dtype='float32') df1 df1.dtypes df2 = pd.DataFrame(dict( A = pd.Series(np.random.randn(8), dtype='float16'), B = pd.Series(np.random.randn(8)), C = pd.Series(np.array(np.random.randn(8), dtype='uint8')) )) #这里是float16, uint8 df2 df2.dtypes Explanation: 数值数据类型可以在ndarray,Series和DataFrame中传播。 End of explanation pd.DataFrame([1, 2], columns=['a']).dtypes pd.DataFrame({'a': [1, 2]}).dtypes pd.DataFrame({'a': 1}, index=list(range(2))).dtypes Explanation: 数据类型的默认值 整型的默认值蕾西int64,浮点型的默认类型是float64,和你用的是32位还是64位的系统无关。 End of explanation frame = pd.DataFrame(np.array([1, 2])) #如果是在32位系统,数据类型int32 Explanation: Numpy中数值的具体类型则要依赖于平台。 End of explanation df1.dtypes df2.dtypes df1.reindex_like(df2).fillna(value=0.0).dtypes df3 = df1.reindex_like(df2).fillna(value=0.0) + df2 df3.dtypes Explanation: upcasting 不同类型结合时会upcast,即得到更通用的类型,看例子吧: End of explanation df3 df3.dtypes df3.astype('float32').dtypes Explanation: astype()方法 使用astype()显示的进行转型。默认会返回原对象的副本,即使数据类型不变。当然可以传递copy=False参数直接对原对象转型。 End of explanation df3['D'] = '1.' 
df3['E'] = '1' df3 df3.dtypes #现在'D' 'E'两列都是object类型 df3.convert_objects(convert_numeric=True).dtypes df3['D'] = df3['D'].astype('float16') df3['E'] = df3['E'].astype('int32') df3.dtypes Explanation: 对object类型进行转型 convert_objects()方法能对object类型进行转型。如果想转为数字,参数是convert_numeric=True。 End of explanation df = pd.DataFrame({'string': list('abc'), 'int64': list(range(1, 4)), 'uint8': np.arange(3, 6).astype('u1'), 'float64': np.arange(4.0, 7.0), 'bool1': [True, False, True], 'bool2': [False, True, False], 'dates': pd.date_range('now', periods=3).values, 'category': pd.Series(list("ABC")).astype('category')}) df df['tdeltas'] = df.dates.diff() df['uint64'] = np.arange(3, 6).astype('u8') df['other_dates'] = pd.date_range('20130101', periods=3).values df['tz_aware_dates'] = pd.date_range('20130101', periods=3, tz='US/Eastern') df df.dtypes Explanation: 基于dtype 选择列 select_dtypes()方法实现了基于列dtype的构造子集方法。 End of explanation df.select_dtypes(include=[bool]) df.select_dtypes(include=['bool']) df.dtypes df.select_dtypes(include=['number', 'bool'], exclude=['unsignedinteger']) Explanation: select_dtypes()有两个参数:include, exclude。含义是要选择的列的dtype和不选择列的dtype。 End of explanation df.select_dtypes(include=['object']) Explanation: 如果要选择字符串类型的列,必须使用object类型。 End of explanation def subdtypes(dtype): subs = dtype.__subclasses__() if not subs: return dtype return [dtype, [subdtypes(dt) for dt in subs]] subdtypes(np.generic) Explanation: 如果想要知道某种数据类型的所有子类型,比如numpy.number类型,你可以定义如下的方法: End of explanation
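One caveat on the conversion step shown above: DataFrame.convert_objects(convert_numeric=True) was deprecated and later removed from pandas. Here is a hedged sketch of the same object-to-number conversion using pd.to_numeric (available since pandas 0.17), applied column by column to an illustrative frame.

import pandas as pd

df3 = pd.DataFrame({"D": ["1.", "2.5", "oops"], "E": ["1", "2", "3"]})
converted = df3.apply(pd.to_numeric, errors="coerce")   # strings that cannot be parsed become NaN
print(converted.dtypes)                                 # D -> float64, E -> int64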
12,003
Given the following text description, write Python code to implement the functionality described below step by step Description: A Systematic Approach to Visualizing Data Exploring a Telecom Customer Churn Dataset TO DO - Nothing so far. Acknowlegements - Thanks to David Wihl for fixing a plotting error. Introduction In this notebook we'll explore a dataset containing information about a telecom company's customers. It comes from an IBM Watson repository. We've chosen this dataset because it is complicated enough to teach us things but not so complicated that it sidetracks us. It also gives us an opportunity to reason about a type of business problem that you will encounter (if you haven't already). The dataset comes to us in the form of an Excel file. The name of the file is "WA_FN-UseC_-Telcom-Customer-Churn.xlsx". It's quite a cumbersome name, but we'll stick with it so we'll always know where it came from (you can quickly google the name in a pinch). And in general datasets come to us with various names and it's better to get to used to that right from the start. We get data in two ways -- by creating it or by getting it from somewhere else. Once we get data, data scientists spend a surprising amount of time getting their heads wrapped around the data, cleaning it, and preparing it to be analyzed. We'll illustrate the main steps of this process here. The objective of this notebook is to provide a template for data exploration -- the first and perhaps most critical step in any kind of data analysis project including machine learning. There are two main reasons for visualizing data Step1: The data is not in a form that can be manipulated for exploration and visualization. 2. Get a Handle on the Structure of the Data 2a. Number of Rows and Columns Step2: How many rows (# of customers) and how many columns (# attritbutes of each customer) do we have? Step3: This meams we have 7,043 customers with each customer tagged with 21 individual attributes such as customerID, gender, SeniorCitizen, etc. Let's get a complete list of the attributes. 2b. An Overview of Customer Features (or Attributes) Here is a complete list of features (also known as attributes) that describe each of the 7,043 customers. We don't know yet if all customers are tagged with all 21 features -- we'll find out soon. Step4: Some of the features have a discrete set of possible values (e.g., gender, PaymentMethod) while some others can take a range of values that need not be discrete (e.g., tenure, MonthlyCharges, TotalCharges). 2c. Categorical Feature and Their Possible Values Some of the features in our dataset are categorical -- their values come from a small handful of discrete possibilities. Features like gender and payment method fit are categorical. Categorical features are also known as discrete features. Let's separate the discrete/categorical features from the rest -- we'll get a better grip if we look at them separately first. Step5: 2d. Numerical Features Three of the features in this dataset are numerical Step6: Notice that the SeniorCitizen attribute or feature is respresented numerically as a 1 or 0 -- but these numbers actually represent "Yes" or "No". In othere words, SeniorCitizen is a categorical attribute. 3. Visualize the Numerical Features SIDEBAR Let's build ourselves a handy way to look at any set of attributes we choose. We can use this to isolate and explore various groups of attributes. 
Step7: A Simple Display of Slices of the Dataset Step8: Summary Statistics Step9: Box Plots Step10: Histogram Step11: 4. Visualize Relationships Between Numerical Features Step12: How are the numerical attributes correlated? Step13: 5. Visualize the Categorical Features Step14: How Balanced are the Categorical Features? How are the categorical features of the dataset distributed? For example, are there very few senior citizens? Is it the case that an overwhelming number of customers in the dataset have no dependents? Understanding how the features are balanced will give us a sense of how generalizable the results obtained from the dataset will be. SIDEBAR Step15: Display Categorical Features Step16: EXERCISE 1 How balanced are the categorical features? Do you anticipate any problems using this dataset to predict if a customer will switch (be subject to churn) or not? 6. Visualize Relationships Between Categorical Features Does gender make a difference for churn? Step17: Does a factor in addition to gender affect churn? Step18: 7. Visualize Relationships Between Numerical and Categorical Attributes Does tenure affect churn? Step19: Does tenure in addition to gender affect churn? Step20: Do senior citizens pay more per month? Step21: Do monthly charges depend on the lenght of the contract? Step22: Does having online backup service increase monthly charges? Step23: 8. Find and Handle Missing Values Step24: The zeros mean there are no missing data values. This is very nice, but unusual in most datasets, so we've lucked out. When there are missing values it takes effort and judgement to decide how to handle them. Sometimes it's not clear how to handle missing data even though there are a number of standard techniques to choose from. One thing to watch out for is a value that seems to be missing, except that it really is an empty string like '' or a string with some spaces such as ' '. These usually trip up the plotting functions and that's one (stressful) way to identify them. Step25: It turns out that TotalCharges are empty when tenure = 0. These rows do have a monthly charge. We'll just make the total charge equal the monthly charge in these cases.
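The description ends with the TotalCharges fix, but the code excerpt below is cut off before reaching it, so here is a hedged sketch of that step. The column names come from the description, and df_churn stands for the DataFrame loaded from the Excel file.

import pandas as pd

def fill_total_charges(df_churn):
    df = df_churn.copy()
    # Blank strings (e.g. ' ') for customers with tenure == 0 become NaN here...
    df["TotalCharges"] = pd.to_numeric(df["TotalCharges"], errors="coerce")
    # ...and are then set to the first month's charge, as described above.
    df["TotalCharges"] = df["TotalCharges"].fillna(df["MonthlyCharges"])
    return df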
Python Code: # We keep plotting simple and use common packages and defaults import matplotlib.pyplot as plt import seaborn as sns # Set the aesthetics for Seaborn visuals sns.set(context='notebook', style='whitegrid', palette='deep', font='sans-serif', font_scale=1.3, color_codes=True, rc=None) %matplotlib inline import os # to navigate the file system import numpy as np # for number crunching import pandas as pd # for data loading and manipulation # OS-independent way to navigate the file system # One directory up in relation to directory of this notebook new_dir = os.path.normpath(os.getcwd() + os.sep + os.pardir) # Where the file is file_url = new_dir + os.sep + "Data" + os.sep + "WA_Fn-UseC_-Telco-Customer-Churn.xlsx" file_url # Read the excel sheet into a pandas dataframe df_churn = pd.read_excel(file_url, sheetname=0) Explanation: A Systematic Approach to Visualizing Data Exploring a Telecom Customer Churn Dataset TO DO - Nothing so far. Acknowlegements - Thanks to David Wihl for fixing a plotting error. Introduction In this notebook we'll explore a dataset containing information about a telecom company's customers. It comes from an IBM Watson repository. We've chosen this dataset because it is complicated enough to teach us things but not so complicated that it sidetracks us. It also gives us an opportunity to reason about a type of business problem that you will encounter (if you haven't already). The dataset comes to us in the form of an Excel file. The name of the file is "WA_FN-UseC_-Telcom-Customer-Churn.xlsx". It's quite a cumbersome name, but we'll stick with it so we'll always know where it came from (you can quickly google the name in a pinch). And in general datasets come to us with various names and it's better to get to used to that right from the start. We get data in two ways -- by creating it or by getting it from somewhere else. Once we get data, data scientists spend a surprising amount of time getting their heads wrapped around the data, cleaning it, and preparing it to be analyzed. We'll illustrate the main steps of this process here. The objective of this notebook is to provide a template for data exploration -- the first and perhaps most critical step in any kind of data analysis project including machine learning. There are two main reasons for visualizing data: - To know what you have - To get ideas for how to investigate the data Missing values, data attributes with values that have different orders of magnitude, or skewed/unrepresentative data can all affect the quality of what you can infer from the data. So how should data be visualized? There is no set recipe. Here are some guidelines. 1. Get the Data into Manipulable Form First thing to do is to load up the data. This depends on the source of the input file -- it could come from a file on the local computer system or from a remote location like an Amazon AWS S3 bucket. For us, the file is already stored locally in the Data folder. NOTE: We'll use some standard Python packages to load, manipulate, and visualize the data. These packages are tools that make our lives as data scientists a lot easier and less tedious. Packages are loaded using the "import" keyword or the "from A import B" or the "from A import B as C" locutions. End of explanation # Look at the first few lines of the data -- scroll to the right to see more columns df_churn.head() Explanation: The data is not in a form that can be manipulated for exploration and visualization. 2. Get a Handle on the Structure of the Data 2a. 
Number of Rows and Columns End of explanation # Number of rows and columns num_rows, num_cols = df_churn.shape num_rows, num_cols Explanation: How many rows (# of customers) and how many columns (# attritbutes of each customer) do we have? End of explanation # Here is a list of the features with the first 5 values of each feature. feature_list = list(df_churn) # First 5 values of each feature in the list first_5 = [list(df_churn[attribute][0:5]) for attribute in feature_list] list(zip(feature_list, first_5)) Explanation: This meams we have 7,043 customers with each customer tagged with 21 individual attributes such as customerID, gender, SeniorCitizen, etc. Let's get a complete list of the attributes. 2b. An Overview of Customer Features (or Attributes) Here is a complete list of features (also known as attributes) that describe each of the 7,043 customers. We don't know yet if all customers are tagged with all 21 features -- we'll find out soon. End of explanation # Identify all the features that are categorical # Feature index numbers that are *not* categorical. # Just count from the dataset starting at customerID's index = 0 not_categorical = [0,5,18,19] # CustomerID is not a feature but a unique identifier # the categorical features are then the complement of the above list categorical = list(set(range(df_churn.shape[1])) - set(not_categorical)) # get the unique values of the categorical features [[feature_list[feature_index], list(df_churn.iloc[:, feature_index].unique())] \ for feature_index in categorical] Explanation: Some of the features have a discrete set of possible values (e.g., gender, PaymentMethod) while some others can take a range of values that need not be discrete (e.g., tenure, MonthlyCharges, TotalCharges). 2c. Categorical Feature and Their Possible Values Some of the features in our dataset are categorical -- their values come from a small handful of discrete possibilities. Features like gender and payment method fit are categorical. Categorical features are also known as discrete features. Let's separate the discrete/categorical features from the rest -- we'll get a better grip if we look at them separately first. End of explanation # Put the numerical features into a list for subsquent use numeric_features = [feature_list[n] for n in not_categorical][1:] # CustomerID is not a numeric feature numeric_features Explanation: 2d. Numerical Features Three of the features in this dataset are numerical: - tenure - MonthlyCharges - TotalCharges End of explanation # View a selection of rows and columns in the df_churn dataframe df_map = {'telco churn data': df_churn} # We're calling our dataset 'telco churn data' def table_view(data_frame_name, feature_list, start_row=3, end_row=5): ''' Displays selected columns and rows of a data frame. ''' # Verify the inputs are sane # get the size of the dataframe num_rows, num_cols = df_map[data_frame_name].shape if (start_row < 0) | (start_row > num_rows) : return print("Please use a valid Start Row number. It can be any number from 0 to {}".format(num_rows)) if (end_row < 0) | (end_row > num_rows) | (end_row < start_row + 1): return print("Please use a valid End Row number. 
\ It can be any number from 0 to {} \ and must be greater than your start row number".format(num_rows)) view = df_map[data_frame_name][feature_list].iloc[start_row:end_row] return view # Using the table_view function defined above # The name of our dataset (as defined above) is 'telco churn data' # You can select any and any number of attributes in selected_columns selected_columns = ['tenure', 'SeniorCitizen', 'MonthlyCharges', 'TotalCharges'] table_view('telco churn data', selected_columns, 10, 14) # We can explore views in an interactive way from __future__ import print_function from ipywidgets import interact, interactive, fixed, interact_manual import ipywidgets as widgets from ipywidgets import Button, HBox, VBox from IPython.display import display from IPython.display import clear_output # Layout the interactive view widgets # Data Frame Chooser Dropdown dataFrame = widgets.Select( options=['telco churn data'], value='telco churn data', description='Data Source:', disabled=False ) # Start Row Text Field startRow = widgets.IntText( value=7, description='Start Row:', disabled=False ) # End Row Text Field endRow = widgets.IntText( value=12, description='End Row:', disabled=False ) # Attribute Selector (Multiple Select) allFeatures = widgets.SelectMultiple( options = feature_list, rows = 20, description = 'Select Mulitple Features:', value = ['gender', 'SeniorCitizen', 'tenure', 'MonthlyCharges'] ) # Button button = widgets.Button( description='Show View', disabled=False, button_style='info', # 'success', 'info', 'warning', 'danger' or '' tooltip='Go!', icon='' ) def on_button_clicked(b): # Pass the values of the widgets to the table_view function clear_output() return print(table_view(dataFrame.value, list(allFeatures.value), startRow.value, endRow.value)) button.on_click(on_button_clicked) Explanation: Notice that the SeniorCitizen attribute or feature is respresented numerically as a 1 or 0 -- but these numbers actually represent "Yes" or "No". In othere words, SeniorCitizen is a categorical attribute. 3. Visualize the Numerical Features SIDEBAR Let's build ourselves a handy way to look at any set of attributes we choose. We can use this to isolate and explore various groups of attributes. End of explanation # Display the elements HBox([VBox([dataFrame, allFeatures, button]), VBox([startRow, endRow])]) Explanation: A Simple Display of Slices of the Dataset End of explanation # Here are summary statistics - rough format but still useful #df_churn['TotalCharges'].astype(float) df_churn[['tenure', 'MonthlyCharges', 'TotalCharges']].describe(include='all') Explanation: Summary Statistics End of explanation fig, (ax1,ax2,ax3) = plt.subplots(nrows=1, ncols=3, figsize=(12,4)) sns.boxplot(x=df_churn['tenure'], ax = ax1, palette='Set1') #ax1.set_title("Tenure") sns.boxplot(x=df_churn['MonthlyCharges'], ax = ax2, palette='Set2') #ax2.set_title("Monthly Charge") # There are some monthly charges missing -- supress for now because we haven't yet handled missing values #sns.boxplot(x=df_churn['TotalCharges'], ax = ax3, palette='Set3') #ax3.set_title("Total Charge") plt.tight_layout() Explanation: Box Plots End of explanation # Set up the plot def plotFeatureHist(feature_name): fig, ax = plt.subplots(figsize=(12,7)) sns.distplot(df_churn[feature_name], kde=False) return plt.show() # How are the monthly charges distributed? plotFeatureHist('MonthlyCharges') Explanation: Histogram End of explanation # How are tenure and monthly charges related? 
g = sns.JointGrid(x="MonthlyCharges", y="tenure", data=df_churn) g = g.plot(sns.regplot, sns.distplot) # Pairwise scatter plots of the numerical attributes cols_numeric = ['tenure', 'MonthlyCharges', 'TotalCharges'] sns.set(style='whitegrid', context='notebook') sns.pairplot(df_churn[cols_numeric], size=3.5) Explanation: 4. Visualize Relationships Between Numerical Features End of explanation # Calculate the correlation table # Not sure why SeniorCitizen appears but TotalCharges doesn't appear. corr = df_churn.corr() corr # Correlation Density Plot feature_display_names = ['Tenure', 'Monthly Charges', 'Total Charges'] cm = df_churn.corr() sns.set(font_scale=1) # NOTE: fmt directive controls number of decimal points displayed hm = sns.heatmap(cm, cbar=True, annot=True, square=False, fmt='.2f', annot_kws={'size':14}, yticklabels=feature_display_names, xticklabels=feature_display_names) plt.title('Correlation of Numerical Features') Explanation: How are the numerical attributes correlated? End of explanation # Remind ourselves of the categorical features in the dataset [feature_list[feature_index] for feature_index in categorical] Explanation: 5. Visualize the Categorical Features End of explanation # Set up the plot def plotFeatureCount(feature_name, count_flag): fig, ax = plt.subplots(figsize=(12,7)) if count_flag == 'Count': ax = sns.countplot(x=feature_name, data=df_churn) elif count_flag == 'Percentage': x = df_churn[feature_name].unique() y = [len([val for val in df_churn[feature_name] if val == x_val])/len(df_churn[feature_name]) * 100 \ for x_val in x] ax = sns.barplot(x,y) plt.ylabel(count_flag) return plt.show() # set up the plot for interactivity # Dropdown w_features = widgets.Dropdown( options = [feature_list[feature_index] for feature_index in categorical], description = 'Select Feature:', value = 'gender', button_style='info' ) w_radio = widgets.RadioButtons( options=['Count', 'Percentage'], value='Count', description='Display:', disabled=False ) def on_value_change(change): # Pass the value of the dropdown to the plotFeatureCount function clear_output() return plotFeatureCount(w_features.value, w_radio.value) w_features.observe(on_value_change) w_radio.observe(on_value_change) Explanation: How Balanced are the Categorical Features? How are the categorical features of the dataset distributed? For example, are there very few senior citizens? Is it the case that an overwhelming number of customers in the dataset have no dependents? Understanding how the features are balanced will give us a sense of how generalizable the results obtained from the dataset will be. SIDEBAR End of explanation # Show a default plot plotFeatureCount(w_features.value, w_radio.value) # Show the widgets HBox([w_features, w_radio]) Explanation: Display Categorical Features End of explanation # One way to visualize the effect of gender on churn -- i.e. interdependence between variables from plotnine import * (ggplot(df_churn, aes(x='Churn', fill='gender')) + geom_bar(position='fill')) Explanation: EXERCISE 1 How balanced are the categorical features? Do you anticipate any problems using this dataset to predict if a customer will switch (be subject to churn) or not? 6. Visualize Relationships Between Categorical Features Does gender make a difference for churn? End of explanation # Does the streaming movies along with gender affect churn? 
from plotnine import * (ggplot(df_churn, aes(x='Churn', fill='gender')) + geom_bar(position='fill') + facet_wrap('~StreamingMovies')) Explanation: Does a factor in addition to gender affect churn? End of explanation # Density Plot sns.kdeplot(df_churn.query("Churn == 'No'").tenure, shade=True, alpha=0.2, label='No', color='salmon') sns.kdeplot(df_churn.query("Churn == 'Yes'").tenure, shade=True, alpha=0.2, label='Yes', color='dodgerblue') plt.title('The first 20 months are critical') plt.xlabel('Tenure') Explanation: 7. Visualize Relationships Between Numerical and Categorical Attributes Does tenure affect churn? End of explanation # Facet Plots g = sns.FacetGrid(df_churn, row='Churn', col='gender', margin_titles=True) g.map(sns.distplot, 'tenure') Explanation: Does tenure in addition to gender affect churn? End of explanation # Pivots # Do monthly charges depend on gender? They don't seem to; but they do seem to depend on whether # or not the person is a senior citizen -- 0 = Not a senior citizen, 1 = senior citizen fig, ax = plt.subplots(figsize=(10,7)) sns.boxplot(x='gender', y='MonthlyCharges', hue="SeniorCitizen", data=df_churn, palette='Set3') plt.legend(loc='upper right') Explanation: Do senior citizens pay more per month? End of explanation # Jitter plot # Do monthly charges depend on the length of contract? from plotnine import * (ggplot(df_churn, aes(x='Contract', y='MonthlyCharges')) + geom_jitter(position=position_jitter(0.4))) Explanation: Do monthly charges depend on the lenght of the contract? End of explanation # Jitter plot # How does online backup service affect monthly charges? from plotnine import * (ggplot(df_churn, aes(x='OnlineBackup', y='MonthlyCharges')) + geom_jitter(position=position_jitter(0.4))) Explanation: Does having online backup service increase monthly charges? End of explanation # For each column in the dataset, add up the rows in which the column data is missing # You can do this across the entire dataset for a quick look at what's missing df_churn.isnull().sum() Explanation: 8. Find and Handle Missing Values End of explanation # For each of the features, find rows where they might be empty -- we'll have to handle these appropriately def isEmpty(feature): empty_rows = [] for i in range(len(df_churn)): if isinstance(df_churn[feature][i], str): empty_rows.append(i) return empty_rows # Which of the numerical features have no numerical values? empty = [[feature,isEmpty(feature)] for feature in numeric_features] empty # Let's have a look at these rows where one or more numberical features are empty. df_churn.iloc[empty[2][1]] Explanation: The zeros mean there are no missing data values. This is very nice, but unusual in most datasets, so we've lucked out. When there are missing values it takes effort and judgement to decide how to handle them. Sometimes it's not clear how to handle missing data even though there are a number of standard techniques to choose from. One thing to watch out for is a value that seems to be missing, except that it really is an empty string like '' or a string with some spaces such as ' '. These usually trip up the plotting functions and that's one (stressful) way to identify them. 
End of explanation df_churn.iloc[488]['MonthlyCharges'] # Get the monthly charges for these rows monthly_charges = [df_churn.iloc[loc]['MonthlyCharges'] for loc in empty[2][1]] monthly_charges #df_churn.set_value(488, 'TotalCharges', 52.555) #df_churn.iloc[488] # For all customers whose tenure is 0 months, set the TotalCharges equal to the MonthlyCharges # This is the new dataset [df_churn.set_value(loc, 'TotalCharges', df_churn.iloc[loc]['MonthlyCharges']) for loc in empty[2][1]] df_churn.shape # Let's check if the values of TotalCharges are as they should be. df_churn.iloc[empty[2][1]][['MonthlyCharges','TotalCharges']] Explanation: It turns out that TotalCharges are empty when tenure = 0. These rows do have a monthly charge. We'll just make the total charge equal the monthly charge in these cases. End of explanation
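A vectorised pandas alternative to the per-row loop described above -- this is only a sketch, assuming df_churn has been loaded as in this entry, and it is not part of the original notebook. pd.to_numeric with errors='coerce' turns the blank-string TotalCharges entries into NaN so they can be filled from MonthlyCharges in one step.

import pandas as pd

# Blank strings in TotalCharges become NaN, then fall back to the monthly charge.
df_churn['TotalCharges'] = pd.to_numeric(df_churn['TotalCharges'], errors='coerce')
df_churn['TotalCharges'] = df_churn['TotalCharges'].fillna(df_churn['MonthlyCharges'])

# Sanity check: the numeric columns should now be free of missing values.
assert df_churn[['tenure', 'MonthlyCharges', 'TotalCharges']].isnull().sum().sum() == 0

Either approach gives the same result for the handful of tenure-0 rows; the vectorised form simply avoids DataFrame.set_value, which newer pandas releases have removed.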
12,004
Given the following text description, write Python code to implement the functionality described below step by step Description: The Simple Harmonic Oscillator Here we will expand on the harmonic oscillator first shown in the getting started script. I'll walk you through some of the features of desolver and hopefully give you a better sense of how to use the software. So let's begin! First we import the libraries we'll need. I import all the matplotlib machinery using the magic command %matplotlib, but this is only for notebook/ipython environments. Then I import desolver and the desolver backend as well (this will be useful for specifying our problem), and set the default datatype to float64. Step1: Specifying the Dynamical System Now let's specify the right hand side of our dynamical system. It should be $$ \frac{\mathrm{d^2}x}{\mathrm{dt}^2} = -\frac{k}{m} x $$ But desolver only works with first order differential equations, thus we must cast this into a first order system before we can solve it. Thus we obtain the following system $$ \begin{array}{l} \frac{\mathrm{d}x}{\mathrm{dt}} = v_x \\ \frac{\mathrm{d}v_x}{\mathrm{dt}} = -\frac{k}{m} x \end{array} $$ which can be specified as a simple matrix equation as $$ \begin{array}{c} \frac{\mathrm{d}y}{\mathrm{dt}} = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & 0 \end{bmatrix} \cdot \vec y \\ \vec y = \begin{bmatrix} x \\ v_x \end{bmatrix} \end{array} $$ Step2: First thing to notice is that we used the backend to specify the matrix and minimise the use of numpy specific machinery. This isn't necessary if you only use numpy, but by doing this we can make this code run with the pytorch backend with minimal effort. Second thing is the use of the decorator @de.rhs_prettifier, this is a convenience decorator that allows me to specify a text representation of the differential equations. Convenient if I want to print it Step3: Or if I want it to look pretty when it is rendered in the notebook Step4: Let's specify the initial conditions as well Step5: And now we're ready to integrate! The Numerical Integration There are a number of things we must choose before we numerically integrate our system of equations. The first of these is whether or not we want an interpolating spline so that we can compute the state of our system between timesteps. The second is the duration of the numerical integration. And the third is the value of parameters of the system Step6: Since k=1 and m=1 and we integrated for 1 cycle, we expect that the final state of the system is the same as the initial state. Step7: Wonderful! We see that the final state is almost exactly the same. Furthermore, we see that this is within the tolerances we specified when creating the OdeSystem where we set rtol and atol to 1e-9. To show you that this is not a fluke, we'll change them to 1e-12 and see what happens. Step8: It's very simple to change the tolerances and rerun the system. Furthermore, we can update our constants and see what happens. If we quadruple k, the spring constant, the period will halve (since $T = 2\pi\sqrt{m/k}$), and so after an integration period of $2\pi$ the system completes two full cycles and should, yet again, be in a final state that is almost exactly the initial state. Step9: The final state is again almost the same as the initial state, but now the maximum absolute difference has increased. This is due to the fact that the numerical error when using an adaptive Runge-Kutta method is not a random walk, but a function of the whole numerical procedure.
Thus if we double the initial integration time, and set k=1 again, we'll see that the error is larger. Step11: The longer we integrate for, the larger this error will become. Is there anything we can do? YES We can use a symplectic integrator since this is a system with a Hamiltonian $H=\frac{kx^2}{2} + \frac{m v_x^2}{2}$. A symplectic integrator preserves the symplectic two-form $\mathrm{d}\vec p \wedge\mathrm{d}\vec q$ where $p$ is the momentum and $q$ is the position such that $$ \begin{array}{l} \frac{\mathrm{d}p}{\mathrm{dt}} = -\frac{\partial H}{\partial q} \\ \frac{\mathrm{d}q}{\mathrm{dt}} = \frac{\partial H}{\partial p} \end{array} $$ where, in our case, $v_x = \frac{p}{m}$ and $x = q$. Why is this important? I'll leave the detailed theory to Wikipedia and other sources, but the gist of it is that a symplectic integrator is essentially a geometric transformation in the phase space of the system and thus preserves the differential volume element of a Hamiltonian that is almost, but not quite, the Hamiltonian of the system. This is great because it means that, in the best case scenario, the errors in the numerically integrated states are random walks instead of increasing linearly with the integration time. The downside is that a symplectic integrator is not adaptive and thus requires more function evaluations than a Runge-Kutta method. Step12: Above, I've run the numerical integration using a step size of $0.05$ for increasing integration periods from one cycle to four cycles to sixteen cycles and, despite that, the error has stayed near the limits of double precision arithmetic. If I further integrate for 1024 cycles, we'll see that the error begins to increase, and this is expected because although the errors in each step may be random walks, they have a cumulative effect that is not necessarily a random walk. Step13: Looking at the Hamiltonian So we've said that a symplectic integrator preserves a perturbed Hamiltonian; we should be able to see this by computing the Hamiltonian at each timestep and looking at how it evolves over the course of a numerical integration. Let's first define the Hamiltonian. Step14: Now it's not interesting to just look at the Hamiltonian alone, we'd like to look at how the Hamiltonian evolves so we will look at the absolute difference between the Hamiltonian at time $t$ and the initial Hamiltonian. To start off, we'll look at the Hamiltonian when we use an adaptive integrator. Step15: We see that the Hamiltonian starts off correctly, but then, very rapidly, jumps up to an error on the order of $10^{-9}$ which is the same as the tolerance we've set for the numerical integration. Now let's compare this to a symplectic integrator. Step16: Well that's completely different behaviour to the adaptive integrator. We see now that the Hamiltonian oscillates between a maximal error and a minimal one, but remains within $10^{-14}$ of the true Hamiltonian. This is exactly the behaviour we were expecting. Let's look further long term and compare with the adaptive integrator; we'll decrease the adaptive integration tolerances down to $10^{-12}$ to see if that helps.
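As a quick reference, the closed-form solution that the "back to the initial state" checks above rely on is

$$ x(t) = x_0 \cos(\omega t) + \frac{v_{x,0}}{\omega} \sin(\omega t), \qquad v_x(t) = v_{x,0} \cos(\omega t) - x_0\,\omega \sin(\omega t), \qquad \omega = \sqrt{\frac{k}{m}}, \qquad T = \frac{2\pi}{\omega} = 2\pi\sqrt{\frac{m}{k}} . $$

With $x_0 = 1$, $v_{x,0} = 0$ and $k = m = 1$, the exact state at any multiple of $2\pi$ is $(1, 0)$, which is why the integrations compare the final state against the initial one.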
Python Code: %matplotlib inline from matplotlib import pyplot as plt import desolver as de import desolver.backend as D D.set_float_fmt('float64') Explanation: The Simple Harmonic Oscillator Here we will expand on the harmonic oscillator first shown in the getting started script. I'll walk you through some of the features of desolver and hopefully give a better a sense of how to use the software. So let's begin! First we import the libraries we'll need. I import all the matplotlib machinery using the magic command %matplotlib, but this is only for notebook/ipython environments. Then I import desolver and the desolver backend as well (this will be useful for specifying our problem), and set the default datatype to float64. End of explanation @de.rhs_prettifier( equ_repr="[vx, -k*x/m]", md_repr=r $$ \frac{\mathrm{d}y}{\mathrm{dt}} = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & 0 \end{bmatrix} \cdot \vec y $$ ) def rhs(t, state, k, m, **kwargs): return D.array([[0.0, 1.0], [-k/m, 0.0]])@state Explanation: Specifying the Dynamical System Now let's specify the right hand side of our dynamical system. It should be $$ \frac{\mathrm{d^2}x}{\mathrm{dt}^2} = -\frac{k}{m} x $$ But desolver only works with first order differential equations, thus we must cast this into a first order system before we can solve it. Thus we obtain the following system $$ \begin{array}{l} \frac{\mathrm{d}x}{\mathrm{dt}} = v_x \ \frac{\mathrm{d}v_x}{\mathrm{dt}} = -\frac{k}{m} x \end{array} $$ which can be specified as a simple matrix equation as $$ \begin{array}{c} \frac{\mathrm{d}y}{\mathrm{dt}} = \begin{bmatrix} 0 & 1 \ -\frac{k}{m} & 0 \end{bmatrix} \cdot \vec y \ \vec y = \begin{bmatrix}x \ v_x\end{bmatrix} \end{array} $$ End of explanation print(rhs) Explanation: First thing to notice is that we used the backend to specify the matrix and minimise the use of numpy specific machinery. This isn't necessary if you only use numpy, but by doing this we can make this code run with the pytorch backend with minimal effort. Second thing is the use of the decorator @de.rhs_prettifier, this is a convenience decorator that allows me to specify a text representation of the differential equations. Convenient if I want to print it End of explanation display(rhs) Explanation: Or if I want it to look pretty when it is rendered in the notebook End of explanation y_init = D.array([1., 0.]) Explanation: Let's specify the initial conditions as well End of explanation a = de.OdeSystem(rhs, y0=y_init, dense_output=True, t=(0, 2*D.pi), dt=0.01, rtol=1e-9, atol=1e-9, constants=dict(k=1.0, m=1.0)) a.integrate() Explanation: And now we're ready to integrate! The Numerical Integration There are a number of things we must choose before we numerically integrate our system of equations. The first of these is whether or not we want an interpolating spline so that we can compute the state of our system between timesteps. The second is the duration of the numerical integration. And the third is the value of parameters of the system: k and m. Unlike scipy, desolver let's you specify a dictionary of constants that are passed to the rhs function and can be modified even after constructing the OdeSystem object. This is particularly useful if you want to vary a single constant over multiple integrations without changing any other parameters. Now, we'll set the numerical integration to 1 cycle of the oscillator at k=1 and m=1 which, when computed from the formula $T=2\pi\sqrt{\frac{k}{m}}$, is exactly $2\pi$. 
End of explanation print("initial state = {}".format(a[0].y)) print("final state = {}".format(a[-1].y)) print("maximum absolute difference = {}".format(D.max(D.abs(a[-1].y - a[0].y)))) Explanation: Since k=1 and m=1 and we integrated for 1 cycle, we expect that the final state of the system is the same as the initial state. End of explanation a.rtol = 1e-12 a.atol = 1e-12 a.reset() a.integrate() print("initial state = {}".format(a[0].y)) print("final state = {}".format(a[-1].y)) print("maximum absolute difference = {}".format(D.max(D.abs(a[-1].y - a[0].y)))) Explanation: Wonderful! We see that the final state is almost exactly the same. Furthermore, we see that this is within the tolerances we specified when creating the OdeSystem where we set rtol and atol to 1e-9. To show you that this is not a fluke, we'll change them to 1e-12 and see what happens. End of explanation a.constants['k'] = a.constants['k'] * 4 a.reset() a.integrate() print("initial state = {}".format(a[0].y)) print("final state = {}".format(a[-1].y)) print("maximum absolute difference = {}".format(D.max(D.abs(a[-1].y - a[0].y)))) Explanation: It's very simple to change the tolerances and rerun the system. Furthermore, we can update our constants and see what happens. If we quadruple k, the spring constant, the period will double, and so after an integration period of $2\pi$ the system should, yet again, be in a final state that is almost exactly the initial state. End of explanation a.constants['k'] = 1 a.tf = 4*D.pi a.reset() a.integrate() print("initial state = {}".format(a[0].y)) print("final state = {}".format(a[-1].y)) print("maximum absolute difference = {}".format(D.max(D.abs(a[-1].y - a[0].y)))) Explanation: The final state is again almost the same as the initial state, but now the maximum absolute difference has increased. This is due to the fact that the numerical error when using an adaptive runge-kutta method is not a random walk, but a function of the whole numerical procedure. Thus if we double the initial integration time, and set k=1 again, we'll see that the error is larger. End of explanation a.set_method("BABS9O7H") a.dt = 0.05 a.tf = 2*D.pi a.integrate() print("initial state = {}".format(a[0].y)) print("final state = {}".format(a[-1].y)) print("maximum absolute difference = {}".format(D.max(D.abs(a[-1].y - a[0].y)))) a.set_method("BABS9O7H") a.dt = 0.05 a.tf = 8*D.pi a.integrate() print("initial state = {}".format(a[0].y)) print("final state = {}".format(a[-1].y)) print("maximum absolute difference = {}".format(D.max(D.abs(a[-1].y - a[0].y)))) a.set_method("BABS9O7H") a.dt = 0.05 a.tf = 32*D.pi a.integrate() print("initial state = {}".format(a[0].y)) print("final state = {}".format(a[-1].y)) print("maximum absolute difference = {}".format(D.max(D.abs(a[-1].y - a[0].y)))) Explanation: The longer we integrate for, the larger this error will become. Is there anything we can do? YES We can use a symplectic integrator since this is a system with a Hamiltonian $H=\frac{kx^2}{2} + \frac{mv_x^2}/2$. A symplectic integrator preserves the symplectic two-form $\mathrm{d}\vec p \wedge\mathrm{d}\vec q$ where $p$ is the momentum and $q$ is the position such that $$ \begin{array}{l} \frac{\mathrm{d}p}{\mathrm{dt}} = -\frac{\partial H}{\partial q} \ \frac{\mathrm{d}q}{\mathrm{dt}} = \frac{\partial H}{\partial p} \end{array} $$ where, in our case, $v_x = \frac{p}{m}$ and $x = q$. Why is this important? 
I'll leave the detailed theory to Wikipedia and other sources, but the jist of it is that a symplectic integrator is essentially a geometric transformation in the phase space of the system and thus preserves the differential volume element of a Hamiltonian that is almost, but not quite, the Hamiltonian of the system. This is great because it means that, in the best case scenario, the errors in the numerically integrated states are random walks instead of increasing linearly with the integration time. The downside is that a symplectic integrator is not adaptive and thus requires more function evaluations than a Runge-Kutta method. End of explanation a.set_method("BABS9O7H") a.dt = 0.05 a.tf = 2*1024*D.pi a.integrate() print("initial state = {}".format(a[0].y)) print("final state = {}".format(a[-1].y)) print("maximum absolute difference = {}".format(D.max(D.abs(a[-1].y - a[0].y)))) fig = plt.figure(figsize=(14,8)) ax = fig.add_subplot(111) displn = ax.plot(a.t, a.y[:, 0], label="Oscillator Displacement", color='C0') axt = ax.twinx() velln = axt.plot(a.t, a.y[:, 1], label="Oscillator Velocity", color='red', linestyle='--') ax.set_xlabel("Time (s)") ax.set_ylabel("Displacement (m)") axt.set_ylabel("Velocity (m/s)") ax.set_xlim(0, 2*D.pi) ax.spines['left'].set_color('C0') ax.tick_params(axis='y', colors='C0') ax.yaxis.label.set_color('C0') axt.spines['right'].set_color('red') axt.spines['left'].set_color('C0') axt.tick_params(axis='y', colors='red') axt.yaxis.label.set_color('red') # added these three lines lns = displn + velln labs = [l.get_label() for l in lns] ax.legend(lns, labs) ax.set_title("1 Cycle of a Harmonic Oscillator") plt.tight_layout() Explanation: Above, I've run the numerical integration using a step size of $0.05$ for increasing integration periods from one cycle to four cycles to sixteen cycles and, despite that, the error has stayed near the limits of double precision arithmetic. If I further integrate for 1024 cycles, we'll see that the error begins to increase and this is expected because although the errors in each step may be random walks, they have a cumulative effect that is not necessarily a random walk. End of explanation def kinetic_energy(t, state, k, m): x, vx = state return m * vx**2 / 2 def potential_energy(t, state, k, m): x, vx = state return k * x**2 / 2 def hamiltonian(t, state, k, m): return kinetic_energy(t, state, k, m) + potential_energy(t, state, k, m) Explanation: Looking at the Hamiltonian So we've said that a symplectic integrator preserves a perturbed Hamiltonian, we should be able to see this by computing the Hamiltonian at each timestep and looking at how it evolves over the course of a numerical integration. Let's first define the Hamiltonian. 
End of explanation a.reset() a.method = "RK45" a.rtol = 1e-9 a.atol = 1e-9 a.tf = 4*2*D.pi a.integrate() fig = plt.figure(figsize=(14,8)) ax = fig.add_subplot(111) E_H = D.abs(hamiltonian(a.t, a.y.T, **a.constants) - hamiltonian(a.t[0], a.y[0], **a.constants)) disp_H = ax.plot(a.t, E_H, label="Hamiltonian / Total Energy", color='black') ax.set_xlabel("Time (s)") ax.set_ylabel("Hamiltonian (J)") # added these three lines ax.legend() ax.set_yscale("log") ax.set_title("{:.0f} Cycle{} of a Harmonic Oscillator".format(a.tf/(2*D.pi), "s" if a.tf/(2*D.pi) > 1 else "")) plt.tight_layout() Explanation: Now it's not interesting to just look at the Hamiltonian alone, we'd like to look at how the Hamiltonian evolves so we will look at the absolute difference between the Hamiltonian at time $t$ and the initial Hamiltonian. To start off, we'll look at the Hamiltonian when we use an adaptive integrator. End of explanation a.reset() a.set_method("BABS9O7H") a.rtol = 1e-9 a.atol = 1e-9 a.dt = 1e-1 a.tf = 4*2*D.pi a.integrate() fig = plt.figure(figsize=(14,8)) ax = fig.add_subplot(111) E_H = D.abs(hamiltonian(a.t, a.y.T, **a.constants) - hamiltonian(a.t[0], a.y[0], **a.constants)) disp_H = ax.plot(a.t, E_H, label="Hamiltonian / Total Energy", color='C0') ax.set_xlabel("Time (s)") ax.set_ylabel("Hamiltonian (J)") # added these three lines ax.legend() ax.set_yscale("log") ax.set_title("{:.0f} Cycle{} of a Harmonic Oscillator".format(a.tf/(2*D.pi), "s" if a.tf/(2*D.pi) > 1 else "")) plt.tight_layout() Explanation: We see that the Hamiltonian starts off correctly, but then, very rapidly, jumps up to an error on the order of $10^{-9}$ which is the same as the tolerance we've set for the numerical integration. Now let's compare this to a symplectic integrator. End of explanation fig = plt.figure(figsize=(14,8)) ax = fig.add_subplot(111) a.tf = 1024*2*D.pi a.reset() a.set_method("BABS9O7H") a.dt = 1e-1 a.integrate() E_H = D.abs(hamiltonian(a.t, a.y.T, **a.constants) - hamiltonian(a.t[0], a.y[0], **a.constants)) disp_H = ax.plot(a.t, E_H, label="BABS9O7H", color='C0') a.reset() a.set_method("RK45") a.rtol = 1e-12 a.atol = 1e-12 a.dt = 1e-1 a.integrate() E_H = D.abs(hamiltonian(a.t, a.y.T, **a.constants) - hamiltonian(a.t[0], a.y[0], **a.constants)) disp_H = ax.plot(a.t, E_H, label="Runge-Kutta 45", color='C1') a.reset() a.set_method("RK87") a.rtol = 1e-12 a.atol = 1e-12 a.dt = 1e-1 a.integrate() E_H = D.abs(hamiltonian(a.t, a.y.T, **a.constants) - hamiltonian(a.t[0], a.y[0], **a.constants)) disp_H = ax.plot(a.t, E_H, label="Runge-Kutta 8(7)", color='C2') a.reset() a.set_method("RK108") a.rtol = 1e-12 a.atol = 1e-12 a.dt = 1e-1 a.integrate() E_H = D.abs(hamiltonian(a.t, a.y.T, **a.constants) - hamiltonian(a.t[0], a.y[0], **a.constants)) disp_H = ax.plot(a.t, E_H, label="Runge-Kutta 10(8)", color='C3') a.reset() a.set_method("RK1412") a.rtol = 1e-12 a.atol = 1e-12 a.dt = 1e-1 a.integrate() E_H = D.abs(hamiltonian(a.t, a.y.T, **a.constants) - hamiltonian(a.t[0], a.y[0], **a.constants)) disp_H = ax.plot(a.t, E_H, label="Runge-Kutta 14(12)", color='C4') ax.set_xlabel("Time (s)") ax.set_ylabel("Hamiltonian (J)") # added these three lines ax.legend(loc='lower right') ax.set_yscale("log") ax.set_title("{:.0f} Cycle{} of a Harmonic Oscillator".format(a.tf/(2*D.pi), "s" if a.tf/(2*D.pi) > 1 else "")) plt.tight_layout() Explanation: Well that's completely different behaviour to the adaptive integrator. 
We see now that the Hamiltonian oscillates between a maximal error and a minimal one, but remains within $10^{-14}$ of the true Hamiltonian. This is exactly the behaviour we were expecting. Let's look further long term and compare with the adaptive integrator, we'll decrease the adaptive integration tolerances down to $10^{-12}$ to see if that helps. End of explanation
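A minimal self-contained sketch of the same idea without desolver, using velocity Verlet (a simple symplectic method) in plain numpy; the step size of 0.05 mirrors the runs above, and the point is that the energy error stays bounded instead of growing with the integration time.

import numpy as np

def velocity_verlet_sho(x0, v0, k, m, dt, n_steps):
    # Velocity Verlet is symplectic; for the SHO its energy error oscillates
    # but does not drift, unlike a generic adaptive Runge-Kutta method.
    x, v = x0, v0
    xs, vs = [x], [v]
    for _ in range(n_steps):
        a = -k / m * x
        x = x + v * dt + 0.5 * a * dt**2
        a_new = -k / m * x
        v = v + 0.5 * (a + a_new) * dt
        xs.append(x)
        vs.append(v)
    return np.array(xs), np.array(vs)

k = m = 1.0
dt = 0.05
n_steps = int(2 * 1024 * np.pi / dt)   # roughly 1024 cycles, as in the runs above
xs, vs = velocity_verlet_sho(1.0, 0.0, k, m, dt, n_steps)
energy = 0.5 * k * xs**2 + 0.5 * m * vs**2
print("max |H(t) - H(0)| =", np.abs(energy - energy[0]).max())

This is only an illustration of why the BABS9O7H runs behave so well; desolver's higher-order symplectic schemes follow the same principle with a much smaller per-step error.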
12,005
Given the following text description, write Python code to implement the functionality described below step by step Description: Load data Predict the california average house value Step1: Model with the recommendation of the cheat-sheet Based on the Sklearn algorithm cheat-sheet Step2: Improve the model parametrization Step3: Check the second cheat sheet recommendation, a LinearSVR model Step4: Build a decision tree regressor
Python Code: from sklearn import datasets all_data = datasets.california_housing.fetch_california_housing() # Describe dataset print(all_data.DESCR) print(all_data.feature_names) # Print some data lines print(all_data.data[:10]) print(all_data.target) #Randomize, normalize and separate train & test from sklearn.utils import shuffle X, y = shuffle(all_data.data, all_data.target, random_state=42) from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) # Normalize the data from sklearn.preprocessing import Normalizer normal = Normalizer() X_train = normal.fit_transform(X_train) X_test = normal.transform(X_test) Explanation: Load data Predict the california average house value End of explanation from sklearn import linear_model reg = linear_model.Ridge() reg.fit(X_train, y_train) # Evaluate from sklearn.metrics import mean_absolute_error y_test_predict = reg.predict(X_test) print('Mean absolute error ', mean_absolute_error(y_test, y_test_predict)) print('Variance score: ', reg.score(X_test, y_test)) # Plot a scaterplot real vs predict import matplotlib.pyplot as plt %matplotlib inline plt.scatter(y_test, y_test_predict) # Save model from sklearn.externals import joblib joblib.dump(reg, '/tmp/reg_model.pkl') # Load model reg_loaded = joblib.load('/tmp/reg_model.pkl') # View the coeficients print('Coeficients :', reg_loaded.coef_) print('Intercept: ', reg_loaded.intercept_ ) Explanation: Model with the recommendation of the cheat-sheet Based on the Sklearn algorithm cheat-sheet End of explanation # Use the function RidgeCV to select the best alpha using cross validation reg = linear_model.RidgeCV(alphas=[.1, 1., 10.]) reg.fit(X_train, y_train) print('Best alpha: ', reg.alpha_) # Build a model with the recommended alpha reg = linear_model.Ridge (alpha = 0.1) reg.fit(X_train, y_train) y_test_predict = reg.predict(X_test) print('Mean absolute error ', mean_absolute_error(y_test, y_test_predict)) print('Variance score: ', reg.score(X_test, y_test)) plt.scatter(y_test, y_test_predict) Explanation: Improve the model parametrization End of explanation from sklearn import svm reg_svr = svm.LinearSVR() reg_svr.fit(X_train, y_train) y_test_predict = reg_svr.predict(X_test) print('Mean absolute error ', mean_absolute_error(y_test, y_test_predict)) print('Variance score: ', reg_svr.score(X_test, y_test)) plt.scatter(y_test, y_test_predict) Explanation: Check the second cheat sheet recommendation, a LinearSVR model End of explanation # Basic regression tree from sklearn import tree dtree = tree.DecisionTreeRegressor() dtree.fit(X_train, y_train) y_test_predict = dtree.predict(X_test) print('Mean absolute error ', mean_absolute_error(y_test, y_test_predict)) print('Variance score: ', dtree.score(X_test, y_test)) plt.scatter(y_test, y_test_predict) # A second model regularized controling the depth dtree2 = tree.DecisionTreeRegressor(max_depth=5) dtree2.fit(X_train, y_train) y_test_predict = dtree2.predict(X_test) print('Mean absolute error ', mean_absolute_error(y_test, y_test_predict)) print('Variance score: ', dtree2.score(X_test, y_test)) plt.scatter(y_test, y_test_predict) # Plot the tree import pydotplus from IPython.display import Image dot_data = tree.export_graphviz(dtree2, out_file=None, feature_names=all_data.feature_names, filled=True, rounded=True, special_characters=True) graph = pydotplus.graph_from_dot_data(dot_data) Image(graph.create_png()) Explanation: Build a decision tree regressor End of explanation
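The max_depth=5 above was chosen by hand; as a sketch (not part of the original notebook), the same choice can be made by cross-validation on the training split defined earlier.

from sklearn.model_selection import GridSearchCV
from sklearn import tree

# Search a small grid of tree depths with 5-fold cross-validation, scoring by MAE.
param_grid = {'max_depth': [2, 3, 4, 5, 6, 8, 10, None]}
search = GridSearchCV(tree.DecisionTreeRegressor(random_state=42),
                      param_grid,
                      scoring='neg_mean_absolute_error',
                      cv=5)
search.fit(X_train, y_train)
print('Best depth:', search.best_params_)
print('CV MAE:', -search.best_score_)
print('Test R2 of tuned tree:', search.best_estimator_.score(X_test, y_test))

Whatever depth wins, evaluating it once on X_test keeps the comparison consistent with the earlier models.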
12,006
Given the following text description, write Python code to implement the functionality described below step by step Description: Lessons for the chamber-of-commerce exam (Kammerprüfung) Step1: Sommer_2014 Step2: Question 1 Write an SQL query that lists every article whose article description contains the string "Schmerzmittel" or "schmerzmittel". For each article, all attributes should be returned. Solution Step3: Question 2 Write a query that lists all customers and their revenues. For each customer, all attributes should be returned. The list should be sorted by revenue in descending order. Solution Step4: Question 3 Write an SQL query that determines the following for each article Step5: Question 4 Germany is divided into 10 postal-code regions (0-9, first digit of the PLZ). Write an SQL query for a list that shows the total revenue for each PLZ region (0-9). The list should be sorted by total revenue in descending order. Solution Step6: Heiko Mader, in his own words Step7: Exercise 3 Step8: Exercise 4 The original from H.M. raises an error Step9: Small changes lead to an "almost correct" result, but it only multiplies the first record per group from the Rechnungsposition table (see 2527.2 for PLZ 9); the same mistake is probably also possible in Exercise 3, but there it may not be noticed ???
Python Code: %load_ext sql Explanation: Unterricht zur Kammerprüfung End of explanation %sql mysql://steinam:steinam@localhost/sommer_2014 Explanation: Sommer_2014 End of explanation %%sql select * from artikel where Art_Bezeichnung like '%Schmerzmittel%' or Art_Bezeichnung like '%schmerzmittel%'; Explanation: Frage 1 Erstellen Sie eine SQL-Abfrage, die alle Artikel auflistet, deren Artikelbezeichnungen die Zeichenketten "Schmerzmittel" oder "schmerzmittel" enthalten. Zu jedem Artikel sollen jeweils alle Attribute ausgeben werden. Lösung End of explanation %%sql select k.Kd_firma, sum(rp.RgPos_Menge * rp.RgPos_Preis) as Umsatz from Kunde k left join Rechnung r on k.Kd_Id = r.Rg_Kd_ID inner join Rechnungsposition rp on r.Rg_ID = rp.RgPos_RgID group by k.`Kd_Firma` order by Umsatz desc; %%sql -- Originallösung bringt das gleiche Ergebnis select k.`Kd_Firma`, (select sum(RgPos_menge * RgPos_Preis) from `rechnungsposition` rp, rechnung r where r.`Rg_ID` = `rp`.`RgPos_RgID` and r.`Rg_Kd_ID` = k.`Kd_ID`) as Umsatz from kunde k order by Umsatz desc Explanation: Frage 2 Erstellen Sie eine Abfrage, die alle Kunden und deren Umsätze auflistet. Zu jedem Kunden aollen alle Attribute ausgegeben werden. Die Liste soll nach Umsatz absteigend sortiert werden. Lösung End of explanation %%sql -- meine Lösung select artikel.*, sum(RgPos_Menge) as Menge, count(RgPos_ID) as Anzahl from artikel inner join `rechnungsposition` where `rechnungsposition`.`RgPos_ArtID` = `artikel`.`Art_ID` group by artikel.`Art_ID` %%sql -- Leitungslösung select artikel.* , (select sum(RgPOS_Menge) from Rechnungsposition rp where rp.RgPos_ArtID = artikel.Art_ID) as Menge, (select count(RgPOS_menge) from Rechnungsposition rp where rp.RgPos_ArtID = artikel.Art_ID) as Anzahl from Artikel Explanation: Frage 3 Erstellen Sie eine SQL-Abfrage, die für jeden Artikel Folgendes ermittelt: - Die Menge, die insgesamt verkauft wurde - Die Anzahl der Rechnungspositionen Lösung End of explanation %%sql -- Original select left(kunde.`Kd_PLZ`,1) as Region, sum(`rechnungsposition`.`RgPos_Menge` * `rechnungsposition`.`RgPos_Preis`) as Summe from kunde left join rechnung on kunde.`Kd_ID` = rechnung.`Rg_Kd_ID` left join rechnungsposition on `rechnung`.`Rg_ID` = `rechnungsposition`.`RgPos_RgID` group by Region order by Summe; %%sql -- Inner join ändert nichts select left(kunde.`Kd_PLZ`,1) as Region, sum(`rechnungsposition`.`RgPos_Menge` * `rechnungsposition`.`RgPos_Preis`) as Summe from kunde inner join rechnung on kunde.`Kd_ID` = rechnung.`Rg_Kd_ID` inner join rechnungsposition on `rechnung`.`Rg_ID` = `rechnungsposition`.`RgPos_RgID` group by Region order by Summe; Explanation: Frage 4 Deutschland ist in 10 Postleitzahlregionen (0-9, 1. Stelle der PLZ) eingeteilt. Erstellen Sie eine SQl-Abfrage für eine Liste, die für jede PLZ-Region (0-9) den Gesamtumsatz aufweist. Die Liste soll nach Gesamtumsatz absteigend sortiert werden. 
Lösung End of explanation %%sql select kunde.*, umsatz from kunde inner join ( select (RgPos_menge * RgPos_Preis) as Umsatz, kd_id from `rechnungsposition` inner join rechnung on `rechnungsposition`.`RgPos_ID` = `rechnung`.`Rg_ID` inner join kunde on `rechnung`.`Rg_Kd_ID` = Kunde.`Kd_ID` group by `Kd_ID` ) a on Kunde.`Kd_ID` = a.Kd_ID order by umsatz desc; Explanation: Heiko Mader O-Ton: ich glaube es ist richtig :-) Aufgabe 2 Syntax geht, aber Ergebnis stimmt nicht End of explanation %%sql select a.*, mengeGesamt,anzahlRechPos from artikel a Inner join ( select SUM(RgPos_menge) as mengeGesamt, art_id from `rechnungsposition` inner join artikel on `rechnungsposition`.`RgPos_ArtID` = artikel.`Art_ID` group by art_id ) b on a.`Art_ID` = b.art_id Inner join (select count(*) as anzahlRechPos, art_id from `rechnungsposition` inner join artikel on `rechnungsposition`.`RgPos_ArtID` = artikel.`Art_ID` group by art_id ) c on a.`Art_ID` = c.art_id Explanation: Aufgabe 3 End of explanation %%sql select gebiet, umsatz from `kunde` inner join ( select kd_plz as gebiet, kd_id from `kunde` where kd_plz in (0%,1%,2%,3%,4%,5%,6%,7%,8%,9%) group by kd_id ) a on kunde.`Kd_ID` = b.kd_id inner join ( select rgPos_Menge * rgPos_Preis as Umsatz2, kd_id from `rechnungsposition` inner join rechnung on `rechnungsposition`.`RgPos_RgID` = rechnung.`Rg_ID` inner join kunde on `rechnung`.`Rg_Kd_ID` = kunde.`Kd_ID` group by kd_id ) b on `kunde`.`Kd_ID` = b.kd_id order by umsatz desc; Explanation: Aufgabe 4 Original von H.M ergibt fehler End of explanation %%sql select gebiet, umsatz from `kunde` inner join ( select kd_plz as gebiet, kd_id from `kunde` where left(kd_plz,1) in (0,1,2,3,4,5,6,7,8,9) group by kd_id ) a on kunde.`Kd_ID` = a.kd_id inner join ( select sum(rgPos_Menge * rgPos_Preis) as Umsatz, kd_id from `rechnungsposition` inner join rechnung on `rechnungsposition`.`RgPos_RgID` = rechnung.`Rg_ID` inner join kunde on `rechnung`.`Rg_Kd_ID` = kunde.`Kd_ID` group by kd_id ) b on `kunde`.`Kd_ID` = b.kd_id order by umsatz desc; %%sql select a.*, sum(rp.RgPos_Menge) as "MengeGesamt", count(rp.RgPos_ArtId) as "AnzahlRechPos" from artikel a inner join RechnungsPosition rp on rp.RgPos_ArtId = a.Art_Id group by Art_ID Explanation: Leichte Änderungen führen zu einem "fast richtigen" Ergebnis er multipliziert dabei aber nur den jeweils ersten Datensatz aus der Rechnungsposition-Tabelle (siehe 2527,2) für PLZ 9 das wird auch bei der Aufgabe 3 ein möglicher fehler sein, der fällt aber da nicht evtl. auf ??? End of explanation
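A small pandas cross-check of the per-customer revenue logic discussed above, assuming the three tables have been pulled into DataFrames named kunde, rechnung and rechnungsposition (for example via pandas.read_sql) with the same column names as in the SQL. It reproduces question 2 and makes the group-and-sum step explicit -- the step the faulty attempt effectively skipped by keeping only one row per customer.

import pandas as pd

# Join invoice positions to invoices and customers, compute the line revenue,
# then sum it per customer and sort descending.
umsatz = (rechnungsposition
          .merge(rechnung, left_on='RgPos_RgID', right_on='Rg_ID')
          .merge(kunde, left_on='Rg_Kd_ID', right_on='Kd_ID'))
umsatz['Umsatz'] = umsatz['RgPos_Menge'] * umsatz['RgPos_Preis']
per_kunde = (umsatz.groupby(['Kd_ID', 'Kd_Firma'], as_index=False)['Umsatz']
                   .sum()
                   .sort_values('Umsatz', ascending=False))
print(per_kunde.head())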
12,007
Given the following text description, write Python code to implement the functionality described below step by step Description: Piecewise Affine Transforms Step1: We build a PiecewiseAffine by supplying two sets of points and a shared triangle list Step2: Let's make a random 5000-point PointCloud in the unit square and view it Step3: Now let's see the effect of having warped the points
Python Code: import numpy as np from menpo.transform import PiecewiseAffine Explanation: Piecewise Affine Transforms End of explanation from menpo.shape import TriMesh, PointCloud a = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [-0.5, -0.7], [0.8, -0.4], [0.9, -2.1]]) b = np.array([[0,0], [2, 0], [-1, 3], [2, 6], [-1.0, -0.01], [1.0, -0.4], [0.8, -1.6]]) tl = np.array([[0,2,1], [1,3,2]]) src = TriMesh(a, tl) src_points = PointCloud(a) tgt = PointCloud(b) pwa = PiecewiseAffine(src_points, tgt) Explanation: We build a PiecewiseAffine by supplying two sets of points and a shared triangle list End of explanation %matplotlib inline # points_s = PointCloud(np.random.rand(10000).reshape([-1,2])) points_f = PointCloud(np.random.rand(10000).reshape([-1,2])) points_f.view() Explanation: Let's make a random 5000-point PointCloud in the unit square and view it End of explanation t_points_f = pwa.apply(points_f); t_points_f.view() test = np.array([[0.1,0.1], [0.7, 0.9], [0.2,0.3], [0.5, 0.6]]) pwa.index_alpha_beta(test) Explanation: Now let's see the effect of having warped the points End of explanation
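Roughly what happens inside PiecewiseAffine, shown as a plain numpy sketch rather than menpo's actual implementation: every source triangle defines its own affine map onto the corresponding target triangle, and a point is warped with the map of the triangle that contains it. The test point below is only illustrative.

import numpy as np

def affine_from_triangles(src_tri, dst_tri):
    # Solve for the 2x2 matrix A and offset t such that A @ s + t = d
    # for the three vertex pairs of one triangle.
    src = np.hstack([src_tri, np.ones((3, 1))])   # 3 x 3 system matrix
    coeffs = np.linalg.solve(src, dst_tri)        # 3 x 2 solution
    A = coeffs[:2].T
    t = coeffs[2]
    return A, t

# First triangle of the meshes defined above (vertex indices 0, 2, 1).
src_tri = a[[0, 2, 1]]
dst_tri = b[[0, 2, 1]]
A, t = affine_from_triangles(src_tri, dst_tri)

# Any point inside that source triangle is mapped by the same affine transform.
p = np.array([0.2, 0.3])
print(A @ p + t)

The pwa.index_alpha_beta(test) call above exposes the per-point bookkeeping this requires: which triangle each query point falls in, plus its coordinates within that triangle.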
12,008
Given the following text description, write Python code to implement the functionality described below step by step Description: Create a forward operator and display sensitivity maps Sensitivity maps can be produced from forward operators that indicate how well different sensor types will be able to detect neural currents from different regions of the brain. Step1: Show gain matrix a.k.a. leadfield matrix with sensitivity map
Python Code: # Author: Eric Larson <[email protected]> # # License: BSD (3-clause) import mne from mne.datasets import sample import matplotlib.pyplot as plt print(__doc__) data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' trans = data_path + '/MEG/sample/sample_audvis_raw-trans.fif' src = data_path + '/subjects/sample/bem/sample-oct-6-src.fif' bem = data_path + '/subjects/sample/bem/sample-5120-5120-5120-bem-sol.fif' subjects_dir = data_path + '/subjects' # Note that forward solutions can also be read with read_forward_solution fwd = mne.make_forward_solution(raw_fname, trans, src, bem, fname=None, meg=True, eeg=True, mindist=5.0, n_jobs=2, overwrite=True) # convert to surface orientation for better visualization fwd = mne.convert_forward_solution(fwd, surf_ori=True) leadfield = fwd['sol']['data'] print("Leadfield size : %d x %d" % leadfield.shape) grad_map = mne.sensitivity_map(fwd, ch_type='grad', mode='fixed') mag_map = mne.sensitivity_map(fwd, ch_type='mag', mode='fixed') eeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed') Explanation: Create a forward operator and display sensitivity maps Sensitivity maps can be produced from forward operators that indicate how well different sensor types will be able to detect neural currents from different regions of the brain. End of explanation picks_meg = mne.pick_types(fwd['info'], meg=True, eeg=False) picks_eeg = mne.pick_types(fwd['info'], meg=False, eeg=True) fig, axes = plt.subplots(2, 1, figsize=(10, 8), sharex=True) fig.suptitle('Lead field matrix (500 dipoles only)', fontsize=14) for ax, picks, ch_type in zip(axes, [picks_meg, picks_eeg], ['meg', 'eeg']): im = ax.imshow(leadfield[picks, :500], origin='lower', aspect='auto', cmap='RdBu_r') ax.set_title(ch_type.upper()) ax.set_xlabel('sources') ax.set_ylabel('sensors') plt.colorbar(im, ax=ax, cmap='RdBu_r') plt.show() plt.figure() plt.hist([grad_map.data.ravel(), mag_map.data.ravel(), eeg_map.data.ravel()], bins=20, label=['Gradiometers', 'Magnetometers', 'EEG'], color=['c', 'b', 'k']) plt.legend() plt.title('Normal orientation sensitivity') plt.xlabel('sensitivity') plt.ylabel('count') plt.show() grad_map.plot(time_label='Gradiometer sensitivity', subjects_dir=subjects_dir, clim=dict(lims=[0, 50, 100])) Explanation: Show gain matrix a.k.a. leadfield matrix with sensitivity map End of explanation
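A rough way to see where the sensitivity maps come from, as a plain numpy sketch and not a substitute for mne.sensitivity_map (which also handles orientation modes and normalisation): the larger a source location's gain across a sensor type, the more sensitive that sensor type is to it. The reshape assumes the usual three gain columns per source location in this forward solution.

import numpy as np

n_src = leadfield.shape[1] // 3   # assumed: 3 orientation components per source location
gain_meg = leadfield[picks_meg].reshape(len(picks_meg), n_src, 3)
gain_eeg = leadfield[picks_eeg].reshape(len(picks_eeg), n_src, 3)

# Per-location gain norm as a crude sensitivity proxy, scaled to [0, 1].
sens_meg = np.linalg.norm(gain_meg, axis=(0, 2))
sens_eeg = np.linalg.norm(gain_eeg, axis=(0, 2))
sens_meg /= sens_meg.max()
sens_eeg /= sens_eeg.max()

print('MEG proxy: min %.3f, median %.3f' % (sens_meg.min(), np.median(sens_meg)))
print('EEG proxy: min %.3f, median %.3f' % (sens_eeg.min(), np.median(sens_eeg)))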
12,009
Given the following text description, write Python code to implement the functionality described below step by step Description: Features selection for multiple linear regression Following is an example taken from the masterpiece book Introduction to Statistical Learning by Hastie, Witten, Tibhirani, James. It is based on an Advertising Dataset, available on the accompanying web site Step1: Is there a relationship between sales and advertising? First of all, we fit a regression line using the Ordinary Least Square algorithm, i.e. the line that minimises the squared differences between the actual Sales and the line itself. The multiple linear regression model takes the form Step2: These are the beta coefficients calculated Step3: We interpret these results as follows Step4: Now we need the Total Sum of Squares (TSS) Step5: The F-statistic is the ratio between (TSS-RSS)/p and RSS/(n-p-1) Step6: When there is no relationship between the response and predictors, one would expect the F-statistic to take on a value close to 1. On the other hand, if Ha is true, then we expect F to be greater than 1. In this case, F is far larger than 1 Step7: RSE is 1.68 units while the mean value for the response is 14.02, indicating a percentage error of roughly 12%. Second, the R2 statistic records the percentage of variability in the response that is explained by the predictors. The predictors explain almost 90% of the variance in sales. Summary statsmodels has a handy function that provides the above metrics in one single table Step8: One thing to note is that R2 (R-squared above) will always increase when more variables are added to the model, even if those variables are only weakly associated with the response. Therefore an adjusted R2 is provided, which is R2 adjusted by the number of predictors. Another thing to note is that the summary table shows also a t-statistic and a p-value for each single feature. These provide information about whether each individual predictor is related to the response (high t-statistic or low p-value). But be careful looking only at these individual p-values instead of looking at the overall F-statistic. It seems likely that if any one of the p-values for the individual features is very small, then at least one of the predictors is related to the response. However, this logic is flawed, especially when you have many predictors; statistically about 5 % of the p-values will be below 0.05 by chance (this is the effect infamously leveraged by the so-called p-hacking). The F-statistic does not suffer from this problem because it adjusts for the number of predictors. Which media contribute to sales? To answer this question, we could examine the p-values associated with each predictor’s t-statistic. In the multiple linear regression above, the p-values for TV and radio are low, but the p-value for newspaper is not. This suggests that only TV and radio are related to sales. But as just seen, if p is large then we are likely to make some false discoveries. The task of determining which predictors are associated with the response, in order to fit a single model involving only those predictors, is referred to as variable /feature selection. Ideally, we could perform the variable selection by trying out a lot of different models, each containing a different subset of the features. We can then select the best model out of all of the models that we have considered (for example, the model with the smallest RSS and the biggest R2). 
Other used metrics are the Mallow’s Cp, Akaike information criterion (AIC), Bayesian information criterion (BIC), and adjusted R2. All of them are visible in the summary model. Step9: Unfortunately, there are a total of 2^p models that contain subsets of p variables. For three predictors, it would still be manageable, only 8 models to fit and evaluate but as p increases, the number of models grows exponentially. Instead, we can use other approaches. The three classical ways are the forward selection (start with no features and add one after the other until a threshold is reached); the backward selection (start with all features and remove one by one) and the mixed selection (a combination of the two). We try here the forward selection. Forward selection We start with a null model (no features), we then fit three (p=3) simple linear regressions and add to the null model the variable that results in the lowest RSS. Step10: The model containing only TV as a predictor had an RSS=2103 and an R2 of 0.61 Step11: The lowest RSS and the highest R2 are for the TV medium. Now we have a best model M1 which contains TV advertising. We then add to this M1 model the variable that results in the lowest RSS for the new two-variable model. This approach is continued until some stopping rule is satisfied. Step12: Well, the model with TV AND Radio greatly decreased RSS and increased R2, so that will be our M2 model. Now, we have only three variables here. We can decide to stop at M2 or use an M3 model with all three variables. Recall that we already fitted and evaluated a model with all features, just at the beginning. Step13: M3 is slightly better than M2 (but remember that R2 always increases when adding new variables) so we call the approach completed and decide that the M2 model with TV and Radio is the good compromise. Adding the newspaper could possibly overfits on new test data. Next year no budget for newspaper advertising and that amount will be used for TV and Radio instead. Step14: Plotting the model The M2 model has two variables therefore can be plotted as a plane in a 3D chart. Step15: The M2 model can be described by this equation Step16: Let's plot the actual values as red points and the model predictions as a cyan plane Step17: Is there synergy among the advertising media? Adding radio to the model leads to a substantial improvement in R2. This implies that a model that uses TV and radio expenditures to predict sales is substantially better than one that uses only TV advertising. In our previous analysis of the Advertising data, we concluded that both TV and radio seem to be associated with sales. The linear models that formed the basis for this conclusion assumed that the effect on sales of increasing one advertising medium is independent of the amount spent on the other media. For example, the linear model states that the average effect on sales of a one-unit increase in TV is always β1, regardless of the amount spent on radio. However, this simple model may be incorrect. Suppose that spending money on radio advertising actually increases the effectiveness of TV advertising, so that the slope term for TV should increase as radio increases. In this situation, given a fixed budget of $100K spending half on radio and half on TV may increase sales more than allocating the entire amount to either TV or to radio. In marketing, this is known as a synergy effect. The figure above suggests that such an effect may be present in the advertising data. 
Notice that when levels of either TV or radio are low, then the true sales are lower than predicted by the linear model. But when advertising is split between the two media, then the model tends to underestimate sales. Step18: The results strongly suggest that the model that includes the interaction term is superior to the model that contains only main effects. The p-value for the interaction term, TV×radio, is extremely low, indicating that there is strong evidence for Ha: β3 ≠ 0 Step19: The R2 for this model is 96.8%, compared to only 89.7% for the model M2 that predicts sales using TV and radio without an interaction term. This means that (96.8 − 89.7)/(100 − 89.7) = 69% of the variability in sales that remains after fitting the additive model has been explained by the interaction term. A linear model that uses radio, TV, and an interaction between the two to predict sales takes the form: sales = β0 + β1 × TV + β2 × radio + β3 × (radio × TV) + ε
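As an illustrative aside before the code, the forward-selection procedure described earlier can also be automated. This is only a minimal sketch: it assumes the Advertising DataFrame `ad` and the statsmodels formula API used below, the helper name forward_selection is our own, and adjusted R2 (one of the metrics mentioned above) is used as the stopping rule.

# Illustrative sketch (not from the original notebook): forward selection over the
# three advertising predictors, stopping when adjusted R2 no longer improves.
import statsmodels.formula.api as smf

def forward_selection(data, response, candidates):
    selected, remaining = [], list(candidates)
    best_adj_r2 = float("-inf")
    while remaining:
        # Fit one model per remaining candidate added to the current selection
        trials = []
        for cand in remaining:
            formula = response + " ~ " + " + ".join(selected + [cand])
            fit = smf.ols(formula, data).fit()
            trials.append((fit.rsquared_adj, cand))
        adj_r2, best_cand = max(trials)
        if adj_r2 <= best_adj_r2:   # no improvement -> stop
            break
        selected.append(best_cand)
        remaining.remove(best_cand)
        best_adj_r2 = adj_r2
    return selected, best_adj_r2

# Example usage (assumes `ad` has been loaded as in the code below):
# forward_selection(ad, "Sales", ["TV", "Radio", "Newspaper"])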
Python Code: import pandas as pd ad = pd.read_csv("../datasets/advertising.csv", index_col=0) ad.info() ad.describe() ad.head() %matplotlib inline import matplotlib.pyplot as plt plt.scatter(ad.TV, ad.Sales, color='blue', label="TV") plt.scatter(ad.Radio, ad.Sales, color='green', label='Radio') plt.scatter(ad.Newspaper, ad.Sales, color='red', label='Newspaper') plt.legend(loc="lower right") plt.title("Sales vs. Advertising") plt.xlabel("Advertising [1000 $]") plt.ylabel("Sales [Thousands of units]") plt.grid() plt.show() ad.corr() plt.imshow(ad.corr(), cmap=plt.cm.Blues, interpolation='nearest') plt.colorbar() tick_marks = [i for i in range(len(ad.columns))] plt.xticks(tick_marks, ad.columns, rotation='vertical') plt.yticks(tick_marks, ad.columns) Explanation: Features selection for multiple linear regression Following is an example taken from the masterpiece book Introduction to Statistical Learning by Hastie, Witten, Tibhirani, James. It is based on an Advertising Dataset, available on the accompanying web site: http://www-bcf.usc.edu/~gareth/ISL/data.html The dataset contains statistics about the sales of a product in 200 different markets, together with advertising budgets in each of these markets for different media channels: TV, radio and newspaper. Imaging being the Marketing responsible and you need to prepare a new advertising plan for next year. Import Advertising data End of explanation import statsmodels.formula.api as sm modelAll = sm.ols('Sales ~ TV + Radio + Newspaper', ad).fit() Explanation: Is there a relationship between sales and advertising? First of all, we fit a regression line using the Ordinary Least Square algorithm, i.e. the line that minimises the squared differences between the actual Sales and the line itself. The multiple linear regression model takes the form: Sales = β0 + β1*TV + β2*Radio + β3*Newspaper + ε, where Beta are the regression coefficients we want to find and epsilon is the error that we want to minimise. For this we use the statsmodels package and its ols function. Fit the LR model End of explanation modelAll.params Explanation: These are the beta coefficients calculated: End of explanation y_pred = modelAll.predict(ad) import numpy as np RSS = np.sum((y_pred - ad.Sales)**2) RSS Explanation: We interpret these results as follows: for a given amount of TV and newspaper advertising, spending an additional 1000 dollars on radio advertising leads to an increase in sales by approximately 189 units. In contrast, the coefficient for newspaper represents the average effect (negligible) of increasing newspaper spending by 1000 dollars while holding TV and radio fixed. Is at least one of the features useful in predicting Sales? We use a hypothesis test to answer this question. The most common hypothesis test involves testing the null hypothesis of: H0: There is no relationship between the media and sales versus the alternative hypothesis Ha: There is some relationship between the media and sales. Mathematically, this corresponds to testing H0: β1 = β2 = β3 = β4 = 0 versus Ha: at least one βi is non-zero. This hypothesis test is performed by computing the F-statistic The F-statistic We need first of all the Residual Sum of Squares (RSS), i.e. the sum of all squared errors (differences between actual sales and predictions from the regression line). Remember this is the number that the regression is trying to minimise. 
End of explanation y_mean = np.mean(ad.Sales) # mean of sales TSS = np.sum((ad.Sales - y_mean)**2) TSS Explanation: Now we need the Total Sum of Squares (TSS): the total variance in the response Y, and can be thought of as the amount of variability inherent in the response before the regression is performed. The distance from any point in a collection of data, to the mean of the data, is the deviation. End of explanation p=3 # we have three predictors: TV, Radio and Newspaper n=200 # we have 200 data points (input samples) F = ((TSS-RSS)/p) / (RSS/(n-p-1)) F Explanation: The F-statistic is the ratio between (TSS-RSS)/p and RSS/(n-p-1) End of explanation RSE = np.sqrt((1/(n-2))*RSS); RSE np.mean(ad.Sales) R2 = 1 - RSS/TSS; R2 Explanation: When there is no relationship between the response and predictors, one would expect the F-statistic to take on a value close to 1. On the other hand, if Ha is true, then we expect F to be greater than 1. In this case, F is far larger than 1: at least one of the three advertising media must be related to sales. How strong is the relationship? Once we have rejected the null hypothesis in favor of the alternative hypothesis, it is natural to want to quantify the extent to which the model fits the data. The quality of a linear regression fit is typically assessed using two related quantities: the residual standard error (RSE) and the R2 statistic (the square of the correlation of the response and the variable, when close to 1 means high correlation). End of explanation modelAll.summary() Explanation: RSE is 1.68 units while the mean value for the response is 14.02, indicating a percentage error of roughly 12%. Second, the R2 statistic records the percentage of variability in the response that is explained by the predictors. The predictors explain almost 90% of the variance in sales. Summary statsmodels has a handy function that provides the above metrics in one single table: End of explanation def evaluateModel (model): print("RSS = ", ((ad.Sales - model.predict())**2).sum()) print("R2 = ", model.rsquared) Explanation: One thing to note is that R2 (R-squared above) will always increase when more variables are added to the model, even if those variables are only weakly associated with the response. Therefore an adjusted R2 is provided, which is R2 adjusted by the number of predictors. Another thing to note is that the summary table shows also a t-statistic and a p-value for each single feature. These provide information about whether each individual predictor is related to the response (high t-statistic or low p-value). But be careful looking only at these individual p-values instead of looking at the overall F-statistic. It seems likely that if any one of the p-values for the individual features is very small, then at least one of the predictors is related to the response. However, this logic is flawed, especially when you have many predictors; statistically about 5 % of the p-values will be below 0.05 by chance (this is the effect infamously leveraged by the so-called p-hacking). The F-statistic does not suffer from this problem because it adjusts for the number of predictors. Which media contribute to sales? To answer this question, we could examine the p-values associated with each predictor’s t-statistic. In the multiple linear regression above, the p-values for TV and radio are low, but the p-value for newspaper is not. This suggests that only TV and radio are related to sales. 
But as just seen, if p is large then we are likely to make some false discoveries. The task of determining which predictors are associated with the response, in order to fit a single model involving only those predictors, is referred to as variable /feature selection. Ideally, we could perform the variable selection by trying out a lot of different models, each containing a different subset of the features. We can then select the best model out of all of the models that we have considered (for example, the model with the smallest RSS and the biggest R2). Other used metrics are the Mallow’s Cp, Akaike information criterion (AIC), Bayesian information criterion (BIC), and adjusted R2. All of them are visible in the summary model. End of explanation modelTV = sm.ols('Sales ~ TV', ad).fit() modelTV.summary().tables[1] evaluateModel(modelTV) Explanation: Unfortunately, there are a total of 2^p models that contain subsets of p variables. For three predictors, it would still be manageable, only 8 models to fit and evaluate but as p increases, the number of models grows exponentially. Instead, we can use other approaches. The three classical ways are the forward selection (start with no features and add one after the other until a threshold is reached); the backward selection (start with all features and remove one by one) and the mixed selection (a combination of the two). We try here the forward selection. Forward selection We start with a null model (no features), we then fit three (p=3) simple linear regressions and add to the null model the variable that results in the lowest RSS. End of explanation modelRadio = sm.ols('Sales ~ Radio', ad).fit() modelRadio.summary().tables[1] evaluateModel(modelRadio) modelPaper = sm.ols('Sales ~ Newspaper', ad).fit() modelPaper.summary().tables[1] evaluateModel(modelPaper) Explanation: The model containing only TV as a predictor had an RSS=2103 and an R2 of 0.61 End of explanation modelTVRadio = sm.ols('Sales ~ TV + Radio', ad).fit() modelTVRadio.summary().tables[1] evaluateModel(modelTVRadio) modelTVPaper = sm.ols('Sales ~ TV + Newspaper', ad).fit() modelTVPaper.summary().tables[1] evaluateModel(modelTVPaper) Explanation: The lowest RSS and the highest R2 are for the TV medium. Now we have a best model M1 which contains TV advertising. We then add to this M1 model the variable that results in the lowest RSS for the new two-variable model. This approach is continued until some stopping rule is satisfied. End of explanation evaluateModel(modelAll) Explanation: Well, the model with TV AND Radio greatly decreased RSS and increased R2, so that will be our M2 model. Now, we have only three variables here. We can decide to stop at M2 or use an M3 model with all three variables. Recall that we already fitted and evaluated a model with all features, just at the beginning. End of explanation modelTVRadio.summary() Explanation: M3 is slightly better than M2 (but remember that R2 always increases when adding new variables) so we call the approach completed and decide that the M2 model with TV and Radio is the good compromise. Adding the newspaper could possibly overfits on new test data. Next year no budget for newspaper advertising and that amount will be used for TV and Radio instead. End of explanation modelTVRadio.params Explanation: Plotting the model The M2 model has two variables therefore can be plotted as a plane in a 3D chart. 
End of explanation normal = np.array([0.19,0.05,-1]) point = np.array([-15.26,0,0]) # a plane is a*x + b*y +c*z + d = 0 # [a,b,c] is the normal. Thus, we have to calculate # d and we're set d = -np.sum(point*normal) # dot product # create x,y x, y = np.meshgrid(range(50), range(300)) # calculate corresponding z z = (-normal[0]*x - normal[1]*y - d)*1./normal[2] Explanation: The M2 model can be described by this equation: Sales = 0.19 * Radio + 0.05 * TV + 2.9 which I can write as: 0.19x + 0.05y - z + 2.9 = 0 Its normal is (0.19, 0.05, -1) and a point on the plane is (-2.9/0.19,0,0) = (-15.26,0,0) End of explanation from mpl_toolkits.mplot3d import Axes3D fig = plt.figure() fig.suptitle('Regression: Sales ~ Radio + TV Advertising') ax = Axes3D(fig) ax.set_xlabel('Radio') ax.set_ylabel('TV') ax.set_zlabel('Sales') ax.scatter(ad.Radio, ad.TV, ad.Sales, c='red') ax.plot_surface(x,y,z, color='cyan', alpha=0.3) Explanation: Let's plot the actual values as red points and the model predictions as a cyan plane: End of explanation modelSynergy = sm.ols('Sales ~ TV + Radio + TV*Radio', ad).fit() modelSynergy.summary().tables[1] Explanation: Is there synergy among the advertising media? Adding radio to the model leads to a substantial improvement in R2. This implies that a model that uses TV and radio expenditures to predict sales is substantially better than one that uses only TV advertising. In our previous analysis of the Advertising data, we concluded that both TV and radio seem to be associated with sales. The linear models that formed the basis for this conclusion assumed that the effect on sales of increasing one advertising medium is independent of the amount spent on the other media. For example, the linear model states that the average effect on sales of a one-unit increase in TV is always β1, regardless of the amount spent on radio. However, this simple model may be incorrect. Suppose that spending money on radio advertising actually increases the effectiveness of TV advertising, so that the slope term for TV should increase as radio increases. In this situation, given a fixed budget of $100K spending half on radio and half on TV may increase sales more than allocating the entire amount to either TV or to radio. In marketing, this is known as a synergy effect. The figure above suggests that such an effect may be present in the advertising data. Notice that when levels of either TV or radio are low, then the true sales are lower than predicted by the linear model. But when advertising is split between the two media, then the model tends to underestimate sales. End of explanation evaluateModel(modelSynergy) Explanation: The results strongly suggest that the model that includes the interaction term is superior to the model that contains only main effects. The p-value for the interaction term, TV×radio, is extremely low, indicating that there is strong evidence for Ha : β3 not zero. In other words, it is clear that the true relationship is not additive. End of explanation modelSynergy.params Explanation: The R2 for this model is 96.8 %, compared to only 89.7% for the model M2 that predicts sales using TV and radio without an interaction term. This means that (96.8 − 89.7)/(100 − 89.7) = 69% of the variability in sales that remains after fitting the additive model has been explained by the interaction term. A linear model that uses radio, TV, and an interaction between the two to predict sales takes the form: sales = β0 + β1 × TV + β2 × radio + β3 × (radio×TV) + ε End of explanation
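To make the synergy argument concrete, here is a small illustrative sketch (not part of the original notebook) that uses the fitted interaction model modelSynergy to compare a fixed 100-unit budget spent entirely on one medium versus split evenly between TV and radio; the budget figures are arbitrary.

# Illustrative sketch: predicted sales for three allocations of the same budget.
# The formula-based model builds its design matrix from the TV and Radio columns.
import pandas as pd

scenarios = pd.DataFrame({
    "TV":    [100.0,   0.0, 50.0],
    "Radio": [  0.0, 100.0, 50.0],
})
scenarios["PredictedSales"] = modelSynergy.predict(scenarios)
print(scenarios)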
12,010
Given the following text description, write Python code to implement the functionality described below step by step Description: Training Models at Scale with AI Platform Learning Objectives Step1: Note Step2: Create BigQuery tables If you have not already created a BigQuery dataset for our data, run the following cell Step3: Let's create a table with 1 million examples. Note that the order of columns is exactly what was in our CSV files. Step4: Make the validation dataset be 1/10 the size of the training dataset. Step5: Export the tables as CSV files Step6: Make code compatible with AI Platform Training Service In order to make our code compatible with AI Platform Training Service we need to make the following changes Step7: Move code into a Python package The first thing to do is to convert your training code snippets into a regular Python package that we will then pip install into the Docker container. A Python package is simply a collection of one or more .py files along with an __init__.py file to identify the containing directory as a package. The __init__.py sometimes contains initialization code but for our purposes an empty file suffices. Create the package directory Our package directory contains 3 files Step8: Paste existing code into model.py A Python package requires our code to be in a .py file, as opposed to notebook cells. So, we simply copy and paste our existing code for the previous notebook into a single file. In the cell below, we write the contents of the cell into model.py packaging the model we developed in the previous labs so that we can deploy it to AI Platform Training Service. Step9: Modify code to read data from and write checkpoint files to GCS If you look closely above, you'll notice a new function, train_and_evaluate that wraps the code that actually trains the model. This allows us to parametrize the training by passing a dictionary of parameters to this function (e.g, batch_size, num_examples_to_train_on, train_data_path etc.) This is useful because the output directory, data paths and number of train steps will be different depending on whether we're training locally or in the cloud. Parametrizing allows us to use the same code for both. We specify these parameters at run time via the command line. Which means we need to add code to parse command line parameters and invoke train_and_evaluate() with those params. This is the job of the task.py file. Step10: Run trainer module package locally Now we can test our training code locally as follows using the local test data. We'll run a very small training job over a single file with a small batch size and one eval step. Step11: Run your training package on Cloud AI Platform Once the code works in standalone mode locally, you can run it on Cloud AI Platform. To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service
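Before the lab code, here is a minimal sketch (an illustration, not part of the lab) of one way to create the empty taxifare/trainer package skeleton described above; the directory and file names match the ones used later, and an empty __init__.py is what marks the directory as a Python package.

# Illustrative sketch: scaffold the trainer package before pasting in the code.
from pathlib import Path

pkg = Path("taxifare/trainer")
pkg.mkdir(parents=True, exist_ok=True)
for name in ("__init__.py", "model.py", "task.py"):
    (pkg / name).touch(exist_ok=True)
print(sorted(p.name for p in pkg.iterdir()))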
Python Code: # Use the chown command to change the ownership of repository to user !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst # Install the Google Cloud BigQuery !pip install --user google-cloud-bigquery==1.25.0 Explanation: Training Models at Scale with AI Platform Learning Objectives: 1. Learn how to organize your training code into a Python package 1. Train your model using cloud infrastructure via Google Cloud AI Platform Training Service Introduction In this notebook we'll make the jump from training locally, to do training in the cloud. We'll take advantage of Google Cloud's AI Platform Training Service. AI Platform Training Service is a managed service that allows the training and deployment of ML models without having to provision or maintain servers. The infrastructure is handled seamlessly by the managed service for us. Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. End of explanation # The OS module in Python provides functions for interacting with the operating system import os from google.cloud import bigquery # Change with your own bucket and project below: BUCKET = "<BUCKET>" PROJECT = "<PROJECT>" REGION = "<YOUR REGION>" OUTDIR = "gs://{bucket}/taxifare/data".format(bucket=BUCKET) # Store the value of `BUCKET`, `OUTDIR`, `PROJECT`, `REGION` and `TFVERSION` in environment variables. os.environ['BUCKET'] = BUCKET os.environ['OUTDIR'] = OUTDIR os.environ['PROJECT'] = PROJECT os.environ['REGION'] = REGION os.environ['TFVERSION'] = "2.1" %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION Explanation: Note: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage. Specify your project name, bucket name and region in the cell below. End of explanation # Created a BigQuery dataset for our data bq = bigquery.Client(project = PROJECT) dataset = bigquery.Dataset(bq.dataset("taxifare")) try: bq.create_dataset(dataset) print("Dataset created") except: print("Dataset already exists") Explanation: Create BigQuery tables If you have not already created a BigQuery dataset for our data, run the following cell: End of explanation %%bigquery # Creating the table in our dataset. CREATE OR REPLACE TABLE taxifare.feateng_training_data AS SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_datetime, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers, 'unused' AS key FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 Explanation: Let's create a table with 1 million examples. Note that the order of columns is exactly what was in our CSV files. End of explanation %%bigquery # Creating the table in our dataset. 
CREATE OR REPLACE TABLE taxifare.feateng_valid_data AS SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_datetime, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers, 'unused' AS key FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 Explanation: Make the validation dataset be 1/10 the size of the training dataset. End of explanation %%bash # Deleting the current contents of output directory. echo "Deleting current contents of $OUTDIR" gsutil -m -q rm -rf $OUTDIR # Fetching the training data to output directory. echo "Extracting training data to $OUTDIR" bq --location=US extract \ --destination_format CSV \ --field_delimiter "," --noprint_header \ taxifare.feateng_training_data \ $OUTDIR/taxi-train-*.csv echo "Extracting validation data to $OUTDIR" bq --location=US extract \ --destination_format CSV \ --field_delimiter "," --noprint_header \ taxifare.feateng_valid_data \ $OUTDIR/taxi-valid-*.csv # The `ls` command will show the content of working directory gsutil ls -l $OUTDIR # The `cat` command will outputs the contents of one or more URLs # Using `head -2` we are showing only top two output files !gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2 Explanation: Export the tables as CSV files End of explanation # The `ls` command will show the content of working directory !gsutil ls gs://$BUCKET/taxifare/data Explanation: Make code compatible with AI Platform Training Service In order to make our code compatible with AI Platform Training Service we need to make the following changes: Upload data to Google Cloud Storage Move code into a trainer Python package Submit training job with gcloud to train on AI Platform Upload data to Google Cloud Storage (GCS) Cloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS. End of explanation ls ./taxifare/trainer/ Explanation: Move code into a Python package The first thing to do is to convert your training code snippets into a regular Python package that we will then pip install into the Docker container. A Python package is simply a collection of one or more .py files along with an __init__.py file to identify the containing directory as a package. The __init__.py sometimes contains initialization code but for our purposes an empty file suffices. Create the package directory Our package directory contains 3 files: End of explanation %%writefile ./taxifare/trainer/model.py # The datetime module used to work with dates as date objects. import datetime # The logging module in Python allows writing status messages to a file or any other output streams. import logging # The OS module in Python provides functions for interacting with the operating system import os # The shutil module in Python provides many functions of high-level operations on files and collections of files. # This module helps in automating process of copying and removal of files and directories. 
import shutil # Here we'll import data processing libraries like Numpy and Tensorflow import numpy as np import tensorflow as tf from tensorflow.keras import activations from tensorflow.keras import callbacks from tensorflow.keras import layers from tensorflow.keras import models from tensorflow import feature_column as fc logging.info(tf.version.VERSION) # Defining the feature names into a list `CSV_COLUMNS` CSV_COLUMNS = [ 'fare_amount', 'pickup_datetime', 'pickup_longitude', 'pickup_latitude', 'dropoff_longitude', 'dropoff_latitude', 'passenger_count', 'key', ] LABEL_COLUMN = 'fare_amount' # Defining the default values into a list `DEFAULTS` DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']] DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'] def features_and_labels(row_data): for unwanted_col in ['key']: row_data.pop(unwanted_col) # The .pop() method will return item and drop from frame. label = row_data.pop(LABEL_COLUMN) return row_data, label def load_dataset(pattern, batch_size, num_repeat): # The tf.data.experimental.make_csv_dataset() method reads CSV files into a dataset dataset = tf.data.experimental.make_csv_dataset( file_pattern=pattern, batch_size=batch_size, column_names=CSV_COLUMNS, column_defaults=DEFAULTS, num_epochs=num_repeat, ) # The `map()` function executes a specified function for each item in an iterable. # The item is sent to the function as a parameter. return dataset.map(features_and_labels) def create_train_dataset(pattern, batch_size): dataset = load_dataset(pattern, batch_size, num_repeat=None) # The `prefetch()` method will start a background thread to populate a ordered buffer that acts like a queue, so that downstream pipeline stages need not block. return dataset.prefetch(1) def create_eval_dataset(pattern, batch_size): dataset = load_dataset(pattern, batch_size, num_repeat=1) # The `prefetch()` method will start a background thread to populate a ordered buffer that acts like a queue, so that downstream pipeline stages need not block. 
return dataset.prefetch(1) def parse_datetime(s): if type(s) is not str: s = s.numpy().decode('utf-8') return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z") def euclidean(params): lon1, lat1, lon2, lat2 = params londiff = lon2 - lon1 latdiff = lat2 - lat1 return tf.sqrt(londiff*londiff + latdiff*latdiff) def get_dayofweek(s): ts = parse_datetime(s) return DAYS[ts.weekday()] @tf.function def dayofweek(ts_in): return tf.map_fn( lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string), ts_in ) @tf.function def fare_thresh(x): return 60 * activations.relu(x) def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets): # Pass-through columns transformed = inputs.copy() del transformed['pickup_datetime'] feature_columns = { colname: fc.numeric_column(colname) for colname in NUMERIC_COLS } # Scaling longitude from range [-70, -78] to [0, 1] for lon_col in ['pickup_longitude', 'dropoff_longitude']: transformed[lon_col] = layers.Lambda( lambda x: (x + 78)/8.0, name='scale_{}'.format(lon_col) )(inputs[lon_col]) # Scaling latitude from range [37, 45] to [0, 1] for lat_col in ['pickup_latitude', 'dropoff_latitude']: transformed[lat_col] = layers.Lambda( lambda x: (x - 37)/8.0, name='scale_{}'.format(lat_col) )(inputs[lat_col]) # Adding Euclidean dist (no need to be accurate: NN will calibrate it) transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([ inputs['pickup_longitude'], inputs['pickup_latitude'], inputs['dropoff_longitude'], inputs['dropoff_latitude'] ]) feature_columns['euclidean'] = fc.numeric_column('euclidean') # hour of day from timestamp of form '2010-02-08 09:17:00+00:00' transformed['hourofday'] = layers.Lambda( lambda x: tf.strings.to_number( tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32), name='hourofday' )(inputs['pickup_datetime']) feature_columns['hourofday'] = fc.indicator_column( fc.categorical_column_with_identity( 'hourofday', num_buckets=24)) latbuckets = np.linspace(0, 1, nbuckets).tolist() lonbuckets = np.linspace(0, 1, nbuckets).tolist() b_plat = fc.bucketized_column( feature_columns['pickup_latitude'], latbuckets) b_dlat = fc.bucketized_column( feature_columns['dropoff_latitude'], latbuckets) b_plon = fc.bucketized_column( feature_columns['pickup_longitude'], lonbuckets) b_dlon = fc.bucketized_column( feature_columns['dropoff_longitude'], lonbuckets) ploc = fc.crossed_column( [b_plat, b_plon], nbuckets * nbuckets) dloc = fc.crossed_column( [b_dlat, b_dlon], nbuckets * nbuckets) pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4) feature_columns['pickup_and_dropoff'] = fc.embedding_column( pd_pair, 100) return transformed, feature_columns def rmse(y_true, y_pred): return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true))) def build_dnn_model(nbuckets, nnsize, lr): # input layer is all float except for pickup_datetime which is a string STRING_COLS = ['pickup_datetime'] NUMERIC_COLS = ( set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS) ) inputs = { colname: layers.Input(name=colname, shape=(), dtype='float32') for colname in NUMERIC_COLS } inputs.update({ colname: layers.Input(name=colname, shape=(), dtype='string') for colname in STRING_COLS }) # transforms transformed, feature_columns = transform( inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets) dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed) x = dnn_inputs for layer, nodes in enumerate(nnsize): x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x) output = layers.Dense(1, name='fare')(x) model = models.Model(inputs, 
output) #TODO 1a lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr) model.compile(optimizer=lr_optimizer, loss='mse', metrics=[rmse, 'mse']) return model def train_and_evaluate(hparams): #TODO 1b batch_size = hparams['batch_size'] nbuckets = hparams['nbuckets'] lr = hparams['lr'] nnsize = hparams['nnsize'] eval_data_path = hparams['eval_data_path'] num_evals = hparams['num_evals'] num_examples_to_train_on = hparams['num_examples_to_train_on'] output_dir = hparams['output_dir'] train_data_path = hparams['train_data_path'] timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S') savedmodel_dir = os.path.join(output_dir, 'export/savedmodel') model_export_path = os.path.join(savedmodel_dir, timestamp) checkpoint_path = os.path.join(output_dir, 'checkpoints') tensorboard_path = os.path.join(output_dir, 'tensorboard') if tf.io.gfile.exists(output_dir): tf.io.gfile.rmtree(output_dir) model = build_dnn_model(nbuckets, nnsize, lr) logging.info(model.summary()) trainds = create_train_dataset(train_data_path, batch_size) evalds = create_eval_dataset(eval_data_path, batch_size) steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals) checkpoint_cb = callbacks.ModelCheckpoint( checkpoint_path, save_weights_only=True, verbose=1 ) tensorboard_cb = callbacks.TensorBoard(tensorboard_path) history = model.fit( trainds, validation_data=evalds, epochs=num_evals, steps_per_epoch=max(1, steps_per_epoch), verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch callbacks=[checkpoint_cb, tensorboard_cb] ) # Exporting the model with default serving function. tf.saved_model.save(model, model_export_path) return history Explanation: Paste existing code into model.py A Python package requires our code to be in a .py file, as opposed to notebook cells. So, we simply copy and paste our existing code for the previous notebook into a single file. In the cell below, we write the contents of the cell into model.py packaging the model we developed in the previous labs so that we can deploy it to AI Platform Training Service. End of explanation %%writefile taxifare/trainer/task.py # The argparse module makes it easy to write user-friendly command-line interfaces. It parses the defined arguments from the `sys.argv`. # The argparse module also automatically generates help & usage messages and issues errors when users give the program invalid arguments. import argparse from trainer import model # Write an `task.py` file for adding code to parse command line parameters and invoke `train_and_evaluate()` with those parameters. 
if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument( "--batch_size", help="Batch size for training steps", type=int, default=32 ) parser.add_argument( "--eval_data_path", help="GCS location pattern of eval files", required=True ) parser.add_argument( "--nnsize", help="Hidden layer sizes (provide space-separated sizes)", nargs="+", type=int, default=[32, 8] ) parser.add_argument( "--nbuckets", help="Number of buckets to divide lat and lon with", type=int, default=10 ) parser.add_argument( "--lr", help = "learning rate for optimizer", type = float, default = 0.001 ) parser.add_argument( "--num_evals", help="Number of times to evaluate model on eval data training.", type=int, default=5 ) parser.add_argument( "--num_examples_to_train_on", help="Number of examples to train on.", type=int, default=100 ) parser.add_argument( "--output_dir", help="GCS location to write checkpoints and export models", required=True ) parser.add_argument( "--train_data_path", help="GCS location pattern of train files containing eval URLs", required=True ) parser.add_argument( "--job-dir", help="this model ignores this field, but it is required by gcloud", default="junk" ) args = parser.parse_args() hparams = args.__dict__ hparams.pop("job-dir", None) model.train_and_evaluate(hparams) Explanation: Modify code to read data from and write checkpoint files to GCS If you look closely above, you'll notice a new function, train_and_evaluate that wraps the code that actually trains the model. This allows us to parametrize the training by passing a dictionary of parameters to this function (e.g, batch_size, num_examples_to_train_on, train_data_path etc.) This is useful because the output directory, data paths and number of train steps will be different depending on whether we're training locally or in the cloud. Parametrizing allows us to use the same code for both. We specify these parameters at run time via the command line. Which means we need to add code to parse command line parameters and invoke train_and_evaluate() with those params. This is the job of the task.py file. End of explanation %%bash # Testing our training code locally EVAL_DATA_PATH=./taxifare/tests/data/taxi-valid* TRAIN_DATA_PATH=./taxifare/tests/data/taxi-train* OUTPUT_DIR=./taxifare-model test ${OUTPUT_DIR} && rm -rf ${OUTPUT_DIR} export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare python3 -m trainer.task \ --eval_data_path $EVAL_DATA_PATH \ --output_dir $OUTPUT_DIR \ --train_data_path $TRAIN_DATA_PATH \ --batch_size 5 \ --num_examples_to_train_on 100 \ --num_evals 1 \ --nbuckets 10 \ --lr 0.001 \ --nnsize 32 8 Explanation: Run trainer module package locally Now we can test our training code locally as follows using the local test data. We'll run a very small training job over a single file with a small batch size and one eval step. 
End of explanation %%bash # Output directory and jobID OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S) JOBID=taxifare_$(date -u +%y%m%d_%H%M%S) echo ${OUTDIR} ${REGION} ${JOBID} gsutil -m rm -rf ${OUTDIR} # Model and training hyperparameters BATCH_SIZE=50 NUM_EXAMPLES_TO_TRAIN_ON=100 NUM_EVALS=100 NBUCKETS=10 LR=0.001 NNSIZE="32 8" # GCS paths GCS_PROJECT_PATH=gs://$BUCKET/taxifare DATA_PATH=$GCS_PROJECT_PATH/data TRAIN_DATA_PATH=$DATA_PATH/taxi-train* EVAL_DATA_PATH=$DATA_PATH/taxi-valid* #TODO 2 gcloud ai-platform jobs submit training $JOBID \ --module-name=trainer.task \ --package-path=taxifare/trainer \ --staging-bucket=gs://${BUCKET} \ --python-version=3.7 \ --runtime-version=${TFVERSION} \ --region=${REGION} \ -- \ --eval_data_path $EVAL_DATA_PATH \ --output_dir $OUTDIR \ --train_data_path $TRAIN_DATA_PATH \ --batch_size $BATCH_SIZE \ --num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \ --num_evals $NUM_EVALS \ --nbuckets $NBUCKETS \ --lr $LR \ --nnsize $NNSIZE Explanation: Run your training package on Cloud AI Platform Once the code works in standalone mode locally, you can run it on Cloud AI Platform. To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service: - jobid: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness - region: Cloud region to train in. See here for supported AI Platform Training Service regions The arguments before -- \ are for AI Platform Training Service. The arguments after -- \ are sent to our task.py. Because this is on the entire dataset, it will take a while. You can monitor the job from the GCP console in the Cloud AI Platform section. End of explanation
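As a complement to the gcloud submission above, the same entry point can be exercised directly from Python by building the hparams dictionary that task.py normally assembles from the command line. This is only a smoke-test sketch: the tiny values below are assumptions, and the taxifare package directory is assumed to be on PYTHONPATH as in the local test.

# Illustrative sketch: call train_and_evaluate directly with an hparams dict.
from trainer import model

hparams = {
    "batch_size": 5,
    "eval_data_path": "./taxifare/tests/data/taxi-valid*",
    "train_data_path": "./taxifare/tests/data/taxi-train*",
    "output_dir": "./taxifare-model",
    "num_examples_to_train_on": 100,
    "num_evals": 1,
    "nbuckets": 10,
    "lr": 0.001,
    "nnsize": [32, 8],
}
history = model.train_and_evaluate(hparams)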
12,011
Given the following text description, write Python code to implement the functionality described below step by step Description: A Simple Catcher CNN Demo We first need to import the entire X library by adding the parent folder to the path and then importing the required Keras modules Step1: Setup Game Here we set up the game and training settings Step2: Catcher Model Here is the model we train for catching Step3: Test the agent Here we follow the learned policy Step5: Show Playing Here we show the gameplay in the notebook using HTML rendering of the animated GIF
Python Code: import os, sys sys.path.append(os.path.join('..')) import keras.backend as K K.set_image_dim_ordering('th') # needs to be set since it defaults to tensorflow now from keras.models import Sequential from keras.layers.convolutional import Convolution2D, MaxPooling2D from keras.layers.core import Flatten from keras.optimizers import SGD from x.environment import Catcher from x.models import KerasModel from x.memory import ExperienceReplay from x.agent import DiscreteAgent Explanation: A Simple Catcher CNN Demo We first need to import the entire X library by adding the super folder and then importing the right keras libraries End of explanation num_actions = 3 nb_filters, nb_rows, nb_cols = 32, 3, 3 grid_x, grid_y = 11, 11 epoch = 100 batch = 50 memory_len = 500 gamma = 0.9 epsilon = 0.1 Explanation: Setup Game Here we setup the game and training settings End of explanation # keras model keras_model = Sequential() keras_model.add(Convolution2D(nb_filters, nb_rows, nb_cols, input_shape=(1, grid_x, grid_y), activation='relu', subsample=(2, 2))) keras_model.add(Convolution2D(nb_filters, nb_rows, nb_cols, activation='relu')) keras_model.add(Convolution2D(num_actions, nb_rows, nb_cols)) keras_model.add(MaxPooling2D(keras_model.output_shape[-2:])) keras_model.add(Flatten()) # X wrapper for Keras model = KerasModel(keras_model) # Memory M = ExperienceReplay(memory_length=memory_len) # Agent A = DiscreteAgent(model, M) # SGD optimizer + MSE cost + MAX policy = Q-learning as we know it A.compile(optimizer=SGD(lr=0.2), loss="mse", policy_rule="max") # To run an experiment, the Agent needs an Enviroment to iteract with catcher = Catcher(grid_size=grid_x, output_shape=(1, grid_x, grid_y)) A.learn(catcher, epoch=epoch, batch_size=batch) Explanation: Catcher Model Here is the model we train for catching End of explanation out_dir = 'rl_dir' if not os.path.exists(out_dir): os.mkdir(out_dir) A.play(catcher, epoch=100, visualize={'filepath': os.path.join(out_dir, 'demo.gif'), 'n_frames': 270, 'gray': True}) Explanation: Test the agent Here we follow the learned policy End of explanation from IPython.display import HTML import base64 with open(os.path.join(out_dir, 'demo.gif'), 'rb') as in_file: data_str = base64.b64encode(in_file.read()).decode("ascii").replace("\n", "") data_uri = "data:image/png;base64,{0}".format(data_str) HTML(<img src="{0}" width = "256px" height = "256px" />.format(data_uri)) Explanation: Show Playing Here we show the playing in the notebook using HTML rendering of the animated GIF End of explanation
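To unpack the "SGD optimizer + MSE cost + MAX policy = Q-learning" remark, here is a minimal tabular sketch of the update that the network approximates. The numbers are invented for illustration; only gamma = 0.9 and the 0.2 learning rate come from the settings above, and this sketch does not use the X library API.

# Illustrative sketch of the Bellman/Q-learning target behind the MSE + max-policy setup.
import numpy as np

gamma = 0.9                         # discount factor, as set above
lr = 0.2                            # learning rate, as in SGD(lr=0.2)
q_current = 0.5                     # Q(s, a) before the update (made up)
reward = 1.0                        # reward observed after the action (made up)
q_next = np.array([0.2, 0.7, 0.1])  # Q(s', a') for the 3 possible actions (made up)

target = reward + gamma * q_next.max()   # Bellman target ("max policy")
td_error = target - q_current            # the quantity the MSE loss drives to zero
q_updated = q_current + lr * td_error    # gradient-style update
print(target, td_error, q_updated)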
12,012
Given the following text description, write Python code to implement the functionality described below step by step Description: List Structures The concept of a list is similar to oureveryday notion of a list. We read off (access) items on our to-do list, add items, cross off (delete) items, and so forth. We look at the use of lists next. A list is a linear data structure , meaning that its elements have a linear ordering. (First, second, ...) Each item in the list is identified by its index value (location) . Index starts with 0 There are common operations performed on lists including; retrieve, update, insert, delete, and append. Lists in Python Mutable Flexible length Allowing mixed type elements index 0...n-1 denoted with [value1, value2] Step1: Tuples in Python In contrast to lists, tuple is defined and cannot be altered. Otherwise, lists and tuples are essentially same. To denote tuples we use ( ) Step2: Sequences A sequence in Python is a linearly ordered set of elements accessed by an index number. Lists, tuples, and strings are all sequences. We know what is string Step3: Finding length with len() Step4: Accessing elements by indexing Step5: Slicing Step6: Counting elements Step7: Finding Indexes from elements Step8: Checking Membership with in Step9: Concatenation Step10: Finding minimum Step11: Finding maximum Step12: Lists and Tuples can used with more dimensions. Nested lists and tuples are giving this flexibility. Step13: When we trying to get the specific elements in nested lists or tuples, we should do something different Step14: Apply It! <p style=color Step15: In the example above k is called loop variable. In the list we had 6 elements so our loop iterated six times. We can create the same script with while loop as follows Step16: For statement can be applied to all sequence types, including strings. Let's see how Step17: Now since we know about the beautiful for loops, it's time to learn a built-in function most commonly used with for loops Step18: As you can see range() function creates a list so we can use it with for loops Step19: Test Time Question1 Step20: List Comprehension The range function allows for the generation of sequences of integers in fixed increments. List Comprehensions in Python can be used to generate more varied sequences.
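A brief illustrative aside on that last point: every list comprehension can be rewritten as an explicit for loop that appends to a list, which is a useful way to read them. The example below is a sketch of our own, not taken from the exercises.

# Illustrative sketch: a comprehension and its equivalent loop produce the same list.
squares_comp = [x**2 for x in range(10) if x % 2 == 0]

squares_loop = []
for x in range(10):
    if x % 2 == 0:
        squares_loop.append(x**2)

print(squares_comp)
print(squares_loop)
print(squares_comp == squares_loop)   # True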
Python Code: ["Watermelon"] list(123) list("123") a = [] b = list() a == b x = [0,1,2,3,4,5,6] z = list() y = list(range(7)) x == y type(['one', 'two']) type(['apples' , 50, False]) type([]) # Empty list # Define a list # Using list function to create empty list a = list() print(type(a)) print(a) # Using brackets to create empty list b = [] print(type(b)) print(b) # We can also initilize lists with some content inside c = [1,2,3] # Create list with integer values inside # We can access specific elements of the list using index value print(c[1]) print(c[0]) n = 0 tot1 = 0 while n < len(c): tot1 = tot1 + c[n] n = n +1 tot1 tot = sum(c) tot total = c[0] + c[1] + c[2] total # Creates a list of elements from 0 to 9 lst = list(range(10)) print(lst) a = 10 # Lets update(replace) an element of the list called lst lst[2] = 19 print(lst) # Let's remove an element from the list del lst[2] print(lst) # We can add elements to the list using 2 different methods: lst.insert(8,3) # adds element 3 at index 8 print(lst) lst.append(4) # adds element 4 at the end of list print(lst) Explanation: List Structures The concept of a list is similar to oureveryday notion of a list. We read off (access) items on our to-do list, add items, cross off (delete) items, and so forth. We look at the use of lists next. A list is a linear data structure , meaning that its elements have a linear ordering. (First, second, ...) Each item in the list is identified by its index value (location) . Index starts with 0 There are common operations performed on lists including; retrieve, update, insert, delete, and append. Lists in Python Mutable Flexible length Allowing mixed type elements index 0...n-1 denoted with [value1, value2] End of explanation nums = (10,20,30) type(nums) print(nums[2]) nums.insert(1,15) # Non alterable del nums[2] nums.append(40) y = 75 x = 45 (x,y) = (y,x) x y Explanation: Tuples in Python In contrast to lists, tuple is defined and cannot be altered. Otherwise, lists and tuples are essentially same. To denote tuples we use ( ) End of explanation # Initializing our variable s1 = 'hello' s2 = 'world!' t1 = (1,2,3,4) t2 = (5,6) l1 = ['apple', 'pear', 'peach'] l2 = [10,20,30,40,50,60,70] Explanation: Sequences A sequence in Python is a linearly ordered set of elements accessed by an index number. Lists, tuples, and strings are all sequences. We know what is string: "String" Strings are also immutable like tuples, but we can we will use the intensively. Let's look at more operations we can use with sequences: End of explanation # Finding length for string print(len(s1)) print(len(s2)) # Finding length for tuples print(len(t1)) print(len(t2)) # Finding length for lists print(len(l1)) print(len(l2)) Explanation: Finding length with len(): End of explanation print(s1[0]) print(s2[4]) print(t1[1]) print(l2[2]) Explanation: Accessing elements by indexing: End of explanation s1[1:4] s2[3:] l1[1:] t1 t1[::-1] # Special Slicing feature Explanation: Slicing: End of explanation s1.count('l') # Counting l occurences in the sequence l2 l2.count(30) t1 t1.count(15) Explanation: Counting elements: End of explanation # c = s1.count("l") # 2 word = "This is a very nice day in Houston!" 
user_input = input("Please enter a letter that I can find index for you: ") i = 0 lst = [] c = word.count(user_input) while i < len(word): if c > 1: if word[i] == user_input: lst.append(i) elif c == 0: print(0) break else: if word[i] == user_input: print(i) break i = i + 1 print(lst) s1.index('l') l1.index('peach') t1.index(3) Explanation: Finding Indexes from elements: End of explanation 'h' in s1 9 in t2 30 in l2 Explanation: Checking Membership with in : End of explanation s1, s2 print(s1+s2) print(s1 + " " +s2) print(t1+t2) print(l1+l2) (s1 + " ")* 4 Explanation: Concatenation: End of explanation min(s2) min(t2) min(l1) Explanation: Finding minimum: End of explanation max(s2) max(t1) max(l1) Explanation: Finding maximum: End of explanation class_grades = [ [85, 91, 89], [78, 81, 86], [62, 75, 77] ] class_grades Explanation: Lists and Tuples can used with more dimensions. Nested lists and tuples are giving this flexibility. End of explanation class_grades[0] # Prints the first sub-list # We can get the specific item like this in a long way student1_grades = class_grades[0] student1_exam1 = student1_grades[0] student1_exam1 # In short this is more conventional class_grades[0][0] # Let's write a small script that calculates the class average. k = 0 exam_avg = [] while k < len(class_grades): avg = (class_grades[k][0] + class_grades[k][1] + class_grades[k][2]) / 3.0 exam_avg.append(avg) k += 1 format((sum(exam_avg) / 3.0), '.2f') Explanation: When we trying to get the specific elements in nested lists or tuples, we should do something different: End of explanation nums = [10,20,30,40,50,60] for k in nums: print(k) Explanation: Apply It! <p style=color:red> Write a small program that finds your Chineze Zodiac and its characteristics using tuples and datetime module, resulting output: </p> This program will display your Chinese Zodiac Sign and Associated personal characteristics. Enter your year of birth (yyyy): 1984 Your Chinese Zodiac sign is the Rat Your personal charactersitics... Forthright, industrious, sensitive, intellectual, sociable Would you like to enter another year? (y/n): n <p style=color:red> Here are the characteristics: <br> rat = 'Forthright, industrious, sensitive, intellectual, sociable' <br> ox = 'Dependable, methodical, modest, born leader, patient' <br> tiger = 'Unpredictable, rebellious, passionate, daring, impulsive' <br> rabbit = 'Good friend, kind, soft-spoken, cautious, artistic' <br> dragon = 'Strong, self-assured, proud. decisive, loyal' <br> snake = 'Deep thinker, creative, responsible, calm, purposeful' <br> horse = 'Cheerful, quick-witted, perceptive, talkative, open-minded' <br> goat = 'Sincere, sympathetic, shy, generous, mothering' <br> monkey = 'Motivator, inquisitive, flexible, innovative, problem solver' <br> rooster = 'Organizer, self-assured, decisive, perfectionist, zealous' <br> dog = 'Honest, unpretentious, idealistic, moralistic, easy going' <br> pig = 'Peace-loving, hard-working, trusting, understanding, thoughtful' <br> </p> Test Time Question1: Which of the following sequence types is a mutable type? a) Strings b) Lists c) Tuples Question2: What is the result of the following snippet: lst = [4,2,9,1] lst.insert(2,3) a) [4,2,3,9,1] b) [4,3,2,9,1] c) [4,2,9,2,1] Question3: Which of the following set of operations can be applied to any sequence? a) len(s), s[i], s+w (concatenation) b) max(s), s[i], sum(s) c) len(s), s[i], s.sort() Iterating Over Sequences We can iterate over sequences using while loop, the one that we learned last lecture. 
However there is a better and more Pythonic way, it is called For loops For loops are used to construct definite loops. Syntax: for k in sequence: statements End of explanation k = 0 while k < len(nums): print(nums[k]) k += 1 Explanation: In the example above k is called loop variable. In the list we had 6 elements so our loop iterated six times. We can create the same script with while loop as follows: End of explanation for ch in 'Hello World!': print(ch) Explanation: For statement can be applied to all sequence types, including strings. Let's see how: End of explanation list(range(2)) list(range(2,11)) list(range(1, 11, 2)) Explanation: Now since we know about the beautiful for loops, it's time to learn a built-in function most commonly used with for loops: a built-in range() function End of explanation tot = 0 for k in range(1, 11, 2): tot = tot + k print(k) print("Total is", tot) Explanation: As you can see range() function creates a list so we can use it with for loops: End of explanation nums=[12, 4, 11, 23, 18, 41, 27] k = 0 while k < len(nums) and nums[k] != 18: k += 1 print(k) fruit = "Strawberry" for k in range(0, len(fruit),2): print(fruit[k], end='') Explanation: Test Time Question1: For nums=[10,30,20,40] , what does the following for loop output? for k in nums: print(k) Questions2: For fruit='strawberry' , what does the following for loop output? for k in range(0,len(fruit),2): print(fruit[k], end='') Question3: For nums=[12, 4, 11, 23, 18, 41, 27] , what is the value of k when the while loop terminates? k = 0 while k &lt; len(nums) and nums[k] != 18: k += 1 End of explanation [x**2 for x in [1,2,3]] [x**2 for x in range(10)] nums = [-1,1,-2,2,-3,3,-4,4,-5,5] [x for x in nums if x >= 0] [ord(ch) for ch in 'Hello World'] vowels = ('a', 'e', 'i','o', 'u') w = 'Hello World!' [ch for ch in w if ch in vowels] Explanation: List Comprehension The range function allows for the generation of sequences of integers in fixed increments. List Comprehensions in Python can be used to generate more varied sequences. End of explanation
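For completeness, here is one possible solution sketch for the "Apply It!" Chinese Zodiac exercise posed earlier in this notebook. The sign order and trait strings are copied from the characteristics listed there; the (year - 4) % 12 mapping is an assumption consistent with the sample run (1984 gives Rat), and the datetime module is used only for a simple sanity check.

# One possible solution sketch for the Chinese Zodiac exercise (not the official answer).
import datetime

signs = ('Rat', 'Ox', 'Tiger', 'Rabbit', 'Dragon', 'Snake',
         'Horse', 'Goat', 'Monkey', 'Rooster', 'Dog', 'Pig')
traits = ('Forthright, industrious, sensitive, intellectual, sociable',
          'Dependable, methodical, modest, born leader, patient',
          'Unpredictable, rebellious, passionate, daring, impulsive',
          'Good friend, kind, soft-spoken, cautious, artistic',
          'Strong, self-assured, proud. decisive, loyal',
          'Deep thinker, creative, responsible, calm, purposeful',
          'Cheerful, quick-witted, perceptive, talkative, open-minded',
          'Sincere, sympathetic, shy, generous, mothering',
          'Motivator, inquisitive, flexible, innovative, problem solver',
          'Organizer, self-assured, decisive, perfectionist, zealous',
          'Honest, unpretentious, idealistic, moralistic, easy going',
          'Peace-loving, hard-working, trusting, understanding, thoughtful')

print('This program will display your Chinese Zodiac Sign and')
print('Associated personal characteristics.')

answer = 'y'
while answer == 'y':
    year = int(input('Enter your year of birth (yyyy): '))
    if year > datetime.date.today().year:
        print('That year is in the future - please try again.')
        continue
    index = (year - 4) % 12          # 1984 -> 0 -> Rat
    print('Your Chinese Zodiac sign is the', signs[index])
    print('Your personal characteristics...')
    print(traits[index])
    answer = input('Would you like to enter another year? (y/n): ')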
12,013
Given the following text description, write Python code to implement the functionality described below step by step Description: Modifying yields This Notebook shows how to modify specific yields without having to re-generate yields table for every case. The modification will alter the input yields internally (within the code) and will leave the original yields table files intact. To do so, the yield_modifier (developed by Tom Trueman) argument must be used, which consists of a list of arrays in the form of [ [iso, M, Z, type, modifier] , [...]]. This will modify the yield of a specific isotope for the given M and Z by multiplying the yield by a given factor (type="multiply") or replacing the yield by a new value (type="replace"). Modifier will be either the factor or value depending on type. Notebook created by Benoit Côté Step1: Modifying the isotopic yields of a specific stellar model Step2: Modifying the isotopic yields of several stellar models Step3: Example with OMEGA
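As a quick illustration of the yield_modifier format just described, here is a minimal sketch; the isotope, mass and metallicity values are taken from the examples later in this notebook, and any model referenced this way must exist in the yields table.

# Illustrative sketch: one list can mix "multiply" and "replace" entries.
yield_modifier = [
    ["Si-28", 15.0, 0.02, "multiply", 2.0],   # multiply this model's Si-28 yield by 2
    ["Si-28", 25.0, 0.02, "replace", 0.6],    # replace this model's Si-28 yield with 0.6 Msun
]
# Passed to the code as in the cells below, e.g.
# s_mod = sygma.sygma(iniZ=0.02, yield_modifier=yield_modifier)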
Python Code: # Import Python modules import matplotlib.pyplot as plt # Import NuPyCEE codes from NuPyCEE import sygma Explanation: Modifying yields This Notebook shows how to modify specific yields without having to re-generate yields table for every case. The modification will alter the input yields internally (within the code) and will leave the original yields table files intact. To do so, the yield_modifier (developed by Tom Trueman) argument must be used, which consists of a list of arrays in the form of [ [iso, M, Z, type, modifier] , [...]]. This will modify the yield of a specific isotope for the given M and Z by multiplying the yield by a given factor (type="multiply") or replacing the yield by a new value (type="replace"). Modifier will be either the factor or value depending on type. Notebook created by Benoit Côté End of explanation # Select an isotope and a stellar model (the model needs to be in the yields table). iso = "Si-28" M = 15.0 Z = 0.02 # Run SYGMA with no yields modification. s1 = sygma.sygma(iniZ=Z) # Run SYGMA where the yield is multiplied by 2. factor = 2.0 yield_modifier = [ [iso, M, Z, "multiply", factor] ] s2 = sygma.sygma(iniZ=Z, yield_modifier=yield_modifier) # Run SYGMA where the yield is replaced by 0.6. value = 0.6 yield_modifier = [ [iso, M, Z, "replace", value] ] s3 = sygma.sygma(iniZ=Z, yield_modifier=yield_modifier) # Get the isotope array index. i_iso = s1.history.isotopes.index(iso) # Print the yield that was taken by SYGMA. print(iso,"yield (original) : ", s1.get_interp_yields(M,Z)[i_iso],"Msun") print(iso,"yield multiplied by",factor,":", s2.get_interp_yields(M,Z)[i_iso],"Msun") print(iso,"yield replaced by",value,": ", s3.get_interp_yields(M,Z)[i_iso],"Msun") # Plot the yields as a function of stellar mass. # Note: The y axis is not the yields as found in the yield table file. # It is the IMF-weighted yields from a 1Msun stellar population. 
%matplotlib nbagg s3.plot_mass_range_contributions(specie=iso, color="C0", label="Replaced") s2.plot_mass_range_contributions(specie=iso, color="C1", label="Multiplied") s1.plot_mass_range_contributions(specie=iso, color="C2", label="Original") plt.title(iso,fontsize=12) Explanation: Modifying the isotopic yields of a specific stellar model End of explanation # Select an isotope iso = "Si-28" # Define the list of multiplication factor for each stellar model factor_list = [2, 4, 8, 16] M_list = [12.0, 15.0, 20.0, 25.0] Z = 0.02 # Fill the yield_modifier array yield_modifier = [] for M, factor in zip(M_list, factor_list): yield_modifier.append([iso, M, Z, "multiply", factor]) # Run SYGMA with yields modification s4 = sygma.sygma(iniZ=Z, yield_modifier=yield_modifier) # Plot the yields as a function of stellar mass (IMF weighted yields for a 1Msun population) %matplotlib nbagg s4.plot_mass_range_contributions(specie=iso, color="C1", label="Multiplied") s1.plot_mass_range_contributions(specie=iso, color="C2", label="Original") plt.title(iso,fontsize=12) Explanation: Modifying the isotopic yields of several stellar models End of explanation # Import the NuPyCEE galactic chemical evolution code from NuPyCEE import omega # Print the list of available M and Z in the yields table print("M:",s1.M_table) print("Z:",s1.Z_table) # Boost the Mg-24 yields of all massive stars by a factor of 2 iso = "Mg-24" yield_modifier = [] for M in [12.0, 15.0, 20.0, 25.0]: for Z in [0.02, 0.01, 0.006, 0.001, 0.0001]: yield_modifier.append([iso,M,Z,"multiply",2]) # Run OMEGA with and without the yield modifier o1 = omega.omega() o2 = omega.omega(yield_modifier=yield_modifier) # Plot the amount of Mg-24 present in the interstellar medium of the galaxy %matplotlib nbagg o2.plot_mass(specie=iso, label="Modified", color="r", shape="--") o1.plot_mass(specie=iso, label="Original") # Set visual plt.xscale("linear") plt.xlabel("Galactic age [yr]") plt.ylabel(iso+" mass in the ISM [Msun]") Explanation: Example with OMEGA End of explanation
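The loop above extends naturally to several isotopes at once; the sketch below is an illustration of our own (the isotope list and the factor of 2 are arbitrary choices), reusing the M and Z values available in the yields table as printed above.

# Illustrative sketch: boost several isotopes for all massive-star models at once.
isotopes = ["Mg-24", "Si-28"]
yield_modifier = []
for iso in isotopes:
    for M in [12.0, 15.0, 20.0, 25.0]:
        for Z in [0.02, 0.01, 0.006, 0.001, 0.0001]:
            yield_modifier.append([iso, M, Z, "multiply", 2])

o3 = omega.omega(yield_modifier=yield_modifier)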
12,014
Given the following text description, write Python code to implement the functionality described below step by step Description: Use mozinor for regression Import the main module Step1: Prepare the pipeline (str) filepath Step2: Now run the pipeline This may take some time Step3: The class instance now contains 2 objects: the model for this data, and the best stacking for this data To auto-generate the code of the model Generate the code for the best model Step4: Generate the code for the best stacking Step6: To check which model is the best Best model Step8: Best stacking
Python Code: from mozinor.baboulinet import Baboulinet Explanation: Use mozinor for regression Import the main module End of explanation cls = Baboulinet(filepath="toto2.csv", y_col="predict", regression=True) Explanation: Prepare the pipeline (str) filepath: Give the csv file (str) y_col: The column to predict (bool) regression: Regression or Classification ? (bool) process: (WARNING) apply some preprocessing on your data (tune this preprocess with params below) (char) sep: delimiter (list) col_to_drop: which columns you don't want to use in your prediction (bool) derivate: for all feature combinations apply n1 * n2, n1 / n2 ... (bool) transform: for all features apply log(n), sqrt(n), square(n) (bool) scaled: scale the data ? (bool) infer_datetime: for all columns check the type and build new columns from them (day, month, year, time) if they are date type (str) encoding: data encoding (bool) dummify: apply dummies on your categorical variables The data files have been generated by sklearn.datasets.make_regression End of explanation res = cls.babouline() Explanation: Now run the pipeline May take some time End of explanation cls.bestModelScript() Explanation: The class instance now contains 2 objects: the model for this data, and the best stacking for this data. To auto-generate the code of the model: Generate the code for the best model End of explanation cls.bestStackModelScript() Explanation: Generate the code for the best stacking End of explanation res.best_model show = "Model: {}, Score: {}" print(show.format(res.best_model["Estimator"], res.best_model["Score"])) Explanation: To check which model is the best Best model End of explanation res.best_stack_models show = "FirstModel: {}, SecondModel: {}, Score: {}" print(show.format(res.best_stack_models["Fit1stLevelEstimator"], res.best_stack_models["Fit2ndLevelEstimator"], res.best_stack_models["Score"])) Explanation: Best stacking End of explanation
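As a quick aside, the same constructor arguments listed in the description above can be reused for a classification run. The snippet below is only a sketch: the file name and column names are placeholders, and default values for the optional flags may differ from what is shown here.

```python
# Hypothetical classification run; "my_data.csv", "label" and "id" are placeholders.
from mozinor.baboulinet import Baboulinet

clf = Baboulinet(filepath="my_data.csv",
                 y_col="label",
                 regression=False,      # classification instead of regression
                 process=True,          # enable the optional preprocessing
                 sep=",",
                 col_to_drop=["id"],
                 scaled=True,
                 dummify=True)

clf_res = clf.babouline()               # same entry point as above
clf.bestModelScript()                   # auto-generate code for the best model
print(clf_res.best_model["Estimator"], clf_res.best_model["Score"])
```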
12,015
Given the following text description, write Python code to implement the functionality described below step by step Description: Anna KaRNNa In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network. Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever. Step3: And we can see the characters encoded as integers. Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from. Step5: Making training mini-batches Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps. Step7: If you implemented get_batches correctly, the above output should look something like ``` x [[55 63 69 22 6 76 45 5 16 35] [ 5 69 1 5 12 52 6 5 56 52] [48 29 12 61 35 35 8 64 76 78] [12 5 24 39 45 29 12 56 5 63] [ 5 29 6 5 29 78 28 5 78 29] [ 5 13 6 5 36 69 78 35 52 12] [63 76 12 5 18 52 1 76 5 58] [34 5 73 39 6 5 12 52 36 5] [ 6 5 29 78 12 79 6 61 5 59] [ 5 78 69 29 24 5 6 52 5 63]] y [[63 69 22 6 76 45 5 16 35 35] [69 1 5 12 52 6 5 56 52 29] [29 12 61 35 35 8 64 76 78 28] [ 5 24 39 45 29 12 56 5 63 29] [29 6 5 29 78 28 5 78 29 45] [13 6 5 36 69 78 35 52 12 43] [76 12 5 18 52 1 76 5 58 52] [ 5 73 39 6 5 12 52 36 5 78] [ 5 29 78 12 79 6 61 5 59 63] [78 69 29 24 5 6 52 5 63 76]] `` although the exact numbers will be different. Check to make sure the data is shifted over one step fory`. Building the model Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network. <img src="assets/charRNN.png" width=500px> Inputs First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size. Exercise Step8: LSTM Cell Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer. We first create a basic LSTM cell with python lstm = tf.contrib.rnn.BasicLSTMCell(num_units) where num_units is the number of units in the hidden layers in the cell. 
Then we can add dropout by wrapping it with python tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this python tf.contrib.rnn.MultiRNNCell([cell]*num_layers) This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like ```python def build_cell(num_units, keep_prob) Step9: RNN Output Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text. If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$. We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$. One we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be default. To avoid this, we wrap the variables in a variable scope so we can give them unique names. Exercise Step10: Training loss Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$. Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss. Exercise Step11: Optimizer Here we build the optimizer. Normal RNNs have have issues gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. 
This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step. Step12: Build the network Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN. Exercise Step13: Hyperparameters Here are the hyperparameters for the network. batch_size - Number of sequences running through the network in one pass. num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here. lstm_size - The number of units in the hidden layers. num_layers - Number of hidden LSTM layers to use learning_rate - Learning rate for training keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this. Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from. Tips and Tricks Monitoring Validation Loss vs. Training Loss If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular Step14: Time for training This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint. Here I'm saving checkpoints with the format i{iteration number}_l{# hidden layer units}.ckpt Exercise Step15: Saved checkpoints Read up on saving and loading checkpoints here Step16: Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. Step17: Here, pass in the path to a checkpoint and sample from the network.
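Before the full solution code, here is a stand-alone NumPy sketch of the top-N sampling idea described in the last step; it mirrors the pick_top_n helper that appears in the code below, and the probability vector is made up purely for illustration.

```python
import numpy as np

def sample_top_n(preds, top_n=5, rng=np.random.default_rng()):
    """Zero out all but the top_n probabilities, renormalize, and draw an index."""
    p = np.asarray(preds, dtype=float).copy()
    p[np.argsort(p)[:-top_n]] = 0.0   # keep only the N most likely characters
    p /= p.sum()                      # renormalize to a probability distribution
    return rng.choice(len(p), p=p)    # index of the sampled next character

probs = np.array([0.02, 0.40, 0.05, 0.30, 0.13, 0.10])
next_char_idx = sample_top_n(probs, top_n=3)   # one of the 3 most likely characters
```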
Python Code: import time from collections import namedtuple import numpy as np import tensorflow as tf Explanation: Anna KaRNNa In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> End of explanation with open('anna.txt', 'r') as f: text=f.read() vocab = sorted(set(text)) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32) Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network. End of explanation text[:100] Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever. End of explanation encoded[:100] Explanation: And we can see the characters encoded as integers. End of explanation len(vocab) Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from. End of explanation def get_batches(arr, n_seqs, n_steps): '''Create a generator that returns batches of size n_seqs x n_steps from arr. Arguments --------- arr: Array you want to make batches from n_seqs: Batch size, the number of sequences per batch (num of seqs is different from num of batches) n_steps: Number of sequence steps per batch ''' # Get the number of characters per batch and number of batches we can make characters_per_batch = n_seqs * n_steps n_batches = len(arr) // characters_per_batch # Keep only enough characters to make full batches arr = arr[: n_batches * characters_per_batch] # Reshape into n_seqs rows arr = arr.reshape((n_seqs, n_steps * n_batches)) # arr = arr.reshape((n_seqs, -1)) for n in range(0, arr.shape[1], n_steps): # The features x = arr[:, n: n + n_steps] # The targets, shifted by one y = np.zeros_like(x) y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0] yield x, y Explanation: Making training mini-batches Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this: <img src="assets/[email protected]" width=500px> <br> We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator. The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. 
Once you know the number of batches and the batch size, you can get the total number of characters to keep. After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches. Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this: python y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0] where x is the input batch and y is the target batch. The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide. Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself. End of explanation batches = get_batches(encoded, 10, 50) x, y = next(batches) x1, y1 = next(batches) print('x\n', x[:10, :10]) print('\ny\n', y[:10, :10]) result0 = [int_to_vocab[ele] for ele in x[0, :]] print(result0) result1 = [int_to_vocab[ele] for ele in x[1, :]] print(result1) result0 = [int_to_vocab[ele] for ele in x1[0, :]] print(result0) result1 = [int_to_vocab[ele] for ele in x1[1, :]] print(result1) Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps. 
End of explanation def build_inputs(batch_size, num_steps): ''' Define placeholders for inputs, targets, and dropout Arguments --------- batch_size: Batch size, number of sequences per batch num_steps: Number of sequence steps in a batch ''' # Declare placeholders we'll feed into the graph inputs = tf.placeholder(dtype=tf.int32, shape=(batch_size, num_steps), name="input") targets = tf.placeholder(dtype=tf.int32, shape=(batch_size, num_steps), name="targets") # Keep probability placeholder for drop out layers keep_prob = tf.placeholder(dtype=tf.float32, name="input") return inputs, targets, keep_prob Explanation: If you implemented get_batches correctly, the above output should look something like ``` x [[55 63 69 22 6 76 45 5 16 35] [ 5 69 1 5 12 52 6 5 56 52] [48 29 12 61 35 35 8 64 76 78] [12 5 24 39 45 29 12 56 5 63] [ 5 29 6 5 29 78 28 5 78 29] [ 5 13 6 5 36 69 78 35 52 12] [63 76 12 5 18 52 1 76 5 58] [34 5 73 39 6 5 12 52 36 5] [ 6 5 29 78 12 79 6 61 5 59] [ 5 78 69 29 24 5 6 52 5 63]] y [[63 69 22 6 76 45 5 16 35 35] [69 1 5 12 52 6 5 56 52 29] [29 12 61 35 35 8 64 76 78 28] [ 5 24 39 45 29 12 56 5 63 29] [29 6 5 29 78 28 5 78 29 45] [13 6 5 36 69 78 35 52 12 43] [76 12 5 18 52 1 76 5 58 52] [ 5 73 39 6 5 12 52 36 5 78] [ 5 29 78 12 79 6 61 5 59 63] [78 69 29 24 5 6 52 5 63 76]] `` although the exact numbers will be different. Check to make sure the data is shifted over one step fory`. Building the model Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network. <img src="assets/charRNN.png" width=500px> Inputs First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size. Exercise: Create the input placeholders in the function below. End of explanation def build_lstm(lstm_size, num_layers, batch_size, keep_prob): ''' Build LSTM cell. Arguments --------- keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability lstm_size: Size of the hidden layers in the LSTM cells num_layers: Number of LSTM layers batch_size: Batch size ''' def build_cell(num_units, keep_prob): lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return drop cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)]) initial_state = cell.zero_state(batch_size, tf.float32) return cell, initial_state Explanation: LSTM Cell Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer. We first create a basic LSTM cell with python lstm = tf.contrib.rnn.BasicLSTMCell(num_units) where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with python tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. 
Previously with TensorFlow 1.0, you could do this python tf.contrib.rnn.MultiRNNCell([cell]*num_layers) This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like ```python def build_cell(num_units, keep_prob): lstm = tf.contrib.rnn.BasicLSTMCell(num_units) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return drop tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)]) ``` Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell. We also need to create an initial cell state of all zeros. This can be done like so python initial_state = cell.zero_state(batch_size, tf.float32) Below, we implement the build_lstm function to create these LSTM cells and the initial state. End of explanation def build_output(lstm_output, in_size, out_size): ''' Build a softmax layer, return the softmax output and logits. Arguments --------- lstm_output: List of output tensors from the LSTM layer in_size: Size of the input tensor, for example, size of the LSTM cells (which might means L in the above instruction?) out_size: Size of this softmax layer. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size C, the number of classes/characters we have in our text. ''' # Reshape output so it's a bunch of rows, one row for each step for each sequence. # Concatenate lstm_output over axis 1 (the columns) # lstm_output is N×M×L and after concatenation the result should be (M∗N)×L seq_output = tf.concat(lstm_output, axis=1) print("seq_output.shape: ", seq_output.shape) # Reshape seq_output to a 2D tensor with lstm_size columns x = tf.reshape(seq_output, shape=(-1, in_size)) print("x.shape: ", x.shape) print("out_size: ", out_size) # Connect the RNN outputs to a softmax layer with tf.variable_scope('softmax'): # Create the weight and bias variables here softmax_w = tf.Variable(tf.truncated_normal(shape=(in_size, out_size), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(shape=(out_size))) # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch # of rows of logit outputs, one for each step and sequence logits = tf.matmul(x, softmax_w) + softmax_b # Use softmax to get the probabilities for predicted characters out = tf.nn.softmax(logits, name='predictions') return out, logits Explanation: RNN Output Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text. If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$. We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. 
That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$. One we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be default. To avoid this, we wrap the variables in a variable scope so we can give them unique names. Exercise: Implement the output layer in the function below. End of explanation def build_loss(logits, targets, lstm_size, num_classes): ''' Calculate the loss from the logits and the targets. Arguments --------- logits: Logits from final fully connected layer, shape is (10000, 83) targets: Targets for supervised learning, shape is (100, 100) lstm_size: Number of LSTM hidden units, 512 num_classes: Number of classes in targets, 83 ''' # One-hot encode targets and reshape to match logits, one row per sequence per step y_one_hot = tf.one_hot(indices=targets, depth=num_classes) y_reshaped = tf.reshape(y_one_hot, shape=logits.shape) # Softmax cross entropy loss loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_reshaped, logits=logits) loss = tf.reduce_mean(loss) return loss Explanation: Training loss Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$. Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss. Exercise: Implement the loss calculation in the function below. End of explanation def build_optimizer(loss, learning_rate, grad_clip): ''' Build optmizer for training, using gradient clipping. Arguments: loss: Network loss learning_rate: Learning rate for optimizer ''' # Optimizer for training, using gradient clipping to control exploding gradients tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) return optimizer Explanation: Optimizer Here we build the optimizer. Normal RNNs have have issues gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step. 
End of explanation class CharRNN: def __init__(self, num_classes, batch_size=64, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): # When we're using this network for sampling later, we'll be passing in # one character at a time, so providing an option for that if sampling == True: batch_size, num_steps = 1, 1 else: batch_size, num_steps = batch_size, num_steps tf.reset_default_graph() # Build the input placeholder tensors self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps) # Build the LSTM cell cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, keep_prob) ### Run the data through the RNN layers # First, one-hot encode the input tokens x_one_hot = tf.one_hot(indices=self.inputs, depth=num_classes) # Run each sequence step through the RNN with tf.nn.dynamic_rnn outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state) self.final_state = state # Get softmax predictions and logits self.prediction, self.logits = build_output(outputs, lstm_size, num_classes) # for the record, lstm_size is the number of hidden layer # Loss and optimizer (with gradient clipping) self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes) self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip) Explanation: Build the network Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN. Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network. End of explanation batch_size = 10 # Sequences per batch num_steps = 50 # Number of sequence steps per batch lstm_size = 128 # Size of hidden layers in LSTMs num_layers = 2 # Number of LSTM layers learning_rate = 0.01 # Learning rate keep_prob = 0.5 # Dropout keep probability Explanation: Hyperparameters Here are the hyperparameters for the network. batch_size - Number of sequences running through the network in one pass. num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here. lstm_size - The number of units in the hidden layers. num_layers - Number of hidden LSTM layers to use learning_rate - Learning rate for training keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this. Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from. Tips and Tricks Monitoring Validation Loss vs. Training Loss If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. 
The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular: If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on. If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer) Approximate number of parameters The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are: The number of parameters in your model. This is printed when you start training. The size of your dataset. 1MB file is approximately 1 million characters. These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples: I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger. I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss. Best models strategy The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end. It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance. By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative. End of explanation epochs = 20 # Save every N iterations save_every_n = 200 model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps, lstm_size=lstm_size, num_layers=num_layers, learning_rate=learning_rate) saver = tf.train.Saver(max_to_keep=100) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/______.ckpt') counter = 0 for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 for x, y in get_batches(encoded, batch_size, num_steps): counter += 1 start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: keep_prob, model.initial_state: new_state} batch_loss, new_state, _ = sess.run([model.loss, model.final_state, model.optimizer], feed_dict=feed) end = time.time() print('Epoch: {}/{}... '.format(e+1, epochs), 'Training Step: {}... 
'.format(counter), 'Training loss: {:.4f}... '.format(batch_loss), '{:.4f} sec/batch'.format((end - start))) if (counter % save_every_n == 0): saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)) saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)) Explanation: Time for training This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint. Here I'm saving checkpoints with the format i{iteration number}_l{# hidden layer units}.ckpt Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU. End of explanation tf.train.get_checkpoint_state('checkpoints') Explanation: Saved checkpoints Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables End of explanation def pick_top_n(preds, vocab_size, top_n=5): p = np.squeeze(preds) p[np.argsort(p)[:-top_n]] = 0 p = p / np.sum(p) c = np.random.choice(vocab_size, 1, p=p)[0] return c def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "): samples = [c for c in prime] model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, checkpoint) new_state = sess.run(model.initial_state) for c in prime: x = np.zeros((1, 1)) x[0,0] = vocab_to_int[c] feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) for i in range(n_samples): x[0,0] = c feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) return ''.join(samples) Explanation: Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. End of explanation tf.train.latest_checkpoint('checkpoints') checkpoint = tf.train.latest_checkpoint('checkpoints') samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i200_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i600_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i1200_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) Explanation: Here, pass in the path to a checkpoint and sample from the network. End of explanation
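As a small addendum, the batch bookkeeping described in this notebook can be checked with a toy array; the numbers below are made up purely for illustration and are not the real length of anna.txt.

```python
# Toy illustration of the mini-batch arithmetic described above.
import numpy as np

encoded_toy = np.arange(2050)               # pretend the encoded text has 2050 characters
n_seqs, n_steps = 10, 50                    # batch size and sequence length
chars_per_batch = n_seqs * n_steps          # 500 characters consumed per batch
n_batches = len(encoded_toy) // chars_per_batch   # 4 full batches; 50 characters dropped

arr = encoded_toy[:n_batches * chars_per_batch].reshape((n_seqs, -1))
print(arr.shape)                            # (10, 200): one row per sequence
x = arr[:, :n_steps]                        # first 10 x 50 input window
y = np.roll(x, -1, axis=1)                  # targets shifted by one, wrapping the first column
```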
12,016
Given the following text description, write Python code to implement the functionality described below step by step Description: PLUMOLOGY vis Step1: Reading PLUMED output We read a file in PLUMED output format Step2: We can also specify certain columns using regular expressions, and also specify the stepping Step3: Let's read some MetaD hills files Step4: The separate files are horizontally concatenated into one dataframe Step5: Analysis Let's compute 1D histograms of our collective variables Step6: We also have a SketchMap representation of our trajectory and can visualize it as a free-energy surface Step7: We should probably clip it to a reasonable range first
Python Code: from plumology import vis, util, io import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline Explanation: PLUMOLOGY vis: Visualization and plotting functions util: Various utilities and calculation functions io: Functions to read certain output files and an HDF interface End of explanation data = io.read_plumed('data.dat') data.head() Explanation: Reading PLUMED output We read a file in PLUMED output format: End of explanation data = io.read_plumed('data.dat', columns=[r'p.i\d', 'peplen'], step=10) data.head() Explanation: We can also specify certain columns using regular expressions, and also specify the stepping: End of explanation hills = io.read_all_hills(['HILLS.0', 'HILLS.1']) Explanation: Lets read some MetaD hills files: End of explanation hills.head() Explanation: The separate files are horizontally concatenated into one dataframe: End of explanation dist, ranges = util.dist1D(data, nbins=50) _ = vis.dist1D(dist, ranges) Explanation: Analysis Lets compute 1D histograms of our collective variables: End of explanation sm_data = io.read_plumed('colvar-red1.dat') Explanation: We also have a SketchMap representation of our trajectory and can visualize it as a free-energy surface: End of explanation clipped_data = util.clip(sm_data, ranges={'cv1': (-50, 50), 'cv2': (-50, 50)}) edges = util.dist1D(clipped_data, ret='edges') dist = util.dist2D(clipped_data, nbins=50, weight_name='ww') fes = util.free_energy(dist, kbt=2.49) _ = vis.dist2D(fes, edges) Explanation: We should probably clip it to a reasonable range first: End of explanation
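For readers who want to see what the histogram-to-free-energy step does numerically, here is a stand-alone NumPy sketch of the same idea (F = -kT ln p). It is only meant to illustrate the calculation with synthetic data; the exact binning and normalization inside plumology's util.dist2D and util.free_energy may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
cv1, cv2 = rng.normal(size=10_000), rng.normal(size=10_000)
weights = rng.random(10_000)                 # e.g. a reweighting column such as 'ww'

hist, xedges, yedges = np.histogram2d(cv1, cv2, bins=50, weights=weights)
prob = hist / hist.sum()                     # normalized probability per bin

kbt = 2.49                                   # roughly kT in kJ/mol near 300 K
with np.errstate(divide="ignore"):
    fes = -kbt * np.log(prob)                # free energy; empty bins become inf
fes -= fes[np.isfinite(fes)].min()           # shift the minimum to zero
```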
12,017
Given the following text description, write Python code to implement the functionality described below step by step Description: imports needed What library am I using? http Step1: noteStore http Step2: my .__MASTER note__ is actually pretty complex....so parsing it and adding to it will take some effort. But let's give it a try. Working with Note Contents Things to figure out Step3: Getting tags by name Step4: things to do with tags find all notes for a given tag get tag guid, name, count, parent / check for existence create new tag delete tag move tag to new parent expunge tags -- disconnect tags from notes can we get history of a tag Step5: synchronization state Step6: list notebooks and note counts Step8: compute distribution of note sizes Step10: creating a new note with content and tag Note type noteStore.createNote nice to have convenience of not having to calculate tag guids Step11: Move Evernote tags to have a different parent
Python Code: import settings from evernote.api.client import EvernoteClient dev_token = settings.authToken client = EvernoteClient(token=dev_token, sandbox=False) userStore = client.get_user_store() user = userStore.getUser() print user.username import EvernoteWebUtil as ewu ewu.init(settings.authToken) ewu.user.username Explanation: imports needed What library am I using? http://dev.evernote.com/ When I'm ready I would hit a Get API key button and fill out the form: https://www.evernote.com/shard/s1/sh/e03e0393-b2cb-4a54-94d1-60e65f482ad3/bb93b060e287d4d979fef70d7b997df9 Docs: The Evernote SDK for Python Quick-start Guide Evernote SDK for JavaScript Quick-start Guide In getting started, you can take one or both of the following approaches: get a key set up for the sandbox set up a dev key to work with your own account and not worry about Oauth initially. you can have a dev token for both the sandbox and for production to access production accounts: https://sandbox.evernote.com/api/DeveloperToken.action https://www.evernote.com/api/DeveloperToken.action EvernoteWebUtil is my wrapper for ... End of explanation # getting notes for a given notebook import datetime from itertools import islice notes = islice(ewu.notes_metadata(includeTitle=True, includeUpdated=True, includeUpdateSequenceNum=True, notebookGuid=ewu.notebook(name=':CORE').guid), None) for note in notes: print note.title, note.updateSequenceNum, datetime.datetime.fromtimestamp(note.updated/1000.) # let's read my __MASTER note__ # is it possible to search notes by title? [(n.guid, n.title) for n in ewu.notes(title=".__MASTER note__")] import settings from evernote.api.client import EvernoteClient dev_token = settings.authToken client = EvernoteClient(token=dev_token, sandbox=False) userStore = client.get_user_store() user = userStore.getUser() noteStore = client.get_note_store() print user.username userStore.getUser() noteStore.getNoteContent('ecc59d05-c010-4b3b-a04b-7d4eeb7e8505') Explanation: noteStore http://dev.evernote.com/documentation/reference/NoteStore.html#Svc_NoteStore getting notebook by name End of explanation import lxml Explanation: my .__MASTER note__ is actually pretty complex....so parsing it and adding to it will take some effort. But let's give it a try. 
Working with Note Contents Things to figure out: XML parsing XML creation XML validation via schema End of explanation ewu.tag('#1-Now') sorted(ewu.tag_counts_by_name().items(), key=lambda x: -x[1])[:10] tags = ewu.noteStore.listTags() tags_by_name = dict([(tag.name, tag) for tag in tags]) tag_counts_by_name = ewu.tag_counts_by_name() tags_by_guid = ewu.tags_by_guid() # figure out which tags have no notes attached and possibly delete them -- say if they don't have children tags # oh -- don't delete them willy nilly -- some have organizational purposes set(tags_by_name) - set(tag_counts_by_name) # calculated tag_children -- tags that have children from collections import defaultdict tag_children = defaultdict(list) for tag in tags: if tag.parentGuid is not None: tag_children[tag.parentGuid].append(tag) [tags_by_guid[guid].name for guid in tag_children.keys()] for (guid, children) in tag_children.items(): print tags_by_guid[guid].name for child in children: print "\t", child.name Explanation: Getting tags by name End of explanation # find all notes for a given tag [n.title for n in ewu.notes_metadata(includeTitle=True, tagGuids=[tags_by_name['#1-Now'].guid])] ewu.notebook(name='Action Pending').guid [n.title for n in ewu.notes_metadata(includeTitle=True, notebookGuid=ewu.notebook(name='Action Pending').guid, tagGuids=[tags_by_name['#1-Now'].guid])] # with a GUID, you can get the current state of a tag # http://dev.evernote.com/documentation/reference/NoteStore.html#Fn_NoteStore_getTag # not super useful for me since I'm already pulling a list of all tags in order to map names to guids ewu.noteStore.getTag(ewu.tag(name='#1-Now').guid) # create a tag # http://dev.evernote.com/documentation/reference/NoteStore.html#Fn_NoteStore_createTag # must pass name; optional to pass from evernote.edam.type.ttypes import Tag ewu.noteStore.createTag(Tag(name="happy happy2!", parentGuid=None)) ewu.tag(name="happy happy2!", refresh=True) # expunge tag # http://dev.evernote.com/documentation/reference/NoteStore.html#Fn_NoteStore_expungeTag ewu.noteStore.expungeTag(ewu.tag("happy happy2!").guid) # find all notes for a given tag and notebook action_now_notes = list(ewu.notes_metadata(includeTitle=True, notebookGuid=ewu.notebook(name='Action Pending').guid, tagGuids=[tags_by_name['#1-Now'].guid])) [(n.guid, n.title) for n in action_now_notes ] # get all tags for a given note import datetime from itertools import islice notes = list(islice(ewu.notes_metadata(includeTitle=True, includeUpdated=True, includeUpdateSequenceNum=True, notebookGuid=ewu.notebook(name=':PROJECTS').guid), None)) plus_tags_set = set() for note in notes: tags = ewu.noteStore.getNoteTagNames(note.guid) plus_tags = [tag for tag in tags if tag.startswith("+")] plus_tags_set.update(plus_tags) print note.title, note.updateSequenceNum, datetime.datetime.fromtimestamp(note.updated/1000.), \ len(plus_tags) == 1 Explanation: things to do with tags find all notes for a given tag get tag guid, name, count, parent / check for existence create new tag delete tag move tag to new parent expunge tags -- disconnect tags from notes can we get history of a tag: when created? dealing with deleted tags find "related" tags -- in the Evernote client, when I click on a specific tag, it seems like I see the highlighting of other, possibly related, tags -- http://dev.evernote.com/documentation/reference/NoteStore.html#Fn_NoteStore_findRelated ? I will also want to locate notes that have a certain tag or set of tags and are in a certain notebook. 
End of explanation syncstate = ewu.noteStore.getSyncState() syncstate syncstate.fullSyncBefore, syncstate.updateCount import datetime datetime.datetime.fromtimestamp(syncstate.fullSyncBefore/1000.) Explanation: synchronization state End of explanation ewu.notebookcounts() Explanation: list notebooks and note counts End of explanation k = list(ewu.sizes_of_notes()) print len(k) plt.plot(k) sort(k) plt.plot(sort(k)) plt.plot([log(i) for i in sort(k)]) Make a histogram of normally distributed random numbers and plot the analytic PDF over it import numpy as np import matplotlib.pyplot as plt import matplotlib.mlab as mlab mu, sigma = 100, 15 x = mu + sigma * np.random.randn(10000) fig = plt.figure() ax = fig.add_subplot(111) # the histogram of the data n, bins, patches = ax.hist(x, 50, normed=1, facecolor='green', alpha=0.75) # hist uses np.histogram under the hood to create 'n' and 'bins'. # np.histogram returns the bin edges, so there will be 50 probability # density values in n, 51 bin edges in bins and 50 patches. To get # everything lined up, we'll compute the bin centers bincenters = 0.5*(bins[1:]+bins[:-1]) # add a 'best fit' line for the normal PDF y = mlab.normpdf( bincenters, mu, sigma) l = ax.plot(bincenters, y, 'r--', linewidth=1) ax.set_xlabel('Smarts') ax.set_ylabel('Probability') #ax.set_title(r'$\mathrm{Histogram\ of\ IQ:}\ \mu=100,\ \sigma=15$') ax.set_xlim(40, 160) ax.set_ylim(0, 0.03) ax.grid(True) plt.show() plt.hist(k) plt.hist([log10(i) for i in k], 50) # calculate Notebook name -> note count nb_guid_dict = dict([(nb.guid, nb) for nb in ewu.all_notebooks()]) nb_name_dict = dict([(nb.name, nb) for nb in ewu.all_notebooks()]) ewu.notes_metadata(includeTitle=True) import itertools g = itertools.islice(ewu.notes_metadata(includeTitle=True, includeUpdateSequenceNum=True, notebookGuid=nb_name_dict["Action Pending"].guid), 10) list(g) len(_) # grab content of a specific note # http://dev.evernote.com/documentation/reference/NoteStore.html#Fn_NoteStore_getNote # params: guid, withContent, withResourcesData, withResourcesRecognition, withResourcesAlternateData note = ewu.noteStore.getNote('a49d531e-f3f8-4e72-9523-e5a558f11d87', True, False, False, False) note_content = ewu.noteStore.getNoteContent('a49d531e-f3f8-4e72-9523-e5a558f11d87') note_content Explanation: compute distribution of note sizes End of explanation import EvernoteWebUtil as ewu reload(ewu) from evernote.edam.type.ttypes import Note note_template = <?xml version="1.0" encoding="UTF-8" standalone="no"?> <!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd"> <en-note style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;"> {0} </en-note> note = Note() note.title = "hello from ipython" note.content = note_template.format("hello from Canada 2") note.tagNames = ["hello world"] note = ewu.noteStore.createNote(note) note.guid assert False Explanation: creating a new note with content and tag Note type noteStore.createNote nice to have convenience of not having to calculate tag guids End of explanation from evernote.edam.type.ttypes import Tag import EvernoteWebUtil as ewu tags = ewu.noteStore.listTags() tags_by_name = dict([(tag.name, tag) for tag in tags]) print tags_by_name['+JoinTheAction'], tags_by_name['.Active Projects'] # update +JoinTheAction tag to put it underneath .Active Projects jta_tag = tags_by_name['+JoinTheAction'] jta_tag.parentGuid = tags_by_name['.Active Projects'].guid result = ewu.noteStore.updateTag(Tag(name=jta_tag.name, guid=jta_tag.guid, 
parentGuid=tags_by_name['.Active Projects'].guid)) print result # mark certain project as inactive result = ewu.noteStore.updateTag(Tag(name="+Relaunch unglue.it", guid=tags_by_name["+Relaunch unglue.it"].guid, parentGuid=tags_by_name['.Inactive Projects'].guid)) # getTag? ewu.noteStore.getTag(tags_by_name['+JoinTheAction'].guid) tags_by_name["+Relaunch unglue.it"] result = ewu.noteStore.updateTag(ewu.authToken, Tag(name="+Relaunch unglue.it", guid=tags_by_name["+Relaunch unglue.it"].guid, parentGuid=tags_by_name['.Inactive Projects'].guid)) Explanation: Move Evernote tags to have a different parent End of explanation
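The reparenting pattern above can be wrapped into a small convenience function. This sketch only uses calls that already appear in this notebook (listTags, updateTag and the Tag type); the EvernoteWebUtil import and the lack of error handling are simplifying assumptions.

```python
from evernote.edam.type.ttypes import Tag
import EvernoteWebUtil as ewu

def reparent_tag(child_name, parent_name):
    """Move the tag named child_name underneath the tag named parent_name."""
    tags_by_name = {tag.name: tag for tag in ewu.noteStore.listTags()}
    child, parent = tags_by_name[child_name], tags_by_name[parent_name]
    return ewu.noteStore.updateTag(Tag(name=child.name,
                                       guid=child.guid,
                                       parentGuid=parent.guid))

# e.g. reparent_tag("+JoinTheAction", ".Inactive Projects")
```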
12,018
Given the following text description, write Python code to implement the functionality described below step by step Description: Spectrum Plugins The SpectrumLike plugin is designed to handle binned photon/particle spectra. It comes in three basic classes Step1: We will construct a simulated spectrum over the energy range 10-1000 keV. The spectrum will have logrithmic energy boundaries. We will simulate a blackbody source spectrum on top of powerlaw background. Step2: The count spectrum Let's examine a few properties about the count spectrum including the contents stored in the plugin, viewing the count distribution, masking channels, and rebinnined the spectrum. We can examine the contents of our plugin with the display function Step3: These properties are accessible from the object. For example Step4: To view the count spectrum, we call the view_count_spectrum method Step5: It is even possible see which channels are above a given significance threshold. Red regions are below the supplied significance regions. Step6: Note Step7: which will set the energy range 10-12.5 keV and 56-100 keV to be used in the analysis. Note that there is no difference in saying 10 or 10.0. Channel selections Step8: This will set channels 10-12 and 20-50 as active channels to be used in the analysis Mixed channel and energy selections Step9: Use all measurements (i.e., reset to initial state) Step10: Exclude measurements Step11: Select and exclude Step12: Rebinning We can rebin the spectra based off a minimum total or background rate requried. This is useful when using profile likelihoods, however, we do not change the underlying likelihood by binning up the data. For more information, consult the statistics section. To rebin a spectrum based off the total counts, we specify the minimum counts per bin we would like, say 100 Step13: We can remove the rebinning this way Step14: Instead, when using a profile likelihood which requires at least one background count per bin to be valid, we would call Step15: Fitting To fit the data, we need to create a function, a PointSouce, a Model, and either a JointLikelihood or BayesianAnalysis object. Step16: Perhaps we want to fit a different model and compare the results. We change the spectral model and will overplot the fit's expected counts with the fit to the blackbody. Step17: Examining the fit in count space lets us easily see that the fit with the powerlaw model is very poor. We can of course deterimine the fit quality numerically, but this is saved for another section. DispersionSpectrumLike Instruments that exhibit energy dispersion must have their spectra fit through a process called forward folding. Let $R(\varepsilon,E)$ be our response converting between true (monte carlo) energy ($E$) and detector channel/energy ($\varepsilon$), $f(E, \vec{\phi}{\rm s})$ be our photon model which is a function of $E$ and source model parameters $\vec{\phi}{\rm s}$. Then, the source counts ($S_{c} (\vec{\phi}{\rm s})$) registered in the detector between channel (c) with energy boundaries $E{{\rm min}, c}$ and $E_{{\rm max}, c}$ (in the absence of background) are given by the convolution of the photon model with the response Step18: We can view the response and the count spectrum created. Step19: All the functionality of SpectrumLike is inherited in DispersionSpectrumLike. Therefore, fitting, and examination of the data is the same. OGIPLike Finally, many x-ray mission provide data in the form of fits files known and pulse-height analysis (PHA) data. 
The keywords for the information in the data are known as the Office of Guest Investigators Program (OGIP) standard. While these data are always a form of binned spectra, 3ML provides a convenience plugin for reading OGIP standard PHA Type I (single spectrum) and Type II (multiple spectrum) files. The OGIPLike plugin inherits from DispersionSpectrumLike and thus needs either a full response or a redistribution matrix (RMF) and ancillary response (ARF) file. The plugin will read the keywords in the data files to automatically figure out the correct likelihood for the observation.
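A typical call, to the best of my understanding of the 3ML API (keyword names can vary between versions, and the file names below are placeholders rather than files shipped with 3ML), looks something like this:

```python
from threeML import OGIPLike

# Placeholder file names; a real analysis would point at actual PHA/RSP/ARF files.
ogip_data = OGIPLike("my_obs",
                     observation="spectrum.pha",
                     background="spectrum_bak.pha",
                     response="spectrum.rsp",
                     arf_file="spectrum.arf")   # arf_file only if the RMF and ARF are split
ogip_data.set_active_measurements("10-900")     # same selection syntax as SpectrumLike
```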
Python Code: from threeML import * %matplotlib notebook import matplotlib.pyplot as plt import numpy as np Explanation: Spectrum Plugins The SpectrumLike plugin is designed to handle binned photon/particle spectra. It comes in three basic classes: SpectrumLike: Generic binned spectral DispersionSpectrumLike: Generic binned spectra with energy dispersion OGIPLike: binned spectra with dispersion from OGIP PHA files The functionality of all three plugins is the same. SpectrumLike The most basic spectrum plugin is SpectrumLike which handles spectra with and without backgrounds. There are six basic features of a spectrum: the energy boundries of the bins, the data in these energy bins, the statistical properties of the total spectrum Possion (counts are meausred in an on/off fashion), Gaussian (counts are the result of a masking process or a fit), the exposure, the background (and its associated statistical properties), and any known systematic errors associated with the total or background spectrum. Let's start by examining an observation where the total counts are Poisson distributed and the measured background ground has been observed by viewing an off-source region and hence is also Poisson. End of explanation energies = np.logspace(1,3,51) low_edge = energies[:-1] high_edge = energies[1:] # get a blackbody source function source_function = Blackbody(K=9E-2,kT=20) # power law background function background_function = Powerlaw(K=1,index=-1.5, piv=100.) spectrum_generator = SpectrumLike.from_function('fake', source_function=source_function, background_function=background_function, energy_min=low_edge, energy_max=high_edge) Explanation: We will construct a simulated spectrum over the energy range 10-1000 keV. The spectrum will have logrithmic energy boundaries. We will simulate a blackbody source spectrum on top of powerlaw background. End of explanation spectrum_generator.display() Explanation: The count spectrum Let's examine a few properties about the count spectrum including the contents stored in the plugin, viewing the count distribution, masking channels, and rebinnined the spectrum. We can examine the contents of our plugin with the display function: End of explanation print(spectrum_generator.exposure) print(spectrum_generator.significance) print(spectrum_generator.observed_counts) Explanation: These properties are accessible from the object. For example: End of explanation fig = spectrum_generator.view_count_spectrum() Explanation: To view the count spectrum, we call the view_count_spectrum method: End of explanation fig = spectrum_generator.view_count_spectrum(significance_level=5) Explanation: It is even possible see which channels are above a given significance threshold. Red regions are below the supplied significance regions. End of explanation spectrum_generator.set_active_measurements('10-12.5','56.0-100.0') fig = spectrum_generator.view_count_spectrum() Explanation: Note: In 3ML, the Significance module is used to compute significnaces. 
When total counts ($N_{\rm on}$) are Poisson distributed and the background or off-source counts ($N_{\rm off}$) are also Poisson distributed, the significance in $\sigma$ is calculated via the likelihood ratio derived in Li & Ma (1983): $$ \sigma = \sqrt{-2 \log \lambda} = \sqrt{2} \left( N_{\rm on} \log \left[ \frac{1+\alpha}{\alpha} \frac{N_{\rm on}}{N_{\rm on}+N_{\rm off}} \right] + N_{\rm off} \log \left[ (1 + \alpha)\frac{N_{\rm off}}{N_{\rm on}+N_{\rm off}} \right] \right)^{1/2}$$ In the case that the background is Gaussian distributed, an equivalent likelihood ratio is used (see Vianello in prep). Selection Many times, there are channels that are not valid for analysis due to poor instrument characteristics, overflow, or systematics. We then would like to mask or exclude these channels before fitting the spectrum. We provide several ways to do this, and it is useful to consult the docstring. However, we review the process here. NOTE to Xspec users: while XSpec uses integers and floats to distinguish between energy and channel specifications, 3ML does not, as it would be error-prone when writing scripts. Read the following documentation to know how to achieve the same functionality. Energy selections: They are specified as 'emin-emax'. Energies are in keV. End of explanation spectrum_generator.set_active_measurements('c10-c12','c20-c50') fig = spectrum_generator.view_count_spectrum() Explanation: which will set the energy range 10-12.5 keV and 56-100 keV to be used in the analysis. Note that there is no difference in saying 10 or 10.0. Channel selections: They are specified as 'c[channel min]-c[channel max]'. End of explanation spectrum_generator.set_active_measurements('0.2-c10','c20-1000') fig = spectrum_generator.view_count_spectrum() Explanation: This will set channels 10-12 and 20-50 as active channels to be used in the analysis. Mixed channel and energy selections: You can also specify mixed energy/channel selections, for example to go from 0.2 keV to channel 10 and from channel 20 to 1000 keV: End of explanation spectrum_generator.set_active_measurements('all') fig = spectrum_generator.view_count_spectrum() Explanation: Use all measurements (i.e., reset to initial state): Use 'all' to select all measurements, as in: End of explanation spectrum_generator.set_active_measurements(exclude=["c2-20", "50-c40"]) fig = spectrum_generator.view_count_spectrum() Explanation: Exclude measurements: Excluding measurements works the same way as selecting measurements, but with the "exclude" keyword set to the energies and/or channels to be excluded. To exclude between channel 10 and 20 keV and 50 keV to channel 120 do: End of explanation spectrum_generator.set_active_measurements("0.2-c10",exclude=["c30-c50"]) fig = spectrum_generator.view_count_spectrum() Explanation: Select and exclude: Call this method more than once if you need to select and exclude. For example, to select between 0.2 keV and channel 10, but exclude channel 30-50 and energy, do: End of explanation spectrum_generator.set_active_measurements("all") spectrum_generator.rebin_on_source(100) fig = spectrum_generator.view_count_spectrum() Explanation: Rebinning We can rebin the spectra based on a minimum total or background rate required. This is useful when using profile likelihoods; however, we do not change the underlying likelihood by binning up the data. For more information, consult the statistics section.
To rebin a spectrum based on the total counts, we specify the minimum counts per bin we would like, say 100: End of explanation spectrum_generator.remove_rebinning() Explanation: We can remove the rebinning this way: End of explanation spectrum_generator.rebin_on_background(10) fig = spectrum_generator.view_count_spectrum() spectrum_generator.remove_rebinning() Explanation: Instead, when using a profile likelihood which requires at least one background count per bin to be valid, we would call: End of explanation bb = Blackbody() pts = PointSource('mysource',0,0,spectral_shape=bb) model = Model(pts) # MLE fitting jl = JointLikelihood(model,DataList(spectrum_generator)) result = jl.fit() count_fig1 = spectrum_generator.display_model(min_rate=10) _ = plot_spectra(jl.results, flux_unit='erg/(cm2 s keV)') _ = plt.ylim(1E-20) Explanation: Fitting To fit the data, we need to create a function, a PointSource, a Model, and either a JointLikelihood or BayesianAnalysis object. End of explanation import warnings pl = Powerlaw() pts = PointSource('mysource',0,0,spectral_shape=pl) model = Model(pts) # MLE fitting jl = JointLikelihood(model,DataList(spectrum_generator)) with warnings.catch_warnings(): warnings.simplefilter('ignore') result = jl.fit() spectrum_generator.display_model(min_rate=10, show_data=False, show_residuals=True, data_color='g', model_color='b', model_label='powerlaw', model_subplot=count_fig1.axes) Explanation: Perhaps we want to fit a different model and compare the results. We change the spectral model and will overplot the fit's expected counts with the fit to the blackbody. End of explanation from threeML.plugins.DispersionSpectrumLike import DispersionSpectrumLike from threeML.utils.OGIP.response import OGIPResponse from threeML.io.package_data import get_path_of_data_file # we will use a demo response response = OGIPResponse(get_path_of_data_file('datasets/ogip_powerlaw.rsp')) source_function = Broken_powerlaw(K=1E-2, alpha=0, beta=-2, xb=2000, piv=200) background_function = Powerlaw(K=10, index=-1.5, piv=100.) dispersion_spectrum_generator = DispersionSpectrumLike.from_function('test', source_function=source_function, response=response, background_function=background_function) Explanation: Examining the fit in count space lets us easily see that the fit with the powerlaw model is very poor. We can of course determine the fit quality numerically, but this is saved for another section. DispersionSpectrumLike Instruments that exhibit energy dispersion must have their spectra fit through a process called forward folding. Let $R(\varepsilon,E)$ be our response converting between true (Monte Carlo) energy ($E$) and detector channel/energy ($\varepsilon$), and let $f(E, \vec{\phi}_{\rm s})$ be our photon model, which is a function of $E$ and the source model parameters $\vec{\phi}_{\rm s}$. Then, the source counts $S_{c} (\vec{\phi}_{\rm s})$ registered in detector channel $c$ with energy boundaries $E_{{\rm min}, c}$ and $E_{{\rm max}, c}$ (in the absence of background) are given by the convolution of the photon model with the response: $$S_{c} (\vec{\phi}_{\rm s}) = \int_{0}^{\infty} {\rm d} E \, f(E, \vec{\phi}_{\rm s}) \int_{E_{{\rm min}, c}}^{E_{{\rm max}, c}} {\rm d} \varepsilon \, R(\varepsilon, E) $$ Therefore, to fit the data in count space, we assume a photon model, fold it through the response, and calculate the predicted counts. This process is iterated on the source model parameters via likelihood minimization or posterior sampling until an optimal set of parameters is found. A small numerical sketch of this discretized folding is given below.
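The following standalone NumPy sketch (not part of the original notebook) illustrates the discretized version of the folding integral above: the response becomes a matrix mapping true-energy bins to detector channels, and the predicted counts are simply a matrix product of that response with the binned photon model. All bin edges, response values, and the exposure here are made-up numbers for illustration only.

import numpy as np

# Hypothetical binning: 4 true-energy bins folded into 3 detector channels.
e_lo = np.array([10., 30., 100., 300.])    # keV, lower bin edges
e_hi = np.array([30., 100., 300., 1000.])  # keV, upper bin edges

# Toy photon model: a power law with index -2 (photons / cm2 / s / keV).
def photon_model(E, K=1e-2, index=-2.0, piv=100.0):
    return K * (E / piv) ** index

# Integrate the model over each true-energy bin (simple midpoint rule).
e_mid = 0.5 * (e_lo + e_hi)
model_flux = photon_model(e_mid) * (e_hi - e_lo)  # photons / cm2 / s per bin

# Toy response matrix R[c, j]: effective area (cm2) times the probability that
# a photon from true-energy bin j is recorded in detector channel c.
R = np.array([[50., 10.,  0.,  0.],
              [ 5., 60., 15.,  0.],
              [ 0.,  5., 40., 20.]])

exposure = 1000.0  # seconds

# Discretized forward folding: predicted source counts per channel.
predicted_counts = exposure * R @ model_flux
print(predicted_counts)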
To handle dispersed spectra, 3ML provides the DispersionSpectrumLike plugin. End of explanation _ = dispersion_spectrum_generator.display_rsp() fig = dispersion_spectrum_generator.view_count_spectrum() Explanation: We can view the response and the count spectrum created. End of explanation ogip_data = OGIPLike('ogip', observation=get_path_of_data_file('datasets/ogip_powerlaw.pha'), background = get_path_of_data_file('datasets/ogip_powerlaw.bak'), response=get_path_of_data_file('datasets/ogip_powerlaw.rsp')) ogip_data.display() fig = ogip_data.view_count_spectrum() Explanation: All the functionality of SpectrumLike is inherited in DispersionSpectrumLike. Therefore, fitting and examination of the data are the same. OGIPLike Finally, many X-ray missions provide data in the form of FITS files known as pulse-height analysis (PHA) data. The keywords for the information in the data are known as the Office of Guest Investigator Programs (OGIP) standard. While these data are always a form of binned spectra, 3ML provides a convenience plugin for reading OGIP standard PHA Type I (single spectrum) and Type II (multiple spectrum) files. The OGIPLike plugin inherits from DispersionSpectrumLike and thus needs either a full response or a redistribution matrix (RMF) and ancillary response (ARF) file. The plugin will probe the keywords in the data files to automatically figure out the correct likelihood for the observation. End of explanation
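As a minimal follow-up sketch (not part of the original notebook), the OGIP data loaded above could be fit with the same MLE pattern used earlier for SpectrumLike; the source name 'ogip_source' and the sky position (0, 0) are arbitrary placeholders.

# Hedged sketch: fit the OGIPLike data with a power law, reusing the pattern above.
pl = Powerlaw()
pts = PointSource('ogip_source', 0, 0, spectral_shape=pl)
model = Model(pts)

jl = JointLikelihood(model, DataList(ogip_data))
result = jl.fit()

# Compare the folded model with the observed counts.
fig = ogip_data.display_model(min_rate=10)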
12,019
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Linear Spatial Autocorrelation Model The two methodologies under study (i.e. Meta-analysis and distributed networks) share the assumption that the observations are independent between each other. In other words, if two plots (say p1 and p2 ) are from different studies, the covariance between p1 and p2 is zero. The assumption is reasonable because of the character of both methodologies. Data derived from meta-analysis and distributed networks is composed of experiments measured in different environmental conditions and geographical locations; using an assortment of experimental techniques and sample designs. It is therefore reasonable to expect that the residuals derived from a linear statistical model will be explained by a non structured error (e.g. $epsilon \sim N(0,\sigma^2)). Data Used and computational challenges The dataset used as reference was the FIA dataset. It comprises more than 36k plot records. Each with a different spatial coordinate. Analysing the data for spatial effects require to manage a 36k x 36k matrix and the parameter optimization through GLS requires to calculate the inverse as well. Model specification The spatial model proposed follows a classical geostatistical approach. In other words, an empirical variogram was used to estimate a {\em valid} analytical model (Webster and Oliver, 2001). In accordance with the model simulations, the spatial model is as follows Step2: $$ \gamma(h) = nugget + (sill (1 - (\frac{1}{2^{\kappa -1}} \Gamma(\kappa) (\frac{h}{r})^{\kappa} K_\kappa \Big(\frac{h}{r}\Big)$$ Step3: Relating the Semivariogram with the correlation function The variogram of a spatial stochastic process $S(x)$ is the function Step4: new_data.residuals2.hist() plt.title('Residuals of $log(Biomass) \sim log(Spp_{rich})$') ------------------ plt.scatter(new_data.newLon,new_data.residuals2) plt.xlabel('Longitude (meters)') plt.ylabel('Residuals Step5: The best fitted values were Step6: Todo Step8: GLS estimation. It's not possible to do it all in the server or this computer because it requires massive computational capacity. I'll do it with a geographical section or sample. Taken from Step9: Here after the data has been fitted with GLS for all the 36k data. The new results are stored in new_data
Python Code: # Load Biospytial modules and etc. %matplotlib inline import sys sys.path.append('/apps') import django django.setup() import pandas as pd import numpy as np import matplotlib.pylab as plt ## Use the ggplot style plt.style.use('ggplot') ## check the matern import scipy.special as special #def MaternVariogram(h,range_a,nugget=40,sill=100,kappa=0.5): def MaternVariogram(h,sill=1,range_a=100,nugget=40,kappa=0.5): The Matern Variogram of order $\kappa$. $$ \gamma(h) = nugget + (sill (1 - (\farc{1}{2^{\kappa -1}} \Gamma(\kappa) (\frac{h}{r})^{\kappa} K_\kappa \Big(\frac{h}{r}\Big)$$ Let: a = $$ b = $$ K_v = Modified Bessel function of the second kind of real order v #a = np.power(2, 1 - kappa) / special.gamma(kappa) #b = (np.sqrt(2 * kappa) / range_a) * h a = 1 / np.power(2,kappa - 1 ) * special.gamma(kappa) b = (h / float(range_a)) K_v = special.kv(kappa,b) #kh = sigma * a * np.power(b,kappa) * K_v #kh = (sill - nugget) * ( 1 - (a * np.power(b,kappa) * K_v)) kh = nugget + (sill * ( 1 - (a * np.power(b,kappa) * K_v))) kh = np.nan_to_num(kh) return kh cc = MaternVariogram(hx,range_a=100000,sill=100,nugget=300,kappa=0.5) plt.plot(hx,cc,'.') Explanation: Linear Spatial Autocorrelation Model The two methodologies under study (i.e. Meta-analysis and distributed networks) share the assumption that the observations are independent between each other. In other words, if two plots (say p1 and p2 ) are from different studies, the covariance between p1 and p2 is zero. The assumption is reasonable because of the character of both methodologies. Data derived from meta-analysis and distributed networks is composed of experiments measured in different environmental conditions and geographical locations; using an assortment of experimental techniques and sample designs. It is therefore reasonable to expect that the residuals derived from a linear statistical model will be explained by a non structured error (e.g. $epsilon \sim N(0,\sigma^2)). Data Used and computational challenges The dataset used as reference was the FIA dataset. It comprises more than 36k plot records. Each with a different spatial coordinate. Analysing the data for spatial effects require to manage a 36k x 36k matrix and the parameter optimization through GLS requires to calculate the inverse as well. Model specification The spatial model proposed follows a classical geostatistical approach. In other words, an empirical variogram was used to estimate a {\em valid} analytical model (Webster and Oliver, 2001). In accordance with the model simulations, the spatial model is as follows: $$log Biomass = log(Spp Richness) + S_x + \epsilon$$ Where: $$E(log(biomass)) = \beta_0 log(Spp Richness)$ and $Var(y) = [\tau^2 \rho(|x,x’|) + \sigma^2] $$ $\tau$ is a variance parameter of the gaussian isotropic and stationary process S_x with spatial autocorrelation distance function $\rho$ given by: $$\rho (h)=(s-n)\left(1-\exp \left(-{\frac {h^{2}}{r^{2}}}\right)\right)+n1_{{(0,\infty )}}(h)$$ Where $h$ is the distance $|x,x’|$ , $s$, $n$ and $r$ are the parameters for sill, nugget and range. Exploratory analysis To begin with, a linear model using OLS was fitted using a log-log transformation of Biomass and Species Richness as response variable and covariate (respectively). I.e. $ (log(Biomass) | S) = \beta log(Spp Richness) + \epsilon $ A histogram of the residuals shows a symmetric distribution (see figure below). The residuals show no significant spatial trend across latitude or longitude (see figures 2bis and 3bis). 
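As a rough, self-contained illustration of the OLS step described above (not part of the original analysis), the log-log fit could be run with statsmodels; the file path and the logBiomass and logSppN column names are taken from later cells in this notebook, and the no-intercept form mirrors the model written above, so treat the details as assumptions.

import pandas as pd
import statsmodels.formula.api as smf

# Assumes the CSV and columns used later in this notebook.
df = pd.read_csv("../HEC_runs/results/new_data.csv")

# No-intercept log-log fit: log(Biomass) ~ beta * log(Spp richness)
ols_fit = smf.ols("logBiomass ~ logSppN - 1", data=df).fit()
print(ols_fit.summary())

# Residuals to be inspected for spatial structure (histogram, variogram, ...)
residuals = ols_fit.resid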
We decided to follow the principle of model parsimony by not including the spatial coordinates as covariates (fixed effect). Empirical Variogram and model fit The residuals however, show variance dependent on the distance (autocorrelation). An empirical variogram was calculated to account for this effect (see figure below). The variogram was calculated using 50 lag distances of 13.5 km each (derived from dividing the data’s distance matrix range by 50). A Monte Carlo envelope method (blue region) at 0.25 and 0.975 quantiles was calculated to designate the region under the null hypothesis, i.e. with no spatial autocorrelation (Diggle and Ribeiro, 2003). The resulting variogram (orange dots) show a typical spatial autocorrelation pattern, distinct from complete randomness and with an increasing variance as a function of distance. Using this pattern we fitted a gaussian model using non-linear least squares method implemented in Scipy.optimize.curve_fit (Jones et.al., 2017). The results obtained were: Sill 0.341, Range 50318.763, Nugget 0.33 . The resulting function is overlapped in green. Conclusion The model: $log(Biomass) = \beta log(Spp_richness) + \epsilon$ presents non-explicative random effects with spatial autocorrelation. The variogram of its residuals shows a typical pattern for distance dependent heteroscedasticity. In other words, the correlation of distinct data points depends on the distance i.e. ($Cov(p_1,p_2) = \sigma^2 \rho(|p_1 - p_2|^2)$) where $\rho$ is a spatial auto-correlation function (derived from the empirical variogram under a gaussian model assumption). The observations reject the assumptions of the linear model estimator obtained by OLS, those on independence and identically distributed errors. The Generalised Least Square (GLS) estimator would be a better approach for obtaining the linear parameters but more importantly a more reliable variance and consequently more reliable confidence interval. Recommendations and future work The whole dataset unveiled a spatial structure that needs to be accounted for in both studies; distributed plots and independent studies. The GLS estimator is a more robust method for optimising the linear models. The covariance matrix (used in meta-analysis) can be extended to include a spatial effect and derive better estimators and their confidence interval. End of explanation from external_plugins.spystats import tools gx = tools.exponentialVariogram(hx,sill=100,nugget=0,range_a=100000) plt.plot(hx,gx) plt.plot(hx,cc,'.') def gaussianVariogram(h,sill=0,range_a=0,nugget=0): if isinstance(h,np.ndarray): Ih = np.array([1.0 if hx >= 0.0 else 0.0 for hx in h]) else: Ih = 1.0 if h >= 0 else 0.0 #Ih = 1.0 if h >= 0 else 0.0 g_h = ((sill - nugget)*(1 - np.exp(-(h**2 / range_a**2)))) + nugget*Ih return g_h ## Fitting model. 
### Optimizing the empirical values def theoreticalVariogram(model_function,sill,range_a,nugget,kappa=0): if kappa == 0: return lambda x : model_function(x,sill,range_a,nugget) else: return lambda x : model_function(x,sill,range_a,nugget,kappa) Explanation: $$ \gamma(h) = nugget + (sill (1 - (\frac{1}{2^{\kappa -1}} \Gamma(\kappa) (\frac{h}{r})^{\kappa} K_\kappa \Big(\frac{h}{r}\Big)$$ End of explanation from external_plugins.spystats import tools %run ../testvariogram.py ## Remove duplications withoutrep = new_data.drop_duplicates(subset=['newLon','newLat']) print(new_data).shape print(withoutrep).shape new_data = withoutrep Explanation: Relating the Semivariogram with the correlation function The variogram of a spatial stochastic process $S(x)$ is the function: $$ V(x,x') = \frac{1}{2} Var { S(x) - S(x') } $$ Note that : $$V(x,x') = \frac{1}{2} { Var(S(x)) + Var(S(x') - 2Cov(S(x),S(x')) } $$ For the stationary case: $$ 2 V(u) = 2\sigma^2 (1 - \rho(u)) $$ So: $$ \rho(u) = 1 - \frac{V(u)}{\sigma^2} $$ End of explanation ### Read the data first #hx = np.linspace(0,400000,100) #spmodel = theoreticalVariogram(gaussianVariogram,sill,range_a,nugget) #nt = 30 # num iterations thrs_dist = 100000 empirical_semivariance_log_log = "../HEC_runs/results/logbiomas_logsppn_res.csv" filename = "../HEC_runs/results/low_q/data_envelope.csv" #### here put the hec calculated envelope_data = pd.read_csv(filename) emp_var_log_log = pd.read_csv(empirical_semivariance_log_log) gvg = tools.Variogram(new_data,'logBiomass',using_distance_threshold=thrs_dist) gvg.envelope = emp_var_log_log gvg.empirical = emp_var_log_log.variogram gvg.lags = emp_var_log_log.lags emp_var_log_log = emp_var_log_log.dropna() vdata = gvg.envelope.dropna() gvg.plot(refresh=False,legend=False,percentage_trunked=20) plt.title("Semivariogram of residuals $log(Biomass) ~ log(SppR)$") Explanation: new_data.residuals2.hist() plt.title('Residuals of $log(Biomass) \sim log(Spp_{rich})$') ------------------ plt.scatter(new_data.newLon,new_data.residuals2) plt.xlabel('Longitude (meters)') plt.ylabel('Residuals: $log(Biomass) - \hat{Y}$') ------------------- plt.scatter(new_data.newLat,new_data.residuals2) plt.xlabel('Latitude (meters)') plt.ylabel('$Residuals: log(Biomass) - \hat{Y}$') Read the data End of explanation sill = 0.34122564947 range_a = 50318.763452 nugget = 0.329687351696 import matplotlib.pylab as plt hx = np.linspace(0,600000,100) from scipy.optimize import curve_fit s = 0.345 r = 50000.0 nugget = 0.33 kappa = 0.5 init_vals = [s,r,nugget] # for [amp, cen, wid] init_matern = [s,r,nugget,kappa] #bg, covar_gaussian = curve_fit(gaussianVariogram, xdata=emp_var_log_log.lags.values, ydata=emp_var_log_log.variogram.values, p0=init_vals) bg, covar_gaussian = curve_fit(MaternVariogram, xdata=emp_var_log_log.lags.values, ydata=emp_var_log_log.variogram.values, p0=init_matern) #MaternVariogram(h,range_a,nugget=40,sill=100,kappa=0.5) #vdata = gvg.envelope.dropna() ## The best parameters asre: #gau_var = tools.gaussianVariogram(hx,bg[0],bg[1],bg[2]) gau_var = MaternVariogram(hx,bg[0],bg[1],bg[2],bg[3]) sill = bg[0] range_a = bg[1] nugget = bg[2] kappa = bg[3] spmodel = theoreticalVariogram(MaternVariogram,sill,range_a,nugget,kappa) #spmodel = theoreticalVariogram(gaussianVariogram,sill,range_a,nugget) #MaternVariogram(0,sill,range_a,nugget,kappa) results = "Sill %s , range_a %s , nugget %s, kappa %s"%(sill,range_a,nugget,kappa) print(results) #n_points = pd.DataFrame(map(lambda v : v.n_points,variograms2)) #points = n_points.transpose() #ejem2 = 
pd.DataFrame(variogram2.values * points.values) # Chunks (variograms) columns # lag rows #vempchunk2 = ejem2.sum(axis=1) / points.sum(axis=1) #plt.plot(lags,vempchunk2,'--',color='blue',lw=2.0) plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k') thrs_dist = 1000000 gvg.plot(refresh=False,legend=False,percentage_trunked=20) plt.title("Empirical variogram for the residuals: $log(Biomass) \sim log(Spp_{rich}) $ ") plt.plot(hx,spmodel(hx),color='green',lw=2.3) Explanation: The best fitted values were: (Processed by chunks see: http://localhost:8888/notebooks/external_plugins/spystats/notebooks/variogram_envelope_by_chunks.ipynb ) Sill 0.34122564947 Range 50318.763452 Nugget 0.329687351696 End of explanation X = np.linspace(0,600000,50) tvar = spmodel(X) correlation_h = lambda h : 1 - (spmodel(h)) spmodel(0) plt.plot(X,correlation) plt.plot(np.linspace(0,7,1000),special.kv(0.5,np.linspace(0,7,1000))) Explanation: Todo: Fit Matern End of explanation def randomSelection(data,k): n = len(data) idxs = np.random.choice(n,k,replace=True) random_sample = data.iloc[idxs] return random_sample ################# #n = len(new_data) #p = 3000 # The amount of samples taken (let's do it without replacement) def systSelection(data,k): n = len(data) idxs = range(0,n,k) systematic_sample = data.iloc[idxs] return systematic_sample ################## n = len(new_data) k = 10 # The k-th element to take as a sample def subselectDataFrameByCoordinates(dataframe,namecolumnx,namecolumny,minx,maxx,miny,maxy): Returns a subselection by coordinates using the dataframe/ minx = float(minx) maxx = float(maxx) miny = float(miny) maxy = float(maxy) section = dataframe[lambda x: (x[namecolumnx] > minx) & (x[namecolumnx] < maxx) & (x[namecolumny] > miny) & (x[namecolumny] < maxy) ] return section sample = systSelection(new_data,10) sample = randomSelection(new_data,10) minx = -85 maxx = -80 miny = 30 maxy = 35 section = subselectDataFrameByCoordinates(new_data,'LON','LAT',minx,maxx,miny,maxy) vsamp = tools.Variogram(section,'logBiomass') import statsmodels.regression.linear_model as lm Mdist = vsamp.distance_coordinates.flatten() vsamp.plot(num_iterations=1) %time vars = np.array(correlation_h(Mdist)) MMdist = Mdist.reshape(len(section),len(section)) CovMat = vars.reshape(len(section),len(section)) X = section.logSppN.values Y = section.logBiomass.values section.plot() tt = section.geometry tt.plot() plt.imshow(MMdist,interpolation='None',cmap=plt.cm.Blues) plt.imshow(CovMat,interpolation='None',cmap=plt.cm.Blues) %time results_gls = lm.GLS(Y,X,sigma=CovMat) #tt = np.linalg.cholesky(CovMat) #np.linalg.eigvals(CovMat) #CovMat.flatten() #MMdist.flatten() #lm.GLS? modelillo = results_gls.fit() modelillo.summary() Explanation: GLS estimation. It's not possible to do it all in the server or this computer because it requires massive computational capacity. I'll do it with a geographical section or sample. 
Taken from: http://localhost:8888/notebooks/external_plugins/spystats/notebooks/Analysis_spatial_autocorrelation_with_empirical_variogram_using_GLS.ipynb Re fit the $\beta$ Oke, first calculate the distances End of explanation filename = "../HEC_runs/results/new_data.csv" data = pd.read_csv(filename) #### here put the hec calculated data.columns plt.plot(data.logSppN,data.logBiomass,'.') plt.plot(data.logSppN,data.Y_hat) plt.plot(data.SppN,data.plotBiomass,'.') plt.plot(data.SppN,np.exp(data.Y_hat),'.') plt.plot(data.logSppN,data.Y_hat - data.logBiomass,'.') plt.plot(data.logSppN, data.logBiomass,'.') plt.hist(np.exp(data.Y_hat)) plt.plot(data.logBiomass,data.Y_hat,'.') Explanation: Here after the data has been fitted with GLS for all the 36k data. The new results are stored in new_data End of explanation
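For reference, here is a small self-contained sketch (not from the original notebook) of what the GLS estimator is doing under the hood, with a covariance matrix built from a Gaussian variogram of the kind fitted above; all coordinates, data, and parameter values below are made up for illustration.

import numpy as np

# Made-up coordinates and data for a tiny example.
rng = np.random.RandomState(0)
n = 50
coords = rng.uniform(0, 200000, size=(n, 2))  # meters
x = rng.uniform(0.5, 3.0, size=n)             # e.g. log species richness
y = 0.8 * x + rng.normal(scale=0.5, size=n)   # e.g. log biomass

# Pairwise distances and a Gaussian-variogram-based covariance:
# C(h) = (sill - nugget) * exp(-(h/range)^2) off-diagonal, C(0) = sill.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
sill, range_a, nugget = 0.34, 50000.0, 0.33
Sigma = (sill - nugget) * np.exp(-(d / range_a) ** 2) + nugget * np.eye(n)

# Closed-form GLS estimate: beta = (X' Sigma^-1 X)^-1 X' Sigma^-1 y
X = x.reshape(-1, 1)
Si = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
print(beta_gls)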
12,020
Given the following text description, write Python code to implement the functionality described below step by step Description: Convolutional Neural Networks In this notebook, I'll try converting radio images into useful features using a simple convolutional neural network in Keras. The best kinds of CNN to use are apparently fast region-based CNNs, but because computer vision is hard and somewhat off-topic I'll instead be doing this pretty naïvely. The Keras MNIST example will be a good starting point. I'll also use SWIRE to find potential hosts (notebook 13) and pull out radio images surrounding them. I'll be using the frozen ATLAS classifications that I prepared earlier (notebook 12). Step1: Training data The first step is to separate out all the training data. I'm well aware that having too much training data at once will cause Python to run out of memory, so I'll need to figure out how to deal with that when I get to it. For each potential host, I'll pull out a $20 \times 20$, $40 \times 40$, and $80 \times 80$ patch of radio image. These numbers are totally arbitrary but they seem like nice sizes. Note that this will miss really spread out black hole jets. I'm probably fine with that. Step2: Now, I'll run this over the ATLAS data. Step3: Keras doesn't support class weights, so I need to downsample the non-host galaxies. Step4: Convolutional neural network The basic structure will be as follows Step5: Now we can train it! Step6: Let's see some filters. Step7: Good enough. Now, let's save the models. Step8: ...Now, let's test that it saved. Step9: Looks good. Ideally, we train this longer, but I don't have enough time right now. Let's save the training data and move on.
Python Code: import collections import io from pprint import pprint import sqlite3 import sys import warnings import astropy.io.votable import astropy.wcs import matplotlib.pyplot import numpy import requests import requests_cache import sklearn.cross_validation %matplotlib inline sys.path.insert(1, '..') import crowdastro.data import crowdastro.labels import crowdastro.rgz_analysis.consensus import crowdastro.show warnings.simplefilter('ignore', UserWarning) # astropy always raises warnings on Windows. requests_cache.install_cache(cache_name='gator_cache', backend='sqlite', expire_after=None) def get_potential_hosts(subject): if subject['metadata']['source'].startswith('C'): # CDFS catalog = 'chandra_cat_f05' else: # ELAIS-S1 catalog = 'elaiss1_cat_f05' query = { 'catalog': catalog, 'spatial': 'box', 'objstr': '{} {}'.format(*subject['coords']), 'size': '120', 'outfmt': '3', } url = 'http://irsa.ipac.caltech.edu/cgi-bin/Gator/nph-query' r = requests.get(url, params=query) votable = astropy.io.votable.parse_single_table(io.BytesIO(r.content), pedantic=False) ras = votable.array['ra'] decs = votable.array['dec'] # Convert to px. fits = crowdastro.data.get_ir_fits(subject) wcs = astropy.wcs.WCS(fits.header) xs, ys = wcs.all_world2pix(ras, decs, 0) return numpy.array((xs, ys)).T def get_true_hosts(subject, potential_hosts, conn): consensus_xs = [] consensus_ys = [] consensus = crowdastro.labels.get_subject_consensus(subject, conn, 'atlas_classifications') true_hosts = {} # Maps radio signature to (x, y) tuples. for radio, (x, y) in consensus.items(): if x is not None and y is not None: closest = None min_distance = float('inf') for host in potential_hosts: dist = numpy.hypot(x - host[0], y - host[1]) if dist < min_distance: closest = host min_distance = dist true_hosts[radio] = closest return true_hosts Explanation: Convolutional Neural Networks In this notebook, I'll try converting radio images into useful features using a simple convolutional neural network in Keras. The best kinds of CNN to use are apparently fast region-based CNNs, but because computer vision is hard and somewhat off-topic I'll instead be doing this pretty naïvely. The Keras MNIST example will be a good starting point. I'll also use SWIRE to find potential hosts (notebook 13) and pull out radio images surrounding them. I'll be using the frozen ATLAS classifications that I prepared earlier (notebook 12). 
End of explanation subject = crowdastro.data.db.radio_subjects.find_one({'zooniverse_id': 'ARG0003rga'}) crowdastro.show.subject(subject) matplotlib.pyplot.show() crowdastro.show.radio(subject) matplotlib.pyplot.show() potential_hosts = get_potential_hosts(subject) conn = sqlite3.connect('../crowdastro-data/processed.db') true_hosts = {tuple(i) for i in get_true_hosts(subject, potential_hosts, conn).values()} conn.close() xs = [] ys = [] for x, y in true_hosts: xs.append(x) ys.append(y) crowdastro.show.subject(subject) matplotlib.pyplot.scatter(xs, ys, c='r', s=100) matplotlib.pyplot.show() def get_training_data(subject, potential_hosts, true_hosts): radio_image = crowdastro.data.get_radio(subject, size='5x5') training_data = [] radius = 40 padding = 150 for host_x, host_y in potential_hosts: patch_80 = radio_image[int(host_x - radius + padding) : int(host_x + radius + padding), int(host_y - radius + padding) : int(host_y + radius + padding)] classification = (host_x, host_y) in true_hosts training_data.append((patch_80, classification)) return training_data patches, classifications = zip(*get_training_data(subject, potential_hosts, true_hosts)) Explanation: Training data The first step is to separate out all the training data. I'm well aware that having too much training data at once will cause Python to run out of memory, so I'll need to figure out how to deal with that when I get to it. For each potential host, I'll pull out a $20 \times 20$, $40 \times 40$, and $80 \times 80$ patch of radio image. These numbers are totally arbitrary but they seem like nice sizes. Note that this will miss really spread out black hole jets. I'm probably fine with that. End of explanation conn = sqlite3.connect('../crowdastro-data/processed.db') training_inputs = [] training_outputs = [] for index, subject in enumerate(crowdastro.data.get_all_subjects(atlas=True)): print('Extracting training data from ATLAS subject #{}'.format(index)) potential_hosts = get_potential_hosts(subject) true_hosts = {tuple(i) for i in get_true_hosts(subject, potential_hosts, conn).values()} patches, classifications = zip(*get_training_data(subject, potential_hosts, true_hosts)) training_inputs.extend(patches) training_outputs.extend(classifications) conn.close() Explanation: Now, I'll run this over the ATLAS data. End of explanation n_hosts = sum(training_outputs) n_not_hosts = len(training_outputs) - n_hosts n_to_discard = n_not_hosts - n_hosts new_training_inputs = [] new_training_outputs = [] for inp, out in zip(training_inputs, training_outputs): if not out and n_to_discard > 0: n_to_discard -= 1 else: new_training_inputs.append(inp) new_training_outputs.append(out) print(sum(new_training_outputs)) print(len(new_training_outputs)) training_inputs = numpy.array(new_training_inputs) training_outputs = numpy.array(new_training_outputs, dtype=float) Explanation: Keras doesn't support class weights, so I need to downsample the non-host galaxies. 
End of explanation import keras.layers.convolutional import keras.layers.core import keras.models model = keras.models.Sequential() n_filters = 32 conv_size = 10 pool_size = 5 dropout = 0.25 hidden_layer_size = 64 model.add(keras.layers.convolutional.Convolution2D(n_filters, conv_size, conv_size, border_mode='valid', input_shape=(1, 80, 80))) model.add(keras.layers.core.Activation('relu')) model.add(keras.layers.convolutional.MaxPooling2D(pool_size=(pool_size, pool_size))) model.add(keras.layers.convolutional.Convolution2D(n_filters, conv_size, conv_size, border_mode='valid',)) model.add(keras.layers.core.Activation('relu')) model.add(keras.layers.convolutional.MaxPooling2D(pool_size=(pool_size, pool_size))) model.add(keras.layers.core.Dropout(dropout)) model.add(keras.layers.core.Flatten()) model.add(keras.layers.core.Dense(hidden_layer_size)) model.add(keras.layers.core.Activation('sigmoid')) model.add(keras.layers.core.Dense(1)) model.add(keras.layers.core.Activation('sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adadelta') Explanation: Convolutional neural network The basic structure will be as follows: An input layer. A 2D convolution layer with 32 filters and a $10 \times 10$ kernel. (This is the same size kernel that Radio Galaxy Zoo uses for their peak detection.) A relu activation layer. A max pooling layer with pool size 5. A 25% dropout layer. A flatten layer. A dense layer with 64 nodes. A relu activation layer. A dense layer with 1 node. A sigmoid activation layer. I may try to split the input into three images of different sizes in future. End of explanation xs_train, xs_test, ts_train, ts_test = sklearn.cross_validation.train_test_split( training_inputs, training_outputs, test_size=0.1, random_state=0, stratify=training_outputs) image_size = xs_train.shape[1:] xs_train = xs_train.reshape(xs_train.shape[0], 1, image_size[0], image_size[1]) xs_test = xs_test.reshape(xs_test.shape[0], 1, image_size[0], image_size[1]) xs_train.shape model.fit(xs_train, ts_train) Explanation: Now we can train it! End of explanation get_convolutional_output = keras.backend.function([model.layers[0].input], [model.layers[2].get_output()]) model.get_weights()[2].shape figure = matplotlib.pyplot.figure(figsize=(15, 15)) for i in range(32): ax = figure.add_subplot(8, 4, i+1) ax.axis('off') ax.pcolor(model.get_weights()[0][i, 0], cmap='gray') matplotlib.pyplot.show() Explanation: Let's see some filters. End of explanation model_json = model.to_json() with open('../crowdastro-data/cnn_model_2.json', 'w') as f: f.write(model_json) model.save_weights('../crowdastro-data/cnn_weights_2.h5') Explanation: Good enough. Now, let's save the models. End of explanation with open('../crowdastro-data/cnn_model_2.json', 'r') as f: model2 = keras.models.model_from_json(f.read()) model2.load_weights('../crowdastro-data/cnn_weights_2.h5') figure = matplotlib.pyplot.figure(figsize=(15, 15)) for i in range(32): ax = figure.add_subplot(8, 4, i+1) ax.axis('off') ax.pcolor(model2.get_weights()[0][i, 0], cmap='gray') matplotlib.pyplot.show() Explanation: ...Now, let's test that it saved. End of explanation import tables with tables.open_file('../crowdastro-data/atlas_training_data.h5', mode='w', title='ATLAS training data') as f: root = f.root f.create_array(root, 'training_inputs', training_inputs) f.create_array(root, 'training_outputs', training_outputs) Explanation: Looks good. Ideally, we train this longer, but I don't have enough time right now. Let's save the training data and move on. 
End of explanation
12,021
Given the following text description, write Python code to implement the functionality described below step by step Description: Chapter 14 Step1: Sometimes you see double dots at the beginning of the file path; this means 'the parent of the current directory'. When writing a file path, you can use the following Step2: 1.2 Opening a file We can use the file path to tell Python which file to open by using the built-in function open(). The open() function does not return the actual text that is saved in the text file. It returns a 'file object' from which we can read the content using the .read() function (more on this later). We pass three arguments to the open() function Step3: Overview of possible mode arguments (the most important ones are 'r' and 'w') Step4: This TextIOWrapper thing is Python's way of saying it has opened a connection to the file charlie.txt. To actually see its content, we need to tell python to read the file. 1.3 Reading a file Here, we will discuss three ways of reading the contents of a file Step5: 1.3.2 readlines() The readlines() function allows you to access the content of a file as a list of lines. This means, it splits the text in a file at the new lines characters ('\n') for you) Step6: Now you can, for example, use a for-loop to print each line in the file (note that the second line is just a newline character) Step7: Important note When we open a file, we can only use one of the read operations once. If we want to read it again, we have to open a new file variable. Consider the code below Step8: The code returns an empty list. To fix this, we have to open the file again Step9: 1.3.3 Readline() The third operation readline() returns the next line of the file, returning the text up to and including the next newline character (\n, or \r\n on Windows). More simply put, this operation will read a file line-by-line. So if you call this operation again, it will return the next line in the file. Try it out below! Step10: Which function to choose For small files that you want to load entirely, you can use one of these three methods (readline, read, or readlines). Note, however, that we can also simply do the following to read a file line by line (this is recommended for larger files and when we are really only interested in a small portion of the file) Step11: Note the last line of this code snippet Step12: 1.4.2 Using a context manager There is actually an easier (and preferred) way to make sure that the file is closed as soon as you don't need it anymore, namely using what is called a context manager. Instead of using open() and close(), we use the syntax shown below. The main advantage of using the with-statement is that it automatically closes the file once you leave the local context defined by the indentation level. If you 'manually' open and close the file, you risk forgetting to close the file. Therefore, context managers are considered a best-practice, and we will use the with-statement in all of our following code. From now on, we highly recommend using a context manager in your code. Step13: 2 Manipulating file content Once your file content is loaded in a Python variable, you can manipulate its content as you can manipulate any other variable. You can edit it, add/remove lines, count word occurrences, etc. Let's say we read the file content in a list of its lines, as shown below. Note that we can use all of the different methods for reading files in the context manager. 
Step14: Then we can for instance preserve only the first 2 lines of the file, in a new variable Step15: We can count the lines that are longer than 15 characters Step16: We will soon see how to perform text processing once we have loaded the file, by using an external module in the next chapter. But let's first write store the modified text in a new file to preserve the changes. 3 Writing files To write content to a file, we can open a new file and write the text to this file by using the write() method. Again, we can do this by using the context manager. Remember that we have to specify the mode using w. Let's first slightly adapt our Charlie story by replacing the names in the text Step17: We can now save the manipulated content to a new file Step18: Open the file charle_new.txt in the folder ../Data/Charlie in any text editor and read a personalized version of the story! Note about append mode (a) Step19: If we only want to consider text files and ignore everything else (here a file called 'IGNORE_ME!'), we can specify this in our search by only looking for files with the extension .txt Step20: A question mark (?) matches any single character in that position in the name. For example, the following code prints all filenames in the directory ../Data/dreams that start with 'vickie' followed by exactly 1 character and end with the extension .txt (note that this will not print vickie10.txt) Step21: You can also find filenames recursively by using the pattern ** (the keyword argument recursive should be set to True), which will match any files and zero or more directories and subdirectories. The following code prints all files with the extension .txt in the directory ../Data/ and in all its subdirectories Step22: 4.2 The os module Another module that you will frequently see being used in examples is the os module. The os module has many features that can be very useful and which are not supported by the glob module. We will not go over each and every useful method here, but here's a list of some of the things that you can do (some of which we have seen above) Step23: Exercises Exercise 1 Step24: Exercise 2 Step25: Exercise 3 Step26: Exercise 4
Python Code: filename = "../Data/Charlie/charlie.txt" # The double dots mean 'go up one level in the directory tree'. Explanation: Chapter 14: Reading and writing text files We use some materials from this other Python course. In this chapter, you will learn how to read data from files, do some analysis, and write the results to disk. Reading and writing files are quite an essential part of programming, as it is the first step for your program to communicate with the outside world. In most cases, you will write programs that take data from some source, manipulates it in some way, and writes some results out somewhere. For example, if you would write a survey, you could take input from participants on a webserver and save their answers in some files or in a database. When the survey is over, you would read these results in and do some analysis on the data you have collected, maybe do some visualizations and save your results. In Natural Language Processing (NLP), you often process files containing raw texts with some code and write the results to some other file. At the end of this chapter, you will be able to: open one or multiple text files work with the modules os and glob read the contents of a file write new or manipulated content to new (or existing) files close a file If you want to learn more about these topics, you might find the following links useful: Video: File Objects - Reading and Writing to Files Video: Automate Parsing and Renaming of Multiple Files Video: OS Module - Use Underlying Operating System Functionality Blog post: 6 Ways the Linux File System is Different From the Windows File System Blog post: Gotcha — backslashes in Windows filenames If you have questions about this chapter, please contact us ([email protected]). 1. Reading a file In Python, you can read the content of a file, store it as the type of object that you need (string, list, etc.) and manipulate it (e.g., replacing or removing words). You can also write new content to an existing or a new file. Here, we will discuss how to: open a file read in the content store the context in a variable (to do something), e.g., as a string or list close the file 1.1. File paths To open a file, we need to associate the file on disk with a variable in Python. First, we tell Python where the file is stored on your disk. The location of your file is often referred to as the file path. Python will start looking in the 'working' or 'current' directory (which often will be where your Python script is). If it's in the working directory, you only have to tell Python the name of the file (e.g., charlie.txt). If it's not in the working directory, as in our case, you have to tell Python the exact path to your file. We will create a string variable to store this information: End of explanation # For windows: import os windows_file_path = os.path.normpath("C:/somePath/someFilename") # Use forward slashes Explanation: Sometimes you see double dots at the beginning of the file path; this means 'the parent of the current directory'. When writing a file path, you can use the following: / means the root of the current drive; ./ means the current directory; ../ means the parent of the current directory. Consider the directory tree below. If you want to go from your current working directory (cwd) to the one directly above (dir3), your path is ../. 
If you want to go to dir1, your path is ../../ If you want to go to dir5, your path is ../dir5/ If you want to go to dir2, your path is ../../dir2/ You will learn how to navigate your directory tree quite intuitively with a bit of practice. If you have any doubts, it is always a good idea to follow a quick tutorial on basic command-line operations. <img src='images/directory_tree.png'> Navigating your directory tree on Windows Also, note that the formatting of file paths is different across operating systems. The file path, as specified above, should work on any UNIX platform (Linux, Mac). If you are using Windows, however, you might run into problems when formatting file paths in this way outside of this notebook, because Windows uses backslashes instead of forward slashes (Jupyter Notebook should already have taken care of these problems for you). In that case, it might be useful to have a look at this page about the differences between the file systems, and at this page about solving this problem in Python. In short, it's probably best if you use the code below (we will talk about the os module in more detail later today). This is very useful to know if you are a Windows user, and it will become relevant for the final assignment. End of explanation filepath = "../Data/Charlie/charlie.txt" infile = open(filepath, "r") # 'r' stands for READ mode # Do something with the file infile.close() # Close the file (you can ignore this for now) Explanation: 1.2 Opening a file We can use the file path to tell Python which file to open by using the built-in function open(). The open() function does not return the actual text that is saved in the text file. It returns a 'file object' from which we can read the content using the .read() function (more on this later). We pass three arguments to the open() function: the path to the file that you wish to open the mode, a combination of characters explaining the purpose of the file opening (like read or write) and type of content stored in the file (like textual or binary format). For instance, if we are reading a plain text file, we can use the characters 'r' (represents read-mode) and 't' (represents plain text-mode). the last argument, a keyword argument (encoding), specifies the encoding of the text file, but you can forget about this for now. The most important mode arguments the open() function can take are: r = Opens a file for reading only. The file pointer is placed at the beginning of the file. w = Opens a file for writing only. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing. a = Opens a file for appending. The file pointer is at the end of the file if the file exists. If the file does not exist, it creates a new file for writing. 
Use it if you would like to add something to the end of a file Then, to open the file 'charlie.txt' for reading purposes, we use the following: End of explanation infile = open("../Data/Charlie/charlie.txt" , "r") print(infile) infile.close() Explanation: Overview of possible mode arguments (the most important ones are 'r' and 'w'): | Character | Meaning | | --------- | ------- | |'r' | open for reading (default)| |'w' | open for writing, truncating the file first| |'x' | open for exclusive creation, failing if the file already exists| |'a' | open for writing, appending to the end of the file if it exists| |'b' | binary mode| |'t' | text mode (default)| |'+' | open a disk file for updating (reading and writing)| |'U' | universal newlines mode (deprecated)| So far, we have opened the file. This, however, does not yet show us the file content. Try printing 'infile': End of explanation # Opening the file using the filepath and and the 'read' mode: infile = open("../Data/Charlie/charlie.txt" , "r") # Reading the file using the `read()` function and assigning it to the variable `content` content = infile.read() print(content) print() print('This function returns a', type(content)) # closing the file (more on this below) infile.close() Explanation: This TextIOWrapper thing is Python's way of saying it has opened a connection to the file charlie.txt. To actually see its content, we need to tell python to read the file. 1.3 Reading a file Here, we will discuss three ways of reading the contents of a file: read() readlines() readline() 1.3.1 read() The read() method is used to access the entire text in a file, which we can assign to a variable. Consider the code below. The variable content now holds the entire content of the file charlie.txt as a single string, and we can access and manipulate it just like any other string. When we are done with accessing the file, we use the close() method to close the file. End of explanation # Opening the file using the filepath and and the 'read' mode: infile = open("../Data/Charlie/charlie.txt" , "r") # Reading the file using the `read()` function and assigning it to the variable `content` lines = infile.readlines() print(lines) print() print('This function returns a', type(lines)) # closing the file infile.close() Explanation: 1.3.2 readlines() The readlines() function allows you to access the content of a file as a list of lines. This means, it splits the text in a file at the new lines characters ('\n') for you): End of explanation for line in lines: print("LINE:", line) Explanation: Now you can, for example, use a for-loop to print each line in the file (note that the second line is just a newline character): End of explanation infile = open("../Data/Charlie/charlie.txt" , "r") content = infile.read() lines = infile.readlines() print(content) print(lines) infile.close() Explanation: Important note When we open a file, we can only use one of the read operations once. If we want to read it again, we have to open a new file variable. Consider the code below: End of explanation filepath = "../Data/Charlie/charlie.txt" infile = open(filepath , "r") content = infile.read() infile = open(filepath, "r") lines = infile.readlines() print(content) print(lines) infile.close() Explanation: The code returns an empty list. 
To fix this, we have to open the file again: End of explanation filepath = "../Data/Charlie/charlie.txt" infile = open(filepath, "r") next_line = infile.readline() print(next_line) next_line = infile.readline() print(next_line) next_line = infile.readline() print(next_line) infile.close() Explanation: 1.3.3 Readline() The third operation readline() returns the next line of the file, returning the text up to and including the next newline character (\n, or \r\n on Windows). More simply put, this operation will read a file line-by-line. So if you call this operation again, it will return the next line in the file. Try it out below! End of explanation infile = open(filename, "r") for line in infile: print(line) infile.close() Explanation: Which function to choose For small files that you want to load entirely, you can use one of these three methods (readline, read, or readlines). Note, however, that we can also simply do the following to read a file line by line (this is recommended for larger files and when we are really only interested in a small portion of the file): End of explanation filepath = "../Data/Charlie/charlie.txt" # open file infile = open(filepath , "r") # assign content to a varialbe content = infile.read() # close file infile.close() # do whatever you want with the context, e.g. print it: print(content) Explanation: Note the last line of this code snippet: infile.close(). This closes our file, which is a very important operation. This prevents Python from keeping files that are unnecessary anymore still open. In the next subchapter, we will also see a more convenient way to ensure files get closed after their usage. 1.4. Closing the file Here, we will introduce closing a file with the method close() and using a context manager to open and close files. After reading the contents of a file, the TextWrapper no longer needs to be open since we have stored the content as a variable. In fact, it is good practice to close the file as soon as you do not need it anymore. 1.4.1 close() We do this by using the close() method as already shown several times above. End of explanation filepath = "../Data/Charlie/charlie.txt" with open(filepath, "r") as infile: # the file is only open here # get content while file is open content = infile.read() # the context manager took care of closing the file again # we can now work with the content without having to worry about # closing the file print(content) Explanation: 1.4.2 Using a context manager There is actually an easier (and preferred) way to make sure that the file is closed as soon as you don't need it anymore, namely using what is called a context manager. Instead of using open() and close(), we use the syntax shown below. The main advantage of using the with-statement is that it automatically closes the file once you leave the local context defined by the indentation level. If you 'manually' open and close the file, you risk forgetting to close the file. Therefore, context managers are considered a best-practice, and we will use the with-statement in all of our following code. From now on, we highly recommend using a context manager in your code. End of explanation filepath = "../Data/Charlie/charlie.txt" with open(filepath, "r") as infile: lines = infile.readlines() print(lines) Explanation: 2 Manipulating file content Once your file content is loaded in a Python variable, you can manipulate its content as you can manipulate any other variable. You can edit it, add/remove lines, count word occurrences, etc. 
Let's say we read the file content in a list of its lines, as shown below. Note that we can use all of the different methods for reading files in the context manager. End of explanation first_two_lines=lines[:2] first_two_lines Explanation: Then we can for instance preserve only the first 2 lines of the file, in a new variable: End of explanation counter=0 for line in lines: if len(line)>15: counter+=1 print(counter) Explanation: We can count the lines that are longer than 15 characters: End of explanation filepath = "../Data/Charlie/charlie.txt" # read in file and assign content to the variable content with open(filepath, "r") as infile: content = infile.read() # manipulate content your_name = "x y" #type in your name friends_name = "a b" #type in the name of a friend # Replace all instances of Charlie Bucket with your name and save it in new_content new_content = content.replace("Charlie Bucket", your_name) # Replace all instancs of Mr Wonka with your friends name and save it in new_new_content new_new_content = new_content.replace("Mr Wonka", friends_name) Explanation: We will soon see how to perform text processing once we have loaded the file, by using an external module in the next chapter. But let's first write store the modified text in a new file to preserve the changes. 3 Writing files To write content to a file, we can open a new file and write the text to this file by using the write() method. Again, we can do this by using the context manager. Remember that we have to specify the mode using w. Let's first slightly adapt our Charlie story by replacing the names in the text: End of explanation filename = "../Data/Charlie/charlie_new.txt" with open(filename, "w") as outfile: outfile.write(new_new_content) Explanation: We can now save the manipulated content to a new file: End of explanation import glob for filename in glob.glob("../Data/Dreams/*"): print(filename) Explanation: Open the file charle_new.txt in the folder ../Data/Charlie in any text editor and read a personalized version of the story! Note about append mode (a): The third mode of opening a file is append ('a'). If the file 'charlie_new.txt' does not exist, then append and write act the same: they create this new file and fill it with content. The difference between write and append occurs when this file would exist. In that case, the write mode overwrites its content, while the append mode adds the new content at the end of the existing one. 4 Reading and writing multiple files You will often have multiple files to work with. The folder ../Data/Dreams contains 10 text files describing dreams of Vickie, a 10-year-old girl. These texts are extracted from DreamBank. To process multiple files, we often want to iterate over a list of files. These files are usually stored in one or multiple directories on your computer. Instead of writing out every single file path, it is much more convenient to iterate over all the files in the directory ../Data/Dreams. So we need to find a way to tell Python: "I want to do something with all these files at this location!" There are two modules which make dealing with multiple files a lot easier. glob os We will introduce them below. 4.1 The glob module The glob module is very useful to find all the pathnames matching a specified pattern according to the rules used by the Unix shell. You can use two wildcards: the asterisk (*) and the question mark (?). 
An asterisk matches zero or more characters in a segment of a name, while the question mark matches a single character in a segment of a name. For example, the following code gives all filenames in the directory ../Data/Dreams: End of explanation for filename in glob.glob("../Data/Dreams/*.txt"): print(filename) Explanation: If we only want to consider text files and ignore everything else (here a file called 'IGNORE_ME!'), we can specify this in our search by only looking for files with the extension .txt: End of explanation for filename in glob.glob("../Data/Dreams/vickie?.txt"): print(filename) Explanation: A question mark (?) matches any single character in that position in the name. For example, the following code prints all filenames in the directory ../Data/Dreams that start with 'vickie' followed by exactly 1 character and end with the extension .txt (note that this will not print vickie10.txt): End of explanation for filename in glob.glob("../Data/**/*.txt", recursive=True): print(filename) Explanation: You can also find filenames recursively by using the pattern ** (the keyword argument recursive should be set to True), which will match any files and zero or more directories and subdirectories. The following code prints all files with the extension .txt in the directory ../Data/ and in all its subdirectories: End of explanation # Start by importing the module: import os # let's use a filepath for testing it out: filepath = "../Data/Charlie/charlie.txt" os.path.basename(filepath) Explanation: 4.2 The os module Another module that you will frequently see being used in examples is the os module. The os module has many features that can be very useful and which are not supported by the glob module. We will not go over each and every useful method here, but here's a list of some of the things that you can do (some of which we have seen above): creating single or multiple directories: os.mkdir(), os.makedirs(); removing single or multiple directories: os.rmdir(), os.removedirs(); checking whether something is a file or a directory: os.path.isfile(), os.path.isdir(); split a path and return a tuple containing the directory and filename: os.path.split(); construct a pathname out of one or more partial pathnames: os.path.join(); split a filename and return a tuple containing the filename and the file extension: os.path.splitext(); get only the basename or the directory path: os.path.basename(), os.path.dirname(). Feel free to play around with these methods and figure out how they work yourself :-) End of explanation # your code here Explanation: Exercises Exercise 1: Write a program that opens RedCircle.txt in the ../Data/RedCircle folder and prints its content as a single string: End of explanation # your code here Explanation: Exercise 2: Write a program that opens RedCircle.txt in the ../Data/RedCircle folder and prints a list containing all lines in the file: End of explanation # your code here Explanation: Exercise 3: Create a counter dictionary like in block 2 (the dictionaries chapter), where you will count the number of occurrences of each word in a file. End of explanation # your code here Explanation: Exercise 4: The module os implements functions that allow us to work with the operating system (see folder contents, change directory, etc.). Use the function listdir from the module os to see the contents of the current directory. Then print all the items that do not start with a dot. End of explanation
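As an illustration of Exercise 3 above, here is one possible minimal sketch of a word-occurrence counter. It is not the course's official solution; it assumes the Charlie text at ../Data/Charlie/charlie.txt used throughout this chapter and only strips a few common punctuation characters.

filepath = "../Data/Charlie/charlie.txt"
word_counts = {}  # maps each word to the number of times it occurs
with open(filepath, "r") as infile:
    content = infile.read()
for word in content.split():
    # normalise case and strip simple punctuation before counting
    cleaned = word.strip('.,!?;:"\'()').lower()
    if cleaned:
        word_counts[cleaned] = word_counts.get(cleaned, 0) + 1
print(word_counts)

The same pattern extends to the Dreams files by looping over glob.glob("../Data/Dreams/*.txt") and updating the same dictionary for each file.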
12,022
Given the following text description, write Python code to implement the functionality described below step by step Description: Compute source power using DICS beamformer Compute a Dynamic Imaging of Coherent Sources (DICS) [1]_ filter from single-trial activity to estimate source power across a frequency band. This example demonstrates how to source localize the event-related synchronization (ERS) of beta band activity in the "somato" dataset. References .. [1] Gross et al. Dynamic imaging of coherent sources: Studying neural interactions in the human brain. PNAS (2001) vol. 98 (2) pp. 694-699 Step1: Reading the raw data and creating epochs Step2: We are interested in the beta band. Define a range of frequencies, using a log scale, from 12 to 30 Hz. Step3: Computing the cross-spectral density matrix for the beta frequency band, for different time intervals. We use a decim value of 20 to speed up the computation in this example at the loss of accuracy. Step4: Computing DICS spatial filters using the CSD that was computed on the entire timecourse. Step5: Applying DICS spatial filters separately to the CSD computed using the baseline and the CSD computed during the ERS activity. Step6: Visualizing source power during ERS activity relative to the baseline power.
Python Code: # Author: Marijn van Vliet <[email protected]> # Roman Goj <[email protected]> # Denis Engemann <[email protected]> # # License: BSD (3-clause) import numpy as np import mne from mne.datasets import somato from mne.time_frequency import csd_morlet from mne.beamformer import make_dics, apply_dics_csd print(__doc__) Explanation: Compute source power using DICS beamfomer Compute a Dynamic Imaging of Coherent Sources (DICS) [1]_ filter from single-trial activity to estimate source power across a frequency band. This example demonstrates how to source localize the event-related synchronization (ERS) of beta band activity in the "somato" dataset. References .. [1] Gross et al. Dynamic imaging of coherent sources: Studying neural interactions in the human brain. PNAS (2001) vol. 98 (2) pp. 694-699 End of explanation data_path = somato.data_path() raw_fname = data_path + '/MEG/somato/sef_raw_sss.fif' fname_fwd = data_path + '/MEG/somato/somato-meg-oct-6-fwd.fif' subjects_dir = data_path + '/subjects' raw = mne.io.read_raw_fif(raw_fname) # Set picks, use a single sensor type picks = mne.pick_types(raw.info, meg='grad', exclude='bads') # Read epochs events = mne.find_events(raw) epochs = mne.Epochs(raw, events, event_id=1, tmin=-1.5, tmax=2, picks=picks, preload=True) # Read forward operator fwd = mne.read_forward_solution(fname_fwd) Explanation: Reading the raw data and creating epochs: End of explanation freqs = np.logspace(np.log10(12), np.log10(30), 9) Explanation: We are interested in the beta band. Define a range of frequencies, using a log scale, from 12 to 30 Hz. End of explanation csd = csd_morlet(epochs, freqs, tmin=-1, tmax=1.5, decim=20) csd_baseline = csd_morlet(epochs, freqs, tmin=-1, tmax=0, decim=20) # ERS activity starts at 0.5 seconds after stimulus onset csd_ers = csd_morlet(epochs, freqs, tmin=0.5, tmax=1.5, decim=20) Explanation: Computing the cross-spectral density matrix for the beta frequency band, for different time intervals. We use a decim value of 20 to speed up the computation in this example at the loss of accuracy. End of explanation filters = make_dics(epochs.info, fwd, csd.mean(), pick_ori='max-power') Explanation: Computing DICS spatial filters using the CSD that was computed on the entire timecourse. End of explanation baseline_source_power, freqs = apply_dics_csd(csd_baseline.mean(), filters) beta_source_power, freqs = apply_dics_csd(csd_ers.mean(), filters) Explanation: Applying DICS spatial filters separately to the CSD computed using the baseline and the CSD computed during the ERS activity. End of explanation stc = beta_source_power / baseline_source_power message = 'DICS source power in the 12-30 Hz frequency band' brain = stc.plot(hemi='both', views='par', subjects_dir=subjects_dir, time_label=message) Explanation: Visualizing source power during ERS activity relative to the baseline power. End of explanation
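As a side note on the cross-spectral density (CSD) used above: the example relies on MNE's csd_morlet, but the basic quantity can be illustrated with plain SciPy on synthetic signals. The sketch below is only a conceptual illustration (Welch-style CSD via scipy.signal.csd with an assumed sampling rate), not a substitute for the wavelet-based CSD in the example.

import numpy as np
from scipy.signal import csd

fs = 600.0                                   # assumed sampling rate for the synthetic data
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
common = np.sin(2 * np.pi * 20 * t)          # shared 20 Hz (beta band) component
x = common + 0.5 * rng.standard_normal(t.size)
y = 0.8 * common + 0.5 * rng.standard_normal(t.size)
freqs, Pxy = csd(x, y, fs=fs, nperseg=1024)
beta = (freqs >= 12) & (freqs <= 30)
print("peak cross-spectral magnitude in 12-30 Hz:", np.abs(Pxy[beta]).max())

A large cross-spectral magnitude in the band indicates coherent activity between the two signals, which is what the DICS filter exploits when estimating source power.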
12,023
Given the following text description, write Python code to implement the functionality described below step by step Description: AIA Response Function Tests Step1: The goal of this notebook is to test the wavelength and temperature response function calculations that are currently being developed in SunPy. Wavelength Response Functions First, we'll calculate the wavelength response functions for 6 of the 7 AIA EUV channels Step2: Contribution Functions, $G(n,T)$ Next, we'll calculate the contribution functions for a couple of ions, hopefully ones that are relatively important to each channel. According to the AIA LMSAL webpage, | Channel ($\mathrm{\mathring{A}}$) | Primary Ions | Characteristic Temperature, $\log{T}$ (K) | | Step3: Now, make a list of all the ions that we care about so that we can easily iterate through them. Step4: Finally, iterate through all of the ions and store the contribution function and associated information. Step5: Calculating Temperature Response Functions From Boerner et al. (2012), the temperature response function $K_i(T)$ is given by $$ K_i(T)=\int_0^{\infty}\mathrm{d}\lambda\,G(\lambda,T)R_i(\lambda) $$ First, we need to reshape the contribution functions for our discrete number of ions into $G(\lambda,T)$ such that each column of $G$ is $G_{\lambda}(T)$. Then we can interpolate each $R_i$ over that discrete number of wavelengths. Step6: Finally, try to plot all of the temperature response functions.
Python Code: import os import sys import pickle import numpy as np import scipy import matplotlib.pyplot as plt import ChiantiPy.core as ch import sunpy.instr.aia as aia %matplotlib inline Explanation: AIA Response Function Tests End of explanation response = aia.Response(path_to_genx_dir='../ssw_aia_response_data/') response.calculate_wavelength_response() response.peek_wavelength_response() Explanation: The goal of this notebook is to test the wavelength and temperature response function calculations that are currently being developed in SunPy. Wavelength Response Functions First, we'll calculate the wavelength response functions for 6 of the 7 AIA EUV channels: 171, 193, 131, 335, 211, and 94 $\mathrm{\mathring{A}}$. End of explanation temperature = np.logspace(5.,8.,50) density = 1.e+9 Explanation: Contribution Functions, $G(n,T)$ Next, we'll calculate the contribution functions for a couple of ions, hopefully ones that are relatively important to each channel. According to the AIA LMSAL webpage, | Channel ($\mathrm{\mathring{A}}$) | Primary Ions | Characteristic Temperature, $\log{T}$ (K) | |:-------:|:--------------:|:------------------------------:| | 94 | Fe XVII | 6.8 | | 131 | Fe VIII, XX, XXIII | 5.6, 7.0, 7.2 | | 171 | Fe IX | 5.8 | | 193 | Fe XII, XXIV | 6.1, 7.3 | | 211 | Fe XIV | 6.3 | | 335 | Fe XVI | 6.4 | First, choose a temperature range and constant density. End of explanation ions = ['fe_8','fe_9','fe_12','fe_14','fe_16','fe_17','fe_20','fe_23','fe_24'] search_interval = np.array([-2.5,2.5]) ion_wvl_ranges = [c+search_interval for c in [131.,171.,193.,211.,335.,94.,131.,131.,193.]] Explanation: Now, make a list of all the ions that we care about so that we can easily iterate through them. End of explanation #warning! This takes a long time! contribution_fns = {} for i,iwr in zip(ions,ion_wvl_ranges): tmp_ion = ch.ion(i,temperature=temperature,eDensity=density,em=1.e+27) tmp_ion.gofnt(wvlRange=[iwr[0],iwr[1]],top=3,plot=False) plt.show() contribution_fns[i] = tmp_ion.Gofnt Explanation: Finally, iterate through all of the ions and store the contribution function and associated information. End of explanation sorted_g = sorted([g[1] for g in contribution_fns.items()],key=lambda x: x['wvl']) g_matrix = np.vstack((g['gofnt'] for g in sorted_g)).T discrete_wavelengths = np.array([g['wvl'] for g in sorted_g]) for key in wavelength_response_fns: wavelength_response_fns[key]['wavelength_interpolated'] = discrete_wavelengths[:,0] wavelength_response_fns[key]['response_interpolated'] = np.interp(discrete_wavelengths, wavelength_response_fns[key]['wavelength'], wavelength_response_fns[key]['response'])[:,0] temperature_response = {} for key in wavelength_response_fns: g_times_r = g_matrix*wavelength_response_fns[key]['response_interpolated'] temperature_response[key] = np.trapz(g_times_r, wavelength_response_fns[key]['wavelength_interpolated']) Explanation: Calculating Temperature Response Functions From Boerner et al. (2012), the temperature response function $K_i(T)$ is given by $$ K_i(T)=\int_0^{\infty}\mathrm{d}\lambda\,G(\lambda,T)R_i(\lambda) $$ First, we need to reshape the contribution functions for our discrete number of ions into $G(\lambda,T)$ such that each column of $G$ is $G_{\lambda}(T)$. Then we can interpolate each $R_i$ over that discrete number of wavelengths. 
End of explanation fig = plt.figure(figsize=(10,10)) ax = fig.gca() for tresp in temperature_response.items(): ax.plot(temperature,tresp[1],label=str(tresp[0]),color=sns.xkcd_rgb[channel_colors[tresp[0]]]) ax.set_ylim([1e-28,1e-22]) ax.set_xscale('log') ax.set_yscale('log') ax.set_xlabel(r'$T$ (K)') ax.set_ylabel(r'$K_i(T)$') ax.legend(loc='best',title=r'Channel ($\mathrm{\mathring{A}}$)') Explanation: Finally, try to plot all of the temperature response functions. End of explanation
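To make the discretized temperature response integral above concrete, here is a small self-contained sketch with synthetic arrays. It only illustrates the numerical step K_i(T) = trapz(G(lambda, T) * R_i(lambda), lambda); the real calculation uses the CHIANTI contribution functions and the instrument wavelength response interpolated onto the same wavelength grid, as in the code above.

import numpy as np

n_temp, n_wave = 50, 9
temperature = np.logspace(5.0, 8.0, n_temp)
wavelengths = np.linspace(90.0, 340.0, n_wave)     # synthetic wavelength grid in angstroms
# synthetic stand-ins: G has shape (n_temp, n_wave), R has shape (n_wave,)
G = np.exp(-(np.log10(temperature)[:, None] - 6.3) ** 2) * np.ones(n_wave)
R = np.exp(-(wavelengths - 193.0) ** 2 / (2 * 30.0 ** 2))
K = np.trapz(G * R, wavelengths, axis=1)           # one response value per temperature
print(K.shape)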
12,024
Given the following text description, write Python code to implement the functionality described below step by step Description: Mie Performance and Jitting Scott Prahl Apr 2021 If miepython is not installed, uncomment the following cell (i.e., delete the #) and run (shift-enter) Step1: Size Parameters We will use %timeit to see speeds for unjitted code, then jitted code Step2: Embedded spheres Step3: Testing ez_mie Another high-level function that should be sped up by jitting. Step4: Scattering Phase Function Step5: And finally, as a function of sphere size
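Before the reference code below, here is a minimal sketch of the jit versus no-jit timing comparison the description asks for. It assumes the two module paths used in the reference solution (miepython.miepython and miepython.miepython_nojit) and the mie(m, x) call returning qext, qsca, qback, g; treat it as an illustration rather than the benchmark itself.

import timeit
import numpy as np
import miepython.miepython as mie_jit
import miepython.miepython_nojit as mie_nojit

m = 1.5
x = np.linspace(0.1, 20, 1000)
t_jit = timeit.timeit(lambda: mie_jit.mie(m, x), number=3)      # numba-jitted version
t_nojit = timeit.timeit(lambda: mie_nojit.mie(m, x), number=3)  # non-jitted version
print("approximate speed-up from jitting: %.1fx" % (t_nojit / t_jit))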
Python Code: #!pip install --user miepython import numpy as np import matplotlib.pyplot as plt try: import miepython.miepython as miepython_jit import miepython.miepython_nojit as miepython except ModuleNotFoundError: print('miepython not installed. To install, uncomment and run the cell above.') print('Once installation is successful, rerun this cell again.') Explanation: Mie Performance and Jitting Scott Prahl Apr 2021 If miepython is not installed, uncomment the following cell (i.e., delete the #) and run (shift-enter) End of explanation ntests=6 m=1.5 N = np.logspace(0,3,ntests,dtype=int) result = np.zeros(ntests) resultj = np.zeros(ntests) for i in range(ntests): x = np.linspace(0.1,20,N[i]) a = %timeit -o qext, qsca, qback, g = miepython.mie(m,x) result[i]=a.best for i in range(ntests): x = np.linspace(0.1,20,N[i]) a = %timeit -o qext, qsca, qback, g = miepython_jit.mie(m,x) resultj[i]=a.best improvement = result/resultj plt.loglog(N,resultj,':r') plt.loglog(N,result,':b') plt.loglog(N,resultj,'or',label='jit') plt.loglog(N,result,'ob', label='no jit') plt.legend() plt.xlabel("Number of sphere sizes calculated") plt.ylabel("Execution Time") plt.title("Jit improvement is %d to %dX"%(np.min(improvement),np.max(improvement))) plt.show() Explanation: Size Parameters We will use %timeit to see speeds for unjitted code, then jitted code End of explanation ntests = 6 mwater = 4/3 # rough approximation m=1.0 mm = m/mwater r=500 # nm N = np.logspace(0,3,ntests,dtype=int) result = np.zeros(ntests) resultj = np.zeros(ntests) for i in range(ntests): lambda0 = np.linspace(300,800,N[i]) # also in nm xx = 2*np.pi*r*mwater/lambda0 a = %timeit -o qext, qsca, qback, g = miepython.mie(mm,xx) result[i]=a.best for i in range(ntests): lambda0 = np.linspace(300,800,N[i]) # also in nm xx = 2*np.pi*r*mwater/lambda0 a = %timeit -o qext, qsca, qback, g = miepython_jit.mie(mm,xx) resultj[i]=a.best improvement = result/resultj plt.loglog(N,resultj,':r') plt.loglog(N,result,':b') plt.loglog(N,resultj,'or',label='jit') plt.loglog(N,result,'ob', label='no jit') plt.legend() plt.xlabel("Number of Wavelengths Calculated") plt.ylabel("Execution Time") plt.title("Jit improvement is %d to %dX"%(np.min(improvement),np.max(improvement))) plt.show() Explanation: Embedded spheres End of explanation ntests=6 m_sphere = 1.0 n_water = 4/3 d = 1000 # nm N = np.logspace(0,3,ntests,dtype=int) result = np.zeros(ntests) resultj = np.zeros(ntests) for i in range(ntests): lambda0 = np.linspace(300,800,N[i]) # also in nm a = %timeit -o qext, qsca, qback, g = miepython.ez_mie(m_sphere, d, lambda0, n_water) result[i]=a.best for i in range(ntests): lambda0 = np.linspace(300,800,N[i]) # also in nm a = %timeit -o qext, qsca, qback, g = miepython_jit.ez_mie(m_sphere, d, lambda0, n_water) resultj[i]=a.best improvement = result/resultj plt.loglog(N,resultj,':r') plt.loglog(N,result,':b') plt.loglog(N,resultj,'or',label='jit') plt.loglog(N,result,'ob', label='no jit') plt.legend() plt.xlabel("Number of Wavelengths Calculated") plt.ylabel("Execution Time") plt.title("Jit improvement is %d to %dX"%(np.min(improvement),np.max(improvement))) plt.show() Explanation: Testing ez_mie Another high level function that should be sped up by jitting. 
End of explanation ntests = 6 m = 1.5 x = np.pi/3 N = np.logspace(0,3,ntests,dtype=int) result = np.zeros(ntests) resultj = np.zeros(ntests) for i in range(ntests): theta = np.linspace(-180,180,N[i]) mu = np.cos(theta/180*np.pi) a = %timeit -o s1, s2 = miepython.mie_S1_S2(m,x,mu) result[i]=a.best for i in range(ntests): theta = np.linspace(-180,180,N[i]) mu = np.cos(theta/180*np.pi) a = %timeit -o s1, s2 = miepython_jit.mie_S1_S2(m,x,mu) resultj[i]=a.best improvement = result/resultj plt.loglog(N,resultj,':r') plt.loglog(N,result,':b') plt.loglog(N,resultj,'or',label='jit') plt.loglog(N,result,'ob', label='no jit') plt.legend() plt.xlabel("Number of Angles Calculated") plt.ylabel("Execution Time") plt.title("Jit improvement is %d to %dX"%(np.min(improvement),np.max(improvement))) plt.show() Explanation: Scattering Phase Function End of explanation ntests=6 m = 1.5-0.1j x = np.logspace(0,3,ntests) result = np.zeros(ntests) resultj = np.zeros(ntests) theta = np.linspace(-180,180) mu = np.cos(theta/180*np.pi) for i in range(ntests): a = %timeit -o s1, s2 = miepython.mie_S1_S2(m,x[i],mu) result[i]=a.best for i in range(ntests): a = %timeit -o s1, s2 = miepython_jit.mie_S1_S2(m,x[i],mu) resultj[i]=a.best improvement = result/resultj plt.loglog(N,resultj,':r') plt.loglog(N,result,':b') plt.loglog(N,resultj,'or',label='jit') plt.loglog(N,result,'ob', label='no jit') plt.legend() plt.xlabel("Sphere Size Parameter") plt.ylabel("Execution Time") plt.title("Jit improvement is %d to %dX"%(np.min(improvement),np.max(improvement))) plt.show() Explanation: And finally, as function of sphere size End of explanation
12,025
Given the following text description, write Python code to implement the functionality described below step by step Description: Using Variational Equations With the Chain Rule For a complete introduction to variational equations, please read the paper by Rein and Tamayo (2016). Variational equations can be used to calculate derivatives in an $N$-body simulation. More specifically, given a set of initial conditions $\alpha_i$ and a set of variables at the end of the simulation $v_k$, we can calculate all first order derivatives $$\frac{\partial v_k}{\partial \alpha_i}$$ as well as all second order derivates $$\frac{\partial^2 v_k}{\partial \alpha_i\partial \alpha_j}$$ For this tutorial, we work with a two planet system. We first chose the semi-major axis $a$ of the outer planet as an initial condition (this is our $\alpha_i$). At the end of the simulation we output the velocity of the star in the $x$ direction (this is our $v_k$). To do that, let us first import REBOUND and numpy. Step1: The following function takes $a$ as a parameter, then integrates the two planet system and returns the velocity of the star at the end of the simulation. Step2: If we run the simulation again, with a different initial $a$, we get a different velocity Step3: We could now run many different simulations to map out the parameter space. This is a very simple examlpe of a typical use case Step4: Note the two new functions. sim.add_variation() adds a set of variational particles to the simulation. All variational particles are by default initialized to zero. We use the vary() function to initialize them to a variation that we are interested in. Here, we initialize the variational particles corresponding to a change in the semi-major axis, $a$, of the particle with index 2 (the outer planet). Step5: We can use the derivative to construct a Taylor series expansion of the velocity around $a_0=1.5$ Step6: Compare this value with the explicitly calculate one above. They are almost the same! But we can do even better, by using second order variational equations to calculate second order derivatives. Step7: Using a Taylor series expansion to second order gives a better estimate of v(1.51). Step8: Now that we know how to calculate first and second order derivates of positions and velocities of particles, we can simply use the chain rule to calculate more complicated derivates. For example, instead of the velocity $v_x$, you might be interested in the quanity $w\equiv(v_x - c)^2$ where $c$ is a constant. This is something that typically appears in a $\chi^2$ fit. The chain rule gives us Step9: Similarly, you can also use the chain rule to vary initial conditions of particles in a way that is not supported by REBOUND by default. For example, suppose you want to work in some fancy coordinate system, using $h\equiv e\sin(\omega)$ and $k\equiv e \cos(\omega)$ variables instead of $e$ and $\omega$. You might want to do that because $h$ and $k$ variables are often better behaved near $e\sim0$. In that case the chain rule gives us
Python Code: import rebound import numpy as np Explanation: Using Variational Equations With the Chain Rule For a complete introduction to variational equations, please read the paper by Rein and Tamayo (2016). Variational equations can be used to calculate derivatives in an $N$-body simulation. More specifically, given a set of initial conditions $\alpha_i$ and a set of variables at the end of the simulation $v_k$, we can calculate all first order derivatives $$\frac{\partial v_k}{\partial \alpha_i}$$ as well as all second order derivates $$\frac{\partial^2 v_k}{\partial \alpha_i\partial \alpha_j}$$ For this tutorial, we work with a two planet system. We first chose the semi-major axis $a$ of the outer planet as an initial condition (this is our $\alpha_i$). At the end of the simulation we output the velocity of the star in the $x$ direction (this is our $v_k$). To do that, let us first import REBOUND and numpy. End of explanation def calculate_vx(a): sim = rebound.Simulation() sim.add(m=1.) # star sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet sim.add(primary=sim.particles[0],m=1e-3, a=a) # outer planet sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits return sim.particles[0].vx # return star's velocity in the x direction calculate_vx(a=1.5) # initial semi-major axis of the outer planet is 1.5 Explanation: The following function takes $a$ as a parameter, then integrates the two planet system and returns the velocity of the star at the end of the simulation. End of explanation calculate_vx(a=1.51) # initial semi-major axis of the outer planet is 1.51 Explanation: If we run the simulation again, with a different initial $a$, we get a different velocity: End of explanation def calculate_vx_derivative(a): sim = rebound.Simulation() sim.add(m=1.) # star sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet sim.add(primary=sim.particles[0],m=1e-3, a=a) # outer planet v1 = sim.add_variation() # add a set of variational particles v1.vary(2,"a") # initialize the variational particles sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits return sim.particles[0].vx, v1.particles[0].vx # return star's velocity and its derivative Explanation: We could now run many different simulations to map out the parameter space. This is a very simple examlpe of a typical use case: the fitting of a radial velocity datapoint. However, we can be smarter than simple running an almost identical simulation over and over again by using variational equations. These will allow us to calculate the derivate of the stellar velocity at the end of the simulation. We can take derivative with respect to any of the initial conditions, i.e. a particles's mass, semi-major axis, x-coordinate, etc. Here, we want to take the derivative with respect to the semi-major axis of the outer planet. The following function does exactly that: End of explanation calculate_vx_derivative(a=1.5) Explanation: Note the two new functions. sim.add_variation() adds a set of variational particles to the simulation. All variational particles are by default initialized to zero. We use the vary() function to initialize them to a variation that we are interested in. Here, we initialize the variational particles corresponding to a change in the semi-major axis, $a$, of the particle with index 2 (the outer planet). 
End of explanation a0=1.5 va0, dva0 = calculate_vx_derivative(a=a0) def v(a): return va0 + (a-a0)*dva0 print(v(1.51)) Explanation: We can use the derivative to construct a Taylor series expansion of the velocity around $a_0=1.5$: $$v(a) \approx v(a_0) + (a-a_0) \frac{\partial v}{\partial a}$$ End of explanation def calculate_vx_derivative_2ndorder(a): sim = rebound.Simulation() sim.add(m=1.) # star sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet sim.add(primary=sim.particles[0],m=1e-3, a=a) # outer planet v1 = sim.add_variation() v1.vary(2,"a") # The following lines add and initialize second order variational particles v2 = sim.add_variation(order=2, first_order=v1) v2.vary(2,"a") sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits # return star's velocity and its first and second derivatives return sim.particles[0].vx, v1.particles[0].vx, v2.particles[0].vx Explanation: Compare this value with the explicitly calculate one above. They are almost the same! But we can do even better, by using second order variational equations to calculate second order derivatives. End of explanation a0=1.5 va0, dva0, ddva0 = calculate_vx_derivative_2ndorder(a=a0) def v(a): return va0 + (a-a0)*dva0 + 0.5*(a-a0)**2*ddva0 print(v(1.51)) Explanation: Using a Taylor series expansion to second order gives a better estimate of v(1.51). End of explanation def calculate_w_derivative(a): sim = rebound.Simulation() sim.add(m=1.) # star sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet sim.add(primary=sim.particles[0],m=1e-3, a=a) # outer planet v1 = sim.add_variation() # add a set of variational particles v1.vary(2,"a") # initialize the variational particles sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits c = 1.02 # some constant w = (sim.particles[0].vx-c)**2 dwda = 2.*v1.particles[0].vx * (sim.particles[0].vx-c) return w, dwda # return w and its derivative calculate_w_derivative(1.5) Explanation: Now that we know how to calculate first and second order derivates of positions and velocities of particles, we can simply use the chain rule to calculate more complicated derivates. For example, instead of the velocity $v_x$, you might be interested in the quanity $w\equiv(v_x - c)^2$ where $c$ is a constant. This is something that typically appears in a $\chi^2$ fit. The chain rule gives us: $$ \frac{\partial w}{\partial a} = 2 \cdot (v_x-c)\cdot \frac{\partial v_x}{\partial a}$$ The variational equations provide the $\frac{\partial v_x}{\partial a}$ part, the ordinary particles provide $v_x$. End of explanation def calculate_vx_derivative_h(): h, k = 0.1, 0.2 e = np.sqrt(h**2+k**2) omega = np.arctan2(k,h) sim = rebound.Simulation() sim.add(m=1.) # star sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet sim.add(primary=sim.particles[0],m=1e-3, a=1.5, e=e, omega=omega) # outer planet v1 = sim.add_variation() dpde = rebound.Particle(simulation=sim, particle=sim.particles[2], variation="e") dpdomega = rebound.Particle(simulation=sim, particle=sim.particles[2], m=1e-3, a=1.5, e=e, omega=omega, variation="omega") v1.particles[2] = h/e * dpde - k/(e*e) * dpdomega sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits # return star's velocity and its first derivatives return sim.particles[0].vx, v1.particles[0].vx calculate_vx_derivative_h() Explanation: Similarly, you can also use the chain rule to vary initial conditions of particles in a way that is not supported by REBOUND by default. 
For example, suppose you want to work in some fancy coordinate system, using $h\equiv e\sin(\omega)$ and $k\equiv e \cos(\omega)$ variables instead of $e$ and $\omega$. You might want to do that because $h$ and $k$ variables are often better behaved near $e\sim0$. In that case the chain rule gives us: $$\frac{\partial p(e(h, k), \omega(h, k))}{\partial h} = \frac{\partial p}{\partial e}\frac{\partial e}{\partial h} + \frac{\partial p}{\partial \omega}\frac{\partial \omega}{\partial h}$$ where $p$ is any of the particles initial coordinates. In our case the derivates of $e$ and $\omega$ with respect to $h$ are: $$\frac{\partial \omega}{\partial h} = -\frac{k}{e^2}\quad\text{and}\quad \frac{\partial e}{\partial h} = \frac{h}{e}$$ With REBOUND, you can easily implement this. The following function calculates the derivate of the star's velocity with respect to the outer planet's $h$ variable. End of explanation
12,026
Given the following text description, write Python code to implement the functionality described below step by step Description: Q 看一下 mnist 資料 開始 Tensorflow Step1: Softmax regression 基本上就是用 $ e ^ {W x +b} $ 的比例來計算機率 其中 x 是長度 784 的向量(圖片), W 是 10x784矩陣,加上一個長度為 10 的向量。 算出來的十個數值,依照比例當成我們預估的機率。 Step2: Loss function 的計算是 cross_entorpy. 基本上就是 $-log(\Pr(Y_{true}))$ Step3: Multilayer Convolutional Network
Python Code: import tensorflow as tf from tfdot import tfdot Explanation: Q 看一下 mnist 資料 開始 Tensorflow End of explanation # 輸入的 placeholder X = tf.placeholder(tf.float32, shape=[None, 784], name="X") # 權重參數,為了計算方便和一些慣例(行向量及列向量的差異),矩陣乘法的方向和上面解說相反 W = tf.Variable(tf.zeros([784, 10]), name='W') b = tf.Variable(tf.zeros([10]), name='b') # 這裡可以看成是列向量 tfdot() # 計算出來的公式 Y = tf.exp(tf.matmul(X, W) +b, name="Y") Y_softmax = tf.nn.softmax(Y, name="Y_softmax") # or #Y_softmax = tf.div(Y, tf.reduce_sum(Y, axis=1, keep_dims=True), name="Y_softmax") tfdot() Explanation: Softmax regression 基本上就是用 $ e ^ {W x +b} $ 的比例來計算機率 其中 x 是長度 784 的向量(圖片), W 是 10x784矩陣,加上一個長度為 10 的向量。 算出來的十個數值,依照比例當成我們預估的機率。 End of explanation # 真正的 Y Y_ = tf.placeholder(tf.float32, shape=[None, 10], name="Y_") #和算出來的 Y 來做 cross entropy #cross_entropy = tf.reduce_mean(-tf.reduce_sum(Y_*tf.log(Y_softmax), axis=1)) # or cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y_, logits=Y)) tfdot() train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy) tfdot(size=(15,30)) train_Y = np.eye(10)[train_y] test_Y = np.eye(10)[test_y] validation_Y = np.eye(10)[validation_y] sess = tf.InteractiveSession() tf.global_variables_initializer().run() for i in range(1000): rnd_idx = np.random.choice(train_X.shape[0], 50, replace=False) train_step.run(feed_dict={X: train_X[rnd_idx], Y_:train_Y[rnd_idx]}) Y.eval(feed_dict={X: train_X[:10]}) prediction = tf.argmax(Y, axis=1) # print predictions prediction.eval(feed_dict={X: train_X[:10]}) # print labels showX(train_X[:10]) train_y[:10] correct_prediction = tf.equal(tf.argmax(Y,1), tf.argmax(Y_, 1)) correct_prediction.eval({X: train_X[:10] , Y_: train_Y[:10]}) accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) accuracy.eval(feed_dict={X: train_X[:10] , Y_: train_Y[:10]}) accuracy.eval(feed_dict={X: train_X , Y_: train_Y}) # 合在一起來看 for t in range(10): for i in range(1000): rnd_idx = np.random.choice(train_X.shape[0], 200, replace=False) train_step.run(feed_dict={X: train_X[rnd_idx], Y_:train_Y[rnd_idx]}) a = accuracy.eval({X: validation_X , Y_: validation_Y}) print (t, a) accuracy.eval({X: test_X , Y_: test_Y}) sess.close() Explanation: Loss function 的計算是 cross_entorpy. 
基本上就是 $-log(\Pr(Y_{true}))$ End of explanation # 重設 session 和 graph tf.reset_default_graph() # 輸入還是一樣 X = tf.placeholder(tf.float32, shape=[None, 784], name="X") Y_ = tf.placeholder(tf.float32, shape=[None, 10], name="Y_") # 設定 weight 和 bais def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial, name ='W') def bias_variable(shape): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial, name = 'b') # 設定 cnn 的 layers def conv2d(X, W): return tf.nn.conv2d(X, W, strides=[1,1,1,1], padding='SAME') def max_pool_2x2(X): return tf.nn.max_pool(X, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME') # fisrt layer with tf.name_scope('conv1'): ## variables W_conv1 = weight_variable([3,3,1,32]) b_conv1 = bias_variable([32]) ## build the layer X_image = tf.reshape(X, [-1, 28, 28, 1]) h_conv1 = tf.nn.relu(conv2d(X_image, W_conv1) + b_conv1) h_pool1 = max_pool_2x2(h_conv1) tfdot() # second layer with tf.name_scope('conv2'): ## variables W_conv2 = weight_variable([3,3,32,64]) b_conv2 = bias_variable([64]) ## build the layer h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2) h_pool2 = max_pool_2x2(h_conv2) # fully-connected layer with tf.name_scope('full'): W_fc1 = weight_variable([7*7*64, 1024]) b_fc1 = bias_variable([1024]) h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64]) h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1)+b_fc1) # Dropout: A Simple Way to Prevent Neural Networks from Over fitting # https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf with tf.name_scope('dropout'): keep_prob = tf.placeholder("float", name="keep_prob") h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob) # Readout with tf.name_scope('readout'): W_fc2 = weight_variable([1024,10]) b_fc2 = bias_variable([10]) Y = tf.matmul(h_fc1_drop, W_fc2)+b_fc2 cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y_, logits=Y)) train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) prediction = tf.argmax(Y, 1, name="prediction") correct_prediction = tf.equal(prediction, tf.argmax(Y_, 1), name="correction") accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name="accuracy") sess = tf.InteractiveSession() tf.global_variables_initializer().run() %%timeit -r 1 -n 1 for i in range(5000): rnd_idx = np.random.choice(train_X.shape[0], 50, replace=False) if i%250 == 0: validation_accuracy = accuracy.eval({ X: validation_X[:200], Y_: validation_Y[:200], keep_prob: 1.0 }) print("step %d, validation accuracy %g"%(i, validation_accuracy)) train_step.run({X: train_X[rnd_idx], Y_: train_Y[rnd_idx], keep_prob: 0.5 }) np.mean([accuracy.eval({X: test_X[i:i+1000], Y_: test_Y[i:i+1000], keep_prob: 1.0}) for i in range(0, test_X.shape[0], 1000)] ) tf.train.write_graph(sess.graph_def, "./", "mnist_simple.pb", as_text=False) Explanation: Multilayer Convolutional Network End of explanation
12,027
Given the following text description, write Python code to implement the functionality described below step by step Description: Trace Analysis Examples Idle States Residency Analysis This notebook shows the features provided by the idle state analysis module. It will be necessary to collect the following events Step1: Target Configuration The target configuration is used to describe and configure your test environment. You can find more details in examples/utils/testenv_example.ipynb. Our target is a Juno R0 development board running Linux. Step2: Workload configuration and execution Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb. This experiment Step3: Parse trace and analyse data Step4: Per-CPU Idle State Residency Profiling It is possible to get the residency in each idle state of a CPU or a cluster with the following commands Step5: For the translation between the idle value and its description Step6: The IdleAnalysis module provide methods for plotting residency data Step7: CPU idle state over time Take a look at the target's idle states Step8: Now use trappy to plot the idle state of a single CPU over time. Higher is deeper Step9: Examine idle period lengths Let's get a DataFrame showing the length of each idle period on the CPU and the index of the cpuidle state that was entered. Step10: Make a scatter plot of the length of idle periods against the state that was entered. We should see that for long idle periods, deeper states were entered (i.e. we should see a positive corellation between the X and Y axes). Step11: Draw a histogram of the length of idle periods shorter than 100ms in which the CPU entered cpuidle state 2. Step12: Per-cluster Idle State Residency
Python Code: import logging from conf import LisaLogging LisaLogging.setup() %matplotlib inline import os # Support to access the remote target from env import TestEnv # Support to access cpuidle information from the target from devlib import * # Support to configure and run RTApp based workloads from wlgen import RTA, Ramp # Support for trace events analysis from trace import Trace # DataFrame support import pandas as pd from pandas import DataFrame # Trappy (plots) support from trappy import ILinePlot from trappy.stats.grammar import Parser Explanation: Trace Analysis Examples Idle States Residency Analysis This notebook shows the features provided by the idle state analysis module. It will be necessary to collect the following events: cpu_idle, to filter out intervals of time in which the CPU is idle sched_switch, to recognise tasks on kernelshark Details on idle states profiling ar given in Per-CPU/Per-Cluster Idle State Residency Profiling below. End of explanation # Setup a target configuration my_conf = { # Target platform and board "platform" : 'linux', "board" : 'juno', # Target board IP/MAC address "host" : '192.168.0.1', # Login credentials "username" : 'root', "password" : 'juno', "results_dir" : "IdleAnalysis", # RTApp calibration values (comment to let LISA do a calibration run) "rtapp-calib" : { "0": 318, "1": 125, "2": 124, "3": 318, "4": 318, "5": 319 }, # Tools required by the experiments "tools" : ['rt-app', 'trace-cmd'], "modules" : ['bl', 'cpufreq', 'cpuidle'], "exclude_modules" : ['hwmon'], # FTrace events to collect for all the tests configuration which have # the "ftrace" flag enabled "ftrace" : { "events" : [ "cpu_idle", "sched_switch" ], "buffsize" : 10 * 1024, }, } # Initialize a test environment te = TestEnv(my_conf, wipe=False, force_new=True) target = te.target # We're going to run quite a heavy workload to try and create short idle periods. # Let's set the CPU frequency to max to make sure those idle periods exist # (otherwise at a lower frequency the workload might overload the CPU # so it never went idle at all) te.target.cpufreq.set_all_governors('performance') Explanation: Target Configuration The target configuration is used to describe and configure your test environment. You can find more details in examples/utils/testenv_example.ipynb. Our target is a Juno R0 development board running Linux. End of explanation cpu = 1 def experiment(te): # Create RTApp RAMP task rtapp = RTA(te.target, 'ramp', calibration=te.calibration()) rtapp.conf(kind='profile', params={ 'ramp' : Ramp( start_pct = 80, end_pct = 10, delta_pct = 5, time_s = 0.5, period_ms = 5, cpus = [cpu]).get() }) # FTrace the execution of this workload te.ftrace.start() # Momentarily wake all CPUs to ensure cpu_idle trace events are present from the beginning te.target.cpuidle.perturb_cpus() rtapp.run(out_dir=te.res_dir) te.ftrace.stop() # Collect and keep track of the trace trace_file = os.path.join(te.res_dir, 'trace.dat') te.ftrace.get_trace(trace_file) # Dump platform descriptor te.platform_dump(te.res_dir) experiment(te) Explanation: Workload configuration and execution Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb. 
This experiment: - Runs a periodic RT-App workload, pinned to CPU 1, that ramps down from 80% to 10% over 7.5 seconds - Uses perturb_cpus to ensure 'cpu_idle' events are present in the trace for all CPUs - Triggers and collects ftrace output End of explanation # Base folder where tests folder are located res_dir = te.res_dir logging.info('Content of the output folder %s', res_dir) !tree {res_dir} trace = Trace(te.platform, res_dir, events=my_conf['ftrace']['events']) Explanation: Parse trace and analyse data End of explanation # Idle state residency for CPU 3 CPU=3 state_res = trace.data_frame.cpu_idle_state_residency(CPU) state_res Explanation: Per-CPU Idle State Residency Profiling It is possible to get the residency in each idle state of a CPU or a cluster with the following commands: End of explanation DataFrame(data={'value': state_res.index.values, 'name': [te.target.cpuidle.get_state(i, cpu=CPU) for i in state_res.index.values]}) Explanation: For the translation between the idle value and its description: End of explanation ia = trace.analysis.idle # Actual time spent in each idle state ia.plotCPUIdleStateResidency([1,2]) # Percentage of time spent in each idle state ia.plotCPUIdleStateResidency([1,2], pct=True) Explanation: The IdleAnalysis module provide methods for plotting residency data: End of explanation te.target.cpuidle.get_states() Explanation: CPU idle state over time Take a look at the target's idle states: End of explanation p = Parser(trace.ftrace, filters = {'cpu_id': cpu}) idle_df = p.solve('cpu_idle:state') ILinePlot(idle_df, column=cpu, drawstyle='steps-post').view() Explanation: Now use trappy to plot the idle state of a single CPU over time. Higher is deeper: the plot is at -1 when the CPU is active, 0 for WFI, 1 for CPU sleep, etc. We should see that as the workload ramps down and the idle periods become longer, the idle states used become deeper. End of explanation def get_idle_periods(df): series = df[cpu] series = series[series.shift() != series].dropna() if series.iloc[0] == -1: series = series.iloc[1:] idles = series.iloc[0::2] wakeups = series.iloc[1::2] if len(idles) > len(wakeups): idles = idles.iloc[:-1] else: wakeups = wakeups.iloc[:-1] lengths = pd.Series((wakeups.index - idles.index), index=idles.index) return pd.DataFrame({"length": lengths, "state": idles}) Explanation: Examine idle period lengths Let's get a DataFrame showing the length of each idle period on the CPU and the index of the cpuidle state that was entered. End of explanation lengths = get_idle_periods(idle_df) lengths.plot(kind='scatter', x='length', y='state') Explanation: Make a scatter plot of the length of idle periods against the state that was entered. We should see that for long idle periods, deeper states were entered (i.e. we should see a positive corellation between the X and Y axes). End of explanation df = lengths[(lengths['state'] == 2) & (lengths['length'] < 0.010)] df.hist(column='length', bins=50) Explanation: Draw a histogram of the length of idle periods shorter than 100ms in which the CPU entered cpuidle state 2. 
End of explanation # Idle state residency for CPUs in the big cluster trace.data_frame.cluster_idle_state_residency('big') # Actual time spent in each idle state for CPUs in the big and LITTLE clusters ia.plotClusterIdleStateResidency(['big', 'LITTLE']) # Percentage of time spent in each idle state for CPUs in the big and LITTLE clusters ia.plotClusterIdleStateResidency(['big', 'LITTLE'], pct=True) Explanation: Per-cluster Idle State Residency End of explanation
12,028
Given the following text description, write Python code to implement the functionality described below step by step Description: + Word Count Lab Step2: (1b) Pluralize and test Let's use a map() transformation to add the letter 's' to each string in the base RDD we just created. We'll define a Python function that returns the word with an 's' at the end of the word. Please replace &lt;FILL IN&gt; with your solution. If you have trouble, the next cell has the solution. After you have defined makePlural you can run the third cell which contains a test. If you implementation is correct it will print 1 test passed. This is the general form that exercises will take, except that no example solution will be provided. Exercises will include an explanation of what is expected, followed by code cells where one cell will have one or more &lt;FILL IN&gt; sections. The cell that needs to be modified will have # TODO Step3: (1c) Apply makePlural to the base RDD Now pass each item in the base RDD into a map() transformation that applies the makePlural() function to each element. And then call the collect() action to see the transformed RDD. Step4: (1d) Pass a lambda function to map Let's create the same RDD using a lambda function. Step5: (1e) Length of each word Now use map() and a lambda function to return the number of characters in each word. We'll collect this result directly into a variable. Step6: (1f) Pair RDDs The next step in writing our word counting program is to create a new type of RDD, called a pair RDD. A pair RDD is an RDD where each element is a pair tuple (k, v) where k is the key and v is the value. In this example, we will create a pair consisting of ('&lt;word&gt;', 1) for each word element in the RDD. We can create the pair RDD using the map() transformation with a lambda() function to create a new RDD. Step7: Part 2 Step8: (2b) Use groupByKey() to obtain the counts Using the groupByKey() transformation creates an RDD containing 3 elements, each of which is a pair of a word and a Python iterator. Now sum the iterator using a map() transformation. The result should be a pair RDD consisting of (word, count) pairs. Step9: (2c) Counting using reduceByKey A better approach is to start from the pair RDD and then use the reduceByKey() transformation to create a new pair RDD. The reduceByKey() transformation gathers together pairs that have the same key and applies the function provided to two values at a time, iteratively reducing all of the values to a single value. reduceByKey() operates by applying the function first within each partition on a per-key basis and then across the partitions, allowing it to scale efficiently to large datasets. Step10: (2d) All together The expert version of the code performs the map() to pair RDD, reduceByKey() transformation, and collect in one statement. Step11: Part 3 Step12: (3b) Mean using reduce Find the mean number of words per unique word in wordCounts. Use a reduce() action to sum the counts in wordCounts and then divide by the number of unique words. First map() the pair RDD wordCounts, which consists of (key, value) pairs, to an RDD of values. Step14: Part 4 Step16: (4b) Capitalization and punctuation Real world files are more complicated than the data we have been using in this lab. Some of the issues we have to address are Step17: (4c) Load a text file For the next part of this lab, we will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text file into an RDD, we use the SparkContext.textFile() method. 
We also apply the recently defined removePunctuation() function using a map() transformation to strip out the punctuation and change all text to lowercase. Since the file is large we use take(15), so that we only print 15 lines. Step18: (4d) Words from lines Before we can use the wordcount() function, we have to address two issues with the format of the RDD Step19: (4e) Remove empty elements The next step is to filter out the empty elements. Remove all entries where the word is ''. Step20: (4f) Count the words We now have an RDD that is only words. Next, let's apply the wordCount() function to produce a list of word counts. We can view the top 15 words by using the takeOrdered() action; however, since the elements of the RDD are pairs, we need a custom sort function that sorts using the value part of the pair. You'll notice that many of the words are common English words. These are called stopwords. In a later lab, we will see how to eliminate them from the results. Use the wordCount() function and takeOrdered() to obtain the fifteen most common words and their counts.
Python Code: wordsList = ['cat', 'elephant', 'rat', 'rat', 'cat'] wordsRDD = sc.parallelize(wordsList, 4) # Print out the type of wordsRDD print type(wordsRDD) Explanation: + Word Count Lab: Building a word count application This lab will build on the techniques covered in the Spark tutorial to develop a simple word count application. The volume of unstructured text in existence is growing dramatically, and Spark is an excellent tool for analyzing this type of data. In this lab, we will write code that calculates the most common words in the Complete Works of William Shakespeare retrieved from Project Gutenberg. This could also be scaled to find the most common words on the Internet. During this lab we will cover: Part 1: Creating a base RDD and pair RDDs Part 2: Counting with pair RDDs Part 3: Finding unique words and a mean value Part 4: Apply word count to a file Note that, for reference, you can look up the details of the relevant methods in Spark's Python API Part 1: Creating a base RDD and pair RDDs In this part of the lab, we will explore creating a base RDD with parallelize and using pair RDDs to count words. (1a) Create a base RDD We'll start by generating a base RDD by using a Python list and the sc.parallelize method. Then we'll print out the type of the base RDD. End of explanation # TODO: Replace <FILL IN> with appropriate code def makePlural(word): Adds an 's' to `word`. Note: This is a simple function that only adds an 's'. No attempt is made to follow proper pluralization rules. Args: word (str): A string. Returns: str: A string with 's' added to it. return word + 's' print makePlural('cat') # One way of completing the function def makePlural(word): return word + 's' print makePlural('cat') # Load in the testing code and check to see if your answer is correct # If incorrect it will report back '1 test failed' for each failed test # Make sure to rerun any cell you change before trying the test again from test_helper import Test # TEST Pluralize and test (1b) Test.assertEquals(makePlural('rat'), 'rats', 'incorrect result: makePlural does not add an s') Explanation: (1b) Pluralize and test Let's use a map() transformation to add the letter 's' to each string in the base RDD we just created. We'll define a Python function that returns the word with an 's' at the end of the word. Please replace &lt;FILL IN&gt; with your solution. If you have trouble, the next cell has the solution. After you have defined makePlural you can run the third cell which contains a test. If you implementation is correct it will print 1 test passed. This is the general form that exercises will take, except that no example solution will be provided. Exercises will include an explanation of what is expected, followed by code cells where one cell will have one or more &lt;FILL IN&gt; sections. The cell that needs to be modified will have # TODO: Replace &lt;FILL IN&gt; with appropriate code on its first line. Once the &lt;FILL IN&gt; sections are updated and the code is run, the test cell can then be run to verify the correctness of your solution. The last code cell before the next markdown section will contain the tests. 
End of explanation # TODO: Replace <FILL IN> with appropriate code pluralRDD = wordsRDD.map(makePlural) print pluralRDD.collect() # TEST Apply makePlural to the base RDD(1c) Test.assertEquals(pluralRDD.collect(), ['cats', 'elephants', 'rats', 'rats', 'cats'], 'incorrect values for pluralRDD') Explanation: (1c) Apply makePlural to the base RDD Now pass each item in the base RDD into a map() transformation that applies the makePlural() function to each element. And then call the collect() action to see the transformed RDD. End of explanation # TODO: Replace <FILL IN> with appropriate code pluralLambdaRDD = wordsRDD.map(lambda s: s + "s") print pluralLambdaRDD.collect() # TEST Pass a lambda function to map (1d) Test.assertEquals(pluralLambdaRDD.collect(), ['cats', 'elephants', 'rats', 'rats', 'cats'], 'incorrect values for pluralLambdaRDD (1d)') Explanation: (1d) Pass a lambda function to map Let's create the same RDD using a lambda function. End of explanation # TODO: Replace <FILL IN> with appropriate code pluralLengths = (pluralRDD .map(len) .collect()) print pluralLengths # TEST Length of each word (1e) Test.assertEquals(pluralLengths, [4, 9, 4, 4, 4], 'incorrect values for pluralLengths') Explanation: (1e) Length of each word Now use map() and a lambda function to return the number of characters in each word. We'll collect this result directly into a variable. End of explanation # TODO: Replace <FILL IN> with appropriate code wordPairs = wordsRDD.map(lambda a: (a, 1)) print wordPairs.collect() # TEST Pair RDDs (1f) Test.assertEquals(wordPairs.collect(), [('cat', 1), ('elephant', 1), ('rat', 1), ('rat', 1), ('cat', 1)], 'incorrect value for wordPairs') Explanation: (1f) Pair RDDs The next step in writing our word counting program is to create a new type of RDD, called a pair RDD. A pair RDD is an RDD where each element is a pair tuple (k, v) where k is the key and v is the value. In this example, we will create a pair consisting of ('&lt;word&gt;', 1) for each word element in the RDD. We can create the pair RDD using the map() transformation with a lambda() function to create a new RDD. End of explanation # TODO: Replace <FILL IN> with appropriate code # Note that groupByKey requires no parameters wordsGrouped = wordPairs.groupByKey() print wordsGrouped.collect() for key, value in wordsGrouped.collect(): print '{0}: {1}'.format(key, list(value)) # TEST groupByKey() approach (2a) Test.assertEquals(sorted(wordsGrouped.mapValues(lambda x: list(x)).collect()), [('cat', [1, 1]), ('elephant', [1]), ('rat', [1, 1])], 'incorrect value for wordsGrouped') Explanation: Part 2: Counting with pair RDDs Now, let's count the number of times a particular word appears in the RDD. There are multiple ways to perform the counting, but some are much less efficient than others. A naive approach would be to collect() all of the elements and count them in the driver program. While this approach could work for small datasets, we want an approach that will work for any size dataset including terabyte- or petabyte-sized datasets. In addition, performing all of the work in the driver program is slower than performing it in parallel in the workers. For these reasons, we will use data parallel operations. (2a) groupByKey() approach An approach you might first consider (we'll see shortly that there are better ways) is based on using the groupByKey() transformation. As the name implies, the groupByKey() transformation groups all the elements of the RDD with the same key into a single list in one of the partitions. 
There are two problems with using groupByKey(): The operation requires a lot of data movement to move all the values into the appropriate partitions. The lists can be very large. Consider a word count of English Wikipedia: the lists for common words (e.g., the, a, etc.) would be huge and could exhaust the available memory in a worker. Use groupByKey() to generate a pair RDD of type ('word', iterator). End of explanation # TODO: Replace <FILL IN> with appropriate code wordCountsGrouped = wordsGrouped.mapValues(lambda x: sum(x)) print wordCountsGrouped.collect() # TEST Use groupByKey() to obtain the counts (2b) Test.assertEquals(sorted(wordCountsGrouped.collect()), [('cat', 2), ('elephant', 1), ('rat', 2)], 'incorrect value for wordCountsGrouped') Explanation: (2b) Use groupByKey() to obtain the counts Using the groupByKey() transformation creates an RDD containing 3 elements, each of which is a pair of a word and a Python iterator. Now sum the iterator using a map() transformation. The result should be a pair RDD consisting of (word, count) pairs. End of explanation # TODO: Replace <FILL IN> with appropriate code # Note that reduceByKey takes in a function that accepts two values and returns a single value wordCounts = wordPairs.reduceByKey(lambda a, b: a + b) print wordCounts.collect() # TEST Counting using reduceByKey (2c) Test.assertEquals(sorted(wordCounts.collect()), [('cat', 2), ('elephant', 1), ('rat', 2)], 'incorrect value for wordCounts') Explanation: (2c) Counting using reduceByKey A better approach is to start from the pair RDD and then use the reduceByKey() transformation to create a new pair RDD. The reduceByKey() transformation gathers together pairs that have the same key and applies the function provided to two values at a time, iteratively reducing all of the values to a single value. reduceByKey() operates by applying the function first within each partition on a per-key basis and then across the partitions, allowing it to scale efficiently to large datasets. End of explanation # TODO: Replace <FILL IN> with appropriate code # secode method: #dd.map(lambda x: (x, 1)).reduceByKey(lambda k, v: v + v).collect() wordCountsCollected = (wordsRDD .map(lambda a: (a, 1)) .reduceByKey(lambda a, b: a + b) .collect()) print wordCountsCollected # TEST All together (2d) Test.assertEquals(sorted(wordCountsCollected), [('cat', 2), ('elephant', 1), ('rat', 2)], 'incorrect value for wordCountsCollected') Explanation: (2d) All together The expert version of the code performs the map() to pair RDD, reduceByKey() transformation, and collect in one statement. End of explanation # TODO: Replace <FILL IN> with appropriate code uniqueWords = len(set(wordsRDD.collect())) print uniqueWords # TEST Unique words (3a) Test.assertEquals(uniqueWords, 3, 'incorrect count of uniqueWords') Explanation: Part 3: Finding unique words and a mean value (3a) Unique words Calculate the number of unique words in wordsRDD. You can use other RDDs that you have already created to make this easier. End of explanation # TODO: Replace <FILL IN> with appropriate code from operator import add print wordCounts.collect() totalCount = wordCounts.map(lambda (k, v): v).reduce(lambda x, y: x + y) print 'totalCount:', totalCount average = totalCount / float(uniqueWords) print totalCount print round(average, 2) # TEST Mean using reduce (3b) Test.assertEquals(round(average, 2), 1.67, 'incorrect value of average') Explanation: (3b) Mean using reduce Find the mean number of words per unique word in wordCounts. 
Use a reduce() action to sum the counts in wordCounts and then divide by the number of unique words. First map() the pair RDD wordCounts, which consists of (key, value) pairs, to an RDD of values. End of explanation # TODO: Replace <FILL IN> with appropriate code def wordCount(wordListRDD): Creates a pair RDD with word counts from an RDD of words. Args: wordListRDD (RDD of str): An RDD consisting of words. Returns: RDD of (str, int): An RDD consisting of (word, count) tuples. return wordListRDD.map(lambda k: (k, 1)).reduceByKey(lambda x, y: x + y) print wordCount(wordsRDD).collect() # TEST wordCount function (4a) Test.assertEquals(sorted(wordCount(wordsRDD).collect()), [('cat', 2), ('elephant', 1), ('rat', 2)], 'incorrect definition for wordCount function') Explanation: Part 4: Apply word count to a file In this section we will finish developing our word count application. We'll have to build the wordCount function, deal with real world problems like capitalization and punctuation, load in our data source, and compute the word count on the new data. (4a) wordCount function First, define a function for word counting. You should reuse the techniques that have been covered in earlier parts of this lab. This function should take in an RDD that is a list of words like wordsRDD and return a pair RDD that has all of the words and their associated counts. End of explanation # TODO: Replace <FILL IN> with appropriate code import re def removePunctuation(text): Removes punctuation, changes to lower case, and strips leading and trailing spaces. Note: Only spaces, letters, and numbers should be retained. Other characters should should be eliminated (e.g. it's becomes its). Leading and trailing spaces should be removed after punctuation is removed. Args: text (str): A string. Returns: str: The cleaned up string. # method 1(failed): return text.replace(',', '').replace('.', '').strip().lower() #r = "!,'." #print re.sub(r, '', text).strip().lower() #return re.sub(r, '', text).strip().lower() # method 2(success) #return text.replace(',', '').replace('.', '').replace("'", '').replace('!', '').strip().lower() # method 3(failed) # import string # return text.translate(None, string.punctuation).strip().lower() # method 4(success) import string for c in string.punctuation: text = text.replace(c, "") return text.strip().lower() print removePunctuation('Hi, you!') print removePunctuation(' No under_score!') # TEST Capitalization and punctuation (4b) print removePunctuation(" The Elephant's 4 cats. ") Test.assertEquals(removePunctuation(" The Elephant's 4 cats. "), 'the elephants 4 cats', 'incorrect definition for removePunctuation function') Explanation: (4b) Capitalization and punctuation Real world files are more complicated than the data we have been using in this lab. Some of the issues we have to address are: Words should be counted independent of their capitialization (e.g., Spark and spark should be counted as the same word). All punctuation should be removed. Any leading or trailing spaces on a line should be removed. Define the function removePunctuation that converts all text to lower case, removes any punctuation, and removes leading and trailing spaces. Use the Python re module to remove any text that is not a letter, number, or space. Reading help(re.sub) might be useful. 
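As a hedged sketch of that re.sub route (an alternative to the character-replacement attempts tried in the cell above; the helper name is purely illustrative), a single substitution that keeps only lowercase letters, digits and spaces does the whole job:
import re
def remove_punctuation_sketch(text):
    # lower-case first, drop everything except letters, digits and spaces, then trim the ends
    return re.sub(r'[^a-z0-9 ]', '', text.lower()).strip()
print remove_punctuation_sketch(" The Elephant's 4 cats. ")  # expected: 'the elephants 4 cats'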
End of explanation # Just run this code import os.path baseDir = os.path.join('data') inputPath = os.path.join('cs100', 'lab1', 'shakespeare.txt') fileName = os.path.join(baseDir, inputPath) shakespeareRDD = (sc .textFile(fileName, 8) .map(removePunctuation)) #print shakespeareRDD.collect() print '\n'.join(shakespeareRDD .zipWithIndex() # to (line, lineNum) .map(lambda (l, num): '{0}: {1}'.format(num, l)) # to 'lineNum: line' .take(15)) Explanation: (4c) Load a text file For the next part of this lab, we will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text file into an RDD, we use the SparkContext.textFile() method. We also apply the recently defined removePunctuation() function using a map() transformation to strip out the punctuation and change all text to lowercase. Since the file is large we use take(15), so that we only print 15 lines. End of explanation # TODO: Replace <FILL IN> with appropriate code shakespeareWordsRDD = shakespeareRDD.flatMap(lambda s: s.split(' ')) shakespeareWordCount = shakespeareWordsRDD.count() print shakespeareWordsRDD.top(5) print shakespeareWordCount # TEST Words from lines (4d) # This test allows for leading spaces to be removed either before or after # punctuation is removed. Test.assertTrue(shakespeareWordCount == 927631 or shakespeareWordCount == 928908, 'incorrect value for shakespeareWordCount') Test.assertEquals(shakespeareWordsRDD.top(5), [u'zwaggerd', u'zounds', u'zounds', u'zounds', u'zounds'], 'incorrect value for shakespeareWordsRDD') Explanation: (4d) Words from lines Before we can use the wordcount() function, we have to address two issues with the format of the RDD: The first issue is that that we need to split each line by its spaces. The second issue is we need to filter out empty lines. Apply a transformation that will split each element of the RDD by its spaces. For each element of the RDD, you should apply Python's string split() function. You might think that a map() transformation is the way to do this, but think about what the result of the split() function will be. End of explanation # TODO: Replace <FILL IN> with appropriate code shakeWordsRDD = shakespeareWordsRDD.filter(lambda s: s != '') shakeWordCount = shakeWordsRDD.count() print shakeWordCount # TEST Remove empty elements (4e) Test.assertEquals(shakeWordCount, 882996, 'incorrect value for shakeWordCount') Explanation: (4e) Remove empty elements The next step is to filter out the empty elements. Remove all entries where the word is ''. End of explanation # TODO: Replace <FILL IN> with appropriate code #print shakeWordsRDD.collect() # method 1 (success) #from operator import add #top15WordsAndCounts = shakeWordsRDD.map(lambda w: (w, 1)).reduceByKey(add) # method 2 top15WordsAndCounts = shakeWordsRDD.map(lambda w: (w, 1)).reduceByKey(lambda x, y: x + y).top(15, key = lambda (k, v): v) print top15WordsAndCounts print '\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c), top15WordsAndCounts)) # TEST Count the words (4f) Test.assertEquals(top15WordsAndCounts, [(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463), (u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890), (u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678)], 'incorrect value for top15WordsAndCounts') Explanation: (4f) Count the words We now have an RDD that is only words. Next, let's apply the wordCount() function to produce a list of word counts. 
We can view the top 15 words by using the takeOrdered() action; however, since the elements of the RDD are pairs, we need a custom sort function that sorts using the value part of the pair. You'll notice that many of the words are common English words. These are called stopwords. In a later lab, we will see how to eliminate them from the results. Use the wordCount() function and takeOrdered() to obtain the fifteen most common words and their counts. End of explanation
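As an optional cross-check (a sketch only, reusing the wordCount() helper and shakeWordsRDD defined above), the takeOrdered() action mentioned in this step can replace top() by negating the count so that the largest values come first:
top15WordsAndCounts_alt = wordCount(shakeWordsRDD).takeOrdered(15, key=lambda (w, c): -c)
print '\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c), top15WordsAndCounts_alt))
Both calls should return the same fifteen (word, count) pairs as the solution cell above.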
Given the following text description, write Python code to implement the functionality described below step by step Description: Goulib.polynomial polynomial and piecewise defined functions Step1: Polynomial a Polynomial is an Expr defined by factors and with some more methods Step2: Motion "motion laws" are functions of time which return (position, velocity, acceleration, jerk) tuples Step3: Polynomial Segments Polynomials are very handy to define Segments as coefficients can easily be determined from start/end conditions. Also, polynomials can easily be integrated or derivated in order to obtain position, velocity, or acceleration laws from each other. Motion defines several handy functions that return SegmentPoly matching common situations Step4: Interval operations on [a..b[ intervals Step5: Piecewise Piecewise defined functions Step6: The simplest are piecewise continuous functions. They are defined by $(x_i,y_i)$ tuples given in any order. $f(x) = \begin{cases}y_0 & x < x_1 \ y_i & x_i \le x < x_{i+1} \ y_n & x > x_n \end{cases}$ Step7: By default y0=0 , but it can be specified at construction. Piecewise functions can also be defined by adding (x0,y,x1) segments Step8: Piecewise Expr function
Python Code: from Goulib.notebook import * from Goulib.polynomial import * from Goulib import itertools2, plot Explanation: Goulib.polynomial polynomial and piecewise defined functions End of explanation p1=Polynomial([-1,1,3]) # inited from coefficients in ascending power order p1 # Latex output by default p2=Polynomial('- 5x^3 +3*x') # inited from string, in any power order, with optional spaces and * p2.plot() [(x,p1(x)) for x in itertools2.linspace(-1,1,11)] #evaluation p1-p2+2 # addition and subtraction of polynomials and scalars -3*p1*p2**2 # polynomial (and scalar) multiplication and scalar power p1.derivative()+p2.integral() #integral and derivative Explanation: Polynomial a Polynomial is an Expr defined by factors and with some more methods End of explanation from Goulib.motion import * Explanation: Motion "motion laws" are functions of time which return (position, velocity, acceleration, jerk) tuples End of explanation seg=Segment2ndDegree(0,1,(-1,1,2)) # time interval and initial position,velocity and constant acceleration seg.plot() seg=Segment4thDegree(0,0,(-2,1),(2,3)) #start time and initial and final (position,velocity) seg.plot() seg=Segment4thDegree(0,2,(-2,1),(None,3)) # start and final time, initial (pos,vel) and final vel seg.plot() Explanation: Polynomial Segments Polynomials are very handy to define Segments as coefficients can easily be determined from start/end conditions. Also, polynomials can easily be integrated or derivated in order to obtain position, velocity, or acceleration laws from each other. Motion defines several handy functions that return SegmentPoly matching common situations End of explanation from Goulib.interval import * Interval(5,6)+Interval(2,3)+Interval(3,4) Explanation: Interval operations on [a..b[ intervals End of explanation from Goulib.piecewise import * Explanation: Piecewise Piecewise defined functions End of explanation p1=Piecewise([(4,4),(3,3),(1,1),(5,0)]) p1 # default rendering is LaTeX p1.plot() #pity that matplotlib doesn't accept large LaTeX as title... Explanation: The simplest are piecewise continuous functions. They are defined by $(x_i,y_i)$ tuples given in any order. $f(x) = \begin{cases}y_0 & x < x_1 \ y_i & x_i \le x < x_{i+1} \ y_n & x > x_n \end{cases}$ End of explanation p2=Piecewise(default=1) p2+=(2.5,1,6.5) p2+=(1.5,1,3.5) p2.plot(xmax=7,ylim=(-1,5)) plot.plot([p1,p2,p1+p2,p1-p2,p1*p2,p1/p2], labels=['p1','p2','p1+p2','p1-p2','p1*p2','p1/p2'], xmax=7, ylim=(-2,10), offset=0.02) p1=Piecewise([(2,True)],False) p2=Piecewise([(1,True),(2,False),(3,True)],False) plot.plot([p1,p2,p1|p2,p1&p2,p1^p2,p1>>3], labels=['p1','p2','p1 or p2','p1 and p2','p1 xor p2','p1>>3'], xmax=7,ylim=(-.5,1.5), offset=0.02) Explanation: By default y0=0 , but it can be specified at construction. Piecewise functions can also be defined by adding (x0,y,x1) segments End of explanation from math import cos f=Piecewise().append(0,cos).append(1,lambda x:x**x) f f.plot() Explanation: Piecewise Expr function End of explanation
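Circling back to the Polynomial class for a quick numerical sanity check (a sketch that reuses only constructors and calls already demonstrated above):
q = Polynomial([-1, 0, 1])  # x^2 - 1, coefficients in ascending power order
[(x, q(x)) for x in itertools2.linspace(-2, 2, 5)]  # expect [(-2, 3), (-1, 0), (0, -1), (1, 0), (2, 3)]
q.derivative()  # should render as 2x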
Given the following text description, write Python code to implement the functionality described below step by step Description: Hands-on! Nessa prática, sugerimos alguns pequenos exemplos para você implementar sobre o Spark. Logistic Regression com Cross-Validation No exercício LogisticRegression foi utilizado TrainValidationSplit como abordagem de avaliação do modelo gerado. Atualize o exercício consideram CrossValidator e compare os resultados. Não esqueça de utilizar Pipeline. Bibliotecas Step1: Funções Step2: Convertendo a saída de categórica para numérica Step3: Definição do Modelo Logístico Step4: Cross-Validation - TrainValidationSplit e CrossValidator Step5: Treino do Modelo e Predição do Teste Step6: Avaliação dos Modelos Step7: Conclusão Step8: Definição do Modelo de Árvores Randômicas Step9: Cross-Validation - CrossValidator Step10: Treino do Modelo e Predição do Teste Step11: Avaliação do Modelo
Python Code: from pyspark.ml.classification import LogisticRegression from pyspark.ml.evaluation import RegressionEvaluator, MulticlassClassificationEvaluator from pyspark.ml import Pipeline from pyspark.mllib.regression import LabeledPoint from pyspark.ml.linalg import Vectors from pyspark.ml.feature import StringIndexer from pyspark.mllib.evaluation import MulticlassMetrics from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit, CrossValidator Explanation: Hands-on! Nessa prática, sugerimos alguns pequenos exemplos para você implementar sobre o Spark. Logistic Regression com Cross-Validation No exercício LogisticRegression foi utilizado TrainValidationSplit como abordagem de avaliação do modelo gerado. Atualize o exercício consideram CrossValidator e compare os resultados. Não esqueça de utilizar Pipeline. Bibliotecas End of explanation def mapLibSVM(row): return (row[5],Vectors.dense(row[:3])) df = spark.read \ .format("csv") \ .option("header", "true") \ .option("inferSchema", "true") \ .load("datasets/iris.data") Explanation: Funções End of explanation indexer = StringIndexer(inputCol="label", outputCol="labelIndex") indexer = indexer.fit(df).transform(df) indexer.show() dfLabeled = indexer.rdd.map(mapLibSVM).toDF(["label", "features"]) dfLabeled.show() train, test = dfLabeled.randomSplit([0.9, 0.1], seed=12345) Explanation: Convertendo a saída de categórica para numérica End of explanation lr = LogisticRegression(labelCol="label", maxIter=15) Explanation: Definição do Modelo Logístico End of explanation paramGrid = ParamGridBuilder()\ .addGrid(lr.regParam, [0.1, 0.001]) \ .build() tvs = TrainValidationSplit(estimator=lr, estimatorParamMaps=paramGrid, evaluator=MulticlassClassificationEvaluator(), trainRatio=0.8) cval = CrossValidator(estimator=lr, estimatorParamMaps=paramGrid, evaluator=MulticlassClassificationEvaluator(), numFolds=10) Explanation: Cross-Validation - TrainValidationSplit e CrossValidator End of explanation result_tvs = tvs.fit(train).transform(test) result_cval = cval.fit(train).transform(test) preds_tvs = result_tvs.select(["prediction", "label"]) preds_cval = result_cval.select(["prediction", "label"]) Explanation: Treino do Modelo e Predição do Teste End of explanation # Instânciação dos Objetos de Métrics metrics_tvs = MulticlassMetrics(preds_tvs.rdd) metrics_cval = MulticlassMetrics(preds_cval.rdd) # Estatísticas Gerais para o Método TrainValidationSplit print("Summary Stats") print("F1 Score = %s" % metrics_tvs.fMeasure()) print("Accuracy = %s" % metrics_tvs.accuracy) print("Weighted recall = %s" % metrics_tvs.weightedRecall) print("Weighted precision = %s" % metrics_tvs.weightedPrecision) print("Weighted F(1) Score = %s" % metrics_tvs.weightedFMeasure()) print("Weighted F(0.5) Score = %s" % metrics_tvs.weightedFMeasure(beta=0.5)) print("Weighted false positive rate = %s" % metrics_tvs.weightedFalsePositiveRate) # Estatísticas Gerais para o Método TrainValidationSplit print("Summary Stats") print("F1 Score = %s" % metrics_cval.fMeasure()) print("Accuracy = %s" % metrics_cval.accuracy) print("Weighted recall = %s" % metrics_cval.weightedRecall) print("Weighted precision = %s" % metrics_cval.weightedPrecision) print("Weighted F(1) Score = %s" % metrics_cval.weightedFMeasure()) print("Weighted F(0.5) Score = %s" % metrics_cval.weightedFMeasure(beta=0.5)) print("Weighted false positive rate = %s" % metrics_cval.weightedFalsePositiveRate) Explanation: Avaliação dos Modelos End of explanation from pyspark.ml.classification import RandomForestClassifier 
Explanation: Conclusão: Uma vez que ambos os modelos de CrossValidation usam o mesmo modelo de predição (a Regressão Logística), e contando com o fato de que o dataset é relativamente pequeno, é natural que ambos os métodos de CrossValidation encontrem o mesmo (ou aproximadamente igual) valor ótimo para os hyperparâmetros testados. Por esse motivo, após descobrirem esse valor de hiperparâmetros, os dois modelos irão demonstrar resultados bastante similiares quando avaliados sobre o Conjunto de Treino (que também é o mesmo para os dois modelos). Random Forest Use o exercício anterior como base, mas agora utilizando pyspark.ml.classification.RandomForestClassifier. Use Pipeline e CrossValidator para avaliar o modelo gerado. Bibliotecas End of explanation rf = RandomForestClassifier(labelCol="label", featuresCol="features") Explanation: Definição do Modelo de Árvores Randômicas End of explanation paramGrid = ParamGridBuilder()\ .addGrid(rf.numTrees, [1, 100]) \ .build() cval = CrossValidator(estimator=rf, estimatorParamMaps=paramGrid, evaluator=MulticlassClassificationEvaluator(), numFolds=10) Explanation: Cross-Validation - CrossValidator End of explanation results = cval.fit(train).transform(test) predictions = results.select(["prediction", "label"]) Explanation: Treino do Modelo e Predição do Teste End of explanation # Instânciação dos Objetos de Métrics metrics = MulticlassMetrics(predictions.rdd) # Estatísticas Gerais para o Método TrainValidationSplit print("Summary Stats") print("F1 Score = %s" % metrics.fMeasure()) print("Accuracy = %s" % metrics.accuracy) print("Weighted recall = %s" % metrics.weightedRecall) print("Weighted precision = %s" % metrics.weightedPrecision) print("Weighted F(1) Score = %s" % metrics.weightedFMeasure()) print("Weighted F(0.5) Score = %s" % metrics.weightedFMeasure(beta=0.5)) print("Weighted false positive rate = %s" % metrics.weightedFalsePositiveRate) Explanation: Avaliação do Modelo End of explanation
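As a small follow-up sketch (not part of the original exercise), the fitted CrossValidator also exposes the averaged validation metric for each grid point and the winning model, which helps when comparing the two numTrees settings:
cvModel = cval.fit(train)
print(cvModel.avgMetrics)  # one cross-validated F1 score per entry of paramGrid
bestModel = cvModel.bestModel  # the random forest refit on the full training set
predictions = cvModel.transform(test).select(["prediction", "label"])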
Given the following text description, write Python code to implement the functionality described below step by step Description: Detection with SSD In this example, we will load a SSD model and use it to detect objects. 1. Setup First, Load necessary libs and set up caffe and caffe_root Step1: Load LabelMap. Step2: Load the net in the test phase for inference, and configure input preprocessing. Step3: 2. SSD detection Load an image. Step4: Run the net and examine the top_k results Step5: Plot the boxes
Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = (10, 10) plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # Only run this cell once in the active kernel or the files in later cells will not be found # Make sure that caffe is on the python path: caffe_root = '../' # this file is expected to be in {caffe_root}/examples import os os.chdir(caffe_root) import sys sys.path.insert(0, 'python') import caffe #Commenting out caffe device setting to allow CPU only #caffe.set_device(0) #caffe.set_mode_gpu() Explanation: Detection with SSD In this example, we will load a SSD model and use it to detect objects. 1. Setup First, Load necessary libs and set up caffe and caffe_root End of explanation from google.protobuf import text_format from caffe.proto import caffe_pb2 # load PASCAL VOC labels labelmap_file = 'data/VOC0712/labelmap_voc.prototxt' file = open(labelmap_file, 'r') labelmap = caffe_pb2.LabelMap() text_format.Merge(str(file.read()), labelmap) def get_labelname(labelmap, labels): num_labels = len(labelmap.item) labelnames = [] if type(labels) is not list: labels = [labels] for label in labels: found = False for i in xrange(0, num_labels): if label == labelmap.item[i].label: found = True labelnames.append(labelmap.item[i].display_name) break assert found == True return labelnames Explanation: Load LabelMap. End of explanation model_def = 'models/VGGNet/VOC0712/SSD_300x300/deploy.prototxt' #model_weights = 'models/VGGNet/VOC0712/SSD_300x300/VGG_VOC0712_SSD_300x300_iter_60000.caffemodel' model_weights = 'models/VGGNet/VOC0712/SSD_300x300/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel' net = caffe.Net(model_def, # defines the structure of the model caffe.TEST, # use test mode (e.g., don't perform dropout) weights=model_weights) # contains the trained weights # input preprocessing: 'data' is the name of the input blob == net.inputs[0] transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape}) transformer.set_transpose('data', (2, 0, 1)) transformer.set_mean('data', np.array([104,117,123])) # mean pixel transformer.set_raw_scale('data', 255) # the reference model operates on images in [0,255] range instead of [0,1] transformer.set_channel_swap('data', (2,1,0)) # the reference model has channels in BGR order instead of RGB Explanation: Load the net in the test phase for inference, and configure input preprocessing. End of explanation example = 'examples/images/fish-bike.jpg' #example = 'examples/images/cat.jpg' #Try your images if you mapped a volume at /images into the Docker container #example = '/images/filename.jpg' # set net to batch size of 1 image_resize = 300 net.blobs['data'].reshape(1,3,image_resize,image_resize) image = caffe.io.load_image(example) plt.imshow(image) Explanation: 2. SSD detection Load an image. End of explanation transformed_image = transformer.preprocess('data', image) net.blobs['data'].data[...] = transformed_image # Forward pass. detections = net.forward()['detection_out'] # Parse the outputs. det_label = detections[0,0,:,1] det_conf = detections[0,0,:,2] det_xmin = detections[0,0,:,3] det_ymin = detections[0,0,:,4] det_xmax = detections[0,0,:,5] det_ymax = detections[0,0,:,6] # Get detections with confidence higher than 0.6. 
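# Note: each parsed row of 'detection_out' above follows the layout
# [image_id, label, confidence, xmin, ymin, xmax, ymax], with the box corners
# normalized to [0, 1]; the 0.6 threshold below simply keeps the confident boxes.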
top_indices = [i for i, conf in enumerate(det_conf) if conf >= 0.6] top_conf = det_conf[top_indices] top_label_indices = det_label[top_indices].tolist() top_labels = get_labelname(labelmap, top_label_indices) top_xmin = det_xmin[top_indices] top_ymin = det_ymin[top_indices] top_xmax = det_xmax[top_indices] top_ymax = det_ymax[top_indices] Explanation: Run the net and examine the top_k results End of explanation colors = plt.cm.hsv(np.linspace(0, 1, 21)).tolist() plt.imshow(image) currentAxis = plt.gca() for i in xrange(top_conf.shape[0]): xmin = int(round(top_xmin[i] * image.shape[1])) ymin = int(round(top_ymin[i] * image.shape[0])) xmax = int(round(top_xmax[i] * image.shape[1])) ymax = int(round(top_ymax[i] * image.shape[0])) score = top_conf[i] label = int(top_label_indices[i]) label_name = top_labels[i] display_txt = '%s: %.2f'%(label_name, score) coords = (xmin, ymin), xmax-xmin+1, ymax-ymin+1 color = colors[label] currentAxis.add_patch(plt.Rectangle(*coords, fill=False, edgecolor=color, linewidth=2)) currentAxis.text(xmin, ymin, display_txt, bbox={'facecolor':color, 'alpha':0.5}) Explanation: Plot the boxes End of explanation
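As an optional wrap-up (a sketch only; it reuses the net, transformer and labelmap objects built above and assumes the same 300x300 input reshape), the whole pipeline can be folded into one helper so other images are easy to test:
def detect(image_path, conf_thresh=0.6):
    image = caffe.io.load_image(image_path)
    net.blobs['data'].data[...] = transformer.preprocess('data', image)
    det = net.forward()['detection_out'][0, 0]  # shape (num_detections, 7)
    keep = det[:, 2] >= conf_thresh
    labels = get_labelname(labelmap, det[keep, 1].tolist())
    return image, labels, det[keep, 2], det[keep, 3:7]
image, labels, scores, boxes = detect('examples/images/fish-bike.jpg')
print zip(labels, scores)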
Given the following text description, write Python code to implement the functionality described below step by step Description: Lesson 26 Step1: find.all() returns a list of strings. It behaves differently with groups. Step2: To get the total string, just wrap the total regex in its own group, so you get [(totalstring, group1, group2),...]. RegEx Character Classes \d\ is the RegEx character for digits. Step3: Other regex characters are Step4: It is possible to create your own character classes, outside of these shorthand classes, using [] Step5: A useful feature of custom character classes are negative character classes
Python Code: import re phoneRegex = re.compile(r'/d/d/d-/d/d/d-/d/d/d/d') #phoneRegex.search() # finds first match #phoneRegex.findall() # finds all matches Explanation: Lesson 26: RegEx Character Classes and the .findall() Method The find.all() method for regex objects finds all matching strings in a text. End of explanation import re phoneRegex = re.compile(r'(/d/d/d)-(/d/d/d-/d/d/d/d)') # Two groups, so returns tuples #phoneRegex.findall() # finds all matches in pairs; [('group1', 'group2'),...] Explanation: find.all() returns a list of strings. It behaves differently with groups. End of explanation #digitRegex = re.compile(r'(1|2|3|4...|n)`) is equivalent to #digitRegex = re.compile(r'\d\') Explanation: To get the total string, just wrap the total regex in its own group, so you get [(totalstring, group1, group2),...]. RegEx Character Classes \d\ is the RegEx character for digits. End of explanation # Example using lyrics from The Twelve Days of Christmas lyrics = ''' 12 Drummers Drumming 11 Pipers Piping 10 Lords a Leaping 9 Ladies Dancing 8 Maids a Milking 7 Swans a Swimming 6 Geese a Laying 5 Golden Rings 4 Calling Birds 3 French Hens 2 Turtle Doves and 1 Partridge in a Pear Tree ''' xmasRegex = re.compile(r'\d+\s\w+') # 1 or more digits, space, 1 or more words xmasRegex.findall(lyrics) # Returns all 'x gift', but stops at space because \w+ does not include spaces Explanation: Other regex characters are: \D Any character that is NOT a numeric digit from 0 to 9. \w Any letter, numeric digit, punctuation, or the underscore character (word characters.) \W Any character that is NOT a letter, numeric digit, or the underscore character. \s Any space, tab, or newline character (space characters.) \S Any character that is NOT a space character. End of explanation vowelRegex = re.compile(r'[aeiouAEIOU]') # RegEx for lowercase and uppercase vowels alphabetRegex = re.compile(r'[a-zA-Z]') # RegEx for lowercase and uppercase alphabet using ranges print(vowelRegex.findall('Robocop eats baby food.')) # Finds a list of all vowels in string doublevowelRegex = re.compile(r'[aeiouAEIOU]{2}') # RegEx for two lowercase and uppercase vowels in a row; {2} repeats. print(doublevowelRegex.findall('Robocop eats baby food.')) # Finds a list of all vowels in string Explanation: It is possible to create your own character classes, outside of these shorthand classes, using []: End of explanation consonantsRegex = re.compile(r'[^aeiouAEIOU]') # RegEx for finding all characters that are NOT vowels print(consonantsRegex.findall('Robocop eats baby food.')) # Output will include spaces and words. Explanation: A useful feature of custom character classes are negative character classes: End of explanation
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Objects and exceptions Object-oriented programming is a widespread paradigm that helps programmers create intuitive layers of abstraction. This example notebook is aimed at helping people along with the concepts of object-oriented programming and may require more time than is possible in a one-day workshop. In Python, classes are defined with the keyword class. All classes should inherit either some other class or a base-class confusingly called object. When a function is associated with a class, it is called a method. Step2: Typically classes are made to store some data that is logically related to a single entity. When a class is instantiated, i.e. given a set of values , the result of the action is an object. Step4: Object-oriented design can let programmers re-use code by inheriting other classes to extend their functionality. Step6: Python has plenty of so-called magic methods for classes. For instance, the __str__ method defines how a vector is textually represented when you call it's str. The __repr__ controls how the object is represented in the interpreter. Step8: Python supports multiple inheritance. This is another very powerful tool, but can sometimes lead to confusion when the same function is defined in many classes. If this confuses you, the concept to look for is Method Resolution Order . It is possible to call a parent class's methods using the keyword super. Also, it is possible to define what regular operators do to a class. Step9: Observe that in most cases we don't check for the types of the arguments given. This is idiomatic Python and contrary to some other object-oriented languages. It is better to ask for forgiveness than permission Another powerful feature of classes is composability Step10: As in all Python one should be aware of when an operation changes the state of an object, like our drawing and when it returns a new object, like multiplying vectors with a scalar. Instead of text we could create a Scalable Vector Graphics (SVG) image using this class. Step11: Exceptions In Python, exceptions are lightweight, i.e. handling them doesn't cause a notable decrease in performance as happens in some languages. The purpose of exceptions is to communicate that something didn't go right. The name of the exception typically tells what kind of error ocurred and the exception can also contain a more explicit message. Step12: The container-class can exhibit at least two different exceptions. Step13: Who should worry about the various issues is a good philosophical question. We could either make the Container-class secure in that it doesn't raise any errors to whoever calls it or we could let the caller worry about such errors. For now let's assume that the programmer is competent and knows what is a valid key and what isn't. Step14: A try-except may contain a finallyblock, which is always guaranteed to execute. Also, it is permissible to catch multiple different errors. Step15: There is also syntax for catching multiple error types in the same catch clause. The keyword raise is used to continue error handling. This is useful if you want to log errors but let them pass onward anyway. A raise without arguments will re-raise the error that was being handled.
Python Code: class Vector2D(object): Represents a 2-dimensional vector def __init__(self, x, y): self.x = x self.y = y Explanation: Objects and exceptions Object-oriented programming is a widespread paradigm that helps programmers create intuitive layers of abstraction. This example notebook is aimed at helping people along with the concepts of object-oriented programming and may require more time than is possible in a one-day workshop. In Python, classes are defined with the keyword class. All classes should inherit either some other class or a base-class confusingly called object. When a function is associated with a class, it is called a method. End of explanation vector1 = Vector2D(1,1) vector2 = Vector2D(0,0) type(vector1)(2, 3) Explanation: Typically classes are made to store some data that is logically related to a single entity. When a class is instantiated, i.e. given a set of values , the result of the action is an object. End of explanation class AddableVector2D(Vector2D): Represents a 2D vector that can be added to another def add(self, another): return type(self)(self.x + another.x, self.y + another.y) addvec1 = AddableVector2D(1,1) addvec2 = AddableVector2D(2,2) addvec3 = addvec1.add(addvec2) Explanation: Object-oriented design can let programmers re-use code by inheriting other classes to extend their functionality. End of explanation class RepresentableVector2D(Vector2D): A vector that has textual representations def __str__(self): return "Vector2D({},{})".format(self.x, self.y) def __repr__(self): return str(self) rv1 = RepresentableVector2D(1, 2) print(str(rv1)) rv1 Explanation: Python has plenty of so-called magic methods for classes. For instance, the __str__ method defines how a vector is textually represented when you call it's str. The __repr__ controls how the object is represented in the interpreter. End of explanation class ExtendedVector2D(AddableVector2D, RepresentableVector2D): A vector that has several features def __str__(self): return "Extended" + super(ExtendedVector2D, self).__str__() # addition with + def __add__(self, other): return self.add(other) # negation with - prefix def __neg__(self): return type(self)(-self.x, -self.y) # subtraction with - def __sub__(self, other): return self + (-other) def __mul__(self, scalar): return type(self)(self.x*scalar, self.y*scalar) vec1 = ExtendedVector2D(1, 4) vec2 = vec1 * 0.5 vec3 = vec1 - (vec2*3) vec3 Explanation: Python supports multiple inheritance. This is another very powerful tool, but can sometimes lead to confusion when the same function is defined in many classes. If this confuses you, the concept to look for is Method Resolution Order . It is possible to call a parent class's methods using the keyword super. Also, it is possible to define what regular operators do to a class. End of explanation class Drawing2D(object): def __init__(self, x=0, y=0, initial_vectors=None): self.x = x self.y = y self.vectors = initial_vectors if initial_vectors else [] def add(self, vector): self.vectors.append(vector) def scale(self, scalar): self.vectors = [v*scalar for v in self.vectors] def __str__(self): output = "start at {},{}.\n".format(self.x, self.y) for vector in self.vectors: output += "draw vector {},{}\n".format(vector.x, vector.y) return output Explanation: Observe that in most cases we don't check for the types of the arguments given. This is idiomatic Python and contrary to some other object-oriented languages. 
It is better to ask for forgiveness than permission Another powerful feature of classes is composability: you can abstract something to another level. Let's make a class to represent a very rough vector-based drawing. End of explanation vectors = [vec1, vec2, vec3] drawing = Drawing2D(0,0, vectors) print(drawing) drawing.scale(0.75) print('---') print(drawing) print('---') drawing.add(ExtendedVector2D(-1,-1)) print(drawing) Explanation: As in all Python one should be aware of when an operation changes the state of an object, like our drawing and when it returns a new object, like multiplying vectors with a scalar. Instead of text we could create a Scalable Vector Graphics (SVG) image using this class. End of explanation class Container(object): def __init__(self): self.bag = {} def put(self, key, item): self.bag[key] = item def get(self, key): return self.bag[key] Explanation: Exceptions In Python, exceptions are lightweight, i.e. handling them doesn't cause a notable decrease in performance as happens in some languages. The purpose of exceptions is to communicate that something didn't go right. The name of the exception typically tells what kind of error ocurred and the exception can also contain a more explicit message. End of explanation container = Container() container.put([1, 2, 3], "example") container.get("not_in_it") Explanation: The container-class can exhibit at least two different exceptions. End of explanation try: container = Container() container.put([1,2,3], "value") except TypeError as err: print("Stupid programmer caused an error: " + str(err)) Explanation: Who should worry about the various issues is a good philosophical question. We could either make the Container-class secure in that it doesn't raise any errors to whoever calls it or we could let the caller worry about such errors. For now let's assume that the programmer is competent and knows what is a valid key and what isn't. End of explanation try: container = Container() container.put(3, "value") container.get(3) except TypeError as err: print("Stupid programmer caused an error: " + str(err)) except KeyError as err: print("Stupid programmer caused another error: " + str(err)) finally: print("all is well in the end") # go ahead, make changes that cause one of the exceptions to be raised Explanation: A try-except may contain a finallyblock, which is always guaranteed to execute. Also, it is permissible to catch multiple different errors. End of explanation try: container = Container() container.put(3, "value") container.get(5) except (TypeError, KeyError) as err: print("please shoot me") if type(err) == TypeError: raise Exception("That's it I quit!") else: raise Explanation: There is also syntax for catching multiple error types in the same catch clause. The keyword raise is used to continue error handling. This is useful if you want to log errors but let them pass onward anyway. A raise without arguments will re-raise the error that was being handled. End of explanation
12,034
Given the following text description, write Python code to implement the functionality described below step by step Description: Deep learning 2. Convolutional Neural Networks The dense FFN used before contained 600000 parameters. These are expensive to train! A picture is not a flat array of numbers, it is a 2D matrix with multiple color channels. Convolution kernels are able to find 2D hidden features. A CNN's layers are operating on a 3D tensor of shape (height, width, channels). A colored imaged usually has three channels (RGB) but more are possible. We have grayscale thus only one channel. The width and height dimensions tend to shrink as we go deeper in the network. Why? NNs are efective information filters. Why is this important for a biologist? - Sight is our main sense. Labelling pictures is much easier than other types of data! - Most biological data can be converted to image format. (including genomics, transcriptomics, etc) - Spatial transcriptomics, as well as some single cell data have multi-channel and spatial features. - Microscopy is biology too! Step1: Method Step2: The layers Step3: Loss
Python Code: from tensorflow.keras.datasets import mnist from tensorflow.keras.utils import to_categorical (train_images, train_labels), (test_images, test_labels) = mnist.load_data() train_images = train_images.reshape((60000, 28, 28, 1)) train_images = train_images.astype('float32') / 255 test_images = test_images.reshape((10000, 28, 28, 1)) test_images = test_images.astype('float32') / 255 train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) print(train_images.shape) Explanation: Deep learning 2. Convolutional Neural Networks The dense FFN used before contained 600000 parameters. These are expensive to train! A picture is not a flat array of numbers, it is a 2D matrix with multiple color channels. Convolution kernels are able to find 2D hidden features. A CNN's layers are operating on a 3D tensor of shape (height, width, channels). A colored imaged usually has three channels (RGB) but more are possible. We have grayscale thus only one channel. The width and height dimensions tend to shrink as we go deeper in the network. Why? NNs are efective information filters. Why is this important for a biologist? - Sight is our main sense. Labelling pictures is much easier than other types of data! - Most biological data can be converted to image format. (including genomics, transcriptomics, etc) - Spatial transcriptomics, as well as some single cell data have multi-channel and spatial features. - Microscopy is biology too! End of explanation from IPython.display import Image Image(url= "../img/cnn.png", width=400, height=400) Image(url= "../img/convolution.png", width=400, height=400) Image(url= "../img/pooling.png", width=400, height=400) Explanation: Method: The convolutional network will filter the image in a sequence, gradually expanding the complexity of hidden features and eliminating the noise via the "downsampling bottleneck". A CNN's filtering principle is based on the idea of functional convolution, this is a mathematical way of comparing two functions in a temporal manner by sliding one over the other. Parts: convolution, pooling and classification End of explanation #from tensorflow.keras import layers from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense from tensorflow.keras import models model = models.Sequential() # first block model.add(Conv2D(32, kernel_size=(3, 3), input_shape=(28, 28, 1), activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) # second block model.add(Conv2D(64, kernel_size=(3,3), activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.2)) # flattening followed by dense layer and final output layer model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dense(10,activation='softmax')) model.summary() Explanation: The layers: - The first block: 32 number of kernels (convolutional filters) each of 3 x 3 size followed by a max pooling operation with pool size of 2 x 2. - The second block: 64 number of kernels each of 3 x 3 size followed by a max pooling operation with pool size of 2 x 2 and a dropout of 20% to ensure the regularization and thus avoiding overfitting of the model. - classification block: flattening operation which transforms the data to 1 dimensional so as to feed it to fully connected or dense layer. The first dense layer consists of 128 neurons with relu activation while the final output layer consist of 10 neurons with softmax activation which will output the probability for each of the 10 classes. 
End of explanation from tensorflow.keras.optimizers import Adam # compiling the model model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.01), metrics=['accuracy']) # train the model training dataset history = model.fit(train_images, train_labels, epochs=5, validation_data=(test_images, test_labels), batch_size=128) # save the model model.save('cnn.h5') test_loss, test_acc = model.evaluate(test_images, test_labels) print(test_loss, test_acc) %matplotlib inline import matplotlib.pyplot as plt print(history.history.keys()) # Ploting the accuracy graph plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'val'], loc='best') plt.show() # Ploting the loss graph plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'val'], loc='best') plt.show() Explanation: Loss: Many functions are possible (mean square error, maximum likelihood) cross-entropy loss (or log loss): sum for all predicted classes $- \sum_c y_c log(p_c)$, where $y_c$ is a binary inndication of classification success and $p_c$ is the probability value of the model prediction Optimizers: SGD: slower, classic, can get stuck in local minima, uses momentum to avoid small valleys rmsprop (root mean square propagation): batches contain bias noise, weights are adjusted ortogonal to bias, leading to faster convergence. Adam: combines the above Read more at: https://medium.com/analytics-vidhya/momentum-rmsprop-and-adam-optimizer-5769721b4b19 Learning rate ($\alpha$): Gradient descent algorithms multiply the magnitude of the gradient (the rate of error change with respect to each weight) by a scalar known as learning rate (also sometimes called step size) to determine the next point. $w_{ij} = w_{ij} + \alpha \frac{dE}{dw_{ij}}$ End of explanation
12,035
Given the following text description, write Python code to implement the functionality described below step by step Description: training an unsupervised VAE with APOGEE DR14 spectra this notebooke takes you through the building and training of a fairly deep VAE. I have not actually done too much work with DR14, so it may pose some potential difficulties, but this should be a good start. The training is suited to be put into a python script and run from the command line. If you're inclined, you may want to experiment with the model architecture, but I'm pretty sure this will work. Step1: load data this is file dependent this particular function expects the aspcap dr14 h5 file that can be downloaded from vos Step2: set some model hyper-parameters Step3: Zero-Augmentation In an effort to evenly distribute the weighting of the VAE, throughout training, a zero-augmentation technique was applied to the training spectra samples - both synthetic and observed. The zero-augmentation is implemented as the first layer in the encoder where a zero-augmentation mask is sent as an input along with the input spectrum and the two are multiplied together. The zero-augmentation mask is the same size as the input spectrum vector and is composed of ones and zeros. For the APOGEE wave-grid, the spectral region is divided into seven chunks and for each input spectrum a random 0-3 of these chunks are assigned to be zeros while the remainder of the zero-augmentation mask is made up of ones. This means for a given spectrum, the input for training may be 4/7ths, 5/7ths, 6/7ths, or the entire spectrum. This augmentation is done randomly throughout training, meaning that each spectrum will be randomly assigned a different zero-augmentation mask at every iteration. Step4: build encoder takes spectra (x) and zero-augmentation mask as inputs and outputs latent distribution (z_mean and z_log_var) Step5: build decoder takes z (latent variables) as an input and outputs a stellar spectrum Step6: build models Step7: create loss function This VAE has two loss functions that are minimized simultaneously Step8: build and compile model trainer Step9: train model you can experiment with the number of epochs. I suggest starting with fewer and seeing if the results are adequate. if not, continue training. The models and results are saved in models/ and results/ after each epoch, so you can run analyses throughout training. Step10: analyze results Note, this is a dummy result. I haven't trained the models for a proper epoch yet Step11: t-sne an example of an unsupervised clustering method on the latent space.
Python Code: import numpy as np import time import h5py import keras import matplotlib.pyplot as plt import sys from keras.layers import (Input, Dense, Lambda, Flatten, Reshape, BatchNormalization, Activation, Dropout, Conv1D, UpSampling1D, MaxPooling1D, ZeroPadding1D, LeakyReLU) from keras.engine.topology import Layer from keras.optimizers import Adam from keras.models import Model from keras import backend as K plt.switch_backend('agg') Explanation: training an unsupervised VAE with APOGEE DR14 spectra this notebooke takes you through the building and training of a fairly deep VAE. I have not actually done too much work with DR14, so it may pose some potential difficulties, but this should be a good start. The training is suited to be put into a python script and run from the command line. If you're inclined, you may want to experiment with the model architecture, but I'm pretty sure this will work. End of explanation # Define edges of detectors (for APOGEE) blue_chip_begin = 322 blue_chip_end = 3242 green_chip_begin = 3648 green_chip_end = 6048 red_chip_begin = 6412 red_chip_end = 8306 # function for loading data def load_train_data_weighted(data_file,indices=None): # grab all if indices is None: with h5py.File(data_file,"r") as F: ap_spectra = F['spectrum'][:] ap_err_spectra = F['error_spectrum'][:] # grab a batch else: with h5py.File(data_file, "r") as F: indices_bool = np.ones((len(F['spectrum']),),dtype=bool) indices_bool[:] = False indices_bool[indices] = True ap_spectra = F['spectrum'][indices_bool,:] ap_err_spectra = F['error_spectrum'][indices_bool,:] # combine chips ap_spectra = np.hstack((ap_spectra[:,blue_chip_begin:blue_chip_end], ap_spectra[:,green_chip_begin:green_chip_end], ap_spectra[:,red_chip_begin:red_chip_end])) # set nan values to zero ap_spectra[np.isnan(ap_spectra)]=0. 
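# stitch the three detector chips of the error spectra together, mirroring the flux treatment above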
ap_err_spectra = np.hstack((ap_err_spectra[:,blue_chip_begin:blue_chip_end], ap_err_spectra[:,green_chip_begin:green_chip_end], ap_err_spectra[:,red_chip_begin:red_chip_end])) return ap_spectra,ap_err_spectra # function for reshaping spectra into appropriate format for CNN def cnn_reshape(spectra): return spectra.reshape(spectra.shape[0],spectra.shape[1],1) Explanation: load data this is file dependent this particular function expects the aspcap dr14 h5 file that can be downloaded from vos:starnet/public End of explanation img_cols, img_chns = 7214, 1 num_fluxes=7214 input_shape=(num_fluxes,1) # z_dims is the dimension of the latent space z_dims = 64 batch_size = 64 epsilon_std = 1.0 learning_rate = 0.001 decay = 0.0 padding=u'same' kernel_init = keras.initializers.RandomNormal(mean=0.0, stddev=0.01) bias_init = keras.initializers.Zeros() Explanation: set some model hyper-parameters End of explanation # zero-augmentation layer (a trick I use to input chunks of zeros into the input spectra) class ZeroAugmentLayer(Layer): def __init__(self, **kwargs): self.is_placeholder = True super(ZeroAugmentLayer, self).__init__(**kwargs) def zero_agument(self, x_real, zero_mask): return x_real*zero_mask def call(self, inputs): x_real = inputs[0] zero_mask = inputs[1] x_augmented = self.zero_agument(x_real, zero_mask) return x_augmented # a function for creating the zero-masks used during training def create_zero_mask(spectra,min_chunks,max_chunks,chunk_size,dataset=None,ones_padded=False): if dataset is None: zero_mask = np.ones_like(spectra) elif dataset=='apogee': zero_mask = np.ones((spectra.shape[0],7214)) elif dataset=='segue': zero_mask = np.ones((spectra.shape[0],3688)) num_spec = zero_mask.shape[0] len_spec = zero_mask.shape[1] num_bins = len_spec/chunk_size remainder = len_spec%chunk_size spec_sizes = np.array([chunk_size for i in range(num_bins)]) spec_sizes[-1]=spec_sizes[-1]+remainder num_bins_removed = np.random.randint(min_chunks,max_chunks+1,size=(num_spec,)) for i, mask in enumerate(zero_mask): bin_indx_removed = np.random.choice(num_bins, num_bins_removed[i], replace=False) for indx in bin_indx_removed: if indx==0: mask[indx*spec_sizes[indx]:(indx+1)*spec_sizes[indx]]=0. else: mask[indx*spec_sizes[indx-1]:indx*spec_sizes[indx-1]+spec_sizes[indx]]=0. return zero_mask Explanation: Zero-Augmentation In an effort to evenly distribute the weighting of the VAE, throughout training, a zero-augmentation technique was applied to the training spectra samples - both synthetic and observed. The zero-augmentation is implemented as the first layer in the encoder where a zero-augmentation mask is sent as an input along with the input spectrum and the two are multiplied together. The zero-augmentation mask is the same size as the input spectrum vector and is composed of ones and zeros. For the APOGEE wave-grid, the spectral region is divided into seven chunks and for each input spectrum a random 0-3 of these chunks are assigned to be zeros while the remainder of the zero-augmentation mask is made up of ones. This means for a given spectrum, the input for training may be 4/7ths, 5/7ths, 6/7ths, or the entire spectrum. This augmentation is done randomly throughout training, meaning that each spectrum will be randomly assigned a different zero-augmentation mask at every iteration. 
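To make the augmentation concrete, here is a tiny hedged example of the helper defined above, using the same (0, 3, 1030) settings that the training loop passes in later (the dummy batch is purely illustrative):
dummy_batch = np.ones((4, 7214))
mask = create_zero_mask(dummy_batch, 0, 3, 1030)
print(mask.shape)  # (4, 7214)
print((mask == 0).sum(axis=1) / 1030.)  # roughly how many ~1030-pixel chunks were zeroed per spectrum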
End of explanation def build_encoder(input_1,input_2): # zero-augment input spectrum x = ZeroAugmentLayer()([input_1,input_2]) # first conv block x = Conv1D(filters=16, kernel_size=8, strides=1, kernel_initializer=kernel_init, bias_initializer=bias_init, padding=padding)(x) x = LeakyReLU(0.1)(x) x = BatchNormalization()(x) x = Dropout(0.2)(x) # second conv bloack x = Conv1D(filters=16, kernel_size=8, strides=1, kernel_initializer=kernel_init, bias_initializer=bias_init, padding=padding)(x) x = LeakyReLU(0.1)(x) x = BatchNormalization()(x) # maxpooling layer and flatten x = MaxPooling1D(pool_size=4, strides=4, padding='valid')(x) x = Flatten()(x) x = Dropout(0.2)(x) # intermediate dense block x = Dense(256)(x) x = LeakyReLU(0.3)(x) x = BatchNormalization()(x) x = Dropout(0.3)(x) # latent distribution output z_mean = Dense(z_dims)(x) z_log_var = Dense(z_dims)(x) return Model([input_1,input_2],[z_mean,z_log_var]) # function for obtaining a latent sample given a distribution def sampling(args, latent_dim=z_dims, epsilon_std=epsilon_std): z_mean, z_log_var = args epsilon = K.random_normal(shape=(z_dims,), mean=0., stddev=epsilon_std) return z_mean + K.exp(z_log_var) * epsilon Explanation: build encoder takes spectra (x) and zero-augmentation mask as inputs and outputs latent distribution (z_mean and z_log_var) End of explanation def build_decoder(inputs): # input fully-connected block x = Dense(256)(inputs) x = LeakyReLU(0.1)(x) x = BatchNormalization()(x) x = Dropout(0.2)(x) # intermediate fully-connected block w = input_shape[0] // (2 ** 3) x = Dense(w * 16)(x) x = LeakyReLU(0.1)(x) x = BatchNormalization()(x) x = Dropout(0.2)(x) # reshape for convolutional blocks x = Reshape((w, 16))(x) # first deconv block x = UpSampling1D(size=4)(x) x = Conv1D(kernel_initializer=kernel_init,bias_initializer=bias_init,padding="same", filters=16,kernel_size=8)(x) x = LeakyReLU(0.1)(x) x = BatchNormalization()(x) x = Dropout(0.1)(x) # zero-padding to get x in the right dimension to create the spectra x = ZeroPadding1D(padding=(2,1))(x) # second deconv block x = UpSampling1D(size=2)(x) x = Conv1D(kernel_initializer=kernel_init,bias_initializer=bias_init,padding="same", filters=16,kernel_size=8)(x) x = LeakyReLU(0.1)(x) x = BatchNormalization()(x) # output conv layer x = Conv1D(kernel_initializer=kernel_init,bias_initializer=bias_init,padding="same", filters=1,kernel_size=8,activation='linear')(x) return Model(inputs,x) Explanation: build decoder takes z (latent variables) as an input and outputs a stellar spectrum End of explanation # encoder and predictor input placeholders input_spec = Input(shape=input_shape) input_mask = Input(shape=input_shape) # error spectra placeholder input_err_spec = Input(shape=input_shape) # decoder input placeholder input_z = Input(shape=(z_dims,)) model_name='vae_test' start_e = 0 # if you want to continue training from a certain epoch, you can uncomment the load models lines # and comment out the build_encoder, build_decoder lines ''' encoder = keras.models.load_model('models/encoder_'+model_name+'_epoch_'+str(start_e)+'.h5', custom_objects={'ZeroAugmentLayer':ZeroAugmentLayer}) decoder = keras.models.load_model('models/decoder_'+model_name+'_epoch_'+str(start_e)+'.h5', custom_objects={'ZeroAugmentLayer':ZeroAugmentLayer}) ''' # encoder model encoder = build_encoder(input_spec, input_mask) # decoder layers decoder = build_decoder(input_z) #''' encoder.summary() decoder.summary() # outputs for encoder z_mean, z_log_var = encoder([input_spec, input_mask]) # sample from latent 
distribution given z_mean and z_log_var z = Lambda(sampling, output_shape=(z_dims,))([z_mean, z_log_var]) # outputs for decoder output_spec = decoder(z) Explanation: build models End of explanation # loss for evaluating the regenerated spectra and the latent distribution class VAE_LossLayer_weighted(Layer): __name__ = u'vae_labeled_loss_layer' def __init__(self, **kwargs): self.is_placeholder = True super(VAE_LossLayer_weighted, self).__init__(**kwargs) def lossfun(self, x_true, x_pred, z_avg, z_log_var, x_err): mse = K.mean(K.square((x_true - x_pred)/x_err)) kl_loss_x = K.mean(-0.5 * K.sum(1.0 + z_log_var - K.square(z_avg) - K.exp(z_log_var), axis=-1)) return mse + kl_loss_x def call(self, inputs): # inputs for the layer: x_true = inputs[0] x_pred = inputs[1] z_avg = inputs[2] z_log_var = inputs[3] x_err = inputs[4] # calculate loss loss = self.lossfun(x_true, x_pred, z_avg, z_log_var, x_err) # add loss to model self.add_loss(loss, inputs=inputs) # returned value not really used for anything return x_true # dummy loss to give zeros, hence no gradients to train # the real loss is computed as the layer shown above and therefore this dummy loss is just # used to satisfy keras notation when compiling the model def zero_loss(y_true, y_pred): return K.zeros_like(y_true) Explanation: create loss function This VAE has two loss functions that are minimized simultaneously: a weighted mean-squared-error to analyze the predicted spectra: \begin{equation} mse = \frac{1}{N}\sum{\frac{(x_{true}-x_{pred})^2}{(x_{err})^2}} \end{equation} a relative entropy, KL (Kullback–Leibler divergence) loss to keep the latent variables within a similar distribuition: \begin{equation} KL = \frac{1}{N}\sum{-\frac{1}{2}(1.0+z_{log_var} - z_{avg}^2 - e^{z_{log_var}})} \end{equation} End of explanation # create loss layer vae_loss = VAE_LossLayer_weighted()([input_spec, output_spec, z_mean, z_log_var, input_err_spec]) # build trainer with spectra, zero-masks, and error spectra as inputs. 
output is the final loss layer vae = Model(inputs=[input_spec, input_mask, input_err_spec], outputs=[vae_loss]) # compile trainer vae.compile(loss=[zero_loss], optimizer=Adam(lr=1.0e-4, beta_1=0.5)) vae.summary() # a model that encodes and then decodes a spectrum (this is used to plot the intermediate results during training) gen_x_to_x = Model([input_spec,input_mask], output_spec) gen_x_to_x.compile(loss='mse', optimizer=Adam(lr=1.0e-4, beta_1=0.5)) # a function to display the time remaining or elapsed def time_format(t): m, s = divmod(t, 60) m = int(m) s = int(s) if m == 0: return u'%d sec' % s else: return u'%d min' % (m) # function for training on a batch def train_on_batch(x_batch,x_err_batch): # create zero-augmentation mask for batch zero_mask = create_zero_mask(x_batch,0,3,1030,dataset=None,ones_padded=False) # train on batch loss = [vae.train_on_batch([cnn_reshape(x_batch), cnn_reshape(zero_mask),cnn_reshape(x_err_batch)], cnn_reshape(x_batch))] losses = {'vae_loss': loss[0]} return losses def fit_model(model_name, data_file, epochs, reporter): # get the number of spectra in the data_file with h5py.File(data_file, "r") as F: num_data_ap = len(F['spectrum']) # lets use 90% of the samples for training num_data_train_ap = int(num_data_ap*0.9) # the remainder will be grabbed for testing the model throughout training test_indices_range_ap = [num_data_train_ap,num_data_ap] # loop through the number of epochs for e in xrange(start_e,epochs): # create a randomized array of indices to grab batches of the spectra perm_ap = np.random.permutation(num_data_train_ap) start_time = time.time() # loop through the batches losses_=[] for b in xrange(0, num_data_train_ap, batchsize): # determine current batch size bsize = min(batchsize, num_data_train_ap - b) # grab a batch of indices indx_batch = perm_ap[b:b+bsize] # load a batch of data x_batch, x_err_batch= load_train_data_weighted(data_file,indices=indx_batch) # train on batch losses = train_on_batch(x_batch,x_err_batch) losses_.append(losses) # Print current status ratio = 100.0 * (b + bsize) / num_data_train_ap print unichr(27) + u"[2K",; sys.stdout.write(u'') print u'\rEpoch #%d | %d / %d (%6.2f %%) ' % \ (e + 1, b + bsize, num_data_train_ap, ratio),; sys.stdout.write(u'') for k in reporter: if k in losses: print u'| %s = %5.3f ' % (k, losses[k]),; sys.stdout.write(u'') # Compute ETA elapsed_time = time.time() - start_time eta = elapsed_time / (b + bsize) * (num_data_train_ap - (b + bsize)) print u'| ETA: %s ' % time_format(eta),; sys.stdout.write(u'') sys.stdout.flush() print u'' # Print epoch status ratio = 100.0 print unichr(27) + u"[2K",; sys.stdout.write(u'') print u'\rEpoch #%d | %d / %d (%6.2f %%) ' % \ (e + 1, num_data_train_ap, num_data_train_ap, ratio),; sys.stdout.write(u'') losses_all = {} for k in losses_[0].iterkeys(): losses_all[k] = tuple(d[k] for d in losses_) for k in reporter: if k in losses_all: losses_all[k]=np.sum(losses_all[k])/len(losses_) for k in reporter: if k in losses_all: print u'| %s = %5.3f ' % (k, losses_all[k]),; sys.stdout.write(u'') # save loss to evaluate progress myfile = open(model_name+'.txt', 'a') for k in reporter: if k in losses: myfile.write("%s," % losses[k]) myfile.write("\n") myfile.close() # Compute Time Elapsed elapsed_time = time.time() - start_time eta = elapsed_time print u'| TE: %s ' % time_format(eta),; sys.stdout.write(u'') #sys.stdout.flush() print('\n') # save models encoder.save('models/encoder_'+model_name+'_epoch_'+str(e)+'.h5') 
decoder.save('models/decoder_'+model_name+'_epoch_'+str(e)+'.h5') # plot results for a test set to evaluate how the vae is able to reproduce a spectrum test_sample_indices = np.random.choice(range(test_indices_range_ap[0],test_indices_range_ap[1]), 5, replace=False) sample_orig,_, = load_train_data_weighted(data_file,indices=test_sample_indices) zero_mask_test = create_zero_mask(sample_orig,0,3,1030) test_x = gen_x_to_x.predict([cnn_reshape(sample_orig),cnn_reshape(zero_mask_test)]) sample_orig_aug = sample_orig*zero_mask_test sample_diff = sample_orig-test_x.reshape(test_x.shape[0],test_x.shape[1]) # save test results fig, axes = plt.subplots(20,1,figsize=(70, 20)) for i in range(len(test_sample_indices)): # original spectrum axes[i*4].plot(sample_orig[i],c='r') axes[i*4].set_ylim((0.4,1.2)) # input zero-augmented spectrum axes[1+4*i].plot(sample_orig_aug[i],c='g') axes[1+4*i].set_ylim((0.4,1.2)) # regenerated spectrum axes[2+4*i].plot(test_x[i],c='b') axes[2+4*i].set_ylim((0.4,1.2)) # residual between original and regenerated spectra axes[3+4*i].plot(sample_diff[i],c='m') axes[3+4*i].set_ylim((-0.3,0.3)) # save results plt.savefig('results/test_sample_ap_'+model_name+'_epoch_'+str(e)+'.jpg') plt.close('all') Explanation: build and compile model trainer End of explanation reporter=['vae_loss'] epochs=30 batchsize=64 if start_e>0: start_e=start_e+1 data_file = '/data/stars/aspcapStar_combined_main_dr14.h5' fit_model(model_name,data_file, epochs,reporter) Explanation: train model you can experiment with the number of epochs. I suggest starting with fewer and seeing if the results are adequate. if not, continue training. The models and results are saved in models/ and results/ after each epoch, so you can run analyses throughout training. End of explanation import numpy as np import h5py import keras import matplotlib.pyplot as plt import sys from keras.layers import (Input, Lambda) from keras.engine.topology import Layer from keras import backend as K %matplotlib inline # Define edges of detectors (for APOGEE) blue_chip_begin = 322 blue_chip_end = 3242 green_chip_begin = 3648 green_chip_end = 6048 red_chip_begin = 6412 red_chip_end = 8306 # function for loading data def load_train_data_weighted(data_file,indices=None): # grab all if indices is None: with h5py.File(data_file,"r") as F: ap_spectra = F['spectrum'][:] ap_err_spectra = F['error_spectrum'][:] # grab a batch else: with h5py.File(data_file, "r") as F: indices_bool = np.ones((len(F['spectrum']),),dtype=bool) indices_bool[:] = False indices_bool[indices] = True ap_spectra = F['spectrum'][indices_bool,:] ap_err_spectra = F['error_spectrum'][indices_bool,:] # combine chips ap_spectra = np.hstack((ap_spectra[:,blue_chip_begin:blue_chip_end], ap_spectra[:,green_chip_begin:green_chip_end], ap_spectra[:,red_chip_begin:red_chip_end])) # set nan values to zero ap_spectra[np.isnan(ap_spectra)]=0. 
ap_err_spectra = np.hstack((ap_err_spectra[:,blue_chip_begin:blue_chip_end], ap_err_spectra[:,green_chip_begin:green_chip_end], ap_err_spectra[:,red_chip_begin:red_chip_end])) return ap_spectra,ap_err_spectra # zero-augmentation layer (a trick I use to input chunks of zeros into the input spectra) class ZeroAugmentLayer(Layer): def __init__(self, **kwargs): self.is_placeholder = True super(ZeroAugmentLayer, self).__init__(**kwargs) def zero_agument(self, x_real, zero_mask): return x_real*zero_mask def call(self, inputs): x_real = inputs[0] zero_mask = inputs[1] x_augmented = self.zero_agument(x_real, zero_mask) return x_augmented # a function for creating the zero-masks used during training def create_zero_mask(spectra,min_chunks,max_chunks,chunk_size,dataset=None,ones_padded=False): if dataset is None: zero_mask = np.ones_like(spectra) elif dataset=='apogee': zero_mask = np.ones((spectra.shape[0],7214)) elif dataset=='segue': zero_mask = np.ones((spectra.shape[0],3688)) num_spec = zero_mask.shape[0] len_spec = zero_mask.shape[1] num_bins = len_spec/chunk_size remainder = len_spec%chunk_size spec_sizes = np.array([chunk_size for i in range(num_bins)]) spec_sizes[-1]=spec_sizes[-1]+remainder num_bins_removed = np.random.randint(min_chunks,max_chunks+1,size=(num_spec,)) for i, mask in enumerate(zero_mask): bin_indx_removed = np.random.choice(num_bins, num_bins_removed[i], replace=False) for indx in bin_indx_removed: if indx==0: mask[indx*spec_sizes[indx]:(indx+1)*spec_sizes[indx]]=0. else: mask[indx*spec_sizes[indx-1]:indx*spec_sizes[indx-1]+spec_sizes[indx]]=0. return zero_mask # function for reshaping spectra into appropriate format for CNN def cnn_reshape(spectra): return spectra.reshape(spectra.shape[0],spectra.shape[1],1) losses = np.zeros((1,)) with open("vae_test.txt", "r") as f: for i,line in enumerate(f): currentline = np.array(line.split(",")[0],dtype=float) if i ==0: losses[0]=currentline.reshape((1,)) else: losses = np.hstack((losses,currentline.reshape((1,)))) plt.plot(losses[0:16],label='vae_loss') plt.legend() plt.show() # function for encoding a spectrum into the latent space def encode_spectrum(model_name,epoch,spectra): encoder = keras.models.load_model('models/encoder_'+model_name+'_epoch_'+str(epoch)+'.h5', custom_objects={'ZeroAugmentLayer':ZeroAugmentLayer}) z_avg,z_log_var = encoder.predict([cnn_reshape(spectra),cnn_reshape(np.ones_like(spectra))]) return z_avg, z_log_var data_file = '/data/stars/aspcapStar_combined_main_dr14.h5' test_range = [0,30000] test_sample_indices = np.random.choice(range(0,30000), 5000, replace=False) sample_x,_, = load_train_data_weighted(data_file,indices=test_sample_indices) model_name = 'vae_test' epoch=16 z_avg, z_log_var = encode_spectrum(model_name,epoch,sample_x) Explanation: analyze results Note, this is a dummy result. I haven't trained the models for a proper epoch yet End of explanation from tsne import bh_sne perplex=80 t_data = z_avg # convert data to float64 matrix. 
float64 is needed for bh_sne t_data = np.asarray(t_data).astype('float64') t_data = t_data.reshape((t_data.shape[0], -1)) # perform t-SNE embedding vis_data = bh_sne(t_data, perplexity=perplex) # separate 2D into x and y axes information vis_x = vis_data[:, 0] vis_y = vis_data[:, 1] fig = plt.figure(figsize=(10, 10)) synth_ap = plt.scatter(vis_x, vis_y, marker='o', c='r',label='APOGEE', alpha=0.4) plt.tick_params( axis='x', which='both', bottom='off', top='off', labelbottom='off') plt.tick_params( axis='y', which='both', right='off', left='off', labelleft='off') plt.legend(fontsize=30) plt.tight_layout() plt.show() Explanation: t-SNE: an example of an unsupervised method for visualizing clustering structure in the latent space. End of explanation
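One optional follow-up (a sketch, not part of the original analysis): t-SNE produces a 2-D embedding for visualization rather than cluster labels, so a quick way to look for structure is to colour the same embedding by a quantity of interest, for example one of the latent coordinates returned by the encoder above.

# colour the existing t-SNE embedding by a single latent coordinate
# (reuses vis_x, vis_y and z_avg computed in the cells above)
import numpy as np
import matplotlib.pyplot as plt

latent_dim = 0                       # which latent coordinate to use for colouring
colours = z_avg[:, latent_dim]

fig, ax = plt.subplots(figsize=(10, 10))
sc = ax.scatter(vis_x, vis_y, c=colours, s=8, cmap='viridis', alpha=0.6)
fig.colorbar(sc, ax=ax, label='latent dimension {}'.format(latent_dim))
ax.set_xticks([])
ax.set_yticks([])
plt.tight_layout()
plt.show()

Any per-star label stored alongside the spectra could be substituted for the colour array in the same way.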
12,036
Given the following text description, write Python code to implement the functionality described below step by step Description: Stacker-Crane Experiments The Euclidean Stacker-Crane problem (ESCP) is a generalization of the Euclidean Travelling Salesman Problem. In the ESCP we are given pickup-delivery pairs and aim to find delivery-pickup pairs to form a minimal tour. The SPLICE algorithm was proven to almost-surely provide an asymptotically optimal solution, where pickups and deliveries are each sampled from respective distributions. The SPLICE algorithm, however, relies on a Euclidean Bipartite matching between all pairs, an O(n^3) operation (though complex approximations exist). Here we test 2 algorithms that we propose, PLG and ASPLICE each of which relies on a probabilistic quad tree, a quad tree that terminates decomposition where (number of points in cell)/(total points) < p_hat for every cell. Intuitively this means that most delivery points are close to pickups. PLG In PLG we simply follow a pickup-delivery pair, if the delivery falls in a cell with an unmatched pickup then link to that, otherwise connect to any unmatched pickup anywhere. This is an almost-surely asymtotically near-optimal algorithm when pickup and delivery distributions are identical, meaning that we can chose p_hat so that as number of points -> infinity, the ratio of PLG cost to optimal cost approaches some (1+eps) where eps may be made as small as desired. This algorithm runs in linear time. For non-idential distributions, this is still a constant factor approximation that depends on how similar the distributions are (in both a Wasserstein and Total Variation distances way.) ASPLICE The ASPLICE algorithm is functionally similar to PLG, except that the connection between cells is not random. This algorithm first connects all delivery-pickups possible within cells and then assigns excess delivery and pickup pairs based on solving the Transportation Problem, and then merging subtours. This algorithm is guaranteed to outperform PLG in probability, and gains all the analysis of SPLICE, as it approximates the algorithm. The primary difference is that solving EBMP on all points is much more expensive than solving the Transportation Problem on excess in cells. The SPLICE algorithm is impractically slow for over 250 pairs (over a minute to calculate), whereas ASPLICE, even using an LP solver (which is far from a fast approach), can handle over 1000 pairs with ease. Step2: Define experiment Here we run PLG, SPLICE and ASPLICE over pairs of points generated function. This creates a DataFrame with entries containing the results. The timestamp corresponds to creation of the pairs, so it may be used to compare results on the same dataset instance. Step3: Functions to generate pairs Step4: Setup and Run Experiment Step5: Show sample scatter plot Step6: Comparing Average Cost vs Number of Pairs Step7: Comparing Average Runtimes We compare the average runtime for different parameters. Step8: Compute ratio of Costs Next we find the ratio of each algorithms cost to splice. We then calculate the average over each number of pairs. In other words we are approximating E[ ALG cost / SPLICE cost ].
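To make the PLG idea above concrete, here is a schematic, self-contained sketch of the linking rule. It is an illustration only: a uniform grid stands in for the leaves of the probabilistic quad tree, subtour merging is omitted, and the pqt/plg modules imported below are the real implementation.

# schematic PLG linking rule: link each delivery to an unmatched pickup in the
# same cell when one exists, otherwise to any unmatched pickup anywhere
import random
from collections import defaultdict

def plg_link_sketch(pd_edges, n_cells=8):
    """pd_edges maps pickup (x, y) -> delivery (x, y), all in the unit square."""
    cell = lambda p: (int(p[0] * n_cells), int(p[1] * n_cells))
    # unmatched pickups, bucketed by the grid cell they fall in
    unmatched = defaultdict(list)
    for pickup in pd_edges:
        unmatched[cell(pickup)].append(pickup)
    dp_links = {}
    for pickup, delivery in pd_edges.items():
        bucket = unmatched[cell(delivery)]
        if bucket:
            nxt = bucket.pop()                    # same-cell link (the cheap case)
        else:
            any_cell = next(c for c, b in unmatched.items() if b)
            nxt = unmatched[any_cell].pop()       # fall back to any unmatched pickup
        dp_links[delivery] = nxt
    return dp_links

pairs = {(random.random(), random.random()): (random.random(), random.random())
         for _ in range(20)}
print(len(plg_link_sketch(pairs)))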
Python Code: # Load modules import sys from __future__ import print_function from collections import defaultdict import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl %matplotlib inline import pandas as pd from pandas import DataFrame import time import random from pqt import PQTDecomposition from splice import splice_alg from asplice import asplice_alg from plg import plg_alg Explanation: Stacker-Crane Experiments The Euclidean Stacker-Crane problem (ESCP) is a generalization of the Euclidean Travelling Salesman Problem. In the ESCP we are given pickup-delivery pairs and aim to find delivery-pickup pairs to form a minimal tour. The SPLICE algorithm was proven to almost-surely provide an asymptotically optimal solution, where pickups and deliveries are each sampled from respective distributions. The SPLICE algorithm, however, relies on a Euclidean Bipartite matching between all pairs, an O(n^3) operation (though complex approximations exist). Here we test 2 algorithms that we propose, PLG and ASPLICE each of which relies on a probabilistic quad tree, a quad tree that terminates decomposition where (number of points in cell)/(total points) < p_hat for every cell. Intuitively this means that most delivery points are close to pickups. PLG In PLG we simply follow a pickup-delivery pair, if the delivery falls in a cell with an unmatched pickup then link to that, otherwise connect to any unmatched pickup anywhere. This is an almost-surely asymtotically near-optimal algorithm when pickup and delivery distributions are identical, meaning that we can chose p_hat so that as number of points -> infinity, the ratio of PLG cost to optimal cost approaches some (1+eps) where eps may be made as small as desired. This algorithm runs in linear time. For non-idential distributions, this is still a constant factor approximation that depends on how similar the distributions are (in both a Wasserstein and Total Variation distances way.) ASPLICE The ASPLICE algorithm is functionally similar to PLG, except that the connection between cells is not random. This algorithm first connects all delivery-pickups possible within cells and then assigns excess delivery and pickup pairs based on solving the Transportation Problem, and then merging subtours. This algorithm is guaranteed to outperform PLG in probability, and gains all the analysis of SPLICE, as it approximates the algorithm. The primary difference is that solving EBMP on all points is much more expensive than solving the Transportation Problem on excess in cells. The SPLICE algorithm is impractically slow for over 250 pairs (over a minute to calculate), whereas ASPLICE, even using an LP solver (which is far from a fast approach), can handle over 1000 pairs with ease. 
End of explanation def run_experiments(gen_pd_edges, n_pairs_li, n_reps, p_hat_li, verbose=False, **kwargs): Parameters: n_pairs_li - a list of the number of pairs to generate n_reps - the number of repetitions of experiment with that number of pairs gen_pd_edges - function the generates n pairs include_pqt_time - whether or not the time to compute the pqt should be included verbose - whether or not to print the repetitions to stdout data = defaultdict(list) def add_datum_kv((k,v)): data[k].append(v) return (k,v) # Run experiment for n_pairs in n_pairs_li: for rep in xrange(n_reps): if verbose: print("Number of pairs: {} Rep: {} at ({})"\ .format(n_pairs,rep, time.strftime( "%H:%M %S", time.gmtime()) ) ) sys.stdout.flush() # Generate pairs pd_edges = gen_pd_edges(n_pairs) time_stamp = int(time.time()*1000) # Run SPLICE start_time = time.clock() _, splice_cost = splice_alg(pd_edges) splice_runtime = time.clock() - start_time splice_datum = {'alg': splice_alg.__name__, 'timestamp': time_stamp, 'n_pairs': n_pairs, 'cost': splice_cost, 'p_hat': float('inf'), 'alg_runtime': splice_runtime, 'pqt_runtime': None, 'rep': rep} # Add datum map(add_datum_kv, splice_datum.iteritems()) for p_hat in p_hat_li: # Generate PQT start_time = time.clock() pqt = PQTDecomposition().from_points(pd_edges.keys(), p_hat=p_hat) pqt_runtime = time.clock() - start_time # Run ASPLICE start_time = time.clock() _, asplice_cost = asplice_alg(pd_edges, pqt=pqt) asplice_runtime = time.clock() - start_time # Add datum asplice_datum = {'alg': asplice_alg.__name__, 'timestamp': time_stamp, 'n_pairs': n_pairs, 'cost': asplice_cost, 'p_hat': p_hat, 'alg_runtime': asplice_runtime, 'pqt_runtime': pqt_runtime, 'rep': rep} map(add_datum_kv, asplice_datum.iteritems()) # Run PLG start_time = time.clock() _, plg_cost = plg_alg(pd_edges, pqt=pqt) plg_runtime = time.clock() - start_time # Add datum plg_datum = {'alg': plg_alg.__name__, 'timestamp': time_stamp, 'n_pairs': n_pairs, 'cost': plg_cost, 'p_hat': p_hat, 'alg_runtime': plg_runtime, 'pqt_runtime': pqt_runtime, 'rep': rep} map(add_datum_kv, plg_datum.iteritems()) return DataFrame(data) Explanation: Define experiment Here we run PLG, SPLICE and ASPLICE over pairs of points generated function. This creates a DataFrame with entries containing the results. The timestamp corresponds to creation of the pairs, so it may be used to compare results on the same dataset instance. End of explanation def gen_pd_edges(gen_p_fn, gen_d_fn, n_pairs=50): # Generates random pd pair from distributions # Must be hashable, so gen_p_fn cannot returns a list or np.array return {gen_p_fn(): gen_d_fn() \ for i in xrange(n_pairs)} def gen_pt_gmm(means, covs, mix_weights): while True: # Which Gaussian to sample from ridx = np.random.choice(len(mix_weights), p=mix_weights) # Sample point pt = np.random.multivariate_normal(means[ridx], covs[ridx]) # Only accept point if it lies in the unit square if 0.<=pt[0]<=1. 
and 0.<=pt[1]<=1.: return tuple(pt) else: continue # Define GMMs gmm1_params = { 'means': [[0.5, 0.5], [0.8,0.1]], 'covs': [[[0.01, 0.], [0., 0.01]], [[0.015, 0.], [0., 0.02]]], 'mix_weights': [0.6, 0.4] } gmm2_params = { 'means': [[0.51, 0.55], [0.78, 0.10]], 'covs': [[[0.01, 0.00], [0.00, 0.01]], [[0.015, 0.00], [0.00, 0.02]]], 'mix_weights': [0.6, 0.4] } gmm3_params = { 'means': [[0.21, 0.30], [0.78, 0.84], [0.78, 0.10]], 'covs': [[[0.01, 0.00], [0.00, 0.01]], [[0.015, 0.00], [0.00, 0.02]], [[0.015, 0.00], [0.00, 0.02]]], 'mix_weights': [0.5, 0.3, 0.2] } # Setup gen_pd functions def gen_pt_unif(): return (random.random(), random.random()) def gen_pd_unif(n_pairs): return gen_pd_edges(gen_pt_unif, gen_pt_unif, n_pairs) def gen_pt_gmm1(): return gen_pt_gmm(**gmm1_params) def gen_pt_gmm2(): return gen_pt_gmm(**gmm2_params) def gen_pt_gmm3(): return gen_pt_gmm(**gmm3_params) def gen_pd_gmm_close(n_pairs): return gen_pd_edges(gen_pt_gmm1, gen_pt_gmm2, n_pairs) def gen_pd_gmm_far(n_pairs): return gen_pd_edges(gen_pt_gmm1, gen_pt_gmm3, n_pairs) Explanation: Functions to generate pairs End of explanation # Setup parameters experiment1 = { 'n_pairs_li': list(xrange(10, 100, 3)) \ + list(xrange(100, 150, 5)) \ + list(xrange(150, 200, 10)), 'p_hat_li': [0.01, 0.1], 'n_reps': 3, 'gen_pd_edges': gen_pd_unif, 'save_path': "results/comparison/uniform/", 'name': "uni", 'verbose': True } experiment2 = { 'n_pairs_li': list(xrange(10, 100, 3)) \ + list(xrange(100, 150, 5)) \ + list(xrange(150, 200, 10)), 'p_hat_li': [0.01, 0.1], 'n_reps': 3, 'gen_pd_edges': gen_pd_gmm_close, 'save_path': "results/comparison/gmm_close/", 'name': "close", 'verbose': True } experiment3 = { 'n_pairs_li': list(xrange(10, 100, 3)) \ + list(xrange(100, 150, 5)) \ + list(xrange(150, 200, 10)), 'p_hat_li': [0.01, 0.1], 'n_reps': 3, 'gen_pd_edges': gen_pd_gmm_far, 'save_path': "results/comparison/gmm_far/", 'name': "far", 'verbose': True } # Choose the experiment experiment = experiment1 Explanation: Setup and Run Experiment End of explanation save_path = "{}scatter_{}.pdf" \ .format(experiment['save_path'], experiment['name']) fig, ax = plt.subplots() pds = experiment['gen_pd_edges'](1000) ax.scatter(*zip(*pds.keys()), color='r', edgecolors='none', s=2) ax.scatter(*zip(*pds.values()), color='b', edgecolors='none', s=2) ax.set_xlim([0.,1.]) ax.set_ylim([0.,1.]) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.savefig(save_path, bbox_inches='tight') # Run the experiment df = run_experiments(**experiment) # Show the last 6 rows of results df.tail(6) Explanation: Show sample scatter plot End of explanation save_path = "{}avg_cost_{}.pdf" \ .format(experiment['save_path'], experiment['name']) with plt.style.context('seaborn-white'): mean_df = df.groupby(['alg', 'p_hat', 'n_pairs']) \ [['cost','alg_runtime','pqt_runtime']] \ .mean() \ .add_prefix('mean_') \ .reset_index() fig, ax = plt.subplots() #ax.set_yscale('log') group_plts = mean_df.groupby(['alg', 'p_hat']) for i,(name, group) in enumerate(group_plts): alg,p_hat = name if alg == splice_alg.__name__: plt.plot(group['n_pairs'], group['mean_cost'], label=r"SPLICE")#, #color=cmap(i / float(len(group_plts)))) elif alg == asplice_alg.__name__: plt.plot(group['n_pairs'], group['mean_cost'], label=r"ASPLICE $\hat{{p}}={:.3}$".format(p_hat))#, #color=cmap(i / float(len(group_plts)))) elif alg == plg_alg.__name__: plt.plot(group['n_pairs'], group['mean_cost'], label=r"PLG $\hat{{p}}={:.3}$".format(p_hat))#, #color=cmap(i / float(len(group_plts)))) 
ax.set_xlabel("Number of pd Pairs", fontsize=15) ax.set_ylabel("Cost", fontsize=15) ax.set_title('Algorithm Cost vs Number of Pairs') ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), fancybox=True, shadow=True) plt.savefig(save_path, bbox_inches='tight') Explanation: Comparing Average Cost vs Number of Pairs End of explanation save_path = "{}avg_runtime_{}.pdf" \ .format(experiment['save_path'], experiment['name']) with plt.style.context('seaborn-white'): fig, ax = plt.subplots() ax.set_yscale('log') group_plts = mean_df.groupby(['alg', 'p_hat']) cmap = mpl.cm.autumn for i,(name, group) in enumerate(group_plts): alg,p_hat = name if alg == splice_alg.__name__: plt.plot(group['n_pairs'], group['mean_alg_runtime'], label=r"SPLICE") #color=cmap(i / float(len(group_plts)))) elif alg == asplice_alg.__name__: plt.plot(group['n_pairs'], group['mean_alg_runtime'], label=r"ASPLICE $\hat{{p}}={:.3}$".format(p_hat), linestyle="-") #color=cmap(i / float(len(group_plts)))) elif alg == plg_alg.__name__: plt.plot(group['n_pairs'], group['mean_alg_runtime'], label=r"PLG $\hat{{p}}={:.3}$".format(p_hat), linestyle="-") #color=cmap(i / float(len(group_plts)))) ax.set_xlabel("Number of pd Pairs", fontsize=15) ax.set_ylabel("Time (s)", fontsize=15) ax.set_title('Algorithm Runtime vs Number of Pairs') ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), fancybox=True, shadow=True) plt.savefig(save_path, bbox_inches='tight') Explanation: Comparing Average Runtimes We compare the average runtime for different parameters. End of explanation grouped = df.groupby(['alg','p_hat']) # Extract splice costs splice_costs = df[df['alg'] == splice_alg.__name__].cost splice_costs[:5] save_path = "{}avg_ratios_{}.pdf" \ .format(experiment['save_path'], experiment['name']) with plt.style.context('seaborn-white'): fig, ax = plt.subplots() for i,(key, group) in enumerate(grouped): alg, p_hat = key # Compute ratio of alg cost to splice_cost cost_ratios = group.cost.values / splice_costs.values # Compute the mean of ratios for each rep mean_cost_ratios = np.mean(cost_ratios.reshape(-1, experiment['n_reps']), axis=1) if alg == splice_alg.__name__: #Skip plotting SPLICE continue plt.plot(experiment['n_pairs_li'], mean_cost_ratios, label=r"SPLICE", color=cmap(i / float(len(grouped)))) elif alg == asplice_alg.__name__: plt.plot(experiment['n_pairs_li'], mean_cost_ratios, label=r"ASPLICE $\hat{{p}}={:.3}$".format(p_hat), linestyle="-") #,color=cmap(i / float(len(grouped)))) elif alg == plg_alg.__name__: plt.plot(experiment['n_pairs_li'], mean_cost_ratios, label=r"PLG $\hat{{p}}={:.3}$".format(p_hat), linestyle="-") #,color=cmap(i / float(len(grouped)))) ax.set_xlabel("Number of pd Pairs", fontsize=15) ax.set_ylabel("Mean Ratio to SPLICE", fontsize=15) ax.set_title('Mean Cost Ratio vs Number of Pairs') ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), fancybox=True, shadow=True) ax.grid(True) plt.savefig(save_path, bbox_inches='tight') Explanation: Compute ratio of Costs Next we find the ratio of each algorithms cost to splice. We then calculate the average over each number of pairs. In other words we are approximating E[ ALG cost / SPLICE cost ]. End of explanation
12,037
Given the following text description, write Python code to implement the functionality described below step by step Description: Ubiquitous NumPy I called this notebook ubiquitous numpy as the main goal of this section is to show examples of how much is the impact of NumPy over the Scientific Python Ecosystem. Later on, see also this extra notebook Step1: Features in the Iris dataset Step2: Try by yourself one of the following commands where 'd' is the variable containing the dataset Step3: Clustering Clustering example on iris dataset data using sklearn.cluster.KMeans Step4: Plotting using matplotlib Matplotlib is one of the most popular and widely used plotting library in Python. Matplotlib is tightly integrated with NumPy as all the functions expect ndarray in input.
Python Code: from IPython.core.display import Image, display display(Image(filename='images/iris_setosa.jpg')) print("Iris Setosa\n") display(Image(filename='images/iris_versicolor.jpg')) print("Iris Versicolor\n") display(Image(filename='images/iris_virginica.jpg')) print("Iris Virginica") Explanation: Ubiquitous NumPy I called this notebook ubiquitous numpy as the main goal of this section is to show examples of how much is the impact of NumPy over the Scientific Python Ecosystem. Later on, see also this extra notebook: Extra Torch Tensor - Requires PyTorch 1. pandas and pandas.DataFrame Machine Learning (and Numpy Arrays) Machine Learning is about building programs with tunable parameters (typically an array of floating point values) that are adjusted automatically so as to improve their behavior by adapting to previously seen data. Machine Learning can be considered a subfield of Artificial Intelligence since those algorithms can be seen as building blocks to make computers learn to behave more intelligently by somehow generalizing rather that just storing and retrieving data items like a database system would do. We'll take a look at a very simple machine learning tasks here: the clustering task Data for Machine Learning Algorithms Data in machine learning algorithms, with very few exceptions, is assumed to be stored as a two-dimensional array, of size [n_samples, n_features]. The arrays can be either numpy arrays, or in some cases scipy.sparse matrices. The size of the array is expected to be [n_samples, n_features] n_samples: The number of samples: each sample is an item to process (e.g. classify). A sample can be a document, a picture, a sound, a video, an astronomical object, a row in database or CSV file, or whatever you can describe with a fixed set of quantitative traits. n_features: The number of features or distinct traits that can be used to describe each item in a quantitative manner. Features are generally real-valued, but may be boolean or discrete-valued in some cases. The number of features must be fixed in advance. However it can be very high dimensional (e.g. millions of features) with most of them being zeros for a given sample. This is a case where scipy.sparse matrices can be useful, in that they are much more memory-efficient than numpy arrays. 
Addendum There is a dedicated notebook in the training material, explicitly dedicated to scipy.sparse: 07_1_Sparse_Matrices A Simple Example: the Iris Dataset End of explanation from sklearn.datasets import load_iris iris = load_iris() Explanation: Features in the Iris dataset: sepal length in cm sepal width in cm petal length in cm petal width in cm Target classes to predict: Iris Setosa Iris Versicolour Iris Virginica End of explanation print(iris.keys()) print(iris.DESCR) print(type(iris.data)) X = iris.data print(X.size, X.shape) y = iris.target type(y) Explanation: Try by yourself one of the following commands where 'd' is the variable containing the dataset: print(iris.keys()) # Structure of the contained data print(iris.DESCR) # A complete description of the dataset print(iris.data.shape) # [n_samples, n_features] print(iris.target.shape) # [n_samples,] print(iris.feature_names) datasets.get_data_home() # This is where the datasets are stored End of explanation from sklearn.cluster import KMeans kmean = KMeans(n_clusters=3) kmean.fit(iris.data) kmean.cluster_centers_ kmean.cluster_centers_.shape Explanation: Clustering Clustering example on iris dataset data using sklearn.cluster.KMeans End of explanation from itertools import combinations import numpy as np from matplotlib import pyplot as plt %matplotlib inline rgb = np.empty(shape=y.shape, dtype='<U1') rgb[y==0] = 'r' rgb[y==1] = 'g' rgb[y==2] = 'b' for cols in combinations(range(4), 2): f, ax = plt.subplots(figsize=(7.5, 7.5)) ax.scatter(X[:, cols[0]], X[:, cols[1]], c=rgb) ax.scatter(kmean.cluster_centers_[:, cols[0]], kmean.cluster_centers_[:, cols[1]], marker='*', s=250, color='black', label='Centers') feature_x = iris.feature_names[cols[0]] feature_y = iris.feature_names[cols[1]] ax.set_title("Features: {} vs {}".format(feature_x.title(), feature_y.title())) ax.set_xlabel(feature_x) ax.set_ylabel(feature_y) ax.legend(loc='best') plt.show() Explanation: Plotting using matplotlib Matplotlib is one of the most popular and widely used plotting library in Python. Matplotlib is tightly integrated with NumPy as all the functions expect ndarray in input. End of explanation
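As an optional sanity check (a sketch, not part of the original notebook), the KMeans cluster assignments can be compared against the true species labels, which are used here only for evaluation:

# compare unsupervised clusters with the known iris species
from sklearn.metrics import adjusted_rand_score, confusion_matrix

# cluster ids are arbitrary, so the confusion matrix columns are unordered;
# the adjusted Rand index is invariant to that relabelling
print("Adjusted Rand index:", adjusted_rand_score(y, kmean.labels_))
print(confusion_matrix(y, kmean.labels_))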
12,038
Given the following text description, write Python code to implement the functionality described below step by step Description: Weighting functions in the $CO_2$ 15 $\mu m$ absorption band Below is a plot of radiance (or intensity) (left axis) and brightness temperature (right axis) vs. wavenumber near the main $CO_2$ absorption band. Wavenumber is defined as $1/\lambda$; the center of the absorption band is at $\lambda = 15\ \mu m$ which is a wavenumber of 1/0.0015 = 666 $cm^{-1}$. The VTPR (vertical temperature profiling radiometer) has six channels Step1: Assignment 9 In this assignmet we'll work with the five standard atmospheres from hydrostatic.ipynb to show how to use Stull eq. 8.4 to calculate radiance at the top of the atmosphere for wavelengths similar to the 6 sounder channels shown in the above figure. I define a new function find_tau below to calculate the optical thickness for a $CO_2$-like absorbing gas. I ask your to find the transmissivities $t=\exp(-\tau_{tot} - \tau)$ and the weighting functions $\Delta t$, at 7 wavelengths, and use those plus $B_\lambda$ to calculate the radiance for the 7 channels for each of the 5 atmospheres. Step3: mass absorption coefficient for fake $CO_2$ To keep things simple I'm going to make up a set of 7 absorption coefficients that will give weighting functions that look something like the VPTR. We have been working with the volume absorption coefficient Step4: Example $\tau$ calculation for r_gas=0.01 $kg/kg$ and k_lambda = 0.01 $m^2/kg$ Step5: Extending this to 7 wavelengths In the next cell I make up 7 k_lambda values to go with 7 wavelengths from 13 to 15 microns (766 to 666 $cm^{-1}$)
Python Code: Image('figures/wallace4_33.png',width=500) Explanation: Weighting functions in the $CO_2$ 15 $\mu m$ absorption band Below is a plot of radiance (or intensity) (left axis) and brightness temperature (right axis) vs. wavenumber near the main $CO_2$ absorption band. Wavenumber is defined as $1/\lambda$; the center of the absorption band is at $\lambda = 15\ \mu m$ which is a wavenumber of 1/0.0015 = 666 $cm^{-1}$. The VTPR (vertical temperature profiling radiometer) has six channels: Channel 1 is at the center of the band -- it has the lowest transmissivity and is measuring photons coming from 45 km at the top of the stratosphere (See Stull Chapter 1, Figure 1.10). As the channel number increases from 2-6 the transmissivity also increases, and the photons originate from increasing lower levels of the atmosphere with increasing kinetic temperatures. Note that the different heights for the peaks in each of the weighting functions. End of explanation import numpy as np import matplotlib.pyplot as plt import matplotlib.ticker as ticks from a301utils.a301_readfile import download import h5py from a301lib.radiation import Blambda,planckInvert # # from the hydrostatic noteboo # from a301lib.thermo import calcDensHeight filename='std_soundings.h5' download(filename) from pandas import DataFrame with h5py.File(filename) as infile: sound_dict={} print('soundings: ',list(infile.keys())) # # names are separated by commas, so split them up # and strip leading blanks # column_names=infile.attrs['variable_names'].split(',') column_names = [item.strip() for item in column_names] column_units = infile.attrs['units'].split(',') column_units = [item.strip() for item in column_units] for name in infile.keys(): data = infile[name][...] sound_dict[name]=DataFrame(data,columns=column_names) Explanation: Assignment 9 In this assignmet we'll work with the five standard atmospheres from hydrostatic.ipynb to show how to use Stull eq. 8.4 to calculate radiance at the top of the atmosphere for wavelengths similar to the 6 sounder channels shown in the above figure. I define a new function find_tau below to calculate the optical thickness for a $CO_2$-like absorbing gas. I ask your to find the transmissivities $t=\exp(-\tau_{tot} - \tau)$ and the weighting functions $\Delta t$, at 7 wavelengths, and use those plus $B_\lambda$ to calculate the radiance for the 7 channels for each of the 5 atmospheres. End of explanation Rd = 287. 
def find_tau(r_gas,k_lambda,df): given a data frame df with a standard sounding, return the optical depth assuming an absorbing gas with a constant mixing ration r_gas and absorption coefficient k_lambda Parameters ---------- r_gas: float absorber mixing ratio, kg/kg k_lambda: float mass absorption coefficient, m^2/kg df: dataframe sounding with height levels as rows, columns 'temp': temperature in K 'press': pressure in Pa Returns ------- tau: vector (float) optical depth measured from the surface at each height in df # # density scale height # Hdens = calcDensHeight(df) # # surface density # rho0 = df['press']/(Rd*df['temp']) rho = rho0*np.exp(-df['z']/Hdens) height = df['z'].values tau=np.empty_like(rho) # # start from the surface # tau[0]=0 num_levels=len(rho) num_layers=num_levels-1 for index in range(num_layers): delta_z=height[index+1] - height[index] delta_tau=r_gas*rho[index]*k_lambda*delta_z tau[index+1]=tau[index] + delta_tau return tau Explanation: mass absorption coefficient for fake $CO_2$ To keep things simple I'm going to make up a set of 7 absorption coefficients that will give weighting functions that look something like the VPTR. We have been working with the volume absorption coefficient: $$\beta_\lambda = n b\ (m^{-1})$$ where $n$ is the absorber number concentration in $#\,m^{-3}$ and $b$ is the absorption cross section of a molecule in $m^2$. For absorbing gasses like $CO_2$ that obey the ideal gas law $n$ depends inversely on temperature -- which changes rapidly with height. A better route is to use the mass absorption coefficient $k_\lambda$: $$k_\lambda = \frac{n b}{\rho_{air}}$$ units: $m^2/kg$. With this definition the optical depth is: $$\tau_\lambda = \int_0^z \rho_{air} k_\lambda dz$$ Why is this an improvement? We now have an absorption coefficient that can is roughly constant with height and can be taken out of the integral. End of explanation %matplotlib inline r_gas=0.01 #kg/kg k_lambda=0.01 #m^2/kg df=sound_dict['tropics'] # # set the top of the atmosphere at 20 km # top = 20.e3 df = df.loc[df['z']<top] height = df['z'] press = df['press'] tau=find_tau(r_gas,k_lambda,df) fig1,axis1=plt.subplots(1,1) axis1.plot(tau,height*1.e-3) axis1.set_title('vertical optical depth vs. height') axis1.set_ylabel('height (km)') axis1.set_xlabel('optical depth (no units)') fig2,axis2=plt.subplots(1,1) axis2.plot(tau,press*1.e-3) axis2.invert_yaxis() axis2.set_title('vertical optical depth vs. pressure') axis2.set_ylabel('pressure (kPa)') axis2.set_xlabel('optical depth (no units)') Explanation: Example $\tau$ calculation for r_gas=0.01 $kg/kg$ and k_lambda = 0.01 $m^2/kg$ End of explanation # # assign the 7 k_lambdas to 7 CO2 absorption band wavelengths # (see Wallace and Hobbs figure 4.33) # wavenums=np.linspace(666,766,7) #wavenumbers in cm^{-1} wavelengths=1/wavenums*1.e-2 #wavelength in m wavelengths_um = wavelengths*1.e6 # in microns print('channel wavelengths (microns) ',wavelengths_um) #microns df=sound_dict['tropics'] top = 20.e3 #stop at 20 km df = df.loc[df['z']< top] height = df['z'].values mid_height = (height[1:] - height[:-1])/2. # # here are the mass absorption coefficients for each of the 7 wavelengths # in m^2/kg # k_lambda_list=np.array([ 0.175 , 0.15 , 0.125 , 0.1 , 0.075, 0.05 , 0.025]) legend_string=["{:5.3f}".format(item) for item in k_lambda_list] # # find the height at mid-layer # mid_height=(height[1:] + height[:-1])/2. 
# # make a list of tuples of k_lambda and its label # using zip # k_vals=zip(k_lambda_list,legend_string) fig1,ax=plt.subplots(1,1,figsize=(10,10)) heightkm=height*1.e-3 mid_heightkm=mid_height*1.e-3 for k_lambda,k_label in k_vals: tau=find_tau(r_gas,k_lambda,df) ax.plot(tau,heightkm,label=k_label) ax.legend() Explanation: Extending this to 7 wavelengths In the next cell I make up 7 k_lambda values to go with 7 wavelengths from 13 to 15 microns (766 to 666 $cm^{-1}$) End of explanation
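A hedged sketch of the remaining assignment steps for a single channel: the transmissivity from each level to the top of the atmosphere, the weighting function as its layer-to-layer difference, and a Stull eq. 8.4-style radiance sum. I am assuming the intended transmissivity is t = exp(-(tau_top - tau)), and I write the Planck function inline rather than relying on the a301lib Blambda signature.

import numpy as np

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23

def planck_blambda(wavel, Temp):
    # Planck blackbody radiance (W/m^2/m/sr), wavelength in metres
    return (2.*h*c**2./wavel**5.)/(np.exp(h*c/(wavel*k_B*Temp)) - 1.)

k_lambda = k_lambda_list[0]           # strongest of the 7 fake CO2 channels
wavel = wavelengths[0]                # metres
tau = find_tau(r_gas, k_lambda, df)   # surface-referenced optical depth from above
temps = df['temp'].values

trans = np.exp(-(tau[-1] - tau))      # transmissivity from each level to the top
weights = np.diff(trans)              # weighting function for each layer
layer_temps = (temps[1:] + temps[:-1])/2.

# surface emission transmitted through the whole column plus atmospheric emission
radiance = planck_blambda(wavel, temps[0])*trans[0] + \
           np.sum(planck_blambda(wavel, layer_temps)*weights)
print('channel 1 TOA radiance: {:8.3e} W/m^2/m/sr'.format(radiance))

Repeating the sum over all 7 channels and all 5 soundings gives the channel-by-channel radiances the assignment asks for.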
12,039
Given the following text description, write Python code to implement the functionality described below step by step Description: The following are the results we've got from online augmentation so far. Some bugs have been fixed by Scott since then so these might be redundant. If they're not redundant then they are very bad. Loading the pickle Step1: Replicating 8aug The DensePNGDataset run with 8 augmentations got us most of the way to our best score in one go. If we can replicate that results with online augmentation then we can be pretty confident that online augmentation is a good idea. Unfortunately, it looks like we can't Step2: Would actually like to know what kind of score this model gets on the check_test_score script. Step3: So we can guess that the log loss score we're seeing is in fact correct. There are definitely some bugs in the ListDataset code. Many Augmentations We want to be able to use online augmentations to run large combinations of different augmentations on the images. This model had almost everything turned on, a little Step4: Looks like it's completely incapable of learning. These problems suggest that the augmentation might be garbling the images; making them useless for learning from. Or worse, garbling the order so each image doesn't correspond to its label. Transformer Results We also have results from a network trained using a Transformer dataset, which is how online augmentation is supposed to be supported in Pylearn2.
Python Code: import pylearn2.utils import pylearn2.config import theano import neukrill_net.dense_dataset import neukrill_net.utils import numpy as np %matplotlib inline import matplotlib.pyplot as plt import holoviews as hl %load_ext holoviews.ipython import sklearn.metrics cd .. settings = neukrill_net.utils.Settings("settings.json") run_settings = neukrill_net.utils.load_run_settings( "run_settings/replicate_8aug.json", settings, force=True) model = pylearn2.utils.serial.load(run_settings['alt_picklepath']) c = 'train_objective' channel = model.monitor.channels[c] Explanation: The following are the results we've got from online augmentation so far. Some bugs have been fixed by Scott since then so these might be redundant. If they're not redundant then they are very bad. Loading the pickle End of explanation plt.title(c) plt.plot(channel.example_record,channel.val_record) c = 'train_y_nll' channel = model.monitor.channels[c] plt.title(c) plt.plot(channel.example_record,channel.val_record) def plot_monitor(c = 'valid_y_nll'): channel = model.monitor.channels[c] plt.title(c) plt.plot(channel.example_record,channel.val_record) return None plot_monitor() plot_monitor(c="valid_objective") Explanation: Replicating 8aug The DensePNGDataset run with 8 augmentations got us most of the way to our best score in one go. If we can replicate that results with online augmentation then we can be pretty confident that online augmentation is a good idea. Unfortunately, it looks like we can't: End of explanation %run check_test_score.py run_settings/replicate_8aug.json Explanation: Would actually like to know what kind of score this model gets on the check_test_score script. End of explanation run_settings = neukrill_net.utils.load_run_settings( "run_settings/online_manyaug.json", settings, force=True) model = pylearn2.utils.serial.load(run_settings['alt_picklepath']) plot_monitor(c="valid_objective") Explanation: So we can guess that the log loss score we're seeing is in fact correct. There are definitely some bugs in the ListDataset code. Many Augmentations We want to be able to use online augmentations to run large combinations of different augmentations on the images. This model had almost everything turned on, a little: End of explanation settings = neukrill_net.utils.Settings("settings.json") run_settings = neukrill_net.utils.load_run_settings( "run_settings/alexnet_based_onlineaug.json", settings, force=True) model = pylearn2.utils.serial.load(run_settings['pickle abspath']) plot_monitor(c="train_y_nll") plot_monitor(c="valid_y_nll") plot_monitor(c="train_objective") plot_monitor(c="valid_objective") Explanation: Looks like it's completely incapable of learning. These problems suggest that the augmentation might be garbling the images; making them useless for learning from. Or worse, garbling the order so each image doesn't correspond to its label. Transformer Results We also have results from a network trained using a Transformer dataset, which is how online augmentation is supposed to be supported in Pylearn2. End of explanation
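A small convenience sketch (not from the original runs): overlaying the train and valid objectives for whichever model is currently loaded makes "never learns" easier to tell apart from "overfits" than separate plot_monitor calls.

# overlay two monitor channels on one axis for the currently loaded model
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for name in ('train_objective', 'valid_objective'):
    ch = model.monitor.channels[name]
    ax.plot(ch.example_record, ch.val_record, label=name)
ax.set_xlabel('examples seen')
ax.set_ylabel('objective')
ax.legend()
plt.show()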
12,040
Given the following text description, write Python code to implement the functionality described below step by step Description: Scoring a test from its results Overview Suppose we have a test with known correct answers and known answer ranges for each question. For definiteness, take the possible answers to be 0 or 1. The difficulty of each question is not known in advance. We need to determine the relative difficulty of each question from how many participants answered it correctly, and to compute each participant's score and place in the ranking using the difficulty estimates obtained above. Step1: Data Step2: Calculations Step3: Results
Python Code: %matplotlib inline import math import matplotlib.pyplot as plt import numpy as np Explanation: Scoring a test from its results Overview Suppose we have a test with known correct answers and known answer ranges for each question. For definiteness, take the possible answers to be 0 or 1. The difficulty of each question is not known in advance. We need to determine the relative difficulty of each question from how many participants answered it correctly, and to compute each participant's score and place in the ranking using the difficulty estimates obtained above. End of explanation # correct answers for the questions good = 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1 good # answers received from the participants, one row per participant gots = ((0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1), (0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1), (0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0), (1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0), (0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0)) gots Explanation: Data End of explanation # mark which answers are correct corr = [[1 if rep[i] == good[i] else 0 for i in xrange(len(good))] for rep in gots] corr # number of correct answers for each question corrbyn = [0 for i in xrange(len(good))] for v in corr: for a in xrange(len(good)): corrbyn[a] += v[a] corrbyn # assume all answers are equally weighted, and a question's value is inversely related to the number of participants who answered it correctly # compute the value of each question ngots = len(gots) qual = [ngots - x for x in corrbyn] qual # answers weighted by question value corq = [[v[i] * qual[i] for i in xrange(len(v))] for v in corr] corq # total score of each participant balls = [sum(v) for v in corq] balls # winners, sorted wins = sorted(list(enumerate(balls)), key=lambda x: x[1], reverse=True) wins Explanation: Calculations End of explanation print "\nSo the winner is participant #", wins[0][0]+1, "with", wins[0][1], "points, followed by participant #", wins[1][0]+1, "with", wins[1][1], "points, and so on.\n" # final placings for i, u in enumerate(wins): print "place %2d -- participant %2d, score: %3d" % (i+1, u[0]+1, u[1]) plt.bar(range(len(balls)), balls) Explanation: Results End of explanation
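The same scoring can also be written in vectorised form with NumPy (a sketch equivalent to the list-comprehension version above): a question is worth the number of participants who missed it, and a participant's score is the dot product of their correctness row with the question values.

# vectorised version of the scoring above
good_arr = np.array(good)
gots_arr = np.array(gots)

correct = (gots_arr == good_arr).astype(int)           # participants x questions
question_value = len(gots_arr) - correct.sum(axis=0)   # harder question -> higher value
scores = correct.dot(question_value)                   # one score per participant

ranking = np.argsort(-scores)                          # descending by score
for place, participant in enumerate(ranking, start=1):
    print("place %2d -- participant %2d, score %3d" % (place, participant + 1, scores[participant]))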
12,041
Given the following text description, write Python code to implement the functionality described below step by step Description: Plot of allometrically-scaled mass-specific metabolic rate Step1: Replicating allometrically-scaled calculated parameters First attempt Step2: Second attempt Based on information on Biot 2012 supplementary info, Allometric Parameterization section. $r_i$ for for producers are all slightly off. $x_i$ for Fish1 and Fish2 match. Results for Fish3 and Fish4 are slightly off. Step3: Endotherm metabolic rates From Williams et al. 2007
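The "second attempt" described here boils down to one normalisation, sketched below as a helper; the constants in the example call are the ones quoted from Boit et al. (2012) and Brose et al. (2006) in the code that follows, so nothing new is introduced.

# rate of species i relative to a reference producer k:
# rate_i = (a_i / a_k) * (M_k / M_i) ** A
def scaled_rate(a_i, a_k, M_k, M_i, A):
    return (a_i / a_k) * (M_k / M_i) ** A

# e.g. the Fish1 mass-specific metabolic rate with the fish exponent A = 0.11
print(scaled_rate(a_i=0.88, a_k=1.0, M_k=6.4e-5, M_i=1.1e6, A=0.11))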
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt # This plot shows how mass-specific metabolic rate falls off with body size x = np.arange(1, 100) plt.plot(x, x**-.25) plt.xlabel("body size") plt.ylabel("metabolic rate") Explanation: Plot of allometrically-scaled mass-specific metabolic rate End of explanation # Trying to replicate the calculations for metabolic rate x_i of Fish4 (Adult piscivorous fish) # from (Boit et al. 2012). # Formulas are from Brose et al. 2006 # Constants a_r = 1 # Allometric constant for primary producer growth rate (Brose et al. 2006) a_x = 0.88 # Allometric constant for ectotherm vertebrate metabolism (Brose et al. 2006) M_P = 6.40e-5 # Body mass of Alg1 (Single cell algae) (Boit et al. 2012) M_C = 7.04e+7 # Body mass of Fish4 (Adult piscivorous fish) (Boit et al. 2012) R_P = a_r * M_P ** -.25 # Producer mass-specific growth rate, pre-time-normalization X_C = a_x * M_C ** -.25 # Consumer mass-specific metabolic rate, pre-time-normalization # Normalize by the time scale r_i = 1 # Mass-specific growth rate of basal population is set to 1 (QUESTION: Why is r_i defined and not used?) x_i = X_C / R_P # Mass-specific metabolic rate print("R_P = {}, X_C = {}, x_i = {}".format(R_P, X_C, x_i)) Explanation: Replicating allometrically-scaled calculated parameters First attempt: metabolic rate x_i of Fish4 (Adult piscivorous fish) from (Boit et al. 2012), using constants and formulas from Brose et al. 2006. Resulting x_i does not match. End of explanation M_k = 6.40e-5 # Body mass of reference producer Alg1 a_Ti = 0.88 # Allometric constant for all fish species a_rk = a_ri = 1 # Allometric constant for all producers A_nonfish = 0.15 # Allometric scaling exponent for nonfish species A_fish = 0.11 # Allometric scaling exponent for fish species # M_i is body mass in micrograms carbon per individual for producerName, M_i in (('Alg1', 6.4e-5), ('Alg2', 2.56e-4), ('Alg3', 3.2e-5), ('Alg4', 1.28e-4), ('Alg5', 8e-6), ('APP', 2.5e-7)): r_i = (a_ri / a_rk) * ((M_k / M_i) ** A_nonfish) print("{} r_i = {}".format(producerName, r_i)) for fishName, M_i in (('Fish1', 1.1e+6), ('Fish2', 4.4e+6), ('Fish3', 3.52e+7), ('Fish4', 7.04e+7)): x_i = (a_Ti / a_rk) * ((M_k / M_i) ** A_fish) print("{} x_i = {}".format(fishName, x_i)) Explanation: Second attempt Based on information on Biot 2012 supplementary info, Allometric Parameterization section. $r_i$ for for producers are all slightly off. $x_i$ for Fish1 and Fish2 match. Results for Fish3 and Fish4 are slightly off. End of explanation # "Mass-to-respiration conversion constant specific to animal species i's metabolic type" # - definition from Cotter 2015 # - value for endotherms from Williams et al. 2007, originally from Yodzis and Innes # - units: kg^.25 / year a_Ti = 54.9 # f_ri is a "fractional constant used in metabolic and growth rate functions" (Cotter) # k refers to the primary producer # Where is its value? # "value may be specified for each specific population or feeding interaction in a particular ecological context" # - (Williams) # "f_r...would typically be on the order of 0.1 for a field population." (Y&I) #f_rk = 0.1 f_rk = 1 # a_ri is a "mass-to-growth conversion constant specific to plant species i" (Cotter) # k refers to the primary producer # Trying the value for phytoplankton from the Williams/Y&I table # - units: kg^.25 / year #a_rk = 0.4 a_rk = 1 # Body mass of reference producer. # Trying to replicate parameter values in WoB DB, so I'll use "Grass and Herbs" as the reference producer. 
# Its "body mass" is 40 (units? Since it's divided by M_i, which has the same units, units cancel) M_k = 40 #M_k = 1 # Body mass of Aardvark from WoB database M_i = 66 # African Clawless Otter #M_i = 13 # x_i = (a_Ti / a_rk) * ((M_k / M_i) ** A_fish) x_i = (a_Ti / (f_rk * a_rk)) * ((M_k / M_i) ** .25) # This should be 0.0821097 (Aardvark) print(x_i) # Not even close # Do some algebra x_i = 0.0821097 # Aardvark #x_i = 0.123252 # Otter coef = x_i / ((M_k / M_i) ** .25) print(coef) %matplotlib inline import numpy as np import pandas as pd df = pd.read_csv('wob-database/species-table.csv') df['metabolismUnscaled'] = 1 / df['biomass'] ** 0.25 df['ratio'] = df.metabolism / df.metabolismUnscaled pd.pivot_table(df, index='category', values=['ratio'], aggfunc=[np.mean, np.std, np.median]) df.sort_values(by='category') text = "one\ntwo" for line in text.split('\n'): print(line) Explanation: Endotherm metabolic rates From Williams et al. 2007 End of explanation
12,042
Given the following text description, write Python code to implement the functionality described below step by step Description: SRTM Product Showcase Products used Step1: Define Methods slope_pct * dem Step2: Connect to the datacube Step3: Set Analysis Region
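Before applying the Horn-style kernels to real SRTM data, a quick self-contained check (a sketch, independent of the datacube) is to run them on a synthetic ramp whose slope is known. Note also that the notebook multiplies the geobox resolution by 1.1132/0.00001 (roughly 111,320 m per degree of latitude) so that the resolution passed to slope_pct is in metres, matching the elevation units.

# sanity check of the Horn slope formula on a plane rising 1 m per 10 m in x
import numpy as np
from scipy.ndimage import convolve

res = 10.0                                   # cell size in metres
dem = np.tile(np.arange(50) * 1.0, (50, 1))  # elevation rises 1 m per cell in x

dx = convolve(dem, np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]])) / (8 * res)
dy = convolve(dem, np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])) / (8 * res)
slope = np.hypot(dx, dy) * 100

print(slope[25, 25])   # expected: about 10 (percent) away from the edges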
Python Code: import sys import os sys.path.append(os.environ.get('NOTEBOOK_ROOT')) %matplotlib inline import datacube import matplotlib.pyplot as plt import numpy as np import xarray as xr from scipy.ndimage import convolve Explanation: SRTM Product Showcase Products used: srtm_google (original source) Dataset from 11-Feb-2000 Load packages Import Python packages that are used for the analysis. End of explanation def slope_pct(dem, resolution): # Kernel for rate of elevation change in x-axis. dx_kernel = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]]) # Kernel for rate of elevation change in y-axis. dy_kernel = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]) # Rate of change calculations for each axis. dx = convolve(dem, dx_kernel) / (8 * resolution) dy = convolve(dem, dy_kernel) / (8 * resolution) # Return rise/run * 100 for slope percent. return np.sqrt(np.square(dx) + np.square(dy)) * 100 Explanation: Define Methods slope_pct * dem: The DEM product to use for slope map calculation. * resolution: The resolution of the supplied DEM product. End of explanation dc = datacube.Datacube(app='SRTM_Product_Showcase') Explanation: Connect to the datacube End of explanation product = "srtm_google" # Mtera Reservoir - Tanzania # latitude = (-7.22, -6.80) # longitude = (35.60, 36.00) # Lake Ilopango, El Salvador latitude = (13.6099, 13.7391) longitude = (-89.1046, -88.9799) # Lake Sulunga, Tanzania # latitude = (-6.2936, -5.8306 ) # longitude = (34.9943, 35.3624 ) srtm_dataset = dc.load(product=product,latitude=latitude,longitude=longitude).isel(time=0) srtm_dataset.elevation.plot.imshow(cmap=plt.cm.nipy_spectral, size=8); attrs = srtm_dataset.elevation.attrs.copy(); attrs.update(units='%') srtm_dataset['slope'] = xr.DataArray(slope_pct(srtm_dataset.elevation, srtm_dataset.geobox.resolution[1]*(1.1132/0.00001)), dims=dict(srtm_dataset.dims), attrs=attrs) srtm_dataset.slope.plot.imshow(cmap=plt.cm.nipy_spectral, size=8); Explanation: Set Analysis Region End of explanation
12,043
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1> Here we will be focusing more on the cnmf part and its main functions <h1> <img src='docs/img/cnmf1.png'/> Step1: <h1> Using the workload manager SLURM </h1> to have an extensive use of the machine. <p> we want to operate this the faster possible. Thanks to the segmentation of the video in patches we can parallelize ou algorithm. We are using python integrated methods to get this parallelization to work on one machine as well as on clusters of machines </p> <table> <tr> <td> This is to be used when working with a cluster of machines Step2: <b> We can see here that the number of processes are the number of core your computer possess. <br/> Your computer can be seen as a node that possess X cores </b> <h1> Memory mapping files in F order</h1> <p> see Step3: <h2>the correlation image </h2> Step4: CNMFSetParms define Dictionaries of CNMF parameters. Any parameter that is not set get a default value specified. each dictionnary is used by different part of the CNMF process Step5: <h2> Preprocessing of the datas and initialization of the components </h2> <ul><li> here, we compute the mean of the noise spectral density </li> <li> then, we initialize each component (components that have been spatially filter using a gaussian kernel) with a greedy algorithm </li> <li> we then operate a rank1 NMF on those ROIs using the HALS algorithm</li></ul> <p> see More Step6: <h2> HALS </h2> we want to minimize <img src=docs/img/hals1.png width=300px/> updating parameters <img src=docs/img/hals2.png width=300px /> <p>HALS Step7: <h1> CNMF process </h1> We are considering the video as a matrix called Y of dimension height x widht x frames we now want to find A, C and B such that Y = A x C + B B being the Background, composed of its spatial b and temporal f component A being the spatial component of the neurons (also seen as their shape) C being the temporal component of the neurons (also seen as their calcium activity or traces) <h2> Update spatial </h2> will consider C as fixed and try to update A. the process will be the following Step8: <h2> Update temporal </h2> Will consider A as fixed and try to update C. the process will be the following Step9: <h2> Merging components </h2> merge the components that overlaps and have a high temporal correlation the process will be the following Step10: A refining step refine spatial and temporal components Step11: <h1>DISCARD LOW QUALITY COMPONENT </h1> <p> The patch dubdivision creates several spurious components that are not neurons </p> <p>We select the components according to criteria examining spatial and temporal components</p> <img src="docs/img/evaluationcomponent.png"/> <p> Temporal components, for each trace Step12: accepted components Step13: discarded components
Python Code: try: if __IPYTHON__: # this is used for debugging purposes only. allows to reload classes when changed get_ipython().magic(u'load_ext autoreload') get_ipython().magic(u'autoreload 2') except NameError: print('Not IPYTHON') pass import sys import numpy as np from time import time from scipy.sparse import coo_matrix import psutil import glob import os import scipy from ipyparallel import Client import pylab as pl import caiman as cm from caiman.components_evaluation import evaluate_components from caiman.utils.visualization import plot_contours,view_patches_bar,nb_plot_contour,nb_view_patches from caiman.base.rois import extract_binary_masks_blob import caiman.source_extraction.cnmf as cnmf from caiman.utils.utils import download_demo #import bokeh.plotting as bp import bokeh.plotting as bpl try: from bokeh.io import vform, hplot except: # newer version of bokeh does not use vform & hplot, instead uses column & row from bokeh.layouts import column as vform from bokeh.layouts import row as hplot from bokeh.models import CustomJS, ColumnDataSource, Slider from IPython.display import display, clear_output import matplotlib as mpl import matplotlib.cm as cmap import numpy as np bpl.output_notebook() Explanation: <h1> Here we will be focusing more on the cnmf part and its main functions <h1> <img src='docs/img/cnmf1.png'/> End of explanation # frame rate in Hz final_frate=10 #backend='SLURM' backend='local' if backend == 'SLURM': n_processes = np.int(os.environ.get('SLURM_NPROCS')) else: # roughly number of cores on your machine minus 1 n_processes = np.maximum(np.int(psutil.cpu_count()),1) print('using ' + str(n_processes) + ' processes') #%% start cluster for efficient computation single_thread=False if single_thread: dview=None else: try: c.close() except: print('C was not existing, creating one') print("Stopping cluster to avoid unnencessary use of memory....") sys.stdout.flush() if backend == 'SLURM': try: cm.stop_server(is_slurm=True) except: print('Nothing to stop') slurm_script='/mnt/xfs1/home/agiovann/SOFTWARE/Constrained_NMF/SLURM/slurmStart.sh' cm.start_server(slurm_script=slurm_script) pdir, profile = os.environ['IPPPDIR'], os.environ['IPPPROFILE'] c = Client(ipython_dir=pdir, profile=profile) else: cm.stop_server() cm.start_server() c=Client() print('Using '+ str(len(c)) + ' processes') dview=c[:len(c)] Explanation: <h1> Using the workload manager SLURM </h1> to have an extensive use of the machine. <p> we want to operate this the faster possible. Thanks to the segmentation of the video in patches we can parallelize ou algorithm. 
We are using python integrated methods to get this parallelization to work on one machine as well as on clusters of machines </p> <table> <tr> <td> This is to be used when working with a cluster of machines : </td> <td>This will put dispatch and manage the workload gave by the algorithm : </td> </tr> <tr> <td><img src="docs/img/Dockerfile.gif"/> <td> <img src="docs/img/node.gif" /> </td> </tr> <p> learn more : <em> https://slurm.schedmd.com/overview.html </em> </p> End of explanation #%% FOR LOADING ALL TIFF FILES IN A FILE AND SAVING THEM ON A SINGLE MEMORY MAPPABLE FILE fnames=['demoMovieJ.tif'] base_folder='./example_movies/' # folder containing the demo files # %% download movie if not there if fnames[0] in ['Sue_2x_3000_40_-46.tif','demoMovieJ.tif']: download_demo(fnames[0]) fnames = [os.path.join('example_movies',fnames[0])] m_orig = cm.load_movie_chain(fnames[:1]) downsample_factor=1 # use .2 or .1 if file is large and you want a quick answer final_frate=final_frate*downsample_factor name_new=cm.save_memmap_each(fnames , dview=dview,base_name='Yr', resize_fact=(1, 1, downsample_factor) , remove_init=0,idx_xy=None ) name_new.sort() fname_new=cm.save_memmap_join(name_new,base_name='Yr', n_chunks=12, dview=dview) print(fnames) print(fname_new) print ("\n we can see we are loading the file (line1) into a memorymapped object (line2)") Explanation: <b> We can see here that the number of processes are the number of core your computer possess. <br/> Your computer can be seen as a node that possess X cores </b> <h1> Memory mapping files in F order</h1> <p> see : http://localhost:8888/notebooks/CaImAn/demo_caiman_pipeline.ipynb </p> <p> We want the parallel processes to access and our video matrix without having it in memory and duplicating it, as explained already on the demo_pipeline notebook </p> <img src="docs/img/Fordermmap.png" /> End of explanation Yr,dims,T=cm.load_memmap(fname_new) Y=np.reshape(Yr,dims+(T,),order='F') #%% visualize correlation image Cn = cm.local_correlations(Y) pl.imshow(Cn,cmap='gray') pl.show() Explanation: <h2>the correlation image </h2> End of explanation K=30 # number of neurons expected per patch gSig=[6,6] # expected half size of neurons merge_thresh=0.8 # merging threshold, max correlation allowed p=2 #order of the autoregressive system options = cnmf.utilities.CNMFSetParms(Y ,n_processes,p=p,gSig=gSig,K=K,ssub=2,tsub=2, normalize_init=True) Explanation: CNMFSetParms define Dictionaries of CNMF parameters. Any parameter that is not set get a default value specified. 
each dictionary is used by a different part of the CNMF process : init_parameters pre_processing_parameters patch_parameters spatial_parameters temporal_parameters End of explanation Yr,sn,g,psx = cnmf.pre_processing.preprocess_data(Yr ,dview=dview ,n_pixels_per_process=100, noise_range = [0.25,0.5] ,noise_method = 'logmexp', compute_g=False, p = 2, lags = 5, include_noise = False, pixels = None ,max_num_samples_fft=3000, check_nan = True) Ain, Cin, b_in, f_in, center=cnmf.initialization.initialize_components(Y ,K=30, gSig=[5, 5], gSiz=None, ssub=1, tsub=1, nIter=5, maxIter=5, nb=1 , use_hals=False, normalize_init=True, img=None, method='greedy_roi' , max_iter_snmf=500, alpha_snmf=10e2, sigma_smooth_snmf=(.5, .5, .5) , perc_baseline_snmf=20) p1=nb_plot_contour(Cn,Ain,dims[0],dims[1],thr=0.9,face_color=None , line_color='black',alpha=0.4,line_width=2) bpl.show(p1) Explanation: <h2> Preprocessing of the data and initialization of the components </h2> <ul><li> here, we compute the mean of the noise spectral density </li> <li> then, we initialize each component (components that have been spatially filtered using a Gaussian kernel) with a greedy algorithm </li> <li> we then run a rank-1 NMF on those ROIs using the HALS algorithm</li></ul> <p> see more : NMF AND ROI : http://www.cell.com/neuron/fulltext/S0896-6273(15)01084-3<br/></p> Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data by Eftychios A. Pnevmatikakis et al. End of explanation Ain, Cin, b_in, f_in = cnmf.initialization.hals(Y, Ain, Cin, b_in, f_in, maxIter=5) p1=nb_plot_contour(Cn,Ain,dims[0],dims[1],thr=0.9,face_color=None , line_color='black',alpha=0.4,line_width=2) bpl.show(p1) Explanation: <h2> HALS </h2> we want to minimize <img src=docs/img/hals1.png width=300px/> updating parameters <img src=docs/img/hals2.png width=300px /> <p>HALS : (Keigo Kimura et al.) http://proceedings.mlr.press/v39/kimura14.pdf</p> End of explanation options['spatial_params']['n_pixels_per_process'] = 2000 A,b,Cin,f_in = cnmf.spatial.update_spatial_components(Yr, Cin, f_in, Ain, sn=sn, dview=dview,**options['spatial_params']) p1=nb_plot_contour(Cn,A.todense(),dims[0],dims[1],thr=0.9,face_color=None, line_color='black',alpha=0.4,line_width=2) bpl.show(p1) Explanation: <h1> CNMF process </h1> We are considering the video as a matrix called Y of dimension height x width x frames. We now want to find A, C and B such that Y = A x C + B, with B being the background (composed of its spatial component b and temporal component f), A being the spatial component of the neurons (also seen as their shape), and C being the temporal component of the neurons (also seen as their calcium activity or traces). <h2> Update spatial </h2> will consider C as fixed and try to update A.
the process will be the following : initialization of each parameter testing of the input values finding relevant pixels that should belong to the neuron using either an iterative structure or an ellipse to look around the center of mass of the neuron ( cm found in the initialization ) this will define a first shape of the neuron /!\ pixels are usually unlinked computing the distance indicator (a map of the distances of each relevant pixel to the center of mass of the neuron) memory mapping the matrices C and Y (info before) updating the components in parallel : using ipyparallel solving this problem for each pixel of the component $$ \arg\min_{A_i,b_i}\sum A_i $$ subject to $$|| Y_i - A_i\times C - b_i\times f || \le std_{noise}(i)\times \sqrt{T}$$ using the lasso lars method from the scikit-learn toolbox https://en.wikipedia.org/wiki/Least-angle_regression, <br/> https://en.wikipedia.org/wiki/Lasso_(statistics), <br/> http://scikit-learn.org/stable/modules/linear_model.html#lars-lasso then, the newly refined components are thresholded (the C of the CNMF: one of the constraints here is that the matrix needs to be sparse) : first by applying a median filter https://en.wikipedia.org/wiki/Median_filter then by thresholding using a normalized user-defined value continuing with a morphological closing of the components, using OpenCV functions https://www.mathworks.com/help/images/ref/imclose.html (the matlab version) we remove the unconnected pixels (we keep the large connected components ) finally we compute the residuals (also called the background), computed as B=Y-AC End of explanation options['temporal_params']['block_size'] = 2000 options['temporal_params']['p'] = 0 # fast updating without deconvolution C,A,b,f,S,bl,c1,neurons_sn,g,YrA,lam = cnmf.temporal.update_temporal_components( Yr,A,b,Cin,f_in,bl=None,c1=None,sn=None,g=None,**options['temporal_params']) clear_output(wait=True) Explanation: <h2> Update temporal </h2> Will consider A as fixed and try to update C. the process will be the following : Initialization of each parameter Testing of the input values Generating residuals s.t. $$Yres_A = YA - (A^T AC)^T$$ Creating groups of components that can be processed in parallel Ones that are composed of non-overlapping components Using a simple greedy method Updating Calcium traces ( C ) Using OASIS, which will deconvolve the spikes of each neuron from the Calcium traces matrix C. <br><br> learn more : (Friedrich et al.) http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005423 see the demo here : https://github.com/j-friedrich/OASIS/blob/master/examples/Demo.ipynb To infer the true shape of the calcium traces using an autoregressive framework To infer the most likely spike train ( also called particular events). It will find the probability of a spike train according to the mean and std of the trace. - If it is above a threshold it will be defined as a particular event/neural spike - This will give us a matrix which is itself constrained ( C from CNMF ) - This is done in parallel using ipyparallel.
We finally update the background End of explanation A_m,C_m,nr_m,merged_ROIs,S_m,bl_m,c1_m,sn_m,g_m=cnmf.merging.merge_components( Yr,A,b,C,f,S,sn,options['temporal_params'], options['spatial_params'], dview=dview, bl=bl, c1=c1, sn=neurons_sn, g=g, thr=merge_thresh, mx=50, fast_merge = True) Explanation: <h2> Merging components </h2> merge the components that overlaps and have a high temporal correlation the process will be the following : intialization of each parameters testing of the input values find a graph of overlapping components we look for connected ones we keep the one that are "connected enough" (above a threshold) On Each groups : We normalize the components to be able to compare them We sum them together we process a rank one NMF we compute the traces (deconvolution) We replace the neurons by the merged one End of explanation A2,b2,C2,f = cnmf.spatial.update_spatial_components(Yr, C_m, f, A_m, sn=sn,dview=dview, **options['spatial_params']) options['temporal_params']['p'] = p # set it back to perform full deconvolution C2,A2,b2,f2,S2,bl2,c12,neurons_sn2,g21,YrA, lam = cnmf.temporal.update_temporal_components( Yr,A2,b2,C2,f,dview=dview, bl=None,c1=None,sn=None,g=None,**options['temporal_params']) clear_output(wait=True) Explanation: A refining step refine spatial and temporal components End of explanation #evaluation fitness_raw, fitness_delta, erfc_raw,erfc_delta, r_values, significant_samples = evaluate_components(Y, C2+YrA, A2, C2, b2, f2, final_frate, remove_baseline=True,N=5, robust_std=False, Athresh=0.1, Npeaks=10, thresh_C=0.3) #different thresholding ( needs to pass at least one of them ) traces = C2 + YrA idx_components_r=np.where(r_values>=.6)[0] idx_components_raw=np.where(fitness_raw<-60)[0] idx_components_delta=np.where(fitness_delta<-20)[0] #merging to have all that have passed at least one threshold. idx_components=np.union1d(idx_components_r,idx_components_raw) idx_components=np.union1d(idx_components,idx_components_delta) #finding the bad components idx_components_bad=np.setdiff1d(range(len(traces)),idx_components) clear_output(wait=True) print(' ***** ') print(len(traces)) print(len(idx_components)) fg=pl.figure(figsize=(12,20)) pl.subplot(1,2,1) crd = plot_contours(A2.tocsc()[:,idx_components],Cn,thr=0.9) pl.subplot(1,2,2) crd = plot_contours(A2.tocsc()[:,idx_components_bad],Cn,thr=0.9) p2=nb_plot_contour(Cn,A2.tocsc()[:,idx_components].todense(),dims[0],dims[1],thr=0.9,face_color='purple', line_color='black',alpha=0.3,line_width=2) bpl.show(p2) Explanation: <h1>DISCARD LOW QUALITY COMPONENT </h1> <p> The patch dubdivision creates several spurious components that are not neurons </p> <p>We select the components according to criteria examining spatial and temporal components</p> <img src="docs/img/evaluationcomponent.png"/> <p> Temporal components, for each trace: </p> <li> compute the robust mode, corresponding to the baseline value</li> <li> use the values under the mode to estimate noise variance</li> <li> compute the probability of having large transients given the noise distribution estimated </li> <li> Threshold on this probability s.t. 
some of the components are discarded because they lack large enough positive transients </li> <p> Spatial components, for each component: </p> <li> average the frames in the movie where the neuron is active (from the temporal component); this provides a nice image of the neuron</li> <li> compare this image with the corresponding spatial component (Pearson's correlation coefficient)</li> <li> threshold the correlation coefficient </li> End of explanation discard_traces_fluo=nb_view_patches(Yr,A2.tocsc()[:,idx_components],C2[idx_components],b2,f2,dims[0],dims[1],thr = 0.8,image_neurons=Cn, denoised_color='red') Explanation: accepted components End of explanation discard_traces_fluo=nb_view_patches(Yr,A2.tocsc()[:,idx_components_bad],C2[idx_components_bad],b2,f2,dims[0],dims[1],thr = 0.8,image_neurons=Cn, denoised_color='red') cm.stop_server() Explanation: discarded components End of explanation
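To make the spatial check described above concrete, here is a minimal, illustrative sketch of the idea. CaImAn's own evaluate_components already implements it properly; the helper name, array shapes and the number of frames below are assumptions for illustration, not CaImAn API.

```python
import numpy as np
from scipy.stats import pearsonr

def spatial_consistency(Yr, A, C, comp, n_frames=20):
    """Correlate a component's footprint with the mean of its most active frames.

    Yr: movie as a (pixels, frames) array, A: sparse (pixels, components)
    footprints, C: (components, frames) traces. Returns Pearson's r.
    """
    trace = C[comp]
    active = np.argsort(trace)[-n_frames:]             # frames where the trace is largest
    mean_img = np.asarray(Yr[:, active]).mean(axis=1)  # average image over those frames
    footprint = np.asarray(A[:, comp].todense()).ravel()
    r, _ = pearsonr(mean_img, footprint)
    return r

# e.g. keep components with spatial_consistency(...) >= 0.6, mirroring the r_values filter above
```

In the notebook itself the accepted set is the union of the r_values, fitness_raw and fitness_delta filters computed earlier.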
12,044
Given the following text description, write Python code to implement the functionality described below step by step Description: Index - Back - Next Widget List Step1: Numeric widgets There are many widgets distributed with ipywidgets that are designed to display numeric values. Widgets exist for displaying integers and floats, both bounded and unbounded. The integer widgets share a similar naming scheme to their floating point counterparts. By replacing Float with Int in the widget name, you can find the Integer equivalent. IntSlider The slider is displayed with a specified, initial value. Lower and upper bounds are defined by min and max, and the value can be incremented according to the step parameter. The slider's label is defined by description parameter The slider's orientation is either 'horizontal' (default) or 'vertical' readout displays the current value of the slider next to it. The options are True (default) or False readout_format specifies the format function used to represent slider value. The default is '.2f' Step2: FloatSlider Step3: An example of sliders displayed vertically. Step4: FloatLogSlider The FloatLogSlider has a log scale, which makes it easy to have a slider that covers a wide range of positive magnitudes. The min and max refer to the minimum and maximum exponents of the base, and the value refers to the actual value of the slider. Step5: IntRangeSlider Step6: FloatRangeSlider Step7: IntProgress Step8: FloatProgress Step9: The numerical text boxes that impose some limit on the data (range, integer-only) impose that restriction when the user presses enter. BoundedIntText Step10: BoundedFloatText Step11: IntText Step12: FloatText Step13: Boolean widgets There are three widgets that are designed to display a boolean value. ToggleButton Step14: Checkbox value specifies the value of the checkbox indent parameter places an indented checkbox, aligned with other controls. Options are True (default) or False Step15: Valid The valid widget provides a read-only indicator. Step16: Selection widgets There are several widgets that can be used to display single selection lists, and two that can be used to select multiple values. All inherit from the same base class. You can specify the enumeration of selectable options by passing a list (options are either (label, value) pairs, or simply values for which the labels are derived by calling str). <div class="alert alert-info"> Changes in *ipywidgets 8* Step17: The following is also valid, displaying the words 'One', 'Two', 'Three' as the dropdown choices but returning the values 1, 2, 3. Step18: RadioButtons Step19: With dynamic layout and very long labels Step20: Select Step21: SelectionSlider Step22: SelectionRangeSlider The value, index, and label keys are 2-tuples of the min and max values selected. The options must be nonempty. Step23: ToggleButtons Step24: SelectMultiple Multiple values can be selected with <kbd>shift</kbd> and/or <kbd>ctrl</kbd> (or <kbd>command</kbd>) pressed and mouse clicks or arrow keys. Step25: String widgets There are several widgets that can be used to display a string value. The Text, Textarea, and Combobox widgets accept input. The HTML and HTMLMath widgets display a string as HTML (HTMLMath also renders math). The Label widget can be used to construct a custom control label. Text Step26: Textarea Step27: Combobox Step28: Password The Password widget hides user input on the screen. 
This widget is not a secure way to collect sensitive information because Step29: Label The Label widget is useful if you need to build a custom description next to a control using similar styling to the built-in control descriptions. Step30: HTML Step31: HTML Math Step32: Image Step33: Button Step34: The icon attribute can be used to define an icon; see the fontawesome page for available icons. A callback function foo can be registered using button.on_click(foo). The function foo will be called when the button is clicked with the button instance as its single argument. Output The Output widget can capture and display stdout, stderr and rich output generated by IPython. For detailed documentation, see the output widget examples. Play (Animation) widget The Play widget is useful to perform animations by iterating on a sequence of integers with a certain speed. The value of the slider below is linked to the player. Step35: Date picker The date picker widget works in Chrome, Firefox and IE Edge, but does not currently work in Safari because it does not support the HTML date input field. Step36: Color picker Step37: File Upload The FileUpload allows to upload any type of file(s) into memory in the kernel. Step38: The upload widget exposes a value attribute that contains the files uploaded. The value attribute is a tuple with a dictionary for each uploaded file. For instance Step39: Container/Layout widgets These widgets are used to hold other widgets, called children. Each has a children property that may be set either when the widget is created or later. Box Step40: HBox Step41: VBox Step42: GridBox This box uses the HTML Grid specification to lay out its children in two dimensional grid. The example below lays out the 8 items inside in 3 columns and as many rows as needed to accommodate the items. Step43: Accordion Step44: Tabs In this example the children are set after the tab is created. Titles for the tabs are set in the same way they are for Accordion. Step45: Stacked The Stacked widget can have multiple children widgets as for Tab and Accordion, but only shows one at a time depending on the value of selected_index Step46: This can be used in combination with another selection-based widget to show different widgets depending on the selection Step47: Accordion, Tab, and Stacked use selected_index, not value Unlike the rest of the widgets discussed earlier, the container widgets Accordion and Tab update their selected_index attribute when the user changes which accordion or tab is selected. That means that you can both see what the user is doing and programmatically set what the user sees by setting the value of selected_index. Setting selected_index = None closes all of the accordions or deselects all tabs. In the cells below try displaying or setting the selected_index of the tab and/or accordion. Step48: Nesting tabs and accordions Tabs and accordions can be nested as deeply as you want. If you have a few minutes, try nesting a few accordions or putting an accordion inside a tab or a tab inside an accordion. The example below makes a couple of tabs with an accordion children in one of them
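Before the notebook code that follows, one short, self-contained sketch of wiring widgets together with observe, a pattern that applies to every control listed above. The widget and handler names are made up for the illustration.

```python
import ipywidgets as widgets

slider = widgets.IntSlider(value=5, min=0, max=10, description='n:')
label = widgets.Label(value='n = 5')

def on_value_change(change):
    # `change` carries 'old', 'new' and 'owner' entries for the changed trait
    label.value = 'n = {}'.format(change['new'])

slider.observe(on_value_change, names='value')
widgets.VBox([slider, label])
```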
Python Code: import ipywidgets as widgets Explanation: Index - Back - Next Widget List End of explanation widgets.IntSlider( value=7, min=0, max=10, step=1, description='Test:', disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='d' ) Explanation: Numeric widgets There are many widgets distributed with ipywidgets that are designed to display numeric values. Widgets exist for displaying integers and floats, both bounded and unbounded. The integer widgets share a similar naming scheme to their floating point counterparts. By replacing Float with Int in the widget name, you can find the Integer equivalent. IntSlider The slider is displayed with a specified, initial value. Lower and upper bounds are defined by min and max, and the value can be incremented according to the step parameter. The slider's label is defined by description parameter The slider's orientation is either 'horizontal' (default) or 'vertical' readout displays the current value of the slider next to it. The options are True (default) or False readout_format specifies the format function used to represent slider value. The default is '.2f' End of explanation widgets.FloatSlider( value=7.5, min=0, max=10.0, step=0.1, description='Test:', disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='.1f', ) Explanation: FloatSlider End of explanation widgets.FloatSlider( value=7.5, min=0, max=10.0, step=0.1, description='Test:', disabled=False, continuous_update=False, orientation='vertical', readout=True, readout_format='.1f', ) Explanation: An example of sliders displayed vertically. End of explanation widgets.FloatLogSlider( value=10, base=10, min=-10, # max exponent of base max=10, # min exponent of base step=0.2, # exponent step description='Log Slider' ) Explanation: FloatLogSlider The FloatLogSlider has a log scale, which makes it easy to have a slider that covers a wide range of positive magnitudes. The min and max refer to the minimum and maximum exponents of the base, and the value refers to the actual value of the slider. End of explanation widgets.IntRangeSlider( value=[5, 7], min=0, max=10, step=1, description='Test:', disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='d', ) Explanation: IntRangeSlider End of explanation widgets.FloatRangeSlider( value=[5, 7.5], min=0, max=10.0, step=0.1, description='Test:', disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='.1f', ) Explanation: FloatRangeSlider End of explanation widgets.IntProgress( value=7, min=0, max=10, step=1, description='Loading:', bar_style='', # 'success', 'info', 'warning', 'danger' or '' orientation='horizontal' ) Explanation: IntProgress End of explanation widgets.FloatProgress( value=7.5, min=0, max=10.0, step=0.1, description='Loading:', bar_style='info', orientation='horizontal' ) Explanation: FloatProgress End of explanation widgets.BoundedIntText( value=7, min=0, max=10, step=1, description='Text:', disabled=False ) Explanation: The numerical text boxes that impose some limit on the data (range, integer-only) impose that restriction when the user presses enter. 
BoundedIntText End of explanation widgets.BoundedFloatText( value=7.5, min=0, max=10.0, step=0.1, description='Text:', disabled=False ) Explanation: BoundedFloatText End of explanation widgets.IntText( value=7, description='Any:', disabled=False ) Explanation: IntText End of explanation widgets.FloatText( value=7.5, description='Any:', disabled=False ) Explanation: FloatText End of explanation widgets.ToggleButton( value=False, description='Click me', disabled=False, button_style='', # 'success', 'info', 'warning', 'danger' or '' tooltip='Description', icon='check' # (FontAwesome names without the `fa-` prefix) ) Explanation: Boolean widgets There are three widgets that are designed to display a boolean value. ToggleButton End of explanation widgets.Checkbox( value=False, description='Check me', disabled=False, indent=False ) Explanation: Checkbox value specifies the value of the checkbox indent parameter places an indented checkbox, aligned with other controls. Options are True (default) or False End of explanation widgets.Valid( value=False, description='Valid!', ) Explanation: Valid The valid widget provides a read-only indicator. End of explanation widgets.Dropdown( options=['1', '2', '3'], value='2', description='Number:', disabled=False, ) Explanation: Selection widgets There are several widgets that can be used to display single selection lists, and two that can be used to select multiple values. All inherit from the same base class. You can specify the enumeration of selectable options by passing a list (options are either (label, value) pairs, or simply values for which the labels are derived by calling str). <div class="alert alert-info"> Changes in *ipywidgets 8*: Selection widgets no longer accept a dictionary of options. Pass a list of key-value pairs instead. </div> Dropdown End of explanation widgets.Dropdown( options=[('One', 1), ('Two', 2), ('Three', 3)], value=2, description='Number:', ) Explanation: The following is also valid, displaying the words 'One', 'Two', 'Three' as the dropdown choices but returning the values 1, 2, 3. End of explanation widgets.RadioButtons( options=['pepperoni', 'pineapple', 'anchovies'], # value='pineapple', # Defaults to 'pineapple' # layout={'width': 'max-content'}, # If the items' names are long description='Pizza topping:', disabled=False ) Explanation: RadioButtons End of explanation widgets.Box( [ widgets.Label(value='Pizza topping with a very long label:'), widgets.RadioButtons( options=[ 'pepperoni', 'pineapple', 'anchovies', 'and the long name that will fit fine and the long name that will fit fine and the long name that will fit fine ' ], layout={'width': 'max-content'} ) ] ) Explanation: With dynamic layout and very long labels End of explanation widgets.Select( options=['Linux', 'Windows', 'OSX'], value='OSX', # rows=10, description='OS:', disabled=False ) Explanation: Select End of explanation widgets.SelectionSlider( options=['scrambled', 'sunny side up', 'poached', 'over easy'], value='sunny side up', description='I like my eggs ...', disabled=False, continuous_update=False, orientation='horizontal', readout=True ) Explanation: SelectionSlider End of explanation import datetime dates = [datetime.date(2015, i, 1) for i in range(1, 13)] options = [(i.strftime('%b'), i) for i in dates] widgets.SelectionRangeSlider( options=options, index=(0, 11), description='Months (2015)', disabled=False ) Explanation: SelectionRangeSlider The value, index, and label keys are 2-tuples of the min and max values selected. 
The options must be nonempty. End of explanation widgets.ToggleButtons( options=['Slow', 'Regular', 'Fast'], description='Speed:', disabled=False, button_style='', # 'success', 'info', 'warning', 'danger' or '' tooltips=['Description of slow', 'Description of regular', 'Description of fast'], # icons=['check'] * 3 ) Explanation: ToggleButtons End of explanation widgets.SelectMultiple( options=['Apples', 'Oranges', 'Pears'], value=['Oranges'], #rows=10, description='Fruits', disabled=False ) Explanation: SelectMultiple Multiple values can be selected with <kbd>shift</kbd> and/or <kbd>ctrl</kbd> (or <kbd>command</kbd>) pressed and mouse clicks or arrow keys. End of explanation widgets.Text( value='Hello World', placeholder='Type something', description='String:', disabled=False ) Explanation: String widgets There are several widgets that can be used to display a string value. The Text, Textarea, and Combobox widgets accept input. The HTML and HTMLMath widgets display a string as HTML (HTMLMath also renders math). The Label widget can be used to construct a custom control label. Text End of explanation widgets.Textarea( value='Hello World', placeholder='Type something', description='String:', disabled=False ) Explanation: Textarea End of explanation widgets.Combobox( # value='John', placeholder='Choose Someone', options=['Paul', 'John', 'George', 'Ringo'], description='Combobox:', ensure_option=True, disabled=False ) Explanation: Combobox End of explanation widgets.Password( value='password', placeholder='Enter password', description='Password:', disabled=False ) Explanation: Password The Password widget hides user input on the screen. This widget is not a secure way to collect sensitive information because: The contents of the Password widget are transmitted unencrypted. If the widget state is saved in the notebook the contents of the Password widget is stored as plain text. End of explanation widgets.HBox([widgets.Label(value="The $m$ in $E=mc^2$:"), widgets.FloatSlider()]) Explanation: Label The Label widget is useful if you need to build a custom description next to a control using similar styling to the built-in control descriptions. End of explanation widgets.HTML( value="Hello <b>World</b>", placeholder='Some HTML', description='Some HTML', ) Explanation: HTML End of explanation widgets.HTMLMath( value=r"Some math and <i>HTML</i>: \(x^2\) and $$\frac{x+1}{x-1}$$", placeholder='Some HTML', description='Some HTML', ) Explanation: HTML Math End of explanation file = open("images/WidgetArch.png", "rb") image = file.read() widgets.Image( value=image, format='png', width=300, height=400, ) Explanation: Image End of explanation button = widgets.Button( description='Click me', disabled=False, button_style='', # 'success', 'info', 'warning', 'danger' or '' tooltip='Click me', icon='check' # (FontAwesome names without the `fa-` prefix) ) button Explanation: Button End of explanation play = widgets.Play( value=50, min=0, max=100, step=1, interval=500, description="Press play", disabled=False ) slider = widgets.IntSlider() widgets.jslink((play, 'value'), (slider, 'value')) widgets.HBox([play, slider]) Explanation: The icon attribute can be used to define an icon; see the fontawesome page for available icons. A callback function foo can be registered using button.on_click(foo). The function foo will be called when the button is clicked with the button instance as its single argument. Output The Output widget can capture and display stdout, stderr and rich output generated by IPython. 
For detailed documentation, see the output widget examples. Play (Animation) widget The Play widget is useful to perform animations by iterating on a sequence of integers with a certain speed. The value of the slider below is linked to the player. End of explanation widgets.DatePicker( description='Pick a Date', disabled=False ) Explanation: Date picker The date picker widget works in Chrome, Firefox and IE Edge, but does not currently work in Safari because it does not support the HTML date input field. End of explanation widgets.ColorPicker( concise=False, description='Pick a color', value='blue', disabled=False ) Explanation: Color picker End of explanation widgets.FileUpload( accept='', # Accepted file extension e.g. '.txt', '.pdf', 'image/*', 'image/*,.pdf' multiple=False # True to accept multiple files upload else False ) Explanation: File Upload The FileUpload allows to upload any type of file(s) into memory in the kernel. End of explanation widgets.Controller( index=0, ) Explanation: The upload widget exposes a value attribute that contains the files uploaded. The value attribute is a tuple with a dictionary for each uploaded file. For instance: ```python uploader = widgets.FileUpload() display(uploader) upload something... once a file is uploaded, use the .value attribute to retrieve the content: uploader.value => ( => { => 'name': 'example.txt', => 'type': 'text/plain', => 'size': 36, => 'last_modified': datetime.datetime(2020, 1, 9, 15, 58, 43, 321000, tzinfo=datetime.timezone.utc), => 'content': <memory at 0x10c1b37c8> => }, => ) ``` Entries in the dictionary can be accessed either as items, as one would any dictionary, or as attributes: ``` uploaded_file = uploader.value[0] uploaded_file["size"] => 36 uploaded_file.size => 36 ``` The contents of the file uploaded are in the value of the content key. They are a memory view: ```python uploaded_file.content => <memory at 0x10c1b37c8> ``` You can extract the content to bytes: ```python uploaded_file.content.tobytes() => b'This is the content of example.txt.\n' ``` If the file is a text file, you can get the contents as a string by decoding it: ```python import codecs codecs.decode(uploaded_file.content, encoding="utf-8") => 'This is the content of example.txt.\n' ``` You can save the uploaded file to the filesystem from the kernel: python with open("./saved-output.txt", "wb") as fp: fp.write(uploaded_file.content) To convert the uploaded file into a Pandas dataframe, you can use a BytesIO object: python import io import pandas as pd pd.read_csv(io.BytesIO(uploaded_file.content)) If the uploaded file is an image, you can visualize it with an image widget: python widgets.Image(value=uploaded_file.content.tobytes()) <div class="alert alert-info"> Changes in *ipywidgets 8*: The `FileUpload` changed significantly in ipywidgets 8: - The `.value` traitlet is now a list of dictionaries, rather than a dictionary mapping the uploaded name to the content. To retrieve the original form, use `{f["name"]: f.content.tobytes() for f in uploader.value}`. - The `.data` traitlet has been removed. To retrieve it, use `[f.content.tobytes() for f in uploader.value]`. - The `.metadata` traitlet has been removed. To retrieve it, use `[{k: v for k, v in f.items() if k != "content"} for f in w.value]`. </div> <div class="alert alert-warning"> Warning: When using the `FileUpload` Widget, uploaded file content might be saved in the notebook if widget state is saved. </div> Controller The Controller allows a game controller to be used as an input device. 
End of explanation items = [widgets.Label(str(i)) for i in range(4)] widgets.Box(items) Explanation: Container/Layout widgets These widgets are used to hold other widgets, called children. Each has a children property that may be set either when the widget is created or later. Box End of explanation items = [widgets.Label(str(i)) for i in range(4)] widgets.HBox(items) Explanation: HBox End of explanation items = [widgets.Label(str(i)) for i in range(4)] left_box = widgets.VBox([items[0], items[1]]) right_box = widgets.VBox([items[2], items[3]]) widgets.HBox([left_box, right_box]) Explanation: VBox End of explanation items = [widgets.Label(str(i)) for i in range(8)] widgets.GridBox(items, layout=widgets.Layout(grid_template_columns="repeat(3, 100px)")) Explanation: GridBox This box uses the HTML Grid specification to lay out its children in two dimensional grid. The example below lays out the 8 items inside in 3 columns and as many rows as needed to accommodate the items. End of explanation accordion = widgets.Accordion(children=[widgets.IntSlider(), widgets.Text()], titles=('Slider', 'Text')) accordion Explanation: Accordion End of explanation tab_contents = ['P0', 'P1', 'P2', 'P3', 'P4'] children = [widgets.Text(description=name) for name in tab_contents] tab = widgets.Tab() tab.children = children tab.titles = [str(i) for i in range(len(children))] tab Explanation: Tabs In this example the children are set after the tab is created. Titles for the tabs are set in the same way they are for Accordion. End of explanation button = widgets.Button(description='Click here') slider = widgets.IntSlider() stacked = widgets.Stacked([button, slider]) stacked # will show only the button Explanation: Stacked The Stacked widget can have multiple children widgets as for Tab and Accordion, but only shows one at a time depending on the value of selected_index: End of explanation dropdown = widgets.Dropdown(options=['button', 'slider']) widgets.jslink((dropdown, 'index'), (stacked, 'selected_index')) widgets.VBox([dropdown, stacked]) Explanation: This can be used in combination with another selection-based widget to show different widgets depending on the selection: End of explanation tab.selected_index = 3 accordion.selected_index = None Explanation: Accordion, Tab, and Stacked use selected_index, not value Unlike the rest of the widgets discussed earlier, the container widgets Accordion and Tab update their selected_index attribute when the user changes which accordion or tab is selected. That means that you can both see what the user is doing and programmatically set what the user sees by setting the value of selected_index. Setting selected_index = None closes all of the accordions or deselects all tabs. In the cells below try displaying or setting the selected_index of the tab and/or accordion. End of explanation tab_nest = widgets.Tab() tab_nest.children = [accordion, accordion] tab_nest.titles = ('An accordion', 'Copy of the accordion') tab_nest Explanation: Nesting tabs and accordions Tabs and accordions can be nested as deeply as you want. If you have a few minutes, try nesting a few accordions or putting an accordion inside a tab or a tab inside an accordion. The example below makes a couple of tabs with an accordion children in one of them End of explanation
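As a closing illustration that is not part of the original notebook, interact builds controls directly from a function signature; the function and slider ranges below are invented for the example.

```python
from ipywidgets import interact

def area(width=3.0, height=2.0):
    return width * height

# (min, max, step) tuples become FloatSlider controls for each argument
interact(area, width=(0.0, 10.0, 0.5), height=(0.0, 10.0, 0.5))
```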
12,045
Given the following text description, write Python code to implement the functionality described below step by step Description: KNOWLEDGE The knowledge module covers Chapter 19 Step1: CONTENTS Overview Current-Best Learning OVERVIEW Like the learning module, this chapter focuses on methods for generating a model/hypothesis for a domain; however, unlike the learning chapter, here we use prior knowledge to help us learn from new experiences and find a proper hypothesis. First-Order Logic Usually knowledge in this field is represented as first-order logic; a type of logic that uses variables and quantifiers in logical sentences. Hypotheses are represented by logical sentences with variables, while examples are logical sentences with set values instead of variables. The goal is to assign a value to a special first-order logic predicate, called goal predicate, for new examples given a hypothesis. We learn this hypothesis by infering knowledge from some given examples. Representation In this module, we use dictionaries to represent examples, with keys being the attribute names and values being the corresponding example values. Examples also have an extra boolean field, 'GOAL', for the goal predicate. A hypothesis is represented as a list of dictionaries. Each dictionary in that list represents a disjunction. Inside these dictionaries/disjunctions we have conjunctions. For example, say we want to predict if an animal (cat or dog) will take an umbrella given whether or not it rains or the animal wears a coat. The goal value is 'take an umbrella' and is denoted by the key 'GOAL'. An example Step2: Implementation As mentioned earlier, examples are dictionaries (with keys being the attribute names) and hypotheses are lists of dictionaries (each dictionary is a disjunction). Also, in the hypothesis, we denote the NOT operation with an exclamation mark (!). We have functions to calculate the list of all specializations/generalizations, to check if an example is consistent/false positive/false negative with a hypothesis. We also have an auxiliary function to add a disjunction (or operation) to a hypothesis, and two other functions to check consistency of all (or just the negative) examples. You can read the source by running the cell below Step3: You can view the auxiliary functions in the knowledge module. A few notes on the functionality of some of the important methods Step4: Let our initial hypothesis be [{'Species' Step5: We got 5/7 correct. Not terribly bad, but we can do better. Lets now run the algorithm and see how that performs in comparison to our current result. Step6: We got everything right! Let's print our hypothesis Step7: If an example meets any of the disjunctions in the list, it will be True, otherwise it will be False. Let's move on to a bigger example, the "Restaurant" example from the book. The attributes for each example are the following Step8: In code Step9: Say our initial hypothesis is that there should be an alternative option and lets run the algorithm. Step10: The predictions are correct. Let's see the hypothesis that accomplished that
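Before the notebook code below, a minimal sketch of how such a list-of-dictionaries hypothesis can be evaluated against an example. The aima-python knowledge module provides guess_value for exactly this, so the helper here is purely illustrative.

```python
def predict(example, hypothesis):
    """True if the example satisfies at least one disjunct of the hypothesis."""
    for disjunct in hypothesis:
        # every attribute in the disjunct must match; a leading '!' means negation,
        # mirroring the convention described above
        if all(example[attr] != val[1:] if val.startswith('!') else example[attr] == val
               for attr, val in disjunct.items()):
            return True
    return False

e = {'Species': 'Cat', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}
h = [{'Species': 'Cat'}]
print(predict(e, h) == e['GOAL'])   # True: e is consistent with h
```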
Python Code: from knowledge import * from notebook import pseudocode, psource Explanation: KNOWLEDGE The knowledge module covers Chapter 19: Knowledge in Learning from Stuart Russel's and Peter Norvig's book Artificial Intelligence: A Modern Approach. Execute the cell below to get started. End of explanation pseudocode('Current-Best-Learning') Explanation: CONTENTS Overview Current-Best Learning OVERVIEW Like the learning module, this chapter focuses on methods for generating a model/hypothesis for a domain; however, unlike the learning chapter, here we use prior knowledge to help us learn from new experiences and find a proper hypothesis. First-Order Logic Usually knowledge in this field is represented as first-order logic; a type of logic that uses variables and quantifiers in logical sentences. Hypotheses are represented by logical sentences with variables, while examples are logical sentences with set values instead of variables. The goal is to assign a value to a special first-order logic predicate, called goal predicate, for new examples given a hypothesis. We learn this hypothesis by infering knowledge from some given examples. Representation In this module, we use dictionaries to represent examples, with keys being the attribute names and values being the corresponding example values. Examples also have an extra boolean field, 'GOAL', for the goal predicate. A hypothesis is represented as a list of dictionaries. Each dictionary in that list represents a disjunction. Inside these dictionaries/disjunctions we have conjunctions. For example, say we want to predict if an animal (cat or dog) will take an umbrella given whether or not it rains or the animal wears a coat. The goal value is 'take an umbrella' and is denoted by the key 'GOAL'. An example: {'Species': 'Cat', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True} A hypothesis can be the following: [{'Species': 'Cat'}] which means an animal will take an umbrella if and only if it is a cat. Consistency We say that an example e is consistent with an hypothesis h if the assignment from the hypothesis for e is the same as e['GOAL']. If the above example and hypothesis are e and h respectively, then e is consistent with h since e['Species'] == 'Cat'. For e = {'Species': 'Dog', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}, the example is no longer consistent with h, since the value assigned to e is False while e['GOAL'] is True. CURRENT-BEST LEARNING Overview In Current-Best Learning, we start with a hypothesis and we refine it as we iterate through the examples. For each example, there are three possible outcomes: the example is consistent with the hypothesis, the example is a false positive (real value is false but got predicted as true) and the example is a false negative (real value is true but got predicted as false). Depending on the outcome we refine the hypothesis accordingly: Consistent: We do not change the hypothesis and move on to the next example. False Positive: We specialize the hypothesis, which means we add a conjunction. False Negative: We generalize the hypothesis, either by removing a conjunction or a disjunction, or by adding a disjunction. When specializing or generalizing, we should make sure to not create inconsistencies with previous examples. To avoid that caveat, backtracking is needed. Thankfully, there is not just one specialization or generalization, so we have a lot to choose from. 
We will go through all the specializations/generalizations and we will refine our hypothesis as the first specialization/generalization consistent with all the examples seen up to that point. Pseudocode End of explanation psource(current_best_learning, specializations, generalizations) Explanation: Implementation As mentioned earlier, examples are dictionaries (with keys being the attribute names) and hypotheses are lists of dictionaries (each dictionary is a disjunction). Also, in the hypothesis, we denote the NOT operation with an exclamation mark (!). We have functions to calculate the list of all specializations/generalizations, to check if an example is consistent/false positive/false negative with a hypothesis. We also have an auxiliary function to add a disjunction (or operation) to a hypothesis, and two other functions to check consistency of all (or just the negative) examples. You can read the source by running the cell below: End of explanation animals_umbrellas = [ {'Species': 'Cat', 'Rain': 'Yes', 'Coat': 'No', 'GOAL': True}, {'Species': 'Cat', 'Rain': 'Yes', 'Coat': 'Yes', 'GOAL': True}, {'Species': 'Dog', 'Rain': 'Yes', 'Coat': 'Yes', 'GOAL': True}, {'Species': 'Dog', 'Rain': 'Yes', 'Coat': 'No', 'GOAL': False}, {'Species': 'Dog', 'Rain': 'No', 'Coat': 'No', 'GOAL': False}, {'Species': 'Cat', 'Rain': 'No', 'Coat': 'No', 'GOAL': False}, {'Species': 'Cat', 'Rain': 'No', 'Coat': 'Yes', 'GOAL': True} ] Explanation: You can view the auxiliary functions in the knowledge module. A few notes on the functionality of some of the important methods: specializations: For each disjunction in the hypothesis, it adds a conjunction for values in the examples encountered so far (if the conjunction is consistent with all the examples). It returns a list of hypotheses. generalizations: It adds to the list of hypotheses in three phases. First it deletes disjunctions, then it deletes conjunctions and finally it adds a disjunction. add_or: Used by generalizations to add an or operation (a disjunction) to the hypothesis. Since the last example is the problematic one which wasn't consistent with the hypothesis, it will model the new disjunction to that example. It creates a disjunction for each combination of attributes in the example and returns the new hypotheses consistent with the negative examples encountered so far. We do not need to check the consistency of positive examples, since they are already consistent with at least one other disjunction in the hypotheses' set, so this new disjunction doesn't affect them. In other words, if the value of a positive example is negative under the disjunction, it doesn't matter since we know there exists a disjunction consistent with the example. Since the algorithm stops searching the specializations/generalizations after the first consistent hypothesis is found, usually you will get different results each time you run the code. Examples We will take a look at two examples. The first is a trivial one, while the second is a bit more complicated (you can also find it in the book). Earlier, we had the "animals taking umbrellas" example. Now we want to find a hypothesis to predict whether or not an animal will take an umbrella. The attributes are Species, Rain and Coat. The possible values are [Cat, Dog], [Yes, No] and [Yes, No] respectively. 
Below we give seven examples (with GOAL we denote whether an animal will take an umbrella or not): End of explanation initial_h = [{'Species': 'Cat'}] for e in animals_umbrellas: print(guess_value(e, initial_h)) Explanation: Let our initial hypothesis be [{'Species': 'Cat'}]. That means every cat will be taking an umbrella. We can see that this is not true, but it doesn't matter since we will refine the hypothesis using the Current-Best algorithm. First, let's see how that initial hypothesis fares to have a point of reference. End of explanation h = current_best_learning(animals_umbrellas, initial_h) for e in animals_umbrellas: print(guess_value(e, h)) Explanation: We got 5/7 correct. Not terribly bad, but we can do better. Lets now run the algorithm and see how that performs in comparison to our current result. End of explanation print(h) Explanation: We got everything right! Let's print our hypothesis: End of explanation def r_example(Alt, Bar, Fri, Hun, Pat, Price, Rain, Res, Type, Est, GOAL): return {'Alt': Alt, 'Bar': Bar, 'Fri': Fri, 'Hun': Hun, 'Pat': Pat, 'Price': Price, 'Rain': Rain, 'Res': Res, 'Type': Type, 'Est': Est, 'GOAL': GOAL} Explanation: If an example meets any of the disjunctions in the list, it will be True, otherwise it will be False. Let's move on to a bigger example, the "Restaurant" example from the book. The attributes for each example are the following: Alternative option (Alt) Bar to hang out/wait (Bar) Day is Friday (Fri) Is hungry (Hun) How much does it cost (Price, takes values in [$, $$, $$$]) How many patrons are there (Pat, takes values in [None, Some, Full]) Is raining (Rain) Has made reservation (Res) Type of restaurant (Type, takes values in [French, Thai, Burger, Italian]) Estimated waiting time (Est, takes values in [0-10, 10-30, 30-60, >60]) We want to predict if someone will wait or not (Goal = WillWait). Below we show twelve examples found in the book. With the function r_example we will build the dictionary examples: End of explanation restaurant = [ r_example('Yes', 'No', 'No', 'Yes', 'Some', '$$$', 'No', 'Yes', 'French', '0-10', True), r_example('Yes', 'No', 'No', 'Yes', 'Full', '$', 'No', 'No', 'Thai', '30-60', False), r_example('No', 'Yes', 'No', 'No', 'Some', '$', 'No', 'No', 'Burger', '0-10', True), r_example('Yes', 'No', 'Yes', 'Yes', 'Full', '$', 'Yes', 'No', 'Thai', '10-30', True), r_example('Yes', 'No', 'Yes', 'No', 'Full', '$$$', 'No', 'Yes', 'French', '>60', False), r_example('No', 'Yes', 'No', 'Yes', 'Some', '$$', 'Yes', 'Yes', 'Italian', '0-10', True), r_example('No', 'Yes', 'No', 'No', 'None', '$', 'Yes', 'No', 'Burger', '0-10', False), r_example('No', 'No', 'No', 'Yes', 'Some', '$$', 'Yes', 'Yes', 'Thai', '0-10', True), r_example('No', 'Yes', 'Yes', 'No', 'Full', '$', 'Yes', 'No', 'Burger', '>60', False), r_example('Yes', 'Yes', 'Yes', 'Yes', 'Full', '$$$', 'No', 'Yes', 'Italian', '10-30', False), r_example('No', 'No', 'No', 'No', 'None', '$', 'No', 'No', 'Thai', '0-10', False), r_example('Yes', 'Yes', 'Yes', 'Yes', 'Full', '$', 'No', 'No', 'Burger', '30-60', True) ] Explanation: In code: End of explanation initial_h = [{'Alt': 'Yes'}] h = current_best_learning(restaurant, initial_h) for e in restaurant: print(guess_value(e, h)) Explanation: Say our initial hypothesis is that there should be an alternative option and lets run the algorithm. End of explanation print(h) Explanation: The predictions are correct. Let's see the hypothesis that accomplished that: End of explanation
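One small follow-up that is not in the original notebook: a quick accuracy check over a set of examples, assuming guess_value and the restaurant and h variables from the cells above are still in scope.

```python
def accuracy(examples, hypothesis):
    hits = sum(guess_value(e, hypothesis) == e['GOAL'] for e in examples)
    return hits / len(examples)

print(accuracy(restaurant, h))   # expected 1.0 after the run above
```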
12,046
Given the following text description, write Python code to implement the functionality described below step by step Description: Input File Creation First let's start with some tools to create input files for a given deployment schedule. Step3: Simulate Now let's build some tools to run simulations and extract a GWe time series. Step7: Distancing Now let's build some tools to distance between a GWe time series and a demand curve. Step9: Initialize Optimization Now let's start with a couple of simple simulations Step10: First, add a schedule where nothing is deployed, leaving the initial facilities to retire. Step11: Next, add a simulation that is the max deployment schedule to bound the space Step18: Optimizer Now let's add some tools to do the estimation phase of the optimization. Step19: Simulate Ok! Let's try it.
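Since the "Distancing" step above hinges on dynamic time warping, here is a small, self-contained NumPy sketch of the distance being computed. The notebook itself relies on a separate dtw module (which may normalize differently), so treat this as an illustration rather than that module's API.

```python
import numpy as np

def dtw_distance(f, g):
    """O(len(f) * len(g)) dynamic-time-warping distance between two 1-D series."""
    n, m = len(f), len(g)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(f[i - 1] - g[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

demand = 90 * 1.01 ** np.arange(10)   # 1% growth, as in the notebook
gwe = np.linspace(85, 100, 10)        # a made-up generation curve
print(dtw_distance(demand, gwe))
```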
Python Code: import os import sys import uuid import json import time import subprocess from math import ceil from copy import deepcopy import numpy as np import pandas as pd import cymetric as cym %matplotlib inline import matplotlib.pyplot as plt import george import dtw with open('once-through.json') as f: BASE_SIM = json.load(f) DURATION = BASE_SIM['simulation']['control']['duration'] YEARS = ceil(DURATION / 12) MONTH_SHUFFLE = (1, 7, 10, 4, 8, 6, 12, 2, 5, 9, 11, 3) NULL_SCHEDULE = {'build_times': [{'val': 1}], 'n_build': [{'val': 0}], 'prototypes': [{'val': 'LWR'}]} LWR_PROTOTYPE = {'val': 'LWR'} OPT_H5 = 'opt.h5' BASE_SIM['simulation']['region']['institution']['config']['DeployInst'] def deploy_inst_schedule(Θ): if np.sum(Θ) == 0: return NULL_SCHEDULE sched = {'build_times': {'val': []}, 'n_build': {'val': []}, 'prototypes': {'val': []}} build_times = sched['build_times']['val'] n_build = sched['n_build']['val'] prototypes = sched['prototypes']['val'] m = 0 for i, θ in enumerate(Θ): if θ <= 0: continue build_times.append(i*12 + MONTH_SHUFFLE[m]) n_build.append(int(θ)) prototypes.append('LWR') m = (m + 1) % 12 return sched def make_sim(Θ, fname='sim.json'): sim = deepcopy(BASE_SIM) inst = sim['simulation']['region']['institution'] inst['config']['DeployInst'] = deploy_inst_schedule(Θ) with open(fname, 'w') as f: json.dump(sim, f) return sim s = make_sim([]) s['simulation']['region']['institution']['config']['DeployInst'] Explanation: Input File Creation First let's start with some tools to create input files for a given deployment schedule. End of explanation def run(fname='sim.json', out=OPT_H5): Runs a simulation and returns the sim id. cmd = ['cyclus', '--warn-limit', '0', '-o', out, fname] proc = subprocess.run(cmd, check=True, universal_newlines=True, stdout=subprocess.PIPE) simid = proc.stdout.rsplit(None, 1)[-1] return simid ZERO_GWE = pd.DataFrame({'GWe': np.zeros(YEARS)}, index=np.arange(YEARS)) ZERO_GWE.index.name = 'Time' def extract_gwe(simid, out=OPT_H5): Computes the annual GWe for a simulation. with cym.dbopen(out) as db: evaler = cym.Evaluator(db) raw = evaler.eval('TimeSeriesPower', conds=[('SimId', '==', uuid.UUID(simid))]) ano = pd.DataFrame({'Time': raw.Time.apply(lambda x: x//12), 'GWe': raw.Value.apply(lambda x: 1e-3*x/12)}) gwe = ano.groupby('Time').sum() gwe = (gwe + ZERO_GWE).fillna(0.0) return np.array(gwe.GWe) Explanation: Simulate Now let's build some tools to run simulations and extract a GWe time series. End of explanation DEFAULT_DEMAND = 90 * (1.01**np.arange(YEARS)) # 1% growth def d(g, f=None): The dynamic time warping distance between a GWe time series and a demand curve. f = DEFAULT_DEMAND if f is None else f rtn = dtw.distance(f[:, np.newaxis], g[:, np.newaxis]) return rtn def αd(g, f=None, dtol=1e-5): Computes the mininmal α and the DTW d. f = DEFAULT_DEMAND if f is None else f C = dtw.cost_matrix(f[:, np.newaxis], g[:, np.newaxis]) d = dtw.distance(cost=C) #d_t = np.diagonal(C) / (np.arange(1, C.shape[0] + 1) + np.arange(1, C.shape[1] + 1)) d_t = np.diagonal(C) / np.sum(C.shape) #α = np.argwhere(np.cumsum(d_t <= dtol) != np.arange(1, len(d_t) + 1))[0,0] filt = np.argwhere(d_t <= dtol) α = filt[-1,-1] if len(filt) > 0 else 0 #α = filt[0,0] if len(filt) > 0 else 0 print("Simulation α", α) print("Simulation d(t)", d_t) return α, d def gwed(Θ, f=None, dtol=1e-5, find_α=False): For a given deployment schedule Θ, return the GWe time series and the distance to the demand function f. 
make_sim(Θ) simid = run() gwe = extract_gwe(simid) if find_α: rtn = αd(gwe, f=f, dtol=dtol) else: rtn = d(gwe, f=f) return (gwe,) + rtn Explanation: Distancing Now let's build some tools to distance between a GWe time series and a demand curve. End of explanation N = np.asarray(np.ceil(4*(1.01)**np.arange(YEARS)), dtype=int) # max annual deployments Θs = [] # deployment schedules G = [] # GWe per sim D = [] # distances per sim if os.path.isfile(OPT_H5): os.remove(OPT_H5) def add_sim(Θ, f=None, dtol=1e-5): Add a simulation to the known simulations by performing the simulation. g_s, α, d_s = gwed(Θ, f=f, dtol=dtol, find_α=True) Θs.append(Θ) G.append(g_s) D.append(d_s) return α Explanation: Initialize Optimization Now let's start with a couple of simple simulations End of explanation add_sim(np.zeros(YEARS, dtype=int)) Explanation: First, add a schedule where nothing is deployed, leaving the initial facilities to retire. End of explanation add_sim(N) Explanation: Next, add a simulation that is the max deployment schedule to bound the space End of explanation Γ = 285 np.random.seed(424242) def gp_gwe(Θs, G, α, T=None, tol=1e-6, verbose=False): Create a Gaussian process regression model for GWe. S = len(G) T = YEARS if T is None else T t = np.arange(T) P = len(Θs[0]) ndim = P + 1 - α y_mean = np.mean(G) y = np.concatenate(G) x = np.empty((S*T, ndim), dtype=int) for i in range(S): x[i*T:(i+1)*T, 0] = t x[i*T:(i+1)*T, 1:] = Θs[i][np.newaxis, α:] yerr = tol * y_mean #kernel = float(y_mean) * george.kernels.ExpSquaredKernel(1.0, ndim=ndim) #for p in range(P): # kernel *= george.kernels.ExpSquaredKernel(1.0, ndim=ndim) #kernel = float(y_mean) * george.kernels.Matern52Kernel(1.0, ndim=ndim) kernel = float(y_mean) * george.kernels.Matern32Kernel(1.0, ndim=ndim) gp = george.GP(kernel, mean=y_mean) gp.compute(x, yerr=yerr, sort=False) gp.optimize(x, y, yerr=yerr, sort=False, verbose=verbose) return gp, x, y def predict_gwe(Θ, gp, y, α, T=None): Predict GWe for a deployment schedule Θ and a GP. T = YEARS if T is None else T t = np.arange(T) P = len(Θ) ndim = P + 1 - α x = np.empty((T, ndim), dtype=int) x[:,0] = t x[:,1:] = Θ[np.newaxis,α:] mu = gp.predict(y, x, mean_only=True) return mu def gp_d_inv(θ_p, D_inv, tol=1e-6, verbose=False): Computes a Gaussian process model for a deployment parameter. S = len(D) ndim = 1 x = θ_p y = D_inv y_mean = np.mean(y) yerr = tol * y_mean kernel = float(y_mean) * george.kernels.ExpSquaredKernel(1.0, ndim=ndim) gp = george.GP(kernel, mean=y_mean, solver=george.HODLRSolver) gp.compute(x, yerr=yerr, sort=False) gp.optimize(x, y, yerr=yerr, sort=False, verbose=verbose) return gp, x, y def weights(Θs, D, N, Nlower, α, tol=1e-6, verbose=False): P = len(N) θ_ps = np.array(Θs) D = np.asarray(D) D_inv = D**-1 W = [None] * α for p in range(α, P): θ_p = θ_ps[:,p] range_p = np.arange(Nlower[p], N[p] + 1) gp, _, _ = gp_d_inv(θ_p, D_inv, tol=tol, verbose=verbose) d_inv_np = gp.predict(D_inv, range_p, mean_only=True) #p_min = np.argmin(D) #lam = θ_p[p_min] #fact = np.cumprod([1.0] + list(range(1, N[p] + 1)))[Nlower[p]:N[p] + 1] #d_inv_np = np.exp(-lam) * (lam**range_p) / fact if np.all(np.isnan(d_inv_np)) or np.all(d_inv_np <= 0.0): # try D, instead of D^-1 #gp, _, _ = gp_d_inv(θ_p, D, tol=tol, verbose=verbose) #d_np = gp.predict(D, np.arange(0, N[p] + 1), mean_only=True) # try setting the shortest d to 1, all others 0. #d_inp_np = np.zeros(N[p] + 1, dtype='f8') #p_min = np.argmin(D) #d_inv_np[np.argwhere(θ_p[p_min] == range_p)] = 1.0 # try Poisson dist centered at min. 
p_min = np.argmin(D) lam = θ_p[p_min] fact = np.cumprod([1.0] + list(range(1, N[p] + 1)))[Nlower[p]:N[p] + 1] d_inv_np = np.exp(-lam) * (lam**range_p) / fact if np.any(d_inv_np < 0.0): d_inv_np[d_inv_np < 0.0] = np.min(d_inv_np[d_inv_np > 0.0]) d_inv_np_tot = d_inv_np.sum() w_p = d_inv_np / d_inv_np_tot W.append(w_p) return W def guess_scheds(Θs, W, Γ, gp, y, α, T=None): Guess a new deployment schedule, given a number of samples Γ, weights W, and Guassian process for the GWe. P = len(W) Θ_γs = np.empty((Γ, P), dtype=int) Θ_γs[:, :α] = Θs[0][:α] for p in range(α, P): w_p = W[p] Θ_γs[:, p] = np.random.choice(len(w_p), size=Γ, p=w_p) Δ = [] for γ in range(Γ): Θ_γ = Θ_γs[γ] g_star = predict_gwe(Θ_γ, gp, y, α, T=T) d_star = d(g_star) Δ.append(d_star) γ = np.argmin(Δ) Θ_γ = Θ_γs[γ] print('hyperparameters', gp.kernel[:]) #print('Θ_γs', Θ_γs) #print('Θ_γs[γ]', Θ_γs[γ]) #print('Predition', Δ[γ], Δ) return Θ_γ, Δ[γ] def guess_scheds_loop(Θs, gp, y, N, Nlower): Guess a new deployment schedule, given a number of samples Γ, weights W, and Guassian process for the GWe. P = len(N) Θ = np.array(Θs[0], dtype=int) for p in range(P): d_p = [] range_p = np.arange(Nlower[p], N[p] + 1, dtype=int) for n_p in range_p: Θ[p] = n_p g_star = predict_gwe(Θ, gp, y, α=0, T=p+1)[:p+1] d_star = d(g_star, f=DEFAULT_DEMAND[:p+1]) d_p.append(d_star) Θ[p] = range_p[np.argmin(d_p)] print('hyperparameters', gp.kernel[:]) return Θ, np.min(d_p) def estimate(Θs, G, D, N, Nlower, Γ, α, T=None, tol=1e-6, verbose=False, method='stochastic'): Runs an estimation step, returning a new deployment schedule. gp, x, y = gp_gwe(Θs, G, α, T=T, tol=tol, verbose=verbose) if method == 'stochastic': # orig W = weights(Θs, D, N, Nlower, α, tol=tol, verbose=verbose) Θ, dmin = guess_scheds(Θs, W, Γ, gp, y, α, T=T) elif method == 'inner-prod': # inner prod Θ, dmin = guess_scheds_loop(Θs, gp, y, N, Nlower) elif method == 'all': W = weights(Θs, D, N, Nlower, α, tol=tol, verbose=verbose) Θ_stoch, dmin_stoch = guess_scheds(Θs, W, Γ, gp, y, α, T=T) Θ_inner, dmin_inner = guess_scheds_loop(Θs, gp, y, N, Nlower) if dmin_stoch < dmin_inner: winner = 'stochastic' Θ = Θ_stoch else: winner = 'inner' Θ = Θ_inner print('Estimate winner is {}'.format(winner)) else: raise ValueError('method {} not known'.format(method)) return Θ def optimize(MAX_D=0.1, MAX_S=12, T=None, tol=1e-6, dtol=1e-5, verbose=False): global Θs, G, D α = 0 s = 2 z = 2 n = N nlower = n0 = np.zeros(len(N), dtype=int) dtol = np.linspace(dtol * 2.0 / len(N), dtol, len(N)) method = 'stochastic' #method = 'all' while MAX_D < D[-1] and s < MAX_S and α + 1 < YEARS: print(s) print('-'*18) Gprev = np.array(G[:z]) t0 = time.time() method = 'stochastic' if s%4 < 2 else 'all' Θ = estimate(Θs, G, D, n, nlower, Γ, α, T=T, tol=tol, verbose=verbose, method=method) t1 = time.time() α_s = add_sim(Θ, dtol=dtol) t2 = time.time() print('Estimate time: {0} min {1} sec'.format((t1-t0)//60, (t1-t0)%60)) print('Simulation time: {0} min {1} sec'.format((t2-t1)//60, (t2-t1)%60)) print(D) sys.stdout.flush() idx = [int(i) for i in np.argsort(D)[:z]] if D[-1] == max(D): idx.append(-1) #elif len(G) == z + 2 and np.allclose(G[:z], Gprev): # n = np.array([Θs[0] + 1, N], dtype=int).min(axis=0) # nlower = np.array([Θs[0] - 1, n0], dtype=int).max(axis=0) # print('New N-upper', n) # print('New N-lower', nlower) #if (α < α_s) and ((len(D) - 1) in idx[:2]): #if (α < α_s) and (len(D) == idx[0] + 1): if (len(D) == idx[0] + 1): print('Update α: {0} -> {1}'.format(α, α_s)) α = α_s # method = 'stochastic' #elif method == 'stochastic' and 
len(D) == z + 2: #elif len(D) == z + 2: # print('Trying inner product method') # method = 'inner-prod' #else: # print('Trying stochastic method') # method = 'stochastic' #elif α > 0: # print('Update α: {0} -> {1}'.format(α, α - 1)) # α -= 1 Θs = [Θs[i] for i in idx] G = [G[i] for i in idx] D = [D[i] for i in idx] s += 1 print() Explanation: Optimizer Now let's add some tools to do the estimation phase of the optimization. End of explanation %%time optimize(MAX_S=25, dtol=1e-5) Θs[0] G[0] DEFAULT_DEMAND (G[0] - DEFAULT_DEMAND) / DEFAULT_DEMAND np.abs(G[0] - DEFAULT_DEMAND) 1.7 / 40 Θs[1] - Θs[0] sum(N) Explanation: Simulate Ok! Let's try it. End of explanation
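For orientation, the estimation step above essentially centres a Poisson distribution on the best per-period deployment value found so far and samples new candidates from it. The sketch below is a minimal, self-contained illustration of that weighting idea inferred from the fragment above; the function name and the example values are ours, not part of the notebook.

```python
import numpy as np
from math import factorial

def poisson_weights(candidates, lam):
    # Poisson pmf exp(-lam) * lam**n / n! over integer candidates,
    # renormalised to sum to one (mirrors the weights() logic above).
    lam = max(float(lam), 1e-6)              # guard against lam == 0
    pmf = np.array([np.exp(-lam) * lam**n / factorial(n) for n in candidates])
    return pmf / pmf.sum()

# Example: a period may deploy 0..6 units; the best schedule so far used 3.
w = poisson_weights(range(7), 3)
new_n = np.random.choice(7, p=w)             # sample a new candidate for that period
```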
12,047
Given the following text description, write Python code to implement the functionality described below step by step Description: Morph volumetric source estimate This example demonstrates how to morph an individual subject's Step1: Setup paths Step2: Compute example data. For reference see ex-inverse-volume. Load data Step3: Get a SourceMorph object for VolSourceEstimate subject_from can typically be inferred from Step4: Apply morph to VolSourceEstimate The morph can be applied to the source estimate data, by giving it as the first argument to the Step5: Convert morphed VolSourceEstimate into NIfTI We can convert our morphed source estimate into a NIfTI volume using Step6: Plot results
Python Code: # Author: Tommy Clausner <[email protected]> # # License: BSD-3-Clause import os import nibabel as nib import mne from mne.datasets import sample, fetch_fsaverage from mne.minimum_norm import apply_inverse, read_inverse_operator from nilearn.plotting import plot_glass_brain print(__doc__) Explanation: Morph volumetric source estimate This example demonstrates how to morph an individual subject's :class:mne.VolSourceEstimate to a common reference space. We achieve this using :class:mne.SourceMorph. Data will be morphed based on an affine transformation and a nonlinear registration method known as Symmetric Diffeomorphic Registration (SDR) by :footcite:AvantsEtAl2008. Transformation is estimated from the subject's anatomical T1 weighted MRI (brain) to FreeSurfer's 'fsaverage' T1 weighted MRI (brain)_. Afterwards the transformation will be applied to the volumetric source estimate. The result will be plotted, showing the fsaverage T1 weighted anatomical MRI, overlaid with the morphed volumetric source estimate. End of explanation sample_dir_raw = sample.data_path() sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample') subjects_dir = os.path.join(sample_dir_raw, 'subjects') fname_evoked = os.path.join(sample_dir, 'sample_audvis-ave.fif') fname_inv = os.path.join(sample_dir, 'sample_audvis-meg-vol-7-meg-inv.fif') fname_t1_fsaverage = os.path.join(subjects_dir, 'fsaverage', 'mri', 'brain.mgz') fetch_fsaverage(subjects_dir) # ensure fsaverage src exists fname_src_fsaverage = subjects_dir + '/fsaverage/bem/fsaverage-vol-5-src.fif' Explanation: Setup paths End of explanation evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0)) inverse_operator = read_inverse_operator(fname_inv) # Apply inverse operator stc = apply_inverse(evoked, inverse_operator, 1.0 / 3.0 ** 2, "dSPM") # To save time stc.crop(0.09, 0.09) Explanation: Compute example data. For reference see ex-inverse-volume. Load data: End of explanation src_fs = mne.read_source_spaces(fname_src_fsaverage) morph = mne.compute_source_morph( inverse_operator['src'], subject_from='sample', subjects_dir=subjects_dir, niter_affine=[10, 10, 5], niter_sdr=[10, 10, 5], # just for speed src_to=src_fs, verbose=True) Explanation: Get a SourceMorph object for VolSourceEstimate subject_from can typically be inferred from :class:src &lt;mne.SourceSpaces&gt;, and subject_to is set to 'fsaverage' by default. subjects_dir can be None when set in the environment. In that case SourceMorph can be initialized taking src as only argument. See :class:mne.SourceMorph for more details. The default parameter setting for zooms will cause the reference volumes to be resliced before computing the transform. A value of '5' would cause the function to reslice to an isotropic voxel size of 5 mm. The higher this value the less accurate but faster the computation will be. The recommended way to use this is to morph to a specific destination source space so that different subject_from morphs will go to the same space.` A standard usage for volumetric data reads: End of explanation stc_fsaverage = morph.apply(stc) Explanation: Apply morph to VolSourceEstimate The morph can be applied to the source estimate data, by giving it as the first argument to the :meth:morph.apply() &lt;mne.SourceMorph.apply&gt; method. <div class="alert alert-info"><h4>Note</h4><p>Volumetric morphing is much slower than surface morphing because the volume for each time point is individually resampled and SDR morphed. 
The :meth:`mne.SourceMorph.compute_vol_morph_mat` method can be used to compute an equivalent sparse matrix representation by computing the transformation for each source point individually. This generally takes a few minutes to compute, but can be :meth:`saved <mne.SourceMorph.save>` to disk and be reused. The resulting sparse matrix operation is very fast (about 400× faster) to :meth:`apply <mne.SourceMorph.apply>`. This approach is more efficient when the number of time points to be morphed exceeds the number of source space points, which is generally in the thousands. This can easily occur when morphing many time points and multiple conditions.</p></div> End of explanation # Create mri-resolution volume of results img_fsaverage = morph.apply(stc, mri_resolution=2, output='nifti1') Explanation: Convert morphed VolSourceEstimate into NIfTI We can convert our morphed source estimate into a NIfTI volume using :meth:morph.apply(..., output='nifti1') &lt;mne.SourceMorph.apply&gt;. End of explanation # Load fsaverage anatomical image t1_fsaverage = nib.load(fname_t1_fsaverage) # Plot glass brain (change to plot_anat to display an overlaid anatomical T1) display = plot_glass_brain(t1_fsaverage, title='subject results to fsaverage', draw_cross=False, annotate=True) # Add functional data as overlay display.add_overlay(img_fsaverage, alpha=0.75) Explanation: Plot results End of explanation
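A minimal sketch of the workflow described in the note above (not part of the original example; the file name is an assumption): precompute the sparse volumetric morph matrix once, save the fitted morph, and reuse it for many time points.

```python
# Sketch only: precompute, save, and reuse the sparse volumetric morph.
morph.compute_vol_morph_mat()                        # slow step, done once
morph.save('sample-to-fsaverage-morph.h5', overwrite=True)   # hypothetical file name

morph_reloaded = mne.read_source_morph('sample-to-fsaverage-morph.h5')
stc_fsaverage_fast = morph_reloaded.apply(stc)       # fast sparse-matrix apply
```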
12,048
Given the following text description, write Python code to implement the functionality described below step by step Description: Natural Language Processing with NLTK Author Step1: 1. Corpus acquisition. In these notebooks we will explore some tools for text processing and analysis and two topic modeling algorithms available from Python toolboxes. To do so, we will explore and analyze collections of Wikipedia articles from a given category, using wikitools, that makes easy the capture of content from wikimedia sites. (As a side note, there are many other available text collections to test topic modelling algorithm. In particular, the NLTK library has many examples, that can explore them using the nltk.download() tool. import nltk nltk.download() for instance, you can take the gutemberg dataset Mycorpus = nltk.corpus.gutenberg text_name = Mycorpus.fileids()[0] raw = Mycorpus.raw(text_name) Words = Mycorpus.words(text_name) Also, tools like Gensim or Sci-kit learn include text databases to work with). In order to use Wikipedia data, we will select a single category of articles Step2: You can try with any other categories. Take into account that the behavior of topic modelling algorithms may depend on the amount of documents available for the analysis. Select a category with at least 100 articles. You can browse the wikipedia category tree here, https Step3: Now, we have stored the whole text collection in two lists Step4: 2. Corpus Processing Topic modelling algorithms process vectorized data. In order to apply them, we need to transform the raw text input data into a vector representation. To do so, we will remove irrelevant information from the text data and preserve as much relevant information as possible to capture the semantic content in the document collection. Thus, we will proceed with the following steps Step5: Task Step6: 2.2. Homogeneization By looking at the tokenized corpus you may verify that there are many tokens that correspond to punktuation signs and other symbols that are not relevant to analyze the semantic content. They can be removed using the stemming tool from nltk. The homogeneization process will consist of Step7: 2.2.2. Stemming vs Lemmatization At this point, we can choose between applying a simple stemming or ussing lemmatization. We will try both to test their differences. Task Step8: Alternatively, we can apply lemmatization. For english texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used wordnet before, you will likely need to download it from nltk Step9: Task Step10: One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which is more advisable for the presentation of text processing results and lemmatization. However, without using contextual information, lemmatize() does not remove grammatical differences. This is the reason why "is" or "are" are preserved and not replaced by infinitive "be". As an alternative, we can apply .lemmatize(word, pos), where 'pos' is a string code specifying the part-of-speech (pos), i.e. the grammatical role of the words in its sentence. For instance, you can check the difference between wnl.lemmatize('is') and wnl.lemmatize('is, pos='v'). 2.3. Cleaning The third step consists of removing those words that are very common in language and do not carry out usefull semantic content (articles, pronouns, etc). Once again, we might need to load the stopword files using the download tools from nltk Step11: Task Step12: 2.4. 
Vectorization Up to this point, we have transformed the raw text collection of articles in a list of articles, where each article is a collection of the word roots that are most relevant for semantic analysis. Now, we need to convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). To do so, we will start using the tools provided by the gensim library. As a first step, we create a dictionary containing all tokens in our text corpus, and assigning an integer identifier to each one of them. Step13: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transform any list of tokens into a list of tuples (token_id, n), one per each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list. Task Step14: At this point, it is good to make sure to understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assign an integer identifier to each token in the corpus. After that, we have transformed each article (in corpus_clean) in a list tuples (id, n). Step15: Note that we can interpret each element of corpus_bow as a sparse_vector. For example, a list of tuples [(0, 1), (3, 3), (5,2)] for a dictionary of 10 elements can be represented as a vector, where any tuple (id, n) states that position id must take value n. The rest of positions must be zero. [1, 0, 0, 3, 0, 2, 0, 0, 0, 0] These sparse vectors will be the inputs to the topic modeling algorithms. Note that, at this point, we have built a Dictionary containing Step16: and a bow representation of a corpus with Step17: Before starting with the semantic analyisis, it is interesting to observe the token distribution for the given corpus. Step18: ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. For instance, the most frequent term is Step19: which appears Step20: In the following we plot the most frequent terms in the corpus. Step21: Exercise Step22: Exercise Step23: Exercise Step24: Exercise (All in one) Step25: Exercise (Visualizing categories) Step26: Exercise (bigrams) Step27: 2.4. Saving results The dictionary D and the Bag of Words in corpus_bow are the key inputs to the topic model algorithms analyzed in the following notebook. Save them to be ready to use them during the next session.
Python Code: %matplotlib inline # Required imports from wikitools import wiki from wikitools import category import nltk from nltk.tokenize import word_tokenize from nltk.corpus import stopwords from nltk.stem import WordNetLemmatizer import gensim import numpy as np import lda import lda.datasets import matplotlib.pyplot as plt from test_helper import Test Explanation: Natural Language Processing with NLTK Author: Jesús Cid Sueiro Date: 2016/04/03 In this notebook we will explore some tools for text analysis in python. To do so, first we will import the requested python libraries. End of explanation site = wiki.Wiki("https://en.wikipedia.org/w/api.php") # Select a category with a reasonable number of articles (>100) cat = "Economics" # cat = "Pseudoscience" print cat Explanation: 1. Corpus acquisition. In these notebooks we will explore some tools for text processing and analysis and two topic modeling algorithms available from Python toolboxes. To do so, we will explore and analyze collections of Wikipedia articles from a given category, using wikitools, that makes easy the capture of content from wikimedia sites. (As a side note, there are many other available text collections to test topic modelling algorithm. In particular, the NLTK library has many examples, that can explore them using the nltk.download() tool. import nltk nltk.download() for instance, you can take the gutemberg dataset Mycorpus = nltk.corpus.gutenberg text_name = Mycorpus.fileids()[0] raw = Mycorpus.raw(text_name) Words = Mycorpus.words(text_name) Also, tools like Gensim or Sci-kit learn include text databases to work with). In order to use Wikipedia data, we will select a single category of articles: End of explanation # Loading category data. This may take a while print "Loading category data. This may take a while..." cat_data = category.Category(site, cat) corpus_titles = [] corpus_text = [] for n, page in enumerate(cat_data.getAllMembersGen()): print "\r Loading article {0}".format(n + 1), corpus_titles.append(page.title) corpus_text.append(page.getWikiText()) n_art = len(corpus_titles) print "\nLoaded " + str(n_art) + " articles from category " + cat Explanation: You can try with any other categories. Take into account that the behavior of topic modelling algorithms may depend on the amount of documents available for the analysis. Select a category with at least 100 articles. You can browse the wikipedia category tree here, https://en.wikipedia.org/wiki/Category:Contents, for instance. We start downloading the text collection. End of explanation # n = 5 # print corpus_titles[n] # print corpus_text[n] Explanation: Now, we have stored the whole text collection in two lists: corpus_titles, which contains the titles of the selected articles corpus_text, with the text content of the selected wikipedia articles You can browse the content of the wikipedia articles to get some intuition about the kind of documents that will be processed. End of explanation # You can comment this if the package is already available. # Select option "d) Download", and identifier "punkt" # nltk.download() Explanation: 2. Corpus Processing Topic modelling algorithms process vectorized data. In order to apply them, we need to transform the raw text input data into a vector representation. To do so, we will remove irrelevant information from the text data and preserve as much relevant information as possible to capture the semantic content in the document collection. 
Thus, we will proceed with the following steps: Tokenization Homogeneization Cleaning Vectorization 2.1. Tokenization For the first steps, we will use some of the powerfull methods available from the Natural Language Toolkit. In order to use the word_tokenize method from nltk, you might need to get the appropriate libraries using nltk.download(). You must select option "d) Download", and identifier "punkt" End of explanation corpus_tokens = [] for n, art in enumerate(corpus_text): print "\rTokenizing article {0} out of {1}".format(n + 1, n_art), # This is to make sure that all characters have the appropriate encoding. art = art.decode('utf-8') # Tokenize each text entry. # scode: tokens = <FILL IN> # Add the new token list as a new element to corpus_tokens (that will be a list of lists) # scode: <FILL IN> print "\n The corpus has been tokenized. Let's check some portion of the first article:" print corpus_tokens[0][0:30] Test.assertEquals(len(corpus_tokens), n_art, "The number of articles has changed unexpectedly") Test.assertTrue(len(corpus_tokens) >= 100, "Your corpus_tokens has less than 100 articles. Consider using a larger dataset") Explanation: Task: Insert the appropriate call to word_tokenize in the code below, in order to get the tokens list corresponding to each Wikipedia article: End of explanation # Select stemmer. stemmer = nltk.stem.SnowballStemmer('english') corpus_filtered = [] for n, token_list in enumerate(corpus_tokens): print "\rFiltering article {0} out of {1}".format(n + 1, n_art), # Convert all tokens in token_list to lowercase, remove non alfanumeric tokens and stem. # Store the result in a new token list, clean_tokens. # scode: filtered_tokens = <FILL IN> # Add art to corpus_filtered # scode: <FILL IN> print "\nLet's check the first tokens from document 0 after stemming:" print corpus_filtered[0][0:30] Test.assertTrue(all([c==c.lower() for c in corpus_filtered[23]]), 'Capital letters have not been removed') Test.assertTrue(all([c.isalnum() for c in corpus_filtered[13]]), 'Non alphanumeric characters have not been removed') Explanation: 2.2. Homogeneization By looking at the tokenized corpus you may verify that there are many tokens that correspond to punktuation signs and other symbols that are not relevant to analyze the semantic content. They can be removed using the stemming tool from nltk. The homogeneization process will consist of: Removing capitalization: capital alphabetic characters will be transformed to their corresponding lowercase characters. Removing non alphanumeric tokens (e.g. punktuation signs) Stemming/Lemmatization: removing word terminations to preserve the root of the words and ignore grammatical information. 2.2.1. Filtering Let us proceed with the filtering steps 1 and 2 (removing capitalization and non-alphanumeric tokens). Task: Convert all tokens in corpus_tokens to lowercase (using .lower() method) and remove non alphanumeric tokens (that you can detect with .isalnum() method). You can do it in a single line of code... End of explanation # Select stemmer. 
stemmer = nltk.stem.SnowballStemmer('english') corpus_stemmed = [] for n, token_list in enumerate(corpus_filtered): print "\rStemming article {0} out of {1}".format(n + 1, n_art), # Apply stemming to all tokens in token_list and save them in stemmed_tokens # scode: stemmed_tokens = <FILL IN> # Add stemmed_tokens to the stemmed corpus # scode: <FILL IN> print "\nLet's check the first tokens from document 0 after stemming:" print corpus_stemmed[0][0:30] Test.assertTrue((len([c for c in corpus_stemmed[0] if c!=stemmer.stem(c)]) < 0.1*len(corpus_stemmed[0])), 'It seems that stemming has not been applied properly') Explanation: 2.2.2. Stemming vs Lemmatization At this point, we can choose between applying a simple stemming or ussing lemmatization. We will try both to test their differences. Task: Apply the .stem() method, from the stemmer object created in the first line, to corpus_filtered. End of explanation # You can comment this if the package is already available. # Select option "d) Download", and identifier "wordnet" # nltk.download() Explanation: Alternatively, we can apply lemmatization. For english texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used wordnet before, you will likely need to download it from nltk End of explanation wnl = WordNetLemmatizer() # Select stemmer. corpus_lemmat = [] for n, token_list in enumerate(corpus_filtered): print "\rLemmatizing article {0} out of {1}".format(n + 1, n_art), # scode: lemmat_tokens = <FILL IN> # Add art to the stemmed corpus # scode: <FILL IN> print "\nLet's check the first tokens from document 0 after stemming:" print corpus_lemmat[0][0:30] Explanation: Task: Apply the .lemmatize() method, from the WordNetLemmatizer object created in the first line, to corpus_filtered. End of explanation # You can comment this if the package is already available. # Select option "d) Download", and identifier "stopwords" # nltk.download() Explanation: One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which is more advisable for the presentation of text processing results and lemmatization. However, without using contextual information, lemmatize() does not remove grammatical differences. This is the reason why "is" or "are" are preserved and not replaced by infinitive "be". As an alternative, we can apply .lemmatize(word, pos), where 'pos' is a string code specifying the part-of-speech (pos), i.e. the grammatical role of the words in its sentence. For instance, you can check the difference between wnl.lemmatize('is') and wnl.lemmatize('is, pos='v'). 2.3. Cleaning The third step consists of removing those words that are very common in language and do not carry out usefull semantic content (articles, pronouns, etc). 
Once again, we might need to load the stopword files using the download tools from nltk End of explanation corpus_clean = [] stopwords_en = stopwords.words('english') n = 0 for token_list in corpus_stemmed: n += 1 print "\rRemoving stopwords from article {0} out of {1}".format(n, n_art), # Remove all tokens in the stopwords list and append the result to corpus_clean # scode: clean_tokens = <FILL IN> # scode: <FILL IN> print "\n Let's check tokens after cleaning:" print corpus_clean[0][0:30] Test.assertTrue(len(corpus_clean) == n_art, 'List corpus_clean does not contain the expected number of articles') Test.assertTrue(len([c for c in corpus_clean[0] if c in stopwords_en])==0, 'Stopwords have not been removed') Explanation: Task: In the second line below we read a list of common english stopwords. Clean corpus_stemmed by removing all tokens in the stopword list. End of explanation # Create dictionary of tokens D = gensim.corpora.Dictionary(corpus_clean) n_tokens = len(D) print "The dictionary contains {0} tokens".format(n_tokens) print "First tokens in the dictionary: " for n in range(10): print str(n) + ": " + D[n] Explanation: 2.4. Vectorization Up to this point, we have transformed the raw text collection of articles in a list of articles, where each article is a collection of the word roots that are most relevant for semantic analysis. Now, we need to convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). To do so, we will start using the tools provided by the gensim library. As a first step, we create a dictionary containing all tokens in our text corpus, and assigning an integer identifier to each one of them. End of explanation # Transform token lists into sparse vectors on the D-space # scode: corpus_bow = <FILL IN> Test.assertTrue(len(corpus_bow)==n_art, 'corpus_bow has not the appropriate size') Explanation: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transform any list of tokens into a list of tuples (token_id, n), one per each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list. Task: Apply the doc2bow method from gensim dictionary D, to all tokens in every article in corpus_clean. The result must be a new list named corpus_bow where each element is a list of tuples (token_id, number_of_occurrences). End of explanation print "Original article (after cleaning): " print corpus_clean[0][0:30] print "Sparse vector representation (first 30 components):" print corpus_bow[0][0:30] print "The first component, {0} from document 0, states that token 0 ({1}) appears {2} times".format( corpus_bow[0][0], D[0], corpus_bow[0][0][1]) Explanation: At this point, it is good to make sure to understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assign an integer identifier to each token in the corpus. After that, we have transformed each article (in corpus_clean) in a list tuples (id, n). End of explanation print "{0} tokens".format(len(D)) Explanation: Note that we can interpret each element of corpus_bow as a sparse_vector. For example, a list of tuples [(0, 1), (3, 3), (5,2)] for a dictionary of 10 elements can be represented as a vector, where any tuple (id, n) states that position id must take value n. The rest of positions must be zero. 
[1, 0, 0, 3, 0, 2, 0, 0, 0, 0] These sparse vectors will be the inputs to the topic modeling algorithms. Note that, at this point, we have built a Dictionary containing End of explanation print "{0} Wikipedia articles".format(len(corpus_bow)) Explanation: and a bow representation of a corpus with End of explanation # SORTED TOKEN FREQUENCIES (I): # Create a "flat" corpus with all tuples in a single list corpus_bow_flat = [item for sublist in corpus_bow for item in sublist] # Initialize a numpy array that we will use to cont tokens. # token_count[n] should store the number of ocurrences of the n-th token, D[n] token_count = np.zeros(n_tokens) # Count the number of occurrences of each token. for x in corpus_bow_flat: # Update the proper element in token_count # scode: <FILL IN> # Sort by decreasing number of occurences ids_sorted = np.argsort(- token_count) tf_sorted = token_count[ids_sorted] Explanation: Before starting with the semantic analyisis, it is interesting to observe the token distribution for the given corpus. End of explanation print D[ids_sorted[0]] Explanation: ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. For instance, the most frequent term is End of explanation print "{0} times in the whole corpus".format(tf_sorted[0]) Explanation: which appears End of explanation # SORTED TOKEN FREQUENCIES (II): plt.rcdefaults() # Example data n_bins = 25 hot_tokens = [D[i] for i in ids_sorted[n_bins-1::-1]] y_pos = np.arange(len(hot_tokens)) z = tf_sorted[n_bins-1::-1]/n_art plt.barh(y_pos, z, align='center', alpha=0.4) plt.yticks(y_pos, hot_tokens) plt.xlabel('Average number of occurrences per article') plt.title('Token distribution') plt.show() # SORTED TOKEN FREQUENCIES: # Example data plt.semilogy(tf_sorted) plt.xlabel('Average number of occurrences per article') plt.title('Token distribution') plt.show() Explanation: In the following we plot the most frequent terms in the corpus. End of explanation # scode: cold_tokens = <FILL IN> print "There are {0} cold tokens, which represent {1}% of the total number of tokens in the dictionary".format( len(cold_tokens), float(len(cold_tokens))/n_tokens*100) Explanation: Exercise: There are usually many tokens that appear with very low frequency in the corpus. Count the number of tokens appearing only once, and what is the proportion of them in the token list. End of explanation # scode: <WRITE YOUR CODE HERE> Explanation: Exercise: Represent graphically those 20 tokens that appear in the highest number of articles. Note that you can use the code above (headed by # SORTED TOKEN FREQUENCIES) with a very minor modification. End of explanation # scode: <WRITE YOUR CODE HERE> Explanation: Exercise: Count the number of tokens appearing only in a single article. End of explanation # scode: <WRITE YOUR CODE HERE> Explanation: Exercise (All in one): Note that, for pedagogical reasons, we have used a different for loop for each text processing step creating a new corpus_xxx variable after each step. For very large corpus, this could cause memory problems. As a summary exercise, repeat the whole text processing, starting from corpus_text up to computing the bow, with the following modifications: Use a single for loop, avoiding the creation of any intermediate corpus variables. Use lemmatization instead of stemming. Remove all tokens appearing in only one document and less than 2 times. Save the result in a new variable corpus_bow1. 
End of explanation # scode: <WRITE YOUR CODE HERE> Explanation: Exercise (Visualizing categories): Repeat the previous exercise with a second wikipedia category. For instance, you can take "communication". Save the result in variable corpus_bow2. Determine the most frequent terms in corpus_bow1 (term1) and corpus_bow2 (term2). Transform each article in corpus_bow1 and corpus_bow2 into a 2 dimensional vector, where the first component is the frecuency of term1 and the second component is the frequency of term2 Draw a dispersion plot of all 2 dimensional points, using a different marker for each corpus. Could you differentiate both corpora using the selected terms only? What if the 2nd most frequent term is used? End of explanation # scode: <WRITE YOUR CODE HERE> # Check the code below to see how ngrams works, and adapt it to solve the exercise. # from nltk.util import ngrams # sentence = 'this is a foo bar sentences and i want to ngramize it' # sixgrams = ngrams(sentence.split(), 2) # for grams in sixgrams: #  print grams Explanation: Exercise (bigrams): nltk provides an utility to compute n-grams from a list of tokens, in nltk.util.ngrams. Join all tokens in corpus_clean in a single list and compute the bigrams. Plot the 20 most frequent bigrams in the corpus. End of explanation import pickle data = {} data['D'] = D data['corpus_bow'] = corpus_bow pickle.dump(data, open("wikiresults.p", "wb")) Explanation: 2.4. Saving results The dictionary D and the Bag of Words in corpus_bow are the key inputs to the topic model algorithms analyzed in the following notebook. Save them to be ready to use them during the next session. End of explanation
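As a quick sanity check on the file written above, this short sketch (not part of the original notebook) reloads wikiresults.p and reports the sizes of the stored objects.

```python
# Illustrative check: reload the pickled dictionary and BoW corpus.
import pickle
with open("wikiresults.p", "rb") as f:
    saved = pickle.load(f)
D_loaded = saved['D']
bow_loaded = saved['corpus_bow']
print "Reloaded {0} tokens and {1} documents".format(len(D_loaded), len(bow_loaded))
```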
12,049
Given the following text description, write Python code to implement the functionality described below step by step Description: 2016-09-30 Step2: 1.1 Cross-validation Question Step3: Now use this function to compute cross-validated predictions on the data. Step4: Question Complete the code below to compute the cross-validated accuracy and area under the curve of the logistic regression on our data. Plot the ROC curve Step5: 1.2 Feature scaling Standardization of a dataset is a common requirement for many machine learning estimators Step6: Let us now visualize the distribution of one of the features of the data. Step7: Question Compute the cross-validated predictions of the logistic regression on the scaled data. Question Plot the two ROC curves (one for the logistic regression on the original data, one for the logistic regression on the scaled data) on the same plot. 1.3 Feature scaling and cross-validation In a cross-validation setting, we ignore the samples from the test fold when training the classifier. This also means that scaling should be done on the training data only. In scikit-learn, we can use a scaler to make centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. The mean and standard deviation will be stored to be used on the test data. Step9: Question Rewrite the cross_validate method to include a scaling step. Step10: Question Now use the cross_validate_with_scaling method to cross-validate the logistic regression on our data. Question Again, compare the AUROC and ROC curves with those obtained previously. What do you conclude? 1.4 Sample normalization Normalization is the process of scaling individual samples to have unit norm. It can be useful when using machine learning algorithms that use the distance between samples.
Python Code: import numpy as np %pylab inline # Load the data X = np.loadtxt('data/small_Endometrium_Uterus.csv', delimiter=',', skiprows=1, usecols=range(1, 3001)) # Python 2.7 only y = np.loadtxt('data/small_Endometrium_Uterus.csv', delimiter=',', skiprows=1, usecols=[3001], converters={3001: lambda s: 0 if s=='Endometrium' else 1}, dtype='int') # Python 3 alternative: #y = np.loadtxt('data/small_Endometrium_Uterus.csv', delimiter=',', # skiprows=1, usecols=[3001], dtype='bytes').astype('str') # Convert 'Endometrium' to 0 and 'Uterus' to 1 #y = np.where(y=='Endometrium', 0, 1) # Set up a stratified 10-fold cross-validation from sklearn import cross_validation folds = cross_validation.StratifiedKFold(y, 10, shuffle=True) print folds Explanation: 2016-09-30: Logistic Regression & Project 1. Logistic Regression In this lab, we will appply logistic regression to the Endometrium vs. Uterus cancer data End of explanation def cross_validate(design_matrix, labels, classifier, cv_folds): Perform a cross-validation and returns the predictions. Parameters: ----------- design_matrix: (n_samples, n_features) np.array Design matrix for the experiment. labels: (n_samples, ) np.array Vector of labels. classifier: sklearn classifier object Classifier instance; must have the following methods: - fit(X, y) to train the classifier on the data X, y - predict_proba(X) to apply the trained classifier to the data X and return probability estimates cv_folds: sklearn cross-validation object Cross-validation iterator. Return: ------- pred: (n_samples, ) np.array Vectors of predictions (same order as labels). pred = np.zeros(labels.shape) for tr, te in cv_folds: # TODO return pred Explanation: 1.1 Cross-validation Question: Create a cross-validation function that takes a design matrix, label array, scikit-learn classifier, and scikit-learn cross_validation object and returns the corresponding list of cross-validated predictions. Make sure that you are returning the predictions in the correct order! Check the documentation of fit(X, y) and predict_proba(X) in sklearn.linear_model.LogisticRegression http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html End of explanation from sklearn import linear_model clf = linear_model.LogisticRegression(C=1e6) # high C means no regularization (we'll talk about regularization next week!) ypred_logreg = cross_validate(X, y, clf, folds) Explanation: Now use this function to compute cross-validated predictions on the data. End of explanation from sklearn import metrics fpr_logreg, tpr_logreg, thresholds = metrics.roc_curve(y, ypred_logreg, pos_label=1) print "Accuracy:", #TODO auc_logreg = metrics.auc(fpr_logreg, tpr_logreg) plt.plot(#TODO ) plt.xlabel('False Positive Rate', fontsize=16) plt.ylabel('True Positive Rate', fontsize=16) plt.title('ROC curve: Logistic regression', fontsize=16) plt.legend(loc="lower right") #plt.savefig('%s/evu_linreg.pdf' % fig_dir, bbox_inches='tight') Explanation: Question Complete the code below to compute the cross-validated accuracy and area under the curve of the logistic regression on our data. Plot the ROC curve End of explanation from sklearn import preprocessing X_scaled = preprocessing.scale(X) Explanation: 1.2 Feature scaling Standardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual feature do not more or less look like standard normally distributed data (e.g. Gaussian with 0 mean and unit variance). 
If a feature has a variance that is orders of magnitude larger that others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected. In practice we often ignore the shape of the distribution and just transform the data to center it by removing the mean value of each feature, then scale it by dividing non-constant features by their standard deviation. Scikit-learn offers tools to deal with this issue. End of explanation idx_1 = 0 # first feature fig = plt.figure(figsize=(12, 8)) # (width, height) fig.add_subplot(221) # 2 x 2 grid, 1st subplot h = plt.hist(X[:, idx_1], bins=30, color='blue') plt.title('Feature %d (not scaled)' % idx_1, fontsize=16) fig.add_subplot(222) # 2 x 2 grid, 2nd subplot h = plt.hist(X_scaled[:, idx_1], bins=30, color='orange') plt.title('Feature %d (scaled)' % idx_1, fontsize=16) idx_2 = 1 # second feature fig.add_subplot(223) # 2 x 2 grid, 3rd subplot h = plt.hist(X[:, idx_2], bins=30, color='blue') plt.title('Feature %d (not scaled)' % idx_2, fontsize=16) fig.add_subplot(224) # 2 x 2 grid, 4th subplot h = plt.hist(X_scaled[:, idx_2], bins=30, color='orange') plt.title('Feature %d (scaled)' % idx_2, fontsize=16) plt.tight_layout() # improve spacing between subplots Explanation: Let us now visualize the distribution of one of the features of the data. End of explanation scaler = preprocessing.StandardScaler() #Xtr = scaler.fit_transform(Xtr) #Xte = scaler.transform(Xte) Explanation: Question Compute the cross-validated predictions of the logistic regression on the scaled data. Question Plot the two ROC curves (one for the logistic regression on the original data, one for the logistic regression on the scaled data) on the same plot. 1.3 Feature scaling and cross-validation In a cross-validation setting, we ignore the samples from the test fold when training the classifier. This also means that scaling should be done on the training data only. In scikit-learn, we can use a scaler to make centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. The mean and standard deviation will be stored to be used on the test data. End of explanation def cross_validate_with_scaling(design_matrix, labels, classifier, cv_folds): Perform a cross-validation and returns the predictions. Use a scaler to scale the features to mean 0, standard deviation 1. Parameters: ----------- design_matrix: (n_samples, n_features) np.array Design matrix for the experiment. labels: (n_samples, ) np.array Vector of labels. classifier: sklearn classifier object Classifier instance; must have the following methods: - fit(X, y) to train the classifier on the data X, y - predict_proba(X) to apply the trained classifier to the data X and return probability estimates cv_folds: sklearn cross-validation object Cross-validation iterator. Return: ------- pred: (n_samples, ) np.array Vectors of predictions (same order as labels). pred = np.zeros(labels.shape) for tr, te in cv_folds: # TODO return pred Explanation: Question Rewrite the cross_validate method to include a scaling step. End of explanation X_norm = preprocessing.normalize(X) Explanation: Question Now use the cross_validate_with_scaling method to cross-validate the logistic regression on our data. Question Again, compare the AUROC and ROC curves with those obtained previously. What do you conclude? 1.4 Sample normalization Normalization is the process of scaling individual samples to have unit norm. 
It can be useful for machine learning algorithms that rely on the distance between samples, such as nearest-neighbor methods. End of explanation
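As a quick illustration of the effect (not part of the original lab): after preprocessing.normalize, every sample should have unit Euclidean norm.

```python
# Check that each normalized sample (row of X_norm) has unit L2 norm.
row_norms = np.linalg.norm(X_norm, axis=1)
print row_norms[:5]        # expect values very close to 1.0
```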
12,050
Given the following text description, write Python code to implement the functionality described below step by step Description: Previous 1.3 Keeping the Last N Items Problem: When iterating over items (or doing any other kind of processing), how do you keep a limited history of only the last few items you have seen? Solution: Keeping a limited history is a perfect use for collections.deque. For example, the following code performs a simple text match over a sequence of lines and, when a match is found, yields the matching line together with the previous N lines of context: ``` python from collections import deque def search(lines, pattern, history = 5) Step1: Although you could manually perform such operations on a list (appending, deleting, and so on), the queue solution is far more elegant and runs a lot faster. More generally, the deque class can be used whenever you need a simple queue data structure. If you do not give it a maximum size, you get an unbounded queue that lets you append and pop items on either end. Code example:
Python Code: from collections import deque q = deque(maxlen = 3) q.append(1) q.append(2) q.append(3) q q.append(4) q q.append(5) q Explanation: Previous 1.3 Keeping the Last N Items Problem: When iterating over items (or doing any other kind of processing), how do you keep a limited history of only the last few items you have seen? Solution: Keeping a limited history is a perfect use for collections.deque. For example, the following code performs a simple text match over a sequence of lines and, when a match is found, yields the matching line together with the previous N lines of context: ``` python from collections import deque def search(lines, pattern, history = 5): previous_lines = deque(maxlen = history) for li in lines: if pattern in li: yield li, previous_lines previous_lines.append(li) # Example use on a file if __name__ == "__main__": with open(r"../../cookbook/somefile.txt", "r") as f: for line, prevlines in search(f, "python", 5): for pline in prevlines: print(pline, end = "") print(line, end = "") print("-" * 20) ``` Discussion: When writing code that searches for items like this, it is common to use a generator function involving a yield expression, as in the example above. This decouples the search logic from the code that consumes the results. If you are not familiar with generators yet, see Section 4.3. Using the deque(maxlen=N) constructor creates a fixed-size queue: when a new item is added and the queue is already full, the oldest item is automatically removed. Code example: End of explanation q = deque() q.append(1) q.append(2) q.append(3) q q.appendleft(4) q q.pop() q q.popleft() Explanation: Although you could manually perform such operations on a list (appending, deleting, and so on), the queue solution is far more elegant and runs a lot faster. More generally, the deque class can be used whenever you need a simple queue data structure. If you do not give it a maximum size, you get an unbounded queue that lets you append and pop items on either end. Code example:
12,051
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. Step3: Explore the Data Play around with view_sentence_range to view different parts of the data. Step6: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing Step8: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. Step10: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step12: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU Step15: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below Step18: Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch. Step21: Encoding Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn(). Step24: Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs. Step27: Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Step30: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create RNN cell for decoding using rnn_size and num_layers. Create the output fuction using lambda to transform it's input, logits, to class logits. Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. Note Step33: Build the Neural Network Apply the functions you implemented above to Step34: Neural Network Training Hyperparameters Tune the following parameters Step36: Build the Graph Build the graph using the neural network you implemented. Step39: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. 
Step41: Save Parameters Save the batch_size and save_path parameters for inference. Step43: Checkpoint Step46: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. Step48: Translate This will translate translate_sentence from English to French.
Python Code: DON'T MODIFY ANYTHING IN THIS CELL import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation view_sentence_range = (0, 10) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) eos = target_vocab_to_int['<EOS>'] sourceIdText = [[source_vocab_to_int[word] for word in line.split()] for line in source_text.split('\n')] targetIdText = [[target_vocab_to_int[word] for word in line.split()] + [eos] for line in target_text.split('\n')] # TODO: Implement Function return sourceIdText, targetIdText DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_text_to_ids(text_to_ids) Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. End of explanation DON'T MODIFY ANYTHING IN THIS CELL helper.preprocess_and_save_data(source_path, target_path, text_to_ids) Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation def model_inputs(): Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) input = tf.placeholder(tf.int32, [None, None], name = 'input') targets = tf.placeholder(tf.int32, [None, None]) learning_rate = tf.placeholder(tf.float32) keep_prob = tf.placeholder(tf.float32, name='keep_prob') # TODO: Implement Function return input, targets, learning_rate, keep_prob DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_model_inputs(model_inputs) Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoding_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Return the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability) End of explanation def process_decoding_input(target_data, target_vocab_to_int, batch_size): Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input # TODO: Implement Function DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_process_decoding_input(process_decoding_input) Explanation: Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch. 
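To make that shift concrete, here is a small plain-Python illustration (the token ids are made up): the last id of every target sequence is dropped and the GO id is prepended.

```python
# Illustration of the decoder-input shift with hypothetical ids.
go_id, eos_id = 1, 3
batch = [[11, 12, 13, eos_id],
         [21, 22, 23, eos_id]]
dec_input = [[go_id] + seq[:-1] for seq in batch]
print(dec_input)   # [[1, 11, 12, 13], [1, 21, 22, 23]]
```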
End of explanation def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) cells = tf.contrib.rnn.MultiRNNCell([cell] * num_layers) _, cells_state = tf.nn.dynamic_rnn(cells, rnn_inputs, dtype=tf.float32) return cells_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_encoding_layer(encoding_layer) Explanation: Encoding Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn(). End of explanation def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TenorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder( dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope) train_logits = output_fn(train_pred) return train_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_train(decoding_layer_train) Explanation: Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs. End of explanation def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits # TODO: Implement Function infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size) inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope) return inference_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_infer(decoding_layer_infer) Explanation: Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder(). 
End of explanation def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) # TODO: Implement Function dec_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob) dec_cell = tf.contrib.rnn.MultiRNNCell([dec_cell] * num_layers) start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] maximum_length = sequence_length - 1 with tf.variable_scope("decoding") as decoding_scope: output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) decoding_scope.reuse_variables() infer_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) return train_logits, infer_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer(decoding_layer) Explanation: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create RNN cell for decoding using rnn_size and num_layers. Create the output fuction using lambda to transform it's input, logits, to class logits. Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference. 
End of explanation def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Decoder embedding size :param dec_embedding_size: Encoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) rnn_inputs = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) encoder_state = encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob) target_data = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, target_data) train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return train_logits, infer_logits # TODO: Implement Function DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_seq2seq_model(seq2seq_model) Explanation: Build the Neural Network Apply the functions you implemented above to: Apply embedding to the input data for the encoder. Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function. Apply embedding to the target data for the decoder. Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob). End of explanation # Number of Epochs epochs = 10 # Batch Size batch_size = 256 # RNN Size rnn_size = 128 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 150 decoding_embedding_size = 150 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.85 Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. 
Set keep_probability to the Dropout keep probability End of explanation DON'T MODIFY ANYTHING IN THIS CELL save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) Explanation: Build the Graph Build the graph using the neural network you implemented. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import time def get_accuracy(target, logits): Calculate accuracy max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Save parameters for checkpoint helper.save_params(save_path) Explanation: Save Parameters Save the batch_size and save_path parameters for inference. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() Explanation: Checkpoint End of explanation def sentence_to_seq(sentence, vocab_to_int): Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids # TODO: Implement Function int_sentence = [vocab_to_int.get(w.lower(), vocab_to_int['<UNK>']) for w in sentence.split()] return int_sentence DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_sentence_to_seq(sentence_to_seq) Explanation: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. End of explanation translate_sentence = 'he saw a old yellow truck .' DON'T MODIFY ANYTHING IN THIS CELL translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) Explanation: Translate This will translate translate_sentence from English to French. End of explanation
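As a quick sanity check of sentence_to_seq, a tiny made-up vocabulary is enough (the real source_vocab_to_int comes from helper.load_preprocess(); the toy_vocab below is purely illustrative):

toy_vocab = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}
print(sentence_to_seq('He saw a SHINY truck', toy_vocab))
# expected: [1, 2, 3, 0, 4] (the out-of-vocabulary word maps to <UNK>)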
12,052
Given the following text description, write Python code to implement the functionality described below step by step Description: Running an MSTIS simulation Now we will use the initial trajectories we obtained from bootstrapping to run an MSTIS simulation. This will show both how objects can be regenerated from storage and how regenerated equivalent objects can be used in place of objects that weren't stored. Tasks covered in this notebook Step1: Loading things from storage First we'll reload some of the stuff we stored before. Of course, this starts with opening the file. Step2: A lot of information can be recovered from the old storage, and so we don't have the recreate it. However, we did not save our network, so we'll have to create a new one. Since the network creates the ensembles, that means we will have to translate the trajectories from the old ensembles to new ensembles. Step3: Loading from storage is very easy. Each store is a list. We take the 0th snapshot as a template (it doesn't actually matter which one) for the next storage we'll create. There's only one engine stored, so we take the only one. Step4: Named objects can be found in storage by using their name as a dictionary key. This allows us to load our old collective variables and states. Step5: Once again, we have everything we need to build the MSTIS network. Recall that this will create all the ensembles we need for the simulation. However, even though the ensembles are semantically the same, these are not the same objects. We'll need to deal with that later. Step6: Now we need to set up real trajectories that we can use for each of these. We can start by loading the stored sampleset. Step7: About Samples The OPS object called Sample is used to associate a trajectory with a replica ID and an ensemble. The trajectory needs to be associated with an ensemble so we know how to get correct statistics from the many ensembles that we might be sampling simultaneously. The trajectory needs to be associated with a replica ID so that replica exchange approaches can be analyzed. Since the ensembles in our MSTIS network are not the exact ensemble objects that we saved our samples with (they were rebuilt), we still need a way to identify which of the new ensembles to associate them with. There are two main ways to do this. The first is to take one trajectory, and associate it with as many ensembles as possible. If your first path comes from a TPS simulation, that is the approach you'll want to take. The second approach is better suited to our conditions here Step8: The sanity_check function ensures that all trajectories in the sampleset are actually in the ensemble they claim to be associated with. At this point, we should have 9 samples. Step9: Remapping old ensembles to new ensembles If your old and new ensembles have the same string representations, then OPS has a function to help you automatically map them. As long as you create the ensembles in the same way, they'll have the same string representation. Note that if you don't have the same string representation, you would have to assign trajectories to ensembles by hand (which isn't that hard, but is a bit tedious). Step10: Setting up special ensembles Whichever way we initially set up the SampleSet, at this point it only contains samples for the main sampling trajectories of each transition. Now we need to put trajectories into various auxiliary ensembles. Multiple state outer ensemble The multiple state outer ensemble is, in fact, sampled during the bootstrapping. 
However, it is actually sampled once for every state that shares it. It is very easy to find a trajectory that satisfies the ensemble and to load and add that sample to our sampleset. Step11: Minus interface ensemble The minus interface ensembles do not yet have a trajectory. We will generate them by starting with same-state trajectories (A-to-A, B-to-B, C-to-C) in each interface, and extending into the minus ensemble: check whether the traj is A-to-A, then extend. First we need to make sure that the trajectory in the innermost ensemble of each state also ends in that state. This is necessary so that when we extend the trajectory, it can extend into the minus ensemble. If the trajectory isn't right, we run a shooting move on it until it is. Step12: Now that all the innermost ensembles are safe to use for extending into a minus interface, we extend them into a minus interface Step13: Equilibration In molecular dynamics, you need to equilibrate if you don't start with an equilibrium frame (e.g., if you start with solvent molecules on a grid, your system should equilibrate before you start taking statistics). Similarly, if you start with a set of paths which are far from the path ensemble equilibrium, you need to equilibrate. This could either be because your trajectories are not from the real dynamics (generated with metadynamics, high temperature, etc.) or because your trajectories are not representative of the path ensemble (e.g., if you put transition trajectories into all interfaces). As with MD, running equilibration can be the same process as running the total simulation. However, in path sampling, it doesn't have to be: we can equilibrate without replica exchange moves or path reversal moves, for example. Step14: Running RETIS Now we run the full calculation. Up to here, we haven't been storing any of our results. This time, we'll start a storage object, and we'll save the network we've created. Then we'll run a new PathSampling calculation object. Step15: The next block sets up a live visualization. This is optional, and only recommended if you're using OPS interactively (which would only be for very small systems). Some of the same tools can be used to play back the behavior after the fact if you want to see the behavior for more complicated systems. You can create a background (here we use the PES contours), and the visualization will plot the trajectories. Step16: Now everything is ready
Python Code: %matplotlib inline import openpathsampling as paths import numpy as np Explanation: Running an MSTIS simulation Now we will use the initial trajectories we obtained from bootstrapping to run an MSTIS simulation. This will show both how objects can be regenerated from storage and how regenerated equivalent objects can be used in place of objects that weren't stored. Tasks covered in this notebook: * Loading OPS objects from storage * Ways of assigning initial trajectories to initial samples * Setting up a path sampling simulation with various move schemes * Visualizing trajectories while the path sampling is running End of explanation old_store = paths.AnalysisStorage("mstis_bootstrap.nc") Explanation: Loading things from storage First we'll reload some of the stuff we stored before. Of course, this starts with opening the file. End of explanation print "PathMovers:", len(old_store.pathmovers) print "Samples:", len(old_store.samples) print "Ensembles:", len(old_store.ensembles) print "SampleSets:", len(old_store.samplesets) print "Snapshots:", len(old_store.snapshots) print "Networks:", len(old_store.networks) Explanation: A lot of information can be recovered from the old storage, and so we don't have the recreate it. However, we did not save our network, so we'll have to create a new one. Since the network creates the ensembles, that means we will have to translate the trajectories from the old ensembles to new ensembles. End of explanation template = old_store.snapshots[0] engine = old_store.engines[0] Explanation: Loading from storage is very easy. Each store is a list. We take the 0th snapshot as a template (it doesn't actually matter which one) for the next storage we'll create. There's only one engine stored, so we take the only one. End of explanation opA = old_store.cvs['opA'] opB = old_store.cvs['opB'] opC = old_store.cvs['opC'] stateA = old_store.volumes['A'] stateB = old_store.volumes['B'] stateC = old_store.volumes['C'] # we could also load the interfaces, but it takes less code to build new ones: interfacesA = paths.VolumeInterfaceSet(opA, 0.0,[0.2, 0.3, 0.4]) interfacesB = paths.VolumeInterfaceSet(opB, 0.0,[0.2, 0.3, 0.4]) interfacesC = paths.VolumeInterfaceSet(opC, 0.0,[0.2, 0.3, 0.4]) Explanation: Named objects can be found in storage by using their name as a dictionary key. This allows us to load our old collective variables and states. End of explanation ms_outers = paths.MSOuterTISInterface.from_lambdas( {ifaces: 0.5 for ifaces in [interfacesA, interfacesB, interfacesC]} ) mstis = paths.MSTISNetwork( [(stateA, interfacesA), (stateB, interfacesB), (stateC, interfacesC)], ms_outers=ms_outers ) Explanation: Once again, we have everything we need to build the MSTIS network. Recall that this will create all the ensembles we need for the simulation. However, even though the ensembles are semantically the same, these are not the same objects. We'll need to deal with that later. End of explanation # load the sampleset we have saved before old_sampleset = old_store.samplesets[0] Explanation: Now we need to set up real trajectories that we can use for each of these. We can start by loading the stored sampleset. 
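Before remapping anything, it can help to peek at what was just loaded. This is an optional sketch (it assumes the replica, ensemble, and trajectory attributes that Sample objects expose; only the first two are shown elsewhere in this notebook):

print "Stored samples:", len(old_sampleset)
for s in old_sampleset:
    # replica id and number of frames in each stored trajectory
    print s.replica, len(s.trajectory)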
End of explanation # this makes a dictionary mapping the outermost ensemble of each sampling transition # to a trajectory from the old_sampleset that satisfies that ensemble trajs = {} for ens in [t.ensembles[-1] for t in mstis.sampling_transitions]: trajs[ens] = [s.trajectory for s in old_sampleset if ens(s.trajectory)][0] assert(len(trajs)==3) # otherwise, we have a problem initial_samples = {} for t in mstis.sampling_transitions: initial_samples[t] = paths.SampleSet.map_trajectory_to_ensembles(trajs[t.ensembles[-1]], t.ensembles) single_trajectory_sset = paths.SampleSet.relabel_replicas_per_ensemble(initial_samples.values()) Explanation: About Samples The OPS object called Sample is used to associate a trajectory with a replica ID and an ensemble. The trajectory needs to be associated with an ensemble so we know how to get correct statistics from the many ensembles that we might be sampling simultaneously. The trajectory needs to be associated with a replica ID so that replica exchange approaches can be analyzed. Since the ensembles in our MSTIS network are not the exact ensemble objects that we saved our samples with (they were rebuilt), we still need a way to identify which of the new ensembles to associate them with. There are two main ways to do this. The first is to take one trajectory, and associate it with as many ensembles as possible. If your first path comes from a TPS simulation, that is the approach you'll want to take. The second approach is better suited to our conditions here: we already have a good trajectory for each ensemble. So we just want to remap our old ensembles to new ones. Loading one trajectory into lots of ensembles End of explanation single_trajectory_sset.sanity_check() assert(len(single_trajectory_sset)==9) Explanation: The sanity_check function ensures that all trajectories in the sampleset are actually in the ensemble they claim to be associated with. At this point, we should have 9 samples. End of explanation sset = paths.SampleSet.translate_ensembles(old_sampleset, mstis.sampling_ensembles) sset.sanity_check() assert(len(sset)==9) # tests only: this cell sets something for the online testing # the next cell unsets it when running the notebook live bootstrap_sset = sset sset = single_trajectory_sset #! skip # tests don't run this, but users should! sset = bootstrap_sset Explanation: Remapping old ensembles to new ensembles If your old and new ensembles have the same string representations, then OPS has a function to help you automatically map them. As long as you create the ensembles in the same way, they'll have the same string representation. Note that if you don't have the same string representation, you would have to assign trajectories to ensembles by hand (which isn't that hard, but is a bit tedious). End of explanation for outer_ens in mstis.special_ensembles['ms_outer']: # doesn't matter which we take, so we take the first traj = next(s.trajectory for s in old_sampleset if outer_ens(s.trajectory)==True) samp = paths.Sample( replica=None, ensemble=outer_ens, trajectory=traj ) # now we apply it and correct for the replica ID sset.append_as_new_replica(samp) sset.sanity_check() assert(len(sset)==10) Explanation: Setting up special ensembles Whichever way we initially set up the SampleSet, at this point it only contains samples for the main sampling trajectories of each transition. Now we need to put trajectories into various auxiliary ensembles. 
Multiple state outer ensemble The multiple state outer ensemble is, in fact, sampled during the bootstrapping. However, it is actually sampled once for every state that shares it. It is very easy to find a trajectory that satisfies the ensemble and to load add that sample to our sampleset. End of explanation for transition in mstis.sampling_transitions: innermost_ensemble = transition.ensembles[0] shooter = None if not transition.stateA(sset[innermost_ensemble].trajectory[-1]): shooter = paths.OneWayShootingMover(ensemble=innermost_ensemble, selector=paths.UniformSelector(), engine=engine) pseudoscheme = paths.LockedMoveScheme(root_mover=shooter) pseudosim = paths.PathSampling(storage=None, move_scheme=pseudoscheme, sample_set=sset, ) while not transition.stateA(sset[innermost_ensemble].trajectory[-1]): pseudosim.run(1) sset = pseudosim.sample_set Explanation: Minus interface ensemble The minus interface ensembles do not yet have a trajectory. We will generate them by starting with same-state trajectories (A-to-A, B-to-B, C-to-C) in each interface, and extending into the minus ensemble. check whether the traj is A-to-A extend First we need to make sure that the trajectory in the innermost ensemble of each state also ends in that state. This is necessary so that when we extend the trajectory, it can extends into the minus ensemble. If the trajectory isn't right, we run a shooting move on it until it is. End of explanation minus_samples = [] for transition in mstis.sampling_transitions: minus_samples.append(transition.minus_ensemble.extend_sample_from_trajectories( sset[transition.ensembles[0]].trajectory, replica=-len(minus_samples)-1, engine=engine )) sset = sset.apply_samples(minus_samples) sset.sanity_check() assert(len(sset)==13) Explanation: Now that all the innermost ensembles are safe to use for extending into a minus interface, we extend them into a minus interface: End of explanation equil_scheme = paths.OneWayShootingMoveScheme(mstis, engine=engine) equilibration = paths.PathSampling( storage=None, sample_set=sset, move_scheme=equil_scheme ) #! skip # tests need the unequilibrated samples to ensure passing equilibration.run(5) sset = equilibration.sample_set Explanation: Equilibration In molecular dynamics, you need to equilibrate if you don't start with an equilibrium frame (e.g., if you start with solvent molecules on a grid, your system should equilibrate before you start taking statistics). Similarly, if you start with a set of paths which are far from the path ensemble equilibrium, you need to equilibrate. This could either be because your trajectories are not from the real dynamics (generated with metadynamics, high temperature, etc.) or because your trajectories are not representative of the path ensemble (e.g., if you put transition trajectories into all interfaces). As with MD, running equilibration can be the same process as running the total simulation. However, in path sampling, it doesn't have to be: we can equilibrate without replica exchange moves or path reversal moves, for example. In the example below, we create a MoveScheme that only includes shooting movers. 
End of explanation # logging creates ops_output.log file with details of what the calculation is doing #import logging.config #logging.config.fileConfig("../resources/logging.conf", disable_existing_loggers=False) storage = paths.storage.Storage("mstis.nc", "w") storage.save(template) [cv.with_diskcache() for cv in old_store.cvs] print [cv.diskcache_allow_incomplete for cv in old_store.cvs] mstis_calc = paths.PathSampling( storage=storage, sample_set=sset, move_scheme=paths.DefaultScheme(mstis, engine=engine) ) mstis_calc.save_frequency = 50 Explanation: Running RETIS Now we run the full calculation. Up to here, we haven't been storing any of our results. This time, we'll start a storage object, and we'll save the network we've created. Then we'll run a new PathSampling calculation object. End of explanation #! skip # skip this during testing, but leave it for demo purposes # we use the %run magic because this isn't in a package %run ../resources/toy_plot_helpers.py xval = paths.FunctionCV("xval", lambda snap : snap.xyz[0][0]) yval = paths.FunctionCV("yval", lambda snap : snap.xyz[0][1]) mstis_calc.live_visualizer = paths.StepVisualizer2D(mstis, xval, yval, [-1.0, 1.0], [-1.0, 1.0]) background = ToyPlot() background.contour_range = np.arange(-1.5, 1.0, 0.1) background.add_pes(engine.pes) mstis_calc.live_visualizer.background = background.plot() mstis_calc.status_update_frequency = 1 # increasing this number speeds things up, but isn't as pretty Explanation: The next block sets up a live visualization. This is optional, and only recommended if you're using OPS interactively (which would only be for very small systems). Some of the same tools can be used to play back the behavior after the fact if you want to see the behavior for more complicated systems. You can create a background (here we use the PES contours), and the visualization will plot the trajectories. End of explanation mstis_calc.run_until(100) n_steps = int(mstis_calc.move_scheme.n_steps_for_trials(mstis_calc.move_scheme.movers['shooting'][0], 1000)) print n_steps #! skip # don't run all those steps in testing! mstis_calc.run_until(n_steps) storage.close() Explanation: Now everything is ready: let's run the simulation! End of explanation
12,053
Given the following text description, write Python code to implement the functionality described below step by step Description: Monte Carlo Dropout -- Example Notebook Launch this notebook in Google CoLab This notebook is a modified fork of the Bayesian MNIST classifier implementation here. In this notebook, a Bayesian LeNet model is trained using the MNIST data. A Bayesian inference function generates the mean prediction accuracy and the associated prediction uncertainty of the trained model. Step2: Build a Bayesian network The network used in this example is a LeNet. Step3: Compile model Step4: Load model weights Step5: Train a Bayesian network Step6: Save model weights Step7: Build a Bayesian inference function Step8: Run Bayesian inference
Python Code: ! wget https://media.githubusercontent.com/media/rahulremanan/python_tutorial/master/Machine_Vision/07_Bayesian_deep_learning/weights/bayesianLeNet.h5 -O ./bayesianLeNet.h5 Explanation: Monte Carlo Dropout -- Example Notebook Launch this notebook in Google CoLab This notebook is a modified fork of the Bayesian MNIST classifier implementation here. In this notebook, a Bayesian LeNet model is trained using the MNIST data. A Bayesian inference function generates the mean prediction accuracy and the associated prediction uncertainty of the trained model. End of explanation from keras import Input, Model from keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout def LeNet(input_shape, num_classes): inp = Input(shape=input_shape) x = Conv2D(filters=20, kernel_size=5, strides=1)(inp) x = MaxPool2D(pool_size=2, strides=2)(x) x = Conv2D(filters=50, kernel_size=5, strides=1)(x) x = MaxPool2D(pool_size=2, strides=2)(x) x = Flatten()(x) x = Dense(500, activation='relu')(x) x = Dense(num_classes, activation='softmax')(x) return Model(inp, x, name='LeNet') def bayesianLeNet(input_shape, num_classes, enable_dropout=True): An example implementation of a Bayesian LeNet convolutional neural network. This network uses the Bayesian approximation by Monte Carlo estimations using dropouts. To enable Bayesian approxiamtion, set the enable_dropout flag to True. inp = Input(shape=input_shape) x = Conv2D(filters=20, kernel_size=5, strides=1)(inp) x = Dropout(0.5)(x, training=True) x = MaxPool2D(pool_size=2, strides=2)(x) x = Conv2D(filters=50, kernel_size=5, strides=1)(x) x = Dropout(0.5)(x, training=enable_dropout) x = MaxPool2D(pool_size=2, strides=2)(x) x = Flatten()(x) x = Dropout(0.5)(x, training=enable_dropout) x = Dense(500, activation='relu')(x) x = Dropout(0.5)(x, training=enable_dropout) x = Dense(num_classes, activation='softmax')(x) return Model(inp, x, name='bayesianLeNet') import argparse import os from keras.callbacks import TensorBoard from keras.datasets import mnist from keras import utils import numpy as np from tqdm import tqdm TENSORBOARD_DIR = './tensorboard' MODEL_PATH = './bayesianLeNet.h5' def make_dirs(): if not os.path.isdir(TENSORBOARD_DIR): os.makedirs(TENSORBOARD_DIR) make_dirs() def prepare_data(): (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], X_train.shape[2], 1)) X_train = X_train.astype(np.float32) / 255. X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], X_test.shape[2], 1)) X_test = X_test.astype(np.float32) / 255. y_train, y_test = utils.to_categorical(y_train, 10), utils.to_categorical(y_test, 10) return (X_train, y_train), (X_test, y_test) (X_train, y_train), (X_test, y_test) = prepare_data() bayesian_network=True download_weights=True batch_size=1000 epochs=10 if bayesian_network: model = bayesianLeNet(input_shape=X_train.shape[1:], num_classes=10) else: model = LeNet(input_shape=X_train.shape[1:], num_classes=10) Explanation: Build a Bayesian network The network used in this example is a LeNet. 
End of explanation model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc']) Explanation: Compile model End of explanation if os.path.exists(MODEL_PATH): model.load_weights(MODEL_PATH) print ('Loaded model weights from: {}'.format(MODEL_PATH)) Explanation: Load model weights End of explanation model.fit(x=X_train, y=y_train, batch_size=batch_size, epochs=epochs, validation_data=(X_test, y_test), callbacks=[TensorBoard(log_dir=os.path.join(TENSORBOARD_DIR, model.name), write_images=True)]) Explanation: Train a Bayesian network End of explanation model.save_weights(MODEL_PATH) Explanation: Save model weights End of explanation def bayesianInference(model, X_test, y_test, eval_steps=10): batch_size = 1000 bayesian_error = [] for batch_id in tqdm(range(X_test.shape[0] // batch_size)): # take batch of data x = X_test[batch_id * batch_size: (batch_id + 1) * batch_size] # init empty predictions y_ = np.zeros((eval_steps, batch_size, y_test[0].shape[0])) for sample_id in range(eval_steps): # save predictions from a sample pass y_[sample_id] = model.predict(x, batch_size) # average over all passes mean_y = y_.mean(axis=0) # evaluate against labels y = y_test[batch_size * batch_id: (batch_id + 1) * batch_size] # compute error point_error = np.count_nonzero(np.not_equal(mean_y.argmax(axis=1), y.argmax(axis=1))) bayesian_error.append(point_error) mean_error = np.sum(bayesian_error) / X_test.shape[0] uncertainty = np.std(bayesian_error) / X_test.shape[0] mean_accuracy = 1 - mean_error return [mean_accuracy, uncertainty] Explanation: Build a Bayesian inference function End of explanation if bayesian_network: out = bayesianInference(model, X_test, y_test) print ('\n') print ('\nValidation accuracy: {} ...'.format(out[0])) print ('Validation uncertainty: {} ...'.format(out[1])) else: (_, acc) = model.evaluate(x=X_test, y=y_test, batch_size=args.batch_size) print('\nValidation accuracy: {}'.format(acc)) if download_weights: from google.colab import files files.download(MODEL_PATH) Explanation: Run Bayesian inference End of explanation
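The bayesianInference function above reports a batch-level accuracy and its spread. Because the Dropout layers stay active at prediction time, the same trained model also yields a per-image uncertainty estimate: repeat model.predict and look at the spread of the class probabilities. The sketch below is an illustration only; mc_predict and n_samples are names introduced here, not part of the original notebook:

import numpy as np

def mc_predict(model, x, n_samples=50):
    # n_samples stochastic forward passes -> array of shape (n_samples, N, num_classes)
    preds = np.stack([model.predict(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

mean_probs, std_probs = mc_predict(model, X_test[:5])
for i, (m, s) in enumerate(zip(mean_probs, std_probs)):
    print('image {}: class {} with p={:.2f} +/- {:.2f}'.format(i, m.argmax(), m.max(), s[m.argmax()]))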
12,054
Given the following text description, write Python code to implement the functionality described below step by step Description: Survival Analysis Think Bayes, Second Edition Copyright 2020 Allen B. Downey License Step1: This chapter introduces "survival analysis", which is a set of statistical methods used to answer questions about the time until an event. In the context of medicine it is literally about survival, but it can be applied to the time until any kind of event, or instead of time it can be about space or other dimensions. Survival analysis is challenging because the data we have are often incomplete. But as we'll see, Bayesian methods are particularly good at working with incomplete data. As examples, we'll consider two applications that are a little less serious than life and death Step2: As an example, here's a Weibull distribution with parameters $\lambda=3$ and $k=0.8$. Step3: The result is an object that represents the distribution. Here's what the Weibull CDF looks like with those parameters. Step4: actual_dist provides rvs, which we can use to generate a random sample from this distribution. Step5: So, given the parameters of the distribution, we can generate a sample. Now let's see if we can go the other way Step6: And a uniform prior for $k$ Step7: I'll use make_joint to make a joint prior distribution for the two parameters. Step8: The result is a DataFrame that represents the joint prior, with possible values of $\lambda$ across the columns and values of $k$ down the rows. Now I'll use meshgrid to make a 3-D mesh with $\lambda$ on the first axis (axis=0), $k$ on the second axis (axis=1), and the data on the third axis (axis=2). Step9: Now we can use weibull_dist to compute the PDF of the Weibull distribution for each pair of parameters and each data point. Step10: The likelihood of the data is the product of the probability densities along axis=2. Step11: Now we can compute the posterior distribution in the usual way. Step13: The following function encapsulates these steps. It takes a joint prior distribution and the data, and returns a joint posterior distribution. Step14: Here's how we use it. Step15: And here's a contour plot of the joint posterior distribution. Step16: It looks like the range of likely values for $\lambda$ is about 1 to 4, which contains the actual value we used to generate the data, 3. And the range for $k$ is about 0.5 to 1.5, which contains the actual value, 0.8. Marginal Distributions To be more precise about these ranges, we can extract the marginal distributions Step17: And compute the posterior means and 90% credible intervals. Step18: The vertical gray line show the actual value of $\lambda$. Here's the marginal posterior distribution for $k$. Step19: The posterior distributions are wide, which means that with only 10 data points we can't estimated the parameters precisely. But for both parameters, the actual value falls in the credible interval. Step20: Incomplete Data In the previous example we were given 10 random values from a Weibull distribution, and we used them to estimate the parameters (which we pretended we didn't know). But in many real-world scenarios, we don't have complete data; in particular, when we observe a system at a point in time, we generally have information about the past, but not the future. As an example, suppose you work at a dog shelter and you are interested in the time between the arrival of a new dog and when it is adopted. Some dogs might be snapped up immediately; others might have to wait longer. 
The people who operate the shelter might want to make inferences about the distribution of these residence times. Suppose you monitor arrivals and departures over 8 weeks and 10 dogs arrive during that interval. I'll assume that their arrival times are distributed uniformly, so I'll generate random values like this. Step21: Now let's suppose that the residence times follow the Weibull distribution we used in the previous example. We can generate a sample from that distribution like this Step22: I'll use these values to construct a DataFrame that contains the arrival and departure times for each dog, called start and end. Step23: For display purposes, I'll sort the rows of the DataFrame by arrival time. Step24: Notice that several of the lifelines extend past the observation window of 8 weeks. So if we observed this system at the beginning of Week 8, we would have incomplete information. Specifically, we would not know the future adoption times for Dogs 6, 7, and 8. I'll simulate this incomplete data by identifying the lifelines that extend past the observation window Step25: censored is a Boolean Series that is True for lifelines that extend past Week 8. Data that is not available is sometimes called "censored" in the sense that it is hidden from us. But in this case it is hidden because we don't know the future, not because someone is censoring it. For the lifelines that are censored, I'll modify end to indicate when they are last observed and status to indicate that the observation is incomplete. Step27: Now we can plot a "lifeline" for each dog, showing the arrival and departure times on a time line. Step28: And I'll add one more column to the table, which contains the duration of the observed parts of the lifelines. Step29: What we have simulated is the data that would be available at the beginning of Week 8. Using Incomplete Data Now, let's see how we can use both kinds of data, complete and incomplete, to infer the parameters of the distribution of residence times. First I'll split the data into two sets Step30: For the complete data, we can use update_weibull, which uses the PDF of the Weibull distribution to compute the likelihood of the data. Step32: For the incomplete data, we have to think a little harder. At the end of the observation interval, we don't know what the residence time will be, but we can put a lower bound on it; that is, we can say that the residence time will be greater than T. And that means that we can compute the likelihood of the data using the survival function, which is the probability that a value from the distribution exceeds T. The following function is identical to update_weibull except that it uses sf, which computes the survival function, rather than pdf. Step33: Here's the update with the incomplete data. Step34: And here's what the joint posterior distribution looks like after both updates. Step35: Compared to the previous contour plot, it looks like the range of likely values for $\lambda$ is substantially wider. We can see that more clearly by looking at the marginal distributions. Step36: Here's the posterior marginal distribution for $\lambda$ compared to the distribution we got using all complete data. Step37: The distribution with some incomplete data is substantially wider. As an aside, notice that the posterior distribution does not come all the way to 0 on the right side. That suggests that the range of the prior distribution is not wide enough to cover the most likely values for this parameter. 
If I were concerned about making this distribution more accurate, I would go back and run the update again with a wider prior. Here's the posterior marginal distribution for $k$ Step38: In this example, the marginal distribution is shifted to the left when we have incomplete data, but it is not substantially wider. In summary, we have seen how to combine complete and incomplete data to estimate the parameters of a Weibull distribution, which is useful in many real-world scenarios where some of the data are censored. In general, the posterior distributions are wider when we have incomplete data, because less information leads to more uncertainty. This example is based on data I generated; in the next section we'll do a similar analysis with real data. Light Bulbs In 2007 researchers ran an experiment to characterize the distribution of lifetimes for light bulbs. Here is their description of the experiment Step39: We can load the data into a DataFrame like this Step40: Column h contains the times when bulbs failed in hours; Column f contains the number of bulbs that failed at each time. We can represent these values and frequencies using a Pmf, like this Step41: Because of the design of this experiment, we can consider the data to be a representative sample from the distribution of lifetimes, at least for light bulbs that are lit continuously. The average lifetime is about 1400 h. Step42: Assuming that these data are well modeled by a Weibull distribution, let's estimate the parameters that fit the data. Again, I'll start with uniform priors for $\lambda$ and $k$ Step43: For this example, there are 51 values in the prior distribution, rather than the usual 101. That's because we are going to use the posterior distributions to do some computationally-intensive calculations. They will run faster with fewer values, but the results will be less precise. As usual, we can use make_joint to make the prior joint distribution. Step44: Although we have data for 50 light bulbs, there are only 32 unique lifetimes in the dataset. For the update, it is convenient to express the data in the form of 50 lifetimes, with each lifetime repeated the given number of times. We can use np.repeat to transform the data. Step45: Now we can use update_weibull to do the update. Step46: Here's what the posterior joint distribution looks like Step47: To summarize this joint posterior distribution, we'll compute the posterior mean lifetime. Posterior Means To compute the posterior mean of a joint distribution, we'll make a mesh that contains the values of $\lambda$ and $k$. Step48: Now for each pair of parameters we'll use weibull_dist to compute the mean. Step49: The result is an array with the same dimensions as the joint distribution. Now we need to weight each mean with the corresponding probability from the joint posterior. Step50: Finally we compute the sum of the weighted means. Step52: Based on the posterior distribution, we think the mean lifetime is about 1413 hours. The following function encapsulates these steps Step54: Incomplete Information The previous update was not quite right, because it assumed each light bulb died at the instant we observed it. According to the report, the researchers only checked the bulbs every 12 hours. So if they see that a bulb has died, they know only that it died during the 12 hours since the last check. 
It is more strictly correct to use the following update function, which uses the CDF of the Weibull distribution to compute the probability that a bulb dies during a given 12 hour interval. Step55: The probability that a value falls in an interval is the difference between the CDF at the beginning and end of the interval. Here's how we run the update. Step56: And here are the results. Step57: Visually this result is almost identical to what we got using the PDF. And that's good news, because it suggests that using the PDF can be a good approximation even if it's not strictly correct. To see whether it makes any difference at all, let's check the posterior means. Step58: When we take into account the 12-hour interval between observations, the posterior mean is about 6 hours less. And that makes sense Step59: If there are 100 bulbs and each has this probability of dying, the number of dead bulbs follows a binomial distribution. Step60: And here's what it looks like. Step61: But that's based on the assumption that we know $\lambda$ and $k$, and we don't. Instead, we have a posterior distribution that contains possible values of these parameters and their probabilities. So the posterior predictive distribution is not a single binomial; instead it is a mixture of binomials, weighted with the posterior probabilities. We can use make_mixture to compute the posterior predictive distribution. It doesn't work with joint distributions, but we can convert the DataFrame that represents a joint distribution to a Series, like this Step62: The result is a Series with a MultiIndex that contains two "levels" Step63: Now we can use make_mixture, passing as parameters the posterior probabilities in posterior_series and the sequence of binomial distributions in pmf_seq. Step64: Here's what the posterior predictive distribution looks like, compared to the binomial distribution we computed with known parameters. Step65: The posterior predictive distribution is wider because it represents our uncertainty about the parameters as well as our uncertainty about the number of dead bulbs. Summary This chapter introduces survival analysis, which is used to answer questions about the time until an event, and the Weibull distribution, which is a good model for "lifetimes" (broadly interpreted) in a number of domains. We used joint distributions to represent prior probabilities for the parameters of the Weibull distribution, and we updated them three ways Step67: Exercise Step68: Now we need some data. The following cell downloads data I collected from the National Oceanic and Atmospheric Administration (NOAA) for Seattle, Washington in May 2020. Step69: Now we can load it into a DataFrame Step70: I'll make a Boolean Series to indicate which days it rained. Step71: And select the total rainfall on the days it rained. Step72: Here's what the CDF of the data looks like. Step73: The maximum is 1.14 inches of rain is one day. To estimate the probability of more than 1.5 inches, we need to extrapolate from the data we have, so our estimate will depend on whether the gamma distribution is really a good model. I suggest you proceed in the following steps
Python Code: # If we're running on Colab, install empiricaldist # https://pypi.org/project/empiricaldist/ import sys IN_COLAB = 'google.colab' in sys.modules if IN_COLAB: !pip install empiricaldist # Get utils.py from os.path import basename, exists def download(url): filename = basename(url) if not exists(filename): from urllib.request import urlretrieve local, _ = urlretrieve(url, filename) print('Downloaded ' + local) download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py') from utils import set_pyplot_params set_pyplot_params() Explanation: Survival Analysis Think Bayes, Second Edition Copyright 2020 Allen B. Downey License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) End of explanation from scipy.stats import weibull_min def weibull_dist(lam, k): return weibull_min(k, scale=lam) Explanation: This chapter introduces "survival analysis", which is a set of statistical methods used to answer questions about the time until an event. In the context of medicine it is literally about survival, but it can be applied to the time until any kind of event, or instead of time it can be about space or other dimensions. Survival analysis is challenging because the data we have are often incomplete. But as we'll see, Bayesian methods are particularly good at working with incomplete data. As examples, we'll consider two applications that are a little less serious than life and death: the time until light bulbs fail and the time until dogs in a shelter are adopted. To describe these "survival times", we'll use the Weibull distribution. The Weibull Distribution The Weibull distribution is often used in survival analysis because it is a good model for the distribution of lifetimes for manufactured products, at least over some parts of the range. SciPy provides several versions of the Weibull distribution; the one we'll use is called weibull_min. To make the interface consistent with our notation, I'll wrap it in a function that takes as parameters $\lambda$, which mostly affects the location or "central tendency" of the distribution, and $k$, which affects the shape. End of explanation lam = 3 k = 0.8 actual_dist = weibull_dist(lam, k) Explanation: As an example, here's a Weibull distribution with parameters $\lambda=3$ and $k=0.8$. End of explanation import numpy as np from empiricaldist import Cdf from utils import decorate qs = np.linspace(0, 12, 101) ps = actual_dist.cdf(qs) cdf = Cdf(ps, qs) cdf.plot() decorate(xlabel='Duration in time', ylabel='CDF', title='CDF of a Weibull distribution') Explanation: The result is an object that represents the distribution. Here's what the Weibull CDF looks like with those parameters. End of explanation np.random.seed(17) data = actual_dist.rvs(10) data Explanation: actual_dist provides rvs, which we can use to generate a random sample from this distribution. End of explanation from utils import make_uniform lams = np.linspace(0.1, 10.1, num=101) prior_lam = make_uniform(lams, name='lambda') Explanation: So, given the parameters of the distribution, we can generate a sample. Now let's see if we can go the other way: given the sample, we'll estimate the parameters. 
Here's a uniform prior distribution for $\lambda$: End of explanation ks = np.linspace(0.1, 5.1, num=101) prior_k = make_uniform(ks, name='k') Explanation: And a uniform prior for $k$: End of explanation from utils import make_joint prior = make_joint(prior_lam, prior_k) Explanation: I'll use make_joint to make a joint prior distribution for the two parameters. End of explanation lam_mesh, k_mesh, data_mesh = np.meshgrid( prior.columns, prior.index, data) Explanation: The result is a DataFrame that represents the joint prior, with possible values of $\lambda$ across the columns and values of $k$ down the rows. Now I'll use meshgrid to make a 3-D mesh with $\lambda$ on the first axis (axis=0), $k$ on the second axis (axis=1), and the data on the third axis (axis=2). End of explanation densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh) densities.shape Explanation: Now we can use weibull_dist to compute the PDF of the Weibull distribution for each pair of parameters and each data point. End of explanation likelihood = densities.prod(axis=2) likelihood.sum() Explanation: The likelihood of the data is the product of the probability densities along axis=2. End of explanation from utils import normalize posterior = prior * likelihood normalize(posterior) Explanation: Now we can compute the posterior distribution in the usual way. End of explanation def update_weibull(prior, data): Update the prior based on data. lam_mesh, k_mesh, data_mesh = np.meshgrid( prior.columns, prior.index, data) densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh) likelihood = densities.prod(axis=2) posterior = prior * likelihood normalize(posterior) return posterior Explanation: The following function encapsulates these steps. It takes a joint prior distribution and the data, and returns a joint posterior distribution. End of explanation posterior = update_weibull(prior, data) Explanation: Here's how we use it. End of explanation from utils import plot_contour plot_contour(posterior) decorate(title='Posterior joint distribution of Weibull parameters') Explanation: And here's a contour plot of the joint posterior distribution. End of explanation from utils import marginal posterior_lam = marginal(posterior, 0) posterior_k = marginal(posterior, 1) Explanation: It looks like the range of likely values for $\lambda$ is about 1 to 4, which contains the actual value we used to generate the data, 3. And the range for $k$ is about 0.5 to 1.5, which contains the actual value, 0.8. Marginal Distributions To be more precise about these ranges, we can extract the marginal distributions: End of explanation import matplotlib.pyplot as plt plt.axvline(3, color='C5') posterior_lam.plot(color='C4', label='lambda') decorate(xlabel='lam', ylabel='PDF', title='Posterior marginal distribution of lam') Explanation: And compute the posterior means and 90% credible intervals. End of explanation plt.axvline(0.8, color='C5') posterior_k.plot(color='C12', label='k') decorate(xlabel='k', ylabel='PDF', title='Posterior marginal distribution of k') Explanation: The vertical gray line show the actual value of $\lambda$. Here's the marginal posterior distribution for $k$. End of explanation print(lam, posterior_lam.credible_interval(0.9)) print(k, posterior_k.credible_interval(0.9)) Explanation: The posterior distributions are wide, which means that with only 10 data points we can't estimated the parameters precisely. But for both parameters, the actual value falls in the credible interval. 
End of explanation np.random.seed(19) start = np.random.uniform(0, 8, size=10) start Explanation: Incomplete Data In the previous example we were given 10 random values from a Weibull distribution, and we used them to estimate the parameters (which we pretended we didn't know). But in many real-world scenarios, we don't have complete data; in particular, when we observe a system at a point in time, we generally have information about the past, but not the future. As an example, suppose you work at a dog shelter and you are interested in the time between the arrival of a new dog and when it is adopted. Some dogs might be snapped up immediately; others might have to wait longer. The people who operate the shelter might want to make inferences about the distribution of these residence times. Suppose you monitor arrivals and departures over 8 weeks and 10 dogs arrive during that interval. I'll assume that their arrival times are distributed uniformly, so I'll generate random values like this. End of explanation np.random.seed(17) duration = actual_dist.rvs(10) duration Explanation: Now let's suppose that the residence times follow the Weibull distribution we used in the previous example. We can generate a sample from that distribution like this: End of explanation import pandas as pd d = dict(start=start, end=start+duration) obs = pd.DataFrame(d) Explanation: I'll use these values to construct a DataFrame that contains the arrival and departure times for each dog, called start and end. End of explanation obs = obs.sort_values(by='start', ignore_index=True) obs Explanation: For display purposes, I'll sort the rows of the DataFrame by arrival time. End of explanation censored = obs['end'] > 8 Explanation: Notice that several of the lifelines extend past the observation window of 8 weeks. So if we observed this system at the beginning of Week 8, we would have incomplete information. Specifically, we would not know the future adoption times for Dogs 6, 7, and 8. I'll simulate this incomplete data by identifying the lifelines that extend past the observation window: End of explanation obs.loc[censored, 'end'] = 8 obs.loc[censored, 'status'] = 0 Explanation: censored is a Boolean Series that is True for lifelines that extend past Week 8. Data that is not available is sometimes called "censored" in the sense that it is hidden from us. But in this case it is hidden because we don't know the future, not because someone is censoring it. For the lifelines that are censored, I'll modify end to indicate when they are last observed and status to indicate that the observation is incomplete. End of explanation def plot_lifelines(obs): Plot a line for each observation. obs: DataFrame for y, row in obs.iterrows(): start = row['start'] end = row['end'] status = row['status'] if status == 0: # ongoing plt.hlines(y, start, end, color='C0') else: # complete plt.hlines(y, start, end, color='C1') plt.plot(end, y, marker='o', color='C1') decorate(xlabel='Time (weeks)', ylabel='Dog index', title='Lifelines showing censored and uncensored observations') plt.gca().invert_yaxis() plot_lifelines(obs) Explanation: Now we can plot a "lifeline" for each dog, showing the arrival and departure times on a time line. End of explanation obs['T'] = obs['end'] - obs['start'] Explanation: And I'll add one more column to the table, which contains the duration of the observed parts of the lifelines. 
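Written as a single likelihood, the complete and censored observations used in this chapter combine as

$$\mathcal{L}(\lambda, k) = \prod_{i\,\in\,\mathrm{complete}} f(t_i \mid \lambda, k)\;\prod_{j\,\in\,\mathrm{censored}} S(T_j \mid \lambda, k),$$

where $f$ is the Weibull PDF and $S$ is the survival function: fully observed durations contribute a density, while durations known only to exceed $T_j$ contribute the probability of exceeding it.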
End of explanation data1 = obs.loc[~censored, 'T'] data2 = obs.loc[censored, 'T'] data1 data2 Explanation: What we have simulated is the data that would be available at the beginning of Week 8. Using Incomplete Data Now, let's see how we can use both kinds of data, complete and incomplete, to infer the parameters of the distribution of residence times. First I'll split the data into two sets: data1 contains residence times for dogs whose arrival and departure times are known; data2 contains incomplete residence times for dogs who were not adopted during the observation interval. End of explanation posterior1 = update_weibull(prior, data1) Explanation: For the complete data, we can use update_weibull, which uses the PDF of the Weibull distribution to compute the likelihood of the data. End of explanation def update_weibull_incomplete(prior, data): Update the prior using incomplete data. lam_mesh, k_mesh, data_mesh = np.meshgrid( prior.columns, prior.index, data) # evaluate the survival function probs = weibull_dist(lam_mesh, k_mesh).sf(data_mesh) likelihood = probs.prod(axis=2) posterior = prior * likelihood normalize(posterior) return posterior Explanation: For the incomplete data, we have to think a little harder. At the end of the observation interval, we don't know what the residence time will be, but we can put a lower bound on it; that is, we can say that the residence time will be greater than T. And that means that we can compute the likelihood of the data using the survival function, which is the probability that a value from the distribution exceeds T. The following function is identical to update_weibull except that it uses sf, which computes the survival function, rather than pdf. End of explanation posterior2 = update_weibull_incomplete(posterior1, data2) Explanation: Here's the update with the incomplete data. End of explanation plot_contour(posterior2) decorate(title='Posterior joint distribution, incomplete data') Explanation: And here's what the joint posterior distribution looks like after both updates. End of explanation posterior_lam2 = marginal(posterior2, 0) posterior_k2 = marginal(posterior2, 1) Explanation: Compared to the previous contour plot, it looks like the range of likely values for $\lambda$ is substantially wider. We can see that more clearly by looking at the marginal distributions. End of explanation posterior_lam.plot(color='C5', label='All complete', linestyle='dashed') posterior_lam2.plot(color='C2', label='Some censored') decorate(xlabel='lambda', ylabel='PDF', title='Marginal posterior distribution of lambda') Explanation: Here's the posterior marginal distribution for $\lambda$ compared to the distribution we got using all complete data. End of explanation posterior_k.plot(color='C5', label='All complete', linestyle='dashed') posterior_k2.plot(color='C12', label='Some censored') decorate(xlabel='k', ylabel='PDF', title='Posterior marginal distribution of k') Explanation: The distribution with some incomplete data is substantially wider. As an aside, notice that the posterior distribution does not come all the way to 0 on the right side. That suggests that the range of the prior distribution is not wide enough to cover the most likely values for this parameter. If I were concerned about making this distribution more accurate, I would go back and run the update again with a wider prior. 
Here's the posterior marginal distribution for $k$: End of explanation download('https://gist.github.com/epogrebnyak/7933e16c0ad215742c4c104be4fbdeb1/raw/c932bc5b6aa6317770c4cbf43eb591511fec08f9/lamps.csv') Explanation: In this example, the marginal distribution is shifted to the left when we have incomplete data, but it is not substantially wider. In summary, we have seen how to combine complete and incomplete data to estimate the parameters of a Weibull distribution, which is useful in many real-world scenarios where some of the data are censored. In general, the posterior distributions are wider when we have incomplete data, because less information leads to more uncertainty. This example is based on data I generated; in the next section we'll do a similar analysis with real data. Light Bulbs In 2007 researchers ran an experiment to characterize the distribution of lifetimes for light bulbs. Here is their description of the experiment: An assembly of 50 new Philips (India) lamps with the rating 40 W, 220 V (AC) was taken and installed in the horizontal orientation and uniformly distributed over a lab area 11 m x 7 m. The assembly was monitored at regular intervals of 12 h to look for failures. The instants of recorded failures were [recorded] and a total of 32 data points were obtained such that even the last bulb failed. End of explanation df = pd.read_csv('lamps.csv', index_col=0) df.head() Explanation: We can load the data into a DataFrame like this: End of explanation from empiricaldist import Pmf pmf_bulb = Pmf(df['f'].to_numpy(), df['h']) pmf_bulb.normalize() Explanation: Column h contains the times when bulbs failed in hours; Column f contains the number of bulbs that failed at each time. We can represent these values and frequencies using a Pmf, like this: End of explanation pmf_bulb.mean() Explanation: Because of the design of this experiment, we can consider the data to be a representative sample from the distribution of lifetimes, at least for light bulbs that are lit continuously. The average lifetime is about 1400 h. End of explanation lams = np.linspace(1000, 2000, num=51) prior_lam = make_uniform(lams, name='lambda') ks = np.linspace(1, 10, num=51) prior_k = make_uniform(ks, name='k') Explanation: Assuming that these data are well modeled by a Weibull distribution, let's estimate the parameters that fit the data. Again, I'll start with uniform priors for $\lambda$ and $k$: End of explanation prior_bulb = make_joint(prior_lam, prior_k) Explanation: For this example, there are 51 values in the prior distribution, rather than the usual 101. That's because we are going to use the posterior distributions to do some computationally-intensive calculations. They will run faster with fewer values, but the results will be less precise. As usual, we can use make_joint to make the prior joint distribution. End of explanation data_bulb = np.repeat(df['h'], df['f']) len(data_bulb) Explanation: Although we have data for 50 light bulbs, there are only 32 unique lifetimes in the dataset. For the update, it is convenient to express the data in the form of 50 lifetimes, with each lifetime repeated the given number of times. We can use np.repeat to transform the data. End of explanation posterior_bulb = update_weibull(prior_bulb, data_bulb) Explanation: Now we can use update_weibull to do the update. 
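For reference, update_weibull itself was defined earlier in the chapter; a sketch consistent with update_weibull_incomplete above (same mesh layout, but the likelihood of an exactly observed lifetime comes from the PDF rather than the survival function):
def update_weibull_sketch(prior, data):
    # evaluate the Weibull PDF for every (lambda, k) pair and every observed lifetime
    lam_mesh, k_mesh, data_mesh = np.meshgrid(prior.columns, prior.index, data)
    densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
    likelihood = densities.prod(axis=2)
    posterior = prior * likelihood
    normalize(posterior)
    return posterior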
End of explanation plot_contour(posterior_bulb) decorate(title='Joint posterior distribution, light bulbs') Explanation: Here's what the posterior joint distribution looks like: End of explanation lam_mesh, k_mesh = np.meshgrid( prior_bulb.columns, prior_bulb.index) Explanation: To summarize this joint posterior distribution, we'll compute the posterior mean lifetime. Posterior Means To compute the posterior mean of a joint distribution, we'll make a mesh that contains the values of $\lambda$ and $k$. End of explanation means = weibull_dist(lam_mesh, k_mesh).mean() means.shape Explanation: Now for each pair of parameters we'll use weibull_dist to compute the mean. End of explanation prod = means * posterior_bulb Explanation: The result is an array with the same dimensions as the joint distribution. Now we need to weight each mean with the corresponding probability from the joint posterior. End of explanation prod.to_numpy().sum() Explanation: Finally we compute the sum of the weighted means. End of explanation def joint_weibull_mean(joint): Compute the mean of a joint distribution of Weibulls. lam_mesh, k_mesh = np.meshgrid( joint.columns, joint.index) means = weibull_dist(lam_mesh, k_mesh).mean() prod = means * joint return prod.to_numpy().sum() Explanation: Based on the posterior distribution, we think the mean lifetime is about 1413 hours. The following function encapsulates these steps: End of explanation def update_weibull_between(prior, data, dt=12): Update the prior based on data. lam_mesh, k_mesh, data_mesh = np.meshgrid( prior.columns, prior.index, data) dist = weibull_dist(lam_mesh, k_mesh) cdf1 = dist.cdf(data_mesh) cdf2 = dist.cdf(data_mesh-12) likelihood = (cdf1 - cdf2).prod(axis=2) posterior = prior * likelihood normalize(posterior) return posterior Explanation: Incomplete Information The previous update was not quite right, because it assumed each light bulb died at the instant we observed it. According to the report, the researchers only checked the bulbs every 12 hours. So if they see that a bulb has died, they know only that it died during the 12 hours since the last check. It is more strictly correct to use the following update function, which uses the CDF of the Weibull distribution to compute the probability that a bulb dies during a given 12 hour interval. End of explanation posterior_bulb2 = update_weibull_between(prior_bulb, data_bulb) Explanation: The probability that a value falls in an interval is the difference between the CDF at the beginning and end of the interval. Here's how we run the update. End of explanation plot_contour(posterior_bulb2) decorate(title='Joint posterior distribution, light bulbs') Explanation: And here are the results. End of explanation joint_weibull_mean(posterior_bulb) joint_weibull_mean(posterior_bulb2) Explanation: Visually this result is almost identical to what we got using the PDF. And that's good news, because it suggests that using the PDF can be a good approximation even if it's not strictly correct. To see whether it makes any difference at all, let's check the posterior means. End of explanation lam = 1550 k = 4.25 t = 1000 prob_dead = weibull_dist(lam, k).cdf(t) prob_dead Explanation: When we take into account the 12-hour interval between observations, the posterior mean is about 6 hours less. And that makes sense: if we assume that a bulb is equally likely to expire at any point in the interval, the average would be the midpoint of the interval. 
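The close agreement between the two updates is not an accident: for a 12-hour window, the interval probability is very nearly the PDF times the window width. A quick numeric check (the parameter values 1550 and 4.25 are the illustrative ones used in the next section):
dist = weibull_dist(1550, 4.25)
t = 1000
print(dist.cdf(t) - dist.cdf(t - 12))   # probability of failing in the 12 hours before t
print(12 * dist.pdf(t))                 # PDF-times-width approximation of the same quantity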
Posterior Predictive Distribution Suppose you install 100 light bulbs of the kind in the previous section, and you come back to check on them after 1000 hours. Based on the posterior distribution we just computed, what is the distribution of the number of bulbs you find dead? If we knew the parameters of the Weibull distribution for sure, the answer would be a binomial distribution. For example, if we know that $\lambda=1550$ and $k=4.25$, we can use weibull_dist to compute the probability that a bulb dies before you return: End of explanation from utils import make_binomial n = 100 p = prob_dead dist_num_dead = make_binomial(n, p) Explanation: If there are 100 bulbs and each has this probability of dying, the number of dead bulbs follows a binomial distribution. End of explanation dist_num_dead.plot(label='known parameters') decorate(xlabel='Number of dead bulbs', ylabel='PMF', title='Predictive distribution with known parameters') Explanation: And here's what it looks like. End of explanation posterior_series = posterior_bulb.stack() posterior_series.head() Explanation: But that's based on the assumption that we know $\lambda$ and $k$, and we don't. Instead, we have a posterior distribution that contains possible values of these parameters and their probabilities. So the posterior predictive distribution is not a single binomial; instead it is a mixture of binomials, weighted with the posterior probabilities. We can use make_mixture to compute the posterior predictive distribution. It doesn't work with joint distributions, but we can convert the DataFrame that represents a joint distribution to a Series, like this: End of explanation pmf_seq = [] for (k, lam) in posterior_series.index: prob_dead = weibull_dist(lam, k).cdf(t) pmf = make_binomial(n, prob_dead) pmf_seq.append(pmf) Explanation: The result is a Series with a MultiIndex that contains two "levels": the first level contains the values of k; the second contains the values of lam. With the posterior in this form, we can iterate through the possible parameters and compute a predictive distribution for each pair. End of explanation from utils import make_mixture post_pred = make_mixture(posterior_series, pmf_seq) Explanation: Now we can use make_mixture, passing as parameters the posterior probabilities in posterior_series and the sequence of binomial distributions in pmf_seq. End of explanation dist_num_dead.plot(label='known parameters') post_pred.plot(label='unknown parameters') decorate(xlabel='Number of dead bulbs', ylabel='PMF', title='Posterior predictive distribution') Explanation: Here's what the posterior predictive distribution looks like, compared to the binomial distribution we computed with known parameters. End of explanation # Solution goes here # Solution goes here # Solution goes here # Solution goes here Explanation: The posterior predictive distribution is wider because it represents our uncertainty about the parameters as well as our uncertainty about the number of dead bulbs. Summary This chapter introduces survival analysis, which is used to answer questions about the time until an event, and the Weibull distribution, which is a good model for "lifetimes" (broadly interpreted) in a number of domains. We used joint distributions to represent prior probabilities for the parameters of the Weibull distribution, and we updated them three ways: knowing the exact duration of a lifetime, knowing a lower bound, and knowing that a lifetime fell in a given interval. 
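In code terms, the three updates differ only in the likelihood term; schematically, with the same mesh arrays used in the update functions above and dt for the interval width:
dist = weibull_dist(lam_mesh, k_mesh)
like_exact    = dist.pdf(data_mesh)                              # exact lifetime observed
like_at_least = dist.sf(data_mesh)                               # still running at time t (lower bound)
like_interval = dist.cdf(data_mesh) - dist.cdf(data_mesh - dt)   # failed within an interval of width dt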
These examples demonstrate a feature of Bayesian methods: they can be adapted to handle incomplete, or "censored", data with only small changes. As an exercise, you'll have a chance to work with one more type of censored data, when we are given an upper bound on a lifetime. The methods in this chapter work with any distribution with two parameters. In the exercises, you'll have a chance to estimate the parameters of a two-parameter gamma distribution, which is used to describe a variety of natural phenomena. And in the next chapter we'll move on to models with three parameters! Exercises Exercise: Using data about the lifetimes of light bulbs, we computed the posterior distribution from the parameters of a Weibull distribution, $\lambda$ and $k$, and the posterior predictive distribution for the number of dead bulbs, out of 100, after 1000 hours. Now suppose you do the experiment: You install 100 light bulbs, come back after 1000 hours, and find 20 dead light bulbs. Update the posterior distribution based on this data. How much does it change the posterior mean? Suggestions: Use a mesh grid to compute the probability of finding a bulb dead after 1000 hours for each pair of parameters. For each of those probabilities, compute the likelihood of finding 20 dead bulbs out of 100. Use those likelihoods to update the posterior distribution. End of explanation import scipy.stats def gamma_dist(k, theta): Makes a gamma object. k: shape parameter theta: scale parameter returns: gamma object return scipy.stats.gamma(k, scale=theta) Explanation: Exercise: In this exercise, we'll use one month of data to estimate the parameters of a distribution that describes daily rainfall in Seattle. Then we'll compute the posterior predictive distribution for daily rainfall and use it to estimate the probability of a rare event, like more than 1.5 inches of rain in a day. According to hydrologists, the distribution of total daily rainfall (for days with rain) is well modeled by a two-parameter gamma distribution. When we worked with the one-parameter gamma distribution in <<_TheGammaDistribution>>, we used the Greek letter $\alpha$ for the parameter. For the two-parameter gamma distribution, we will use $k$ for the "shape parameter", which determines the shape of the distribution, and the Greek letter $\theta$ or theta for the "scale parameter". The following function takes these parameters and returns a gamma object from SciPy. End of explanation # Load the data file download('https://github.com/AllenDowney/ThinkBayes2/raw/master/data/2203951.csv') Explanation: Now we need some data. The following cell downloads data I collected from the National Oceanic and Atmospheric Administration (NOAA) for Seattle, Washington in May 2020. End of explanation weather = pd.read_csv('2203951.csv') weather.head() Explanation: Now we can load it into a DataFrame: End of explanation rained = weather['PRCP'] > 0 rained.sum() Explanation: I'll make a Boolean Series to indicate which days it rained. End of explanation prcp = weather.loc[rained, 'PRCP'] prcp.describe() Explanation: And select the total rainfall on the days it rained. End of explanation cdf_data = Cdf.from_seq(prcp) cdf_data.plot() decorate(xlabel='Total rainfall (in)', ylabel='CDF', title='Distribution of rainfall on days it rained') Explanation: Here's what the CDF of the data looks like. 
End of explanation # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here Explanation: The maximum is 1.14 inches of rain is one day. To estimate the probability of more than 1.5 inches, we need to extrapolate from the data we have, so our estimate will depend on whether the gamma distribution is really a good model. I suggest you proceed in the following steps: Construct a prior distribution for the parameters of the gamma distribution. Note that $k$ and $\theta$ must be greater than 0. Use the observed rainfalls to update the distribution of parameters. Compute the posterior predictive distribution of rainfall, and use it to estimate the probability of getting more than 1.5 inches of rain in one day. End of explanation
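For readers who want a starting point on the rainfall exercise, one possible sketch follows. It is not the book's solution: the prior ranges are guesses that should be widened if the posterior presses against their edges, and it reuses make_uniform, make_joint, normalize, and gamma_dist from above together with the prcp series.
ks = np.linspace(0.01, 2, num=51)        # prior range for the shape parameter: an assumption
thetas = np.linspace(0.01, 1.5, num=51)  # prior range for the scale parameter: an assumption
prior_k_rain = make_uniform(ks, name='k')
prior_theta_rain = make_uniform(thetas, name='theta')
prior_rain = make_joint(prior_k_rain, prior_theta_rain)   # k on the columns, theta on the index

def update_gamma(prior, data):
    # same pattern as update_weibull, with the gamma PDF as the likelihood
    k_mesh, theta_mesh, data_mesh = np.meshgrid(prior.columns, prior.index, data)
    likelihood = gamma_dist(k_mesh, theta_mesh).pdf(data_mesh).prod(axis=2)
    posterior = prior * likelihood
    normalize(posterior)
    return posterior

posterior_rain = update_gamma(prior_rain, prcp)
From posterior_rain, a posterior predictive distribution of daily rainfall can then be built with make_mixture, mirroring the light-bulb example, and used to estimate the probability of more than 1.5 inches in a day.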
12,055
Given the following text description, write Python code to implement the functionality described below step by step Description: Intro Spacy Step2: Spacy Documentation Spacy is an NLP/Computational Linguistics package built from the ground up. It's written in Cython so it's fast!! Let's check it out. Here's some text from Alice in Wonderland free on Gutenberg. Step3: Download and load the model. SpaCy has an excellent English NLP processor. It has the following features which we shall explore Step4: Looks like the same text? Let's dig a little deeper Tokenization Sentences Step5: Words and Punctuation - Along with POS tagging Step6: Entities - Explanation of Entity Types Step7: Noun Chunks Step8: The Semi Holy Grail - Syntactic Depensy Parsing See Demo for clarity Step9: What is 'nsubj'? 'acomp'? See The Universal Dependencies Word Vectorization - Word2Vec
Python Code: !pip install spacy nltk Explanation: Intro Spacy End of explanation text = 'Please would you tell me,' said Alice, a little timidly, for she was not quite sure whether it was good manners for her to speak first, 'why your cat grins like that?' 'It's a Cheshire cat,' said the Duchess, 'and that's why. Pig!' She said the last word with such sudden violence that Alice quite jumped; but she saw in another moment that it was addressed to the baby, and not to her, so she took courage, and went on again:— 'I didn't know that Cheshire cats always grinned; in fact, I didn't know that cats could grin.' 'They all can,' said the Duchess; 'and most of 'em do.' 'I don't know of any that do,' Alice said very politely, feeling quite pleased to have got into a conversation. 'You don't know much,' said the Duchess; 'and that's a fact.' Explanation: Spacy Documentation Spacy is an NLP/Computational Linguistics package built from the ground up. It's written in Cython so it's fast!! Let's check it out. Here's some text from Alice in Wonderland free on Gutenberg. End of explanation import spacy import spacy.en.download # spacy.en.download.main() processor = spacy.en.English() processed_text = processor(text) processed_text Explanation: Download and load the model. SpaCy has an excellent English NLP processor. It has the following features which we shall explore: - Entity recognition - Dependency Parsing - Part of Speech tagging - Word Vectorization - Tokenization - Lemmatization - Noun Chunks Download the Model, it may take a while End of explanation n = 0 for sentence in processed_text.sents: print(n, sentence) n+=1 Explanation: Looks like the same text? Let's dig a little deeper Tokenization Sentences End of explanation n = 0 for sentence in processed_text.sents: for token in sentence: print(n, token, token.pos_, token.lemma_) n+=1 Explanation: Words and Punctuation - Along with POS tagging End of explanation for entity in processed_text.ents: print(entity, entity.label_) Explanation: Entities - Explanation of Entity Types End of explanation for noun_chunk in processed_text.noun_chunks: print(noun_chunk) Explanation: Noun Chunks End of explanation def pr_tree(word, level): if word.is_punct: return for child in word.lefts: pr_tree(child, level+1) print('\t'* level + word.text + ' - ' + word.dep_) for child in word.rights: pr_tree(child, level+1) for sentence in processed_text.sents: pr_tree(sentence.root, 0) print('-------------------------------------------') Explanation: The Semi Holy Grail - Syntactic Depensy Parsing See Demo for clarity End of explanation proc_fruits = processor('''I think green apples are delicious. While pears have a strange texture to them. The bowls they sit in are ugly.''') apples, pears, bowls = proc_fruits.sents fruit = processed_text.vocab['fruit'] print(apples.similarity(fruit)) print(pears.similarity(fruit)) print(bowls.similarity(fruit)) Explanation: What is 'nsubj'? 'acomp'? See The Universal Dependencies Word Vectorization - Word2Vec End of explanation
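The notebook above targets the spaCy 1.x interface (spacy.en.English, spacy.en.download). With current spaCy releases the same steps look roughly like the sketch below; en_core_web_sm is one commonly used model name, and the similarity example needs a model that ships word vectors, such as en_core_web_md.
import spacy
nlp = spacy.load("en_core_web_sm")    # after: python -m spacy download en_core_web_sm
doc = nlp(text)                       # `text` as defined above
for sent in doc.sents:
    print(sent.text)
for token in doc:
    print(token.text, token.pos_, token.lemma_, token.dep_)
for ent in doc.ents:
    print(ent.text, ent.label_)
for chunk in doc.noun_chunks:
    print(chunk.text)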
12,056
Given the following text description, write Python code to implement the functionality described below step by step Description: Deep Learning, part3 Step1: Autoencoders properties and usage Step2: Goal Step3: Task
Python Code: from IPython.display import Image Image(url= "../img/AE.png", width=400, height=400) Explanation: Deep Learning, part3: Other important examples Generative models: autoencoders and GANS Working with tabular data, data integration Recurrent NN and attention mechanisms Reinforcement learning Generative models: autoencoders, VAEs and GANS Generative models. These are NNs models used for dimensionality reduction or dataset transformations. A popular use for a NNs is to take its fitted weights and use them on other datasets. This is called transfer learning. NNs need to verify information against a set of prior information in order to learn. In that sense, all NNs are supervised learning methods. It is possible however to perform unsupervised learning with NNs, and the most popular method is auto-encoders. More precisely though, they are self-supervised because they generate their own labels from the training data. Autoencoders A dimensionality reduction (or compression) NN algorithm in which the input of a model is the same as the output. They compress the input into a lower-dimensional code and then reconstruct the output from this representation. 3 components: encoder, code and decoder. The encoder compresses the input and produces the code, the decoder then reconstructs the input only using this code. End of explanation import pathlib from pathlib import Path import pandas as pd data_loc = r'D:\windata\work\biopycourse\data\cll_data' df = pd.read_csv(pathlib.Path(data_loc) / "cll_mrna.txt", index_col=0, sep ="\t") df = df.dropna(axis='columns') print(df.shape) df.head() X_train = df.T Explanation: Autoencoders properties and usage: - Data-specific: Only able to meaningfully compress data similar to what they have been trained on. Autoencoders trained on handwritten digits won't compress landscape photos. - Lossy: The output of the autoencoder will not be exactly the same as the input, it will be a close but degraded representation. - Data denoising: By learning the relevant features they are able to denoise/normalize a dataset. - Clustering: Clustering algorithms struggle with large dimensional data, so AE are an important preprocessing step. - Generative models: Variational Autoencoders (VAE) learn the parameters of the probability distribution modeling the input data. By sampling points from this distribution we can also use the VAE as a generative model. Tabular data So far we have only used NNs on image and text (by converting them to numbers). Let's see an example of working directly with tabular data, which is more commonly used in 'omics research. 
End of explanation import tensorflow import numpy as np from tensorflow.keras.models import Model from tensorflow.keras.layers import BatchNormalization, Concatenate, Dense, Input, Lambda,Dropout # Hyperparameters input_size = X_train.shape[1] # elu, https://keras.io/activations/, maybe deals better with vanishing gradient #act = "elu" act = "relu" # the intermediate dense layers size ds = 128 # latent space dimension size ls = 16 # dropout rate [0 1] dropout = 0.2 # ensure reproducibility np.random.seed(42) tf.random.set_seed(42) # Define the encoder inputs_layer = Input(shape=(input_size,), name='input') x = Dense(ds, activation=act)(inputs_layer) x = BatchNormalization()(x) coded_layer = Dense(ls, name='coded_layer')(x) encoder = Model(inputs_layer, coded_layer, name='encoder') encoder.summary() # Define the decoder decoder_inputs_layer = Input(shape=(ls,), name='latent_inputs') x = decoder_inputs_layer x = Dense(ds, activation=act)(x) x = BatchNormalization()(x) x = Dropout(dropout)(x) output_layer = Dense(input_size)(x) decoder = Model(decoder_inputs_layer, output_layer, name='decoder') decoder.summary() # Define the autoencoder outputs = decoder(encoder(inputs_layer)) autoencoder = Model(inputs_layer, outputs, name='autoencoder') autoencoder.summary() # compile and run from tensorflow.keras.optimizers import Adam from tensorflow.keras import optimizers adam = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.001, amsgrad=False) autoencoder.compile(loss='mse', optimizer=adam, metrics=['accuracy']) #history = autoencoder.fit(X_train, X_train, epochs=5, batch_size=32, shuffle=True, validation_data=(X_test, X_test)) history = autoencoder.fit(X_train, X_train, epochs=200, batch_size=64, shuffle=True) autoencoder.save('cnn.h5') encoded_X_train = encoder.predict(X_train) encoded_X_train.shape Explanation: Goal: reduce the dimensionality of this dataset from 5000 to 16, in order to efficiently cluster these samples. New learnings: - parametrization - batch normalization layers - naming layers End of explanation from keras.preprocessing import sequence from keras.models import Sequential from keras.layers import Dense, Embedding from keras.layers import LSTM from keras.datasets import imdb max_features = 20000 maxlen = 80 # cut texts after this number of words (among top max_features most common words) batch_size = 32 print('Loading data...') (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features) print(len(x_train), 'train sequences') print(len(x_test), 'test sequences') print('Pad sequences (samples x time)') x_train = sequence.pad_sequences(x_train, maxlen=maxlen) x_test = sequence.pad_sequences(x_test, maxlen=maxlen) print('x_train shape:', x_train.shape) print('x_test shape:', x_test.shape) print('Build model...') model = Sequential() model.add(Embedding(max_features, 128)) model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2)) model.add(Dense(1, activation='sigmoid')) # try using different optimizers and different optimizer configs model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) print('Train...') model.fit(x_train, y_train, batch_size=batch_size, epochs=15, validation_data=(x_test, y_test)) score, acc = model.evaluate(x_test, y_test, batch_size=batch_size) print('Test score:', score) print('Test accuracy:', acc) conda install numpy">=1.19.1" import numpy print(numpy.__version__) Explanation: Task: - Run KMeans both before and after dimensionality reduction, and plot their silhouette scores. 
Is there an improvement? - (advanced) Expand the AE above into a VAE, and repeat clustering assesment. - Using the above hyperparameters try to improve the model fit and re-asses clustering performance. - (really advanced) Search for a VAE-GANS implementation and re run. What are GANS? “the most interesting idea in the last 10 years in Machine Learning” (Ian LeCun) Generator model: the goal of the generator is to fool the discriminator, so the generative neural network is trained to maximise the final classification error (between true and generated data) Discriminator model: the goal of the discriminator is to detect fake generated data, so the discriminative neural network is trained to minimise the final classification error Example for MNIST: - https://machinelearningmastery.com/how-to-develop-a-generative-adversarial-network-for-an-mnist-handwritten-digits-from-scratch-in-keras/ Recurrent Neural Networks These networks process (loop) the information several times through every node. Such networks are mainly applied with the purpose of classifying sequential input and rely on backpropagation of error to do so. When the information passes a single time, the network is called feed-forward. Recurrent networks, on the other hand, take as their input not just the current input example they see, but also what they have perceived previously in time. Thus a RNN uses the concept of time and memory. One could, for example, define the activation function on a hidden state in this manner, by a method called backpropagation through time: output_t = relu(dot(W, input) + dot(U, output.t-1)) A traditional deep neural network uses different parameters at each layer, while a RNN shares the same parameters across all steps. The output of each time step doesn't need to be kept (not necessarily). We not care for example while doing sentiment analysis about the output after every word. Features: - they can be bi-directional - they can be deep (multiple layers per time step) - RNNs can be combined with CNNs to solve complex problems, from speech or image recognition to machine translation. End of explanation
12,057
Given the following text description, write Python code to implement the functionality described below step by step Description: Context Often, it isn't possible to get the real data where we applied our analysis. In these cases, we can generate similar dataset that contain similar phenomena based on real data. This notebook shows an example about how we can do it. Get base data The data, we want to derive another dataset. It's just there to get some realistic file names Step1: Create synthetic dataset 1 For the first technology, where "JDBC" was used. Create committed lines Step2: Add timestamp Step3: Treat first commit separetely Set a fixed value because we have to start with some code at the beginning Step4: Add file names Sample file names including their paths from an existing dataset Step5: Check dataset Step6: Sum up the data and check if it was created as wanted. Step7: Create synthetic dataset 2 Step8: Check dataset Step9: Add some noise Step10: Check dataset Step11: Concatenate all datasets Step12: Truncate data until fixed date Step13: Export the data Step14: Check loaded data
Python Code: from lib.ozapfdis import git_tc log = git_tc.log_numstat("C:/dev/repos/buschmais-spring-petclinic") log.head() log = log[log.file.str.contains(".java")] log.loc[log.file.str.contains("/jdbc/"), 'type'] = "jdbc" log.loc[log.file.str.contains("/jpa/"), 'type'] = "jpa" log.loc[log.type.isna(), 'type'] = "other" log.head() Explanation: Context Often, it isn't possible to get the real data where we applied our analysis. In these cases, we can generate similar dataset that contain similar phenomena based on real data. This notebook shows an example about how we can do it. Get base data The data, we want to derive another dataset. It's just there to get some realistic file names End of explanation import numpy as np import pandas as pd np.random.seed(0) # adding period added_lines = [int(np.random.normal(30,50)) for i in range(0,600)] # deleting period added_lines.extend([int(np.random.normal(-50,100)) for i in range(0,200)]) added_lines.extend([int(np.random.normal(-2,20)) for i in range(0,200)]) added_lines.extend([int(np.random.normal(-3,10)) for i in range(0,200)]) df_jdbc = pd.DataFrame() df_jdbc['lines'] = added_lines df_jdbc.head() Explanation: Create synthetic dataset 1 For the first technology, where "JDBC" was used. Create committed lines End of explanation times = pd.timedelta_range("00:00:00","23:59:59", freq="s") times = pd.Series(times) times.head() dates = pd.date_range('2013-05-15', '2017-07-23') dates = pd.to_datetime(dates) dates = dates[~dates.dayofweek.isin([5,6])] dates = pd.Series(dates) dates = dates.add(times.sample(len(dates), replace=True).values) dates.head() df_jdbc['timestamp'] = dates.sample(len(df_jdbc), replace=True).sort_values().reset_index(drop=True) df_jdbc = df_jdbc.sort_index() df_jdbc.head() Explanation: Add timestamp End of explanation df_jdbc.loc[0, 'lines'] = 250 df_jdbc.head() df_jdbc = df_jdbc Explanation: Treat first commit separetely Set a fixed value because we have to start with some code at the beginning End of explanation df_jdbc['file'] = log[log['type'] == 'jdbc']['file'].sample(len(df_jdbc), replace=True).values Explanation: Add file names Sample file names including their paths from an existing dataset End of explanation %matplotlib inline df_jdbc.lines.hist() Explanation: Check dataset End of explanation df_jdbc_timed = df_jdbc.set_index('timestamp') df_jdbc_timed['count'] = df_jdbc_timed.lines.cumsum() df_jdbc_timed['count'].plot() last_non_zero_timestamp = df_jdbc_timed[df_jdbc_timed['count'] >= 0].index.max() last_non_zero_timestamp df_jdbc = df_jdbc[df_jdbc.timestamp <= last_non_zero_timestamp] df_jdbc.head() Explanation: Sum up the data and check if it was created as wanted. 
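A quick check on the labeling above can be useful (a sketch; it assumes the log DataFrame with its type column as created in the preceding cell). Note also that str.contains(".java") is interpreted as a regular expression in which the dot matches any character; str.endswith is a stricter alternative.
print(log['type'].value_counts())
java_only = log[log.file.str.endswith(".java")]   # stricter than str.contains(".java")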
End of explanation df_jpa = pd.DataFrame([int(np.random.normal(20,50)) for i in range(0,600)], columns=['lines']) df_jpa.loc[0,'lines'] = 150 df_jpa['timestamp'] = pd.DateOffset(years=2) + dates.sample(len(df_jpa), replace=True).sort_values().reset_index(drop=True) df_jpa = df_jpa.sort_index() df_jpa['file'] = log[log['type'] == 'jpa']['file'].sample(len(df_jpa), replace=True).values df_jpa.head() Explanation: Create synthetic dataset 2 End of explanation df_jpa.lines.hist() df_jpa_timed = df_jpa.set_index('timestamp') df_jpa_timed['count'] = df_jpa_timed.lines.cumsum() df_jpa_timed['count'].plot() Explanation: Check dataset End of explanation dates_other = pd.date_range(df_jdbc.timestamp.min(), df_jpa.timestamp.max()) dates_other = pd.to_datetime(dates_other) dates_other = dates_other[~dates_other.dayofweek.isin([5,6])] dates_other = pd.Series(dates_other) dates_other = dates_other.add(times.sample(len(dates_other), replace=True).values) dates_other.head() df_other = pd.DataFrame([int(np.random.normal(5,100)) for i in range(0,40000)], columns=['lines']) df_other['timestamp'] = dates_other.sample(len(df_other), replace=True).sort_values().reset_index(drop=True) df_other = df_other.sort_index() df_other['file'] = log[log['type'] == 'other']['file'].sample(len(df_other), replace=True).values df_other.head() Explanation: Add some noise End of explanation df_other.lines.hist() df_other_timed = df_other.set_index('timestamp') df_other_timed['count'] = df_other_timed.lines.cumsum() df_other_timed['count'].plot() Explanation: Check dataset End of explanation df = pd.concat([df_jpa, df_jdbc, df_other], ignore_index=True).sort_values(by='timestamp') df.loc[df.lines > 0, 'additions'] = df.lines df.loc[df.lines < 0, 'deletions'] = df.lines * -1 df = df.fillna(0).reset_index(drop=True) df = df[['additions', 'deletions', 'file', 'timestamp']] df.loc[(df.deletions > 0) & (df.loc[0].timestamp == df.timestamp),'additions'] = df.deletions df.loc[df.loc[0].timestamp == df.timestamp,'deletions'] = 0 df['additions'] = df.additions.astype(int) df['deletions'] = df.deletions.astype(int) df = df.sort_values(by='timestamp', ascending=False) df.head() Explanation: Concatenate all datasets End of explanation df = df[df.timestamp < pd.Timestamp('2018-01-01')] df.head() Explanation: Truncate data until fixed date End of explanation df.to_csv("datasets/git_log_refactoring.gz", index=None, compression='gzip') Explanation: Export the data End of explanation df_loaded = pd.read_csv("datasets/git_log_refactoring.gz") df_loaded.head() df_loaded.info() Explanation: Check loaded data End of explanation
12,058
Given the following text description, write Python code to implement the functionality described below step by step Description: Análise Exploratória Esse notebook introduz os conceitos de Análise Exploratória Para isso utilizaremos a base de dados de Crimes de São Francisco obtidos do site de competições Kaggle. Esse notebook contém Step1: Durante os exercícios precisaremos pular a linha do cabeçalho de tal forma a trabalhar apenas com a tabela de dados. Uma forma de fazer isso é utilizando o comando filter() para eliminar toda linha igual a variável header. Step2: Agora temos um dataset em que cada linha é uma string contendo todos os valores. Porém, para explorarmos os dados precisamos que cada objeto seja uma lista de valores. Utilize o comando split() para transformar os objetos em listas de strings. Step3: Reparem que o campo Resolution cujo valor no primeiro registro era "ARREST, BOOKED" se tornou dois campos diferentes por causa do split(). Nesses casos em que uma simples separação não funciona, nós podemos utilizar as Expressões Regulares. O Python tem suporte as Regex através da biblioteca re. Vamos utilizar o comando re.split() para cuidar da separação de nossa base em campos. Além disso, vamos aproveitar para converter o primeiro campo, que representa data e hora, para objeto do tipo datetime através do comando datetime.datetime.strptime(). Também vamos agrupar as coordenadas X e Y em uma tupla de floats. Outra ajuda que o Python pode nos dar é a utilização das namedtuple que permite acessar cada campo de cada objeto pelo nome. Ex. Step4: Parte 2 Step5: De forma similar, vamos gerar a contagem para as regiões de São Francisco (PdDistrict). Step6: (2b) Cálculo da Média Nesse exercício vamos calcular a média de crimes em cada região para cada dia da semana. Para isso, primeiro devemos calcular a quantidade de dias de cada dia da semana que existem na base de dados, para isso vamos criar uma RDD de tuplas em que o primeiro campo é a tupla da data no formato 'dia-mes-ano' e do dia da semana e o segundo campo o valor $1$. Em seguida, reduzimos a RDD sem efetuar a soma, mantendo o valor $1$. Essa redução filtra a RDD para que cada data apareça uma única vez. Ao final, podemos efetuar o mapeamento de (DayOfWeek,1) e redução com soma para contabilizar quantas vezes cada dia da semana aparece na base de dados. Nossa próxima RDD terá como chave uma tupla ( (DayOfWeek, PdDistrict), 1) para contabilizar quantos crimes ocorreram em determinada região e naquele dia da semana. Após a redução, devemos mapear esse RDD para (DayOfWeek, (PdDistrict, contagem)). Finalmente, podemos juntar as duas RDDs uma vez que elas possuem a mesma chave (DayOfWeek), dessa forma teremos tuplas no formato ( DayOfWeek, ( (PdDistrict,contagem), contagemDiaDaSemana ) ). Isso deve ser mapeado para Step7: (2c) Média e Desvio-Padrão pelo PySpark Uma alternativa para calcular média, desvio-padrão e outros valores descritivos é utilizando os comandos internos do Spark. Para isso é necessário gerar uma RDD de listas de valores. Gere uma RDD contendo a tupla ( (Dates,DayOfWeek, PdDistrict), contagem), mapeie para ( (DayOfWeek,PdDistrict), Contagem) e agrupe pela chave. Isso irá gerar uma RDD ( (DayOfWeek,PdDistrict), Iterador(contagens) ). Agora crie um dicionário RegionAvgSpark, inicialmente vazio e colete apenas o primeiro elemento da tupla para a variável Keys. 
Itere essa variável realizando os seguintes passos Step8: Parte 3 Step9: Quando temos subcategorias de interesse, podemos plotar através de um gráfico de barras empilhado. Vamos plotar o conteúdo da variável RegionAvg. Primeiro passo é criar um dicionário Y em que a chave é o dia da semana e o valor é uma np.array contendo a média de cada região para aquele dia. Em seguida precisamos criar uma matriz Bottom que determina qual é o início de cada uma das barras. O início da barra do dia i deve ser o final da barra do dia i-1. Com isso calculado podemos gerar um plot por dia com o parâmetro bottom correspondente ao vetor Bottom daquele dia. Step10: (3b) Gráfico de Linha O gráfico de linha é utilizado principalmente para mostrar uma tendência temporal. Nesse exercício vamos primeiro gerar o número médio de crimes em cada hora do dia. Primeiro, novamente, geramos um RDD contendo um único registro de cada hora para cada dia. Em seguida, contabilizamos a soma da quantidade de crime em cada hora. Finalmente, juntamos as duas RDDs e calculamos a média dos valores. Step11: (3c) Gráfico de Dispersão O gráfico de dispersão é utilizado para visualizar correlações entre as variáveis. Com esse gráfico é possível observar se o crescimento da quantidade de uma categoria está relacionada ao crescimento/decrescimento de outra (mas não podemos dizer se uma causa a outra). Na primeira parte do exercício calcularemos a correlação entre os diferentes tipos de crime. Para isso primeiro precisamos construir uma RDD em que cada registro corresponde a uma data o valor contido nele é a quantidade de crimes de cada tipo. Diferente dos exercícios anteriores, devemos manter essa informação como uma lista de valores em que todos os registros sigam a mesma ordem da lista de crimes. O primeiro passo é criar uma RDD com a tupla ( (Mes-Ano, Crime), 1 ) e utilizá-la para gerar a tupla ( (Mes-Ano,Crime) Quantidade ). Mapeamos essa RDD para definir Mes-Ano como chave e agrupamos em torno dessa chave, gerando uma lista de quantidade de crimes em cada data. Aplicamos a função dict() nessa lista para obtermos uma RDD no seguinte formato Step12: O próximo passo consiste em calcular o total de pares Mes-Ano para ser possível o cálculo da média. Finalmente, criamos a RDD fractionCrimesDateRDD em que a chave é Mes-Ano e o valor é uma lista da fração de cada tipo de crime ocorridos naquele mês e ano. Para gerar essa lista vamos utilizar o list comprehension do Python de tal forma a calcular a fração para cada crime na variável crimes. Os dicionários em Python tem um método chamado get() que permite atribuir um valor padrão caso a chave não exista. Ex. Step13: Finalmente, utilizaremos a função Statistics.corr() da biblioteca pyspark.mlllib.stat. Para isso mapeamos nossa RDD para conter apenas a lista de valores da lista de tuplas. Step14: Convertendo a matriz corr para np.array podemos buscar pelo maior valor negativo e positivo diferentes de 1.0. Para isso vamos utilizar as funções min() e argmin(). Step15: Agora que sabemos quais crimes tem maior correlação, vamos plotar um gráfico de dispersão daqueles com maior correlação negativa. Primeiro criamos duas RDDs, var1RDD e var2RDD. Elas são um mapeamento da fractionCrimesDateRDD filtradas para conter apenas o crime contido em Xlabel e Ylabel, respectivamente. Juntamos as duas RDDs em uma única RDD, correlationRDD que mapeará para tuplas de valores, onde os valores são as médias calculadas em fractionCrimesDateRDD. 
Step16: No gráfico abaixo, é possível perceber que quanto mais crimes do tipo NON-CRIMINAL ocorrem em um dia, menos FORGERY/COUNTERFEITING ocorrem. Step17: (3d) Histograma O uso do Histograma é para visualizar a distribuição dos dados. Dois tipos de distribuição que são observadas normalmente é a Gaussiana, em que os valores se concentram em torno de uma média e a Lei de Potência, em que os valores menores são observados com maior frequência. Vamos verificar a distribuição das prisões efetuadas (categoria ARREST em * Resolution*) em cada mês. Com essa distribuição poderemos verificar se o número de prisões é consistente durante os meses do período estudado. Primeiro criaremos uma RDD chamada bookedRDD que contém apenas os registros contendo ARREST no campo Resolution (lembre-se que esse campo é uma lista) e contabilizar a quantidade de registros em cada 'Mes-Ano'. Ao final, vamos mapear para uma RDD contendo apenas os valores contabilizados. Step18: Notem que lemos o histograma da seguinte maneira Step19: No gráfico abaixo, percebemos que existem, em média, muito mais prisões para o tipo ASSAULT do que o tipo ROBBERY, ambos com pequena variação.
Python Code: import os import numpy as np from pyspark import SparkContext sc = SparkContext() filename = os.path.join("Data","Aula03","Crime.csv") CrimeRDD = sc.textFile(filename,8) header = CrimeRDD.take(1)[0] # o cabeçalho é a primeira linha do arquivo print "Campos disponíveis: {}".format(header) Explanation: Análise Exploratória Esse notebook introduz os conceitos de Análise Exploratória Para isso utilizaremos a base de dados de Crimes de São Francisco obtidos do site de competições Kaggle. Esse notebook contém: Parte 1: Parsing da base de dados de Crimes de São Francisco Parte 2: Estatísticas Básicas das Variáveis Parte 3: Plotagem de Gráficos Para os exercícios é aconselhável consultar a documentação da API do PySpark Parte 1: Parsing da Base de Dados Nessa primeira parte do notebook vamos aprender a trabalhar com arquivos CSV. Os arquivos CSV são arquivos textos representando tabelas de dados, numéricas ou categóricas, com formatação apropriada para a leitura estruturada. A primeira linha de um arquivo CSV é o cabeçalho, com o nome de cada coluna da tabela separados por vírgulas. Cada linha subsequente representa um objeto da base de dados com os valores também separados por vírgula. Esses valores podem ser numéricos, categóricos (textuais) e listas. As listas são representadas por listas de valores separadas por vírgulas e entre aspas. Vamos carregar a base de dados histórica de Crimes de São Francisco, um dos temas do projeto final. No primeiro passo vamos armazenar o cabeçalho em uma variável chamada header e imprimi-la para a descrição dos campos de nossa base. End of explanation # EXERCICIO CrimeHeadlessRDD = CrimeRDD.filter(lambda x:x!=header)#<COMPLETAR> firstObject = CrimeHeadlessRDD.take(1)[0] print firstObject assert firstObject==u'2015-05-13 23:53:00,WARRANTS,WARRANT ARREST,Wednesday,NORTHERN,"ARREST, BOOKED",OAK ST / LAGUNA ST,-122.425891675136,37.7745985956747', 'valor incorreto' print "OK" Explanation: Durante os exercícios precisaremos pular a linha do cabeçalho de tal forma a trabalhar apenas com a tabela de dados. Uma forma de fazer isso é utilizando o comando filter() para eliminar toda linha igual a variável header. End of explanation # EXERCICIO CrimeHeadlessRDD = (CrimeRDD.filter(lambda x: x!=header).map(lambda y: y.split(","))) #.map(lambda x: x!= header)#<COMPLETAR> #.map(lambda y: y.split()#<COMPLETAR> #) firstObjectList = CrimeHeadlessRDD.take(1)[0] print firstObjectList assert firstObjectList[0]==u'2015-05-13 23:53:00', 'valores incorretos' print "OK" Explanation: Agora temos um dataset em que cada linha é uma string contendo todos os valores. Porém, para explorarmos os dados precisamos que cada objeto seja uma lista de valores. Utilize o comando split() para transformar os objetos em listas de strings. End of explanation # EXERCICIO import re import datetime from collections import namedtuple headeritems = header.split(',') # transformar o cabeçalho em lista del headeritems[-1] # apagar o último item e... 
headeritems[-1] = 'COORD' # transformar em COORD # Dates,Category,Descript,DayOfWeek,PdDistrict,Resolution,Address,COORD Crime = namedtuple('Crime',headeritems) # gera a namedtuple Crime com os campos de header REGEX = r',(?=(?:[^"]*"[^"]*")*(?![^"]*"))' # buscar por "," tal que após essa vírgula (?=) ou exista um par de "" ou não tenha " sozinha # ?= indica para procurarmos pelo padrão após a vírgula # ?: significa para não interpretar os parênteses como captura de valores # [^"]* 0 ou sequências de caracteres que não sejam aspas # [^"]*"[^"]*" <qualquer caracter exceto aspas> " <qualquer caracter exceto aspas> " # ?! indica para verificar se não existe tal padrão a frente da vírgula def ParseCrime(rec): # utilizando re.split() vamos capturar nossos valores Date, Category, Descript, DayOfWeek, PdDistrict, Resolution, Address, X, Y = re.split(REGEX,rec)#<COMPLETAR> # Converta a data para o formato datetime Date = datetime.datetime.strptime(Date, "%Y-%m-%d %H:%M:%S") # COORD é uma tupla com floats representando X e Y COORD = (X,Y)#<COMPLETAR> # O campos 'Resolution' será uma lista dos valores separados por vírgula, sem as aspas Resolution = Resolution.split(",")#<COMPLETAR> return Crime(Date, Category, Descript, DayOfWeek, PdDistrict, Resolution, Address, COORD) # Aplique a função ParseCrime para cada objeto da base #CrimeHeadlessRDD = (CrimeRDD.map(ParseCrime) # .<COMPLETAR> # .<COMPLETAR> # ) CrimeHeadlessRDD = CrimeRDD.filter(lambda x: x!=header).map(ParseCrime) firstClean = CrimeHeadlessRDD.take(1)[0] totalRecs = CrimeHeadlessRDD.count() print firstClean assert type(firstClean.Dates) is datetime.datetime and type(firstClean.Resolution) is list and type(firstClean.COORD) is tuple,'tipos incorretos' print "OK" assert CrimeHeadlessRDD.filter(lambda x: len(x)!=8).count()==0, 'algo deu errado!' print "OK" assert totalRecs==878049, 'total de registros incorreto' print "OK" Explanation: Reparem que o campo Resolution cujo valor no primeiro registro era "ARREST, BOOKED" se tornou dois campos diferentes por causa do split(). Nesses casos em que uma simples separação não funciona, nós podemos utilizar as Expressões Regulares. O Python tem suporte as Regex através da biblioteca re. Vamos utilizar o comando re.split() para cuidar da separação de nossa base em campos. Além disso, vamos aproveitar para converter o primeiro campo, que representa data e hora, para objeto do tipo datetime através do comando datetime.datetime.strptime(). Também vamos agrupar as coordenadas X e Y em uma tupla de floats. Outra ajuda que o Python pode nos dar é a utilização das namedtuple que permite acessar cada campo de cada objeto pelo nome. Ex.: rec.Dates. End of explanation # EXERCICIO from operator import add CatCountRDD = CrimeHeadlessRDD.map(lambda x:(x.Category,1)).reduceByKey(add) #.<COMPLETAR> #.<COMPLETAR> #) #contagemFinal = (palavrasRDD.map(lambda x:(x,1)).reduceByKey(add) catCount = sorted(CatCountRDD.collect(), key=lambda x: -x[1]) print catCount assert catCount[0][1]==174900, 'valores incorretos' print "OK" Explanation: Parte 2: Estatísticas Básicas das Variáveis Nessa parte do notebook vamos aprender a filtrar a base de dados para calcular estatísticas básicas necessárias para a análise exploratória. (2a) Contagem de frequência A contagem de frequência é realizada de forma similar ao exercício de contagem de palavras. Primeiro mapeamos a variável de interesse. Como exemplo vamos gerar uma lista da quantidade total de cada tipo de crime (Category). 
End of explanation # EXERCICIO RegionCountRDD = CrimeHeadlessRDD.map(lambda x:(x.PdDistrict,1)).reduceByKey(add) # .<COMPLETAR> # .<COMPLETAR> #) regCount = sorted(RegionCountRDD.collect(), key=lambda x: -x[1]) print regCount assert regCount[0][1]==157182, 'valores incorretos' print "OK" Explanation: De forma similar, vamos gerar a contagem para as regiões de São Francisco (PdDistrict). End of explanation # EXERCICIO from operator import add # Dates,Category,Descript,DayOfWeek,PdDistrict,Resolution,Address,COORD # Lambda para converter um datetime em `Dia-Mes-Ano` day2str = lambda x: '{}-{}-{}'.format(x.day,x.month,x.year) totalDatesRDD = CrimeHeadlessRDD.map(lambda x : ((day2str(x.Dates),x.DayOfWeek),1)).reduceByKey(lambda y,z:y).map(lambda a: (a[0][1],1)).reduceByKey(add) #.<COMPLETAR> #.<COMPLETAR> #.<COMPLETAR> #.<COMPLETAR> #) crimesWeekDayRegionRDD = CrimeHeadlessRDD.map(lambda x:((x.DayOfWeek,x.PdDistrict),1)).reduceByKey(add).map(lambda (y,z):(y[0],(y[1],z))) # .<COMPLETAR> # .<COMPLETAR> # .<COMPLETAR> # ) RegionAvgPerDayRDD = (crimesWeekDayRegionRDD .join(totalDatesRDD) .map(lambda ( dow, ( (pd,c), cds ) ):( dow, (pd, c / float(cds))))#<COMPLETAR> .groupByKey()#<COMPLETAR> .map(lambda x:(x[0],dict(x[1])))#<COMPLETAR> ) RegionAvg = RegionAvgPerDayRDD.collectAsMap() print RegionAvg['Sunday'] assert np.round(RegionAvg['Sunday']['BAYVIEW'],2)==37.27, 'valores incorretos {}'.format(np.round(RegionAvg[0][2],2)) print "OK" Explanation: (2b) Cálculo da Média Nesse exercício vamos calcular a média de crimes em cada região para cada dia da semana. Para isso, primeiro devemos calcular a quantidade de dias de cada dia da semana que existem na base de dados, para isso vamos criar uma RDD de tuplas em que o primeiro campo é a tupla da data no formato 'dia-mes-ano' e do dia da semana e o segundo campo o valor $1$. Em seguida, reduzimos a RDD sem efetuar a soma, mantendo o valor $1$. Essa redução filtra a RDD para que cada data apareça uma única vez. Ao final, podemos efetuar o mapeamento de (DayOfWeek,1) e redução com soma para contabilizar quantas vezes cada dia da semana aparece na base de dados. Nossa próxima RDD terá como chave uma tupla ( (DayOfWeek, PdDistrict), 1) para contabilizar quantos crimes ocorreram em determinada região e naquele dia da semana. Após a redução, devemos mapear esse RDD para (DayOfWeek, (PdDistrict, contagem)). Finalmente, podemos juntar as duas RDDs uma vez que elas possuem a mesma chave (DayOfWeek), dessa forma teremos tuplas no formato ( DayOfWeek, ( (PdDistrict,contagem), contagemDiaDaSemana ) ). Isso deve ser mapeado para: ( DayOfWeek, ( PdDistrict, contagem / contagemDiaDaSemana ) ) Lembrando de converter contagemDiaDaSemana para float. Finalmente, o resultado pode ser agrupado pela chave, gerando uma tupla ( DayOfWeek, [ (Pd1, media1), (Pd2, media2), ... ] ). Essa lista pode ser mapeada para um dicionário com o comando dict. No final, transformamos o RDD em um dicionário Python com o comando collectAsMap(). 
End of explanation # EXERCICIO countWeekDayDistRDD = (CrimeHeadlessRDD.map(lambda x:((day2str(x.Dates),x.DayOfWeek,x.PdDistrict),1)) .reduceByKey(add) .map(lambda ((d,dow,pd),c):((dow,pd),c))#<COMPLETAR> .groupByKey()#<COMPLETAR> #.<COMPLETAR> #.<COMPLETAR> ) # Esse procedimento só é viável se existirem poucas chaves RegionAvgSpark = {} Keys = countWeekDayDistRDD.map(lambda rec: rec[0]).collect() for key in Keys: listRDD = (countWeekDayDistRDD .filter(lambda rec: rec[0]==key) .flatMap(lambda rec: rec[1]) ) if key[0] not in RegionAvgSpark: RegionAvgSpark[key[0]] = {} RegionAvgSpark[key[0]][key[1]] = (listRDD.mean(), listRDD.stdev()) print RegionAvgSpark['Sunday'] assert np.round(RegionAvgSpark['Sunday']['BAYVIEW'][0],2)==37.39 and np.round(RegionAvgSpark['Sunday']['BAYVIEW'][1],2)==10.06, 'valores incorretos' print "OK" Explanation: (2c) Média e Desvio-Padrão pelo PySpark Uma alternativa para calcular média, desvio-padrão e outros valores descritivos é utilizando os comandos internos do Spark. Para isso é necessário gerar uma RDD de listas de valores. Gere uma RDD contendo a tupla ( (Dates,DayOfWeek, PdDistrict), contagem), mapeie para ( (DayOfWeek,PdDistrict), Contagem) e agrupe pela chave. Isso irá gerar uma RDD ( (DayOfWeek,PdDistrict), Iterador(contagens) ). Agora crie um dicionário RegionAvgSpark, inicialmente vazio e colete apenas o primeiro elemento da tupla para a variável Keys. Itere essa variável realizando os seguintes passos: Se key[0] não existir no dicionário, crie a entrada key[0] como um dicionário vazio. Mapeie countWeekDayDistRDD filtrando por key e gerando a RDD com os valores da tupla. Note que não queremos uma lista de listas. Insira a tupla (media, desvio-padrão) utilizando os comandos mean() e stdev() do PySpark, armazenando na chave RegionAvgSpark[ key[0] ][ key[1] ]. End of explanation %matplotlib inline import matplotlib.pyplot as plt # Dates,Category,Descript,DayOfWeek,PdDistrict,Resolution,Address,COORD # Lambda para converter um datetime em `Dia-Mes-Ano` day2str = lambda x: '{}-{}-{}'.format(x.day,x.month,x.year) totalDatesRDD = (CrimeHeadlessRDD .map(lambda rec: (day2str(rec.Dates),1)) .reduceByKey(lambda x,y: x) ) totalDays = float(totalDatesRDD.count()) avgCrimesRegionRDD = (RegionCountRDD .map(lambda rec: (rec[0],rec[1]/totalDays)) ) Xticks,Y = zip(*avgCrimesRegionRDD.collectAsMap().items()) indices = np.arange(len(Xticks)) width = 0.35 fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white') plt.bar(indices,Y, width) plt.grid(b=True, which='major', axis='y') plt.xticks(indices+width/2., Xticks, rotation=17 ) plt.ylabel('Number of crimes') plt.xlabel('Region') pass Explanation: Parte 3: Plotagem de Gráficos Nessa parte do notebook vamos aprender a manipular os dados para gerar listas de valores a serem utilizados na plotagem de gráficos. Para a plotagem de gráficos vamos utilizar o matplotlib que já vem por padrão na maioria das distribuições do Python (ex.: Anaconda). Outras bibliotecas alternativas interessantes são: Seaborn e Bokeh. (3a) Gráfico de Barras O gráfico de barras é utilizado quando queremos comparar dados entre categorias diferentes de uma variável categórica. Como exemplo, vamos contabilizar o número médio de crimes diários por região. Vamos primeiro criar a RDD totalDatesRDD que contém a lista de dias únicos, computaremos o total de dias com o comando count() armazenando na variável totalDays. Não se esqueça de converter o valor para float. 
Em seguida, crie o RDD avgCrimesRegionRDD que utiliza a RDD RegionCountRDD para calcular a média de crimes por região. Utilizando o comando zip() do Python é possível descompactar um dicionário em duas variáveis, uma com as chaves e outra com os valores. Utilizaremos essas variáveis para a plotagem do gráfico. End of explanation # Dias da semana como referência Day = ['Sunday','Monday','Tuesday','Wednesday','Thursday','Friday','Saturday'] # Uma cor para cada dia Color = ['r','b','g','y','c','k','purple'] # Dicionário (dia, array de médias) Y = {} for day in Day: Y[day] = np.array([RegionAvg[day][x] for x in Xticks]) # Matriz dias x regiões Bottom = np.zeros( (len(Day),len(Xticks)) ) for i in range(1,len(Day)): Bottom[i,:] = Bottom[i-1,:]+Y[Day[i-1]] indices = np.arange(len(Xticks)) width = 0.35 fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white') # Gera uma lista de plots, um para cada dia plots = [plt.bar(indices,Y[Day[i]], width, color=Color[i], bottom=Bottom[i]) for i in range(len(Day))] plt.legend( [p[0] for p in plots], Day,loc='center left', bbox_to_anchor=(1, 0.5) ) plt.grid(b=True, which='major', axis='y') plt.xticks(indices+width/2., Xticks, rotation=17 ) plt.ylabel('Number of crimes') plt.xlabel('Region') pass Explanation: Quando temos subcategorias de interesse, podemos plotar através de um gráfico de barras empilhado. Vamos plotar o conteúdo da variável RegionAvg. Primeiro passo é criar um dicionário Y em que a chave é o dia da semana e o valor é uma np.array contendo a média de cada região para aquele dia. Em seguida precisamos criar uma matriz Bottom que determina qual é o início de cada uma das barras. O início da barra do dia i deve ser o final da barra do dia i-1. Com isso calculado podemos gerar um plot por dia com o parâmetro bottom correspondente ao vetor Bottom daquele dia. End of explanation # EXERCICIO parseWeekday = lambda x: '{}-{}-{}'.format(x.day, x.month, x.year) hoursRDD = (CrimeHeadlessRDD .map(lambda x: ((x.Dates.hour,parseWeekday (x.Dates)),1))#<COMPLETAR> .reduceByKey(add)#<COMPLETAR> #.<COMPLETAR> #.<COMPLETAR> ) crimePerHourRDD = (CrimeHeadlessRDD # .map(lambda x: (x.Dates.hour)<COMPLETAR> # .<COMPLETAR> # ) #avgCrimeHourRDD = (crimePerHourRDD # .<COMPLETAR> # .<COMPLETAR> # ) #crimePerHour = avgCrimeHourRDD.collect() #print crimePerHour[0:5] assert np.round(crimePerHour[0][1],2)==19.96, 'valores incorretos' print "OK" crimePerHourSort = sorted(crimePerHour,key=lambda x: x[0]) X,Y = zip(*crimePerHourSort) fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white') plt.plot(X,Y) plt.grid(b=True, which='major', axis='y') plt.ylabel('Avg. Number of crimes') plt.xlabel('Hour') pass Explanation: (3b) Gráfico de Linha O gráfico de linha é utilizado principalmente para mostrar uma tendência temporal. Nesse exercício vamos primeiro gerar o número médio de crimes em cada hora do dia. Primeiro, novamente, geramos um RDD contendo um único registro de cada hora para cada dia. Em seguida, contabilizamos a soma da quantidade de crime em cada hora. Finalmente, juntamos as duas RDDs e calculamos a média dos valores. 
End of explanation # EXERCICIO parseMonthYear = lambda x: '{}-{}'.format(x.month, x.year) crimes = map(lambda x: x[0], catCount) datesCrimesRDD = (CrimeHeadlessRDD .map(lambda x: ((parseMonthYear(x.Dates),x.Category),1))#<COMPLETAR> .reduceByKey(add)#<COMPLETAR> .map(lambda ((ma,crime),contagem):(ma,(crime,contagem)))#<COMPLETAR> .groupByKey()#<COMPLETAR> .map(lambda y:(y[0],dict(y[1])))#<COMPLETAR> ) #print datesCrimesRDD.collect() print datesCrimesRDD.take(1) assert datesCrimesRDD.take(1)[0][1][u'KIDNAPPING']==12,'valores incorretos' print 'ok' Explanation: (3c) Gráfico de Dispersão O gráfico de dispersão é utilizado para visualizar correlações entre as variáveis. Com esse gráfico é possível observar se o crescimento da quantidade de uma categoria está relacionada ao crescimento/decrescimento de outra (mas não podemos dizer se uma causa a outra). Na primeira parte do exercício calcularemos a correlação entre os diferentes tipos de crime. Para isso primeiro precisamos construir uma RDD em que cada registro corresponde a uma data o valor contido nele é a quantidade de crimes de cada tipo. Diferente dos exercícios anteriores, devemos manter essa informação como uma lista de valores em que todos os registros sigam a mesma ordem da lista de crimes. O primeiro passo é criar uma RDD com a tupla ( (Mes-Ano, Crime), 1 ) e utilizá-la para gerar a tupla ( (Mes-Ano,Crime) Quantidade ). Mapeamos essa RDD para definir Mes-Ano como chave e agrupamos em torno dessa chave, gerando uma lista de quantidade de crimes em cada data. Aplicamos a função dict() nessa lista para obtermos uma RDD no seguinte formato: (Mes-Ano, {CRIME: quantidade}). Além disso, vamos criar a variável crimes contendo a lista de crimes contidas na lista de pares catCount computada anteriormente. End of explanation # EXERCICIO totalPerDateRDD = (CrimeHeadlessRDD .map(lambda x:(parseMonthYear(x.Dates),1))#<COMPLETAR> .reduceByKey(add)#<COMPLETAR> ) fractionCrimesDateRDD = (datesCrimesRDD .map(lambda x:(x[0],[i for i in crimes if i == x[1]]))#<COMPLETAR> .reduceByKey(add)#<COMPLETAR> .cache() ) print fractionCrimesDateRDD.take(1) assert np.abs(fractionCrimesDateRDD.take(1)[0][1][0][1]-0.163950)<1e-6,'valores incorretos' print 'ok' Explanation: O próximo passo consiste em calcular o total de pares Mes-Ano para ser possível o cálculo da média. Finalmente, criamos a RDD fractionCrimesDateRDD em que a chave é Mes-Ano e o valor é uma lista da fração de cada tipo de crime ocorridos naquele mês e ano. Para gerar essa lista vamos utilizar o list comprehension do Python de tal forma a calcular a fração para cada crime na variável crimes. Os dicionários em Python tem um método chamado get() que permite atribuir um valor padrão caso a chave não exista. Ex.: dicionario.get( chave, 0.0) retornará 0.0 caso a chave não exista. End of explanation from pyspark.mllib.stat import Statistics corr = Statistics.corr(fractionCrimesDateRDD.map(lambda rec: map(lambda x: x[1],rec[1]))) print corr Explanation: Finalmente, utilizaremos a função Statistics.corr() da biblioteca pyspark.mlllib.stat. Para isso mapeamos nossa RDD para conter apenas a lista de valores da lista de tuplas. End of explanation npCorr = np.array(corr) rowMin = npCorr.min(axis=1).argmin() colMin = npCorr[rowMin,:].argmin() print crimes[rowMin], crimes[colMin], npCorr[rowMin,colMin] npCorr[npCorr==1.] = 0. 
rowMax = npCorr.max(axis=1).argmax() colMax = npCorr[rowMax,:].argmax() print crimes[rowMax], crimes[colMax], npCorr[rowMax,colMax] Explanation: Convertendo a matriz corr para np.array podemos buscar pelo maior valor negativo e positivo diferentes de 1.0. Para isso vamos utilizar as funções min() e argmin(). End of explanation # EXERCICIO Xlabel = 'FORGERY/COUNTERFEITING'#'DRIVING UNDER THE INFLUENCE' Ylabel = 'NON-CRIMINAL'#'LIQUOR LAWS' var1RDD = (fractionCrimesDateRDD .map(lambda rec: (rec[0], filter(lambda x: x[0]==Xlabel,rec[1])[0][1])) ) var2RDD = (fractionCrimesDateRDD .map(lambda rec: (rec[0], filter(lambda x: x[0]==Ylabel,rec[1])[0][1])) ) correlationRDD = (var1RDD .<COMPLETAR> .<COMPLETAR> ) Data = correlationRDD.collect() print Data[0] assert np.abs(Data[0][0]-0.015904)<1e-6, 'valores incorretos' print 'ok' Explanation: Agora que sabemos quais crimes tem maior correlação, vamos plotar um gráfico de dispersão daqueles com maior correlação negativa. Primeiro criamos duas RDDs, var1RDD e var2RDD. Elas são um mapeamento da fractionCrimesDateRDD filtradas para conter apenas o crime contido em Xlabel e Ylabel, respectivamente. Juntamos as duas RDDs em uma única RDD, correlationRDD que mapeará para tuplas de valores, onde os valores são as médias calculadas em fractionCrimesDateRDD. End of explanation X,Y = zip(*Data) fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white') plt.scatter(X,Y) plt.grid(b=True, which='major', axis='y') plt.xlabel(Xlabel) plt.ylabel(Ylabel) pass Explanation: No gráfico abaixo, é possível perceber que quanto mais crimes do tipo NON-CRIMINAL ocorrem em um dia, menos FORGERY/COUNTERFEITING ocorrem. End of explanation # EXERCICIO bookedRDD = (CrimeHeadlessRDD .filter(lambda x: u'"ARREST' in x.Resolution)#<COMPLETAR> .map(lambda y:(parseMonthYear(y.Dates),1))#<COMPLETAR> .reduceByKey(add)#<COMPLETAR> .map(lambda z:z[1])#.<COMPLETAR> ) Data = bookedRDD.collect() #print Data print Data[:5] assert Data[0]==1914,'valores incorretos' print 'ok' fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white') plt.hist(Data) plt.grid(b=True, which='major', axis='y') plt.xlabel('ARRESTED') pass Explanation: (3d) Histograma O uso do Histograma é para visualizar a distribuição dos dados. Dois tipos de distribuição que são observadas normalmente é a Gaussiana, em que os valores se concentram em torno de uma média e a Lei de Potência, em que os valores menores são observados com maior frequência. Vamos verificar a distribuição das prisões efetuadas (categoria ARREST em * Resolution*) em cada mês. Com essa distribuição poderemos verificar se o número de prisões é consistente durante os meses do período estudado. Primeiro criaremos uma RDD chamada bookedRDD que contém apenas os registros contendo ARREST no campo Resolution (lembre-se que esse campo é uma lista) e contabilizar a quantidade de registros em cada 'Mes-Ano'. Ao final, vamos mapear para uma RDD contendo apenas os valores contabilizados. 
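Vale notar, como esboço adicional (assumindo bookedRDD já calculado como acima), que o próprio Spark consegue montar o histograma de forma distribuída, sem coletar todos os valores para o driver:
# 10 faixas calculadas pelo Spark; retorna (limites das faixas, contagens)
faixas, contagens = bookedRDD.histogram(10)
print faixas
print contagens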
End of explanation # EXERCICIO parseDayMonth = lambda x: '{}-{}'.format(x.month,x.year) bookedRDD = (CrimeHeadlessRDD .filter(lambda x: u'"ARREST' in x.Resolution)#<COMPLETAR> .map(lambda y:(parseMonthYear(y.Dates),1))#<COMPLETAR> .reduceByKey(add)#<COMPLETAR> .map(lambda z:z[1])#.<COMPLETAR> ) robberyBookedRDD = (CrimeHeadlessRDD .filter(lambda x: u'"ARREST' in x.Resolution and x.Category == "ROBBERY")#<COMPLETAR> .map(lambda y:(parseMonthYear(y.Dates),1))#<COMPLETAR> .reduceByKey(add)#<COMPLETAR> .map(lambda z:z[1])#.<COMPLETAR> ) assaultBookedRDD = (CrimeHeadlessRDD .filter(lambda x: u'"ARREST' in x.Resolution and x.Category == "ASSAULT")#<COMPLETAR> .map(lambda y:(parseMonthYear(y.Dates),1))#<COMPLETAR> .reduceByKey(add)#<COMPLETAR> .map(lambda z:z[1])#.<COMPLETAR> ) robData = robberyBookedRDD.collect() assData = assaultBookedRDD.collect() assert robData[0]==27,'valores incorretos' print 'ok' assert assData[0]==152,'valores incorretos' print 'ok' Explanation: Notem que lemos o histograma da seguinte maneira: em cerca de 50 meses foram observadas entre 1750 e 2000 prisões. Porém, não sabemos precisar em quais meses houve um aumento ou redução das prisões. Isso deve ser observado através de um gráfico de linha. (3e) Box-plot O Box-plot é um gráfico muito utilizado em estatística para visualizar o resumo estatístico de uma variável. Para esse exercício vamos plotar duas box-plot sobre a média do número de prisões durante os meses analisados para os crimes do tipo ROBBERY e ASSAULT. O mapeamento é exatamente o mesmo do exercício anterior, porém filtrando para o tipo de roubo analisado. End of explanation fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white') plt.boxplot([robData,assData]) plt.grid(b=True, which='major', axis='y') plt.ylabel('ARRESTED') plt.xticks([1,2], ['ROBBERY','ASSAULT']) pass Explanation: No gráfico abaixo, percebemos que existem, em média, muito mais prisões para o tipo ASSAULT do que o tipo ROBBERY, ambos com pequena variação. End of explanation
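Para encerrar esta parte, um esboço opcional (assumindo robberyBookedRDD e assaultBookedRDD definidos acima): as estatísticas resumidas que alimentam o box-plot também podem ser obtidas diretamente pelo Spark, sem coletar os dados, com o método stats():
print robberyBookedRDD.stats()   # contagem, média, desvio-padrão, máximo e mínimo
print assaultBookedRDD.stats()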
12,059
Given the following text description, write Python code to implement the functionality described below step by step Description: Phenotype Phase Plane Phenotype phase planes will show distinct phases of optimal growth with different use of two different substrates. For more information, see Edwards et al. Cobrapy supports calculating and plotting (using matplotlib) these phenotype phase planes. Here, we will make one for the "textbook" E. coli core model. Step1: We want to make a phenotype phase plane to evaluate uptakes of Glucose and Oxygen. Step2: If brewer2mpl is installed, other color schemes can be used as well Step3: The number of points which are plotted in each dimension can also be changed Step4: The code can also use multiple processes to speed up calculations
Python Code: %matplotlib inline from time import time import cobra.test from cobra.flux_analysis import calculate_phenotype_phase_plane model = cobra.test.create_test_model("textbook") Explanation: Phenotype Phase Plane Phenotype phase planes will show distinct phases of optimal growth with different use of two different substrates. For more information, see Edwards et al. Cobrapy supports calculating and plotting (using matplotlib) these phenotype phase planes. Here, we will make one for the "textbook" E. coli core model. End of explanation data = calculate_phenotype_phase_plane(model, "EX_glc__D_e", "EX_o2_e") data.plot_matplotlib(); Explanation: We want to make a phenotype phase plane to evaluate uptakes of Glucose and Oxygen. End of explanation data.plot_matplotlib("Pastel1") data.plot_matplotlib("Dark2"); Explanation: If brewer2mpl is installed, other color schemes can be used as well End of explanation calculate_phenotype_phase_plane(model, "EX_glc__D_e", "EX_o2_e", reaction1_npoints=20, reaction2_npoints=20).plot_matplotlib(); Explanation: The number of points which are plotted in each dimension can also be changed End of explanation start_time = time() calculate_phenotype_phase_plane(model, "EX_glc__D_e", "EX_o2_e", n_processes=1, reaction1_npoints=100, reaction2_npoints=100) print("took %.2f seconds with 1 process" % (time() - start_time)) start_time = time() calculate_phenotype_phase_plane(model, "EX_glc__D_e", "EX_o2_e", n_processes=4, reaction1_npoints=100, reaction2_npoints=100) print("took %.2f seconds with 4 process" % (time() - start_time)) Explanation: The code can also use multiple processes to speed up calculations End of explanation
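As a small, hedged convenience (not from the original notebook): the number of worker processes can be matched to the machine automatically with the standard library; n_processes is the same keyword already used above.
import multiprocessing
n_workers = multiprocessing.cpu_count()
start_time = time()
calculate_phenotype_phase_plane(model, "EX_glc__D_e", "EX_o2_e",
                                n_processes=n_workers,
                                reaction1_npoints=100, reaction2_npoints=100)
print("took %.2f seconds with %d processes" % (time() - start_time, n_workers))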
12,060
Given the following text description, write Python code to implement the functionality described below step by step Description: Content and Objectives Show validity of theorems for generating arbitrary distributions out of uniform distribution Import Step1: Exponential out of Uniform Step2: Gaussian out of Uniform Step3: Generating Uniform Distribution by Congruence Step4: Central $\chi^2$ and Rayleigh Distribution Step5: Non-central $\chi^2$ and Rice Distribution
Python Code: # importing import numpy as np from scipy import stats, special import matplotlib.pyplot as plt import matplotlib # showing figures inline %matplotlib inline # plotting options font = {'size' : 20} plt.rc('font', **font) plt.rc('text', usetex=True) matplotlib.rc('figure', figsize=(18, 6) ) Explanation: Content and Objectives Show validity of theorems for generating arbitrary distributions out of uniform distribution Import End of explanation # theoretical values lamb = 1 delta_t = .01 t = np.arange( 0, 12, delta_t ) f_exp_theo = lamb * np.exp( - lamb * t ) # simulation N_trials = int( 1e4 ) # get uniformly distributed values and map according to lecture slides Z = np.random.rand( N_trials ) Y = -1/lamb * np.log( 1 - Z ) # plotting plt.plot( t, f_exp_theo, linewidth=2.0, label='Theo.') plt.hist( Y, 50, density=1, label='Sim.', alpha=0.75) plt.xlabel('$x, n$') plt.ylabel('$f(x), H_{{{}}}(n)$'.format(N_trials)) plt.grid( True ) plt.legend( loc = 'upper right' ) Explanation: Exponential out of Uniform End of explanation # theoretical values delta_t = .01 t = np.arange( -5, 5, delta_t ) f_exp_theo = 1/np.sqrt(2*np.pi) * np.exp( - t**2 /2 ) # simulation N_trials = int( 1e4 ) # get uniformly distributed values and map according to theorem # NOTE: ppf realizing inverse of cdf of gaussian by determining quantiles Z = np.random.rand( N_trials ) Y = stats.norm.ppf( Z ) # plotting plt.plot( t, f_exp_theo, linewidth=2.0, label='Theo.') plt.hist( Y, 50, density=1, label='Sim.', alpha=0.75) plt.xlabel('$x, n$') plt.ylabel('$f(x), H_{{{}}}(n)$'.format(N_trials)) plt.grid( True ) plt.legend( loc = 'upper right' ) Explanation: Gaussian out of Uniform End of explanation # theoretical values delta_t = .01 t = np.arange( 0, 1, delta_t ) f_uniform_theo = np.ones_like( t ) # parameters out of the book chapter cited in the lecture a = 7**5 c = 0 m = 2**31 - 1 # number of points to be sampled N_points = int( 1e7 ) N_points_small = int( 1e5 ) # initialize array for pseudo-random numbers points = np.zeros( N_points ) points[ 0 ] = np.random.randint( m ) # loop for generating numbers by congruence for n in np.arange( 1, N_points ): new_value = ( points[ n - 1 ] * a + c ) % m points[ n ] = new_value # normalize to [0,1] points /= m # plotting plt.subplot(121) plt.plot( t, f_uniform_theo, linewidth=2.0, label='Theo.') plt.hist( points[ : N_points_small ], bins=50, density=1, label='sim.' ) plt.grid( True ) plt.xlabel('$n$') plt.ylabel('$H_{{{}}}(n)$'.format( N_points_small ) ) plt.subplot(122) plt.plot( t, f_uniform_theo, linewidth=2.0, label='Theo.') plt.hist( points, bins=50, density=1, label='sim.' 
) plt.grid( True ) plt.xlabel('$n$') plt.ylabel('$H_{{{}}}(n)$'.format( N_points ) ) Explanation: Generating Uniform Distribution by Congruence End of explanation # parameters of distribution sigma2 = 1 # continuous world and theoretical pdf delta_x = .001 x = np.arange( 0, 10 * np.sqrt(sigma2) + delta_x, delta_x) f_theo_chi2 = 1 / sigma2 / 2 * np.exp( - x / 2 / sigma2 ) f_theo_Ray = x / sigma2 * np.exp( - x**2 / 2 / sigma2 ) # sample gaussian N_samples = int( 1e4 ) X = np.sqrt( sigma2 ) * np.random.randn( 2, N_samples ) X2 = np.sum( X**2, axis = 0 ) X_R = np.sqrt( X2 ) # plotting plt.subplot(121) plt.plot( x, f_theo_chi2, linewidth=2.0, label='Theo.') plt.hist( X2, 50, density=1, label='Sim.', alpha=0.75) plt.xlabel('$x, n$') plt.ylabel('$f(x), H_{{{}}}(n)$'.format( N_samples ) ) plt.grid( True ) plt.legend( loc = 'upper right' ) plt.title('Central $\chi^2$ Distribution') plt.subplot(122) plt.plot( x, f_theo_Ray, linewidth=2.0, label='Theo.') plt.hist( X_R, 50, density=1, label='Sim.', alpha=0.75) plt.xlabel('$x, n$') plt.ylabel('$f(x), H_{{{}}}(n)$'.format( N_samples ) ) plt.grid( True ) plt.legend( loc = 'upper right' ) plt.title('Rayleigh Distribution' ) Explanation: Central $\chi^2$ and Rayleigh Distribution End of explanation # parameters of distribution sigma2 = 1 mu = 2 s2 = 2 * mu**2 # continuous world and theory delta_x = .001 x = np.arange( 0, mu**2 * 2 + 30 * np.sqrt(sigma2) + delta_x, delta_x) f_theo_chi2 = 1 / sigma2 / 2 * np.exp( - ( s2 + x ) / 2 / sigma2 ) * special.iv( 0, np.sqrt( x * s2 / sigma2 ) ) f_theo_Rice = x / sigma2 * np.exp( - ( x**2 + s2 )/ 2 / sigma2 ) * special.iv( 0, x * np.sqrt( s2 ) / sigma2 ) # sample gaussian N_samples = int( 1e4 ) X = mu + np.sqrt( sigma2 ) * np.random.randn( 2, N_samples ) X2 = np.sum( X**2, axis = 0 ) X_R = np.sqrt( X2 ) # plotting plt.subplot(121) plt.plot( x, f_theo_chi2, linewidth=2.0, label='Theo.') plt.hist( X2, 50, density=1, label='Sim.', alpha=0.75) plt.xlabel('$x, n$') plt.ylabel('$f(x), H_{{{}}}(n)$'.format( N_samples ) ) plt.grid( True ) plt.legend( loc = 'upper right' ) plt.title('Non-central $\chi^2$ Distribution') plt.subplot(122) plt.plot( x, f_theo_Rice, linewidth=2.0, label='Theo.') plt.hist( X_R, 50, density=1, label='Sim.', alpha=0.75) plt.xlabel('$x, n$') plt.ylabel('$f(x), H_{{{}}}(n)$'.format( N_samples ) ) plt.grid( True ) plt.legend( loc = 'upper right' ) plt.title('Rice Distribution' ) Explanation: Non-central $\chi^2$ and Rice Distribution End of explanation
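As an optional cross-check (a sketch, not part of the original notebook): the simulated samples of the last cell can be compared against scipy's built-in Rice distribution with a Kolmogorov-Smirnov test. X_R, s2 and sigma2 are the variables defined above, and scipy parameterizes rice with shape b = sqrt(s2)/sigma and scale sigma.
b = np.sqrt(s2) / np.sqrt(sigma2)
ks_stat, p_value = stats.kstest(X_R, 'rice', args=(b, 0, np.sqrt(sigma2)))
print('KS statistic = {:.4f}, p-value = {:.4f}'.format(ks_stat, p_value))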
12,061
Given the following text description, write Python code to implement the functionality described below step by step Description: Run Generic Automated EAS tests This is a starting-point notebook for running tests from the generic EAS suite in tests/eas/generic.py. The test classes that are imported here provide helper methods to aid analysis of the cause of failure. You can use Python's help built-in to find those methods (or you can just read the docstrings in the code). These tests make estimation of the energy efficiency of task placements, without directly examining the behaviour of cpufreq or cpuidle. Several test classes are provided, the only difference between them being the workload that is used. Setup Step1: Run test workload If you simply want to run all the tests and get pass/fail results, use this command in the LISA shell Step2: By default we'll run the EnergyModelWakeMigration test, which runs a workload alternating between high and low-intensity. All the other test classes shown above have the same interface, but run different workloads. To run the tests on different workloads, change this line below Step3: Examine trace get_power_df and get_expected_power_df look at the ftrace results from the workload estimation and judge the energy efficiency of the system, considering only task placement (assuming perfect load-tracking/prediction, cpuidle, and cpufreq systems). The energy estimation doesn't take every single wakeup and idle period into account, but simply estimates an average power usage over the time that each task spent attached to each CPU during each phase of the rt-app workload. These return DataFrames estimating the energy usage of the system under each task placement. estimated_power will represent this estimation for the scheduling pattern that we actually observed, while expected_power will represent our estimation of how much power an optimal scheduling pattern would use. Check the docstrings for these functions (and other functions in the test class) for more detail. Step4: Plot Schedule Step5: Plot estimated ideal and estimated power usage This plot shows how the power estimation for the observed scheduling pattern varies from the estimated power for an ideal schedule. Where the plotted value for the observed power is higher than the plotted ideal power, the system was wasting power (e.g. a low-intensity task was unnecessarily placed on a high-power CPU). Where the observed value is lower than the ideal value, this means the system was too efficient (e.g. a high-intensity task was placed on a low-power CPU that could not accomadate its compute requirements). Step6: Plot CPU frequency Step7: Assertions These are the assertions used to generate pass/fail results s. They aren't very useful in this interactive context - it's much more interesting to examine plots like the one above and see whether the behaviour was desirable or not. These are intended for automated regression testing. Nonetheless, let's see what the results would be for this run. test_slack checks the "slack" reported by the rt-app workload. If this slack was negative, this means the workload didn't receive enough CPU capacity. In a real system this would represent lacking interactive performance. Step8: test_task_placement checks that the task placement was energy efficient, taking advantage of lower-power CPUs whenever possible.
Python Code: %load_ext autoreload %autoreload 2 %matplotlib inline import logging from conf import LisaLogging LisaLogging.setup()#level=logging.WARNING) import pandas as pd from perf_analysis import PerfAnalysis import trappy from trappy import ILinePlot from trappy.stats.grammar import Parser Explanation: Run Generic Automated EAS tests This is a starting-point notebook for running tests from the generic EAS suite in tests/eas/generic.py. The test classes that are imported here provide helper methods to aid analysis of the cause of failure. You can use Python's help built-in to find those methods (or you can just read the docstrings in the code). These tests make estimation of the energy efficiency of task placements, without directly examining the behaviour of cpufreq or cpuidle. Several test classes are provided, the only difference between them being the workload that is used. Setup End of explanation from tests.eas.generic import TwoBigTasks, TwoBigThreeSmall, RampUp, RampDown, EnergyModelWakeMigration, OneSmallTask Explanation: Run test workload If you simply want to run all the tests and get pass/fail results, use this command in the LISA shell: lisa-test tests/eas/generic.py. This notebook is intended as a starting point for analysing what scheduler behaviour was judged to be faulty. Target configuration is taken from $LISA_HOME/target.config - you'll need to edit that file to provide connection details for the target you want to test. End of explanation t = EnergyModelWakeMigration(methodName="test_task_placement") print t.__doc__ t.setUpClass() experiment = t.executor.experiments[0] Explanation: By default we'll run the EnergyModelWakeMigration test, which runs a workload alternating between high and low-intensity. All the other test classes shown above have the same interface, but run different workloads. To run the tests on different workloads, change this line below: End of explanation # print t.get_power_df.__doc__ estimated_power = t.get_power_df(experiment) # print t.get_expected_power_df.__doc__ expected_power = t.get_expected_power_df(experiment) Explanation: Examine trace get_power_df and get_expected_power_df look at the ftrace results from the workload estimation and judge the energy efficiency of the system, considering only task placement (assuming perfect load-tracking/prediction, cpuidle, and cpufreq systems). The energy estimation doesn't take every single wakeup and idle period into account, but simply estimates an average power usage over the time that each task spent attached to each CPU during each phase of the rt-app workload. These return DataFrames estimating the energy usage of the system under each task placement. estimated_power will represent this estimation for the scheduling pattern that we actually observed, while expected_power will represent our estimation of how much power an optimal scheduling pattern would use. Check the docstrings for these functions (and other functions in the test class) for more detail. End of explanation trace = t.get_trace(experiment) trappy.plotter.plot_trace(trace.ftrace) Explanation: Plot Schedule End of explanation df = pd.concat([ expected_power.sum(axis=1), estimated_power.sum(axis=1)], axis=1, keys=['ideal_power', 'observed_power']).fillna(method='ffill') ILinePlot(df, column=df.columns.tolist(), drawstyle='steps-post').view() Explanation: Plot estimated ideal and estimated power usage This plot shows how the power estimation for the observed scheduling pattern varies from the estimated power for an ideal schedule. 
Where the plotted value for the observed power is higher than the plotted ideal power, the system was wasting power (e.g. a low-intensity task was unnecessarily placed on a high-power CPU). Where the observed value is lower than the ideal value, this means the system was too efficient (e.g. a high-intensity task was placed on a low-power CPU that could not accommodate its compute requirements).
End of explanation
trace.analysis.frequency.plotClusterFrequencies()
Explanation: Plot CPU frequency
End of explanation
try:
    t.test_slack()
except AssertionError as e:
    print "test_slack failed:"
    print e
else:
    print "test_slack passed"
Explanation: Assertions These are the assertions used to generate pass/fail results. They aren't very useful in this interactive context - it's much more interesting to examine plots like the one above and see whether the behaviour was desirable or not. These are intended for automated regression testing. Nonetheless, let's see what the results would be for this run.
test_slack checks the "slack" reported by the rt-app workload. If this slack was negative, this means the workload didn't receive enough CPU capacity. In a real system this would represent lacking interactive performance.
End of explanation
try:
    t.test_task_placement()
except AssertionError as e:
    print "test_task_placement failed:"
    print e
else:
    print "test_task_placement passed"
Explanation: test_task_placement checks that the task placement was energy efficient, taking advantage of lower-power CPUs whenever possible.
End of explanation
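A small convenience sketch (not part of the original test suite): both checks can be run in one loop so a single pass/fail summary is printed. Only the two test methods already exercised above are assumed to exist.
for check in (t.test_slack, t.test_task_placement):
    try:
        check()
    except AssertionError as e:
        print "%s failed: %s" % (check.__name__, e)
    else:
        print "%s passed" % check.__name__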
12,062
Given the following text description, write Python code to implement the functionality described below step by step Description: Defining decaying sin wave Function accepts dictionary of parameters and array of x-points, returns array of y-points. Represents fit model. Step1: Plotting function for default parameters Define 100 points on x-axis. Constructing dictionary with initial parameter values for decaying_sin. Step2: Defining objective function Objective function requires array of errors eps. Will return array of residuals using data array defined earlier.
Python Code: def decaying_sin(params, x): amp = params['amp'] phaseshift = params['phase'] freq = params['frequency'] decay = params['decay'] return amp * np.sin(x*freq + phaseshift) * np.exp(-x*x*decay) Explanation: Defining decaying sin wave Function accepts dictionary of parameters and array of x-points, returns array of y-points. Represents fit model. End of explanation x = np.linspace(0.0, 10.0, 100) default_params = {"amp" : 10.0, "decay" : 0.05, "phase" : 1.0, "frequency" : 4.0} data = decaying_sin(default_params, x) eps = np.linspace(0.0, 10.0, 100) eps.fill(0.01) a = plt.plot(x, data) Explanation: Plotting function for default parameters Define 100 points on x-axis. Constructing dictionary with initial parameter values for decaying_sin. End of explanation def objective_function(params): model = decaying_sin(params, x) return (data - model) / eps params = lmfit.Parameters() params.add('amp', value=1) params.add('decay', value=0.1) params.add('phase', value=0.1) params.add('frequency', value=1.0) fig, ax = plt.subplots() a = ax.plot(x, data) b = ax.plot(x, decaying_sin(params, x)) fig, ax2 = plt.subplots() def plotter(params, a, b): current_data = decaying_sin(params, x) ax2.plot(x, data) ax2.plot(x, current_data) axes = plt.gca() axes.set_ylim(-10.0, 10.0) out = lmfit.minimize(objective_function, params, iter_cb=plotter) out.params.pretty_print() Explanation: Defining objective function Objective function requires array of errors eps. Will return array of residuals using data array defined earlier. End of explanation
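One optional follow-up (a sketch; out is the MinimizerResult produced by lmfit.minimize above and default_params holds the true values used to generate the data): lmfit can print a full fit report, which makes it easy to compare the recovered parameters with the generating ones.
print(lmfit.fit_report(out))
for name in default_params:
    print('{}: true={}, fitted={:.3f}'.format(name, default_params[name], out.params[name].value))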
12,063
Given the following text description, write Python code to implement the functionality described below step by step Description: Creating and Using Panel Data Using ArcGIS Defined Location Cubes Step1: Example Step2: Open Panel Cube From NetCDF File for Analysis Step3: Number of Locations and Time Periods Step4: List Variables Step5: View Mann-Kendall Trend Results in PANDAS Data Frame Step6: Get 3D Analysis Variable Step7: Use PySAL to Analyze LISA Markov Transitions Step8: View Transistion Probabilities
Python Code: import os as OS import arcpy as ARCPY import SSDataObject as SSDO import SSPanelObject as SSPO import SSPanel as PANEL ARCPY.overwriteOutput = True Explanation: Creating and Using Panel Data Using ArcGIS Defined Location Cubes End of explanation inputFC = r'../data/CA_Counties_Panel.shp' outputCube = r'../data/CA_Panel.nc' fullFC = OS.path.abspath(inputFC) outputCube = OS.path.abspath(outputCube) fullPath, fcName = OS.path.split(fullFC) ssdo = SSDO.SSDataObject(inputFC) uniqueIDField = "MYID" timeField = "YEAR" analysisField = "PCR" panelObj = SSPO.SSPanelObject(inputFC) requireGeometry = panelObj.ssdo.shapeType.upper() == "POLYGON" panelObj.obtainData(uniqueIDField, "YEAR", "1 Years", fields = [analysisField], requireGeometry = requireGeometry) panelCube = PANEL.SSPanel(outputCube, panelObj = panelObj) varName = panelObj.fieldNames[0] panelCube.mannKendall(varName) panelCube.close() Explanation: Example: Per Capita Incomes Relative to National Average in California (1969 - 2010) Create Defined Locations Cube from Repeating Shapes Feature Class Run Mann-Kendall Trend Statistic Save to NetCDF File End of explanation panel = PANEL.SSPanel(outputCube) Explanation: Open Panel Cube From NetCDF File for Analysis End of explanation print("# locations = {0}, # time periods = {1}".format(panel.numLocations, panel.numTime)) Explanation: Number of Locations and Time Periods End of explanation print(panel.obtainVariableList()) Explanation: List Variables End of explanation import pandas as PANDAS locations = panel.locationLabel[0] z = panel.obtainValues('PCR_TREND_ZSCORE') pv = panel.obtainValues('PCR_TREND_PVALUE') d = {'PCR_TREND_ZSCORE':z, 'PCR_TREND_PVALUE':pv} df = PANDAS.DataFrame(d, index = locations) print(df.head()) Explanation: View Mann-Kendall Trend Results in PANDAS Data Frame End of explanation data = panel.obtainValues(analysisField) print(data.shape) Explanation: Get 3D Analysis Variable End of explanation import pysal as PYSAL w = PYSAL.open(r"../data/queen.gal").read() lm = PYSAL.LISA_Markov(data.T, w) print(lm.classes) Explanation: Use PySAL to Analyze LISA Markov Transitions End of explanation print(lm.p) panel.close() Explanation: View Transistion Probabilities End of explanation
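As an optional extra (a sketch, not part of the original workflow; it only assumes that lm.p, printed above, is the square LISA transition probability matrix): the long-run behaviour implied by the transition matrix can be summarised by its stationary distribution, computed here with plain numpy from the left eigenvector associated with eigenvalue 1.
import numpy as np
P = np.asarray(lm.p)
vals, vecs = np.linalg.eig(P.T)
stationary = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
stationary = stationary / stationary.sum()
print(stationary)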
12,064
Given the following text description, write Python code to implement the functionality described below step by step Description: Distance entre deux mots de même longueur et tests unitaires Calculer une distance entre deux mots n'est pas le plus intuitif des problèmes. Dans ce notebook, on se permet de tâtonner pour faire évoluer quelques idées autour du sujet. C'est l'occasion aussi de montrer à quoi servent les tests unitaires et pourquoi ils sont utiles lorsqu'on tâtonne. Step1: Distance naïve Naïf... mais beaucoup d'idées naïves finissent par aboutir à des pyramides complexes. Distance très naïve On se restraint au cas où les deux mots à comparer ont la même longueur. Et dans ce cas, le plus simple est de compter le nombre de caractères différents à chaque position. Step2: Distance entre deux mots de longueur différente mais pas si différente On considère le cas où les deux mots ont des longueurs égales ou différentes de un caractères. Dans le premier cas, on utilise la distance précédente, dans le second cas, on ajoute un espace au mot le plus court et on appelle la distance précédente. Mais où insérer cet espace ? A toutes les positions bien sûr, la distance sera le minimum de toutes les distances calculées. Pour simplifier, on commence par décider que le premier mot doit être le plus court des deux. Si ce n'est pas le cas, on les permute. Step3: Parfois on aime bien comprendre un peu plus en détail. On ajoute alors un paramètre verbose qui affiche des informations sans pour autant affecter le résultat. Step4: Le paramètre verbose est une sorte de règle communément partagée, une convention... C'est ce que qu'en disent les pirates. Distance entre deux mots de longueur différente On suit la même idée et on insère des espaces dans le mot le plus petit de façon récursive jusqu'à pouvoir utiliser la distance précédente. Le code ressemble beaucoup à la fonction précédente. Step5: Test unitaires Quand on développe un algorithme, on l'applique sur quelques exemples pour vérifier qu'il marche... Puis, on l'améliore et on vérifie qu'il fonctionne sur de nouveaux exemples plus complexes... Vérifie-t-on que cela marche fonctionne encore pour les premiers cas... Le plus souvent non... car c'est fastideux... J'en conviens... Alors pourquoi ne pas noter tous ces cas dans une fonction qui les vérifie... La fonction ne prend aucun paramètres, elle réussit si la fonction retourne tous les résultats désirés, elle échoue dans le cas contraire. Step6: Pas d'erreur... On continue avec la seconde distance en ajoutant des cas pour lesquels elle a été programmée. Pour les tests, on utilise un caractère '_' différent des espaces ' ' utilisé par les fonctions distance. Step7: Toujours pas d'erreur... La vie est magnifique... On continue avec la troisième distance en ajoutant des cas pour lesquels elle a été programmée. Step8: Toujours pas d'erreur... Magnifique... Et maintenant... Il est vrai qu'on ne s'est pas penché sur les coûts de chaque fonction mais la fonction distance3 est incroyablement longue. On note $N = \max(len(m1), len(m2))$. coût distance1 Step9: On utilise les tests unitaires pour vérifier qu'elle retourne les mêmes résultats, ceux qu'on souhaite. Step10: Ca marche... m et n sont très proches, et alors ? Step11: Comme beaucoup de gens font l'erreur, on voudrait que le coût soit réduit de moitié. On veut alors que la confusion entre m et n ait un coût de 0.5. Step12: Et toujours les tests unitaires. Step13: ff, f, ph, f... plus personne ne sait écrire Tout marche. 
Et maintenant on aimerait que Step14: Test unitaire again.
Python Code: from jyquickhelper import add_notebook_menu add_notebook_menu() Explanation: Distance entre deux mots de même longueur et tests unitaires Calculer une distance entre deux mots n'est pas le plus intuitif des problèmes. Dans ce notebook, on se permet de tâtonner pour faire évoluer quelques idées autour du sujet. C'est l'occasion aussi de montrer à quoi servent les tests unitaires et pourquoi ils sont utiles lorsqu'on tâtonne. End of explanation def distance1(m1, m2): d = 0 for i in range(0, len(m1)): if m1[i] != m2[i]: d += 1 return d distance1("info", "imfo") Explanation: Distance naïve Naïf... mais beaucoup d'idées naïves finissent par aboutir à des pyramides complexes. Distance très naïve On se restraint au cas où les deux mots à comparer ont la même longueur. Et dans ce cas, le plus simple est de compter le nombre de caractères différents à chaque position. End of explanation def distance2(m1, m2): if len(m1) == len(m2): return distance1(m1, m2) if len(m2) < len(m1): m1, m2 = m2, m1 meilleur = len(m2) for i in range(len(m1) + 1): m1_e = m1[:i] + ' ' + m1[i:] d = distance1(m1_e, m2) if d < meilleur: meilleur = d return meilleur distance2("cab", "ab") distance2("abcd", "bcdef") Explanation: Distance entre deux mots de longueur différente mais pas si différente On considère le cas où les deux mots ont des longueurs égales ou différentes de un caractères. Dans le premier cas, on utilise la distance précédente, dans le second cas, on ajoute un espace au mot le plus court et on appelle la distance précédente. Mais où insérer cet espace ? A toutes les positions bien sûr, la distance sera le minimum de toutes les distances calculées. Pour simplifier, on commence par décider que le premier mot doit être le plus court des deux. Si ce n'est pas le cas, on les permute. End of explanation def distance2_verbose(m1, m2, verbose=False): if len(m1) == len(m2): return distance1(m1, m2) if len(m2) < len(m1): m1, m2 = m2, m1 meilleur = len(m2) for i in range(len(m1) + 1): m1_e = m1[:i] + ' ' + m1[i:] d = distance1(m1_e, m2) if d < meilleur: meilleur = d if verbose: print("i=%r m1_e=%r m2=%r d=%d meilleur=%d" % ( i, m1_e, m2, d, meilleur)) return meilleur distance2_verbose("cab", "ab", True) Explanation: Parfois on aime bien comprendre un peu plus en détail. On ajoute alors un paramètre verbose qui affiche des informations sans pour autant affecter le résultat. End of explanation def distance3(m1, m2): if abs(len(m1) - len(m2)) <= 1: return distance2(m1, m2) if len(m2) < len(m1): m1, m2 = m2, m1 meilleur = len(m2) for i in range(len(m1) + 1): m1_e = m1[:i] + ' ' + m1[i:] d = distance3(m1_e, m2) if d < meilleur: meilleur = d return meilleur distance3("info", "pimfos") Explanation: Le paramètre verbose est une sorte de règle communément partagée, une convention... C'est ce que qu'en disent les pirates. Distance entre deux mots de longueur différente On suit la même idée et on insère des espaces dans le mot le plus petit de façon récursive jusqu'à pouvoir utiliser la distance précédente. Le code ressemble beaucoup à la fonction précédente. End of explanation def test_dist_equal(d): assert d("", "") == 0 assert d("a", "a") == 0 assert d("a", "b") == 1 def test_distance1(): test_dist_equal(distance1) test_distance1() Explanation: Test unitaires Quand on développe un algorithme, on l'applique sur quelques exemples pour vérifier qu'il marche... Puis, on l'améliore et on vérifie qu'il fonctionne sur de nouveaux exemples plus complexes... 
Vérifie-t-on que cela marche fonctionne encore pour les premiers cas... Le plus souvent non... car c'est fastideux... J'en conviens... Alors pourquoi ne pas noter tous ces cas dans une fonction qui les vérifie... La fonction ne prend aucun paramètres, elle réussit si la fonction retourne tous les résultats désirés, elle échoue dans le cas contraire. End of explanation def test_dist_diff1(d): assert d("", "a") == 1 assert d("a", "") == 1 assert d("_a", "a") == 1 assert d("a_", "a") == 1 assert d("a", "a_") == 1 assert d("a", "_a") == 1 def test_distance2(): test_dist_equal(distance2) test_dist_diff1(distance2) test_distance2() Explanation: Pas d'erreur... On continue avec la seconde distance en ajoutant des cas pour lesquels elle a été programmée. Pour les tests, on utilise un caractère '_' différent des espaces ' ' utilisé par les fonctions distance. End of explanation def test_dist_diff2(d): assert d("", "ab") == 2 assert d("ab", "") == 2 assert d("_ab", "a") == 2 assert d("ab_", "ab") == 1 assert d("ab", "ab_") == 1 assert d("ab", "_ab") == 1 assert d("ab", "ab") == 0 assert d("ab", "a_b") == 1 assert d("a_b", "ab") == 1 def test_distance3(): test_dist_equal(distance3) test_dist_diff1(distance3) test_dist_diff2(distance3) test_distance3() Explanation: Toujours pas d'erreur... La vie est magnifique... On continue avec la troisième distance en ajoutant des cas pour lesquels elle a été programmée. End of explanation import numpy def edit_distance(m1, m2): mat = numpy.zeros((len(m1) + 1, len(m2) + 1)) for i in range(len(m1) + 1): mat[i, 0] = i for j in range(len(m2) + 1): mat[0, j] = j for i in range(1, len(m1) + 1): for j in range(1, len(m2) + 1): c1 = mat[i-1, j] + 1 c2 = mat[i, j-1] + 1 if m1[i-1] == m2[j-1]: c = 0 else: c = 1 c3 = mat[i-1, j-1] + c mat[i, j] = min([c1, c2, c3]) return mat[-1, -1] print("edit", edit_distance('agrafe', 'agrae')) Explanation: Toujours pas d'erreur... Magnifique... Et maintenant... Il est vrai qu'on ne s'est pas penché sur les coûts de chaque fonction mais la fonction distance3 est incroyablement longue. On note $N = \max(len(m1), len(m2))$. coût distance1: $O(N)$ coût distance2: $O(N^2)$ coût distance3: $O(N^{\delta+1})$ où $\delta = |len(m1), len(m2)|$. Je vous laisse quelques minutes pour vérifier. J'interprète : c'est beaucoup trop. Distance d'édition On implémente l'algorithme de la distance de Levenstein. End of explanation def test_edit_distance(): test_dist_equal(edit_distance) test_dist_diff1(edit_distance) test_dist_diff2(edit_distance) test_edit_distance() Explanation: On utilise les tests unitaires pour vérifier qu'elle retourne les mêmes résultats, ceux qu'on souhaite. End of explanation edit_distance("rémunérer", "rénumérer") Explanation: Ca marche... m et n sont très proches, et alors ? 
End of explanation def edit_distance2(m1, m2): mat = numpy.zeros((len(m1) + 1, len(m2) + 1)) cmp_char = {('m','n') : 0.5, ('n','m') : 0.5} for i in range(len(m1) + 1): mat[i, 0] = i for j in range(len(m2) + 1): mat[0, j] = j for i in range(1, len(m1) + 1): for j in range(1, len(m2) + 1): c1 = mat[i-1, j] + 1 c2 = mat[i, j-1] + 1 if m1[i-1] == m2[j-1]: c = 0 else: c = cmp_char.get((m1[i-1], m2[j-1]), 1) c3 = mat[i-1, j-1] + c mat[i, j] = min([c1, c2, c3]) if i >= 2: cc = cmp_char.get((m1[i-2:i], m2[j-1]), 1) c4 = mat[i-2, j-1] + cc mat[i, j] = min(mat[i, j], c4) if j >= 2: cc = cmp_char.get((m1[i-1], m2[j-2:j]), 1) c4 = mat[i-1, j-2] + cc mat[i, j] = min(mat[i, j], c4) return mat[-1, -1] print("edit", edit_distance2('rémunérer', 'rénumérer')) Explanation: Comme beaucoup de gens font l'erreur, on voudrait que le coût soit réduit de moitié. On veut alors que la confusion entre m et n ait un coût de 0.5. End of explanation def test_special(d): assert d('rémunérer', 'rénumérer') == 1 def test_edit_distance2(): test_dist_equal(edit_distance2) test_dist_diff1(edit_distance2) test_dist_diff2(edit_distance2) test_special(edit_distance2) test_edit_distance2() Explanation: Et toujours les tests unitaires. End of explanation def edit_distance3(m1, m2): mat = numpy.zeros((len(m1) + 1, len(m2) + 1)) cmp_char = {('m','n') : 0.5, ('n','m') : 0.5, ('ff', 'f'): 0.5, ('f', 'ff'): 0.5, ('ph', 'f'): 0.4, ('ph', 'f'): 0.4} ins_char = {} for i in range(len(m1) + 1): mat[i, 0] = i for j in range(len(m2) + 1): mat[0, j] = j for i in range(1, len(m1) + 1): for j in range(1, len(m2) + 1): c1 = mat[i-1, j] + ins_char.get(m1[i-1], 1) c2 = mat[i, j-1] + ins_char.get(m2[j-1], 1) if m1[i-1] == m2[j-1]: c = 0 else: c = cmp_char.get((m1[i-1], m2[j-1]), 1) c3 = mat[i-1, j-1] + c mat[i, j] = min([c1, c2, c3]) if i >= 2: cc = cmp_char.get((m1[i-2:i], m2[j-1]), 1) c4 = mat[i-2, j-1] + cc mat[i, j] = min(mat[i, j], c4) if j >= 2: cc = cmp_char.get((m1[i-1], m2[j-2:j]), 1) c4 = mat[i-1, j-2] + cc mat[i, j] = min(mat[i, j], c4) return mat[-1, -1] print("edit", edit_distance('agrafe', 'agrae')) Explanation: ff, f, ph, f... plus personne ne sait écrire Tout marche. Et maintenant on aimerait que : distance('agraffe', 'agrafe') == 0.5 distance('agrafe', 'agrae') == 1 distance('éléphant', 'éléfant') == 0.5 Nouvelle distance encore. End of explanation def test_special(d): assert d('rémunérer', 'rénumérer') == 1 assert d('agrafe', 'agrae') == 1 assert d('agraffe', 'agrafe') == 0.5 assert d('éléphant', 'éléfant') == 0.4 def test_edit_distance3(): test_dist_equal(edit_distance3) test_dist_diff1(edit_distance3) test_dist_diff2(edit_distance3) test_special(edit_distance3) test_edit_distance3() Explanation: Test unitaire again. End of explanation
12,065
Given the following text description, write Python code to implement the functionality described below step by step Description: A Simple Autoencoder We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data. In this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset. Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits. Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input. Exercise Step3: Training Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards. Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed). Step5: Checking out the results Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
Python Code: %matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', validation_size=0) Explanation: A Simple Autoencoder We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data. In this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset. End of explanation img = mnist.train.images[2] plt.imshow(img.reshape((28, 28)), cmap='Greys_r') Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits. End of explanation # Size of the encoding layer (the hidden layer) encoding_dim = 32 image_size = mnist.train.images.shape[1] inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs') targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets') # Output of hidden layer encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu) # Output layer logits logits = tf.layers.dense(encoded, image_size, activation=None) # Sigmoid output from decoded = tf.nn.sigmoid(logits, name='output') loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(0.001).minimize(cost) Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input. Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function. 
End of explanation
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
    for ii in range(mnist.train.num_examples//batch_size):
        batch = mnist.train.next_batch(batch_size)
        feed = {inputs_: batch[0], targets_: batch[0]}
        batch_cost, _ = sess.run([cost, opt], feed_dict=feed)

    print("Epoch: {}/{}...".format(e+1, epochs),
          "Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})

for images, row in zip([in_imgs, reconstructed], axes):
    for img, ax in zip(images, row):
        ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)

fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation
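As an extra check (a sketch that reuses only inputs_, decoded and the MNIST test images from above; it has to run before sess.close() is called): the average pixel-wise reconstruction error on a held-out batch gives a single number to track alongside the visual comparison.
test_imgs = mnist.test.images[:1000]
recon = sess.run(decoded, feed_dict={inputs_: test_imgs})
mse = np.mean((recon - test_imgs) ** 2)
print("Mean squared reconstruction error: {:.5f}".format(mse))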
12,066
Given the following text description, write Python code to implement the functionality described below step by step Description: Training Define feature function Input is 3D array and voxelsize. Output is feature vector with rows number equal to pixel number and cols number equal to number of features. Step1: Classifier selection Any classifier with fit() and predict() function can be used. Decision tree Step2: Testing Step3: Evaluation
Python Code: def externfv(data3d, voxelsize_mm): # scale f0 = scipy.ndimage.filters.gaussian_filter(data3d, sigma=3).reshape(-1, 1) f1 = scipy.ndimage.filters.gaussian_filter(data3d, sigma=1).reshape(-1, 1) - f0 fv = np.concatenate([ f0, f1 ], 1) return fv Explanation: Training Define feature function Input is 3D array and voxelsize. Output is feature vector with rows number equal to pixel number and cols number equal to number of features. End of explanation # select classifier cl = imtools.ml.gmmcl.GMMCl(n_components=3) # both foreground and background gmm with 3 components # init trainer ol = imtools.trainer3d.Trainer3D(classifier=cl) # select feature function ol.feature_function = externfv for i in range(1, 5): datap = io3d.datasets.read_dataset("3Dircadb1", 'data3d', i) datap_liver = io3d.datasets.read_dataset("3Dircadb1", 'liver', i) ol.add_train_data(datap["data3d"], (datap_liver["data3d"] > 0).astype(np.uint8), voxelsize_mm=datap["voxelsize_mm"]) ol.fit() # save to file joblib.dump(ol, "ol.joblib") Explanation: Classifier selection Any classifier with fit() and predict() function can be used. Decision tree: ```python import sklearn.tree cl = sklearn.tree.DecisionTreeClassifier() Neural Network Classifier:python import sklearn.neural_network cl = sklearn.neural_network.MLPClassifier() ``` GMM 1 component for foreground, 3 components for background: python cl = imtools.ml.gmmcl.GMMCl() cl.cl = {0:sklearn.mixture.GaussianMixture(n_components=3), 1:sklearn.mixture.GaussianMixture(n_components=1)} End of explanation # load trained from file ol = joblib.load("ol.joblib") # one = list(imtools.datasets.sliver_reader("*000.mhd", read_seg=True))[0] # numeric_label, vs_mm, oname, orig_data, rname, ref_data = one i = 3 datap = io3d.datasets.read_dataset("3Dircadb1", 'data3d', i) fit = ol.predict(datap["data3d"], voxelsize_mm=datap["voxelsize_mm"]) # visualization plt.figure(figsize=(15, 10)) sed3.show_slices(datap["data3d"], fit, slice_step=20, axis=1, flipV=False) Explanation: Testing End of explanation datap_liver = io3d.datasets.read_dataset("3Dircadb1", 'liver', i) ground_true = (datap_liver['data3d'] > 0).astype(np.uint8) print(sklearn.metrics.classification_report(ground_true.ravel(), fit.astype(np.uint8).ravel())) Explanation: Evaluation End of explanation
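One more evaluation that is common for segmentation (a sketch, not from the original notebook; ground_true and fit are the arrays already compared above): the Dice coefficient summarises the overlap between the predicted and reference liver masks in a single number.
import numpy as np  # may already be imported earlier in the notebook

def dice(a, b):
    a = a.astype(bool)
    b = b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

print("Dice coefficient: {:.3f}".format(dice(ground_true, fit)))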
12,067
Given the following text description, write Python code to implement the functionality described below step by step Description: Source localization with a custom inverse solver The objective of this example is to show how to plug a custom inverse solver in MNE in order to facilate empirical comparison with the methods MNE already implements (wMNE, dSPM, sLORETA, LCMV, (TF-)MxNE etc.). This script is educational and shall be used for methods evaluations and new developments. It is not meant to be an example of good practice to analyse your data. The example makes use of 2 functions apply_solver and solver so changes can be limited to the solver function (which only takes three parameters Step2: Auxiliary function to run the solver Step4: Define your solver Step5: Apply your custom solver Step6: View in 2D and 3D ("glass" brain like 3D plot)
Python Code: import numpy as np from scipy import linalg import mne from mne.datasets import sample from mne.viz import plot_sparse_source_estimates data_path = sample.data_path() fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' ave_fname = data_path + '/MEG/sample/sample_audvis-ave.fif' cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif' subjects_dir = data_path + '/subjects' condition = 'Left Auditory' # Read noise covariance matrix noise_cov = mne.read_cov(cov_fname) # Handling average file evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0)) evoked.crop(tmin=0.04, tmax=0.18) evoked = evoked.pick_types(eeg=False, meg=True) # Handling forward solution forward = mne.read_forward_solution(fwd_fname) Explanation: Source localization with a custom inverse solver The objective of this example is to show how to plug a custom inverse solver in MNE in order to facilate empirical comparison with the methods MNE already implements (wMNE, dSPM, sLORETA, LCMV, (TF-)MxNE etc.). This script is educational and shall be used for methods evaluations and new developments. It is not meant to be an example of good practice to analyse your data. The example makes use of 2 functions apply_solver and solver so changes can be limited to the solver function (which only takes three parameters: the whitened data, the gain matrix, and the number of orientations) in order to try out another inverse algorithm. End of explanation def apply_solver(solver, evoked, forward, noise_cov, loose=0.2, depth=0.8): Call a custom solver on evoked data. This function does all the necessary computation: - to select the channels in the forward given the available ones in the data - to take into account the noise covariance and do the spatial whitening - to apply loose orientation constraint as MNE solvers - to apply a weigthing of the columns of the forward operator as in the weighted Minimum Norm formulation in order to limit the problem of depth bias. Parameters ---------- solver : callable The solver takes 3 parameters: data M, gain matrix G, number of dipoles orientations per location (1 or 3). A solver shall return 2 variables: X which contains the time series of the active dipoles and an active set which is a boolean mask to specify what dipoles are present in X. evoked : instance of mne.Evoked The evoked data forward : instance of Forward The forward solution. noise_cov : instance of Covariance The noise covariance. loose : float in [0, 1] | 'auto' Value that weights the source variances of the dipole components that are parallel (tangential) to the cortical surface. If loose is 0 then the solution is computed with fixed orientation. If loose is 1, it corresponds to free orientations. The default value ('auto') is set to 0.2 for surface-oriented source space and set to 1.0 for volumic or discrete source space. depth : None | float in [0, 1] Depth weighting coefficients. If None, no depth weighting is performed. Returns ------- stc : instance of SourceEstimate The source estimates. 
# Import the necessary private functions from mne.inverse_sparse.mxne_inverse import \ (_prepare_gain, _check_loose_forward, is_fixed_orient, _reapply_source_weighting, _make_sparse_stc) all_ch_names = evoked.ch_names loose, forward = _check_loose_forward(loose, forward) # Handle depth weighting and whitening (here is no weights) gain, gain_info, whitener, source_weighting, mask = _prepare_gain( forward, evoked.info, noise_cov, pca=False, depth=depth, loose=loose, weights=None, weights_min=None) # Select channels of interest sel = [all_ch_names.index(name) for name in gain_info['ch_names']] M = evoked.data[sel] # Whiten data M = np.dot(whitener, M) n_orient = 1 if is_fixed_orient(forward) else 3 X, active_set = solver(M, gain, n_orient) X = _reapply_source_weighting(X, source_weighting, active_set) stc = _make_sparse_stc(X, active_set, forward, tmin=evoked.times[0], tstep=1. / evoked.info['sfreq']) return stc Explanation: Auxiliary function to run the solver End of explanation def solver(M, G, n_orient): Run L2 penalized regression and keep 10 strongest locations. Parameters ---------- M : array, shape (n_channels, n_times) The whitened data. G : array, shape (n_channels, n_dipoles) The gain matrix a.k.a. the forward operator. The number of locations is n_dipoles / n_orient. n_orient will be 1 for a fixed orientation constraint or 3 when using a free orientation model. n_orient : int Can be 1 or 3 depending if one works with fixed or free orientations. If n_orient is 3, then ``G[:, 2::3]`` corresponds to the dipoles that are normal to the cortex. Returns ------- X : array, (n_active_dipoles, n_times) The time series of the dipoles in the active set. active_set : array (n_dipoles) Array of bool. Entry j is True if dipole j is in the active set. We have ``X_full[active_set] == X`` where X_full is the full X matrix such that ``M = G X_full``. K = linalg.solve(np.dot(G, G.T) + 1e15 * np.eye(G.shape[0]), G).T K /= np.linalg.norm(K, axis=1)[:, None] X = np.dot(K, M) indices = np.argsort(np.sum(X ** 2, axis=1))[-10:] active_set = np.zeros(G.shape[1], dtype=bool) for idx in indices: idx -= idx % n_orient active_set[idx:idx + n_orient] = True X = X[active_set] return X, active_set Explanation: Define your solver End of explanation # loose, depth = 0.2, 0.8 # corresponds to loose orientation loose, depth = 1., 0. # corresponds to free orientation stc = apply_solver(solver, evoked, forward, noise_cov, loose, depth) Explanation: Apply your custom solver End of explanation plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1), opacity=0.1) Explanation: View in 2D and 3D ("glass" brain like 3D plot) End of explanation
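Since the whole point of the example is that the solver can be swapped out, here is a second, hedged sketch of a drop-in replacement that follows the same (M, G, n_orient) -> (X, active_set) contract. The regularisation value and the choice of keeping the 10 strongest locations are arbitrary assumptions that simply mirror the solver defined above.
import numpy as np
from scipy import linalg

def ridge_solver(M, G, n_orient, alpha=1e-3, n_keep=10):
    # classical Tikhonov/ridge estimate, then keep the strongest locations
    n_channels = G.shape[0]
    K = linalg.solve(np.dot(G, G.T) + alpha * np.eye(n_channels), G).T
    X_full = np.dot(K, M)
    strength = np.sum(X_full ** 2, axis=1)
    indices = np.argsort(strength)[-n_keep:]
    active_set = np.zeros(G.shape[1], dtype=bool)
    for idx in indices:
        idx -= idx % n_orient
        active_set[idx:idx + n_orient] = True
    return X_full[active_set], active_set

# it can be passed to apply_solver exactly like the solver above:
# stc = apply_solver(ridge_solver, evoked, forward, noise_cov, loose, depth)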
12,068
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2021 Google LLC. Licensed under the Apache License, Version 2.0 (the "License"); Step1: Spectral Representations of Natural Images This notebook will show how to extract the spectral representations of an image, and see the effect of truncation of these spectral representation to the first $m$ components. Imports Step2: Image Upload Upload your images by running the cell below Step3: We rescale images to a reasonable resolution, otherwise this would take very long. Note that we will have $h \times w$ nodes in the resulting graph, where $h$ and $w$ are the height and width of the image. Step4: Helper Functions To compute the adjacency list and the Laplacian of the corresponding grid graph. Step5: By using a sparse matrix representation of the Laplacian, we save on memory significantly. Step6: After we have computed the Laplacian, we can compute its eigenvectors. Step7: The Laplacian is always positive semidefinite. Step8: Keeping the Top $m$ Components Once we have the eigenvectors, we can compute the (truncated) spectral representations. Step9: Saving Results We save results to the 'processed' subdirectory. Step10: You can download the images from this folder as a zipped folder by running the cells below.
Python Code: #@title License # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2021 Google LLC. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation import functools import io import itertools import os import matplotlib.pyplot as plt import numpy as np import PIL import scipy.sparse import scipy.sparse.linalg from google.colab import files Explanation: Spectral Representations of Natural Images This notebook will show how to extract the spectral representations of an image, and see the effect of truncation of these spectral representation to the first $m$ components. Imports End of explanation imgs = files.upload() def open_as_array(img_bytes): img_pil = PIL.Image.open(io.BytesIO(img_bytes)) img_pil = img_pil.resize((img_width, img_height)) return np.asarray(img_pil) img_name, img_bytes = list(imgs.items())[0] img_data = open_as_array(img_bytes) plt.axis('off') _ = plt.imshow(img_data) Explanation: Image Upload Upload your images by running the cell below End of explanation img_width = 50 img_height = 40 Explanation: We rescale images to a reasonable resolution, otherwise this would take very long. Note that we will have $h \times w$ nodes in the resulting graph, where $h$ and $w$ are the height and width of the image. End of explanation def get_index(x, y, img_width, img_height): return y * img_width + x; def get_neighbours(x, y, img_width, img_height): neighbours_x_pos = [max(0, x - 1), x, min(x + 1, img_width - 1)] neighbours_y_pos = [max(0, y - 1), y, min(y + 1, img_height - 1)] neighbours = product(neighbours_x_pos, neighbours_y_pos) neighbours = set(neighbours) neighbours.discard((x, y)) return neighbours Explanation: Helper Functions To compute the adjacency list and the Laplacian of the corresponding grid graph. End of explanation def compute_sparse_laplacian(img_width, img_height): neighbours_fn = functools.partial(get_neighbours, img_width=img_width, img_height=img_height) index_fn = functools.partial(get_index, img_width=img_width, img_height=img_height) senders = [] recievers = [] values = [] for x in range(img_width): for y in range(img_height): pos = (x, y) pos_index = index_fn(*pos) degree = 0. for neighbour in neighbours_fn(*pos): neigh_index = index_fn(*neighbour) senders.append(pos_index) recievers.append(neigh_index) values.append(-1.) degree += 1. senders.append(pos_index) recievers.append(pos_index) values.append(degree) num_nodes = img_width * img_height laplacian_shape = (num_nodes, num_nodes) return scipy.sparse.coo_matrix((values, (senders, recievers))) laplacian = compute_sparse_laplacian(img_width, img_height) Explanation: By using a sparse matrix representation of the Laplacian, we save on memory significantly. End of explanation num_eigenvecs = 1500 v0 = np.ones(img_width * img_height) eigenvals, eigenvecs = scipy.sparse.linalg.eigsh(laplacian, k=num_eigenvecs, which='SM', v0=v0) Explanation: After we have computed the Laplacian, we can compute its eigenvectors. 
End of explanation assert np.all(eigenvals >= 0) plt.hist(eigenvals, bins=100) plt.title('Histogram of Laplacian Eigenvalues') plt.show() Explanation: The Laplacian is always positive semidefinite. End of explanation def keep_first_components(img_data, num_components): orig_shape = img_data.shape img_reshaped = np.reshape(img_data, (-1, 3)) chosen_eigenvecs = eigenvecs[:, :num_components] spectral_coeffs = chosen_eigenvecs.T @ img_reshaped upd_img_data_reshaped = chosen_eigenvecs @ spectral_coeffs return np.reshape(upd_img_data_reshaped, orig_shape).astype(int) plt.axis('off') plt.imshow(keep_first_components(img_data, 200)) plt.savefig('test.png', bbox_inches='tight', pad_inches=0) Explanation: Keeping the Top $m$ Components Once we have the eigenvectors, we can compute the (truncated) spectral representations. End of explanation save_dir = 'processed' os.mkdir(save_dir) for img_name, img_bytes in imgs.items(): base_name = os.path.basename(img_name).split('.')[0] img_data = open_as_array(img_name) for num_components in [1, 2, 5, 10, 20, 100, 200, 500]: upd_img_data = keep_first_components(img_data, num_components) upd_img_name = f'{base_name}-{num_components}.png' plt.axis('off') plt.imshow(upd_img_data) _ = plt.savefig(f'{save_dir}/{upd_img_name}', bbox_inches='tight', pad_inches=0) Explanation: Saving Results We save results to the 'processed' subdirectory. End of explanation !zip -r processed.zip processed files.download('processed.zip') Explanation: You can download the images from this folder as a zipped folder by running the cells below. End of explanation
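To see the truncation step in isolation without uploading an image, here is a dense version of the same construction on a tiny grid: build the 8-neighbour Laplacian, take its eigenvectors, and project a signal onto the first m of them. The grid size, the random signal and m are arbitrary assumptions chosen so that the dense eigendecomposition stays cheap.
import numpy as np

w, h = 6, 5                      # tiny assumed grid, not the 50 x 40 used above
n = w * h

def index(x, y):
    return y * w + x             # same indexing convention as get_index above

# Dense 8-neighbour grid Laplacian: degree on the diagonal, -1 for each neighbour
L = np.zeros((n, n))
for x in range(w):
    for y in range(h):
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (dx, dy) != (0, 0) and 0 <= nx < w and 0 <= ny < h:
                    L[index(x, y), index(nx, ny)] -= 1.0
                    L[index(x, y), index(x, y)] += 1.0

eigenvals, eigenvecs = np.linalg.eigh(L)           # eigenvalues in ascending order
signal = np.random.default_rng(0).random(n)        # stand-in for one colour channel

m = 10
U = eigenvecs[:, :m]                               # keep the m smoothest components
reconstruction = U @ (U.T @ signal)
print(np.linalg.norm(signal - reconstruction))     # truncation error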
12,069
Given the following text description, write Python code to implement the functionality described below step by step Description: Testing the pyEMMA API Step1: Now we import a few general packages that we need to start with. The following imports basic numerics and algebra routines (numpy) and plotting routines (matplotlib), and makes sure that all plots are shown inside the notebook rather than in a separate window (nicer that way). Step2: Now we import the pyEMMA package that we will be using in the beginning Step3: TICA and clustering So we would like to first reduce our dimension by throwing out the ‘uninteresting’ ones and only keeping the ‘relevant’ ones. But how do we do that? It turns out that a really good way to do that if you are interesting in the slow kinetics of the molecule - e.g. for constructing a Markov model, is to use the time-lagged independent component analysis (TICA) [2]. Amongst linear methods, TICA is optimal in its ability to approximate the relevant slow coordinates / reaction coordinates from MD simulation [3], and therefore it’s ideal to construct Markov models. Step4: By default, TICA will choose a number of output dimensions to cover 95% of the kinetic variance and scale the output to produce a kinetic map. In this case we retain 575 dimensions, which is a lot but note that they are scaled by eigenvalue, so it’s mostly the first dimensions that contribute. Step5: The TICA object has a number of properties that we can extract and work with. We have already obtained the projected trajectory and wrote it in a variable Y that is a matrix of size (103125 x 2). The rows are the MD steps, the 2 columns are the independent component coordinates projected onto. So each columns is a trajectory. Let us plot them Step6: A particular thing about the IC’s is that they have zero mean and variance one. We can easily check that Step7: The small deviations from 0 and 1 come from statistical and numerical issues. That’s not a problem. Note that if we had set kinetic_map=True when doing TICA, then the variances would not be 1 but rather the square of the corresponding TICA eigenvalue. TICA is a special transformation because it will project the data such that the autocorrelation along the independent components is as slow as possible. The eigenvalues of the TICA transform are the values of these autocorrelations at the chosen lag time (here 100). We can even interpret them in terms of relaxation timescales Step8: We will see more timescales later when we estimate a Markov model, and there will be some differences. For now you should treat these numbers as a rough guess of your molecule’s timescales, and we will see later that this guess is actually a bit too fast. The timescales are relative to the 10 ns saving interval, so we have
Python Code: import pyemma pyemma.__version__ Explanation: Testing the pyEMMA API End of explanation import matplotlib.pylab as plt import numpy as np %pylab inline Explanation: Now we import a few general packages that we need to start with. The following imports basic numerics and algebra routines (numpy) and plotting routines (matplotlib), and makes sure that all plots are shown inside the notebook rather than in a separate window (nicer that way). End of explanation import pyemma.coordinates as coor import pyemma.msm as msm import pyemma.plots as mplt from pyemma import config # some helper funcs def average_by_state(dtraj, x, nstates): assert(len(dtraj) == len(x)) N = len(dtraj) res = np.zeros((nstates)) for i in range(nstates): I = np.argwhere(dtraj == i)[:,0] res[i] = np.mean(x[I]) return res def avg_by_set(x, sets): # compute mean positions of sets. This is important because of some technical points the set order # in the coarse-grained TPT object can be different from the input order. avg = np.zeros(len(sets)) for i in range(len(sets)): I = list(sets[i]) avg[i] = np.mean(x[I]) return avg shortcuts = {'average_by_state': average_by_state, 'avg_by_set': avg_by_set} import glob trajfiles = sorted(glob.glob('./*/05*nc')) for file in trajfiles: print("%s\n" % file) topfile = "./test.pdb" feat = coor.featurizer(topfile) feat.add_backbone_torsions(cossin=True) feat.add_chi1_torsions(cossin=True) inp = coor.source(trajfiles, feat) print("Number of trajectories: %s" % inp.number_of_trajectories()) print("Aggregate simulation time: %.2f ns" % (inp.n_frames_total() * 0.02)) print("Number of dimensions: %s" % inp.dimension()) Explanation: Now we import the pyEMMA package that we will be using in the beginning: the coordinates package. This package contains functions and classes for reading and writing trajectory files, extracting order parameters from them (such as distances or angles), as well as various methods for dimensionality reduction and clustering. The shortcuts module is a bunch of functions specific to this workshop - they help us to visualize some of our results. Some of them might become part of the pyemma package once they are more mature. End of explanation tica_obj = coor.tica(inp, lag=100) Y = tica_obj.get_output()[0] Explanation: TICA and clustering So we would like to first reduce our dimension by throwing out the ‘uninteresting’ ones and only keeping the ‘relevant’ ones. But how do we do that? It turns out that a really good way to do that if you are interesting in the slow kinetics of the molecule - e.g. for constructing a Markov model, is to use the time-lagged independent component analysis (TICA) [2]. Amongst linear methods, TICA is optimal in its ability to approximate the relevant slow coordinates / reaction coordinates from MD simulation [3], and therefore it’s ideal to construct Markov models. End of explanation print("Projected data shape: (%s,%s)" % (Y.shape[0], Y.shape[1])) print('Retained dimensions: %s' % tica_obj.dimension()) plot(tica_obj.cumvar, linewidth=2) plot([tica_obj.dimension(), tica_obj.dimension()], [0, 1], color='black', linewidth=2) plot([0, Y.shape[0]], [0.95, 0.95], color='black', linewidth=2) xlabel('Number of dimensions'); ylabel('Cum. kinetic variance fraction') Explanation: By default, TICA will choose a number of output dimensions to cover 95% of the kinetic variance and scale the output to produce a kinetic map. 
In this case we retain 575 dimensions, which is a lot but note that they are scaled by eigenvalue, so it’s mostly the first dimensions that contribute. End of explanation mplt.plot_free_energy(np.vstack(Y)[:, 0], np.vstack(Y)[:, 1]) xlabel('independent component 1'); ylabel('independent component 2') Explanation: The TICA object has a number of properties that we can extract and work with. We have already obtained the projected trajectory and wrote it in a variable Y that is a matrix of size (103125 x 2). The rows are the MD steps, the 2 columns are the independent component coordinates projected onto. So each columns is a trajectory. Let us plot them: End of explanation print("Mean values: %s" % np.mean(Y[0], axis = 0)) print("Variances: %s" % np.var(Y[0], axis = 0)) Explanation: A particular thing about the IC’s is that they have zero mean and variance one. We can easily check that: End of explanation print(-100/np.log(tica_obj.eigenvalues[:5])) Explanation: The small deviations from 0 and 1 come from statistical and numerical issues. That’s not a problem. Note that if we had set kinetic_map=True when doing TICA, then the variances would not be 1 but rather the square of the corresponding TICA eigenvalue. TICA is a special transformation because it will project the data such that the autocorrelation along the independent components is as slow as possible. The eigenvalues of the TICA transform are the values of these autocorrelations at the chosen lag time (here 100). We can even interpret them in terms of relaxation timescales: End of explanation subplot2grid((2,1),(0,0)) plot(Y[:,0]) ylabel('ind. comp. 1') subplot2grid((2,1),(1,0)) plot(Y[:,1]) ylabel('ind. comp. 2') xlabel('time (10 ns)') tica_obj.chunksize mplt.plot_implied_timescales(tica_obj) Explanation: We will see more timescales later when we estimate a Markov model, and there will be some differences. For now you should treat these numbers as a rough guess of your molecule’s timescales, and we will see later that this guess is actually a bit too fast. The timescales are relative to the 10 ns saving interval, so we have End of explanation
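The eigenvalue-to-timescale conversion used above, t_i = -tau / ln(lambda_i) with tau the lag time, can be checked on its own. The eigenvalues below are made-up numbers chosen only to illustrate the formula; they are not the values computed from these trajectories.
import numpy as np

lag = 100                                         # lag time, in saving intervals, as above
eigenvalues = np.array([0.95, 0.80, 0.60, 0.30])  # hypothetical TICA eigenvalues
timescales = -lag / np.log(eigenvalues)
print(timescales)  # in saving intervals; multiply by the interval (10 ns here) for physical time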
12,070
Given the following text description, write Python code to implement the functionality described below step by step Description: Example 3 Step1: Step 1 Step2: The full list of parameters that can be set with the initialization are as follows (all are optional). | Argument | Defaults | Purpose | | ------------- | ------------- | ------------- | | tag | "Untagged" | The label of the file where the output of MultiNest will be stored, specifically they are stored at work_dir/chains/tag/. | | work_dir | $pwd | The directory where all outputs from the NPTF will be stored. This defaults to the notebook directory, but an alternative can be specified. | | psf_dir | work_dir/psf_dir/ | Where the psf corrections will be stored (this correction is discussed in the next notebook). | Step 2 Step3: In order to study the inner galaxy, we restrict ourselves to a smaller ROI defined by the analysis mask discussed in Example 2. The mask must be the same length as the data and exposure. Step4: Add in the templates we will want to use as background models. When adding templates, the first entry is the template itself and the second the string by which it is identified. The length for each template must again match the data. Step5: Step 3 Step6: Note the diffuse model is normalised to a much larger value than the maximum prior of the other templates. This is because the diffuse model explains the majority of the flux in our ROI. The value of 15 was determined from a fit where the diffuse model was not fixed. Step 4 Step7: Step 5 Step8: Step 6 Step9: The triangle plot makes it clear that a non-zero value of the GCE template is preferred by the fit. Note also that as we gave the isotropic template a log flat prior, the parameter in the triangle plot is $\log_{10} A_\mathrm{iso}$. We also show the relative fraction of the Flux obtained by the GCE as compared to other templates. Note the majority of the flux is absorbed by the diffuse model.
Python Code: # Import relevant modules %matplotlib inline %load_ext autoreload %autoreload 2 import numpy as np import corner import matplotlib.pyplot as plt from NPTFit import nptfit # module for performing scan from NPTFit import create_mask as cm # module for creating the mask from NPTFit import dnds_analysis # module for analysing the output Explanation: Example 3: Running Poissonian Scans with MultiNest In this example we demonstrate how to run a scan using only templates that follow Poisson statistics. Nevertheless many aspects of how the code works in general, such as initialization, loading data, masks and templates, and running the code with MultiNest carry over to the non-Poissonian case. In detail we will perform an analysis of the inner galaxy involving all five background templates discussed in Example 1. We will show that the fit prefers a non-zero value for the GCE template. NB: This example makes use of the Fermi Data, which needs to already be installed. See Example 1 for details. End of explanation n = nptfit.NPTF(tag='Poissonian_Example') Explanation: Step 1: Setting up an instance of NPTFit To begin with we need to create an instance of NPTF from nptfit.py. We will load it with the tag set to "Poissonian_Example", which is the name attached to the folder within the chains directory where the output will be stored. Note for long runs the chains output can become large, so periodically deleting runs you are no longer using is recommended. End of explanation fermi_data = np.load('fermi_data/fermidata_counts.npy') fermi_exposure = np.load('fermi_data/fermidata_exposure.npy') n.load_data(fermi_data, fermi_exposure) Explanation: The full list of parameters that can be set with the initialization are as follows (all are optional). | Argument | Defaults | Purpose | | ------------- | ------------- | ------------- | | tag | "Untagged" | The label of the file where the output of MultiNest will be stored, specifically they are stored at work_dir/chains/tag/. | | work_dir | $pwd | The directory where all outputs from the NPTF will be stored. This defaults to the notebook directory, but an alternative can be specified. | | psf_dir | work_dir/psf_dir/ | Where the psf corrections will be stored (this correction is discussed in the next notebook). | Step 2: Add in Data, a Mask and Background Templates Next we need to pass the code some data to analyze. For this purpose we use the Fermi Data described in Example 1. The format for load_data is data and then exposure. NB: we emphasize that although we use the example of HEALPix maps here, the code more generally works on any 1-d arrays, as long as the data, exposure, mask, and templates all have the same length. End of explanation pscmask=np.array(np.load('fermi_data/fermidata_pscmask.npy'), dtype=bool) analysis_mask = cm.make_mask_total(band_mask = True, band_mask_range = 2, mask_ring = True, inner = 0, outer = 30, custom_mask = pscmask) n.load_mask(analysis_mask) Explanation: In order to study the inner galaxy, we restrict ourselves to a smaller ROI defined by the analysis mask discussed in Example 2. The mask must be the same length as the data and exposure. 
End of explanation dif = np.load('fermi_data/template_dif.npy') iso = np.load('fermi_data/template_iso.npy') bub = np.load('fermi_data/template_bub.npy') psc = np.load('fermi_data/template_psc.npy') gce = np.load('fermi_data/template_gce.npy') n.add_template(dif, 'dif') n.add_template(iso, 'iso') n.add_template(bub, 'bub') n.add_template(psc, 'psc') n.add_template(gce, 'gce') Explanation: Add in the templates we will want to use as background models. When adding templates, the first entry is the template itself and the second the string by which it is identified. The length for each template must again match the data. End of explanation n.add_poiss_model('dif', '$A_\mathrm{dif}$', False, fixed=True, fixed_norm=15.) n.add_poiss_model('iso', '$A_\mathrm{iso}$', [-2,1], True) n.add_poiss_model('bub', '$A_\mathrm{bub}$', [0,2], False) n.add_poiss_model('psc', '$A_\mathrm{psc}$', [0,2], False) n.add_poiss_model('gce', '$A_\mathrm{gce}$', [0,2], False) Explanation: Step 3: Add Background Models to the Fit Now from this list of templates the NPTF now knows about, we add in a series of background models which will be passed to MultiNest. In Example 6 we will show how to evaluate the likelihood without MultiNest, so that it can be interfaced with alternative inference packages. Poissonian templates only have one parameter associated with them: $A$ the template normalisation. Poissonian models are added to the fit via add_poiss_model. The first argument sets the spatial template for this background model, and should match the string used in add_template. The second argument is a LaTeX ready string used to identify the floated parameter later on. By default added models will be floated. For floated templates the next two parameters are the prior range, added in the form [param_min, param_max] and then whether the prior is log flat (True) or linear flat (False). For log flat priors the priors are specified as indices, so that [-2,1] floats over a linear range [0.01,10]. Templates can also be added with a fixed normalisation. In this case no prior need be specified and instead fixed=True should be specified as well as fixed_norm=value, where value is $A$ the template normalisation. We use each of these possibilities in the example below. End of explanation n.configure_for_scan() Explanation: Note the diffuse model is normalised to a much larger value than the maximum prior of the other templates. This is because the diffuse model explains the majority of the flux in our ROI. The value of 15 was determined from a fit where the diffuse model was not fixed. Step 4: Configure the Scan Now the scan knows what models we want to fit to the data, we can configure the scan. In essence this step prepares all the information given above into an efficient format for calculating the likelihood. The main actions performed are: 1. Take the data and templates, and reduce them to only the ROI we will use as defined by the mask; 2. Further for a non-Poissonian scan an accounting for the number of exposure regions requested is made; and 3. Take the priors and parameters and prepare them into an efficient form for calculating the likelihood function that can then be used directly or passed to MultiNest. End of explanation n.perform_scan(nlive=500) Explanation: Step 5: Perform the Scan Having setup all the parameters, we can now perform the scan using MultiNest. We will show an example of how to manually calculate the likelihood in Example 6. 
| Argument | Default Value | Purpose | | ------------- | ------------- | ------------- | | run_tag | None | An additional tag can be specified to create a subdirectory of work_dir/chains/tag/ in which the output is stored. | nlive | 100 | Number of live points to be used during the MultiNest scan. A higher value thatn 100 is recommended for most runs, although larger values correspond to increased run time. | | pymultinest_options | None | When set to None our default choices for MultiNest will be used (explained below). To alter these options, a dictionary of parameters and their values should be placed here. | Our default MultiNest options are defined as follows: python pymultinest_options = {'importance_nested_sampling': False, 'resume': False, 'verbose': True, 'sampling_efficiency': 'model', 'init_MPI': False, 'evidence_tolerance': 0.5, 'const_efficiency_mode': False} For variations on these, a dictionary in the same format should be passed to perform_scan. A detailed explanation of the MultiNest options can be found here: https://johannesbuchner.github.io/PyMultiNest/pymultinest_run.html End of explanation n.load_scan() an = dnds_analysis.Analysis(n) an.make_triangle() Explanation: Step 6: Analyze the Output Here we show a simple example of the output - the triangle plot. The full list of possible analysis options is explained in more detail in Example 8. In order to do this we need to first load the scan using load_scan, which takes as an optional argument the same run_tag as used for the run. Note that load_scan can be used to load a run performed in a previous instance of NPTF, as long as the various parameters match. After the scan is loaded we then create an instance of dnds_analysis, which takes an instance of nptfit.NPTF as an argument - which must already have a scan loaded. From here we simply make a triangle plot. End of explanation an.plot_intensity_fraction_poiss('gce', bins=800, color='tomato', label='GCE') an.plot_intensity_fraction_poiss('iso', bins=800, color='cornflowerblue', label='Iso') an.plot_intensity_fraction_poiss('bub', bins=800, color='plum', label='Bub') plt.xlabel('Flux fraction (%)') plt.legend(fancybox = True) plt.xlim(0,8); Explanation: The triangle plot makes it clear that a non-zero value of the GCE template is preferred by the fit. Note also that as we gave the isotropic template a log flat prior, the parameter in the triangle plot is $\log_{10} A_\mathrm{iso}$. We also show the relative fraction of the Flux obtained by the GCE as compared to other templates. Note the majority of the flux is absorbed by the diffuse model. End of explanation
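For intuition about what such a purely Poissonian fit is doing, the sketch below writes down the pixel-wise Poisson likelihood of a weighted sum of templates and evaluates it on mock data. The number of pixels, the template shapes and the normalizations are invented for the illustration; they are not the Fermi templates or priors used above.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
n_pix = 1000                                    # toy ROI size
templates = {'dif': 10.0 * rng.random(n_pix),   # stand-ins for the spatial templates
             'iso': np.ones(n_pix),
             'gce': rng.random(n_pix)}
true_norms = {'dif': 15.0, 'iso': 1.0, 'gce': 0.5}

mu_true = sum(norm * templates[name] for name, norm in true_norms.items())
data = rng.poisson(mu_true)                     # mock observed counts

def log_like(norms):
    mu = sum(norm * templates[name] for name, norm in norms.items())
    return np.sum(poisson.logpmf(data, mu))

print(log_like(true_norms))
print(log_like({'dif': 15.0, 'iso': 1.0, 'gce': 0.0}))  # usually lower without the GCE term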
12,071
Given the following text description, write Python code to implement the functionality described below step by step Description: Locality Sensitive Hashing Question 1 The edit distance is the minimum number of character insertions and character deletions required to turn one string into another. Compute the edit distance between each pair of the strings he, she, his, and hers. Then, identify the pairs at each edit distance. Step1: Question 2 Consider the following matrix Step2: Question 3 Consider a matrix representing the signatures of seven columns, C1 through C7 Step3: Question 4 Find the set of 2-shingles for the "document" Step4: Question 5 How many distinct 3-shingles are there in the string "hello world" (excluding the quotes)? Step5: Question 6 Suppose we want to assign points to whichever of the points (0,0) or (100,40) is nearer. Depending on whether we use the L1 or L2 norm, a point (x,y) could be clustered with a different one of these two points. For this problem, you should work out the conditions under which a point will be assigned to (0,0) when the L1 norm is used, but assigned to (100,40) when the L2 norm is used. Identify one of those points from the list Step6: Question 7 Suppose we have an LSH family h of (d1,d2,.6,.4) hash functions. We can use three functions from h and the AND-construction to form a (d1,d2,w,x) family, and we can use two functions from h and the OR-construction to form a (d1,d2,y,z) family. Calculate w, x, y, and z. Step7: Question 8 There are 8 strings that represent sets
Python Code: from collections import defaultdict from itertools import combinations def lcs(a, b): lengths = [[0 for j in range(len(b)+1)] for i in range(len(a)+1)] for i, x in enumerate(a): for j, y in enumerate(b): if x == y: lengths[i+1][j+1] = lengths[i][j] + 1 else: lengths[i+1][j+1] = max(lengths[i+1][j], lengths[i][j+1]) # read the substring out from the matrix result = "" x, y = len(a), len(b) while x != 0 and y != 0: if lengths[x][y] == lengths[x-1][y]: x -= 1 elif lengths[x][y] == lengths[x][y-1]: y -= 1 else: assert a[x-1] == b[y-1] result = a[x-1] + result x -= 1 y -= 1 return result strings = ["he","she","his","hers"] pairs = list(combinations(strings,2)) length = defaultdict(list) for pair in pairs: x = pair[0] y = pair[1] dist = len(x) + len(y) - 2 * (len(lcs(x,y))) length[dist].append((x,y)) for k,v in length.items(): print k,v Explanation: Locality Sensitive Hashing Question 1 The edit distance is the minimum number of character insertions and character deletions required to turn one string into another. Compute the edit distance between each pair of the strings he, she, his, and hers. Then, identify the pairs at each edit distance. End of explanation import numpy as np def convertToRows(minhash_row, order): the_rows = [] for item in minhash_row: the_rows.append(order[item - 1]) return the_rows def minhash(matrix, order): m = matrix.shape[1] minhash_row = [0] * m i = 1 for r in order: row = matrix[r - 1] for c in range(0, m): if minhash_row[c] == 0: minhash_row[c] = i * row[c] i += 1 if 0 not in minhash_row: break return minhash_row matrix = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 0, 1, 0], [1, 0, 1, 0], [0, 1, 0, 0]]) order = [4, 6, 1, 3, 5, 2] minhash_row = minhash(matrix, order) rows_that_contributed = convertToRows(minhash_row, order) print(rows_that_contributed) Explanation: Question 2 Consider the following matrix: | | C1 | C2 | C3 | C4 | |-|-|-|-|-| |R1 | 0 | 1 | 1 | 0 | |R2 | 1 | 0 | 1 | 1 | |R3 | 0 | 1 | 0 | 1 | |R4 | 0 | 0 | 1 | 0 | |R5 | 1 | 0 | 1 | 0 | |R6 | 0 | 1 | 0 | 0 | Perform a minhashing of the data, with the order of rows: R4, R6, R1, R3, R5, R2. Which of the following is the correct minhash value of the stated column? Note: we give the minhash value in terms of the original name of the row, rather than the order of the row in the permutation. These two schemes are equivalent, since we only care whether hash values for two columns are equal, not what their actual values are. End of explanation def candPair(x, y): i = 0 while i < len(x): j = i + 1 while j < len(x): if x[i] == x [j] and y[i] == y[j]: print "C%s,C%s" % (i+1,j+1) j += 1 i += 1 print "Candidate pairs:" candPair([1,2,1,1,2,5,4], [2,3,4,2,3,2,2]) candPair([3,1,2,3,1,3,2], [4,1,3,1,2,4,4]) candPair([5,2,5,1,1,5,1], [6,1,6,4,1,1,4]) Explanation: Question 3 Consider a matrix representing the signatures of seven columns, C1 through C7: |C1 |C2 |C3 |C4 |C5 |C6 |C7| |-|-|-|-|-|-|-| |1 |2 |1 |1 |2 |5 |4| |2 |3 |4 |2 |3 |2 |2| |3 |1 |2 |3 |1 |3 |2| |4 |1 |3 |1 |2 |4 |4| |5 |2 |5 |1 |1 |5 |1| |6 |1 |6 |4 |1 |1 |4| Suppose we use locality-sensitive hashing with three bands of two rows each. Assume there are enough buckets available that the hash function for each band can be the identity function (i.e., columns hash to the same bucket if and only if they are identical in the band). Find all the candidate pairs. 
End of explanation str1 = "ABRACADABRA" str2 = "BRICABRAC" def getShingles(str): shingles = [] doc_shingles = [] for i in range(len(str)-1): shingles.append(str[i]+str[i+1]) return set(shingles) def getCommon(list1,list2): s1 = set(list1) s2 = set(list2) return len(s1.intersection(s2)) def getTotal(list1,list2): s1 = set(list1) s2 = set(list2) return len(s1.union(s2)) print "Set of 2-shinlges in ABRACADABRA =", getShingles(str1) print "Set of 2-shinlges in BRICABRAC =", getShingles(str2) print print "Number of 2-shinlges in ABRACADABRA = ", len(getShingles(str1)) print "Number of 2-shinlges in BRICABRAC = ", len(getShingles(str2)) print print "Number of Common Shingles = ", getCommon(getShingles(str1),getShingles(str2)) print "Total Number of Shingles = ", getTotal(getShingles(str1),getShingles(str2)) Explanation: Question 4 Find the set of 2-shingles for the "document": ABRACADABRA and also for the "document": BRICABRAC. Answer the following questions: How many 2-shingles does ABRACADABRA have? How many 2-shingles does BRICABRAC have? How many 2-shingles do they have in common? What is the Jaccard similarity between the two documents"? End of explanation str1 = "hello world" def shingle(str): s = set() for i,c in enumerate(str): if i < len(str) - 2: shing = c+str[i+1]+str[i+2] print shing s.add(shing) print print s return s shingle1 = shingle(str1) print print('Number of 3-shinlges in "hello world" = '+str(len(shingle1))); Explanation: Question 5 How many distinct 3-shingles are there in the string "hello world" (excluding the quotes)? End of explanation def distance(pt1, pt2, norm=2): differences = sum(abs(coord1-coord2)**norm for coord1, coord2 in zip(pt1,pt2)) return np.power(differences, 1./norm) for pt1 in ((50,18), (53,15), (56,15), (52,13)): print pt1 for norm in range(1,3): pt2 = ((0,0), (100,40)) one = distance(pt1, pt2[0], norm=norm) two = distance(pt1, pt2[1], norm=norm) if one < two: print pt1, "assigned to", pt2[0], "under L{} norm".format(norm) else: print pt1, "assigned to", pt2[1], "under L{} norm".format(norm) print Explanation: Question 6 Suppose we want to assign points to whichever of the points (0,0) or (100,40) is nearer. Depending on whether we use the L1 or L2 norm, a point (x,y) could be clustered with a different one of these two points. For this problem, you should work out the conditions under which a point will be assigned to (0,0) when the L1 norm is used, but assigned to (100,40) when the L2 norm is used. Identify one of those points from the list: (50,18), (53,15), (56,15), (52,13). End of explanation #import numpy as np # LSH family h (d1, d2, p1, p2) = (d1, d2, 0.6, 0.4) p1 = 0.6 p2 = 0.4 # We can use three functions from h and the AND-construction to form a (d1,d2,w,x) family # we cube the probabilities associated with h # (d1, d2, (p1)^3, (p2)^3) def AND_hash(p): return p ** 3 #w = np.power(p1, 3) #x = np.power(p2, 3) # We can use two functions from h and the OR-construction to form a (d1,d2,y,z) family # we take each probability associated with h, subtract it from 1, # square the result, and subtract that from 1 def OR_hash(p): return 1 - (1-p) ** 2 #y = 1 - np.power(1 - p1, 2) #z = 1 - np.power(1 - p2, 2) print "w =", AND_hash(p1) print "x =", AND_hash(p2) print "y =", OR_hash(p1) print "z =", OR_hash(p2) Explanation: Question 7 Suppose we have an LSH family h of (d1,d2,.6,.4) hash functions. 
We can use three functions from h and the AND-construction to form a (d1,d2,w,x) family, and we can use two functions from h and the OR-construction to form a (d1,d2,y,z) family. Calculate w, x, y, and z. End of explanation # The 8 strings that represent sets: s1 = "abcef" s2 = "acdeg" s3 = "bcdefg" s4 = "adfg" s5 = "bcdfgh" s6 = "bceg" s7 = "cdfg" s8 = "abcd" # Upper limit on Jaccard distance is 0.2 # We index a string of length L on the symbols appearing in its prefix of length floor(0.2L+1) # Strings of length 5 and 6 are indexed on their first 2 symbols # Strings of length 4 are indexed on their first symbol Jaccard = 0.2 def prefxlen(str, Jaccard): return int(np.floor(len(str)*Jaccard + 1)) s = [s1, s2, s3, s4, s5, s6, s7, s8] length = map(lambda s: prefxlen(s, Jaccard), s) print "Prefix lenth of s1 to s8:", length # We want to find the strings in index a, b and c def index(str): indx = {} for s in str: for l in range(prefxlen(s, Jaccard)): if s[l] in indx: indx[s[l]].append(s) else: indx[s[l]] = [s] return indx indx = index(s) print "strings in a, b and c:", indx # Number of strings compared with each of s1, s3 and s6 when it is used as the probe string def count(indx, s): counts = set(sum([indx[s[l]] for l in range(prefxlen(s, Jaccard))], [])) counts.remove(s) return (s, len(counts), counts) print "String, number of strings for comparisons, set of strings for comparisons:" for s in [s1, s3, s6]: print count(indx, s) Explanation: Question 8 There are 8 strings that represent sets: s1 = abcef; s2 = acdeg; s3 = bcdefg; s4 = adfg; s5 = bcdfgh; s6 = bceg; s7 = cdfg; s8 = abcd. The upper limit on Jaccard distance is 0.2, and we index the strings based on symbols appearing in the prefix. For each of s1, s3, and s6, determine how many other strings that string will be compared with, if it is used as the probe string. End of explanation
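The AND/OR constructions of Question 7 are the building blocks of the standard banding technique: with b bands of r rows each, two columns with Jaccard similarity s become candidates with probability 1 - (1 - s^r)^b. The r and b values in this sketch are arbitrary choices, not part of the exercises above.
def candidate_probability(s, r, b):
    # b-way OR of r-way ANDs
    return 1.0 - (1.0 - s ** r) ** b

for s in (0.2, 0.4, 0.6, 0.8):
    print(s, candidate_probability(s, r=2, b=3))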
12,072
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href='http Step1: Here we see tokens combine to form the entities Washington, DC, next May and the Washington Monument Entity annotations Doc.ents are token spans with their own set of annotations. <table> <tr><td>`ent.text`</td><td>The original entity text</td></tr> <tr><td>`ent.label`</td><td>The entity type's hash value</td></tr> <tr><td>`ent.label_`</td><td>The entity type's string description</td></tr> <tr><td>`ent.start`</td><td>The token span's *start* index position in the Doc</td></tr> <tr><td>`ent.end`</td><td>The token span's *stop* index position in the Doc</td></tr> <tr><td>`ent.start_char`</td><td>The entity text's *start* index position in the Doc</td></tr> <tr><td>`ent.end_char`</td><td>The entity text's *stop* index position in the Doc</td></tr> </table> Step2: NER Tags Tags are accessible through the .label_ property of an entity. <table> <tr><th>TYPE</th><th>DESCRIPTION</th><th>EXAMPLE</th></tr> <tr><td>`PERSON`</td><td>People, including fictional.</td><td>*Fred Flintstone*</td></tr> <tr><td>`NORP`</td><td>Nationalities or religious or political groups.</td><td>*The Republican Party*</td></tr> <tr><td>`FAC`</td><td>Buildings, airports, highways, bridges, etc.</td><td>*Logan International Airport, The Golden Gate*</td></tr> <tr><td>`ORG`</td><td>Companies, agencies, institutions, etc.</td><td>*Microsoft, FBI, MIT*</td></tr> <tr><td>`GPE`</td><td>Countries, cities, states.</td><td>*France, UAR, Chicago, Idaho*</td></tr> <tr><td>`LOC`</td><td>Non-GPE locations, mountain ranges, bodies of water.</td><td>*Europe, Nile River, Midwest*</td></tr> <tr><td>`PRODUCT`</td><td>Objects, vehicles, foods, etc. (Not services.)</td><td>*Formula 1*</td></tr> <tr><td>`EVENT`</td><td>Named hurricanes, battles, wars, sports events, etc.</td><td>*Olympic Games*</td></tr> <tr><td>`WORK_OF_ART`</td><td>Titles of books, songs, etc.</td><td>*The Mona Lisa*</td></tr> <tr><td>`LAW`</td><td>Named documents made into laws.</td><td>*Roe v. Wade*</td></tr> <tr><td>`LANGUAGE`</td><td>Any named language.</td><td>*English*</td></tr> <tr><td>`DATE`</td><td>Absolute or relative dates or periods.</td><td>*20 July 1969*</td></tr> <tr><td>`TIME`</td><td>Times smaller than a day.</td><td>*Four hours*</td></tr> <tr><td>`PERCENT`</td><td>Percentage, including "%".</td><td>*Eighty percent*</td></tr> <tr><td>`MONEY`</td><td>Monetary values, including unit.</td><td>*Twenty Cents*</td></tr> <tr><td>`QUANTITY`</td><td>Measurements, as of weight or distance.</td><td>*Several kilometers, 55kg*</td></tr> <tr><td>`ORDINAL`</td><td>"first", "second", etc.</td><td>*9th, Ninth*</td></tr> <tr><td>`CARDINAL`</td><td>Numerals that do not fall under another type.</td><td>*2, Two, Fifty-two*</td></tr> </table> Adding a Named Entity to a Span Normally we would have spaCy build a library of named entities by training it on several samples of text.<br>In this case, we only want to add one value Step3: <font color=green>Right now, spaCy does not recognize "Tesla" as a company.</font> Step4: <font color=green>In the code above, the arguments passed to Span() are Step5: Adding Named Entities to All Matching Spans What if we want to tag all occurrences of "Tesla"? 
In this section we show how to use the PhraseMatcher to identify a series of spans in the Doc Step6: Counting Entities While spaCy may not have a built-in tool for counting entities, we can pass a conditional statement into a list comprehension Step7: <font color=blue>Problem with Line Breaks</font> <div class="alert alert-info" style="margin Step8: <font color=blue>However, there is a simple fix that can be added to the nlp pipeline Step9: For more on Named Entity Recognition visit https Step10: Doc.noun_chunks is a generator function Previously we mentioned that Doc objects do not retain a list of sentences, but they're available through the Doc.sents generator.<br>It's the same with Doc.noun_chunks - lists can be created if needed
Python Code: # Perform standard imports import spacy nlp = spacy.load('en_core_web_sm') # Write a function to display basic entity info: def show_ents(doc): if doc.ents: for ent in doc.ents: print(ent.text+' - '+ent.label_+' - '+str(spacy.explain(ent.label_))) else: print('No named entities found.') doc = nlp(u'May I go to Washington, DC next May to see the Washington Monument?') show_ents(doc) Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a> Named Entity Recognition (NER) spaCy has an 'ner' pipeline component that identifies token spans fitting a predetermined set of named entities. These are available as the ents property of a Doc object. End of explanation doc = nlp(u'Can I please borrow 500 dollars from you to buy some Microsoft stock?') for ent in doc.ents: print(ent.text, ent.start, ent.end, ent.start_char, ent.end_char, ent.label_) Explanation: Here we see tokens combine to form the entities Washington, DC, next May and the Washington Monument Entity annotations Doc.ents are token spans with their own set of annotations. <table> <tr><td>`ent.text`</td><td>The original entity text</td></tr> <tr><td>`ent.label`</td><td>The entity type's hash value</td></tr> <tr><td>`ent.label_`</td><td>The entity type's string description</td></tr> <tr><td>`ent.start`</td><td>The token span's *start* index position in the Doc</td></tr> <tr><td>`ent.end`</td><td>The token span's *stop* index position in the Doc</td></tr> <tr><td>`ent.start_char`</td><td>The entity text's *start* index position in the Doc</td></tr> <tr><td>`ent.end_char`</td><td>The entity text's *stop* index position in the Doc</td></tr> </table> End of explanation doc = nlp(u'Tesla to build a U.K. factory for $6 million') show_ents(doc) Explanation: NER Tags Tags are accessible through the .label_ property of an entity. <table> <tr><th>TYPE</th><th>DESCRIPTION</th><th>EXAMPLE</th></tr> <tr><td>`PERSON`</td><td>People, including fictional.</td><td>*Fred Flintstone*</td></tr> <tr><td>`NORP`</td><td>Nationalities or religious or political groups.</td><td>*The Republican Party*</td></tr> <tr><td>`FAC`</td><td>Buildings, airports, highways, bridges, etc.</td><td>*Logan International Airport, The Golden Gate*</td></tr> <tr><td>`ORG`</td><td>Companies, agencies, institutions, etc.</td><td>*Microsoft, FBI, MIT*</td></tr> <tr><td>`GPE`</td><td>Countries, cities, states.</td><td>*France, UAR, Chicago, Idaho*</td></tr> <tr><td>`LOC`</td><td>Non-GPE locations, mountain ranges, bodies of water.</td><td>*Europe, Nile River, Midwest*</td></tr> <tr><td>`PRODUCT`</td><td>Objects, vehicles, foods, etc. (Not services.)</td><td>*Formula 1*</td></tr> <tr><td>`EVENT`</td><td>Named hurricanes, battles, wars, sports events, etc.</td><td>*Olympic Games*</td></tr> <tr><td>`WORK_OF_ART`</td><td>Titles of books, songs, etc.</td><td>*The Mona Lisa*</td></tr> <tr><td>`LAW`</td><td>Named documents made into laws.</td><td>*Roe v. 
Wade*</td></tr> <tr><td>`LANGUAGE`</td><td>Any named language.</td><td>*English*</td></tr> <tr><td>`DATE`</td><td>Absolute or relative dates or periods.</td><td>*20 July 1969*</td></tr> <tr><td>`TIME`</td><td>Times smaller than a day.</td><td>*Four hours*</td></tr> <tr><td>`PERCENT`</td><td>Percentage, including "%".</td><td>*Eighty percent*</td></tr> <tr><td>`MONEY`</td><td>Monetary values, including unit.</td><td>*Twenty Cents*</td></tr> <tr><td>`QUANTITY`</td><td>Measurements, as of weight or distance.</td><td>*Several kilometers, 55kg*</td></tr> <tr><td>`ORDINAL`</td><td>"first", "second", etc.</td><td>*9th, Ninth*</td></tr> <tr><td>`CARDINAL`</td><td>Numerals that do not fall under another type.</td><td>*2, Two, Fifty-two*</td></tr> </table> Adding a Named Entity to a Span Normally we would have spaCy build a library of named entities by training it on several samples of text.<br>In this case, we only want to add one value: End of explanation from spacy.tokens import Span # Get the hash value of the ORG entity label ORG = doc.vocab.strings[u'ORG'] # Create a Span for the new entity new_ent = Span(doc, 0, 1, label=ORG) # Add the entity to the existing Doc object doc.ents = list(doc.ents) + [new_ent] Explanation: <font color=green>Right now, spaCy does not recognize "Tesla" as a company.</font> End of explanation show_ents(doc) Explanation: <font color=green>In the code above, the arguments passed to Span() are:</font> - doc - the name of the Doc object - 0 - the start index position of the span - 1 - the stop index position (exclusive) - label=ORG - the label assigned to our entity End of explanation doc = nlp(u'Our company plans to introduce a new vacuum cleaner. ' u'If successful, the vacuum cleaner will be our first product.') show_ents(doc) # Import PhraseMatcher and create a matcher object: from spacy.matcher import PhraseMatcher matcher = PhraseMatcher(nlp.vocab) # Create the desired phrase patterns: phrase_list = ['vacuum cleaner', 'vacuum-cleaner'] phrase_patterns = [nlp(text) for text in phrase_list] # Apply the patterns to our matcher object: matcher.add('newproduct', None, *phrase_patterns) # Apply the matcher to our Doc object: matches = matcher(doc) # See what matches occur: matches # Here we create Spans from each match, and create named entities from them: from spacy.tokens import Span PROD = doc.vocab.strings[u'PRODUCT'] new_ents = [Span(doc, match[1],match[2],label=PROD) for match in matches] doc.ents = list(doc.ents) + new_ents show_ents(doc) Explanation: Adding Named Entities to All Matching Spans What if we want to tag all occurrences of "Tesla"? 
In this section we show how to use the PhraseMatcher to identify a series of spans in the Doc: End of explanation doc = nlp(u'Originally priced at $29.50, the sweater was marked down to five dollars.') show_ents(doc) len([ent for ent in doc.ents if ent.label_=='MONEY']) Explanation: Counting Entities While spaCy may not have a built-in tool for counting entities, we can pass a conditional statement into a list comprehension: End of explanation spacy.__version__ doc = nlp(u'Originally priced at $29.50,\nthe sweater was marked down to five dollars.') show_ents(doc) Explanation: <font color=blue>Problem with Line Breaks</font> <div class="alert alert-info" style="margin: 20px">There's a <a href='https://github.com/explosion/spaCy/issues/1717'>known issue</a> with <strong>spaCy v2.0.12</strong> where some linebreaks are interpreted as `GPE` entities:</div> End of explanation # Quick function to remove ents formed on whitespace: def remove_whitespace_entities(doc): doc.ents = [e for e in doc.ents if not e.text.isspace()] return doc # Insert this into the pipeline AFTER the ner component: nlp.add_pipe(remove_whitespace_entities, after='ner') # Rerun nlp on the text above, and show ents: doc = nlp(u'Originally priced at $29.50,\nthe sweater was marked down to five dollars.') show_ents(doc) Explanation: <font color=blue>However, there is a simple fix that can be added to the nlp pipeline:</font> End of explanation doc = nlp(u"Autonomous cars shift insurance liability toward manufacturers.") for chunk in doc.noun_chunks: print(chunk.text+' - '+chunk.root.text+' - '+chunk.root.dep_+' - '+chunk.root.head.text) Explanation: For more on Named Entity Recognition visit https://spacy.io/usage/linguistic-features#101 Noun Chunks Doc.noun_chunks are base noun phrases: token spans that include the noun and words describing the noun. Noun chunks cannot be nested, cannot overlap, and do not involve prepositional phrases or relative clauses.<br> Where Doc.ents rely on the ner pipeline component, Doc.noun_chunks are provided by the parser. noun_chunks components: <table> <tr><td>`.text`</td><td>The original noun chunk text.</td></tr> <tr><td>`.root.text`</td><td>The original text of the word connecting the noun chunk to the rest of the parse.</td></tr> <tr><td>`.root.dep_`</td><td>Dependency relation connecting the root to its head.</td></tr> <tr><td>`.root.head.text`</td><td>The text of the root token's head.</td></tr> </table> End of explanation len(doc.noun_chunks) len(list(doc.noun_chunks)) Explanation: Doc.noun_chunks is a generator function Previously we mentioned that Doc objects do not retain a list of sentences, but they're available through the Doc.sents generator.<br>It's the same with Doc.noun_chunks - lists can be created if needed: End of explanation
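A small extension of the counting idea above: instead of counting one label with a list comprehension, all entity labels can be tallied at once with collections.Counter. This assumes the same en_core_web_sm model loaded earlier in this notebook; the example sentence is made up and the exact labels returned depend on the model version.
from collections import Counter
import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp(u'Apple paid $1 billion to settle with Samsung in California last May.')

label_counts = Counter(ent.label_ for ent in doc.ents)   # tally every entity label in the Doc
print(label_counts)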
12,073
Given the following text description, write Python code to implement the functionality described below step by step Description: NumPy NumPy is the fundamental package for scientific computing with Python. The main additions to the standard Python are New datatype, NumPy array static, multidimensional Fast processing of arrays Tools for linear algebra, random numbers, ... Numpy array The NumPy array is static, which means that All elements have the same type, i.e. contrary to Python lists one cannot have both e.g. integer numbers and strings as elements The size of the array is fixed at the time of creation, so elements cannot be added or removed Array can have arbitrary number of dimensions, and even though the size of the array is fixed, the shape can be changed, i.e 2x2 matrix can be changed into 4 element vector. It is possible to combine, split, and resize arrays, but a new array is then always created. The picture below illustrates the differences between NumPy arrays and Python list. As the NumPy array is (normally) contiguous in memory, the processing is much faster, and the dynamic nature of list adds also lots of overhead. First thing when starting to work with NumPy, is to import the package. The package is commonly imported as np Step1: Creating NumPy arrays From a list (or tuple) Step2: Multidimensional lists (or tuples) produce multidimensional arrays Step3: Evenly spaced values Step4: Specific sized arrays Step5: Indexing and slicing arrays Indexing is similar to lists, with different dimensions separated by commas Step6: Contrary to lists, slicing is possible over all dimensions Step7: Views and copies of arrays As with all mutable Python objects, simple assignment creates a reference to array. If a and b are references to same array, changing contents of b changes also contents of a. NumPy arrays have a copy method for actual copying of array Step8: Slicing creates a view to the array, and modifying the view changes corresponding original contents Step9: Array operations Most arithmetic operations for NumPy arrays are done elementwise. Note for Matlab users! Multiplication is done elementwise. Step10: NumPy has special functions which can work with the array arguments (sin, cos, exp, sqrt, log, ...) Step11: Vectorized operations For loops and indexing in Python are slow. If the corresponding operation can be written in terms of full (or partial) arrays, operation can be speeded up significantly. Example Step12: Linear algebra NumPy contains linear algebra operations for matrix and vector products, eigenproblems and linear systems. Typically, NumPy is built against optimized BLAS libraries which means that these operations are very efficient (much faster than naive implementation e.g. with C or Fortran) Step13: Simple plotting with matplotlib matplotlib is powerful 2D plotting library for Python. matplotlib can produce publication quality figures in various hardcopy formats. matplotlib can be used in scripts and in interactive shells, as well as in notebooks. matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power spectra, bar charts, errorcharts, scatterplots, etc, with just a few lines of code. Good way to learn about matplotlib's possibilities is to check the screenshots and gallery which provide also the code to produce the figures. 
For simple plotting, one commonly imports the matplotlib.pyplot package Step14: For showing figures in the notebook, one can invoke the following magic Step15: Simple line plots of NumPy arrays Step16: Multiple subplots
Python Code: import numpy as np Explanation: NumPy NumPy is the fundamental package for scientific computing with Python. The main additions to the standard Python are New datatype, NumPy array static, multidimensional Fast processing of arrays Tools for linear algebra, random numbers, ... Numpy array The NumPy array is static, which means that All elements have the same type, i.e. contrary to Python lists one cannot have both e.g. integer numbers and strings as elements The size of the array is fixed at the time of creation, so elements cannot be added or removed Array can have arbitrary number of dimensions, and even though the size of the array is fixed, the shape can be changed, i.e 2x2 matrix can be changed into 4 element vector. It is possible to combine, split, and resize arrays, but a new array is then always created. The picture below illustrates the differences between NumPy arrays and Python list. As the NumPy array is (normally) contiguous in memory, the processing is much faster, and the dynamic nature of list adds also lots of overhead. First thing when starting to work with NumPy, is to import the package. The package is commonly imported as np End of explanation a = np.array((1, 2, 3, 4)) print(a) print(a.dtype) print(a.size) a = np.array((1,2,3,4), dtype=float) # Type can be explicitly specified print(a) print(a.dtype) print(a.size) Explanation: Creating NumPy arrays From a list (or tuple): End of explanation my_list = [[1,2,3], [4,5,6]] a = np.array(my_list) print(a) print(a.size) print(a.shape) Explanation: Multidimensional lists (or tuples) produce multidimensional arrays End of explanation a = np.arange(6) # half open interval up to 6 print(a) a = np.arange(0.1, 1, 0.2) # half open interval with start, stop, step print(a) b = np.linspace(-4.5, 4.5, 5) # specified number of samples within closed interval print(b) Explanation: Evenly spaced values End of explanation mat = np.empty((2, 2, 2), float) # uninitialized 2x2x2 array mat = np.zeros((3,3), int) # initialized to zeros mat = np.ones((2,3), complex) #initialized to ones Explanation: Specific sized arrays End of explanation a = np.arange(6) print(a[2]) print(a[-2]) mat = np.array([[1, 2, 3], [4, 5, 6]]) print(mat) print(mat[0,2]) print(mat[-1,-2]) Explanation: Indexing and slicing arrays Indexing is similar to lists, with different dimensions separated by commas End of explanation mat = np.array([[1, 2, 3, 4], [5, 6, 7, 8]]) print(mat[1, 1:3]) mat = np.zeros((4,4)) mat[1:-1,1:-1] = 2 print(mat) Explanation: Contrary to lists, slicing is possible over all dimensions End of explanation a = np.arange(6) print(a) b = a # b is a reference, changing values in b changes also a b[2] = -3 print(a) b = a.copy() # b is copy, changing b does not affect a b[0] = 66 print(b) print(a) Explanation: Views and copies of arrays As with all mutable Python objects, simple assignment creates a reference to array. If a and b are references to same array, changing contents of b changes also contents of a. NumPy arrays have a copy method for actual copying of array End of explanation c = a[1:4] print(c) c[-1] = 47 print(a) Explanation: Slicing creates a view to the array, and modifying the view changes corresponding original contents End of explanation a = np.array([1.0, 2.0, 3.0]) b = 2.0 print(b * a) print(b + a) print(a + a) print(a * a) Explanation: Array operations Most arithmetic operations for NumPy arrays are done elementwise. Note for Matlab users! Multiplication is done elementwise. 
End of explanation x = np.linspace(-np.pi, np.pi, 5) y = np.sin(x) Explanation: NumPy has special functions which can work with the array arguments (sin, cos, exp, sqrt, log, ...) End of explanation N = 1000 a = np.arange(N) dif = np.zeros(N-1, a.dtype) %%timeit #timeit magic allows easy timing for the execution of an cell # brute force with for loop for i in range(1, N): dif[i-1] = a[i] - a[i-1] %%timeit # vectorized operation dif = a[1:] - a[:-1] Explanation: Vectorized operations For loops and indexing in Python are slow. If the corresponding operation can be written in terms of full (or partial) arrays, operation can be speeded up significantly. Example: calculating the difference between successive array elements End of explanation A = np.array(((2, 1), (1, 3))) B = np.array(((-2, 4.2), (4.2, 6))) C = np.dot(A, B) # matrix-matrix product w, v = np.linalg.eig(A) # eigenvalues in w, eigenvectors in v b = np.array((1, 2)) x = np.linalg.solve(C, b) # Solve Cx = b print(np.dot(C, x)) # np.dot calculates also matrix-vector and vector-vector products Explanation: Linear algebra NumPy contains linear algebra operations for matrix and vector products, eigenproblems and linear systems. Typically, NumPy is built against optimized BLAS libraries which means that these operations are very efficient (much faster than naive implementation e.g. with C or Fortran) End of explanation import matplotlib.pyplot as plt Explanation: Simple plotting with matplotlib matplotlib is powerful 2D plotting library for Python. matplotlib can produce publication quality figures in various hardcopy formats. matplotlib can be used in scripts and in interactive shells, as well as in notebooks. matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power spectra, bar charts, errorcharts, scatterplots, etc, with just a few lines of code. Good way to learn about matplotlib's possibilities is to check the screenshots and gallery which provide also the code to produce the figures. For simple plotting, one commonly imports matplotlib.pyplot package End of explanation %matplotlib inline Explanation: For showing fictures in the notebook, one can invoke the following magic End of explanation x = np.linspace(-np.pi, np.pi, 100) y = np.sin(x) plt.plot(x, y) plt.title('A simple plot') plt.xlabel('time (s)') Explanation: Simple line plots of NumPy arrays End of explanation x = np.linspace(-np.pi, np.pi, 100) y1 = np.sin(x) y2 = np.cos(x) plt.subplot(211) #create 2x1 plot, use 1st plt.plot(x, y1, linewidth=2) plt.ylabel('sin') plt.subplot(212) #use 2nd plt.plot(x, y2, '--or') # use dashed line, 'o' markers and red color plt.ylabel('cos', fontsize=16) plt.xlabel(r'$\theta$') # when using Latex, string has to be so called raw string (r'my string') Explanation: Multiple subplots End of explanation
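The %%timeit comparison above only works inside IPython/Jupyter; a plain-Python equivalent with the standard timeit module lets the same vectorization point be checked in an ordinary script. The repetition count is an arbitrary choice.
import timeit
import numpy as np

N = 1000
a = np.arange(N)

def loop_diff():
    dif = np.zeros(N - 1, a.dtype)
    for i in range(1, N):
        dif[i - 1] = a[i] - a[i - 1]
    return dif

def vector_diff():
    return a[1:] - a[:-1]

assert np.array_equal(loop_diff(), vector_diff())    # both give the same result
print(timeit.timeit(loop_diff, number=100))          # seconds for 100 repetitions
print(timeit.timeit(vector_diff, number=100))        # typically far smaller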
12,074
Given the following text description, write Python code to implement the functionality described below step by step Description: Pivoted Document Length Normalization Background In many cases, normalizing the tfidf weights for each term favors weight of terms of the documents with shorter length. The pivoted document length normalization scheme counters the effect of this bias for short documents by making tfidf independent of the document length. This is achieved by tilting the normalization curve along the pivot point defined by user with some slope. Roughly following the equation Step1: Get TFIDF scores for corpus without pivoted document length normalisation Step2: Get TFIDF scores for corpus with pivoted document length normalisation testing on various values of alpha. Step3: Visualizing the pivoted normalization Since cosine normalization favors retrieval of short documents from the plot we can see that when slope was 1 (when pivoted normalisation was not applied) short documents with length of around 500 had very good score hence the bias for short documents can be seen. As we varied the value of slope from 1 to 0 we introdcued a new bias for long documents to counter the bias caused by cosine normalisation. Therefore at a certain point we got an optimum value of slope which is 0.5 where the overall accuracy of the model is increased.
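Before the code below, a minimal numeric sketch of the pivoting equation quoted above; the pivot and slope values are arbitrary illustrations, not the ones used later in the notebook.
import numpy as np
old_norm = np.array([10.0, 50.0, 200.0])   # e.g. raw, length-dependent norms of three documents
pivot, slope = 50.0, 0.2
pivoted_norm = (1 - slope) * pivot + slope * old_norm
print(pivoted_norm)   # [42. 50. 80.]: short documents now divide by a larger value, long ones by a smaller value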
Python Code: # # Download our dataset # import gensim.downloader as api nws = api.load("20-newsgroups") # # Pick texts from relevant newsgroups, split into training and test set. # cat1, cat2 = ('sci.electronics', 'sci.space') # # X_* contain the actual texts as strings. # Y_* contain labels, 0 for cat1 (sci.electronics) and 1 for cat2 (sci.space) # X_train = [] X_test = [] y_train = [] y_test = [] for i in nws: if i["set"] == "train" and i["topic"] == cat1: X_train.append(i["data"]) y_train.append(0) elif i["set"] == "train" and i["topic"] == cat2: X_train.append(i["data"]) y_train.append(1) elif i["set"] == "test" and i["topic"] == cat1: X_test.append(i["data"]) y_test.append(0) elif i["set"] == "test" and i["topic"] == cat2: X_test.append(i["data"]) y_test.append(1) from gensim.parsing.preprocessing import preprocess_string from gensim.corpora import Dictionary id2word = Dictionary([preprocess_string(doc) for doc in X_train]) train_corpus = [id2word.doc2bow(preprocess_string(doc)) for doc in X_train] test_corpus = [id2word.doc2bow(preprocess_string(doc)) for doc in X_test] print(len(X_train), len(X_test)) # We perform our analysis on top k documents which is almost top 10% most scored documents k = len(X_test) // 10 from gensim.sklearn_api.tfidf import TfIdfTransformer from sklearn.linear_model import LogisticRegression from gensim.matutils import corpus2csc # This function returns the model accuracy and indivitual document prob values using # gensim's TfIdfTransformer and sklearn's LogisticRegression def get_tfidf_scores(kwargs): tfidf_transformer = TfIdfTransformer(**kwargs).fit(train_corpus) X_train_tfidf = corpus2csc(tfidf_transformer.transform(train_corpus), num_terms=len(id2word)).T X_test_tfidf = corpus2csc(tfidf_transformer.transform(test_corpus), num_terms=len(id2word)).T clf = LogisticRegression().fit(X_train_tfidf, y_train) model_accuracy = clf.score(X_test_tfidf, y_test) doc_scores = clf.decision_function(X_test_tfidf) return model_accuracy, doc_scores Explanation: Pivoted Document Length Normalization Background In many cases, normalizing the tfidf weights for each term favors weight of terms of the documents with shorter length. The pivoted document length normalization scheme counters the effect of this bias for short documents by making tfidf independent of the document length. This is achieved by tilting the normalization curve along the pivot point defined by user with some slope. Roughly following the equation: pivoted_norm = (1 - slope) * pivot + slope * old_norm This scheme is proposed in the paper Pivoted Document Length Normalization by Singhal, Buckley and Mitra. Overall this approach can in many cases help increase the accuracy of the model where the document lengths are hugely varying in the entire corpus. Introduction This guide demonstrates how to perform pivoted document length normalization. We will train a logistic regression to distinguish between text from two different newsgroups. Our results will show that using pivoted document length normalization yields a better model (higher classification accuracy). End of explanation params = {} model_accuracy, doc_scores = get_tfidf_scores(params) print(model_accuracy) import numpy as np # Sort the document scores by their scores and return a sorted list # of document score and corresponding document lengths. 
def sort_length_by_score(doc_scores, X_test): doc_scores = sorted(enumerate(doc_scores), key=lambda x: x[1]) doc_leng = np.empty(len(doc_scores)) ds = np.empty(len(doc_scores)) for i, _ in enumerate(doc_scores): doc_leng[i] = len(X_test[_[0]]) ds[i] = _[1] return ds, doc_leng print( "Normal cosine normalisation favors short documents as our top {} " "docs have a smaller mean doc length of {:.3f} compared to the corpus mean doc length of {:.3f}" .format( k, sort_length_by_score(doc_scores, X_test)[1][:k].mean(), sort_length_by_score(doc_scores, X_test)[1].mean() ) ) Explanation: Get TFIDF scores for corpus without pivoted document length normalisation End of explanation best_model_accuracy = 0 optimum_slope = 0 for slope in np.arange(0, 1.1, 0.1): params = {"pivot": 10, "slope": slope} model_accuracy, doc_scores = get_tfidf_scores(params) if model_accuracy > best_model_accuracy: best_model_accuracy = model_accuracy optimum_slope = slope print("Score for slope {} is {}".format(slope, model_accuracy)) print("We get best score of {} at slope {}".format(best_model_accuracy, optimum_slope)) params = {"pivot": 10, "slope": optimum_slope} model_accuracy, doc_scores = get_tfidf_scores(params) print(model_accuracy) print( "With pivoted normalisation top {} docs have mean length of {:.3f} " "which is much closer to the corpus mean doc length of {:.3f}" .format( k, sort_length_by_score(doc_scores, X_test)[1][:k].mean(), sort_length_by_score(doc_scores, X_test)[1].mean() ) ) Explanation: Get TFIDF scores for corpus with pivoted document length normalisation testing on various values of alpha. End of explanation %matplotlib inline import matplotlib.pyplot as py best_model_accuracy = 0 optimum_slope = 0 w = 2 h = 2 f, axarr = py.subplots(h, w, figsize=(15, 7)) it = 0 for slope in [1, 0.2]: params = {"pivot": 10, "slope": slope} model_accuracy, doc_scores = get_tfidf_scores(params) if model_accuracy > best_model_accuracy: best_model_accuracy = model_accuracy optimum_slope = slope doc_scores, doc_leng = sort_length_by_score(doc_scores, X_test) y = abs(doc_scores[:k, np.newaxis]) x = doc_leng[:k, np.newaxis] py.subplot(1, 2, it+1).bar(x, y, width=20, linewidth=0) py.title("slope = " + str(slope) + " Model accuracy = " + str(model_accuracy)) py.ylim([0, 4.5]) py.xlim([0, 3200]) py.xlabel("document length") py.ylabel("confidence score") it += 1 py.tight_layout() py.show() Explanation: Visualizing the pivoted normalization Since cosine normalization favors retrieval of short documents from the plot we can see that when slope was 1 (when pivoted normalisation was not applied) short documents with length of around 500 had very good score hence the bias for short documents can be seen. As we varied the value of slope from 1 to 0 we introdcued a new bias for long documents to counter the bias caused by cosine normalisation. Therefore at a certain point we got an optimum value of slope which is 0.5 where the overall accuracy of the model is increased. End of explanation
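For reference, the same pivoted scheme is also exposed directly on gensim's TfidfModel through pivot and slope keyword arguments in recent releases; the sketch below reuses the train_corpus and id2word objects built earlier and is meant as an illustration rather than part of the original experiment.
from gensim.models import TfidfModel
tfidf = TfidfModel(corpus=train_corpus, id2word=id2word, pivot=10, slope=0.2)
X_train_vecs = [tfidf[bow] for bow in train_corpus]   # sparse TF-IDF vectors with pivoted normalisation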
12,075
Given the following text description, write Python code to implement the functionality described below step by step Description: Active Network Management Framework This notebook stands as a tutorial and as a showcase of the Python package we developped in order to promote the development of computational techniques for Active Network Management (ANM). ANM strategies rely on short-term policies that control the power injected by generators and/or taken off by loads in order to avoid congestion or voltage issues. The generic formulation of the ANM problem is motivated and described in this paper, which details also the procedure used for building the test case that is provided in the package. 1. Simulator's basics The main class of the package is Simulator. This class has a single mandatory parameter that is the data structure defining the case (i.e. the distribution system) on which the simulation will be performed. The package has a predefined case that is located in case75.py. In addition, to ensure reproducibility, a specific random number generator will provided to the simulator. Step1: An instance of Simulator has several member variables that describes the case Step2: 2. Stochastic models and power functions By default, the simulator uses the callables defined in models.py to sample sequences of wind speeds and solar irradiances. These models can be overridden by specifying the keyword parameters wind and sun. Similarly, case75 relies on a callable defined in models.py to genere sequences of load consumptions. The power functions, which defines the production of generators as a function of the wind speed and solar irradiance, are defined within the case. As an example, the power function of photovoltaïc generators has the following definition in case75.py Step3: 3. Power flow analyses The previous simulation run did not require any power flow analysis. Such an analysis is performed once per time step as soon as a voltage or current magnitude is requested. Let's run a simulation of one day and plot the evolution of the voltage magnitudes Step4: The constant voltage magnitude corresponds to the slack bus, which models the connection with the transmission network. 4. Desicion making The simulator can be used to assess the quality of a policy that, at each time step, decides which flexible loads must be actived and which generators must be curtailed for the following time step. Because both the future of the system is stochastic and the flexible services span over several time periods, we face a problem of sequential decision making under uncertainty. We will now descripte an approach to compute and evaluate a simple and naive policy using the simulator. The approach consists in three steps Step5: 4.2 Determining a linear classifier The linear classifier is defined by $\bar{P}$, that we determine as Step6: 4.3 Test the policy on the simulator To determine the curtailment for the next time period, we assume that all the curtailable generators will operate at the active power upper limit $P_\max$. This limit is the decision variable that we wish to compute at each time step. As there is a cost (defined in the case) per cutailed MWh, we must determine the largest $P_\max$ that enables operational constraints to be met
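As a side note on the sentence above about overriding the stochastic models: the sketch below shows what such a call could look like, assuming the wind and sun keyword parameters are passed to the Simulator constructor; my_wind_model and my_sun_model are hypothetical callables and the exact signature the simulator expects is not documented in this notebook.
from ANM import Simulator
from case75 import case75
from numpy.random import RandomState

def my_wind_model(rng):
    # hypothetical: return a sampled wind-speed value or sequence
    ...

def my_sun_model(rng):
    # hypothetical: return a sampled solar-irradiance value or sequence
    ...

sim = Simulator(case75(), rng=RandomState(1234), wind=my_wind_model, sun=my_sun_model)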
Python Code: from ANM import Simulator from case75 import case75 from numpy.random import RandomState sim = Simulator(case75(), rng=RandomState(987654321)) Explanation: Active Network Management Framework This notebook stands as a tutorial and as a showcase of the Python package we developped in order to promote the development of computational techniques for Active Network Management (ANM). ANM strategies rely on short-term policies that control the power injected by generators and/or taken off by loads in order to avoid congestion or voltage issues. The generic formulation of the ANM problem is motivated and described in this paper, which details also the procedure used for building the test case that is provided in the package. 1. Simulator's basics The main class of the package is Simulator. This class has a single mandatory parameter that is the data structure defining the case (i.e. the distribution system) on which the simulation will be performed. The package has a predefined case that is located in case75.py. In addition, to ensure reproducibility, a specific random number generator will provided to the simulator. End of explanation print "The distribution network has %d buses (i.e. nodes) and %d branches (i.e. links)." % (sim.N_buses,sim.N_branches) print "The network supplies %d loads and gathers the production of %d generators." % (sim.N_loads, sim.N_gens) print "The control means consist in %d flexible loads (%.1f%%) and %d curtailable generators (%.1f%%)." \ % (sim.N_flex, 100.0*sim.N_flex/sim.N_loads, sim.N_curt, 100.0*sim.N_curt/sim.N_gens) Explanation: An instance of Simulator has several member variables that describes the case: End of explanation %matplotlib inline import matplotlib import matplotlib.pyplot as plt matplotlib.rcParams["figure.figsize"] = (10,6) P_prod, P_cons = [], [] for _ in range(96): # One day long simulation P_prod.append(sum([sim.getPGen(gen) for gen in range(sim.N_gens)])) P_cons.append(sum([sim.getPLoad(load) for load in range(sim.N_loads)])) sim.transition() # Triggers a transition of the simulation (i.e. simulare next time step) plt.plot(P_prod, label="production") plt.plot(P_cons, label="consumption") plt.ylabel("Active power [MW]") plt.xlabel("Time") plt.xticks([0, 24, 48, 72, 96], ["0h", "6h", "12h", "18h", "24h"]) plt.xlim([0,96]) _ = plt.legend() Explanation: 2. Stochastic models and power functions By default, the simulator uses the callables defined in models.py to sample sequences of wind speeds and solar irradiances. These models can be overridden by specifying the keyword parameters wind and sun. Similarly, case75 relies on a callable defined in models.py to genere sequences of load consumptions. The power functions, which defines the production of generators as a function of the wind speed and solar irradiance, are defined within the case. As an example, the power function of photovoltaïc generators has the following definition in case75.py: for gen, scale in zip(pv_gens,pv_scales): # *gen* is the id of the device (see case75.py) # *scale* models the efficiency and area of the generator case["power"] += [[gen, lambda ir, ws, scale=scale: scale*ir]] # Add a power function to the case The duration of a time step depends on the stochastic models and on the case, each time step corresponds to 15 minutes in the considered example. Here are the overall active power consumption and production within the network over a 1-day-long simulation, such sa determined by the seed, the stochastic models, the devices (i.e. 
loads and generators) and their corresponding power functions: End of explanation from numpy import array V = [] for _ in range(96): # One day long simulation V.append([sim.getV(bus) for bus in range(sim.N_buses)]) sim.transition() # Triggers a transition of the simulation (i.e. simulates next time step) plt.plot(array(V), "k") plt.ylabel("Voltage magnitude [p.u.]") plt.xlabel("Time") plt.xticks([0, 24, 48, 72, 96], ["0h", "6h", "12h", "18h", "24h"]) _ = plt.xlim([0,96]) Explanation: 3. Power flow analyses The previous simulation run did not require any power flow analysis. Such an analysis is performed once per time step as soon as a voltage or current magnitude is requested. Let's run a simulation of one day and plot the evolution of the voltage magnitudes: End of explanation # Build the dataset N_sim, L_sim = 5, 192 # 5 runs of 2 days are simulated to build the dataset dataset = [] rng_dataset = RandomState(6576458) for _ in range(N_sim): # A new instance is required for every simulation run. sim_dataset = Simulator(case75(),rng=rng_dataset) for _ in range(L_sim): # Compute averall active power balance. P_i = sum([sim_dataset.getPGen(gen) for gen in range(sim_dataset.N_gens)]) \ + sum([sim_dataset.getPLoad(load) for load in range(sim_dataset.N_loads)]) # isSafe() returns True when operational constraints are met, False otherwise y_i = sim_dataset.isSafe() dataset.append([P_i,y_i]) sim_dataset.transition() print "Simulations led to %.1f%% of secure time steps." \ % (100.0*sum([1 if data[1] else 0 for data in dataset])/len(dataset)) Explanation: The constant voltage magnitude corresponds to the slack bus, which models the connection with the transmission network. 4. Desicion making The simulator can be used to assess the quality of a policy that, at each time step, decides which flexible loads must be actived and which generators must be curtailed for the following time step. Because both the future of the system is stochastic and the flexible services span over several time periods, we face a problem of sequential decision making under uncertainty. We will now descripte an approach to compute and evaluate a simple and naive policy using the simulator. The approach consists in three steps: * Run many simuations to build a dataset ${(P_i,y_i)~|~i = 1,\dots,N}$ where each point $i$ corresponds to a simulated time step, and such that $P_i$ is equal the overall active power balance within the distribution network: $$P_i = \sum_{l\in\mathrm{generators}} P_{i,g} + \sum_{l\in\mathrm{loads}} P_{i,l}\,,$$ and $y_i$ is a boolean that indicates if the system is secure (i.e. if operational constraints are met). * Deduce a simple linear rule $P \leq \bar{P}$ to approximate system security as of function of the overall active power balance $P$: $$ \hat{y} = \begin{cases} 1 & \mathrm{if}~P \leq \bar{P}\,, \quad (\mathrm{secure})\ 0 & \mathrm{otherwise}. \end{cases} $$ * To apply the policy, determine, at each time step, curtailment instructions (i.e. upper limits on the production of curtailable generators) based on $N_{trajs}$ samples of the next state of the system (flexible loads are ignored for the sake of simplicity). These samples can be generated by cloning the instance of Simulator. 
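The decision rule in the second bullet above is simple enough to write down as a one-line helper (a sketch only; the notebook uses P_bar directly when computing the curtailment limit later):
def predict_secure(P, P_bar):
    # Linear proxy for system security: secure iff the overall active power
    # balance does not exceed the pivot value P_bar.
    return P <= P_bar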
4.1 Building dataset End of explanation # Determine linear approximator using a 5% margin P_bar = 0.95*min([d[0] for d in dataset if not d[1]]) # Show approximation fig = plt.figure(figsize=(15,3)) plt.axvspan(xmin=P_bar, xmax=1.1*max([d[0] for d in dataset]), color="r", alpha=0.25) plt.scatter([d[0] for d in dataset], [0.0]*len(dataset), c=[float(d[1]) for d in dataset], s=50, cmap="prism") plt.yticks([]) plt.xlim([1.1*min([d[0] for d in dataset]),1.1*max([d[0] for d in dataset])]) plt.xlabel("Active power unbalance [MW]") plt.title("Simple linear approximation") fig.tight_layout() Explanation: 4.2 Determining a linear classifier The linear classifier is defined by $\bar{P}$, that we determine as: $$ \bar{P} = 0.95 \cdot \min_i P_i\\hspace{7em}\text{s.t. }y_i\text{ is False}$$ End of explanation from copy import copy, deepcopy L_simu = 96 N_trajs = 10 rng_trajs = RandomState(3478765) non_curt_gens = [gen for gen in range(sim.N_gens) if gen not in sim.curtIdInGens] # record current state to compare simulation with/without the policy sim_cloned = copy(sim) sim_cloned = copy(sim) sim_cloned.wind = copy(sim.wind) sim_cloned.sun = copy(sim.sun) sim_cloned.Ploads_fcts = deepcopy(sim.Ploads_fcts) sim_cloned.rng = deepcopy(sim.rng) # P_prod = [] # Gather overall potential production during the simulation P_curt = [] # Gather overall effictive production (incl. curt.) during the simulation P_cons = [] # Gather overall consumption during the simulation V = [] # Gather all voltage magnitudes during the simulation for _ in range(L_simu): # Generate the N_trajs trajectories P_exo = [] for _ in range(N_trajs): # Copy the Simulator's instance sampler = copy(sim) sampler.wind = copy(sim.wind) sampler.sun = copy(sim.sun) sampler.Ploads_fcts = deepcopy(sim.Ploads_fcts) sampler.rng = rng_trajs # Simulate a transition sampler.transition() P_exo.append(sum([sampler.getPGen(gen) for gen in non_curt_gens])+\ sum([sampler.getPLoad(load) for load in range(sampler.N_loads)])) # Determine the production limit of curtailable generators and apply the control actions P_max = min([(P_bar-P)/sim.N_curt for P in P_exo]) for gen in sim.curtIdInGens: sim.setPmax(gen, P_max) # Simulate a transition sim.transition() P_prod.append(sum([sim.getPGen(gen) for gen in range(sim.N_gens)])) P_curt.append(sum([sim.getPCurtGen(gen) for gen in range(sim.N_gens)])) P_cons.append(sum([sim.getPLoad(load) for load in range(sim.N_loads)])) V.append([sim.getV(bus) for bus in range(1,sim.N_buses)]) # Range starts at 1 to ignore slack bus # Simulate the same run without applying the policy P_prod_free = [] # Gather overall potential production during the simulation P_cons_free = [] # Gather overall consumption during the simulation V_free = [] # Gather all voltage magnitudes during the simulation for _ in range(L_simu): # Simulate a transition sim_cloned.transition() P_prod_free.append(sum([sim_cloned.getPGen(gen) for gen in range(sim_cloned.N_gens)])) P_cons_free.append(sum([sim_cloned.getPLoad(load) for load in range(sim_cloned.N_loads)])) V_free.append([sim_cloned.getV(bus) for bus in range(1,sim_cloned.N_buses)]) # Range starts at 1 to ignore slack bus # Plot simulation results with policy fig, (ax1,ax2) = plt.subplots(1, 2, figsize=(15,5)) fig.suptitle("Simulation with policy", fontsize=16) ax1.fill_between(range(L_simu), P_prod, P_curt, color="r") ax1.plot(P_prod, "k--", label=None) ax1.plot(P_curt, "k-", lw="2", label="production") ax1.plot(P_cons, "g", label="consumption") ax1.legend(loc=3) ax1.set_xticks([0, 24, 48, 72, 96]) 
ax1.set_xticklabels(["0h", "6h", "12h", "18h", "24h"]) ax1.set_xlim([0,L_simu]) ax2.plot(V,"k",label=None) ax2.axhline(y=1.05, color="r", linestyle="--", label="$V_\max\,,\,V_\min$") ax2.axhline(y=0.95, color="r", linestyle="--") ax2.set_ylim([0.94,1.06]) ax2.set_xlim([0,L_simu]) ax2.legend(loc=3) fig.tight_layout() plt.subplots_adjust(top=0.9) # Plot simulation results without policy fig, (ax1,ax2) = plt.subplots(1, 2, figsize=(15,5)) fig.suptitle("Simulation without policy", fontsize=16) ax1.plot(P_prod_free, "k-", lw="2", label="production") ax1.plot(P_cons_free, "g", label="consumption") ax1.legend(loc=3) ax1.set_xlim([0,L_simu]) ax2.plot(V_free,"k",label=None) ax2.axhline(y=1.05, color="r", linestyle="--", label="$V_\max\,,\,V_\min$") ax2.axhline(y=0.95, color="r", linestyle="--") ax2.set_ylim([0.94,1.06]) ax2.set_xlim([0,L_simu]) ax2.legend(loc=3) fig.tight_layout() plt.subplots_adjust(top=0.9) Explanation: 4.3 Test the policy on the simulator To determine the curtailment for the next time period, we assume that all the curtailable generators will operate at the active power upper limit $P_\max$. This limit is the decision variable that we wish to compute at each time step. As there is a cost (defined in the case) per cutailed MWh, we must determine the largest $P_\max$ that enables operational constraints to be met: $$ \begin{alignat}{2} \max \hspace{2em} & P_\max \ \text{s.t.} \hspace{2.25em} & P^{(k)}{exo} + N{curt} \, P_\max \leq \bar{P} &,\ & 1\leq k\leq N_{trajs}\,, \end{alignat} $$ where $P^{(k)}{exo}$ is, for sampled fututure state $k \in {1,\dots,N{trajs}}$, the overall active power balance without taking into account the injection of curtailable generators. The solution to this linear program is straightfoward: $$P_\max = \min_k \frac{\bar{P}-P^{(k)}{exo}}{N{curt}}\,.$$ We now simulate this policy on a run of 1 day, and then compare with the same simulation run without policy. End of explanation
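As a compact restatement of the closed-form rule just derived (a sketch assuming P_bar, the sampled exogenous balances P_exo and the curtailable-generator count sim.N_curt are available, as in the simulation loop above):
P_max = min((P_bar - P) / sim.N_curt for P in P_exo)
for gen in sim.curtIdInGens:
    sim.setPmax(gen, P_max)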
12,076
Given the following text description, write Python code to implement the functionality described below step by step Description: Q1. Solution 3-shingles for "hello world" Step1: Q3. This question involves three different Bloom-filter-like scenarios. Each scenario involves setting to 1 certain bits of a 10-bit array, each bit of which is initially 0. Scenario A Step2: Q4. In this market-basket problem, there are 99 items, numbered 2 to 100. There is a basket for each prime number between 2 and 100. The basket for p contains all and only the items whose numbers are a multiple of p. For example, the basket for 17 contains the following items Step3: Q6. In this question we use six minhash functions, organized as three bands of two rows each, to identify sets of high Jaccard similarity. If two sets have Jaccard similarity 0.6, what is the probability (to two decimal places) that this pair will become a candidate pair? Step4: Q7. Suppose we have a (.4, .6, .9, .1)-sensitive family of functions. If we apply a 3-way OR construction to this family, we get a new family of functions whose sensitivity is Step5: Q8. Suppose we have a database of (Class, Student, Grade) facts, each giving the grade the student got in the class. We want to estimate the fraction of students who have gotten A's in at least 10 classes, but we do not want to examine the entire relation, just a sample of 10% of the tuples. We shall hash tuples to 10 buckets, and take only those tuples in the first bucket. But to get a valid estimate of the fraction of students with at least 10 A's, we need to pick our hash key judiciously. To which Attribute(s) of the relation should we apply the hash function? Q8 Solution. We will need to hash it to with regard to class and students Q9 Suppose the Web consists of four pages A, B, C, and D, that form a chain A-->B-->C-->D We wish to compute the PageRank of each of these pages, but since D is a "dead end," we will "teleport" from D with probability 1 to one of the four pages, each with equal probability. We do not teleport from pages A, B, or C. Assuming the sum of the PageRanks of the four pages is 1, what is the PageRank of page B, correct to two decimal places? Step6: Q10. Suppose in the AGM model we have four individuals (A,B,C,D} and two communities. Community 1 consists of {A,B,C} and Community 2 consists of {B,C,D}. For Community 1 there is a 30% chance- it will cause an edge between any two of its members. For Community 2 there is a 40% chance it will cause an edge between any two of its members. To the nearest two decimal places, what is the probability that there is an edge between B and C? Step7: Q11. X is a dataset of n columns for which we train a supervised Machine Learning algorithm. e is the error of the model measured against a validation dataset. Unfortunately, e is too high because model has overfitted on the training data X and it doesn't generalize well. We now decide to reduce the model variance by reducing the dimensionality of X, using a Singular Value Decomposition, and using the resulting dataset to train our model. If i is the number of singular values used in the SVD reduction, how does e change as a function of i, for i ∈ {1, 2,...,n}? Q11 Solution. A Convex Function starts low Step8: Q13. Recall that the power iteration does r=X·r until converging, where X is a nxn matrix and n is the number of nodes in the graph. Using the power iteration notation above, what is matrix X value when solving topic sensitive Pagerank with teleport set {0,1} for the following graph? 
Use beta=0.8. (Recall that the teleport set contains the destination nodes used when teleporting).
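Q1 in the solutions below is answered by listing the shingles by hand; a two-line programmatic check of the character-level 3-shingles of "hello world" (the space counts as a character):
s = "hello world"
shingles = [s[i:i + 3] for i in range(len(s) - 2)]
print(shingles)        # hel, ell, llo, lo_, o_w, _wo, wor, orl, rld
print(len(shingles))   # 9, matching the '9 in total' worked out below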
Python Code: ## Q2 Solution. def hash(x): return math.fmod(3 * x + 2, 11) for i in xrange(1,12): print hash(i) Explanation: Q1. Solution 3-shingles for "hello world": hel, ell, llo, lo_, o_w ,_wo, wor, orl, rld => 9 in total Q2. Solution End of explanation ## Q3 Solution. prob = 1.0 / 10 a = (1 - prob)**4 print a b = (1 - ( 1 - (1 - prob)**2) )**2 print b c = (1 - (1.0 /10 * 1.0 / 9)) print c Explanation: Q3. This question involves three different Bloom-filter-like scenarios. Each scenario involves setting to 1 certain bits of a 10-bit array, each bit of which is initially 0. Scenario A: we use one hash function that randomly, and with equal probability, selects one of the ten bits of the array. We apply this hash function to four different inputs and set to 1 each of the selected bits. Scenario B: We use two hash functions, each of which randomly, with equal probability, and independently of the other hash function selects one of the of 10 bits of the array. We apply both hash functions to each of two inputs and set to 1 each of the selected bits. Scenario C: We use one hash function that randomly and with equal probability selects two different bits among the ten in the array. We apply this hash function to two inputs and set to 1 each of the selected bits. Let a, b, and c be the expected number of bits set to 1 under scenarios A, B, and C, respectively. Which of the following correctly describes the relationships among a, b, and c? End of explanation ## Q5 Solution. vec1 = np.array([2, 1, 1]) vec2 = np.array([10, -7, 1]) print vec1.dot(vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2)) Explanation: Q4. In this market-basket problem, there are 99 items, numbered 2 to 100. There is a basket for each prime number between 2 and 100. The basket for p contains all and only the items whose numbers are a multiple of p. For example, the basket for 17 contains the following items: {17, 34, 51, 68, 85}. What is the support of the pair of items {12, 30}? Q4 Solution. support = 2 => {2,4,6,8, ...} & {3, 6, 9,...} Q5. To two decimal places, what is the cosine of the angle between the vectors [2,1,1] and [10,-7,1]? End of explanation ## Q6 Solution. # probability that they agree at one particular band p1 = 0.6**2 print (1 - p1)**3 Explanation: Q6. In this question we use six minhash functions, organized as three bands of two rows each, to identify sets of high Jaccard similarity. If two sets have Jaccard similarity 0.6, what is the probability (to two decimal places) that this pair will become a candidate pair? End of explanation ## Q7 Solution. p1 = 1 - (1 - .9)**3 p2 = 1 - (1 - .1)**3 print "new LSH is (.4, .6, {}, {})-sensitive family".format(p1, p2) Explanation: Q7. Suppose we have a (.4, .6, .9, .1)-sensitive family of functions. If we apply a 3-way OR construction to this family, we get a new family of functions whose sensitivity is: End of explanation ## Q9 Solution. M = np.array([[0, 0, 0, .25], [1, 0, 0, .25], [0, 1, 0, .25], [0, 0, 1, .25]]) r = np.array([.25, .25, .25, .25]) for i in xrange(30): r = M.dot(r) print r Explanation: Q8. Suppose we have a database of (Class, Student, Grade) facts, each giving the grade the student got in the class. We want to estimate the fraction of students who have gotten A's in at least 10 classes, but we do not want to examine the entire relation, just a sample of 10% of the tuples. We shall hash tuples to 10 buckets, and take only those tuples in the first bucket. 
But to get a valid estimate of the fraction of students with at least 10 A's, we need to pick our hash key judiciously. To which Attribute(s) of the relation should we apply the hash function? Q8 Solution. We will need to hash it to with regard to class and students Q9 Suppose the Web consists of four pages A, B, C, and D, that form a chain A-->B-->C-->D We wish to compute the PageRank of each of these pages, but since D is a "dead end," we will "teleport" from D with probability 1 to one of the four pages, each with equal probability. We do not teleport from pages A, B, or C. Assuming the sum of the PageRanks of the four pages is 1, what is the PageRank of page B, correct to two decimal places? End of explanation ## Q10 Solution. print 1 - (1 - .3)*(1 - .4) Explanation: Q10. Suppose in the AGM model we have four individuals (A,B,C,D} and two communities. Community 1 consists of {A,B,C} and Community 2 consists of {B,C,D}. For Community 1 there is a 30% chance- it will cause an edge between any two of its members. For Community 2 there is a 40% chance it will cause an edge between any two of its members. To the nearest two decimal places, what is the probability that there is an edge between B and C? End of explanation ##Q12 L = np.array([[-.25, -.5, -.76, -.29, -.03, -.07, -.01], [-.05, -.1, -.15, .20, .26, .51, .77 ]]).T print L V = np.array([[6.74, 0],[0, 5.44]]) print V R = np.array([[-.57, -.11, -.57, -.11, -.57], [-.09, 0.70, -.09, .7, -.09]]) print R print L.dot(V).dot(R) Explanation: Q11. X is a dataset of n columns for which we train a supervised Machine Learning algorithm. e is the error of the model measured against a validation dataset. Unfortunately, e is too high because model has overfitted on the training data X and it doesn't generalize well. We now decide to reduce the model variance by reducing the dimensionality of X, using a Singular Value Decomposition, and using the resulting dataset to train our model. If i is the number of singular values used in the SVD reduction, how does e change as a function of i, for i ∈ {1, 2,...,n}? Q11 Solution. A Convex Function starts low End of explanation X = 0.8 * np.array([[1.0/3, 0, 0], [1.0/3, 0, 0], [1.0/3, 1, 0]]) X += 0.2 * np.array([[.5, .5, .5], [.5, .5, .5], [ 0, 0, 0]]) print X Explanation: Q13. Recall that the power iteration does r=X·r until converging, where X is a nxn matrix and n is the number of nodes in the graph. Using the power iteration notation above, what is matrix X value when solving topic sensitive Pagerank with teleport set {0,1} for the following graph? Use beta=0.8. (Recall that the teleport set contains the destination nodes used when teleporting). End of explanation
12,077
Given the following text description, write Python code to implement the functionality described below step by step Description: Boilerplate Step1: Sample from the model This section demonstrates the most basic usage of the package, i.e., sampling from a pre-trained model. Step2: Create the environment First, we need to create the environment. Step3: Create the agent We provide a convenience function get_module_wrappers that returns two python functions implementing the agent. The first one, initial_state, is used to get the initial state of the agent (specifically, the state of the LSTM). The second, step, takes an observation from the environment and performs a single agent step. Step4: Run sampling Step5: Running the following cell will return a sample from the model. You can execute it multiple times until you get the one that you like. In addition to showing the final state of the canvas we also record all the agent's actions (note actions variable) needed to reproduce it. Step6: Manipulate the sample Let's now do something the obtained sample. Since we have the corresponding sequence of actions we can re-render the image in higher resolution. To that end, we will need to create one more environment with modified settings. Create a high-resolution version of the environment Step7: Execute pre-recorded actions Step8: Change the thickness of the brush strokes In addition to changing the resolution of the images, let's in introduce some more subtle structural changes. We could, for example, change the thickness of all the strokes. Step9: Change the brush type Finally, let's re-render the image above using a different brush type. Step10: Fluid Paint environment demonstration Fluid Paint environment works almost exactly like libmypaint.LibMyPaint. Below, we show how to obtain samples from the model trained in this environment.
Python Code: import copy import os import matplotlib.pyplot as plt import numpy as np from scipy import ndimage import tensorflow as tf import tensorflow_hub as hub import spiral.agents.default as default_agent import spiral.agents.utils as agent_utils from spiral.environments import fluid from spiral.environments import libmypaint nest = tf.contrib.framework.nest # Disable TensorFlow debug output. tf.logging.set_verbosity(tf.logging.ERROR) Explanation: Boilerplate End of explanation # The path to libmypaint brushes. BRUSHES_BASEDIR = os.path.join(os.getcwd(), "..", "third_party/mypaint-brushes-1.3.0") BRUSHES_BASEDIR = os.path.abspath(BRUSHES_BASEDIR) # The path to a TF-Hub module. MODULE_PATH = "https://tfhub.dev/deepmind/spiral/default-wgangp-celebahq64-gen-19steps/agent4/1" Explanation: Sample from the model This section demonstrates the most basic usage of the package, i.e., sampling from a pre-trained model. End of explanation env_settings = dict( episode_length=20, # Number of frames in each episode. canvas_width=64, # The width of the canvas in pixels. grid_width=32, # The width of the action grid. brush_type="classic/dry_brush", # The type of the brush. brush_sizes=[1, 2, 4, 8, 12, 24], # The sizes of the brush to use. use_color=True, # Color or black & white output? use_pressure=True, # Use pressure parameter of the brush? use_alpha=False, # Drop or keep the alpha channel of the canvas? background="white", # Background could either be "white" or "transparent". brushes_basedir=BRUSHES_BASEDIR, # The location of libmypaint brushes. ) env = libmypaint.LibMyPaint(**env_settings) Explanation: Create the environment First, we need to create the environment. End of explanation initial_state, step = agent_utils.get_module_wrappers(MODULE_PATH) Explanation: Create the agent We provide a convenience function get_module_wrappers that returns two python functions implementing the agent. The first one, initial_state, is used to get the initial state of the agent (specifically, the state of the LSTM). The second, step, takes an observation from the environment and performs a single agent step. End of explanation state = initial_state() Explanation: Run sampling End of explanation noise_sample = np.random.normal(size=(10,)).astype(np.float32) time_step = env.reset() actions = [] for t in range(19): time_step.observation["noise_sample"] = noise_sample action, state = step(time_step.step_type, time_step.observation, state) time_step = env.step(action) actions.append(action) plt.close("all") plt.figure(figsize=(5, 5)) plt.imshow(time_step.observation["canvas"], interpolation="nearest") Explanation: Running the following cell will return a sample from the model. You can execute it multiple times until you get the one that you like. In addition to showing the final state of the canvas we also record all the agent's actions (note actions variable) needed to reproduce it. End of explanation # Let's make the canvas 8x8 times bigger. SCALE_FACTOR = 8 # Patch the environments setting for higher resolution. hires_env_settings = copy.deepcopy(env_settings) hires_env_settings["canvas_width"] *= SCALE_FACTOR hires_env_settings["brush_sizes"] = [ s * SCALE_FACTOR for s in hires_env_settings["brush_sizes"]] env = libmypaint.LibMyPaint(**hires_env_settings) Explanation: Manipulate the sample Let's now do something the obtained sample. Since we have the corresponding sequence of actions we can re-render the image in higher resolution. 
To that end, we will need to create one more environment with modified settings. Create a high-resolution version of the environment End of explanation env.reset() for t in range(19): time_step = env.step(actions[t]) plt.close("all") plt.figure(figsize=(5, 5)) plt.imshow(time_step.observation["canvas"], interpolation="nearest") Explanation: Execute pre-recorded actions End of explanation modified_actions = copy.deepcopy(actions) for action in modified_actions: action["size"] = np.array(0, dtype=np.int32) action["pressure"] = np.array(2, dtype=np.int32) env.reset() for t in range(19): time_step = env.step(modified_actions[t]) plt.close("all") plt.figure(figsize=(5, 5)) plt.imshow(time_step.observation["canvas"], interpolation="nearest") Explanation: Change the thickness of the brush strokes In addition to changing the resolution of the images, let's in introduce some more subtle structural changes. We could, for example, change the thickness of all the strokes. End of explanation # Patch the environments setting for a different brush. pen_env_settings = copy.deepcopy(hires_env_settings) pen_env_settings["brush_type"] = "classic/pen" env = libmypaint.LibMyPaint(**pen_env_settings) env.reset() for t in range(19): time_step = env.step(modified_actions[t]) plt.close("all") plt.figure(figsize=(5, 5)) plt.imshow(time_step.observation["canvas"], interpolation="nearest") Explanation: Change the brush type Finally, let's re-render the image above using a different brush type. End of explanation # The path to the shaders. SHADERS_BASEDIR = os.path.join(os.getcwd(), "..", "third_party/paint/shaders") SHADERS_BASEDIR = os.path.abspath(SHADERS_BASEDIR) # The path to a TF-Hub module. MODULE_PATH = "https://tfhub.dev/deepmind/spiral/default-fluid-gansn-celebahq64-gen-19steps/1" env_settings = dict( episode_length=20, # Number of frames in each episode. canvas_width=256, # The width of the canvas in pixels. grid_width=32, # The width of the action grid. brush_sizes=[2.5, 5.0, 10.0, 20.0, 40.0, 80.0], # The sizes of the brush to use. shaders_basedir=SHADERS_BASEDIR, # The location of shaders. ) env = fluid.FluidPaint(**env_settings) initial_state, step = agent_utils.get_module_wrappers(MODULE_PATH) state = initial_state() noise_sample = np.random.normal(size=(10,)).astype(np.float32) time_step = env.reset() actions = [] for t in range(19): time_step.observation["noise_sample"] = noise_sample # The environment uses 256x256 canvas but the agent requires 64x64 input. ratio = 64 / 256 time_step.observation["canvas"] = ndimage.zoom( time_step.observation["canvas"], [ratio, ratio, 1], order=1) action, state = step(time_step.step_type, time_step.observation, state) time_step = env.step(action) actions.append(action) plt.close("all") plt.figure(figsize=(5, 5)) plt.imshow(time_step.observation["canvas"], interpolation="nearest") Explanation: Fluid Paint environment demonstration Fluid Paint environment works almost exactly like libmypaint.LibMyPaint. Below, we show how to obtain samples from the model trained in this environment. End of explanation
12,078
Given the following text description, write Python code to implement the functionality described below step by step Description: Welcome to The QuantConnect Research Page Refer to this page for documentation https Step1: Historical Data Requests We can use the QuantConnect API to make Historical Data Requests. The data will be presented as multi-index pandas.DataFrame where the first index is the Symbol. For more information, please follow the link. Step2: Indicators We can easily get the indicator of a given symbol with QuantBook. For all indicators, please checkout QuantConnect Indicators Reference Table
Python Code: %matplotlib inline # Imports from clr import AddReference AddReference("System") AddReference("QuantConnect.Common") AddReference("QuantConnect.Jupyter") AddReference("QuantConnect.Indicators") from System import * from QuantConnect import * from QuantConnect.Data.Custom import * from QuantConnect.Data.Market import TradeBar, QuoteBar from QuantConnect.Jupyter import * from QuantConnect.Indicators import * from datetime import datetime, timedelta import matplotlib.pyplot as plt import pandas as pd # Create an instance qb = QuantBook() # Select asset data spy = qb.AddEquity("SPY") Explanation: Welcome to The QuantConnect Research Page Refer to this page for documentation https://www.quantconnect.com/docs#Introduction-to-Jupyter Contribute to this template file https://github.com/QuantConnect/Lean/blob/master/Jupyter/BasicQuantBookTemplate.ipynb QuantBook Basics Start QuantBook Add the references and imports Create a QuantBook instance End of explanation # Gets historical data from the subscribed assets, the last 360 datapoints with daily resolution h1 = qb.History(qb.Securities.Keys, 360, Resolution.Daily) # Plot closing prices from "SPY" h1.loc["SPY"]["close"].plot() Explanation: Historical Data Requests We can use the QuantConnect API to make Historical Data Requests. The data will be presented as multi-index pandas.DataFrame where the first index is the Symbol. For more information, please follow the link. End of explanation # Example with BB, it is a datapoint indicator # Define the indicator bb = BollingerBands(30, 2) # Gets historical data of indicator bbdf = qb.Indicator(bb, "SPY", 360, Resolution.Daily) # drop undesired fields bbdf = bbdf.drop('standarddeviation', 1) # Plot bbdf.plot() Explanation: Indicators We can easily get the indicator of a given symbol with QuantBook. For all indicators, please checkout QuantConnect Indicators Reference Table End of explanation
12,079
Given the following text description, write Python code to implement the functionality described below step by step Description: DES Y6 Deep Field Exposures Step1: 2. User Input 2.1. General User Input Step2: 2.2. Logical Variables to Indicate which Code Cells to Run Step3: 2.3. Sky Region Definitions Step4: 2.4. Check on Location of TMPDIR... Step5: 2.5. Create Main Zeropoints Directory (if it does not already exist)... Step17: 3. Useful Modules Step24: 4. Zeropoints by tying to DES-transformed ATLAS-REFCAT2 Stars We will first work with the DES data, and then we will repeat for the DECADE data. DES Step32: Combine region-by-region results into a single file... Step39: DECADE Step47: Combine region-by-region results into a single file...
Python Code: import numpy as np import pandas as pd from scipy import interpolate import glob import math import os import subprocess import sys import gc import glob import pickle import easyaccess as ea #import AlasBabylon import fitsio from astropy.io import fits import astropy.coordinates as coord from astropy.coordinates import SkyCoord import astropy.units as u from astropy.table import Table, vstack import tempfile import matplotlib.pyplot as plt %matplotlib inline # Useful class to stop "Run All" at a cell # containing the command "raise StopExecution" class StopExecution(Exception): def _render_traceback_(self): pass Explanation: DES Y6 Deep Field Exposures: Photometric Zeropoints tied to ATLAS-REFCAT2 1. Setup End of explanation verbose = 1 tag_des = 'Y6A2_FINALCUT' # Official tag for DES Y6A2_FINALCUT tag_decade = 'DECADE_FINALCUT' # Tag for DECADE rawdata_dir = '../RawData' zeropoints_dir='../Zeropoints' bandList = ['g', 'r', 'i', 'z', 'Y'] Explanation: 2. User Input 2.1. General User Input End of explanation do_calc_refcat2_zps = True Explanation: 2.2. Logical Variables to Indicate which Code Cells to Run End of explanation region_name_list = [ 'VVDSF14', 'VVDSF22', 'DEEP2', 'SN-E', 'SN-X_err', 'SN-X', 'ALHAMBRA2', 'SN-S', 'SN-C', 'EDFS', 'MACS0416', 'SN-S_err', 'COSMOS' ] region_ramin = { 'VVDSF14':208., 'VVDSF22':333., 'DEEP2':351., 'SN-E':6., 'SN-X_err':13., 'SN-X':32., 'ALHAMBRA2':35., 'SN-S':39.5, 'SN-C':50., 'EDFS':55., 'MACS0416':62., 'SN-S_err':83.5, 'COSMOS':148. } region_ramax = { 'VVDSF14':212., 'VVDSF22':337., 'DEEP2':354., 'SN-E':12., 'SN-X_err':17., 'SN-X':38., 'ALHAMBRA2':39., 'SN-S':44.5, 'SN-C':56., 'EDFS':67., 'MACS0416':66., 'SN-S_err':88., 'COSMOS':153. } region_decmin = { 'VVDSF14':3., 'VVDSF22':-1.5, 'DEEP2':-1.5, 'SN-E':-46., 'SN-X_err':-32., 'SN-X':-8., 'ALHAMBRA2':-0.5, 'SN-S':-2.5, 'SN-C':-31., 'EDFS':-51., 'MACS0416':-25.5, 'SN-S_err':-38., 'COSMOS':0.5 } region_decmax = { 'VVDSF14':7., 'VVDSF22':1.5, 'DEEP2':1.5, 'SN-E':-41., 'SN-X_err':-29., 'SN-X':-3., 'ALHAMBRA2':2.5, 'SN-S':1.5, 'SN-C':-25., 'EDFS':-46., 'MACS0416':-22.5, 'SN-S_err':-35., 'COSMOS':4.0 } for regionName in region_name_list: print regionName, region_ramin[regionName], region_ramax[regionName], region_decmin[regionName], region_decmax[regionName] Explanation: 2.3. Sky Region Definitions End of explanation # Check on TMPDIR... tempfile.gettempdir() # Set tmpdir variable to $TMPDIR (for future reference)... tmpdir = os.environ['TMPDIR'] Explanation: 2.4. Check on Location of TMPDIR... End of explanation # Create main Zeropoints directory, if it does not already exist... if not os.path.exists(zeropoints_dir): os.makedirs(zeropoints_dir) Explanation: 2.5. Create Main Zeropoints Directory (if it does not already exist)... End of explanation def DECam_tie_to_refcat2(inputFile, outputFile, band, fluxObsColName, fluxerrObsColName, aggFieldColName, verbose): import numpy as np import os import sys import datetime import pandas as pd from astropy.table import Table, vstack validBandsList = ['g', 'r', 'i', 'z', 'Y'] if band not in validBandsList: print Filter band %s is not currently handled... Exiting now! % (band) return 1 reqColList = ['g','r','i','z','dg','dr','di','dz', fluxObsColName,fluxerrObsColName,aggFieldColName] # Does the input file exist? if os.path.isfile(inputFile)==False: print DECam_tie_to_refcat2 input file %s does not exist. Exiting... % (inputFile) return 1 # Read inputFile into a pandas DataFrame... 
print datetime.datetime.now() print Reading in %s as a pandas DataFrame... % (inputFile) t = Table.read(inputFile) dataFrame = t.to_pandas() print datetime.datetime.now() print reqColFlag = 0 colList = dataFrame.columns.tolist() for reqCol in reqColList: if reqCol not in colList: print ERROR: Required column %s is not in the header % (reqCol) reqColFlag = 1 if reqColFlag == 1: print Missing required columns in header of %s... Exiting now! (inputFile) return 1 # Identify column of the standard magnitude for the given band... magStdColName = MAG_STD_%s % (band.upper()) magerrStdColName = MAGERR_STD_%s % (band.upper()) # Transform ATLAS-REFCAT2 mags to DES mags for the given band... dataFrame = transREFCAT2toDES(dataFrame, band, magStdColName, magerrStdColName) # Add a 'MAG_OBS' column and a 'MAG_DIFF' column to the pandas DataFrame... dataFrame['MAG_OBS'] = -2.5*np.log10(dataFrame[fluxObsColName]) dataFrame['MAG_DIFF'] = dataFrame[magStdColName]-dataFrame['MAG_OBS'] ############################################### # Aggregate by aggFieldColName ############################################### # Make a copy of original dataFrame... df = dataFrame.copy() # Create an initial mask... mask1 = ( (df[magStdColName] >= 0.) & (df[magStdColName] <= 25.) ) mask1 = ( mask1 & (df[fluxObsColName] > 10.) & (df['FLAGS'] < 2) & (np.abs(df['SPREAD_MODEL']) < 0.01)) if magerrStdColName != 'None': mask1 = ( mask1 & (df[magerrStdColName] < 0.1) ) magDiffGlobalMedian = df[mask1]['MAG_DIFF'].median() magDiffMin = magDiffGlobalMedian - 5.0 magDiffMax = magDiffGlobalMedian + 5.0 mask2 = ( (df['MAG_DIFF'] > magDiffMin) & (df['MAG_DIFF'] < magDiffMax) ) mask = mask1 & mask2 # Iterate over the copy of dataFrame 3 times, removing outliers... # We are using "Method 2/Group by item" from # http://nbviewer.jupyter.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/07%20-%20Lesson.ipynb print "Sigma-clipping..." niter = 0 for i in range(3): niter = i + 1 print iter%d... % ( niter ) # make a copy of original df, and then delete the old one... newdf = df[mask].copy() del df # group by aggFieldColName... grpnewdf = newdf.groupby([aggFieldColName]) # add/update new columns to newdf print datetime.datetime.now() newdf['Outlier'] = grpnewdf['MAG_DIFF'].transform( lambda x: abs(x-x.mean()) > 3.00*x.std() ) #newdf['Outlier'] = grpnewdf['MAG_DIFF'].transform( lambda x: abs(x-x.mean()) > 2.00*x.std() ) print datetime.datetime.now() del grpnewdf print datetime.datetime.now() #print newdf nrows = newdf['MAG_DIFF'].size print Number of rows remaining: %d % ( nrows ) df = newdf mask = ( df['Outlier'] == False ) # Perform pandas grouping/aggregating functions on sigma-clipped Data Frame... print datetime.datetime.now() print 'Performing grouping/aggregating functions on sigma-clipped pandas DataFrame...' groupedDataFrame = df.groupby([aggFieldColName]) magZeroMedian = groupedDataFrame['MAG_DIFF'].median() magZeroMean = groupedDataFrame['MAG_DIFF'].mean() magZeroStd = groupedDataFrame['MAG_DIFF'].std() magZeroNum = groupedDataFrame['MAG_DIFF'].count() magZeroErr = magZeroStd/np.sqrt(magZeroNum-1) print datetime.datetime.now() print # Rename these pandas series... magZeroMedian.name = 'MAG_ZERO_MEDIAN' magZeroMean.name = 'MAG_ZERO_MEAN' magZeroStd.name = 'MAG_ZERO_STD' magZeroNum.name = 'MAG_ZERO_NUM' magZeroErr.name = 'MAG_ZERO_MEAN_ERR' # Also, calculate group medians for all columns in df that have a numerical data type... 
numericalColList = df.select_dtypes(include=[np.number]).columns.tolist() groupedDataMedian = {} for numericalCol in numericalColList: groupedDataMedian[numericalCol] = groupedDataFrame[numericalCol].median() groupedDataMedian[numericalCol].name = %s_MEDIAN % (numericalCol) # Create new data frame containing all the relevant aggregate quantities #newDataFrame = pd.concat( [magZeroMedian, magZeroMean, magZeroStd, \ # magZeroErr, magZeroNum], \ # join='outer', axis=1 ) seriesList = [] for numericalCol in numericalColList: seriesList.append(groupedDataMedian[numericalCol]) seriesList.extend([magZeroMedian, magZeroMean, magZeroStd, \ magZeroErr, magZeroNum]) #print seriesList newDataFrame = pd.concat( seriesList, join='outer', axis=1 ) #newDataFrame.index.rename('FILENAME', inplace=True) # Saving catname-based results to output files... print datetime.datetime.now() print Writing %s output file (using pandas to_csv method)... % (outputFile) newDataFrame.to_csv(outputFile, float_format='%.4f') print datetime.datetime.now() print return 0 # Transform ATLAS-REFCAT2 mags into DES mags for this filter band... def transREFCAT2toDES(dataFrame, band, magStdColName, magerrStdColName): import numpy as np import pandas as pd from collections import OrderedDict as odict # Transformation coefficients (updated based on fit to DES). # mag_des = mag_refcat2 + A[mag][0]*color_refcat2 + A[mag][1] # (from A. Drlica-Wagner's https://github.com/kadrlica/desqr/blob/master/desqr/calibrate.py) REFCAT2 = odict([ ('g', [+0.0994, -0.0076 - 0.0243]), # [-0.2 < (g-r)_refcat2 <= 1.2] ('r', [-0.1335, +0.0189 + 0.0026]), # [-0.2 < (g-r)_refcat2 <= 1.2] ('i', [-0.3407, +0.0026 - 0.0039]), # [-0.2 < (i-z)_refcat2 <= 0.3] ('z', [-0.2575, -0.0074 - 0.0127]), # [-0.2 < (i-z)_refcat2 <= 0.3] ('Y', [-0.6032, +0.0185]), # [-0.2 < (i-z)_refcat2 <= 0.3] ]) A = REFCAT2 if band is 'g': # g_des = g_refcat2 + 0.0994*(g-r)_refcat2 - 0.0076 - 0.0243 [-0.2 < (g-r)_ps <= 1.2] dataFrame[magStdColName] = dataFrame['g']+\ A[band][0]*(dataFrame['g']-dataFrame['r'])+A[band][1] dataFrame[magerrStdColName] = dataFrame['dg'] # temporary mask = ( (dataFrame['g']-dataFrame['r']) > -0.2) mask &= ( (dataFrame['g']-dataFrame['r']) <= 1.2) elif band is 'r': # r_des = r_refcat2 - 0.1335*(g-r)_refcat2 + 0.0189 + 0.0026 [-0.2 < (g-r)_ps <= 1.2] dataFrame[magStdColName] = dataFrame['r']+\ A[band][0]*(dataFrame['g']-dataFrame['r'])+A[band][1] dataFrame[magerrStdColName] = dataFrame['dr'] # temporary mask = ( (dataFrame['g']-dataFrame['r']) > -0.2) mask &= ( (dataFrame['g']-dataFrame['r']) <= 1.2) elif band is 'i': # i_des = i_refcat2 - 0.3407*(i-z)_refcat2 + 0.0026 - 0.0039 [-0.2 < (i-z)_ps <= 0.3] dataFrame[magStdColName] = dataFrame['i']+\ A[band][0]*(dataFrame['i']-dataFrame['z'])+A[band][1] dataFrame[magerrStdColName] = dataFrame['di'] # temporary mask = ( (dataFrame['i']-dataFrame['z']) > -0.2) mask &= ( (dataFrame['i']-dataFrame['z']) <= 0.3) elif band is 'z': # z_des = z_refcat2 - 0.2575*(i-z)_refcat2 - 0.0074 - 0.0127 [-0.2 < (i-z)_ps <= 0.3] dataFrame[magStdColName] = dataFrame['z']+\ A[band][0]*(dataFrame['i']-dataFrame['z'])+A[band][1] dataFrame[magerrStdColName] = dataFrame['dz'] # temporary mask = ( (dataFrame['i']-dataFrame['z']) > -0.2) mask &= ( (dataFrame['i']-dataFrame['z']) <= 0.3) elif band is 'Y': # Y_des = z_refcat2 - 0.6032*(i-z)_refcat2 + 0.0185 [-0.2 < (i-z)_ps <= 0.3] dataFrame[magStdColName] = dataFrame['z']+\ A[band][0]*(dataFrame['i']-dataFrame['z'])+A[band][1] dataFrame[magerrStdColName] = dataFrame['dz'] # temporary mask = ( 
(dataFrame['i']-dataFrame['z']) > -0.2) mask &= ( (dataFrame['i']-dataFrame['z']) <= 0.3) else: msg = "Unrecognized band: %s "%band raise ValueError(msg) dataFrame = dataFrame[mask].copy() return dataFrame Explanation: 3. Useful Modules End of explanation %%time if do_calc_refcat2_zps: fluxObsColName = 'FLUX_PSF' fluxerrObsColName = 'FLUXERR_PSF' aggFieldColName = 'FILENAME' subdir = DES_%s % (tag_des) tmpdir = os.environ['TMPDIR'] for regionName in region_name_list: print print # # # # # # # # # # # # # # # print Working on region %s % (regionName) print # # # # # # # # # # # # # # # print for band in bandList: input_file_template = cat_%s.%s.?.%s.refcat2.fits % (subdir, regionName, band) input_file_template = os.path.join(rawdata_dir, 'ExpCatFITS', subdir, input_file_template) input_file_list = glob.glob(input_file_template) input_file_list = np.sort(input_file_list) if np.size(input_file_list) == 0: print "No files matching template %s" % (input_file_template) for inputFile in input_file_list: print inputFile if os.path.exists(inputFile): #outputFile = os.path.splitext(inputFile)[0] + '.zps.csv' #print outputFile outputFile = os.path.splitext(os.path.basename(inputFile))[0]+'.csv' outputFile = 'zps_' + outputFile[4:] outputFile = os.path.join(zeropoints_dir, outputFile) print outputFile status = DECam_tie_to_refcat2(inputFile, outputFile, band, fluxObsColName, fluxerrObsColName, aggFieldColName, verbose) if status > 0: print 'ERROR: %s FAILED! Continuing...' else: print %s does not exist... skipping... % (inputFile) print Explanation: 4. Zeropoints by tying to DES-transformed ATLAS-REFCAT2 Stars We will first work with the DES data, and then we will repeat for the DECADE data. DES: Calculate zeropoints region by region... End of explanation %%time if do_calc_refcat2_zps: subdir = DES_%s % (tag_des) tmpdir = os.environ['TMPDIR'] for band in bandList: print print # # # # # # # # # # # # # # # print Working on band %s % (band) print # # # # # # # # # # # # # # # print outputFile = zps_%s.%s.refcat2.csv % (subdir, band) outputFile = outputFile = os.path.join(zeropoints_dir, outputFile) input_file_template = zps_%s.*.?.%s.refcat2.csv % (subdir, band) input_file_template = os.path.join(zeropoints_dir, input_file_template) input_file_list = glob.glob(input_file_template) input_file_list = np.sort(input_file_list) if np.size(input_file_list) == 0: print "No files matching template %s" % (input_file_template) continue df_comb = pd.concat(pd.read_csv(inputFile) for inputFile in input_file_list) outputFile = zps_%s.%s.refcat2.csv % (subdir, band) outputFile = outputFile = os.path.join(zeropoints_dir, outputFile) print outputFile df_comb.to_csv(outputFile, index=False) del df_comb Explanation: Combine region-by-region results into a single file... 
End of explanation %%time if do_calc_refcat2_zps: fluxObsColName = 'FLUX_PSF' fluxerrObsColName = 'FLUXERR_PSF' aggFieldColName = 'FILENAME' subdir = %s % (tag_decade) tmpdir = os.environ['TMPDIR'] for regionName in region_name_list: print print # # # # # # # # # # # # # # # print Working on region %s % (regionName) print # # # # # # # # # # # # # # # print for band in bandList: input_file_template = cat_%s.%s.?.%s.refcat2.fits % (subdir, regionName, band) input_file_template = os.path.join(rawdata_dir, 'ExpCatFITS', subdir, input_file_template) input_file_list = glob.glob(input_file_template) input_file_list = np.sort(input_file_list) if np.size(input_file_list) == 0: print "No files matching template %s" % (input_file_template) for inputFile in input_file_list: print inputFile if os.path.exists(inputFile): #outputFile = os.path.splitext(inputFile)[0] + '.zps.csv' #print outputFile outputFile = os.path.splitext(os.path.basename(inputFile))[0]+'.csv' outputFile = 'zps_' + outputFile[4:] outputFile = os.path.join(zeropoints_dir, outputFile) print outputFile status = DECam_tie_to_refcat2(inputFile, outputFile, band, fluxObsColName, fluxerrObsColName, aggFieldColName, verbose) if status > 0: print 'ERROR: %s FAILED! Continuing...' else: print %s does not exist... skipping... % (inputFile) print Explanation: DECADE: Calculate zeropoints region by region... End of explanation %%time if do_calc_refcat2_zps: subdir = %s % (tag_decade) tmpdir = os.environ['TMPDIR'] for band in bandList: print print # # # # # # # # # # # # # # # print Working on band %s % (band) print # # # # # # # # # # # # # # # print outputFile = zps_%s.%s.refcat2.csv % (subdir, band) outputFile = outputFile = os.path.join(zeropoints_dir, outputFile) input_file_template = zps_%s.*.?.%s.refcat2.csv % (subdir, band) input_file_template = os.path.join(zeropoints_dir, input_file_template) input_file_list = glob.glob(input_file_template) input_file_list = np.sort(input_file_list) if np.size(input_file_list) == 0: print "No files matching template %s" % (input_file_template) continue df_comb = pd.concat(pd.read_csv(inputFile) for inputFile in input_file_list) outputFile = zps_%s.%s.refcat2.csv % (subdir, band) outputFile = outputFile = os.path.join(zeropoints_dir, outputFile) print outputFile df_comb.to_csv(outputFile, index=False) del df_comb Explanation: Combine region-by-region results into a single file... End of explanation
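The per-exposure zeropoint routine DECam_tie_to_refcat2 is called above but its body is not included in this excerpt. Purely as an illustration of what such a routine typically does, and not the actual implementation, the sketch below estimates one zeropoint per FILENAME as a sigma-clipped median of the difference between the DES-transformed reference magnitude and the instrumental PSF magnitude. The MAG_STD column name, the clipping parameters, and the absence of exposure-time and quality cuts are all assumptions here.

# Minimal sketch (not DECam_tie_to_refcat2 itself): per-exposure zeropoint as a
# sigma-clipped median of (reference magnitude - instrumental magnitude).
import numpy as np
import pandas as pd

def sketch_zeropoints(df, magStdColName='MAG_STD', fluxObsColName='FLUX_PSF',
                      aggFieldColName='FILENAME', nsigma=3.0, niter=3):
    # Instrumental magnitude from the PSF flux; keep positive fluxes only.
    df = df[df[fluxObsColName] > 0].copy()
    df['MAG_INST'] = -2.5*np.log10(df[fluxObsColName])
    df['DMAG'] = df[magStdColName] - df['MAG_INST']
    results = []
    for name, grp in df.groupby(aggFieldColName):
        dmag = grp['DMAG'].values
        for _ in range(niter):
            med, std = np.median(dmag), np.std(dmag)
            keep = np.abs(dmag - med) < nsigma*std
            if keep.all() or keep.sum() < 3:
                break
            dmag = dmag[keep]
        results.append({aggFieldColName: name,
                        'ZP_MEDIAN': np.median(dmag),
                        'ZP_STD': np.std(dmag),
                        'ZP_NUM': dmag.size})
    return pd.DataFrame(results)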
12,080
Given the following text description, write Python code to implement the functionality described. Description: Flatten a multi |
Python Code: def flattenList2(head):
    # Iteratively flatten a multi-level linked list depth-first, using an explicit
    # stack and relinking the visited nodes through prev.next.
    headcop = head
    save = []
    save.append(head)
    prev = None
    while len(save) != 0:
        temp = save[-1]
        save.pop()
        if temp.next:
            save.append(temp.next)
        if temp.down:
            save.append(temp.down)
        if prev is not None:
            prev.next = temp
        prev = temp
    return headcop
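A quick usage sketch follows. The node type is not defined in the snippet above, so the small Node class here is a stand-in with the .next and .down pointers the function expects.

# Hypothetical node type, only for illustration.
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None
        self.down = None

# Build a tiny multi-level list: 1 -> 2 on the top level, with 1.down -> 3.
a, b, c = Node(1), Node(2), Node(3)
a.next = b
a.down = c

head = flattenList2(a)
cur = head
while cur:
    print(cur.data)   # depth-first order: 1, 3, 2
    cur = cur.next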
12,081
Given the following text description, write Python code to implement the functionality described below step by step Description: Interact Exercise 4 Imports Step1: Line with Gaussian noise Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$ Step3: After doing some research on stackoverflow, I learned how to use scipy.stats to generate normally distributed random noise for my y-values. Step6: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function Step7: Use interact to explore the plot_random_line function using
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np from IPython.html.widgets import interact, interactive, fixed from IPython.display import display Explanation: Interact Exercise 4 Imports End of explanation import scipy.stats Explanation: Line with Gaussian noise Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$: $$ y = m x + b + N(0,\sigma^2) $$ Be careful about the sigma=0.0 case. End of explanation def random_line(m, b, sigma, size=10): Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0] Parameters ---------- m : float The slope of the line. b : float The y-intercept of the line. sigma : float The standard deviation of the y direction normal distribution noise. size : int The number of points to create for the line. Returns ------- x : array of floats The array of x values for the line with `size` points. y : array of floats The array of y values for the lines with `size` points. x = np.linspace(-1.0,1.0,size) noise = scipy.stats.norm.rvs(loc=0, scale=sigma, size=size) y = m*x + b + noise return x,y m = 0.0; b = 1.0; sigma=0.0; size=3 x, y = random_line(m, b, sigma, size) assert len(x)==len(y)==size assert list(x)==[-1.0,0.0,1.0] assert list(y)==[1.0,1.0,1.0] sigma = 1.0 m = 0.0; b = 0.0 size = 500 x, y = random_line(m, b, sigma, size) assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1) assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1) Explanation: After doing some research on stackoverflow, I learned how to use scipy.stats to generate normally distributed random noise for my y-values. End of explanation def ticks_out(ax): Move the ticks to the outside of the box. ax.get_xaxis().set_tick_params(direction='out', width=1, which='both') ax.get_yaxis().set_tick_params(direction='out', width=1, which='both') def plot_random_line(m, b, sigma, size=10, color='red'): Plot a random line with slope m, intercept b and size points. x = np.linspace(-1.0,1.0,size) noise = scipy.stats.norm.rvs(loc=0, scale=sigma, size=size) y = m*x + b + noise f = plt.figure(figsize=(7,5)) plt.scatter(x,y, color='%s' % color, marker='o', alpha = .85) plt.tick_params(right=False, top=False, axis='both', direction='out') plt.xlim(-1.1,1.1) plt.ylim(-10.0,10.0) plt.xlabel('x') plt.ylabel('y') plt.title('Random Line Scatter Data') plot_random_line(5.0, -1.0, 2.0, 50) assert True # use this cell to grade the plot_random_line function Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function: Make the marker color settable through a color keyword argument with a default of red. Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$. Customize your plot to make it effective and beautiful. End of explanation interact(plot_random_line, m=(-10.0,10.0,0.1), b=(-5.0,5.0,0.1), sigma=(0.0,5.0,0.01), size=(10,100,10), color={'red':'r', 'green':'g', 'blue':'b'}) #### assert True # use this cell to grade the plot_random_line interact Explanation: Use interact to explore the plot_random_line function using: m: a float valued slider from -10.0 to 10.0 with steps of 0.1. b: a float valued slider from -5.0 to 5.0 with steps of 0.1. sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01. size: an int valued slider from 10 to 100 with steps of 10. color: a dropdown with options for red, green and blue. 
End of explanation
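The exercise text above specifically warns about the sigma=0.0 case, which the solution shown does not treat separately. One defensive variant, shown only as an illustration and not as the graded answer, skips the random draw entirely when sigma is zero (numpy's normal generator is used here in place of scipy.stats, which makes no difference for this purpose):

# Same interface as random_line above, but sigma == 0 returns exactly y = m*x + b.
import numpy as np

def random_line_safe(m, b, sigma, size=10):
    x = np.linspace(-1.0, 1.0, size)
    if sigma == 0.0:
        noise = np.zeros(size)
    else:
        noise = np.random.normal(loc=0.0, scale=sigma, size=size)
    return x, m*x + b + noise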
12,082
Given the following text description, write Python code to implement the functionality described below step by step Description: Intro to jeepr with gprMax data jeepr is a set of utilities for handling GPR data, especially gprMax models and synthetics, and real data from USRadar instruments. Step1: Make Scan from a gprMax simulation .out file Step2: Note, however, that the t0 of the section has been reset to 0 ns. Step3: Let's look at a spectrum; it looks quite different from real data. Step4: Make Model from gprMax VTI file Step5: Plot Model and Scan together in time domain
Python Code: import numpy as np import matplotlib.pyplot as plt % matplotlib inline import jeepr jeepr.__version__ Explanation: Intro to jeepr with gprMax data jeepr is a set of utilities for handling GPR data, especially gprMax models and synthetics, and real data from USRadar instruments. End of explanation from jeepr import Scan g = Scan.from_gprmax('../tests/test_2D_merged.out') g.__dict__ g.plot() t0 = np.sqrt(2) / float(g.freq) h = g.crop(t=t0) h.plot() h.shape h.log Explanation: Make Scan from a gprMax simulation .out file End of explanation h.t0 Explanation: Note, however, that the t0 of the section has been reset to 0 ns. End of explanation f, p = g.get_spectrum() plt.plot(f, p) Explanation: Let's look at a spectrum; it looks quite different from real data. End of explanation from jeepr import Model m = Model.from_gprMax('../tests/test_2D.in') m.plot() m.__dict__ ground = m.rx['position'][0] n = m.crop(z=ground) n.plot() Explanation: Make Model from gprMax VTI file End of explanation n_time, _ = n.to_time(dt=5e-11) n_time.plot() fig = plt.figure(figsize=(16, 9)) ax0 = fig.add_subplot(111) ax0 = h.plot(ax=ax0) ax0 = n_time.plot(ax=ax0, alpha=0.5) plt.show() Explanation: Plot Model and Scan together in time domain End of explanation
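Since get_spectrum() returns plain frequency and power arrays, the spectrum can also be inspected on a decibel scale with ordinary matplotlib. This is only a plotting convenience: it assumes nothing about jeepr beyond the (f, p) arrays already computed above, it treats p as a power-like quantity, and the frequency units depend on the survey settings.

# Replot the spectrum on a relative dB scale; the small floor avoids log10(0).
import numpy as np
import matplotlib.pyplot as plt

p_db = 10 * np.log10(np.maximum(p, 1e-12) / np.max(p))
plt.plot(f, p_db)
plt.xlabel('frequency')
plt.ylabel('relative power (dB)')
plt.show()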
12,083
Given the following text description, write Python code to implement the functionality described below step by step Description: Distributed Numpy Parsing Joeri R. Hermans Departement of Data Science & Knowledge Engineering Maastricht University, The Netherlands This notebook will show you how to parse a collection of Numpy files straight from HDFS into a Spark Dataframe. Cluster Configuration In the following sections, we set up the cluster properties. Step1: Obtaining the required file-paths Basically what we are going to do now, is obtain a lists of file paths (*.npy) which we will map with a custom lambda function to read all the data into a dataframe. Step2: Creating a Spark Dataframe from the specified list Before we convert to a list to a Spark Dataframe, we first need to specify the schema. We do this by converting every element in the list to a Spark row. Afterwards, Spark will be able to automatically infer the schema of the dataframe. Step3: Now we are able to create the Spark DataFrame. Note, for Spark 2.0 use spark. instead of sqlContext.. Step4: Parsing your Numpy files This is a fairly straightforward operation where we basically map all the file paths using a custom lambda function to read the numpy files from HDFS. Step5: Now we have a working prototype, let's construct a Spark mapper which will fetch the data in a distributed manner from HDFS. Note that if you would like to adjust the data in any way after reading, you can do so by modifying the lambda function, or executing another map after the data has been read.
Python Code: %matplotlib inline import numpy as np import os from pyspark import SparkContext from pyspark import SparkConf from pyspark.sql.types import * from pyspark.sql import Row from pyspark.storagelevel import StorageLevel # Use the DataBricks AVRO reader. os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.databricks:spark-avro_2.11:3.2.0 pyspark-shell' # Modify these variables according to your needs. application_name = "Distributed Numpy Parsing" using_spark_2 = False local = False if local: # Tell master to use local resources. master = "local[*]" num_processes = 3 num_executors = 1 else: # Tell master to use YARN. master = "yarn-client" num_executors = 20 num_processes = 1 # This variable is derived from the number of cores and executors, # and will be used to assign the number of model trainers. num_workers = num_executors * num_processes print("Number of desired executors: " + `num_executors`) print("Number of desired processes / executor: " + `num_processes`) print("Total number of workers: " + `num_workers`) # Do not change anything here. conf = SparkConf() conf.set("spark.app.name", application_name) conf.set("spark.master", master) conf.set("spark.executor.cores", `num_processes`) conf.set("spark.executor.instances", `num_executors`) conf.set("spark.executor.memory", "5g") conf.set("spark.locality.wait", "0") conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer") conf.set("spark.kryoserializer.buffer.max", "2000") conf.set("spark.executor.heartbeatInterval", "6000s") conf.set("spark.network.timeout", "10000000s") conf.set("spark.shuffle.spill", "true") conf.set("spark.driver.memory", "10g") conf.set("spark.driver.maxResultSize", "10g") # Check if the user is running Spark 2.0 + if using_spark_2: sc = SparkSession.builder.config(conf=conf) \ .appName(application_name) \ .getOrCreate() else: # Create the Spark context. sc = SparkContext(conf=conf) # Add the missing imports from pyspark import SQLContext sqlContext = SQLContext(sc) # Check if we are using Spark 2.0 if using_spark_2: reader = sc else: reader = sqlContext Explanation: Distributed Numpy Parsing Joeri R. Hermans Departement of Data Science & Knowledge Engineering Maastricht University, The Netherlands This notebook will show you how to parse a collection of Numpy files straight from HDFS into a Spark Dataframe. Cluster Configuration In the following sections, we set up the cluster properties. End of explanation # Define the command that needs to be executed, this will list all the numpy files in the specified directory. cmd = "hdfs dfs -ls /user/jhermans/data/cms/RelValWjet_Pt_3000_3500_13_GEN-SIM-RECO_evt3150/*.npy | awk '{print $NF}'" # Fetch the output of the command, and construct a list. output = os.popen(cmd).read() file_paths = output.split("\n") Explanation: Obtaining the required file-paths Basically what we are going to do now, is obtain a lists of file paths (*.npy) which we will map with a custom lambda function to read all the data into a dataframe. End of explanation rows = [] for path in file_paths: row = Row(**{'path': path}) rows.append(row) Explanation: Creating a Spark Dataframe from the specified list Before we convert to a list to a Spark Dataframe, we first need to specify the schema. We do this by converting every element in the list to a Spark row. Afterwards, Spark will be able to automatically infer the schema of the dataframe. End of explanation df = sqlContext.createDataFrame(rows) # Repartition the dataset for increased parallelism. 
df = df.repartition(20) print("Number of paths to be parsed: " + str(df.count())) df.printSchema() # Example content of the dataframe. df.take(1) Explanation: Now we are able to create the Spark DataFrame. Note, for Spark 2.0 use spark. instead of sqlContext.. End of explanation # Development cell, this will be executed in the lambdas. import pydoop.hdfs as hdfs with hdfs.open(file_paths[0]) as f: data = np.load(f) # Obtain the fields (columns) of your numpy data. fields = [] for k in data[0].dtype.fields: fields.append(k) print("Number of columns: " + str(len(data.dtype.fields))) print("First five columns: ") i = 0 for k in data.dtype.fields: print(k) i += 1 if i == 5: break Explanation: Parsing your Numpy files This is a fairly straightforward operation where we basically map all the file paths using a custom lambda function to read the numpy files from HDFS. End of explanation def parse(iterator): rows = [] # MODIFY TO YOUR NEEDS IF NECESSARY for row in iterator: path = row['path'] # Load the file from HFDS. with hdfs.open(path) as f: data = np.load(f) # Add all rows in current path. for r in data: d = {} for f in fields: d[f] = r[f].item() rows.append(Row(**d)) return iter(rows) # Apply the lambda function. dataset = df.rdd.mapPartitions(parse).toDF() dataset.printSchema() Explanation: Now we have a working prototype, let's construct a Spark mapper which will fetch the data in a distributed manner from HDFS. Note that if you would like to adjust the data in any way after reading, you can do so by modifying the lambda function, or executing another map after the data has been read. End of explanation
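Two small follow-ups to the mapper above, offered only as a sketch. First, the inner loop in parse() reuses the name f for both the HDFS file handle and the field name; that still works, because the array has already been read, but it is easy to misread, so the variant below renames the loop variable. Second, the closing remark about executing another map after the data has been read can be illustrated with an ordinary DataFrame transformation; the column name 'energy' in the filter is hypothetical and should be replaced by one of the fields from your .npy dtype.

# Variant of parse() with the field loop renamed so it no longer shadows the
# file handle; the behaviour is otherwise the same.
def parse_clean(iterator):
    rows = []
    for row in iterator:
        with hdfs.open(row['path']) as fh:
            data = np.load(fh)
        for record in data:
            rows.append(Row(**{name: record[name].item() for name in fields}))
    return iter(rows)

dataset = df.rdd.mapPartitions(parse_clean).toDF()

# Example of a further transformation after reading: filter on a (hypothetical)
# numeric field and keep a handful of columns.
subset = dataset.filter(dataset['energy'] > 0.0).select(fields[:5])
subset.show(5)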
12,084
Given the following text description, write Python code to implement the functionality described below step by step Description: Lecture 2 - Logic, Loops, and Arrays This iPython notebook covers some of the most important aspects of the Python language that is used daily by real Astronomers and Physicists. Topics will include Step1: If you let a and b be conditional statements (like the above statements, e.g. a = x < y), then you can combine the two together using logical operators, which can be thought of as functions for conditional statements. There are three logical operators that are handy to know Step2: Now, these might not seem especially useful at first, but they're the bread and butter of programming. Even more importantly, they are used when we are doing if/else statements or loops, which we will now cover. An if/else statement (or simply an if statement) are segments of code that have a conditional statement built into it, such that the code within that segment doesn't activate unless the conditional statement is true. Here's an example. Play around with the variables x and y to see what happens. Step3: The idea here is that Python checks to see if the statement (in this case "x < y") is True. If it is, then it will do what is below the if statement. The else statement tells Python what to do if the condition is False. Note that Python requires you to indent these segments of code, and WILL NOT like it if you don't. Some languages don't require it, but Python is very particular when it comes to this point. (The parentheses around the conditional statement, however, are optional.) You also do not need an "else" segment, which effectively means that if the condition isn't True, then that segment of code doesn't do anything, and Python will just continue on past the if statement. Here is an example of such a case. Play around with it to see what happens when you change the values of x and y. Step4: While-loops are similar to if statements, in the sense that they also have a conditional statement that is built into it and it executes when the conditional is True. However, the only difference is, it will KEEP executing that segment of code until the conditional statement becomes False. This might seem a bit strange, but you can get the hang of it! For example, let's say we want Python to count from 1 to 10. Step5: Note here that we tell Python to print the number x (x starts at 1) and then redefining x as itself +1 (so, x=1 gets redefined to x = x+1 = 1+1 = 2). Python then executes the loop again, but now x has been incremented by 1. We continue this process from x = 1 to x = 10, printing out x every time. Thus, with a fairly compact bit of code, you get 10 lines of output. It is sometimes handy to define what is known as a DUMMY VARIABLE, whose only job is to count the number of times the loop has been executed. Let's call this dummy variable i. Step6: But...what if the conditional statement is always true? B. Defining Your Own Functions So far, we have really focused on using built-in functions (such as from numpy and matplotlib), but what about defining our own? This is easy to do, and can be a way to not only clean up your code, but also allows you to apply the same set of operations to multiple variables without having to explicitly write it out every time. For example, let's say we want to define a function that takes the square root of a number. It's probably a good idea to check if the number is positive first, otherwise we'll end up with an imaginary answer. 
Step7: So the outline for a function is python def &lt;function name&gt; (&lt;input variable&gt;) Step9: When defining your own functions, you can also use multiple input variables. For example, if we want to find the greatest common divisor (gcd) of two numbers, we could apply something called Euclid's algorithm. We define gcd(a,0) = a. Then we note that the gcd(a,b) = gcd(b,r), where r is the remainderwhen a is divided by b. So we can repeat this process until we end up with zero remainder. Then, we return whatever number is left in a as the greatest common divisor. A command you might not have encountered yet is %. The expression x % y returns the remainder when x is divided by y. Step11: Challenge 1 - Fibonacci Sequence and the Golden Ratio The Fibonacci sequence is defined by $f_{n+1} =f_n + f_{n-1}$ and the initial values $f_0 = 0$ and $f_1 = 1$. The first few elements of the sequence are Step14: The ratio of successive elements in the Fibonacci sequence converges to $$\phi = (1 + \sqrt{5})/2 ≈ 1.61803\dots$$ which is the famous golden ratio. Your task is to approximate $\phi$. Define a function phi_approx that calculates the approximate value of $\phi$ obtained by the ratio of the $n$-th and $(n−1)$-st elements, $$f_n /f_{n-1} \approx \phi$$ phi_approx should have one variable, $n$. Its return value should be the $n$-th order approximation of $\phi$. Step15: C. Numpy Arrays - Review of Basics and Some More Advanced Topics Recall in the first lecture that we introduced a python module known as numpy and type of variable known as a numpy array. For review, we will call numpy to be imported into this notebook so we can use its contents. Step16: Here we are calling in the contents of numpy and giving it the shorthand name 'np' for convenience. To create an array variable (let's call it 'x'), we simply assign 'x' to be equal to the output of the np.array() function, using a list as an input. You can then verify its contents by using the print() function. Step17: As we learned in Lecture 1, numpy arrays are convenient because they allow us to do math across the whole array and not just individual numbers. For example, let's say we want to make a new variable 'y' such that y = $x^{2}$, then this is done simply as Step18: The documentation of possible functions that can be applied to integers and floats (i.e. single numbers), as well as numpy arrays, can be found here Step19: Now how do we assign a new value to an element of the array? We use the following "square bracket" notation Step20: Now you try it. Store the second Fibonacci number in the second position of your array and use a print statement to verify that you have done so. Step21: Python array indexing is fairly straightforward once you get the hang of it. Let's say you wanted the last element of the array, but you don't quite recall the size of the array. One of the easiest ways to access that element is to use negative indexing. Negative indexing is the same as normal indexing, but backward, in the sense that you start with the last element of the array and count forward. More explicitly, for any array Step22: Now, sometimes its useful to access more than one element of an array. Let's say that we have an array with 100 elements in the range [0,10] (including endpoints). If you recall, this can be done via the np.linspace() function. 
Step23: Now then, in order to get a range of elements rather than simply a single one, we use the notation Step24: If you want everything passed a certain point of the array (including that point), then you would just eliminate the right number, for example x[90 Step25: Finally, simply using the " Step26: Now then, remember that, $a = \frac{dv}{dt}$ and thus, $dv = a\ dt$ So, the change of an objects velocity ($dv$) is equal to the acceleration ($a = g$ in this case) multiplied by the change in time ($dt$) Likewise Step27: Now that we've defined intV (short for "integrate v"), let's use it real quick, just to test it out. Let dt = 0.1 (meaning, your taking a step forward in time by 0.1 seconds). Step28: As you can see, $V_{x}$ hasn't changed, but $V_{y}$ has decreased, representing the projectile slowing down as it's going upward. I'll let you define the function now for the position vector. Call it intR, and it should be a function of (r,v,dt), and remember that now both $r_{x}$ and $r_{y}$ are changing. Remember to return an array. Step29: Now we have the functions that calculate the changes in the position and velocity vectors. We're almost there! Now, we will need a while-loop in order to step the projectile through its trajectory. What would the condition be? Well, we know that the projectile stops when it hits the ground. So, one way we can do this is to have the condition being (r[1] &gt;= 0), since the ground is defined at y = 0. So, having your intV and intR functions, along with a while-loop and a dt = 0.1 (known as the "step-size"), can you use Python to predict where the projectile will land? Step30: Now, note that we've defined the while-loop such that it doesn't stop exactly at 0. Firstly, this was strategic, since the initial y = 0, and thus the while-loop wouldn't initialize to begin with (you can try to change it). One way you can overcome this issue is to decrease dt, meaning that you're letting less time pass between each step. Ideally, you'd want dt to be infinitely small, but we don't have that convenience in reality. Re-run the cells, but with dt = 0.01 and we will get much closer to the correct answer. So, we know where it's going to land...can we plot the full trajectory? Yes, but this is a bit complicated, and requires one last function Step31: Now, all you have to do, is each time the while-loop executes, you use np.append() for the x and y arrays, adding the new values to the end of them. How do you do that? Well, looking at the np.append() documentation, for x, you do x = np.append(x,[r[0]]) The same syntax is used for the y array. After that, you simply use plt.plot(x,y,'o') to plot the trajectory of the ball (the last 'o' is used to change the plotting from a line to points). Good luck! Also, don't forget to reset your v and r arrays (otherwise, this will not work) Step32: If everything turns out alright, you should get the characteristic parabola. Also, if you're going to experiment with changing the intial position and velocity, remember to re-run the cell where we define the x and y arrays in order to clear the plot. Now you've learned how to do numerical integration. This technique is used all throughout Physics and Astronomy, and while there are more advanced ways to do it in order to increase accuracy, the heart of the idea is the same. Here is a figure made by Corbin Taylor (head of the Python team) that used numerical integration to track the position of a ray of light as it falls into a spinning black hole. <img src="./raytrace_picture.jpg"> D. 
Loading And Saving Data Arrays So, we have learned a lot about data arrays and how we can manipulate them, either through mathematics or indexing. However, up until this point, all we've done is use arrays that we ourselves created. But what happens if we have data from elsewhere? Can Python use that? The answer is of course yes, and while there are ways to import data that are a bit complicated at times, we're going to teach you some of the most basic, and most useful, ways. For this section, we will be using plotting to visualize the data as you're working with it, and as such, we will be loading in the package "matplotlib.pyplot" which you used in Lecture 1 Step33: Now then, let's say we are doing a timing experiment, where we look at the brightness of an object as a function of time. This is actually a very common type of measurement that you may do in research, such as looking for dips in the brightness of stars as a way to detect planets. This data is stored in a text file named timeseries_data.txt in the directory lecture2_data. Let's load it in. Step34: Now we have the data loaded into Python as a numpy array, and one handy thing you can do is to use Python to find the dimensions of the array. This is done by using ".shape" as so. Step35: In this format, we know that this is a 2x1000 array (two rows, 1000 columns). Another way you can think about this is that you have two 1000-element arrays contained within another array, where each of those arrays are elements (think of it as an array of arrays). The first row is the time stamps when each measurement was taken, while the second row is that of the value of the measurement itself. For ease of handling this data, one can in principle take each of these rows and create new arrays out of them. Let's do just that. Step36: Here, you have 2 dimensions with the array timeseriesData, and as such much specify the row first and then the column. So, - array_name[n, Step37: Looking at our data, you see clear spikes that jump well above most of the signal. (I've added this to the data to represent outliers that may sometimes appear when you're messing with raw data, and those must be dealt with). In astronomy, you sometimes have relativistic charged particles, not from your source, that hit the detector known as cosmic rays, and we often have to remove these. There are some very complex codes that handle cosmic rays, but for our purposes (keeping it easy), we're going to just set a hard cut off of, let's say 15. In order to do this, we can use conditional indexing in place of normal indices. This involves taking a conditional statement (more on those later) and testing whether it evaluates to True on each element in the array. This gives an array of Booleans, which we can use as logical indices to select only the entries for which the logical statement is True. Step38: In this case, the conditional statement that we have used is signal &lt; cutOff. Here, conditional indexing keeps the data that we have deemed "good" by this criteria. We can also do the same for the corresponding time stamps, since t and signal have the same length. Step39: Now let's plot it. You try. Step40: Now that you have your data all cleaned up, it would be nice if we could save it for later and not have to go through the process of cleaning it up every time. Fear not! Python has you covered. There are two formats that we are going to cover, one that is Python-specific, and the other a simple text format. First, we must package our two cleaned up arrays into one again. 
This can be done simply with the np.array() function. Step41: Then, we can use either the np.save() function or the np.savetxt function, the first saving the array into a '.npy' file and the other, into a '.txt' file. The syntax is pretty much the same for each. Step42: Now that your data files are saved, you can load them up again, using np.loadtxt() and np.load() for .txt and .npy files respectively. We used np.loadtxt() above, and np.load works the same way. So, let's load in the .npy file and see if our data was saved correctly. Step43: Now, let's see if you can do the same thing, but with the .txt file that we saved.
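The projectile part of this exercise builds the forward Euler update step by step: the velocity changes by g*dt each step, the position changes by v*dt, and the loop runs until the projectile comes back down to y = 0. Purely as a consolidated sketch of that same scheme, with function and variable names that are not part of the lecture materials, the whole integration can be wrapped in one routine:

# Forward-Euler integration of projectile motion, following the update rules
# described above; the loop stops once the y position drops below zero.
import numpy as np

def simulate_projectile(v0, r0, g=-9.8, dt=0.01):
    v = np.array(v0, dtype=float)
    r = np.array(r0, dtype=float)
    xs, ys = [r[0]], [r[1]]
    while r[1] >= 0.0:
        v = v + np.array([0.0, g*dt])
        r = r + v*dt
        xs.append(r[0])
        ys.append(r[1])
    return np.array(xs), np.array(ys)

x_traj, y_traj = simulate_projectile(v0=[3.0, 3.0], r0=[0.0, 0.0], dt=0.01)
print(x_traj[-1])   # approximate landing x coordinate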
Python Code: #Example conditional statements x = 1 y = 2 x<y #x is less than y #x is greater than y x>y #x is less-than or equal to y x<=y #x is greater-than or equal to y x>=y Explanation: Lecture 2 - Logic, Loops, and Arrays This iPython notebook covers some of the most important aspects of the Python language that is used daily by real Astronomers and Physicists. Topics will include: The logic of Python, including while loops and if/else statements Function definitions and how to make your own A review of numpy arrays and a discussion of their usefulness in solving real problems Reading in data from text and numpy file formats, along with creating your own outputs to be used later A. Logic, If/Else, and Loops You can make conditional (logical) statements in Python, which return either "True" or "False", also known as "Booleans." A basic logic statement is something that we've used already: x < y. Here is this one again, and a few more. End of explanation #Example of and operator (1<2)and(2<3) #Example of or operator (1<2)or(2>3) #Example of not operator not(1>2) Explanation: If you let a and b be conditional statements (like the above statements, e.g. a = x < y), then you can combine the two together using logical operators, which can be thought of as functions for conditional statements. There are three logical operators that are handy to know: And operator: a and b Or operator: a or b Not operator: not(a) End of explanation x = 1 y = 2 if (x < y): print("Yup, totally true!") else: print("Nope, completely wrong!") Explanation: Now, these might not seem especially useful at first, but they're the bread and butter of programming. Even more importantly, they are used when we are doing if/else statements or loops, which we will now cover. An if/else statement (or simply an if statement) are segments of code that have a conditional statement built into it, such that the code within that segment doesn't activate unless the conditional statement is true. Here's an example. Play around with the variables x and y to see what happens. End of explanation x = 1 y = 2 if (x>y): print("The condition is True!") x+y Explanation: The idea here is that Python checks to see if the statement (in this case "x < y") is True. If it is, then it will do what is below the if statement. The else statement tells Python what to do if the condition is False. Note that Python requires you to indent these segments of code, and WILL NOT like it if you don't. Some languages don't require it, but Python is very particular when it comes to this point. (The parentheses around the conditional statement, however, are optional.) You also do not need an "else" segment, which effectively means that if the condition isn't True, then that segment of code doesn't do anything, and Python will just continue on past the if statement. Here is an example of such a case. Play around with it to see what happens when you change the values of x and y. End of explanation x = 1 while (x <= 10): print(x) x = x+1 Explanation: While-loops are similar to if statements, in the sense that they also have a conditional statement that is built into it and it executes when the conditional is True. However, the only difference is, it will KEEP executing that segment of code until the conditional statement becomes False. This might seem a bit strange, but you can get the hang of it! For example, let's say we want Python to count from 1 to 10. 
End of explanation x = 2 i = 0 #dummy variable while (i<10): x = 2*x print(x) i = i+1 #another way to write this is i+=1, but it's idiosyncratic and we won't use it here Explanation: Note here that we tell Python to print the number x (x starts at 1) and then redefining x as itself +1 (so, x=1 gets redefined to x = x+1 = 1+1 = 2). Python then executes the loop again, but now x has been incremented by 1. We continue this process from x = 1 to x = 10, printing out x every time. Thus, with a fairly compact bit of code, you get 10 lines of output. It is sometimes handy to define what is known as a DUMMY VARIABLE, whose only job is to count the number of times the loop has been executed. Let's call this dummy variable i. End of explanation #Defining a square root function def sqrt(x): if (x < 0): print("Your input is not positive!") else: return x**(1/2) sqrt(4) sqrt(-4) Explanation: But...what if the conditional statement is always true? B. Defining Your Own Functions So far, we have really focused on using built-in functions (such as from numpy and matplotlib), but what about defining our own? This is easy to do, and can be a way to not only clean up your code, but also allows you to apply the same set of operations to multiple variables without having to explicitly write it out every time. For example, let's say we want to define a function that takes the square root of a number. It's probably a good idea to check if the number is positive first, otherwise we'll end up with an imaginary answer. End of explanation import math print(math.sqrt(25)) print(math.sin(math.pi/2)) print(math.exp(math.pi)-math.pi) Explanation: So the outline for a function is python def &lt;function name&gt; (&lt;input variable&gt;): &lt;some code here&gt; return &lt;output variable&gt; In general, many common mathematical functions like sqrt, log, exp, sin, cos can be found in the math module. So we don't have to write our own - phew! End of explanation def gcd(a, b): Calculate the Greatest Common Divisor of a and b. Unless b==0, the result will have the same sign as b (so that when b is divided by it, the result comes out positive). while b > 0: a, b = b, a%b return a print(gcd(120,16)) Explanation: When defining your own functions, you can also use multiple input variables. For example, if we want to find the greatest common divisor (gcd) of two numbers, we could apply something called Euclid's algorithm. We define gcd(a,0) = a. Then we note that the gcd(a,b) = gcd(b,r), where r is the remainderwhen a is divided by b. So we can repeat this process until we end up with zero remainder. Then, we return whatever number is left in a as the greatest common divisor. A command you might not have encountered yet is %. The expression x % y returns the remainder when x is divided by y. End of explanation # Answer def fib(n): Return nth element of the Fibonacci sequence. # Create the base case n0 = 0 n1 = 1 # Loop n times. Just ignore the variable i. for i in range(n): n_new = n0 + n1 n0 = n1 n1 = n_new return n0 Explanation: Challenge 1 - Fibonacci Sequence and the Golden Ratio The Fibonacci sequence is defined by $f_{n+1} =f_n + f_{n-1}$ and the initial values $f_0 = 0$ and $f_1 = 1$. The first few elements of the sequence are: $0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377 ...$ Using what you just learned about functions, define a function fib, which calculates the $n$-th element in the Fibonacci sequence. 
It should have one input variable, $n$ and its return value should be the $n$-th element in the Fibonacci sequence. End of explanation #Answer: phi_approx_output_format = \ Approximation order: {:d} fib_n: {:g} fib_(n-1): {:g} phi: {:.25f} def phi_approx(n, show_output=True): Return the nth-order Fibonacci approximation to the golden ratio. fib_n = fib(n) fib_nm1 = fib(n - 1) phi = fib_n/fib_nm1 if show_output: print(phi_approx_output_format.format(n, fib_n, fib_nm1, phi)) return phi Explanation: The ratio of successive elements in the Fibonacci sequence converges to $$\phi = (1 + \sqrt{5})/2 ≈ 1.61803\dots$$ which is the famous golden ratio. Your task is to approximate $\phi$. Define a function phi_approx that calculates the approximate value of $\phi$ obtained by the ratio of the $n$-th and $(n−1)$-st elements, $$f_n /f_{n-1} \approx \phi$$ phi_approx should have one variable, $n$. Its return value should be the $n$-th order approximation of $\phi$. End of explanation import numpy as np Explanation: C. Numpy Arrays - Review of Basics and Some More Advanced Topics Recall in the first lecture that we introduced a python module known as numpy and type of variable known as a numpy array. For review, we will call numpy to be imported into this notebook so we can use its contents. End of explanation x = np.array([1,2,3,4,5]) print(x) Explanation: Here we are calling in the contents of numpy and giving it the shorthand name 'np' for convenience. To create an array variable (let's call it 'x'), we simply assign 'x' to be equal to the output of the np.array() function, using a list as an input. You can then verify its contents by using the print() function. End of explanation y = x**2 print(y) Explanation: As we learned in Lecture 1, numpy arrays are convenient because they allow us to do math across the whole array and not just individual numbers. For example, let's say we want to make a new variable 'y' such that y = $x^{2}$, then this is done simply as End of explanation data = np.zeros(10) print(data) Explanation: The documentation of possible functions that can be applied to integers and floats (i.e. single numbers), as well as numpy arrays, can be found here: https://docs.scipy.org/doc/numpy/reference/routines.math.html As discussed previously, there are numerous ways to create arrays beyond np.numpy(). These include: * np.arange() * np.linspace() These create arrays of numbers within a range with a specific step-size between each consecutive number in the array. It is sometimes convenient to have Python create other arrays for you, depending on the problem that you are going to solve. For example, sometimes it is handy to create an array of all zeros, which can then be replaced later with data. This can be done by using np.zeros(). Going back to the Fibonacci example, let's say we want to store the first 10 elements of the Fibonacci sequence in an array for easy access in the future. To ready such an array, you simply do the following. End of explanation data[0] = fib(0) print(data[0]) Explanation: Now how do we assign a new value to an element of the array? We use the following "square bracket" notation: array_name[index_number] = value In this, the array (with the name "array_name" or whatever it is you have named it) will have "value" replace whatever is in the position corresponding to "index_number." Arrays are numbered starting from 0, such that First position = 0 Second position = 1 Third position = 2 etc. It is a bit confusing, but after a bit of time, this becomes quite natural. 
Let's practice with the Fibonacci example. First, let's store the first Fibonacci number in our array. We use the brackets to store that value in the first position (0 index number) in the data array we made above. End of explanation #Your code goes here Explanation: Now you try it. Store the second Fibonacci number in the second position of your array and use a print statement to verify that you have done so. End of explanation #Your code goes here Explanation: Python array indexing is fairly straightforward once you get the hang of it. Let's say you wanted the last element of the array, but you don't quite recall the size of the array. One of the easiest ways to access that element is to use negative indexing. Negative indexing is the same as normal indexing, but backward, in the sense that you start with the last element of the array and count forward. More explicitly, for any array: array[-1] = last element of array array[-2] = second to last element of the array array[-3] = third to last element of the array etc Now then, let's create an array using np.arange() with 10 elements, and see if you can access the last element and the second to last element using negative indexing. Print out these values. End of explanation x = np.linspace(0,10,100) Explanation: Now, sometimes its useful to access more than one element of an array. Let's say that we have an array with 100 elements in the range [0,10] (including endpoints). If you recall, this can be done via the np.linspace() function. End of explanation x[0:3] Explanation: Now then, in order to get a range of elements rather than simply a single one, we use the notation: x[i_start,i_end+1] For example, let's say you want the 1st, 2nd, and 3rd element, then you'd have to do x[0:3] In this notation, ":" represents you want everything between 0 and 3, and including 0. Let's test this. End of explanation #Your code goes here Explanation: If you want everything passed a certain point of the array (including that point), then you would just eliminate the right number, for example x[90:] would give you everything after (and including) the 90 index element. Similarly, if you want everything before a certain index x[:90] would give you everything before the 90 index element. So, let's say that you would want everything up to and including the tenth element of the array $x$. How would you do that? (Remember, the tenth element has an index of 9) End of explanation #Your code here #Answers g = -9.8 v = np.array([3.,3.]) r = np.array([0.,0.]) Explanation: Finally, simply using the ":" gives you all the elements in the array. Challenge 2 - Projectile Motion In this challenge problem, you will be building what is known as a NUMERICAL INTEGRATOR in order to predict the projectiles trajectory through a gravitational field (i.e. what happens when you throw a ball through the air) Let's say that you have a projectile (let's say a ball) in a world with 2 spatial dimensions (dimensions x and y). This world has a constant acceleration due to gravity (call it simply g) that points in the -y direction and has a surface at y = 0. Can we calculate the motion of the projectile in the x-y plane after the projectile is given some initial velocity vector v? In particular, can we predict where the ball will land? With loops, yes we can! Let's first define all of the relevant variables so far. 
Let g = -9.8 (units of m/s, so an Earth-like world), the initial velocity vector being an array v = [3.,3.], and an initial position vector (call it r) in the x-y plane of r = [0.,1.]. For ease, let's use numpy arrays for the vectors. End of explanation def intV(v,g,dt): deltaVy = g*dt vXnew = v[0] vYnew = v[1]+deltaVy return np.array([vXnew,vYnew]) Explanation: Now then, remember that, $a = \frac{dv}{dt}$ and thus, $dv = a\ dt$ So, the change of an objects velocity ($dv$) is equal to the acceleration ($a = g$ in this case) multiplied by the change in time ($dt$) Likewise: $v_{x} = \frac{dx}{dt}$ and $v_{y} = \frac{dy}{dt}$, or $v_{x}\ dt = dx$ and $v_{y}\ dt = dy$ Now, in this case, since there is only downward acceleration, the change of $v_{x}$ is 0 until the projective hits the ground. Now, we're going to define two functions, one that will calculate the velocity vector components and the other, the position vector components, and returning a new vector with the new components. I'll give you the first one. End of explanation dt = 0.1 intV(v,g,dt) Explanation: Now that we've defined intV (short for "integrate v"), let's use it real quick, just to test it out. Let dt = 0.1 (meaning, your taking a step forward in time by 0.1 seconds). End of explanation #Your code here. #Answer def intR(r,v,dt): rXnew = r[0]+(v[0]*dt) rYnew = r[1]+(v[1]*dt) return np.array([rXnew,rYnew]) Explanation: As you can see, $V_{x}$ hasn't changed, but $V_{y}$ has decreased, representing the projectile slowing down as it's going upward. I'll let you define the function now for the position vector. Call it intR, and it should be a function of (r,v,dt), and remember that now both $r_{x}$ and $r_{y}$ are changing. Remember to return an array. End of explanation #Your code here. #Answer dt = 0.01 while (r[1] >= 0.): v = intV(v,g,dt) r = intR(r,v,dt) print(r) Explanation: Now we have the functions that calculate the changes in the position and velocity vectors. We're almost there! Now, we will need a while-loop in order to step the projectile through its trajectory. What would the condition be? Well, we know that the projectile stops when it hits the ground. So, one way we can do this is to have the condition being (r[1] &gt;= 0), since the ground is defined at y = 0. So, having your intV and intR functions, along with a while-loop and a dt = 0.1 (known as the "step-size"), can you use Python to predict where the projectile will land? End of explanation x = np.array([]) #defining an empty array that will store x position y = np.array([]) #defining an empty array that will store y position Explanation: Now, note that we've defined the while-loop such that it doesn't stop exactly at 0. Firstly, this was strategic, since the initial y = 0, and thus the while-loop wouldn't initialize to begin with (you can try to change it). One way you can overcome this issue is to decrease dt, meaning that you're letting less time pass between each step. Ideally, you'd want dt to be infinitely small, but we don't have that convenience in reality. Re-run the cells, but with dt = 0.01 and we will get much closer to the correct answer. So, we know where it's going to land...can we plot the full trajectory? Yes, but this is a bit complicated, and requires one last function: np.append(). https://docs.scipy.org/doc/numpy/reference/generated/numpy.append.html The idea is to use np.append() to make an array that keeps track of where the ball has been. Let's define two empty arrays that will store our information (x and y). 
This is an odd idea, defining an array variable without any elements, so instead think of it as a basket without anything inside of it yet, and we will np.append() to fill it. End of explanation #Your code goes here #Answer v = np.array([3.,3.]) r = np.array([0.,0.]) dt = 0.01 while (r[1] >= 0.): v = intV(v,g,dt) r = intR(r,v,dt) x = np.append(x,r[0]) y = np.append(y,r[1]) print(r) plt.plot(x,y,'o') plt.show() Explanation: Now, all you have to do, is each time the while-loop executes, you use np.append() for the x and y arrays, adding the new values to the end of them. How do you do that? Well, looking at the np.append() documentation, for x, you do x = np.append(x,[r[0]]) The same syntax is used for the y array. After that, you simply use plt.plot(x,y,'o') to plot the trajectory of the ball (the last 'o' is used to change the plotting from a line to points). Good luck! Also, don't forget to reset your v and r arrays (otherwise, this will not work) End of explanation %matplotlib inline import matplotlib.pyplot as plt Explanation: If everything turns out alright, you should get the characteristic parabola. Also, if you're going to experiment with changing the intial position and velocity, remember to re-run the cell where we define the x and y arrays in order to clear the plot. Now you've learned how to do numerical integration. This technique is used all throughout Physics and Astronomy, and while there are more advanced ways to do it in order to increase accuracy, the heart of the idea is the same. Here is a figure made by Corbin Taylor (head of the Python team) that used numerical integration to track the position of a ray of light as it falls into a spinning black hole. <img src="./raytrace_picture.jpg"> D. Loading And Saving Data Arrays So, we have learned a lot about data arrays and how we can manipulate them, either through mathematics or indexing. However, up until this point, all we've done is use arrays that we ourselves created. But what happens if we have data from elsewhere? Can Python use that? The answer is of course yes, and while there are ways to import data that are a bit complicated at times, we're going to teach you some of the most basic, and most useful, ways. For this section, we will be using plotting to visualize the data as you're working with it, and as such, we will be loading in the package "matplotlib.pyplot" which you used in Lecture 1 End of explanation timeseriesData = np.loadtxt("./lecture2_data/timeseries_data.txt") Explanation: Now then, let's say we are doing a timing experiment, where we look at the brightness of an object as a function of time. This is actually a very common type of measurement that you may do in research, such as looking for dips in the brightness of stars as a way to detect planets. This data is stored in a text file named timeseries_data.txt in the directory lecture2_data. Let's load it in. End of explanation timeseriesData.shape Explanation: Now we have the data loaded into Python as a numpy array, and one handy thing you can do is to use Python to find the dimensions of the array. This is done by using ".shape" as so. End of explanation t = timeseriesData[0,:] signal = timeseriesData[1,:] Explanation: In this format, we know that this is a 2x1000 array (two rows, 1000 columns). Another way you can think about this is that you have two 1000-element arrays contained within another array, where each of those arrays are elements (think of it as an array of arrays). 
The first row is the time stamps when each measurement was taken, while the second row is that of the value of the measurement itself. For ease of handling this data, one can in principle take each of these rows and create new arrays out of them. Let's do just that. End of explanation #Your code here #Answer plt.plot(t,signal) plt.show() Explanation: Here, you have 2 dimensions with the array timeseriesData, and as such much specify the row first and then the column. So, - array_name[n,:] is the n-th row, and all columns within that row. - array_name[:,n] is the n-th column, and all rows within that particular column. Now then, let's see what the data looks like using the plot() function that you learned last time. Do you remember how to do it? Why don't you try! Plot t as your x-axis and signal as your y-axis. Don't forget to show your plot. End of explanation cutOff = 15. signalFix = signal[signal < cutOff] Explanation: Looking at our data, you see clear spikes that jump well above most of the signal. (I've added this to the data to represent outliers that may sometimes appear when you're messing with raw data, and those must be dealt with). In astronomy, you sometimes have relativistic charged particles, not from your source, that hit the detector known as cosmic rays, and we often have to remove these. There are some very complex codes that handle cosmic rays, but for our purposes (keeping it easy), we're going to just set a hard cut off of, let's say 15. In order to do this, we can use conditional indexing in place of normal indices. This involves taking a conditional statement (more on those later) and testing whether it evaluates to True on each element in the array. This gives an array of Booleans, which we can use as logical indices to select only the entries for which the logical statement is True. End of explanation tFix = t[signal < cutOff] Explanation: In this case, the conditional statement that we have used is signal &lt; cutOff. Here, conditional indexing keeps the data that we have deemed "good" by this criteria. We can also do the same for the corresponding time stamps, since t and signal have the same length. End of explanation #Your code goes here plt.plot(tFix,signalFix) plt.show() Explanation: Now let's plot it. You try. End of explanation dataFix = np.array([tFix,signalFix]) Explanation: Now that you have your data all cleaned up, it would be nice if we could save it for later and not have to go through the process of cleaning it up every time. Fear not! Python has you covered. There are two formats that we are going to cover, one that is Python-specific, and the other a simple text format. First, we must package our two cleaned up arrays into one again. This can be done simply with the np.array() function. End of explanation np.save('./lecture2_data/dataFix.npy',dataFix) np.savetxt('./lecture2_data/dataFix.txt',dataFix) Explanation: Then, we can use either the np.save() function or the np.savetxt function, the first saving the array into a '.npy' file and the other, into a '.txt' file. The syntax is pretty much the same for each. End of explanation data = np.load('./lecture2_data/dataFix.npy') t = data[0,:] signal = data[1,:] plt.plot(t,signal) plt.show() Explanation: Now that your data files are saved, you can load them up again, using np.loadtxt() and np.load() for .txt and .npy files respectively. We used np.loadtxt() above, and np.load works the same way. So, let's load in the .npy file and see if our data was saved correctly. 
End of explanation #Your code goes here Explanation: Now, let's see if you can do the same thing, but with the .txt file that we saved. End of explanation
12,085
Given the following text description, write Python code to implement the functionality described below step by step Description: python async programming 非同步編程在python中最近是越來越受歡迎,在python中有著許多libraries是用來做非同步的,其中之一是asyncio而且這也是讓python在async編程受歡迎的主因,在開始正題前,我們先來理解一些歷史緣由。 在普遍的程式,執行順序都是一行一行執行,每次要繼續往下執行前,都會等著上一行完成,也就是俗稱的Sequential programming,那麼這樣的編程可能會遇到什麼問題呢? 最大的問題就是如果上一行執行太久的話,我一定要等上一行執行完我才能夠繼續往下走嗎? 最常見的情況就是api request,得到回傳結果,我才能繼續往下走,但是其實我下面接著要做的並不用等這個結果就可以執行了,所以就會耗費無意義的時間,為了解決這樣的事情,會使用thread。 process 可以產生多個 thread,可以讓你的程式一次做很多事情,把它想成影分身,主體只有一個,但是你的分身卻可以同時幫你做其他事情。 上面這張圖,說明了什麼? 帥! 哈哈,其實是想表達,鳴人自己(process),開出了很多分身(thread),每個分身都做不同的事情 方便吧! 但是 thread 是有他的問題存在的,其中像是 race condition dead lock resource starvavtion 先撇除上面會遇到的問題,thread還有著一個成本就是cpu的context switch,因為一顆cpu一次只能run一個thread,它實際上背後用很快的速度在進行thread的交換並執行,這就是所謂的context switch。那麼會有既可以達到多工的效果,又可以免除遇到上述的race condition等等問題的技術存在嗎?! 答案是有的,那就是今天我們要講的主題 python async io,ayncio背後其實是用到coroutine的概念實作,從wiki上面來看,其實coroutine就是一種可以中斷及繼續執行函式呼叫的技術,直接從下面的例子來看! Step1: 上面就是python最基本支援coroutine的使用方式,第一個function n_hello 是一般的for loop版本的印出數字,另外一個function c_hello 是使用yield,藉此讓你看看兩者行為,明顯的感受出使用yield可以將程式的執行順序從subroutine轉回到main,繼續呼叫next又可以跳回去subroutine。 有沒有覺得跟multi thread很像呢,基本上是行為是差不多的,但是coroutine是基於中斷函式,繼續執行其他函式的方式來達到多工,並不像multi thread,會有同時兩個thread執行同份程式碼的問題,進而造成前面所說的,race condition, dead lock.. 那些問題,前面使用鳴人的影分身來比喻multi thread,對於coroutine,我個人想要使用下面這張來比喻 影子模仿術,鹿丸放出多條影子(coroutine),藉由自己的大腦來控制所有人的行動。 那麼接著再稍微深入看看yield的使用方式,前面使用方式是yield把值從function傳出去,那麼我們今天可以把值從外面傳到function裡面使用嗎? 答案是可以的! 以下看看例子 Step2: 根據上面的使用情境,你應該會覺得多多少少可以有更方便的用法才對,因此python的確在pep380有提出yield from這個語法糖 Step3: syntax sugar @asyncio.coroutine => async yield from => await asyncio vs thread asyncio 神秘在哪? 讓我們來瞧瞧 https
Python Code: import time def n_hello(): for i in range(6): print(i) def c_hello(): for i in range(4): print('in function {}'.format(i)) yield i def infinit_loop(): num = 0 while True: num += 1 print(num) yield n_hello() print("=====") c = c_hello() next(c) print("come back to main") next(c) next(c) Explanation: python async programming 非同步編程在python中最近是越來越受歡迎,在python中有著許多libraries是用來做非同步的,其中之一是asyncio而且這也是讓python在async編程受歡迎的主因,在開始正題前,我們先來理解一些歷史緣由。 在普遍的程式,執行順序都是一行一行執行,每次要繼續往下執行前,都會等著上一行完成,也就是俗稱的Sequential programming,那麼這樣的編程可能會遇到什麼問題呢? 最大的問題就是如果上一行執行太久的話,我一定要等上一行執行完我才能夠繼續往下走嗎? 最常見的情況就是api request,得到回傳結果,我才能繼續往下走,但是其實我下面接著要做的並不用等這個結果就可以執行了,所以就會耗費無意義的時間,為了解決這樣的事情,會使用thread。 process 可以產生多個 thread,可以讓你的程式一次做很多事情,把它想成影分身,主體只有一個,但是你的分身卻可以同時幫你做其他事情。 上面這張圖,說明了什麼? 帥! 哈哈,其實是想表達,鳴人自己(process),開出了很多分身(thread),每個分身都做不同的事情 方便吧! 但是 thread 是有他的問題存在的,其中像是 race condition dead lock resource starvavtion 先撇除上面會遇到的問題,thread還有著一個成本就是cpu的context switch,因為一顆cpu一次只能run一個thread,它實際上背後用很快的速度在進行thread的交換並執行,這就是所謂的context switch。那麼會有既可以達到多工的效果,又可以免除遇到上述的race condition等等問題的技術存在嗎?! 答案是有的,那就是今天我們要講的主題 python async io,ayncio背後其實是用到coroutine的概念實作,從wiki上面來看,其實coroutine就是一種可以中斷及繼續執行函式呼叫的技術,直接從下面的例子來看! End of explanation def g(x): for i in range(x): yield i def will_cause_exception(): x = yield print("wow {}".format(x)) return x def infinite_send(): while True: x = yield print("send {}".format(x)) w = will_cause_exception() next(w) try: w.send(5) except StopIteration as e: return_value = e.value # the function return value will be store in the exception's value print(return_value) # let you see exception raise e Explanation: 上面就是python最基本支援coroutine的使用方式,第一個function n_hello 是一般的for loop版本的印出數字,另外一個function c_hello 是使用yield,藉此讓你看看兩者行為,明顯的感受出使用yield可以將程式的執行順序從subroutine轉回到main,繼續呼叫next又可以跳回去subroutine。 有沒有覺得跟multi thread很像呢,基本上是行為是差不多的,但是coroutine是基於中斷函式,繼續執行其他函式的方式來達到多工,並不像multi thread,會有同時兩個thread執行同份程式碼的問題,進而造成前面所說的,race condition, dead lock.. 那些問題,前面使用鳴人的影分身來比喻multi thread,對於coroutine,我個人想要使用下面這張來比喻 影子模仿術,鹿丸放出多條影子(coroutine),藉由自己的大腦來控制所有人的行動。 那麼接著再稍微深入看看yield的使用方式,前面使用方式是yield把值從function傳出去,那麼我們今天可以把值從外面傳到function裡面使用嗎? 答案是可以的! 
以下看看例子 End of explanation def test_yield_from(): w = will_cause_exception() value = yield from w print("no exception {}".format(value)) yield t = test_yield_from() next(t) t.send(10) def amazing_yeild_from(x): yield from range(x) yield from range(x-1, -1, -1) print(list(amazing_yeild_from(5))) %%time import asyncio import requests @asyncio.coroutine def aio_requests(url): r = requests.get(url) return r @asyncio.coroutine def aio_response(response): data = response.text return data urls = ['http://www.google.com', 'http://www.yandex.ru', 'http://www.python.org', 'http://www.python.org', 'http://www.python.org'] @asyncio.coroutine def call_url(url): response = yield from aio_requests(url) data = yield from aio_response(response) print('{}: {} bytes'.format(url, len(data))) return data futures = [call_url(url) for url in urls] loop = asyncio.get_event_loop() loop.run_until_complete(asyncio.wait(futures)) %%time def syn_call_url(url): r = requests.get(url) data = r.text print('{}: {} bytes'.format(url, len(data))) for url in urls: syn_call_url(url) %%time async def async_requests(url): r = requests.get(url) return r async def async_response(response): data = response.text return data async def call_url(url): response = await async_requests(url) data = await async_response(response) print('{}: {} bytes'.format(url, len(data))) return data futures = [call_url(url) for url in urls] loop = asyncio.get_event_loop() loop.run_until_complete(asyncio.wait(futures)) Explanation: 根據上面的使用情境,你應該會覺得多多少少可以有更方便的用法才對,因此python的確在pep380有提出yield from這個語法糖 End of explanation %%time import time time.sleep(1) Explanation: syntax sugar @asyncio.coroutine => async yield from => await asyncio vs thread asyncio 神秘在哪? 讓我們來瞧瞧 https://www.reddit.com/r/learnpython/comments/5qwm5h/asyncio_for_dummies/dd432ke/ golang 沒有 reentrant lock End of explanation
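One caveat worth adding to the timing comparison above: requests.get() is a blocking call, so wrapping it in @asyncio.coroutine or async def does not let the downloads overlap; the event loop is stuck while each request runs. A minimal added sketch of one way around that, pushing the blocking call onto the default thread pool with run_in_executor (it assumes only the asyncio and requests modules already used above):

```python
import asyncio
import requests

urls = ['http://www.python.org', 'http://www.google.com', 'http://www.python.org']

async def fetch(url, loop):
    # requests.get blocks, so hand it to a worker thread and await the result;
    # the event loop stays free to start the other fetches in the meantime.
    response = await loop.run_in_executor(None, requests.get, url)
    print('{}: {} bytes'.format(url, len(response.text)))
    return response

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*(fetch(u, loop) for u in urls)))
```

An async HTTP client (aiohttp, for example) achieves the same overlap without threads.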
12,086
Given the following text description, write Python code to implement the functionality described below step by step Description: Classic Monty Hall Bayesian Network authors Step1: Let's create the distributions for the guest and the prize. Note that both distributions are independent of one another. Step2: Now let's create the conditional probability table for our Monty. The table is dependent on both the guest and the prize. Step3: Now lets create the states for the bayesian network. Step4: Then the bayesian network itself, adding the states in after. Step5: Then the transitions. Step6: With a "bake" to finalize the structure of our network. Step7: Now we can check the possible states in our network. Step8: Now we can see what happens to our network when our Guest chooses 'A'. Step9: Now our host chooses 'B'. (note that prize goes to 66% if you switch) Step10: We can also see what happens if our host simply chooses 'B'. Step11: Now let's train our network on the following set of data. Step12: Let's see the results! Starting with the Monty. Step13: Then our Prize. Step14: Finally our Guest.
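An added aside before the reference code: the 27-row conditional probability table handed to pomegranate below is just the game rule written out, so it can be generated and sanity-checked in a few lines (door names and probabilities as in this example).

```python
doors = ['A', 'B', 'C']
cpt = []
for guest in doors:
    for prize in doors:
        for monty in doors:
            if monty in (guest, prize):
                p = 0.0          # Monty never opens the guest's door or the prize door
            elif guest == prize:
                p = 0.5          # two losing doors are available to Monty
            else:
                p = 1.0          # only one legal door remains
            cpt.append([guest, prize, monty, p])

# each conditional distribution over Monty's door must sum to 1
for g in doors:
    for pr in doors:
        assert sum(row[3] for row in cpt if row[:2] == [g, pr]) == 1.0
```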
Python Code: import math from pomegranate import * Explanation: Classic Monty Hall Bayesian Network authors:<br> Jacob Schreiber [<a href="mailto:[email protected]">[email protected]</a>]<br> Nicholas Farn [<a href="mailto:[email protected]">[email protected]</a>] Lets test out the Bayesian Network framework to produce the Monty Hall problem, but modified a little. The Monty Hall problem is basically a game show where a guest chooses one of three doors to open, with an unknown one having a prize behind it. Monty then opens another non-chosen door without a prize behind it, and asks the guest if they would like to change their answer. Many people were surprised to find that if the guest changed their answer, there was a 66% chance of success as opposed to a 50% as might be expected if there were two doors. This can be modelled as a Bayesian network with three nodes-- guest, prize, and Monty, each over the domain of door 'A', 'B', 'C'. Monty is dependent on both guest and prize, in that it can't be either of them. Lets extend this a little bit to say the guest has an untrustworthy friend whose answer he will not go with. End of explanation guest = DiscreteDistribution( { 'A': 1./3, 'B': 1./3, 'C': 1./3 } ) prize = DiscreteDistribution( { 'A': 1./3, 'B': 1./3, 'C': 1./3 } ) Explanation: Let's create the distributions for the guest and the prize. Note that both distributions are independent of one another. End of explanation monty = ConditionalProbabilityTable( [[ 'A', 'A', 'A', 0.0 ], [ 'A', 'A', 'B', 0.5 ], [ 'A', 'A', 'C', 0.5 ], [ 'A', 'B', 'A', 0.0 ], [ 'A', 'B', 'B', 0.0 ], [ 'A', 'B', 'C', 1.0 ], [ 'A', 'C', 'A', 0.0 ], [ 'A', 'C', 'B', 1.0 ], [ 'A', 'C', 'C', 0.0 ], [ 'B', 'A', 'A', 0.0 ], [ 'B', 'A', 'B', 0.0 ], [ 'B', 'A', 'C', 1.0 ], [ 'B', 'B', 'A', 0.5 ], [ 'B', 'B', 'B', 0.0 ], [ 'B', 'B', 'C', 0.5 ], [ 'B', 'C', 'A', 1.0 ], [ 'B', 'C', 'B', 0.0 ], [ 'B', 'C', 'C', 0.0 ], [ 'C', 'A', 'A', 0.0 ], [ 'C', 'A', 'B', 1.0 ], [ 'C', 'A', 'C', 0.0 ], [ 'C', 'B', 'A', 1.0 ], [ 'C', 'B', 'B', 0.0 ], [ 'C', 'B', 'C', 0.0 ], [ 'C', 'C', 'A', 0.5 ], [ 'C', 'C', 'B', 0.5 ], [ 'C', 'C', 'C', 0.0 ]], [guest, prize] ) Explanation: Now let's create the conditional probability table for our Monty. The table is dependent on both the guest and the prize. End of explanation s1 = State( guest, name="guest" ) s2 = State( prize, name="prize" ) s3 = State( monty, name="monty" ) Explanation: Now lets create the states for the bayesian network. End of explanation network = BayesianNetwork( "test" ) network.add_states( s1, s2, s3 ) Explanation: Then the bayesian network itself, adding the states in after. End of explanation network.add_transition( s1, s3 ) network.add_transition( s2, s3 ) Explanation: Then the transitions. End of explanation network.bake() Explanation: With a "bake" to finalize the structure of our network. End of explanation print("\t".join([ state.name for state in network.states ])) Explanation: Now we can check the possible states in our network. End of explanation observations = { 'guest' : 'A' } beliefs = map( str, network.predict_proba( observations ) ) print("\n".join( "{}\t{}".format( state.name, belief ) for state, belief in zip( network.states, beliefs ) )) Explanation: Now we can see what happens to our network when our Guest chooses 'A'. 
End of explanation observations = { 'guest' : 'A', 'monty' : 'B' } beliefs = map( str, network.predict_proba( observations ) ) print("\n".join( "{}\t{}".format( state.name, belief ) for state, belief in zip( network.states, beliefs ) )) Explanation: Now our host chooses 'B'. (note that prize goes to 66% if you switch) End of explanation observations = { 'monty' : 'B' } beliefs = map( str, network.predict_proba( observations ) ) print("\n".join( "{}\t{}".format( state.name, belief ) for state, belief in zip( network.states, beliefs ) )) Explanation: We can also see what happens if our host simply chooses 'B'. End of explanation data = [[ 'A', 'A', 'C' ], [ 'A', 'A', 'C' ], [ 'A', 'A', 'B' ], [ 'A', 'A', 'A' ], [ 'A', 'A', 'C' ], [ 'B', 'B', 'B' ], [ 'B', 'B', 'C' ], [ 'C', 'C', 'A' ], [ 'C', 'C', 'C' ], [ 'C', 'C', 'C' ], [ 'C', 'C', 'C' ], [ 'C', 'B', 'A' ]] network.fit( data ) Explanation: Now let's train our network on the following set of data. End of explanation print(monty) Explanation: Let's see the results! Starting with the Monty. End of explanation print(prize) Explanation: Then our Prize. End of explanation print(guest) Explanation: Finally our Guest. End of explanation
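As an added cross-check on the network output above (pure Python, nothing from pomegranate), brute-force enumeration recovers the same 1/3 versus 2/3 split for P(prize | guest='A', monty='B'):

```python
from itertools import product

doors = ['A', 'B', 'C']
weights = {}
for guest, prize, monty in product(doors, doors, doors):
    if monty == guest or monty == prize:
        p_monty = 0.0
    elif guest == prize:
        p_monty = 0.5                      # Monty may open either losing door
    else:
        p_monty = 1.0                      # only one legal door left
    weights[(guest, prize, monty)] = (1 / 3) * (1 / 3) * p_monty

evidence = {k: w for k, w in weights.items() if k[0] == 'A' and k[2] == 'B'}
total = sum(evidence.values())
for (guest, prize, monty), w in sorted(evidence.items()):
    print(prize, round(w / total, 3))      # prints A 0.333, B 0.0, C 0.667
```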
12,087
Given the following text description, write Python code to implement the functionality described below step by step Description: Quaternion Series Quantum Mechanics Step1: Lecture 1 Step2: The first term is a real-valued, with the 3-imaginary vector equal to zero. I think it is bad practice to just pretend the three zeros are not there in any way. One can make an equivalence relation between quaternions of the form $(\mathbb{R}, 0, 0, 0)$ and the real numbers. The real numbers are a subgroup of quaternions, and never the other way around. It is important to understand exactly why the three imaginary terms are zero. It is too common for people to say "it's the norm" and give the subject no thought. No thought means no insights. A quaternion points in the direction of itself, so all the anti-symmetric cross terms are equal to zero. The conjugate operator picks out the mirror reflection of the imaginary terms. The product of an imaginary with its mirror image is an all positive real number and zero for all three imaginary numbers. Calculus is the story of neighborhoods near points. There are two broad classes of changes one can imagine for a norm. In the first, a point $A$ goes to $A'$. It could be either slightly bigger or smaller, shown in a slightly bigger or smaller first value. Or the mirror reflection to be slightly off. This would create a non-zero space-times-time 3-vector. Everyone accepts that a norm can get larger or smaller, it is a "size" thing. But a change in direction will lead to imaginary terms that can either commute, anti-commute, or be a mixture of both. This possibility makes this view of a quaternion norm sound richer. Test out the second identity Step3: Note on notation Step4: Subtracting one from the other shows they are identical. There are many more algebraic relationships known for Hilbert spaces such as the triangle inequality and the Schwarz inequality which is the basis of the uncertainty principle. These all work for the Euclidean product with quaternions. Lecture 2 Step5: A little calculation in the head should show this works as expected - except one is not used to seeing quaternion series in action. The first system analyzed has but 2 states, keeping things simple. The first pair of states are likewise so simple they are orthonormal to a casual observer. Step6: Calculate $<u|u>$, $<d|d>$ and $<u|d>$ Step7: The next pair of states is constructed from the first pair, $u$ and $d$ like so (QM Step8: The final calculation for chapter 2 is like the one for $r$ and $L$ except one uses an arbitrarily chosen imaginary value - it could point any direction in 3D space - like so Step9: Notice how long the qtypes have gotten (the strings that keep a record of all the manipulations done to a quaternion). The initial state was just a zero and a one, but that had to get added to another and normalized, then multiplied by a factor of $i$ and combined again. Orthonormal again, as hoped for. Is the quaternion series approach a faithful representation of these 6 states? On page 43-44, there are 8 products that all add up to one half. See if this works out... Step10: There is an important technical detail in this calculation I should point out. In the <bra|ket> form, the bra gets conjugated. Notice though that if one does two of these, < i | L >< L | i >, then there has to be a product formed between the two brackets. In practice, < i | L >* < L | i > gives the wrong result
Python Code: %%capture %matplotlib inline import numpy as np import sympy as sp import matplotlib.pyplot as plt # To get equations the look like, well, equations, use the following. from sympy.interactive import printing printing.init_printing(use_latex=True) from IPython.display import display # Tools for manipulating quaternions. import Q_tools as qt; Explanation: Quaternion Series Quantum Mechanics: Lectures 1 and 2 by Doug Sweetser, email to [email protected] This notebook is being created as a companion to the book "Quantum Mechanics: the Theoretical Minimum" by Susskind and Friedman (QM:TTM for short). Those authors of course never use quaternions as they are a bit player in the crowded field of mathematical tools. Nature has used one accounting system since the beginning of space-time, so I will be a jerk in the name of consistency. This leads to a different perspective on what makes an equation quantum mechanical. If a conjugate operator is used, then the expression is about quantum mechanics. It is odd to have such a brief assertion given the complexity of the subject, but that make the hypothesis fun - and testable by seeing if anything in the book cannot be done with quaternions and their conjugates. Import the tools to work with quaternions in this notebook. End of explanation a0, A1, A2, A3 = sp.symbols("a0 A1 A2 A3") b0, B1, B2, B3 = sp.symbols("b0 B1 B2 B3") c0, C1, C2, C3 = sp.symbols("c0 C1 C2 C3") A = qt.QH([a0, A1, A2, A3], qtype="A") B = qt.QH([b0, B1, B2, B3], qtype="B") C = qt.QH([c0, C1, C2, C3], qtype="C") display(A.conj().product(A).t) display(A.conj().product(A).x) display(A.conj().product(A).y) display(A.conj().product(A).z) Explanation: Lecture 1: Systems and Experiments Bracket Notation and Three Identities Bracket notation from this quaternion-centric perspective is just a quaternion product where the first term must necessarily be conjugated. I have called this the "Euclidean product". The quaternion product is associative but the Euclidean product is not ($(A^ B)^ C \ne A^ (B^ C)$ although their norms are equal). Write out three things in bracket notation that are known to be true about inner products(QM:TTH, p. 31). 1. $<A|A> \rightarrow A^ A$ is real 1. $<A|B> = <B|A>^ \rightarrow A^ B = (B^ A)^$ 1. $(<A|+<B|)|C> = <A|C> + <B|C> \rightarrow (A+ B)^C = A^C + B^ C$ This may provide the first signs that the odd math of quantum mechanics is the math of Euclidean products of quaternions. So, is $A^* A$ real? Yes and no. End of explanation AB_conj = A.Euclidean_product(B) BA = B.Euclidean_product(A).conj() print("(A* B)* = {}".format(AB_conj)) print("B* A = {}".format(BA)) print("(A* B)* - B* A = {}".format(AB_conj.dif(BA))) Explanation: The first term is a real-valued, with the 3-imaginary vector equal to zero. I think it is bad practice to just pretend the three zeros are not there in any way. One can make an equivalence relation between quaternions of the form $(\mathbb{R}, 0, 0, 0)$ and the real numbers. The real numbers are a subgroup of quaternions, and never the other way around. It is important to understand exactly why the three imaginary terms are zero. It is too common for people to say "it's the norm" and give the subject no thought. No thought means no insights. A quaternion points in the direction of itself, so all the anti-symmetric cross terms are equal to zero. The conjugate operator picks out the mirror reflection of the imaginary terms. 
The product of an imaginary with its mirror image is an all positive real number and zero for all three imaginary numbers. Calculus is the story of neighborhoods near points. There are two broad classes of changes one can imagine for a norm. In the first, a point $A$ goes to $A'$. It could be either slightly bigger or smaller, shown in a slightly bigger or smaller first value. Or the mirror reflection to be slightly off. This would create a non-zero space-times-time 3-vector. Everyone accepts that a norm can get larger or smaller, it is a "size" thing. But a change in direction will lead to imaginary terms that can either commute, anti-commute, or be a mixture of both. This possibility makes this view of a quaternion norm sound richer. Test out the second identity: $$(A^ B)^ = (B^*, A)$$ End of explanation A_plus_B_then_C = A.conj().add(B.conj()).product(C).expand_q() AC_plus_BC = A.conj().product(C).add(B.conj().product(C)).expand_q() print("(A+B)* C: {}\n".format(A_plus_B_then_C)) print("A*C + B*C: {}\n".format(AC_plus_BC)) print("(A+B)* C - (A*C + B*C): {}".format(A_plus_B_then_C.dif(AC_plus_BC))) Explanation: Note on notation: someone pointed out that is absolutely all calculations start and end with quaternions, then it is easy to feel lost - this quaternion looks like that one. The string at the end that I call a "qtype" represents all the steps that went into a calculation. The last qtype above reads: AxB-BxA* which hopefully is clear in this contex. Despite the fact that quaternions do not commute, the conjugate operator does the job correctly because the angle between the two quaternions does not change. Now for the third identity about sums. End of explanation A = qt.QHStates([qt.QH([0,1,2,3]), qt.QH([1,2,1,2])]) AA = A.Euclidean_product('bra', ket=A) AA.print_states("<A|A>") Explanation: Subtracting one from the other shows they are identical. There are many more algebraic relationships known for Hilbert spaces such as the triangle inequality and the Schwarz inequality which is the basis of the uncertainty principle. These all work for the Euclidean product with quaternions. Lecture 2: Quantum States Quaternion Series as Quantum States A quantum state is an n-dimensional vector space. This is fundamentally different from a set of states because certain math relationships are allowed. Vectors can be added to one another, multiplied by complex numbers. One can take the inner product of two vectors. Most important calculations involve taking the inner product. A perspective I will explore here is that a (possibly infinite) series of quaternions has the same algebraic properties of Hilbert spaces when one uses the Euclidean product, $A^ B = \sum_{1}^{n} a_n^ b_n$ This only works if the length of the series for A is exactly equal to that of B. Whatever can be done with a quaternion can be done with its series representation. Unlike vectors that can either be be a row or a column, quaternion series only have a length. Let's just do one calculation, < A | A >: End of explanation q0, q1, qi, qj, qk = qt.QH().q_0(), qt.QH().q_1(), qt.QH().q_i(), qt.QH().q_j(), qt.QH().q_k() u = qt.QHStates([q1, q0]) d = qt.QHStates([q0, q1]) u.print_states("u", True) d.print_states("d") Explanation: A little calculation in the head should show this works as expected - except one is not used to seeing quaternion series in action. The first system analyzed has but 2 states, keeping things simple. The first pair of states are likewise so simple they are orthonormal to a casual observer. 
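The two algebraic claims in play here, that the vector part of A* A cancels exactly and that (A* B)* = B* A, can be spot-checked without any of the machinery above. A bare-bones added sketch with a hand-written Hamilton product (the sample values are arbitrary):

```python
def qmul(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def conj(q):
    return (q[0], -q[1], -q[2], -q[3])

A = (1.0, 2.0, -3.0, 0.5)
B = (-2.0, 1.0, 4.0, 2.0)

print(qmul(conj(A), A))        # (square of the norm, 0.0, 0.0, 0.0)
print(conj(qmul(conj(A), B)))  # identical to the next line: (A* B)* = B* A
print(qmul(conj(B), A))
```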
End of explanation u.Euclidean_product('bra', ket=u).print_states("<u|u>") d.Euclidean_product('bra', ket=d).print_states("<d|d>") u.Euclidean_product('bra', ket=d).print_states("<u|d>") Explanation: Calculate $<u|u>$, $<d|d>$ and $<u|d>$: End of explanation sqrt_2op = qt.QHStates([qt.QH([sp.sqrt(1/2), 0, 0, 0])]) u2 = u.Euclidean_product('ket', operator=sqrt_2op) d2 = d.Euclidean_product('ket', operator=sqrt_2op) r = u2.add(d2) L = u2.dif(d2) r.print_states("r", True) L.print_states("L") r.Euclidean_product('bra', ket=r).print_states("<r|r>", True) L.Euclidean_product('bra', ket=L).print_states("<L|L>", True) r.Euclidean_product('bra', ket=L).print_states("<r|L>", True) Explanation: The next pair of states is constructed from the first pair, $u$ and $d$ like so (QM:TTM, page 41): End of explanation i_op = qt.QHStates([q1, q0, q0, qi]) i = r.Euclidean_product('ket', operator=i_op) o = L.Euclidean_product('ket', operator=i_op) i.print_states("i", True) o.print_states("o") i.Euclidean_product('bra', ket=i).print_states("<i|i>", True) o.Euclidean_product('bra', ket=o).print_states("<o|o>", True) i.Euclidean_product('bra', ket=o).print_states("<i|o>") Explanation: The final calculation for chapter 2 is like the one for $r$ and $L$ except one uses an arbitrarily chosen imaginary value - it could point any direction in 3D space - like so: End of explanation ou = o.Euclidean_product('bra', ket=u) uo = i.Euclidean_product('bra', ket=o) print("ouuo sum:\n", ou.product('bra', ket=uo).summation(), "\n") od = o.Euclidean_product('bra', ket=d) do = d.Euclidean_product('bra', ket=o) print("oddo sum:\n", od.product('bra', ket=do).summation(), "\n") iu = i.Euclidean_product('bra', ket=u) ui = u.Euclidean_product('bra', ket=i) print("iuui sum:\n", iu.product('bra', ket=ui).summation(), "\n") id = i.Euclidean_product('bra', ket=d) di = d.Euclidean_product('bra', ket=i) print("iddi sum:\n", id.product('bra', ket=di).summation()) Or = o.Euclidean_product('bra', ket=r) ro = r.Euclidean_product('bra', ket=o) print("orro:\n", Or.product('bra', ket=ro).summation(), "\n") oL = o.Euclidean_product('bra', ket=L) Lo = L.Euclidean_product('bra', ket=o) print("oLLo:\n", oL.product('bra', ket=Lo).summation(), "\n") ir = i.Euclidean_product('bra', ket=r) ri = r.Euclidean_product('bra', ket=i) print("irri:\n", ir.product('bra', ket=ri).summation(), "\n") iL = i.Euclidean_product('bra', ket=L) Li = L.Euclidean_product('bra', ket=i) print("iLLi:\n", iL.product('bra', ket=Li).summation()) Explanation: Notice how long the qtypes have gotten (the strings that keep a record of all the manipulations done to a quaternion). The initial state was just a zero and a one, but that had to get added to another and normalized, then multiplied by a factor of $i$ and combined again. Orthonormal again, as hoped for. Is the quaternion series approach a faithful representation of these 6 states? On page 43-44, there are 8 products that all add up to one half. See if this works out... End of explanation print("iL*Li:\n", iL.Euclidean_product('bra', ket=Li).summation()) Explanation: There is an important technical detail in this calculation I should point out. In the <bra|ket> form, the bra gets conjugated. Notice though that if one does two of these, < i | L >< L | i >, then there has to be a product formed between the two brackets. In practice, < i | L >* < L | i > gives the wrong result: End of explanation
12,088
Given the following text description, write Python code to implement the functionality described below step by step Description: Deep Convolutional GANs In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here. You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST. So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same. Step1: Getting the data Here you can download the SVHN dataset. Run the cell above and it'll download to your machine. Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above. Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake. Step4: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images. Step5: Network Inputs Here, just creating some placeholders like normal. Step6: Generator Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images. What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU. You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper Step7: Discriminator Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers. 
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU. Note Step9: Model Loss Calculating the loss like before, nothing new here. Step11: Optimizers Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics. Step12: Building the model Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object. Step13: Here is a function for displaying generated images. Step14: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt. Step15: Hyperparameters GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them. Exercise
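A hedged sketch of one way the generator stub in this exercise could be filled in, using the same TF 1.x tf.layers calls and the 4x4x512 starting shape already given; the 5x5 kernels and the 256/128 filter counts are choices, not requirements:

```python
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
    with tf.variable_scope('generator', reuse=reuse):
        x1 = tf.layers.dense(z, 4 * 4 * 512)
        x1 = tf.reshape(x1, (-1, 4, 4, 512))
        x1 = tf.layers.batch_normalization(x1, training=training)
        x1 = tf.maximum(alpha * x1, x1)                      # 4x4x512

        x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=training)
        x2 = tf.maximum(alpha * x2, x2)                      # 8x8x256

        x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
        x3 = tf.layers.batch_normalization(x3, training=training)
        x3 = tf.maximum(alpha * x3, x3)                      # 16x16x128

        logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
        return tf.tanh(logits)                               # 32x32x3 in (-1, 1)
```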
Python Code: %matplotlib inline import pickle as pkl import matplotlib.pyplot as plt import numpy as np from scipy.io import loadmat import tensorflow as tf !mkdir data Explanation: Deep Convolutional GANs In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here. You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST. So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same. End of explanation from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm data_dir = 'data/' if not isdir(data_dir): raise Exception("Data directory doesn't exist!") class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(data_dir + "train_32x32.mat"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar: urlretrieve( 'http://ufldl.stanford.edu/housenumbers/train_32x32.mat', data_dir + 'train_32x32.mat', pbar.hook) if not isfile(data_dir + "test_32x32.mat"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar: urlretrieve( 'http://ufldl.stanford.edu/housenumbers/test_32x32.mat', data_dir + 'test_32x32.mat', pbar.hook) Explanation: Getting the data Here you can download the SVHN dataset. Run the cell above and it'll download to your machine. End of explanation trainset = loadmat(data_dir + 'train_32x32.mat') testset = loadmat(data_dir + 'test_32x32.mat') Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above. End of explanation idx = np.random.randint(0, trainset['X'].shape[3], size=36) fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),) for ii, ax in zip(idx, axes.flatten()): ax.imshow(trainset['X'][:,:,:,ii], aspect='equal') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) plt.subplots_adjust(wspace=0, hspace=0) Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake. 
End of explanation def scale(x, feature_range=(-1, 1)): # scale to (0, 1) x = ((x - x.min())/(255 - x.min())) # scale to feature_range min, max = feature_range x = x * (max - min) + min return x class Dataset: def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None): split_idx = int(len(test['y'])*(1 - val_frac)) self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:] self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:] self.train_x, self.train_y = train['X'], train['y'] self.train_x = np.rollaxis(self.train_x, 3) self.valid_x = np.rollaxis(self.valid_x, 3) self.test_x = np.rollaxis(self.test_x, 3) if scale_func is None: self.scaler = scale else: self.scaler = scale_func self.shuffle = shuffle def batches(self, batch_size): if self.shuffle: idx = np.arange(len(dataset.train_x)) np.random.shuffle(idx) self.train_x = self.train_x[idx] self.train_y = self.train_y[idx] n_batches = len(self.train_y)//batch_size for ii in range(0, len(self.train_y), batch_size): x = self.train_x[ii:ii+batch_size] y = self.train_y[ii:ii+batch_size] yield self.scaler(x), y Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images. End of explanation def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real') inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z') return inputs_real, inputs_z Explanation: Network Inputs Here, just creating some placeholders like normal. End of explanation def generator(z, output_dim, reuse=False, alpha=0.2, training=True): with tf.variable_scope('generator', reuse=reuse): # First fully connected layer x1 = tf.layers.dense(z, 4*4*512) x1 = tf.reshape(x1, ) # Output layer, 32x32x3 logits = out = tf.tanh(logits) return out Explanation: Generator Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images. What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU. You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper: Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3. Exercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one. 
End of explanation def discriminator(x, reuse=False, alpha=0.2): with tf.variable_scope('discriminator', reuse=reuse): # Input layer is 32x32x3 x = logits = out = return out, logits Explanation: Discriminator Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers. You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU. Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately. Exercise: Build the convolutional network for the discriminator. The input is a 32x32x3 images, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first. End of explanation def model_loss(input_real, input_z, output_dim, alpha=0.2): Get the loss for the discriminator and generator :param input_real: Images from the real dataset :param input_z: Z input :param out_channel_dim: The number of channels in the output image :return: A tuple of (discriminator loss, generator loss) g_model = generator(input_z, output_dim, alpha=alpha) d_model_real, d_logits_real = discriminator(input_real, alpha=alpha) d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha) d_loss_real = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real))) d_loss_fake = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake))) g_loss = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake))) d_loss = d_loss_real + d_loss_fake return d_loss, g_loss Explanation: Model Loss Calculating the loss like before, nothing new here. 
End of explanation def model_opt(d_loss, g_loss, learning_rate, beta1): Get optimization operations :param d_loss: Discriminator loss Tensor :param g_loss: Generator loss Tensor :param learning_rate: Learning Rate Placeholder :param beta1: The exponential decay rate for the 1st moment in the optimizer :return: A tuple of (discriminator training operation, generator training operation) # Get weights and bias to update t_vars = tf.trainable_variables() d_vars = [var for var in t_vars if var.name.startswith('discriminator')] g_vars = [var for var in t_vars if var.name.startswith('generator')] # Optimize with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars) return d_train_opt, g_train_opt Explanation: Optimizers Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics. End of explanation class GAN: def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5): tf.reset_default_graph() self.input_real, self.input_z = model_inputs(real_size, z_size) self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z, real_size[2], alpha=alpha) self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1) Explanation: Building the model Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object. End of explanation def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)): fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.axis('off') img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8) ax.set_adjustable('box-forced') im = ax.imshow(img, aspect='equal') plt.subplots_adjust(wspace=0, hspace=0) return fig, axes Explanation: Here is a function for displaying generated images. 
End of explanation def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)): saver = tf.train.Saver() sample_z = np.random.uniform(-1, 1, size=(72, z_size)) samples, losses = [], [] steps = 0 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for x, y in dataset.batches(batch_size): steps += 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z}) _ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x}) if steps % print_every == 0: # At the end of each epoch, get the losses and print them out train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x}) train_loss_g = net.g_loss.eval({net.input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) if steps % show_every == 0: gen_samples = sess.run( generator(net.input_z, 3, reuse=True, training=False), feed_dict={net.input_z: sample_z}) samples.append(gen_samples) _ = view_samples(-1, samples, 6, 12, figsize=figsize) plt.show() saver.save(sess, './checkpoints/generator.ckpt') with open('samples.pkl', 'wb') as f: pkl.dump(samples, f) return losses, samples Explanation: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt. End of explanation real_size = (32,32,3) z_size = 100 learning_rate = 0.001 batch_size = 64 epochs = 1 alpha = 0.01 beta1 = 0.9 # Create the network net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1) # Load the data and train the network here dataset = Dataset(trainset, testset) losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5)) fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator', alpha=0.5) plt.plot(losses.T[1], label='Generator', alpha=0.5) plt.title("Training Losses") plt.legend() _ = view_samples(-1, samples, 6, 12, figsize=(10,5)) Explanation: Hyperparameters GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them. Exercise: Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3, this means it is correctly classifying images as fake or real about 50% of the time. End of explanation
12,089
Given the following text description, write Python code to implement the functionality described below step by step Description: Estandarizacion de datos de los Anuarios Geoestadísticos de INEGI 2017 1. Introduccion Parámetros que se obtienen de esta fuente Step1: 2. Descarga de datos Cada entidad cuenta con una página que presenta sus respectivos anuarios geoestadísticos. La manera más rápida de obtener las ligas de los anuarios es entrar a la biblioteca de INEGI (http Step2: Extraccion de indices Conocer la información que contiene cada hoja del índice geoestadístico puede ser muy valioso y es necesario para hacer una función que itere adecuadamente por los archivos de todos los estados, debido a que los anuarios geoestadísticos de cada estado tienen ligeras variaciones que impiden una iteración directa. Step3: Los índices obtenidos de esta manera recibirán una limpieza manual desde Excel. Estandarizacion de datos para Parámetros. P0610 Ventas de electricidad Debido a la falta de estructura de los índices de parámetros de electricidad, tuvieron que ser estandarizados manualmente en excel. Con los índices estandarizados ya es posible generar un iterador Step4: Seleccionar renglones correspondientes a volumen de ventas de energía en MW/h Step5: Extraer datos de todas las ciudades Por medio de una función que se aplica al dataframe ventaselec, obtenemos los datos de ventas de energía eléctrica para todos los estados. Esta función incluye lineas para verificar los datasets extraidos. Step6: Revision de datos extraidos Revision de los datos que se dejaron fuera (en trashesdic), para asegurarme de que no dejé ningún municipio fuera. Step7: Revision de columnas en todos los datasets Step8: Podemos ver que la extracción de columnas no fue uniforme. Tomando como base la Ciudad de México, se revisarán los casos particulares Step9: Los siguientes estados contienen columnas que no son estándar Step10: Se van a revisar caso por caso los siguientes estados. Step11: CVE_EDO 05 Step12: CVE_EDO 10 Step13: CVE_EDO 12 Step14: CVE_EDO 16 Step15: CVE_EDO 18 Step16: CVE_EDO 26 Step17: CVE_EDO 29 Step18: Consolidacion de dataframe Todos los dataframes se guardarán como un solo archivo único y chingón. Step19: Como todo lo triste de esta vida, el dataset no tiene asignadas claves geoestadísticas municipales, por lo que será necesario etiquetarlo manualmente en Excel. Step20: Las siguientes columnas de código muestran el dataset después de la limpieza realizada en Excel. Step21: El dataset limpio quedó guardado como '..\PCCS\01_Dmine\Datasets\AGEO\2017\VentasElectricidad.xlsx' P0701 Longitud total de la red de carreteras del municipio (excluyendo las autopistas) Debido a la falta de estructura de los índices de parámetros de Movilidad, tuvieron que ser estandarizados manualmente en excel. Con los índices estandarizados ya es posible generar un iterador Step22: Seleccionar renglones correspondientes a Longitud total de la red de carreteras (kilometros) Step23: Falta el índice para CDMX, hay que agregarlo aparte Step24: Extraer datos de todas las ciudades Por medio de una función que se aplica al dataframe, obtenemos los datos de ventas de energía eléctrica para todos los estados. Esta función incluye lineas para verificar los datasets extraidos. Step25: Los datos extraídos son muy irregulares por lo que es mas rapido limpiarlos en excel.
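One small guard worth sketching for the download-and-unzip pipeline described above (an addition, not in the original notebook): a truncated download would otherwise fail halfway through extraction, so it is cheap to check each archive first. `archivos` refers to the CVE_EDO-to-path dict built in the reference code that follows.

```python
import zipfile

corruptos = [ruta for ruta in archivos.values() if not zipfile.is_zipfile(ruta)]
for ruta in corruptos:
    print('Archivo incompleto o corrupto, conviene re-descargar: {}'.format(ruta))
```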
Python Code: descripciones = { 'P0610': 'Ventas de electricidad', 'P0701': 'Longitud total de la red de carreteras del municipio (excluyendo las autopistas)' } # Librerias utilizadas import pandas as pd import sys import urllib import os import csv import zipfile # Configuracion del sistema print('Python {} on {}'.format(sys.version, sys.platform)) print('Pandas version: {}'.format(pd.__version__)) import platform; print('Running on {} {}'.format(platform.system(), platform.release())) Explanation: Estandarizacion de datos de los Anuarios Geoestadísticos de INEGI 2017 1. Introduccion Parámetros que se obtienen de esta fuente: ID |Descripción ---|:---------- P0610|Ventas de electricidad P0701|Longitud total de la red de carreteras del municipio (excluyendo las autopistas) End of explanation raiz = 'http://internet.contenidos.inegi.org.mx/contenidos/Productos/prod_serv/contenidos/espanol/bvinegi/productos/nueva_estruc/anuarios_2017/' # El diccionario tiene como llave la CVE_EDO y dirige hacia la liga de descarga del archivo zip con las tablas del # Anuario Geoestadístico de cada estado links = { '01': raiz + '702825092078.zip', '02': raiz + '702825094874.zip', '03': raiz + '702825094881.zip', '04': raiz + '702825095109.zip', '05': raiz + '702825095406.zip', '06': raiz + '702825092061.zip', '07': raiz + '702825094836.zip', '08': raiz + '702825092139.zip', '09': raiz + '702825094683.zip', '10': raiz + '702825092115.zip', '11': raiz + '702825092146.zip', '12': raiz + '702825094690.zip', '13': raiz + '702825095093.zip', '14': raiz + '702825092085.zip', '15': raiz + '702825094706.zip', '16': raiz + '702825092092.zip', '17': raiz + '702825094713.zip', '18': raiz + '702825092054.zip', '19': raiz + '702825094911.zip', '20': raiz + '702825094843.zip', '21': raiz + '702825094973.zip', '22': raiz + '702825092108.zip', '23': raiz + '702825095130.zip', '24': raiz + '702825092122.zip', '25': raiz + '702825094898.zip', '26': raiz + '702825094904.zip', '27': raiz + '702825095123.zip', '28': raiz + '702825094928.zip', '29': raiz + '702825096212.zip', '30': raiz + '702825094980.zip', '31': raiz + '702825095116.zip', '32': raiz + '702825092047.zip' } for value in links.values(): print(value) # Descarga de archivos a carpeta local destino = r'D:\PCCS\00_RawData\01_CSV\AGEO\2017' archivos = {} # Diccionario para guardar memoria de descarga for k,v in links.items(): archivo_local = destino + r'\{}.zip'.format(k) if os.path.isfile(archivo_local): print('Ya existe el archivo: {}'.format(archivo_local)) archivos[k] = archivo_local else: print('Descargando {} ... ... ... ... ... '.format(archivo_local)) urllib.request.urlretrieve(v, archivo_local) # archivos[k] = archivo_local print('se descargó {}'.format(archivo_local)) # Descompresión de archivos de estado unzipped = {} for estado, comprimido in archivos.items(): target = destino + '\\' + estado if os.path.isdir(target): print('Ya existe el directorio: {}'.format(target)) unzipped[estado] = target else: print('Descomprimiendo {} ... ... ... ... ... '.format(target)) descomprimir = zipfile.ZipFile(comprimido, 'r') descomprimir.extractall(target) descomprimir.close unzipped[estado] = target Explanation: 2. Descarga de datos Cada entidad cuenta con una página que presenta sus respectivos anuarios geoestadísticos. La manera más rápida de obtener las ligas de los anuarios es entrar a la biblioteca de INEGI (http://www.beta.inegi.org.mx/app/publicaciones/) y buscar la palabra "Anuario" en el campo de búsqueda. 
CVE_EDO |Nombre|URL ---|:---|:---------- 01|Aguascalientes|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092078 02|Baja California|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094874 03|Baja California Sur|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094881 04|Campeche|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825095109 05|Coahuila de Zaragoza|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825095406 06|Colima|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092061 07|Chiapas|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094836 08|Chihuahua|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092139 09|Ciudad de México|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094683 10|Durango|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092115 11|Guanajuato|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092146 12|Guerrero|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094690 13|Hidalgo|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825095093 14|Jalisco|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092085 15|México|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094706 16|Michoacán de Ocampo|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092092 17|Morelos|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094713 18|Nayarit|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092054 19|Nuevo León|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094911 20|Oaxaca|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094843 21|Puebla|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094973 22|Querétaro|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092108 23|Quintana Roo|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825095130 24|San Luis Potosí|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092122 25|Sinaloa|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094898 26|Sonora|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094904 27|Tabasco|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825095123 28|Tamaulipas|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094928 29|Tlaxcala|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825096212 30|Veracruz de Ignacio de la Llave|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094980 31|Yucatán|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825095116 32|Zacatecas|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092047 Dentro de cada página, se incluye una liga directa para descargar un archivo comprimido con las tablas de datos de cada anuario geoestadítico. La lista links contiene estas URL y se utilizará para sistematizar la descarga de datos. 
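Since the links dict above was typed by hand against this table, a short added check that it covers the 32 state codes exactly once and that every entry is a .zip under raiz:

```python
esperados = {'{:02d}'.format(i) for i in range(1, 33)}
assert set(links.keys()) == esperados, set(links.keys()) ^ esperados
assert all(url.startswith(raiz) and url.endswith('.zip') for url in links.values())
print('links OK: {} estados'.format(len(links)))
```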
End of explanation unzipped # Extraer indices indices = {} for estado, ruta in unzipped.items(): for file in os.listdir(ruta): if file.endswith('.xls'): path = ruta + '\\' + file indice = pd.read_excel(path, sheetname='Índice', skiprows = 1) # Primera lectura al indice para sacar columnas dtypes = list(indice) tempdic = {} for i in dtypes: tempdic[i] = 'str' indice = pd.read_excel(path, sheetname='Índice', skiprows = 1, dtype = tempdic).dropna(how = 'all') # Segunda lectura al indice ya con dtypes name = list(indice)[0] # Guarda el nombre del indice cols = [] for i in range(len(list(indice))): cols.append('col{}'.format(i)) # Crea nombres estandar de columna indice.columns = cols # Asigna nombres de columna indice['indice'] = name indice['file'] = file if estado not in indices.keys(): # Crea un diccionario para cada estado, si no existe indices[estado] = {} indices[estado][name] = indice print('Procesado {} |||NOMBRE:||| {}; [{}]'.format(file, name, len(cols))) # Imprime los resultados del proceso # Reordenar los dataframes por tipo indices_2 = {} for estado in indices.keys(): for indice in indices[estado].keys(): if indice not in indices_2.keys(): indices_2[indice] = {} indices_2[indice][estado] = indices[estado][indice] # Convertir indices en archivos unicos. finalindexes = {} for i in indices_2.keys(): print(i) frameslist = [] for estado in indices_2[i].keys(): frame = indices_2[i][estado] frame['estado'] = estado frameslist.append(frame) fullindex = pd.concat(frameslist) finalindexes[i] = fullindex print('Hecho: {}\n'.format(i)) # Escribir archivos xlsx path = r'D:\PCCS\01_Dmine\Datasets\AGEO\2017\indices' for indice in finalindexes.keys(): file = path+'\\'+indice+'.xlsx' writer = pd.ExcelWriter(file) finalindexes[indice].to_excel(writer, sheet_name = 'Indice') writer.save() print('[{}] lineas - archivo {}'.format(len(finalindexes[indice]), file)) Explanation: Extraccion de indices Conocer la información que contiene cada hoja del índice geoestadístico puede ser muy valioso y es necesario para hacer una función que itere adecuadamente por los archivos de todos los estados, debido a que los anuarios geoestadísticos de cada estado tienen ligeras variaciones que impiden una iteración directa. End of explanation # Importar dataset de índices f_indice = r'D:\PCCS\01_Dmine\Datasets\AGEO\2017\indices\Limpios\Electricidad.xlsx' ds_indices = pd.read_excel(f_indice, dtype={'Numeral':'str', 'estado':'str'}).set_index('estado') ds_indices.head() Explanation: Los índices obtenidos de esta manera recibirán una limpieza manual desde Excel. Estandarizacion de datos para Parámetros. P0610 Ventas de electricidad Debido a la falta de estructura de los índices de parámetros de electricidad, tuvieron que ser estandarizados manualmente en excel. 
Con los índices estandarizados ya es posible generar un iterador End of explanation # Dataframe con índice de hojas sobre el tema "Ventas de electricidad" ventaselec = ds_indices[ds_indices['Units'] == '(Megawatts-hora)'] ventaselec.head() len(ventaselec) # Crear columna con rutas path = r'D:\PCCS\00_RawData\01_CSV\AGEO\2017' ventaselec['path'] = path+'\\'+ventaselec.index+'\\'+ventaselec['file'] # Definir función para traer datos a python unnameds = set(['Unnamed: '+str(i) for i in range(0, 50)]) # Lista 'Unnamed: x' de 0 a 50 def get_ventas(path, sheet, estado): temp = pd.ExcelFile(path) temp = temp.parse(sheet, header = 6).dropna(axis = 0, how='all').dropna(axis = 1, how='all') # Elimina las columnas unnamed dropplets = set(temp.columns).intersection(unnameds) temp = temp.drop(dropplets, axis = 1) temp = temp.dropna(axis = 0, how='all') temp = temp.reset_index().drop('index', axis = 1) # Identifica los últimos renglones, que no contienen datos col0 = temp.columns[0] # Nombre de la columna 0, para usarlo en un chingo de lugares. Bueno 3 try: tempnotas = temp[col0][temp[col0] == 'Nota:'].index[0] # Para las hojas que terminan en 'Notas' except: tempnotas = temp[col0][temp[col0] == 'a/'].index[0] # Para las hojas que terminan en 'a/' print(tempnotas) # Aparta los renglones después de "a/", para conocer la información que dejé fuera. trashes = temp.iloc[tempnotas:-1] # Elimina los renglones después de "a/" temp = temp.iloc[0:tempnotas] # Crear columna de estado y renombrar la primera columna para poder concatenar datframes más tarde. temp['CVE_EDO'] = estado temp = temp.rename(columns={col0:'NOM_MUN'}) print(type(temp)) return temp, trashes Explanation: Seleccionar renglones correspondientes a volumen de ventas de energía en MW/h End of explanation # Funcion para extraer datos def getdata(serie, estado): path = serie['path'] sheet = serie['Numeral'] print('{}\n{}'.format('-'*30, path)) # Imprime la ruta hacia el archivo print('Hoja: {}'.format(sheet)) # Imprime el nombre de la hoja que se va a extraer trashes, temp = get_ventas(path, sheet, estado) print(temp.iloc[[0, -1]][temp.columns[0]]) print(list(temp)) print(('len = {}'.format(len(temp)))) return trashes, temp ventasdic = {} trashesdic = {} for estado in ventaselec.index: ventasdic[estado], trashesdic[estado] = getdata(ventaselec.loc[estado], estado) # Ejemplo de uno de los dataframes extraidos ventasdic['09'] ventaselec['path'] Explanation: Extraer datos de todas las ciudades Por medio de una función que se aplica al dataframe ventaselec, obtenemos los datos de ventas de energía eléctrica para todos los estados. Esta función incluye lineas para verificar los datasets extraidos. End of explanation a = '-'*30 for CVE_EDO in trashesdic.keys(): print('{}\n{}\n{}'.format(a, trashesdic[CVE_EDO], a)) Explanation: Revision de datos extraidos Revision de los datos que se dejaron fuera (en trashesdic), para asegurarme de que no dejé ningún municipio fuera. End of explanation for CVE_EDO in ventasdic.keys(): variables = list(ventasdic[CVE_EDO]) longitud = len(variables) # Cuantas variables existen? longset = len(set(variables)) # Cuantas variables son distintas? print('{}{} [{} - {}]\n{}\n{}'.format(a, CVE_EDO, longitud, longset, variables, a)) Explanation: Revision de columnas en todos los datasets End of explanation varCDMX = set(list(ventasdic['09'])) varCDMX Explanation: Podemos ver que la extracción de columnas no fue uniforme. 
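The per-state cleanup of footnote columns ('a/', '\nb/', 'd/' and so on) can also be done in a single pass; a sketch assuming the ventasdic dict of per-state DataFrames built in this example:

```python
import re

def sin_notas(df):
    # drop any column whose name is nothing but a footnote marker such as 'a/' or '\nb/'
    basura = [c for c in df.columns
              if isinstance(c, str) and re.fullmatch(r'[a-z]/', c.strip())]
    return df.drop(basura, axis=1)

ventasdic = {cve: sin_notas(df) for cve, df in ventasdic.items()}
```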
Tomando como base la Ciudad de México, se revisarán los casos particulares: End of explanation nostandar = [] # Lista de estados cuyas columnas no son estandar for CVE_EDO in ventasdic.keys(): varsedo = set(list(ventasdic[CVE_EDO])) diffs = varCDMX.symmetric_difference(varsedo) if len(diffs) != 0: print('{}{}\n{}\n{}'.format(a, CVE_EDO, diffs, a)) nostandar.append(CVE_EDO) Explanation: Los siguientes estados contienen columnas que no son estándar: End of explanation nostandar Explanation: Se van a revisar caso por caso los siguientes estados. End of explanation REV_EDO = '05' ventasdic[REV_EDO].head(6) list(ventasdic[REV_EDO]) drops = ['a/', '\nb/', '\nc/', 'd/', '\ne/'] # Nombres de las columnas que se van a eliminar ventasdic['05'] = ventasdic['05'].drop(drops, axis = 1) old_colnames = list(ventasdic[REV_EDO]) new_colnames = list(ventasdic['09']) colnames = {i:j for i,j in zip(old_colnames,new_colnames)} ventasdic[REV_EDO] = ventasdic[REV_EDO].rename(columns = colnames) # Normalizacion ventasdic['05'].head() Explanation: CVE_EDO 05 End of explanation REV_EDO = '10' ventasdic[REV_EDO].head(6) list(ventasdic[REV_EDO]) drops = ['d/'] # Nombres de las columnas que se van a eliminar ventasdic[REV_EDO] = ventasdic[REV_EDO].drop(drops, axis = 1) Explanation: CVE_EDO 10 End of explanation REV_EDO = '12' ventasdic[REV_EDO].head(6) list(ventasdic[REV_EDO]) drops = ['a/'] # Nombres de las columnas que se van a eliminar ventasdic[REV_EDO] = ventasdic[REV_EDO].drop(drops, axis = 1) Explanation: CVE_EDO 12 End of explanation REV_EDO = '16' ventasdic[REV_EDO].head(6) list(ventasdic[REV_EDO]) drops = ['d/'] # Nombres de las columnas que se van a eliminar ventasdic[REV_EDO] = ventasdic[REV_EDO].drop(drops, axis = 1) Explanation: CVE_EDO 16 End of explanation REV_EDO = '18' ventasdic[REV_EDO].head(6) list(ventasdic[REV_EDO]) drops = ['d/'] # Nombres de las columnas que se van a eliminar ventasdic[REV_EDO] = ventasdic[REV_EDO].drop(drops, axis = 1) Explanation: CVE_EDO 18 End of explanation REV_EDO = '26' ventasdic[REV_EDO].head(6) list(ventasdic[REV_EDO]) drops = ['d/'] # Nombres de las columnas que se van a eliminar ventasdic[REV_EDO] = ventasdic[REV_EDO].drop(drops, axis = 1) Explanation: CVE_EDO 26 End of explanation REV_EDO = '29' ventasdic[REV_EDO].head(6) list(ventasdic[REV_EDO]) drops = ['a/', '\nb/', '\nc/', 'd/', '\ne/'] # Nombres de las columnas que se van a eliminar ventasdic[REV_EDO] = ventasdic[REV_EDO].drop(drops, axis = 1) old_colnames = list(ventasdic[REV_EDO]) new_colnames = list(ventasdic['09']) colnames = {i:j for i,j in zip(old_colnames,new_colnames)} ventasdic[REV_EDO] = ventasdic[REV_EDO].rename(columns = colnames) # Normalizacion Explanation: CVE_EDO 29 End of explanation # Unificacion de datos a un solo dataset ventasDS = pd.DataFrame() for CVE_EDO, dataframe in ventasdic.items(): estado = ventasdic[CVE_EDO]['NOM_MUN'][0] print('Adjuntando {} - {} -----'.format(CVE_EDO, estado)) ventasDS = ventasDS.append(dataframe) # Nombre final para columnas, para evitar duplicidades con otros datasets # y para eliminar acentos que podrían causar problemas de encoding colnames = { 'NOM_MUN':'NOM_MUN', 'Total':'Total ventas elec', 'Doméstico':'VE Domestico', 'Alumbrado\npúblico':'VE alumbrado publico', 'Bombeo de aguas \npotables y negras':'VE Bombeo agua potable y negra', 'Agrícola':'VE Agricola', 'Industrial y \nde servicios':'VE Industrial y servicios', } ventasDS = ventasDS.rename(columns=colnames) ventasDS.head() # Metadatos metadatos = { 'Nombre del Dataset': 'Anuario Geoestadistico 2017 
por estado, Datos de electricidad', 'Descripcion del dataset': None, 'Disponibilidad Temporal': '2016', 'Periodo de actualizacion': 'Anual', 'Nivel de Desagregacion': 'Municipal', 'Notas': 's/n', 'Fuente': 'INEGI - Anuarios Geoestadisticos', 'URL_Fuente': 'http://www.beta.inegi.org.mx/proyectos/ccpv/2010/?#section', 'Dataset base': None } metadatos = pd.DataFrame.from_dict(metadatos, orient='index', dtype='str') metadatos.columns = ['Descripcion'] metadatos= metadatos.rename_axis('Metadato') metadatos Variables = { 'NOM_MUN':'Nombre del Municipio', 'Total ventas elec':'Total de ventas de energía electrica', 'VE Domestico':'Ventas de energia en el sector domestico (Megawatts-Hora)', 'VE alumbrado publico':'Ventas de energia en alumbrado publico (Megawatts-Hora)', 'VE Bombeo agua potable y negra':'Ventas de energia en bombeo de agua potable y negra (Megawatts-Hora)', 'VE Agricola':'Ventas de energia en el sector Agricola (Megawatts-Hora)', 'VE Industrial y servicios':'Ventas de energia en el sector industrial y de Servicios (Megawatts-Hora)', 'CVE_EDO':'Clave Geoestadistica Estatal de 2 Digitos', } Variables = pd.DataFrame.from_dict(Variables, orient='index', dtype='str') Variables.columns = ['Descripcion'] Variables= Variables.rename_axis('Mnemonico') Variables Explanation: Consolidacion de dataframe Todos los dataframes se guardarán como un solo archivo único y chingón. End of explanation #Exportar dataset file = r'D:\PCCS\01_Dmine\Datasets\AGEO\2017'+'\\'+'electricidad.xlsx' writer = pd.ExcelWriter(file) ventasDS.to_excel(writer, sheet_name = 'DATOS') metadatos.to_excel(writer, sheet_name ='METADATOS') Variables.to_excel(writer, sheet_name ='VARIABLES') writer.save() Explanation: Como todo lo triste de esta vida, el dataset no tiene asignadas claves geoestadísticas municipales, por lo que será necesario etiquetarlo manualmente en Excel. End of explanation # Muestra del dataset con claves geoestadísticas asignadas archivo = r'D:\PCCS\01_Dmine\Datasets\AGEO\2017\VentasElectricidad.xlsx' elec = pd.read_excel(archivo, sheetname='DATOS', dtype={'CVE_MUN':'str'}) elec = elec.set_index('CVE_MUN') elec.head() Explanation: Las siguientes columnas de código muestran el dataset después de la limpieza realizada en Excel. End of explanation # Importar dataset de índices f_indice = r'D:\PCCS\01_Dmine\Datasets\AGEO\2017\indices\Limpios\IndicesMovilidad.xlsx' ds_indices = pd.read_excel(f_indice, dtype={'Numeral':'str', 'estado':'str'}).set_index('estado') ds_indices.head() Explanation: El dataset limpio quedó guardado como '..\PCCS\01_Dmine\Datasets\AGEO\2017\VentasElectricidad.xlsx' P0701 Longitud total de la red de carreteras del municipio (excluyendo las autopistas) Debido a la falta de estructura de los índices de parámetros de Movilidad, tuvieron que ser estandarizados manualmente en excel. 
Con los índices estandarizados ya es posible generar un iterador End of explanation # Dataframe con índice de hojas sobre el tema "Longitud total de la red de carreteras" dataframe = ds_indices[ds_indices['ID'] == '22.1'] dataframe.head() len(dataframe) dataframe.index Explanation: Seleccionar renglones correspondientes a Longitud total de la red de carreteras (kilometros) End of explanation ds_indices[ds_indices['ID'] == '20.16'] dataframe = dataframe.append(ds_indices[ds_indices['ID'] == '20.16']) # Crear columna con rutas path = r'D:\PCCS\00_RawData\01_CSV\AGEO\2017' dataframe['path'] = path+'\\'+dataframe.index+'\\'+dataframe['file'] dataframe.head(2) # Definir función para traer datos a python unnameds = set(['Unnamed: '+str(i) for i in range(0, 50)]) # Lista 'Unnamed: x' de 0 a 50 # Esta lista se utilizará para eliminar columnas nombradas Unnamed, que por lo general no tienen datos. def get_datos(path, sheet, estado): temp = pd.ExcelFile(path) temp = temp.parse(sheet, header = 6).dropna(axis = 0, how='all').dropna(axis = 1, how='all') return temp # Identifica los últimos renglones, que no contienen datos # col0 = temp.columns[0] # Nombre de la columna 0, para usarlo en un chingo de lugares. Bueno 3 # try: tempnotas = temp[col0][temp[col0] == 'Nota:'].index[0] # Para las hojas que terminan en 'Notas' # except: tempnotas = temp[col0][temp[col0] == 'a/'].index[0] # Para las hojas que terminan en 'a/' # print(tempnotas) # # # Aparta los renglones después de "a/", para conocer la información que dejé fuera. # trashes = temp.iloc[tempnotas:-1] # # # Elimina los renglones después de "a/" # temp = temp.iloc[0:tempnotas] # # # Crear columna de estado y renombrar la primera columna para poder concatenar datframes más tarde. # temp['CVE_EDO'] = estado # temp = temp.rename(columns={col0:'NOM_MUN'}) # print(type(temp)) # # return temp, trashes Explanation: Falta el índice para CDMX, hay que agregarlo aparte End of explanation # Funcion para extraer datos def getdata(serie, estado): path = serie['path'] sheet = serie['ID'] print('{}\n{}'.format('-'*30, path)) # Imprime la ruta hacia el archivo print('Hoja: {}'.format(sheet)) # Imprime el nombre de la hoja que se va a extraer temp = get_datos(path, sheet, estado) print(temp.iloc[[0, -1]][temp.columns[0]]) print(list(temp)) print(('len = {}'.format(len(temp)))) return temp datadic = {} for estado in dataframe.index: datadic[estado] = getdata(dataframe.loc[estado], estado) datadic.keys() Explanation: Extraer datos de todas las ciudades Por medio de una función que se aplica al dataframe, obtenemos los datos de ventas de energía eléctrica para todos los estados. Esta función incluye lineas para verificar los datasets extraidos. End of explanation #Exportar dataset file = r'D:\PCCS\01_Dmine\Datasets\AGEO\2017'+'\\'+'Long_Carreteras_raw1.xlsx' writer = pd.ExcelWriter(file) for estado, dataset in datadic.items(): dataset.to_excel(writer, sheet_name = estado) print('Se guardó dataset para {}'.format(estado)) writer.save() Explanation: Los datos extraídos son muy irregulares por lo que es mas rapido limpiarlos en excel. End of explanation
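As an optional alternative to the manual Excel cleanup described above, the same normalization could be sketched directly in pandas, mirroring what was done for the electricity sheets. This is only an illustrative outline, under the assumption that the first column of every per-state sheet holds the municipality name; the standardized column names used here are placeholders, not official ones:
# Illustrative sketch (assumption: first column of each extracted sheet is the municipality name)
clean_frames = []
for estado, frame in datadic.items():
    df = frame.copy()
    df = df.rename(columns={df.columns[0]: 'NOM_MUN'})  # standardize the name column
    df['CVE_EDO'] = estado                              # keep the two-digit state key
    clean_frames.append(df)
carreteras = pd.concat(clean_frames)                    # one combined table, still pending manual review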
12,090
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href='http Step1: Load the data As a first step we will load a large dataset using dask. If you have followed the setup instructions you will have downloaded a large CSV containing 12 million taxi trips. Let's load this data using dask to create a dataframe ddf Step2: Create a dataset In previous sections we have already seen how to declare a set of Points from a pandas DataFrame. Here we do the same for a Dask dataframe passed in with the desired key dimensions Step3: We could now simply type points, and Bokeh will attempt to display this data as a standard Bokeh plot. Before doing that, however, remember that we have 12 million rows of data, and no current plotting program will handle this well! Instead of letting Bokeh see this data, let's convert it to something far more tractable using the datashader operation. This operation will aggregate the data on a 2D grid, apply shading to assign pixel colors to each bin in this grid, and build an RGB Element (just a fixed-sized image) we can safely display Step4: If you zoom in you will note that the plot rerenders depending on the zoom level, which allows the full dataset to be explored interactively even though only an image of it is ever sent to the browser. The way this works is that datashade is a dynamic operation that also declares some linked streams. These linked streams are automatically instantiated and dynamically supply the plot size, x_range, and y_range from the Bokeh plot to the operation based on your current viewport as you zoom or pan Step5: Adding a tile source Using the GeoViews (geographic) extension for HoloViews, we can display a map in the background. Just declare a Bokeh WMTSTileSource and pass it to the gv.WMTS Element, then we can overlay it Step6: Aggregating with a variable So far we have simply been counting taxi dropoffs, but our dataset is much richer than that. We have information about a number of variables including the total cost of a taxi ride, the total_amount. Datashader provides a number of aggregator functions, which you can supply to the datashade operation. Here use the ds.mean aggregator to compute the average cost of a trip at a dropoff location Step7: Grouping by a variable Because datashading happens only just before visualization, you can use any of the techniques shown in previous sections to select, filter, or group your data before visualizing it, such as grouping it by the hour of day Step8: Additional features The actual points are never given directly to Bokeh, and so the normal Bokeh hover (and other) tools will not normally be useful with Datashader output. However, we can easily verlay an invisible QuadMesh to reveal information on hover, providing information about values in a local area while still only ever sending a fixed-size array to the browser to avoid issues with large data.
Python Code: import pandas as pd import holoviews as hv import dask.dataframe as dd import datashader as ds import geoviews as gv from holoviews.operation.datashader import datashade, aggregate hv.extension('bokeh') Explanation: <a href='http://www.holoviews.org'><img src="assets/hv+bk.png" alt="HV+BK logos" width="40%;" align="left"/></a> <div style="float:right;"><h2>07. Working with large datasets</h2></div> HoloViews supports even high-dimensional datasets easily, and the standard mechanisms discussed already work well as long as you select a small enough subset of the data to display at any one time. However, some datasets are just inherently large, even for a single frame of data, and cannot safely be transferred for display in any standard web browser. Luckily, HoloViews makes it simple for you to use the separate datashader together with any of the plotting extension libraries, including Bokeh and Matplotlib. The datashader library is designed to complement standard plotting libraries by providing faithful visualizations for very large datasets, focusing on revealing the overall distribution, not just individual data points. Datashader uses computations accellerated using Numba, making it fast to work with datasets of millions or billions of datapoints stored in dask dataframes. Dask dataframes provide an API that is functionally equivalent to pandas, but allows working with data out of core while scaling out to many processors and even clusters. Here we will use Dask to load a large CSV file of taxi coordinates. <div> <img align="left" src="./assets/numba.png" width='140px'/> <img align="left" src="./assets/dask.png" width='85px'/> <img align="left" src="./assets/datashader.png" width='158px'/> </div> How does datashader work? <img src="./assets/datashader_pipeline.png" width="80%"/> Tools like Bokeh map Data (left) directly into an HTML/JavaScript Plot (right) datashader instead renders Data into a plot-sized Aggregate array, from which an Image can be constructed then embedded into a Bokeh Plot Only the fixed-sized Image needs to be sent to the browser, allowing millions or billions of datapoints to be used Every step automatically adjusts to the data, but can be customized When not to use datashader Plotting less than 1e5 or 1e6 data points When every datapoint matters; standard Bokeh will render all of them For full interactivity (hover tools) with every datapoint When to use datashader Actual big data; when Bokeh/Matplotlib have trouble When the distribution matters more than individual points When you find yourself sampling or binning to better understand the distribution End of explanation ddf = dd.read_csv('../data/nyc_taxi.csv', parse_dates=['tpep_pickup_datetime']) ddf['hour'] = ddf.tpep_pickup_datetime.dt.hour # If your machine is low on RAM (<8GB) don't persist (though everything will be much slower) ddf = ddf.persist() print('%s Rows' % len(ddf)) print('Columns:', list(ddf.columns)) Explanation: Load the data As a first step we will load a large dataset using dask. If you have followed the setup instructions you will have downloaded a large CSV containing 12 million taxi trips. Let's load this data using dask to create a dataframe ddf: End of explanation points = hv.Points(ddf, kdims=['dropoff_x', 'dropoff_y']) Explanation: Create a dataset In previous sections we have already seen how to declare a set of Points from a pandas DataFrame. 
Here we do the same for a Dask dataframe passed in with the desired key dimensions: End of explanation %opts RGB [width=600 height=500 bgcolor="black"] datashade(points) Explanation: We could now simply type points, and Bokeh will attempt to display this data as a standard Bokeh plot. Before doing that, however, remember that we have 12 million rows of data, and no current plotting program will handle this well! Instead of letting Bokeh see this data, let's convert it to something far more tractable using the datashader operation. This operation will aggregate the data on a 2D grid, apply shading to assign pixel colors to each bin in this grid, and build an RGB Element (just a fixed-sized image) we can safely display: End of explanation datashade.streams # Exercise: Plot the taxi pickup locations ('pickup_x' and 'pickup_y' columns) # Warning: Don't try to display hv.Points() directly; it's too big! Use datashade() for any display # Optional: Change the cmap on the datashade operation to inferno from datashader.colors import inferno points = hv.Points(ddf, kdims=['pickup_x', 'pickup_y']) datashade(points, cmap=inferno) Explanation: If you zoom in you will note that the plot rerenders depending on the zoom level, which allows the full dataset to be explored interactively even though only an image of it is ever sent to the browser. The way this works is that datashade is a dynamic operation that also declares some linked streams. These linked streams are automatically instantiated and dynamically supply the plot size, x_range, and y_range from the Bokeh plot to the operation based on your current viewport as you zoom or pan: End of explanation %opts RGB [xaxis=None yaxis=None] import geoviews as gv from bokeh.models import WMTSTileSource url = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{Z}/{Y}/{X}.jpg' wmts = WMTSTileSource(url=url) gv.WMTS(wmts) * datashade(points) %opts RGB [xaxis=None yaxis=None] # Exercise: Overlay the taxi pickup data on top of the Wikipedia tile source wiki_url = 'https://maps.wikimedia.org/osm-intl/{Z}/{X}/{Y}@2x.png' wmts = WMTSTileSource(url=wiki_url) gv.WMTS(wmts) * datashade(points) Explanation: Adding a tile source Using the GeoViews (geographic) extension for HoloViews, we can display a map in the background. Just declare a Bokeh WMTSTileSource and pass it to the gv.WMTS Element, then we can overlay it: End of explanation selected = points.select(total_amount=(None, 1000)) selected.data = selected.data.persist() gv.WMTS(wmts) * datashade(selected, aggregator=ds.mean('total_amount')) # Exercise: Use the ds.min or ds.max aggregator to visualize ``tip_amount`` by dropoff location # Optional: Eliminate outliers by using select selected = points.select(tip_amount=(None, 1000)) selected.data = selected.data.persist() gv.WMTS(wmts) * datashade(selected, aggregator=ds.max('tip_amount')) # Try using ds.min Explanation: Aggregating with a variable So far we have simply been counting taxi dropoffs, but our dataset is much richer than that. We have information about a number of variables including the total cost of a taxi ride, the total_amount. Datashader provides a number of aggregator functions, which you can supply to the datashade operation. 
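To make that aggregate-then-shade pipeline a little more concrete, here is a rough sketch of what the datashade operation is doing for us behind the scenes, written directly against datashader's Canvas and transfer_functions API. The grid size and colormap below are illustrative choices, not values used elsewhere in this tutorial:
import datashader as ds
import datashader.transfer_functions as dtf

# 1. Aggregate: count how many dropoffs fall into each cell of a plot-sized grid
cvs = ds.Canvas(plot_width=600, plot_height=500)
agg = cvs.points(ddf, 'dropoff_x', 'dropoff_y', agg=ds.count())

# 2. Shade: turn the per-cell counts into a fixed-size RGB image
img = dtf.shade(agg, cmap=['lightblue', 'darkblue'], how='eq_hist')
img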
Here use the ds.mean aggregator to compute the average cost of a trip at a dropoff location: End of explanation %opts Image [width=600 height=500 logz=True xaxis=None yaxis=None] taxi_ds = hv.Dataset(ddf) grouped = taxi_ds.to(hv.Points, ['dropoff_x', 'dropoff_y'], groupby=['hour'], dynamic=True) aggregate(grouped).redim.values(hour=range(24)) %%opts Image [width=300 height=200 xaxis=None yaxis=None] # Exercise: Facet the trips in the morning hours as an NdLayout using aggregate(grouped.layout()) # Hint: You can reuse the existing grouped variable or select a subset before using the .to method taxi_ds = hv.Dataset(ddf).select(hour=(2, 8)) taxi_ds.data = taxi_ds.data.persist() grouped = taxi_ds.to(hv.Points, ['dropoff_x', 'dropoff_y'], groupby=['hour']) aggregate(grouped.layout()).cols(3) Explanation: Grouping by a variable Because datashading happens only just before visualization, you can use any of the techniques shown in previous sections to select, filter, or group your data before visualizing it, such as grouping it by the hour of day: End of explanation %%opts QuadMesh [width=800 height=400 tools=['hover']] (alpha=0 hover_line_alpha=1 hover_fill_alpha=0) hover_info = aggregate(points, width=40, height=20, streams=[hv.streams.RangeXY]).map(hv.QuadMesh, hv.Image) gv.WMTS(wmts) * datashade(points) * hover_info Explanation: Additional features The actual points are never given directly to Bokeh, and so the normal Bokeh hover (and other) tools will not normally be useful with Datashader output. However, we can easily verlay an invisible QuadMesh to reveal information on hover, providing information about values in a local area while still only ever sending a fixed-size array to the browser to avoid issues with large data. End of explanation
12,091
Given the following text description, write Python code to implement the functionality described below step by step Description: Generative Adversarial Network In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out Step1: Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise Step2: Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. Variable Scope Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again. To use tf.variable_scope, you use a with statement Step3: Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise Step4: Hyperparameters Step5: Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True). Exercise Step6: Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like python tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. 
For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth) The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images. Exercise Step7: Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). We can do something similar with the discriminator. All the variables in the discriminator start with discriminator. Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list. Exercise Step8: Training Step9: Training loss Here we'll check out the training losses for the generator and discriminator. Step10: Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training. Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make. Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3. Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
Python Code: %matplotlib inline import pickle as pkl import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data') Explanation: Generative Adversarial Network In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out: Pix2Pix CycleGAN A whole list The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator. The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator. The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow. End of explanation def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(dtype=tf.float32, shape=[None, real_dim], name="input_real") inputs_z = tf.placeholder(dtype=tf.float32, shape=[None, z_dim], name="input_z") return inputs_real, inputs_z Explanation: Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively. End of explanation def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01): ''' Build the generator network. Arguments --------- z : Input tensor for the generator out_dim : Shape of the generator output n_units : Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out, logits: ''' with tf.variable_scope("generator",reuse=reuse): # finish this # Hidden layer h1 = tf.layers.dense(z, n_units,activation=None) # Leaky ReLU h1 = tf.maximum(alpha*h1, h1) # Logits and tanh output logits = tf.layers.dense(h1, out_dim, activation=None) out = tf.tanh(logits) return out Explanation: Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. 
A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. Variable Scope Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again. To use tf.variable_scope, you use a with statement: python with tf.variable_scope('scope_name', reuse=False): # code here Here's more from the TensorFlow documentation to get another look at using tf.variable_scope. Leaky ReLU TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x: $$ f(x) = max(\alpha * x, x) $$ Tanh Output The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1. Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope. End of explanation def discriminator(x, n_units=128, reuse=False, alpha=0.01): ''' Build the discriminator network. Arguments --------- x : Input tensor for the discriminator n_units: Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out, logits: ''' with tf.variable_scope("discriminator", reuse=reuse): # finish this # Hidden layer h1 = tf.layers.dense(x, n_units, activation=None) # Leaky ReLU h1 = tf.maximum(alpha*h1, h1) logits = tf.layers.dense(h1, 1, activation=None) out = tf.sigmoid(logits) return out, logits Explanation: Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope. 
End of explanation # Size of input image to discriminator input_size = 784 # 28x28 MNIST images flattened # Size of latent vector to generator z_size = 100 # Sizes of hidden layers in generator and discriminator g_hidden_size = 128 d_hidden_size = 128 # Leak factor for leaky ReLU alpha = 0.01 # Label smoothing smooth = 0.1 Explanation: Hyperparameters End of explanation tf.reset_default_graph() # Create our input placeholders input_real, input_z = model_inputs(input_size, z_size) # Generator network here g_model = generator(input_z, input_size, g_hidden_size, reuse=False, alpha=alpha) # g_model is the generator output # Disriminator network here d_model_real, d_logits_real = discriminator(input_real, d_hidden_size, reuse=False, alpha=alpha) d_model_fake, d_logits_fake = discriminator(g_model, d_hidden_size, reuse=True, alpha=alpha) Explanation: Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True). Exercise: Build the network from the functions you defined earlier. End of explanation # Calculate losses real_labels = tf.ones_like(d_logits_real)*(1-smooth) d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=real_labels)) d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake))) d_loss = d_loss_real + d_loss_fake g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake))) Explanation: Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like python tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth) The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. 
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images. Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator. End of explanation # Optimizers learning_rate = 0.002 # Get the trainable_variables, split into G and D parts t_vars = tf.trainable_variables() g_vars = [x for x in t_vars if x.name.startswith('generator')] d_vars = [x for x in t_vars if x.name.startswith('discriminator')] d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars) Explanation: Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). We can do something similar with the discriminator. All the variables in the discriminator start with discriminator. Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list. Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately. 
End of explanation batch_size = 100 epochs = 100 samples = [] losses = [] saver = tf.train.Saver(var_list = g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images, reshape and rescale to pass to D batch_images = batch[0].reshape((batch_size, 784)) batch_images = batch_images*2 - 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z}) _ = sess.run(g_train_opt, feed_dict={input_z: batch_z}) # At the end of each epoch, get the losses and print them out train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images}) train_loss_g = g_loss.eval({input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) # Sample from generator as we're training for viewing afterwards sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) samples.append(gen_samples) saver.save(sess, './checkpoints/generator.ckpt') # Save training generator samples with open('train_samples.pkl', 'wb') as f: pkl.dump(samples, f) Explanation: Training End of explanation %matplotlib inline import matplotlib.pyplot as plt fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator') plt.plot(losses.T[1], label='Generator') plt.title("Training Losses") plt.legend() Explanation: Training loss Here we'll check out the training losses for the generator and discriminator. End of explanation def view_samples(epoch, samples): fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) im = ax.imshow(img.reshape((28,28)), cmap='Greys_r') return fig, axes # Load samples from generator taken while training with open('train_samples.pkl', 'rb') as f: samples = pkl.load(f) Explanation: Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training. End of explanation _ = view_samples(-1, samples) Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make. End of explanation rows, cols = 10, 6 fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True) for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes): for img, ax in zip(sample[::int(len(sample)/cols)], ax_row): ax.imshow(img.reshape((28,28)), cmap='Greys_r') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! 
End of explanation saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) view_samples(0, [gen_samples]) Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3. Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! End of explanation
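As a small optional follow-up (an illustrative sketch rather than part of the original notebook), the same checkpoint can be reloaded whenever fresh digits are needed and the resulting grid saved to disk; the output filename below is an arbitrary choice:
# Draw a fresh batch of latent vectors, run them through the trained generator, and save the grid
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    new_z = np.random.uniform(-1, 1, size=(16, z_size))
    new_samples = sess.run(generator(input_z, input_size, reuse=True),
                           feed_dict={input_z: new_z})

fig, axes = view_samples(0, [new_samples])   # reuse the helper defined earlier
fig.savefig('new_generated_digits.png')      # hypothetical filename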
12,092
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Seaice MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. Prognostic Is Required Step7: 3. Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required Step8: 3.2. Ocean Freezing Point Value Is Required Step9: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required Step10: 4.2. Canonical Horizontal Resolution Is Required Step11: 4.3. Number Of Horizontal Gridpoints Is Required Step12: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required Step13: 5.2. Target Is Required Step14: 5.3. Simulations Is Required Step15: 5.4. Metrics Used Is Required Step16: 5.5. Variables Is Required Step17: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required Step18: 6.2. Additional Parameters Is Required Step19: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required Step20: 7.2. On Diagnostic Variables Is Required Step21: 7.3. Missing Processes Is Required Step22: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. Description Is Required Step23: 8.2. Properties Is Required Step24: 8.3. Budget Is Required Step25: 8.4. Was Flux Correction Used Is Required Step26: 8.5. Corrected Conserved Prognostic Variables Is Required Step27: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. Grid Is Required Step28: 9.2. Grid Type Is Required Step29: 9.3. Scheme Is Required Step30: 9.4. Thermodynamics Time Step Is Required Step31: 9.5. Dynamics Time Step Is Required Step32: 9.6. Additional Details Is Required Step33: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required Step34: 10.2. Number Of Layers Is Required Step35: 10.3. Additional Details Is Required Step36: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required Step37: 11.2. Number Of Categories Is Required Step38: 11.3. 
Category Limits Is Required Step39: 11.4. Ice Thickness Distribution Scheme Is Required Step40: 11.5. Other Is Required Step41: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required Step42: 12.2. Number Of Snow Levels Is Required Step43: 12.3. Snow Fraction Is Required Step44: 12.4. Additional Details Is Required Step45: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required Step46: 13.2. Transport In Thickness Space Is Required Step47: 13.3. Ice Strength Formulation Is Required Step48: 13.4. Redistribution Is Required Step49: 13.5. Rheology Is Required Step50: 14. Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required Step51: 14.2. Thermal Conductivity Is Required Step52: 14.3. Heat Diffusion Is Required Step53: 14.4. Basal Heat Flux Is Required Step54: 14.5. Fixed Salinity Value Is Required Step55: 14.6. Heat Content Of Precipitation Is Required Step56: 14.7. Precipitation Effects On Salinity Is Required Step57: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required Step58: 15.2. Ice Vertical Growth And Melt Is Required Step59: 15.3. Ice Lateral Melting Is Required Step60: 15.4. Ice Surface Sublimation Is Required Step61: 15.5. Frazil Ice Is Required Step62: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required Step63: 16.2. Sea Ice Salinity Thermal Impacts Is Required Step64: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required Step65: 17.2. Constant Salinity Value Is Required Step66: 17.3. Additional Details Is Required Step67: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required Step68: 18.2. Constant Salinity Value Is Required Step69: 18.3. Additional Details Is Required Step70: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required Step71: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required Step72: 20.2. Additional Details Is Required Step73: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required Step74: 21.2. Formulation Is Required Step75: 21.3. Impacts Is Required Step76: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required Step77: 22.2. Snow Aging Scheme Is Required Step78: 22.3. Has Snow Ice Formation Is Required Step79: 22.4. Snow Ice Formation Scheme Is Required Step80: 22.5. Redistribution Is Required Step81: 22.6. Heat Diffusion Is Required Step82: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required Step83: 23.2. Ice Radiation Transmission Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-3', 'seaice') Explanation: ES-DOC CMIP6 Model Properties - Seaice MIP Era: CMIP6 Institute: NIWA Source ID: SANDBOX-3 Topic: Seaice Sub-Topics: Dynamics, Thermodynamics, Radiative Processes. Properties: 80 (63 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:30 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of sea ice model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.variables.prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea ice temperature" # "Sea ice concentration" # "Sea ice thickness" # "Sea ice volume per grid cell area" # "Sea ice u-velocity" # "Sea ice v-velocity" # "Sea ice enthalpy" # "Internal ice stress" # "Salinity" # "Snow temperature" # "Snow depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the sea ice component. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS-10" # "Constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Ocean Freezing Point Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant seawater freezing point, specify this value. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. 
Target Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Simulations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Which simulations had tuning applied, e.g. all, not historical, only pi-control? * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Metrics Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any observed metrics used in tuning model/parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.5. Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Which variables were changed during the tuning process? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ice strength (P*) in units of N m{-2}" # "Snow conductivity (ks) in units of W m{-1} K{-1} " # "Minimum thickness of ice created in leads (h0) in units of m" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N What values were specificed for the following parameters if used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Additional Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.description') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General overview description of any key assumptions made in this model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. On Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Missing Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Provide a general description of conservation methodology. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.properties') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Mass" # "Salt" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Properties Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in sea ice by the numerical schemes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.4. Was Flux Correction Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does conservation involved flux correction? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Corrected Conserved Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Ocean grid" # "Atmosphere Grid" # "Own Grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Grid on which sea ice is horizontal discretised? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Structured grid" # "Unstructured grid" # "Adaptive grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.2. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the type of sea ice grid? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite differences" # "Finite elements" # "Finite volumes" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the advection scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.4. Thermodynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model thermodynamic component in seconds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.5. Dynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model dynamic component in seconds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.6. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional horizontal discretisation details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Zero-layer" # "Two-layers" # "Multi-layers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 10.2. Number Of Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using multi-layers specify how many. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional vertical grid details. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Set to true if the sea ice model has multiple sea ice categories. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Number Of Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify how many. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Category Limits Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify each of the category limits. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. Ice Thickness Distribution Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the sea ice thickness distribution scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.other') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Other Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow on ice represented in this model? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 12.2. Number Of Snow Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels of snow on ice? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. Snow Fraction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the snow fraction on sea ice is determined End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.4. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional details related to snow on ice. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.horizontal_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of horizontal advection of sea ice? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Transport In Thickness Space Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice transport in thickness space (i.e. in thickness categories)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Hibler 1979" # "Rothrock 1975" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.3. Ice Strength Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which method of sea ice strength formulation is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.redistribution') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Rafting" # "Ridging" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.4. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which processes can redistribute sea ice (including thickness)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.rheology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Free-drift" # "Mohr-Coloumb" # "Visco-plastic" # "Elastic-visco-plastic" # "Elastic-anisotropic-plastic" # "Granular" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Rheology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Rheology, what is the ice deformation formulation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice latent heat (Semtner 0-layer)" # "Pure ice latent and sensible heat" # "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)" # "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. 
Enthalpy Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the energy formulation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice" # "Saline ice" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Thermal Conductivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of thermal conductivity is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Conduction fluxes" # "Conduction and radiation heat fluxes" # "Conduction, radiation and latent heat transport" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.3. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of heat diffusion? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heat Reservoir" # "Thermal Fixed Salinity" # "Thermal Varying Salinity" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.4. Basal Heat Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method by which basal ocean heat flux is handled? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.5. Fixed Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.6. Heat Content Of Precipitation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which the heat content of precipitation is handled. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.7. Precipitation Effects On Salinity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which new sea ice is formed in open water. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Ice Vertical Growth And Melt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs the vertical growth and melt of sea ice. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Floe-size dependent (Bitz et al 2001)" # "Virtual thin ice melting (for single-category)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.3. Ice Lateral Melting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice lateral melting? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.4. Ice Surface Sublimation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs sea ice surface sublimation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.5. Frazil Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of frazil ice formation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 16.2. Sea Ice Salinity Thermal Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does sea ice salinity impact the thermal properties of sea ice? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the mass transport of salt calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 17.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the thermodynamic calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 18.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Virtual (enhancement of thermal conductivity, thin ice melting)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice thickness distribution represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Parameterised" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice floe-size represented? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Please provide further details on any parameterisation of floe-size. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are melt ponds included in the sea ice model? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flocco and Feltham (2010)" # "Level-ice melt ponds" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21.2. Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What method of melt pond formulation is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Albedo" # "Freshwater" # "Heat" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21.3. Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What do melt ponds have an impact on? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has a snow aging scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Snow Aging Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow aging scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22.3. Has Snow Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has snow ice formation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.4. Snow Ice Formation Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow ice formation scheme. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.5. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the impact of ridging on snow cover? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Single-layered heat diffusion" # "Multi-layered heat diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.6. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the heat diffusion through snow methodology in sea ice thermodynamics? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Parameterized" # "Multi-band albedo" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used to handle surface albedo. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Exponential attenuation" # "Ice radiation transmission per category" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. Ice Radiation Transmission Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method by which solar radiation through sea ice is handled. End of explanation
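The sea-ice cells above are an unfilled ES-DOC template: every property repeats the same DOC.set_id / DOC.set_value pattern, with the allowed strings listed under "Valid Choices" for ENUM properties. As a minimal, purely illustrative sketch of how a few of these cells might be completed, the values below are placeholders taken from the listed choices, not NIWA's actual SANDBOX-3 description, and the sketch assumes the Document Setup cell above has been run so that DOC exists:

# Illustrative placeholders only - replace with the real model description.
# 1.2 Model Name (STRING, cardinality 1.1)
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
DOC.set_value("CICE 4.2")          # example name quoted in the property help text

# 2.1 Prognostic variables (ENUM, cardinality 1.N)
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
DOC.set_value("Sea ice concentration")
DOC.set_value("Sea ice thickness")
DOC.set_value("Sea ice u-velocity")
DOC.set_value("Sea ice v-velocity")

# 3.1 / 3.2 Ocean freezing point (ENUM) and its constant value (FLOAT, cardinality 0.1)
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
DOC.set_value("Constant")
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
DOC.set_value(-1.8)                # deg C, illustrative value

The set_id strings and choice strings are copied verbatim from the cells above; whether a 1.N property accepts repeated set_value calls is an assumption here and should be checked against the notebook help page linked in the document header.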
12,093
Given the following text description, write Python code to implement the functionality described below step by step Description: Plotting age distributions with respect to genotype groups Step1: For two of the 5 groups, the Shapiro test p-value is lower than 1e-3, which means that the distributions of these two groups can't be considered as normal. (But theoretically none of them is) Matching pairs using nearest neighbours The matching algorithm Step2: Loading data Step3: Matching the groups Step4: Plotting the data to see that the groups are now matching Step5: Matching groups using linear assignment method Step6: Plotting the data to see that the groups are now matching Step7: Assessing the effect from the matching We perform a two-sample t-test between each group and the target group, before and after applying the matching. As the dataset is composed of 3 variables (age, gender, education), this returns 3 t values and 3 p-values for each comparison.
Python Code: %matplotlib inline import pandas as pd from scipy import stats from matplotlib import pyplot as plt data = pd.read_excel('/home/grg/spm/data/covariates.xls') for i in xrange(5): x = data[data['apo'] == i]['age'].values plt.hist(x, bins=20) print i, 'W:%.4f p:%.4f -'%stats.shapiro(x), len(x), 'subjects between', int(min(x)), 'and', int(max(x)) plt.legend(['apoe23', 'apoe24', 'apoe33', 'apoe34', 'apoe44']) plt.show() Explanation: Plotting age distributions with respect to genotype groups End of explanation
from sklearn.preprocessing import StandardScaler from sklearn.neighbors import NearestNeighbors def get_matching_pairs(treated_df, non_treated_df, scaler=True): treated_x = treated_df.values non_treated_x = non_treated_df.values if scaler: scaler = StandardScaler() scaler.fit(treated_x) treated_x = scaler.transform(treated_x) non_treated_x = scaler.transform(non_treated_x) nbrs = NearestNeighbors(n_neighbors=1, algorithm='ball_tree').fit(non_treated_x) distances, indices = nbrs.kneighbors(treated_x) indices = indices.reshape(indices.shape[0]) matched = non_treated_df.iloc[indices] return matched Explanation: For two of the 5 groups, the Shapiro test p-value is lower than 1e-3, which means that the distributions of these two groups can't be considered as normal. (But theoretically none of them is) Matching pairs using nearest neighbours The matching algorithm: End of explanation
df = pd.read_excel('/home/grg/spm/data/covariates.xls') df = df[['subject','apo','age','gender','educyears']] groups = [df[df['apo']==i] for i in xrange(5)] for i in xrange(5): groups[i] = groups[i].set_index(groups[i]['subject']) del groups[i]['subject'] del groups[i]['apo'] Explanation: Loading data End of explanation
treated_df = groups[4] matched_df = [get_matching_pairs(treated_df, groups[i], scaler=False) for i in xrange(4)] Explanation: Matching the groups End of explanation
fig, ax = plt.subplots(figsize=(6,6)) for i in xrange(4): x = matched_df[i]['age'] plt.hist(x, bins=20) print i, 'W:%.4f p:%.4f -'%stats.shapiro(x), len(x), 'subjects between', int(min(x)), 'and', int(max(x)) x = treated_df['age'] plt.hist(x, bins=20) print 4, 'W:%.4f p:%.4f -'%stats.shapiro(x), len(x), 'subjects between', int(min(x)), 'and', int(max(x)) plt.legend(['apoe23', 'apoe24', 'apoe33', 'apoe34', 'apoe44']) Explanation: Plotting the data to see that the groups are now matching End of explanation
import pandas as pd df = pd.read_excel('/home/grg/spm/data/covariates.xls') df = df[['subject','apo','age','gender','educyears']] groups = [df[df['apo']==i] for i in xrange(5)] for i in xrange(5): groups[i] = groups[i].set_index(groups[i]['subject']) del groups[i]['subject'] del groups[i]['apo'] treated_df = groups[4] non_treated_df = groups[0] from scipy.spatial.distance import cdist from scipy import optimize def get_matching_pairs(treated_df, non_treated_df): cost_matrix = cdist(treated_df.values, non_treated_df.values) row_ind, col_ind = optimize.linear_sum_assignment(cost_matrix) return non_treated_df.iloc[col_ind] treated_df = groups[4] matched_df = [get_matching_pairs(treated_df, groups[i]) for i in xrange(4)] Explanation: Matching groups using linear assignment method End of explanation
fig, ax = plt.subplots(figsize=(6,6)) for i in xrange(4): x = matched_df[i]['age'] plt.hist(x, bins=20) print i, 'W:%.4f p:%.4f -'%stats.shapiro(x), len(x), 'subjects between', int(min(x)), 'and', int(max(x)) x = treated_df['age'] plt.hist(x, bins=20) print 4, 'W:%.4f p:%.4f -'%stats.shapiro(x), len(x), 'subjects between', int(min(x)), 'and', int(max(x)) plt.legend(['apoe23', 'apoe24', 'apoe33', 'apoe34', 'apoe44']) import json groups_index = [each.index.tolist() for each in matched_df] groups_index.append(groups[4].index.tolist()) json.dump(groups_index, open('/tmp/groups.json','w')) Explanation: Plotting the data to see that the groups are now matching End of explanation
from scipy.stats import ttest_ind for i in xrange(4): print '=== Group %s ==='%i tval_bef, pval_bef = ttest_ind(groups[i].values, treated_df.values) tval_aft, pval_aft = ttest_ind(matched_df[i].values, treated_df.values) print 'p-values before matching: %s - p-values after matching: %s'%(pval_bef, pval_aft) df = pd.read_excel('/home/grg/spm/data/covariates.xls') list(df[df['apo']!=1]['subject'].values) Explanation: Assessing the effect from the matching We perform a two-sample t-test between each group and the target group, before and after applying the matching. As the dataset is composed of 3 variables (age, gender, education), this returns 3 t values and 3 p-values for each comparison. End of explanation
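As a small optional convenience on top of the matching workflow above (not part of the original analysis), the before/after p-values printed in the loop can be collected into a single table, which makes the per-covariate comparison easier to scan. The sketch assumes the objects defined above (groups, matched_df, treated_df) are still in memory and that the three covariates are the columns 'age', 'gender' and 'educyears':

from scipy.stats import ttest_ind
import pandas as pd

covariates = ['age', 'gender', 'educyears']
rows = []
for i in range(4):
    # per-column t-tests against the target group, before and after matching
    _, p_before = ttest_ind(groups[i][covariates].values, treated_df[covariates].values)
    _, p_after = ttest_ind(matched_df[i][covariates].values, treated_df[covariates].values)
    for cov, pb, pa in zip(covariates, p_before, p_after):
        rows.append({'group': i, 'covariate': cov,
                     'p_before': round(pb, 4), 'p_after': round(pa, 4)})

summary = pd.DataFrame(rows, columns=['group', 'covariate', 'p_before', 'p_after'])
print(summary)

Higher p-values after matching indicate that the matched subsamples are harder to distinguish from the apoe44 target group on these covariates.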
12,094
Given the following text description, write Python code to implement the functionality described below step by step Description: Vorhersagen mit trainiertem CNN Modell und Auswertung Step1: Laden realistischer Daten Step2: Modell laden Step3: Bewertung Step4: Nutzung mit Server Installationen 1. Flask basiert https Step5: 2. Google Cloud ML Service
Python Code: import warnings warnings.filterwarnings('ignore') %matplotlib inline %pylab inline import matplotlib.pylab as plt import numpy as np from distutils.version import StrictVersion import sklearn print(sklearn.__version__) assert StrictVersion(sklearn.__version__ ) >= StrictVersion('0.18.1') import tensorflow as tf tf.logging.set_verbosity(tf.logging.ERROR) print(tf.__version__) assert StrictVersion(tf.__version__) >= StrictVersion('1.2.1') import keras print(keras.__version__) assert StrictVersion(keras.__version__) >= StrictVersion('2.0.6') # We need keras 2.0.6 or later as this is the version we created the model with # !pip install keras --upgrade Explanation: Vorhersagen mit trainiertem CNN Modell und Auswertung End of explanation !curl -O https://raw.githubusercontent.com/DJCordhose/speed-limit-signs/master/data/real-world.zip from zipfile import ZipFile zip = ZipFile(r'real-world.zip') zip.extractall('.') !ls -l real-world import os import skimage.data import skimage.transform def load_data(data_dir): # Get all subdirectories of data_dir. Each represents a label. directories = [d for d in os.listdir(data_dir) if os.path.isdir(os.path.join(data_dir, d))] # Loop through the label directories and collect the data in # two lists, labels and images. labels = [] images = [] all_file_names = [] for d in directories: label_dir = os.path.join(data_dir, d) file_names = [os.path.join(label_dir, f) for f in os.listdir(label_dir)] # For each label, load it's images and add them to the images list. # And add the label number (i.e. directory name) to the labels list. for f in file_names: images.append(skimage.data.imread(f)) labels.append(int(d)) all_file_names.append(f) # Resize images images64 = [skimage.transform.resize(image, (64, 64)) for image in images] return images64, labels, all_file_names # Load datasets. ROOT_PATH = "./" data_dir = os.path.join(ROOT_PATH, "real-world") images, labels, file_names = load_data(data_dir) import matplotlib import matplotlib.pyplot as plt def display_images_and_labels(images, labels): plt.figure(figsize=(15, 15)) i = 0 for label in labels: # Pick the first image for each label. image = images[i] plt.subplot(4, 4, i + 1) # A grid of 8 rows x 8 columns plt.axis('off') plt.title("{0}".format(label)) i += 1 plt.imshow(image) plt.show() display_images_and_labels(images, file_names) Explanation: Laden realistischer Daten End of explanation !curl -O https://transfer.sh/M5SOs/conv-vgg.hdf5 from keras.models import load_model model = load_model('conv-vgg.hdf5') !ls -lh BATCH_SIZE = 500 y = np.array(labels) X = np.array(images) from keras.utils.np_utils import to_categorical num_categories = 6 y = to_categorical(y, num_categories) loss, accuracy = model.evaluate(X, y, batch_size=BATCH_SIZE) loss, accuracy import skimage.transform def predict_single(image): # normalize X_sample = np.array([image]) prediction = model.predict(X_sample) predicted_category = np.argmax(prediction, axis=1) return predicted_category, prediction # Display the predictions and the ground truth visually. 
def display_prediction (images, true_labels, predicted_labels): fig = plt.figure(figsize=(10, 10)) for i in range(len(true_labels)): truth = true_labels[i] prediction = predicted_labels[i] plt.subplot(6, 3,1+i) plt.axis('off') color='green' if truth == prediction else 'red' plt.text(80, 10, "Truth: {0}\nPrediction: {1}".format(truth, prediction), fontsize=12, color=color) plt.imshow(images[i]) X_sample = np.array(images) prediction = model.predict(X_sample) predicted_categories = np.argmax(prediction, axis=1) ground_truth = np.array(labels) display_prediction(images, ground_truth, predicted_categories) Explanation: Modell laden End of explanation !rm -r tmp !mkdir tmp # Only works locally # https://github.com/keplr-io/quiver # create a tmp dir in the local directory this notebook runs in, otherwise quiver will fail (and won't tell you why) # !mkdir tmp # change image_class to feed in different classes image_class = '3' # https://github.com/keplr-io/quiver from quiver_engine import server server.launch(model, input_folder=data_dir+'/'+image_class, port=7000) # open at http://localhost:7000/ # interrupt kernel to return control to notebook # Alternative mit noch mehr Visualisierungsmöglichkeiten # https://github.com/raghakot/keras-vis Explanation: Bewertung: 9 von 15 richtig, 60% Accracy Gar nicht mal so schlecht, besonders weil wir Translations Invariance nie trainiert haben Aber no translation invariance, signes have to be at center not robust against background false positives (70 is address sign, not traffic sign) not robust against unclear signes (50 looks like 30) not robust against distortions (100, 80) Next steps might be to artifically expand training set by applying transformations that match missing robustness introduce category 'no speed limit sign' translation invariance added in training material try other architectures Wir gucken mal unter die Haube Bisher haben wir das Modell als Blackbox angesehen, wir haben gar keine Ahnung, was da eigentlich erkannt wird Warum wird das mit der 80 nichts? Nach der ersten Conv Schicht Nach einer mittleren Conv Schicht Nach der letzten Conv Schicht End of explanation # Erfordert einen lokalen Flask Server !curl -H "Content-Type: application/json" -X GET -d '{"url": "https://github.com/DJCordhose/speed-limit-signs/raw/master/data/real-world/1000/70-house-detail.jpg"}' http://127.0.0.1:5000 !curl -H "Content-Type: application/json" -X GET -d '{"url": "https://github.com/DJCordhose/speed-limit-signs/raw/master/data/real-world/4/100-sky-cutoff-detail.jpg"}' http://127.0.0.1:5000 Explanation: Nutzung mit Server Installationen 1. Flask basiert https://github.com/DJCordhose/speed-limit-signs/tree/master/server End of explanation # Example for iris, model exported as Tensorflow # gsutil cp -R 1 gs://irisnn # create model and version at https://console.cloud.google.com/mlengine # in a DOS shell on local machine in this folder # gcloud ml-engine predict --model=irisnn --json-instances=./sample_iris.json # SCORES # [0.9954029321670532, 0.004596732556819916, 3.3544753819114703e-07] Explanation: 2. Google Cloud ML Service End of explanation
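The evaluation above ends with possible next steps, in particular artificially expanding the training set with transformations and adding translation invariance to the training material. A minimal sketch of that idea with Keras' built-in augmentation follows; it is not the authors' pipeline, X_train and y_train are hypothetical names for the original 64x64 training images and one-hot labels (which are not loaded in this notebook), and the parameter values are guesses to be tuned:

from keras.preprocessing.image import ImageDataGenerator

# Random shifts, rotations and zooms approximate the robustness the evaluation
# found missing (off-centre signs, slight distortions).
datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.15,
                             height_shift_range=0.15,
                             zoom_range=0.1,
                             fill_mode='nearest')

batches = datagen.flow(X_train, y_train, batch_size=32)

# Continue training (or retrain) the model on the augmented stream.
model.fit_generator(batches,
                    steps_per_epoch=len(X_train) // 32,
                    epochs=5)

A separate 'no speed limit sign' class, also mentioned above, would additionally require collecting negative examples; augmentation alone does not address the false-positive cases.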
12,095
Given the following text description, write Python code to implement the functionality described below step by step Description: Custom Estimator Learning Objectives Step1: Next, we'll load our data set. Step2: Examine the data It's a good idea to get to know your data a little bit before you work with it. We'll print out a quick summary of a few useful statistics on each column. This will include things like mean, standard deviation, max, min, and various quantiles. Step3: This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Let's create a different, more appropriate feature. Because we are predicing the price of a single house, we should try to make all our features correspond to a single house as well Step4: Build a custom estimator linear regressor In this exercise, we'll be trying to predict median_house_value. It will be our label. We'll use the remaining columns as our input features. To train our model, we'll use the Estimator API and create a custom estimator for linear regression. Note that we don't actually need a custom estimator for linear regression since there is a canned estimator for it, however we're keeping it simple so you can practice creating a custom estimator function.
Python Code: import math import shutil import numpy as np import pandas as pd import tensorflow as tf tf.logging.set_verbosity(tf.logging.INFO) pd.options.display.max_rows = 10 pd.options.display.float_format = '{:.1f}'.format Explanation: Custom Estimator Learning Objectives: * Use a custom estimator of the Estimator class in TensorFlow to predict median housing price The data is based on 1990 census data from California. This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. <p> Let's use a set of features to predict house value. ## Set Up In this first cell, we'll load the necessary libraries. End of explanation df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep = ",") Explanation: Next, we'll load our data set. End of explanation df.head() df.describe() Explanation: Examine the data It's a good idea to get to know your data a little bit before you work with it. We'll print out a quick summary of a few useful statistics on each column. This will include things like mean, standard deviation, max, min, and various quantiles. End of explanation df['num_rooms'] = df['total_rooms'] / df['households'] df['num_bedrooms'] = df['total_bedrooms'] / df['households'] df['persons_per_house'] = df['population'] / df['households'] df.describe() df.drop(['total_rooms', 'total_bedrooms', 'population', 'households'], axis = 1, inplace = True) df.describe() Explanation: This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Let's create a different, more appropriate feature. Because we are predicing the price of a single house, we should try to make all our features correspond to a single house as well End of explanation # Define feature columns feature_columns = { colname : tf.feature_column.numeric_column(colname) \ for colname in ['housing_median_age','median_income','num_rooms','num_bedrooms','persons_per_house'] } # Bucketize lat, lon so it's not so high-res; California is mostly N-S, so more lats than lons feature_columns['longitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('longitude'), np.linspace(-124.3, -114.3, 5).tolist()) feature_columns['latitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('latitude'), np.linspace(32.5, 42, 10).tolist()) # Split into train and eval and create input functions msk = np.random.rand(len(df)) < 0.8 traindf = df[msk] evaldf = df[~msk] SCALE = 100000 BATCH_SIZE=128 train_input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[list(feature_columns.keys())], y = traindf["median_house_value"] / SCALE, num_epochs = None, batch_size = BATCH_SIZE, shuffle = True) eval_input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[list(feature_columns.keys())], y = evaldf["median_house_value"] / SCALE, # note the scaling num_epochs = 1, batch_size = len(evaldf), shuffle=False) # Create the custom estimator def custom_estimator(features, labels, mode, params): # 0. Extract data from feature columns input_layer = tf.feature_column.input_layer(features, params['feature_columns']) # 1. Define Model Architecture predictions = tf.layers.dense(input_layer,1,activation=None) # 2. 
Loss function, training/eval ops if mode == tf.estimator.ModeKeys.TRAIN or mode == tf.estimator.ModeKeys.EVAL: labels = tf.expand_dims(tf.cast(labels, dtype=tf.float32), -1) loss = tf.losses.mean_squared_error(labels, predictions) optimizer = tf.train.FtrlOptimizer(learning_rate=0.2) train_op = optimizer.minimize( loss = loss, global_step = tf.train.get_global_step()) eval_metric_ops = { "rmse": tf.metrics.root_mean_squared_error(labels*SCALE, predictions*SCALE) } else: loss = None train_op = None eval_metric_ops = None # 3. Create predictions predictions_dict = #TODO: create predictions dictionary # 4. Create export outputs export_outputs = #TODO: create export_outputs dictionary # 5. Return EstimatorSpec return tf.estimator.EstimatorSpec( mode = mode, predictions = predictions_dict, loss = loss, train_op = train_op, eval_metric_ops = eval_metric_ops, export_outputs = export_outputs) # Create serving input function def serving_input_fn(): feature_placeholders = { colname : tf.placeholder(tf.float32, [None]) for colname in 'housing_median_age,median_income,num_rooms,num_bedrooms,persons_per_house'.split(',') } feature_placeholders['longitude'] = tf.placeholder(tf.float32, [None]) feature_placeholders['latitude'] = tf.placeholder(tf.float32, [None]) features = { key: tf.expand_dims(tensor, -1) for key, tensor in feature_placeholders.items() } return tf.estimator.export.ServingInputReceiver(features, feature_placeholders) # Create custom estimator's train and evaluate function def train_and_evaluate(output_dir): estimator = # TODO: Add estimator, make sure to add params={'feature_columns': list(feature_columns.values())} as an argument train_spec = tf.estimator.TrainSpec( input_fn = train_input_fn, max_steps = 1000) exporter = tf.estimator.LatestExporter('exporter', serving_input_fn) eval_spec = tf.estimator.EvalSpec( input_fn = eval_input_fn, steps = None, exporters = exporter) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) #Run Training OUTDIR = 'custom_estimator_trained_model' shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time train_and_evaluate(OUTDIR) Explanation: Build a custom estimator linear regressor In this exercise, we'll be trying to predict median_house_value. It will be our label. We'll use the remaining columns as our input features. To train our model, we'll use the Estimator API and create a custom estimator for linear regression. Note that we don't actually need a custom estimator for linear regression since there is a canned estimator for it, however we're keeping it simple so you can practice creating a custom estimator function. End of explanation
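The three TODOs above are deliberately left open as an exercise. One possible way to fill them in, consistent with the EstimatorSpec that custom_estimator returns, is sketched below; the key name "predicted" and the use of PredictOutput are choices made here, not the course's official solution (tf.estimator.export.RegressionOutput would be an alternative for a single regression output):

# Inside custom_estimator - one possible completion of the two TODOs:
predictions_dict = {"predicted": predictions}
export_outputs = {
    tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
        tf.estimator.export.PredictOutput(predictions_dict)}

# Inside train_and_evaluate - one possible completion of the estimator TODO:
estimator = tf.estimator.Estimator(
    model_fn = custom_estimator,
    model_dir = output_dir,
    params = {'feature_columns': list(feature_columns.values())})

With these pieces in place, train_and_evaluate(OUTDIR) trains for 1000 steps, evaluates, and exports a saved model through the LatestExporter defined above.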
12,096
Given the following text description, write Python code to implement the functionality described below step by step Description: Alapozás Görbék megadási módjai Implicit alak Az implicit alak a görbét alkotó pontokat egy teszt formájában adja meg, melynek segítségével el lehet dönteni, hogy egy adott pont rajta fekszik-e a görbén. Kétdimenziós esetben az implicit alak felírható a következő formában $$ f(x, y) = 0, $$ mely egyenletet a görbét alkotó pontok elégítik ki. $f$ itt egy tetszőleges valós értékű függvény. Például, ha az origó középpontú $r$ sugarú kört szeretnénk felírni implicit alakban, akkor $$ f(x,y) = x^2 + y^2 - r^2. $$ Paraméteres alak A görbék paraméteres megadása egy leképezés valamilyen paramétertartomány és a görbepontok között. A paraméteres alak egy olyan függvény, mely a paraméter értékeihez pozíciókat ad meg a görbén. Képzeljük el, hogy papíron, ceruzával rajzolunk egy görbét. Ekkor a paraméter tekinthető az időnek, a paraméter tartománya pedig a rajzolás kezdetének és befejeztének. Ekkor a paraméteres alak megadja, hogy egy adott időpillanatban épp hol volt a ceruza Step1: $C^1$ matematikai folytonosság Ebben az esetben a görbedarabok első deriváltja (a görbéhez húzott érintővektor) megegyezik a csatlakozási pontban. Az előző $f(t)$ és $g(u)$ függvények által leírt görbék esetén tehát $$ f^\prime(t_2) = g^\prime(u_1). $$ Ha a $C^1$ folytonosság nem teljesül, akkor a csatlakozási pontban éles törést figyelhetünk meg. Step2: $C^2$ matematikai folytonosság $C^2$ matematikai folytonosság esetén a csatlakozási pontban a görbék második deriváltja megegyezik. Azaz $$ f^{\prime\prime}(t_2) = g^{\prime\prime}(u_1). $$ $C^2$ folytonosság hiányában, bár nem lesz törés a csatlakozási pontban, azonban a görbe alakja hirtelen megváltozhat. Step3: Geometriai folytonosság $G^0$ geometriai folytonosság Ugyanazt jelenti, mint a $C^0$ matematikai folytonosság, a görbedarabok csatlakoznak egymáshoz. $G^1$ geometriai folytonosság A $G^1$ geometriai folytonosság azt jelenti, hogy a két csatlakozó görbedarab csatlakozási pontba húzott érintővektora különböző nagyságú, azonban azonos irányú. Azaz $$ f^{\prime}(t_2) = k \cdot g^{\prime}(u_1), $$ ahol $k > 0$ valós szám. Step4: Kapcsolat a matematikai és a geometriai folytonosság között A matematikai folytonosság szigorúbb, mint a geometriai folytonosság, hiszen az $n$-ed rendű matematikai folytonosság az $n$-edik deriváltak egyenlőségét kívánja meg. Emiatt, ha két görbe $C^n$ matematikai folytonossággal csatlakozik, akkor ez a csatlakozás egyúttal $G^n$ geometriai folytonosságú is. A paraméteres alak Ha rendelkezünk a görbe alakját befolyásoló kontrollpontokkal, valamint tudjuk, hogy hanyadfokú polinommal szeretnénk leírni a görbét, felírhatjuk a paraméteres alakot. Azonban ezt háromféle módon is megtehetjük
Python Code: addScript("js/c0-parametric-continuity", "c0-parametric-continuity") Explanation: Fundamentals Ways of specifying curves Implicit form The implicit form specifies the points that make up a curve as a test which decides whether a given point lies on the curve. In the two-dimensional case the implicit form can be written as $$ f(x, y) = 0, $$ an equation satisfied by the points of the curve. Here $f$ is an arbitrary real-valued function. For example, the circle of radius $r$ centered at the origin is written in implicit form as $$ f(x,y) = x^2 + y^2 - r^2. $$ Parametric form The parametric description of a curve is a mapping between some parameter domain and the points of the curve. The parametric form is a function that assigns positions on the curve to parameter values. Imagine drawing a curve on paper with a pencil: the parameter can then be thought of as time, and the parameter domain as the start and end of the drawing. The parametric form tells us where the pencil was at a given instant: $$ (x, y) = f(t). $$ Note that, in contrast to the implicit form, $f$ is now a vector-valued function. In parametric form the circle of radius $r$ centered at the origin can be written as $$ f(t) = (\cos t, \sin t) \qquad t \in [0, 2\pi). $$ In the remainder of these notes we assume the parametric form. Procedural form The procedural or generative form covers any procedure outside the previous two groups that can generate curve points, for example the various subdivision schemes. Control points To specify a curve we usually need so-called control points, which determine the shape the curve takes. If the curve passes through a control point, we say that it interpolates that point; otherwise it approximates it. Since the control points determine the shape of the curve, we influence the curve by manipulating them. Interpolation Suppose the control points $p_0, p_1, \ldots, p_n$ are given. In interpolation we look for a curve $f(t)$ that fits these points, i.e. there are values $t_0, t_1, \ldots, t_n$ in the domain of the parameter $t$ such that $$ \begin{align} f(t_0) &= p_0\\ f(t_1) &= p_1 \\ &\vdots \\ f(t_n) &= p_n \\ \end{align} $$ Continuity A frequent problem is that we have more than one curve (curve segment) and want to connect them in some way. How the curve segments meet when they are joined is characterized by their continuity, a property examined at the joining point. Parametric continuity $C^0$ parametric continuity $C^0$ parametric continuity simply means that the curves are connected at their endpoints. That is, if we have a curve given by a function $f(t)$ with parameter domain $[t_1, t_2]$ and a curve given by a function $g(u)$ with parameter domain $[u_1, u_2]$, then $$ f(t_2) = g(u_1). $$ End of explanation
addScript("js/c1-parametric-continuity", "c1-parametric-continuity") Explanation: $C^1$ parametric continuity In this case the first derivatives of the curve segments (the tangent vectors of the curves) coincide at the joining point. For the curves described by the previous functions $f(t)$ and $g(u)$ this means $$ f^\prime(t_2) = g^\prime(u_1). $$ If $C^1$ continuity does not hold, a sharp kink appears at the joining point. End of explanation
addScript("js/c2-parametric-continuity", "c2-parametric-continuity") Explanation: $C^2$ parametric continuity With $C^2$ parametric continuity the second derivatives of the curves coincide at the joining point, i.e. $$ f^{\prime\prime}(t_2) = g^{\prime\prime}(u_1). $$ Without $C^2$ continuity there is no kink at the joining point, but the shape of the curve can still change abruptly. End of explanation
addScript("js/g1-geometric-continuity", "g1-parametric-continuity") Explanation: Geometric continuity $G^0$ geometric continuity This means the same as $C^0$ parametric continuity: the curve segments are joined to each other. $G^1$ geometric continuity $G^1$ geometric continuity means that the tangent vectors of the two joining curve segments at the joining point have different magnitudes but the same direction, i.e. $$ f^{\prime}(t_2) = k \cdot g^{\prime}(u_1), $$ where $k > 0$ is a real number. End of explanation
def styling():
    styles = open("../../styles/custom.html", "r").read()
    return HTML(styles)
styling() Explanation: The relationship between parametric and geometric continuity Parametric continuity is stricter than geometric continuity, since $n$-th order parametric continuity requires the $n$-th derivatives to be equal. Consequently, if two curves join with $C^n$ parametric continuity, the joint is also $G^n$ geometrically continuous. The parametric form If we have the control points that influence the shape of the curve, and we know the degree of the polynomial we want to describe it with, we can write down the parametric form. This can be done in three ways: we state conditions that the curve (that is, the function describing the curve) must satisfy, or we give a characteristic matrix that describes the curve, or we give the weight functions (basis functions) from which the curve can be built. The three formulations are of course equivalent, but each has its own advantages. Let us look at an example of each. Conditional form Let the parametric function describing the curve be $f(t)$, where $t \in [0, 1]$. Suppose we are given $4$ control points, $p_1, p_2, p_3, p_4$, and that we want to construct a cubic curve. Suppose further that $f(t)$ has to satisfy the following conditions: $$ \begin{align} f(0) &= p_1 \\ f(1) &= p_4 \\ f^{\prime}(0) &= 3(p_2 - p_1) \\ f^{\prime}(1) &= 3(p_4 - p_3) \end{align} $$ With these conditions on the values taken at the start and end of the parameter domain, the curve is specified uniquely. Polynomial form Let us write down the polynomial representation of the curve given by the conditions above. We know that we are looking for a cubic polynomial satisfying those conditions. First write the polynomial in the form $$ f(t) = \sum\limits_{i=1}^{n}b_i(t) \cdot p_i, $$ where $b_i(t)$ is the $i$-th weight function. These weight functions specify, for a given value $t$ in the parameter domain, what role the originally given geometric constraints (the control points $p_i$) play. In other words, for every value of $t$, $f(t)$ produces a linear combination of the control points. The general form of a weight function $b_i$ (in the cubic case) is $$ b_i(t) = a_1 \cdot t^3 + b_1 \cdot t^2 + c_1 \cdot t + d_1 $$ For the curve given by the previous conditions, the concrete $b_i$ polynomials are $$ \begin{align} b_1(t) &= -t^3 + 3t^2 - 3t + 1 \\ b_2(t) &= 3t^3 - 6t^2 + 3t \\ b_3(t) &= -3t^3 + 3t^2 \\ b_4(t) &= t^3 \end{align} $$ Matrix form A curve written in polynomial form can easily be rewritten in matrix form. Let us work with the polynomials written above, keeping in mind that we are dealing with a cubic curve. Let $T(t)$ be a $4\times 1$ parameter matrix: $$ T(t) = \begin{bmatrix} t^3 \\ t^2 \\ t \\ 1 \end{bmatrix} $$ Let $M$ be the coefficient matrix, formed from the coefficients of the individual weight functions: $$ M = \begin{bmatrix} a_1 & b_1 & c_1 & d_1 \\ a_2 & b_2 & c_2 & d_2 \\ a_3 & b_3 & c_3 & d_3 \\ a_4 & b_4 & c_4 & d_4 \\ \end{bmatrix} $$ That is, for the previous example: $$ M = \begin{bmatrix} -1 & 3 & -3 & 1 \\ 3 & -6 & 3 & 0 \\ -3 & 3 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} $$ Finally, write down the matrix $G$ of the geometric constraints: $$ G = \begin{bmatrix} p_1 & p_2 & p_3 & p_4 \end{bmatrix} $$ The columns of $G$ hold the corresponding coordinates of the individual control points. The curve can then be written as $$ f(t) = GMT(t) $$ If we first multiply the matrices $M$ and $T(t)$, we obtain the basis functions written above; multiplying these in turn by the corresponding control points gives the linear combination of the control points. Sources Schwarcz Tibor (2005). Bevezetés a számítógépi grafikába (Introduction to Computer Graphics), pp. 48-52, https://gyires.inf.unideb.hu/mobiDiak/Schwarcz-Tibor/Bevezetes-a-szamitogepi-grafikaba/bevgraf.pdf D. D. Hearn, M. P. Baker, W. Carithers (2014). Computer Graphics with OpenGL, Fourth Edition, pp. 409-414. P. Shirley, S. Marschner (2009). Fundamentals of Computer Graphics, Third Edition, pp. 339-348. End of explanation
Given the following text description, write Python code to implement the functionality described below step by step Description: Weight By Portfolio Strategy Basic buy and hold that allows weighting by user specified weights, Equal, Sharpe Ratio, Annual Returns, Std Dev, Vola, or DS Vola. Rebalance is yearly, monthly, weekly, or daily. Option to sell all shares of an investment is regime turns negative. Step1: Some global data Step2: Run Strategy Step3: View log DataFrames Step4: Generate strategy stats - display all available stats. Step5: View Performance by Symbol Step6: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats. Step7: Plot Equity Curves Step8: Bar Graph Step9: Analysis
Python Code: import datetime import matplotlib.pyplot as plt import pandas as pd import pinkfish as pf import strategy # Format price data. pd.options.display.float_format = '{:0.2f}'.format %matplotlib inline # Set size of inline plots. '''note: rcParams can't be in same cell as import matplotlib or %matplotlib inline %matplotlib notebook: will lead to interactive plots embedded within the notebook, you can zoom and resize the figure %matplotlib inline: only draw static images in the notebook ''' plt.rcParams["figure.figsize"] = (10, 7) Explanation: Weight By Portfolio Strategy Basic buy and hold that allows weighting by user specified weights, Equal, Sharpe Ratio, Annual Returns, Std Dev, Vola, or DS Vola. Rebalance is yearly, monthly, weekly, or daily. Option to sell all shares of an investment is regime turns negative. End of explanation # Symbol Lists. SP500_Sectors = \ {'XLB': None, 'XLE': None, 'XLF': None, 'XLI': None, 'XLK': None, 'XLP': None, 'XLU': None, 'XLV': None, 'XLY': None} Mixed_Asset_Classes = \ {'IWB': None, 'SPY': None, 'VGK': None, 'IEV': None, 'EWJ': None, 'EPP': None, 'IEF': None, 'SHY': None, 'GLD': None} FANG_Stocks = \ {'FB': None, 'AMZN': None, 'NFLX': None, 'GOOG': None} Stocks_Bonds_Gold = \ {'SPY': None, 'QQQ': None, 'TLT': None, 'GLD': None} Stocks_Bonds = \ {'SPY': 0.50, 'AGG': 0.50} # Pick one of the above. weights = Stocks_Bonds_Gold symbols = list(weights) capital = 100_000 start = datetime.datetime(*pf.ALPHA_BEGIN) #start = datetime.datetime(*pf.SP500_BEGIN) end = datetime.datetime.now() weight_by_choices = ('equal', 'sharpe', 'ret', 'sd', 'vola', 'ds_vola') rebalance_choices = ('yearly', 'monthly', 'weekly', 'daily') options = { 'use_adj' : True, 'use_cache' : True, 'margin' : 1, 'weights' : weights, 'weight_by' : 'vola', 'rebalance' : 'monthly', 'use_regime_filter' : False } Explanation: Some global data End of explanation s = strategy.Strategy(symbols, capital, start, end, options=options) s.run() Explanation: Run Strategy End of explanation s.rlog.head() s.tlog.tail() s.dbal.tail() Explanation: View log DataFrames: raw trade log, trade log, and daily balance. End of explanation pf.print_full(s.stats) Explanation: Generate strategy stats - display all available stats. End of explanation symbols_no_weights = [k for k,v in weights.items() if v is None] symbols_weights = {k:v for k,v in weights.items() if v is not None} remaining_weight = 1 - sum(symbols_weights.values()) weights_ = weights.copy() for symbol in symbols_no_weights: weights_[symbol] = (1 / len(symbols_no_weights)) * remaining_weight totals = s.portfolio.performance_per_symbol(weights=weights_) totals corr_df = s.portfolio.correlation_map(s.ts) corr_df Explanation: View Performance by Symbol End of explanation benchmark = pf.Benchmark('SPY', s.capital, s.start, s.end, use_adj=True) benchmark.run() Explanation: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats. End of explanation pf.plot_equity_curve(s.dbal, benchmark=benchmark.dbal) Explanation: Plot Equity Curves: Strategy vs Benchmark End of explanation df = pf.plot_bar_graph(s.stats, benchmark.stats) df Explanation: Bar Graph: Strategy vs Benchmark End of explanation kelly = pf.kelly_criterion(s.stats, benchmark.stats) kelly Explanation: Analysis: Kelly Criterian End of explanation
12,098
Given the following text description, write Python code to implement the functionality described below step by step Description: Plotting and Visualization There are a handful of third-party Python packages that are suitable for creating scientific plots and visualizations. These include packages like Step1: The above plot simply shows two sets of random numbers taken from a normal distribution plotted against one another. The 'ro' argument is a shorthand argument telling matplotlib that I wanted the points represented as red circles. This plot was expedient. We can exercise a little more control by breaking the plotting into a workflow Step2: matplotlib is a relatively low-level plotting package, relative to others. It makes very few assumptions about what constitutes good layout (by design), but has a lot of flexiblility to allow the user to completely customize the look of the output. If you want to make your plots look pretty like mine, steal the matplotlibrc file from Huy Nguyen. Plotting in Pandas On the other hand, Pandas includes methods for DataFrame and Series objects that are relatively high-level, and that make reasonable assumptions about how the plot should look. Step3: Notice that by default a line plot is drawn, and a light grid is included. All of this can be changed, however Step4: Similarly, for a DataFrame Step5: As an illustration of the high-level nature of Pandas plots, we can split multiple series into subplots with a single argument for plot Step6: Or, we may want to have some series displayed on the secondary y-axis, which can allow for greater detail and less empty space Step7: If we would like a little more control, we can use matplotlib's subplots function directly, and manually assign plots to its axes Step8: Bar plots Bar plots are useful for displaying and comparing measurable quantities, such as counts or volumes. In Pandas, we just use the plot method with a kind='bar' argument. For this series of examples, let's load up the Titanic dataset Step9: Another way of comparing the groups is to look at the survival rate, by adjusting for the number of people in each group. Step10: Histograms Frequenfly it is useful to look at the distribution of data before you analyze it. Histograms are a sort of bar graph that displays relative frequencies of data values; hence, the y-axis is always some measure of frequency. This can either be raw counts of values or scaled proportions. For example, we might want to see how the fares were distributed aboard the titanic Step11: The hist method puts the continuous fare values into bins, trying to make a sensible décision about how many bins to use (or equivalently, how wide the bins are). We can override the default value (10) Step12: There are algorithms for determining an "optimal" number of bins, each of which varies somehow with the number of observations in the data series. Step13: A density plot is similar to a histogram in that it describes the distribution of the underlying data, but rather than being a pure empirical representation, it is an estimate of the underlying "true" distribution. As a result, it is smoothed into a continuous line plot. We create them in Pandas using the plot method with kind='kde', where kde stands for kernel density estimate. Step14: Often, histograms and density plots are shown together Step15: Here, we had to normalize the histogram (normed=True), since the kernel density is normalized by definition (it is a probability distribution). 
We will explore kernel density estimates more in the next section. Boxplots A different way of visualizing the distribution of data is the boxplot, which is a display of common quantiles; these are typically the quartiles and the lower and upper 5 percent values. Step16: You can think of the box plot as viewing the distribution from above. The blue crosses are "outlier" points that occur outside the extreme quantiles. One way to add additional information to a boxplot is to overlay the actual data; this is generally most suitable with small- or moderate-sized data series. Step17: When data are dense, a couple of tricks used above help the visualization Step18: Why is this plot a poor choice? bar charts should be used for measurable quantities (e.g. raw data), not estimates. The area of the bar does not represent anything, since these are estimates derived from the data. the "data-ink ratio" (sensu Edward Tufte) is very high. There are only 6 values represented here (3 means and 3 standard deviations). the plot hides the underlying data. A boxplot is always a better choice than a dynamite plot. Step19: Exercise Using the Titanic data, create kernel density estimate plots of the age distributions of survivors and victims. Scatterplots To look at how Pandas does scatterplots, let's reload the baseball sample dataset. Step20: Scatterplots are useful for data exploration, where we seek to uncover relationships among variables. There are no scatterplot methods for Series or DataFrame objects; we must instead use the matplotlib function scatter. Step21: We can add additional information to scatterplots by assigning variables to either the size of the symbols or their colors. Step22: To view scatterplots of a large numbers of variables simultaneously, we can use the scatter_matrix function that was recently added to Pandas. It generates a matrix of pair-wise scatterplots, optiorally with histograms or kernel density estimates on the diagonal. Step23: Trellis Plots One of the enduring strengths of carrying out statistical analyses in the R language is the quality of its graphics. In particular, the addition of Hadley Wickham's ggplot2 package allows for flexible yet user-friendly generation of publication-quality plots. Its srength is based on its implementation of a powerful model of graphics, called the Grammar of Graphics (GofG). The GofG is essentially a theory of scientific graphics that allows the components of a graphic to be completely described. ggplot2 uses this description to build the graphic component-wise, by adding various layers. Pandas recently added functions for generating graphics using a GofG approach. Chiefly, this allows for the easy creation of trellis plots, which are a faceted graphic that shows relationships between two variables, conditioned on particular values of other variables. This allows for the representation of more than two dimensions of information without having to resort to 3-D graphics, etc. Let's use the titanic dataset to create a trellis plot that represents 4 variables at a time. This consists of 4 steps Step24: Using the cervical dystonia dataset, we can simultaneously examine the relationship between age and the primary outcome variable as a function of both the treatment received and the week of the treatment by creating a scatterplot of the data, and fitting a polynomial relationship between age and twstrs Step25: We can use the RPlot class to represent more than just trellis graphics. 
It is also useful for displaying multiple variables on the same panel, using combinations of color, size and shapes to do so.
Python Code: plt.plot(np.random.normal(size=100), np.random.normal(size=100), 'ro') Explanation: Plotting and Visualization There are a handful of third-party Python packages that are suitable for creating scientific plots and visualizations. These include packages like: matplotlib Chaco PyX Bokeh Here, we will focus excelusively on matplotlib and the high-level plotting availabel within pandas. It is currently the most robust and feature-rich package available. Visual representation of data We require plots, charts and other statistical graphics for the written communication of quantitative ideas. They allow us to more easily convey relationships and reveal deviations from patterns. Gelman and Unwin 2011: A well-designed graph can display more information than a table of the same size, and more information than numbers embedded in text. Graphical displays allow and encourage direct visual comparisons. Matplotlib The easiest way to interact with matplotlib is via pylab in iPython. By starting iPython (or iPython notebook) in "pylab mode", both matplotlib and numpy are pre-loaded into the iPython session: ipython notebook --pylab You can specify a custom graphical backend (e.g. qt, gtk, osx), but iPython generally does a good job of auto-selecting. Now matplotlib is ready to go, and you can access the matplotlib API via plt. If you do not start iPython in pylab mode, you can do this manually with the following convention: import matplotlib.pyplot as plt End of explanation with mpl.rc_context(rc={'font.family': 'serif', 'font.weight': 'bold', 'font.size': 8}): fig = plt.figure(figsize=(6,3)) ax1 = fig.add_subplot(121) ax1.set_xlabel('some random numbers') ax1.set_ylabel('more random numbers') ax1.set_title("Random scatterplot") plt.plot(np.random.normal(size=100), np.random.normal(size=100), 'r.') ax2 = fig.add_subplot(122) plt.hist(np.random.normal(size=100), bins=15) ax2.set_xlabel('sample') ax2.set_ylabel('cumulative sum') ax2.set_title("Normal distrubution") plt.tight_layout() plt.savefig("normalvars.png", dpi=150) Explanation: The above plot simply shows two sets of random numbers taken from a normal distribution plotted against one another. The 'ro' argument is a shorthand argument telling matplotlib that I wanted the points represented as red circles. This plot was expedient. We can exercise a little more control by breaking the plotting into a workflow: End of explanation normals = pd.Series(np.random.normal(size=10)) normals.plot() Explanation: matplotlib is a relatively low-level plotting package, relative to others. It makes very few assumptions about what constitutes good layout (by design), but has a lot of flexiblility to allow the user to completely customize the look of the output. If you want to make your plots look pretty like mine, steal the matplotlibrc file from Huy Nguyen. Plotting in Pandas On the other hand, Pandas includes methods for DataFrame and Series objects that are relatively high-level, and that make reasonable assumptions about how the plot should look. End of explanation normals.cumsum().plot(grid=False) Explanation: Notice that by default a line plot is drawn, and a light grid is included. 
All of this can be changed, however: End of explanation variables = pd.DataFrame({'normal': np.random.normal(size=100), 'gamma': np.random.gamma(1, size=100), 'poisson': np.random.poisson(size=100)}) variables.cumsum(0).plot() Explanation: Similarly, for a DataFrame: End of explanation variables.cumsum(0).plot(subplots=True) Explanation: As an illustration of the high-level nature of Pandas plots, we can split multiple series into subplots with a single argument for plot: End of explanation variables.cumsum(0).plot(secondary_y='normal') Explanation: Or, we may want to have some series displayed on the secondary y-axis, which can allow for greater detail and less empty space: End of explanation fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(12, 4)) for i,var in enumerate(['normal','gamma','poisson']): variables[var].cumsum(0).plot(ax=axes[i], title=var) axes[0].set_ylabel('cumulative sum') Explanation: If we would like a little more control, we can use matplotlib's subplots function directly, and manually assign plots to its axes: End of explanation titanic = pd.read_excel("data/titanic.xls", "titanic") titanic.head() titanic.groupby('pclass').survived.sum().plot(kind='bar') titanic.groupby(['sex','pclass']).survived.sum().plot(kind='barh') death_counts = pd.crosstab([titanic.pclass, titanic.sex], titanic.survived.astype(bool)) death_counts.plot(kind='bar', stacked=True, color=['black','gold'], grid=False) Explanation: Bar plots Bar plots are useful for displaying and comparing measurable quantities, such as counts or volumes. In Pandas, we just use the plot method with a kind='bar' argument. For this series of examples, let's load up the Titanic dataset: End of explanation death_counts.div(death_counts.sum(1).astype(float), axis=0).plot(kind='barh', stacked=True, color=['black','gold']) Explanation: Another way of comparing the groups is to look at the survival rate, by adjusting for the number of people in each group. End of explanation titanic.fare.hist(grid=False) Explanation: Histograms Frequenfly it is useful to look at the distribution of data before you analyze it. Histograms are a sort of bar graph that displays relative frequencies of data values; hence, the y-axis is always some measure of frequency. This can either be raw counts of values or scaled proportions. For example, we might want to see how the fares were distributed aboard the titanic: End of explanation titanic.fare.hist(bins=30) Explanation: The hist method puts the continuous fare values into bins, trying to make a sensible décision about how many bins to use (or equivalently, how wide the bins are). We can override the default value (10): End of explanation sturges = lambda n: int(log2(n) + 1) square_root = lambda n: int(sqrt(n)) from scipy.stats import kurtosis doanes = lambda data: int(1 + log(len(data)) + log(1 + kurtosis(data) * (len(data) / 6.) ** 0.5)) n = len(titanic) sturges(n), square_root(n), doanes(titanic.fare.dropna()) titanic.fare.hist(bins=doanes(titanic.fare.dropna())) Explanation: There are algorithms for determining an "optimal" number of bins, each of which varies somehow with the number of observations in the data series. End of explanation titanic.fare.dropna().plot(kind='kde', xlim=(0,600)) Explanation: A density plot is similar to a histogram in that it describes the distribution of the underlying data, but rather than being a pure empirical representation, it is an estimate of the underlying "true" distribution. As a result, it is smoothed into a continuous line plot. 
We create them in Pandas using the plot method with kind='kde', where kde stands for kernel density estimate. End of explanation titanic.fare.hist(bins=doanes(titanic.fare.dropna()), normed=True, color='lightseagreen') titanic.fare.dropna().plot(kind='kde', xlim=(0,600), style='r--') Explanation: Often, histograms and density plots are shown together: End of explanation titanic.boxplot(column='fare', by='pclass', grid=False) Explanation: Here, we had to normalize the histogram (normed=True), since the kernel density is normalized by definition (it is a probability distribution). We will explore kernel density estimates more in the next section. Boxplots A different way of visualizing the distribution of data is the boxplot, which is a display of common quantiles; these are typically the quartiles and the lower and upper 5 percent values. End of explanation bp = titanic.boxplot(column='age', by='pclass', grid=False) for i in [1,2,3]: y = titanic.age[titanic.pclass==i].dropna() # Add some random "jitter" to the x-axis x = np.random.normal(i, 0.04, size=len(y)) plot(x, y, 'r.', alpha=0.2) Explanation: You can think of the box plot as viewing the distribution from above. The blue crosses are "outlier" points that occur outside the extreme quantiles. One way to add additional information to a boxplot is to overlay the actual data; this is generally most suitable with small- or moderate-sized data series. End of explanation titanic.groupby('pclass')['fare'].mean().plot(kind='bar', yerr=titanic.groupby('pclass')['fare'].std()) Explanation: When data are dense, a couple of tricks used above help the visualization: reducing the alpha level to make the points partially transparent adding random "jitter" along the x-axis to avoid overstriking A related but inferior cousin of the box plot is the so-called dynamite plot, which is just a bar chart with half of an error bar. End of explanation data1 = [150, 155, 175, 200, 245, 255, 395, 300, 305, 320, 375, 400, 420, 430, 440] data2 = [225, 380] fake_data = pd.DataFrame([data1, data2]).transpose() p = fake_data.mean().plot(kind='bar', yerr=fake_data.std(), grid=False) fake_data = pd.DataFrame([data1, data2]).transpose() p = fake_data.mean().plot(kind='bar', yerr=fake_data.std(), grid=False) x1, x2 = p.xaxis.get_majorticklocs() plot(np.random.normal(x1, 0.01, size=len(data1)), data1, 'ro') plot([x2]*len(data2), data2, 'ro') Explanation: Why is this plot a poor choice? bar charts should be used for measurable quantities (e.g. raw data), not estimates. The area of the bar does not represent anything, since these are estimates derived from the data. the "data-ink ratio" (sensu Edward Tufte) is very high. There are only 6 values represented here (3 means and 3 standard deviations). the plot hides the underlying data. A boxplot is always a better choice than a dynamite plot. End of explanation baseball = pd.read_csv("data/baseball.csv") baseball.head() Explanation: Exercise Using the Titanic data, create kernel density estimate plots of the age distributions of survivors and victims. Scatterplots To look at how Pandas does scatterplots, let's reload the baseball sample dataset. End of explanation plt.scatter(baseball.ab, baseball.h) xlim(0, 700); ylim(0, 200) Explanation: Scatterplots are useful for data exploration, where we seek to uncover relationships among variables. There are no scatterplot methods for Series or DataFrame objects; we must instead use the matplotlib function scatter. 
End of explanation plt.scatter(baseball.ab, baseball.h, s=baseball.hr*10, alpha=0.5) xlim(0, 700); ylim(0, 200) plt.scatter(baseball.ab, baseball.h, c=baseball.hr, s=40, cmap='hot') xlim(0, 700); ylim(0, 200); Explanation: We can add additional information to scatterplots by assigning variables to either the size of the symbols or their colors. End of explanation _ = pd.scatter_matrix(baseball.loc[:,'r':'sb'], figsize=(12,8), diagonal='kde') Explanation: To view scatterplots of a large numbers of variables simultaneously, we can use the scatter_matrix function that was recently added to Pandas. It generates a matrix of pair-wise scatterplots, optiorally with histograms or kernel density estimates on the diagonal. End of explanation from pandas.tools.rplot import * titanic = titanic[titanic.age.notnull() & titanic.fare.notnull()] tp = RPlot(titanic, x='age') tp.add(TrellisGrid(['pclass', 'sex'])) tp.add(GeomDensity()) _ = tp.render(gcf()) Explanation: Trellis Plots One of the enduring strengths of carrying out statistical analyses in the R language is the quality of its graphics. In particular, the addition of Hadley Wickham's ggplot2 package allows for flexible yet user-friendly generation of publication-quality plots. Its srength is based on its implementation of a powerful model of graphics, called the Grammar of Graphics (GofG). The GofG is essentially a theory of scientific graphics that allows the components of a graphic to be completely described. ggplot2 uses this description to build the graphic component-wise, by adding various layers. Pandas recently added functions for generating graphics using a GofG approach. Chiefly, this allows for the easy creation of trellis plots, which are a faceted graphic that shows relationships between two variables, conditioned on particular values of other variables. This allows for the representation of more than two dimensions of information without having to resort to 3-D graphics, etc. Let's use the titanic dataset to create a trellis plot that represents 4 variables at a time. This consists of 4 steps: Create a RPlot object that merely relates two variables in the dataset Add a grid that will be used to condition the variables by both passenger class and sex Add the actual plot that will be used to visualize each comparison Draw the visualization End of explanation cdystonia = pd.read_csv("data/cdystonia.csv", index_col=None) cdystonia.head() plt.figure(figsize=(12,12)) bbp = RPlot(cdystonia, x='age', y='twstrs') bbp.add(TrellisGrid(['week', 'treat'])) bbp.add(GeomScatter()) bbp.add(GeomPolyFit(degree=2)) _ = bbp.render(gcf()) Explanation: Using the cervical dystonia dataset, we can simultaneously examine the relationship between age and the primary outcome variable as a function of both the treatment received and the week of the treatment by creating a scatterplot of the data, and fitting a polynomial relationship between age and twstrs: End of explanation cdystonia['site'] = cdystonia.site.astype(float) plt.figure(figsize=(6,6)) cp = RPlot(cdystonia, x='age', y='twstrs') cp.add(GeomPoint(colour=ScaleGradient('site', colour1=(1.0, 1.0, 0.5), colour2=(1.0, 0.0, 0.0)), size=ScaleSize('week', min_size=10.0, max_size=200.0), shape=ScaleShape('treat'))) _ = cp.render(gcf()) Explanation: We can use the RPlot class to represent more than just trellis graphics. It is also useful for displaying multiple variables on the same panel, using combinations of color, size and shapes to do so. End of explanation
12,099
Given the following text description, write Python code to implement the functionality described below step by step Description: Integration Exercise 1 Imports Step2: Trapezoidal rule The trapezoidal rule generates a numerical approximation to the 1d integral Step3: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np from scipy import integrate Explanation: Integration Exercise 1 Imports End of explanation def trapz(f, a, b, N): Integrate the function f(x) over the range [a,b] with N points. t=(b-a)/N p=np.linspace(a,b,N+1) weights=np.ones_like(p) weights[0]=0.5 weights[-1]=0.5 return t*np.dot(f(p),weights) f = lambda x: x**2 g = lambda x: np.sin(x) I = trapz(f, 0, 1, 1000) assert np.allclose(I, 0.33333349999999995) J = trapz(g, 0, np.pi, 1000) assert np.allclose(J, 1.9999983550656628) Explanation: Trapezoidal rule The trapezoidal rule generates a numerical approximation to the 1d integral: $$ I(a,b) = \int_a^b f(x) dx $$ by dividing the interval $[a,b]$ into $N$ subdivisions of length $h$: $$ h = (b-a)/N $$ Note that this means the function will be evaluated at $N+1$ points on $[a,b]$. The main idea of the trapezoidal rule is that the function is approximated by a straight line between each of these points. Write a function trapz(f, a, b, N) that performs trapezoidal rule on the function f over the interval $[a,b]$ with N subdivisions (N+1 points). End of explanation res=integrate.quad(f,0,1) print(res) res=integrate.quad(g,0,np.pi) print(res) assert True # leave this cell to grade the previous one Explanation: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors. End of explanation