Dataset columns: Unnamed: 0 (int64, 0 to 16k), text_prompt (string, lengths 110 to 62.1k), code_prompt (string, lengths 37 to 152k)
12,900
Given the following text description, write Python code to implement the functionality described below step by step Description: Advanced Sounding Plot a sounding using MetPy with more advanced features. Beyond just plotting data, this uses calculations from metpy.calc to find the lifted condensation level (LCL) and the profile of a surface-based parcel. The area between the ambient profile and the parcel profile is colored as well. Step1: Upper air data can be obtained using the siphon package, but for this example we will use some of MetPy's sample data. Step2: We will pull the data out of the example dataset into individual variables and assign units. Step3: Create a new figure. The dimensions here give a good aspect ratio.
Python Code: import matplotlib.pyplot as plt import pandas as pd import metpy.calc as mpcalc from metpy.cbook import get_test_data from metpy.plots import add_metpy_logo, SkewT from metpy.units import units Explanation: Advanced Sounding Plot a sounding using MetPy with more advanced features. Beyond just plotting data, this uses calculations from metpy.calc to find the lifted condensation level (LCL) and the profile of a surface-based parcel. The area between the ambient profile and the parcel profile is colored as well. End of explanation col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed'] df = pd.read_fwf(get_test_data('may4_sounding.txt', as_file_obj=False), skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names) # Drop any rows with all NaN values for T, Td, winds df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed'), how='all' ).reset_index(drop=True) Explanation: Upper air data can be obtained using the siphon package, but for this example we will use some of MetPy's sample data. End of explanation p = df['pressure'].values * units.hPa T = df['temperature'].values * units.degC Td = df['dewpoint'].values * units.degC wind_speed = df['speed'].values * units.knots wind_dir = df['direction'].values * units.degrees u, v = mpcalc.wind_components(wind_speed, wind_dir) Explanation: We will pull the data out of the example dataset into individual variables and assign units. End of explanation fig = plt.figure(figsize=(9, 9)) add_metpy_logo(fig, 115, 100) skew = SkewT(fig, rotation=45) # Plot the data using normal plotting functions, in this case using # log scaling in Y, as dictated by the typical meteorological plot. skew.plot(p, T, 'r') skew.plot(p, Td, 'g') skew.plot_barbs(p, u, v) skew.ax.set_ylim(1000, 100) skew.ax.set_xlim(-40, 60) # Calculate LCL height and plot as black dot. Because `p`'s first value is # ~1000 mb and its last value is ~250 mb, the `0` index is selected for # `p`, `T`, and `Td` to lift the parcel from the surface. If `p` was inverted, # i.e. start from low value, 250 mb, to a high value, 1000 mb, the `-1` index # should be selected. lcl_pressure, lcl_temperature = mpcalc.lcl(p[0], T[0], Td[0]) skew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black') # Calculate full parcel profile and add to plot as black line prof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC') skew.plot(p, prof, 'k', linewidth=2) # Shade areas of CAPE and CIN skew.shade_cin(p, T, prof) skew.shade_cape(p, T, prof) # An example of a slanted line at constant T -- in this case the 0 # isotherm skew.ax.axvline(0, color='c', linestyle='--', linewidth=2) # Add the relevant special lines skew.plot_dry_adiabats() skew.plot_moist_adiabats() skew.plot_mixing_lines() # Show the plot plt.show() Explanation: Create a new figure. The dimensions here give a good aspect ratio. End of explanation
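A possible follow-up to the sounding example above, not part of the original notebook: the shaded CAPE and CIN areas can also be reported as numbers. This sketch assumes the variables p, T, Td and prof defined in the code above are still in scope, and that metpy.calc.cape_cin is available in the installed MetPy version.

# Hypothetical extra cell: quantify the shaded areas from the skew-T plot above.
import metpy.calc as mpcalc

# cape_cin integrates the positive (CAPE) and negative (CIN) area between the
# environmental temperature profile and the lifted surface-parcel profile.
cape, cin = mpcalc.cape_cin(p, T, Td, prof)
print('Surface-based CAPE: {:.0f}'.format(cape))
print('Surface-based CIN: {:.0f}'.format(cin))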
12,901
Given the following text description, write Python code to implement the functionality described below step by step Description: Explorando as despesas da cidade de São Paulo Um tutorial de primeiros passos para acessar a execução orçamentária do município usando Python e a biblioteca de análise de dados Pandas * Passo 1. Cadastro na API e token de acesso Acessar a Vitrine de APIs da Prodam Step1: Orçamento Primeiro, vamos ter uma visão geral do que foi orçado para a Secretaria Municipal de Saúde neste ano, bem como os valores congelados e já executados. Isso é possível com a consulta "Despesas" Step2: Empenhos Empenho é o ato em que autoridade verifica a existência do crédito orçamentário e autoriza a execução da despesa (por exemplo, para realizar uma licitação). A partir daí, os valores vão sendo liquidados e pagos conforme a execução de um contrato. Vamos ver quanto a Secretaria Municipal de Saúde empenhou de seu orçamento em 2017. Step3: A API fornece apenas uma página na consulta. O script abaixo checa a quantidade de páginas nos metadados da consulta e itera o número de vezes necessário para obter todas as páginas Step4: Com os passos acima, fizemos a requisição de todas as páginas e convertemos o arquivo formato json em um DataFrame. Agora podemos trabalhar com a análise desses dado no Pandas. Para checar quantos registros existentes, vamos ver o final da lista Step5: Modalidades de Aplicação Aqui vemos a quantidade de recursos aplicados na Saúde, a título de exemplo, por Modalidade -- se é aplicação na rede direta ou repasse a organizações sociais. Note que o mesmo poderia ser feito para qualquer órgão, ou mesmo para a Prefeitura como um todo Step6: Maiores despesas de 2017 Aqui vamos produzir a lista das 15 maiores despesas da Saúde neste ano Step7: Fontes de recursos Agrupamento dos empenhos por fonte de recursos Step8: Passo 4. Quer salvar um csv? O objetivo deste tutorial não era fazer uma análise exaustiva da base, mas apenas mostrar o que é possível a partir do consumo da API. Você também pode salvar toda a base de empenhos num arquivo .csv e trabalhar no seu Excel (super te entendo). O Pandas também ajuda nisso! Assim
Python Code: import pandas as pd import requests import json import numpy as np TOKEN = '198f959a5f39a1c441c7c863423264' base_url = "https://gatewayapi.prodam.sp.gov.br:443/financas/orcamento/sof/v2.1.0" headers={'Authorization' : str('Bearer ' + TOKEN)} Explanation: Explorando as despesas da cidade de São Paulo Um tutorial de primeiros passos para acessar a execução orçamentária do município usando Python e a biblioteca de análise de dados Pandas * Passo 1. Cadastro na API e token de acesso Acessar a Vitrine de APIs da Prodam:https://api.prodam.sp.gov.br/store/ Selecione a API do SOF Clique em "Inscrever-se" Acesse o menu "Minhas assinaturas" Gere uma chave de acesso de produção; coloque um valor de validade negativo, para evitar que expire Copie o Token de Acesso Passo 2. Teste na API Console A API Console é uma interface que permite testar as diferentes consultas e obter a URL com os parâmetros desejados. Por exemplo, se deseja obter todos os contratos da Secretaria de Educação em 2017, basta entrar no item /consultaContrato e informar "2017" no campo anoContrato e "16" (código da Educação) no campo codOrgao. A URL resultante dessa consulta é https://gatewayapi.prodam.sp.gov.br:443/financas/orcamento/sof/v2.1.0/consultaContrato?anoContrato=2017&codOrgao=16 Passo 3. Mãos ao Pandas! Este é o script que consulta a API (para qualquer URL gerada acima) e transforma o arquivo obtido em formato json para um Data Frame do Pandas, a partir do qual será possível fazer as análises. Substitua a constante TOKEN pelo seu código de assinatura! End of explanation url_orcado = '{base_url}/consultarDespesas?anoDotacao=2017&mesDotacao=08&codOrgao=84'.format(base_url=base_url) request_orcado = requests.get(url_orcado, headers=headers, verify=True).json() df_orcado = pd.DataFrame(request_orcado['lstDespesas']) df_resumo_orcado = df_orcado[['valOrcadoInicial', 'valOrcadoAtualizado', 'valCongelado', 'valDisponivel', 'valEmpenhadoLiquido', 'valLiquidado']] df_resumo_orcado Explanation: Orçamento Primeiro, vamos ter uma visão geral do que foi orçado para a Secretaria Municipal de Saúde neste ano, bem como os valores congelados e já executados. Isso é possível com a consulta "Despesas" End of explanation url_empenho = '{base_url}/consultaEmpenhos?anoEmpenho=2017&mesEmpenho=08&codOrgao=84'.format(base_url=base_url) pagination = '&numPagina={PAGE}' request_empenhos = requests.get(url_empenho, headers=headers, verify=True).json() Explanation: Empenhos Empenho é o ato em que autoridade verifica a existência do crédito orçamentário e autoriza a execução da despesa (por exemplo, para realizar uma licitação). A partir daí, os valores vão sendo liquidados e pagos conforme a execução de um contrato. Vamos ver quanto a Secretaria Municipal de Saúde empenhou de seu orçamento em 2017. End of explanation number_of_pages = request_empenhos['metadados']['qtdPaginas'] todos_empenhos = [] todos_empenhos = todos_empenhos + request_empenhos['lstEmpenhos'] if number_of_pages>1: for p in range(2, number_of_pages+1): request_empenhos = requests.get(url_empenho + pagination.format(PAGE=p), headers=headers, verify=True).json() todos_empenhos = todos_empenhos + request_empenhos['lstEmpenhos'] df_empenhos = pd.DataFrame(todos_empenhos) Explanation: A API fornece apenas uma página na consulta. 
O script abaixo checa a quantidade de páginas nos metadados da consulta e itera o número de vezes necessário para obter todas as páginas: End of explanation df_empenhos.tail() Explanation: Com os passos acima, fizemos a requisição de todas as páginas e convertemos o arquivo formato json em um DataFrame. Agora podemos trabalhar com a análise desses dado no Pandas. Para checar quantos registros existentes, vamos ver o final da lista: End of explanation modalidades = df_empenhos.groupby('txtModalidadeAplicacao')['valTotalEmpenhado', 'valLiquidado'].sum() modalidades # Outra maneira de fazer a mesma operação: #pd.pivot_table(df_empenhos, values='valTotalEmpenhado', index=['txtModalidadeAplicacao'], aggfunc=np.sum) Explanation: Modalidades de Aplicação Aqui vemos a quantidade de recursos aplicados na Saúde, a título de exemplo, por Modalidade -- se é aplicação na rede direta ou repasse a organizações sociais. Note que o mesmo poderia ser feito para qualquer órgão, ou mesmo para a Prefeitura como um todo: End of explanation despesas = pd.pivot_table(df_empenhos, values=['valLiquidado', 'valPagoExercicio'], index=['numCpfCnpj', 'txtRazaoSocial', 'txtDescricaoPrograma'], aggfunc=np.sum).sort_values('valPagoExercicio', axis=0, ascending=False, inplace=False, kind='quicksort', na_position='last') despesas.head(15) Explanation: Maiores despesas de 2017 Aqui vamos produzir a lista das 15 maiores despesas da Saúde neste ano: End of explanation fonte = pd.pivot_table(df_empenhos, values=['valLiquidado', 'valPagoExercicio'], index=['txtDescricaoFonteRecurso'], aggfunc=np.sum).sort_values('valPagoExercicio', axis=0, ascending=False, inplace=False, kind='quicksort', na_position='last') fonte Explanation: Fontes de recursos Agrupamento dos empenhos por fonte de recursos: End of explanation df_empenhos.to_csv('empenhos.csv') Explanation: Passo 4. Quer salvar um csv? O objetivo deste tutorial não era fazer uma análise exaustiva da base, mas apenas mostrar o que é possível a partir do consumo da API. Você também pode salvar toda a base de empenhos num arquivo .csv e trabalhar no seu Excel (super te entendo). O Pandas também ajuda nisso! Assim: End of explanation
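A condensed, reusable version of the pagination loop above, not part of the original tutorial: it assumes the same SOF API response layout, with a 'metadados' block containing 'qtdPaginas' and a list field such as 'lstEmpenhos'.

# Minimal pagination helper mirroring the logic of the cells above.
import requests

def fetch_all_pages(url, headers, list_key):
    # The first request reports how many pages the query spans.
    first = requests.get(url, headers=headers, verify=True).json()
    items = list(first[list_key])
    total_pages = first['metadados']['qtdPaginas']
    # Remaining pages are requested one by one and appended.
    for page in range(2, total_pages + 1):
        resp = requests.get(url + '&numPagina={}'.format(page),
                            headers=headers, verify=True).json()
        items.extend(resp[list_key])
    return items

# Example with the variables defined in the notebook:
# todos_empenhos = fetch_all_pages(url_empenho, headers, 'lstEmpenhos')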
12,902
Given the following text description, write Python code to implement the functionality described below step by step Description: Time-frequency beamforming using LCMV Compute LCMV source power in a grid of time-frequency windows and display results. The original reference is Step1: Read raw data, preload to allow filtering Step2: Time-frequency beamforming based on LCMV
Python Code: # Author: Roman Goj <[email protected]> # # License: BSD (3-clause) import mne from mne import compute_covariance from mne.datasets import sample from mne.event import make_fixed_length_events from mne.beamformer import tf_lcmv from mne.viz import plot_source_spectrogram print(__doc__) data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' noise_fname = data_path + '/MEG/sample/ernoise_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif' fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' subjects_dir = data_path + '/subjects' label_name = 'Aud-lh' fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name Explanation: Time-frequency beamforming using LCMV Compute LCMV source power in a grid of time-frequency windows and display results. The original reference is: Dalal et al. Five-dimensional neuroimaging: Localization of the time-frequency dynamics of cortical activity. NeuroImage (2008) vol. 40 (4) pp. 1686-1700 End of explanation raw = mne.io.read_raw_fif(raw_fname, preload=True) raw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel # Pick a selection of magnetometer channels. A subset of all channels was used # to speed up the example. For a solution based on all MEG channels use # meg=True, selection=None and add grad=4000e-13 to the reject dictionary. # We could do this with a "picks" argument to Epochs and the LCMV functions, # but here we use raw.pick_types() to save memory. left_temporal_channels = mne.read_selection('Left-temporal') raw.pick_types(meg='mag', eeg=False, eog=False, stim=False, exclude='bads', selection=left_temporal_channels) reject = dict(mag=4e-12) # Re-normalize our empty-room projectors, which should be fine after # subselection raw.info.normalize_proj() # Setting time limits for reading epochs. Note that tmin and tmax are set so # that time-frequency beamforming will be performed for a wider range of time # points than will later be displayed on the final spectrogram. This ensures # that all time bins displayed represent an average of an equal number of time # windows. tmin, tmax = -0.55, 0.75 # s tmin_plot, tmax_plot = -0.3, 0.5 # s # Read epochs. Note that preload is set to False to enable tf_lcmv to read the # underlying raw object. # Filtering is then performed on raw data in tf_lcmv and the epochs # parameters passed here are used to create epochs from filtered data. However, # reading epochs without preloading means that bad epoch rejection is delayed # until later. To perform bad epoch rejection based on the reject parameter # passed here, run epochs.drop_bad(). This is done automatically in # tf_lcmv to reject bad epochs based on unfiltered data. event_id = 1 events = mne.read_events(event_fname) epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, baseline=None, preload=False, reject=reject) # Read empty room noise, preload to allow filtering, and pick subselection raw_noise = mne.io.read_raw_fif(noise_fname, preload=True) raw_noise.info['bads'] = ['MEG 2443'] # 1 bad MEG channel raw_noise.pick_types(meg='mag', eeg=False, eog=False, stim=False, exclude='bads', selection=left_temporal_channels) raw_noise.info.normalize_proj() # Create artificial events for empty room noise data events_noise = make_fixed_length_events(raw_noise, event_id, duration=1.) 
# Create an epochs object using preload=True to reject bad epochs based on # unfiltered data epochs_noise = mne.Epochs(raw_noise, events_noise, event_id, tmin, tmax, proj=True, baseline=None, preload=True, reject=reject) # Make sure the number of noise epochs is the same as data epochs epochs_noise = epochs_noise[:len(epochs.events)] # Read forward operator forward = mne.read_forward_solution(fname_fwd, surf_ori=True) # Read label label = mne.read_label(fname_label) Explanation: Read raw data, preload to allow filtering End of explanation # Setting frequency bins as in Dalal et al. 2008 (high gamma was subdivided) freq_bins = [(4, 12), (12, 30), (30, 55), (65, 299)] # Hz win_lengths = [0.3, 0.2, 0.15, 0.1] # s # Setting the time step tstep = 0.05 # Setting the whitened data covariance regularization parameter data_reg = 0.001 # Subtract evoked response prior to computation? subtract_evoked = False # Calculating covariance from empty room noise. To use baseline data as noise # substitute raw for raw_noise, epochs.events for epochs_noise.events, tmin for # desired baseline length, and 0 for tmax_plot. # Note, if using baseline data, the averaged evoked response in the baseline # period should be flat. noise_covs = [] for (l_freq, h_freq) in freq_bins: raw_band = raw_noise.copy() raw_band.filter(l_freq, h_freq, method='iir', n_jobs=1) epochs_band = mne.Epochs(raw_band, epochs_noise.events, event_id, tmin=tmin_plot, tmax=tmax_plot, baseline=None, proj=True) noise_cov = compute_covariance(epochs_band, method='shrunk') noise_covs.append(noise_cov) del raw_band # to save memory # Computing LCMV solutions for time-frequency windows in a label in source # space for faster computation, use label=None for full solution stcs = tf_lcmv(epochs, forward, noise_covs, tmin, tmax, tstep, win_lengths, freq_bins=freq_bins, subtract_evoked=subtract_evoked, reg=data_reg, label=label) # Plotting source spectrogram for source with maximum activity. # Note that tmin and tmax are set to display a time range that is smaller than # the one for which beamforming estimates were calculated. This ensures that # all time bins shown are a result of smoothing across an identical number of # time windows. plot_source_spectrogram(stcs, freq_bins, tmin=tmin_plot, tmax=tmax_plot, source_index=None, colorbar=True) Explanation: Time-frequency beamforming based on LCMV End of explanation
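An optional inspection step, not part of the original example: report which source inside the label has the strongest average power in each frequency bin. It assumes stcs and freq_bins from the tf_lcmv call above are in scope, and that each element of stcs is an mne.SourceEstimate whose data array is shaped (n_sources, n_times).

# Hypothetical follow-up: summarize the beamformer output per frequency bin.
import numpy as np

for (l_freq, h_freq), stc in zip(freq_bins, stcs):
    mean_power = stc.data.mean(axis=1)        # average over time windows
    strongest = int(np.argmax(mean_power))    # index within the label
    print('%d-%d Hz: max mean power %.3g at source index %d'
          % (l_freq, h_freq, mean_power[strongest], strongest))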
12,903
Given the following text description, write Python code to implement the functionality described below step by step Description: <center> <h1>Simple Py-ART Usage </h1> </center> Step1: Data available by FigShare Here Step2: Take a read of the BAMS article by Zirnic and Ryzhkov
Python Code: #first we do some imports and check the version of Py-ART for consistency import pyart from matplotlib import pyplot as plt import numpy as np %matplotlib inline print pyart.__version__ #you can grab the data here: http://figshare.com/articles/Data_for_AMS_Short_Course_on_Open_Source_Radar_Software/1537461 filename = './data/KAMX_20140417_1056' radar = pyart.io.read(filename) Explanation: <center> <h1>Simple Py-ART Usage </h1> </center> End of explanation print radar.fields.keys() print radar.fields['reflectivity'].keys() display = pyart.graph.RadarMapDisplay(radar) f = plt.figure(figsize = [17,4]) plt.subplot(1, 3, 1) display.plot_ppi_map('differential_reflectivity', max_lat = 26.5, min_lat =25.4, min_lon = -81., max_lon = -79.5, vmin = -7, vmax = 7, lat_lines = np.arange(20,28,.2), lon_lines = np.arange(-82, -79, .5), resolution = 'l') plt.subplot(1, 3, 2) display.plot_ppi_map('reflectivity', max_lat = 26.5, min_lat =25.4, min_lon = -81., max_lon = -79.5, vmin = -8, vmax = 64, lat_lines = np.arange(20,28,.2), lon_lines = np.arange(-82, -79, .5), resolution = 'l') plt.subplot(1, 3, 3) display.plot_ppi_map('velocity', sweep = 1, max_lat = 26.5, min_lat =25.4, min_lon = -81., max_lon = -79.5, vmin = -15, vmax = 15, lat_lines = np.arange(20,28,.2), lon_lines = np.arange(-82, -79, .5), resolution = 'l') zdr = radar.fields['differential_reflectivity']['data'] smooth_zdr = np.zeros_like(zdr) for i in range(smooth_zdr.shape[0]): smooth_zdr[i,:] = \ pyart.correct.phase_proc.smooth_and_trim(zdr[i,:], 8) radar.add_field_like('differential_reflectivity', 'differential_reflectivity_smooth', smooth_zdr, replace_existing = True) display = pyart.graph.RadarMapDisplay(radar) f = plt.figure(figsize = [17,4]) plt.subplot(1, 3, 1) display.plot_ppi_map('differential_reflectivity_smooth', max_lat = 26.5, min_lat =25.4, min_lon = -81., max_lon = -79.5, vmin = -7, vmax = 7, lat_lines = np.arange(20,28,.2), lon_lines = np.arange(-82, -79, .5), resolution = 'i') plt.subplot(1, 3, 2) display.plot_ppi_map('reflectivity', max_lat = 26.5, min_lat =25.4, min_lon = -81., max_lon = -79.5, vmin = -8, vmax = 64, lat_lines = np.arange(20,28,.2), lon_lines = np.arange(-82, -79, .5), resolution = 'i') plt.subplot(1, 3, 3) display.plot_ppi_map('velocity', sweep = 1, max_lat = 26.5, min_lat =25.4, min_lon = -81., max_lon = -79.5, vmin = -15, vmax = 15, lat_lines = np.arange(20,28,.2), lon_lines = np.arange(-82, -79, .5), resolution = 'i') display = pyart.graph.RadarMapDisplay(radar) f = plt.figure(figsize = [17,4]) plt.subplot(1, 3, 1) display.plot_ppi_map('differential_reflectivity_smooth', max_lat = 26.4, min_lat =26, min_lon = -80.75, max_lon = -80.25, vmin = -7, vmax = 7, lat_lines = np.arange(20,28,.1), lon_lines = np.arange(-82, -79, .2), resolution = 'l') plt.subplot(1, 3, 2) display.plot_ppi_map('reflectivity', max_lat = 26.4, min_lat =26, min_lon = -80.75, max_lon = -80.25, vmin = -8, vmax = 64, lat_lines = np.arange(20,28,.1), lon_lines = np.arange(-82, -79, .2), resolution = 'l') plt.subplot(1, 3, 3) display.plot_ppi_map('velocity', sweep = 1, max_lat = 26.4, min_lat =26, min_lon = -80.75, max_lon = -80.25, vmin = -15, vmax = 15, lat_lines = np.arange(20,28,.1), lon_lines = np.arange(-82, -79, .2), resolution = 'l') Explanation: Data available by FigShare Here: http://figshare.com/articles/Data_for_AMS_Short_Course_on_Open_Source_Radar_Software/1537461 Download and unpack into the data subdirectory of this repository End of explanation display = pyart.graph.RadarMapDisplay(radar) f = plt.figure(figsize 
= [17,4]) plt.subplot(1, 3, 1) display.plot_ppi_map('differential_reflectivity_smooth', max_lat = 26.4, min_lat =26, min_lon = -80.75, max_lon = -80.25, vmin = -7, vmax = 7, lat_lines = np.arange(20,28,.1), lon_lines = np.arange(-82, -79, .2), resolution = 'l') plt.subplot(1, 3, 2) display.plot_ppi_map('reflectivity', max_lat = 26.4, min_lat =26, min_lon = -80.75, max_lon = -80.25, vmin = -8, vmax = 64, lat_lines = np.arange(20,28,.1), lon_lines = np.arange(-82, -79, .2), resolution = 'l') plt.subplot(1, 3, 3) display.plot_ppi_map('differential_phase', sweep = 0, max_lat = 26.4, min_lat =26, min_lon = -80.75, max_lon = -80.25, vmin = 0, vmax = 360, lat_lines = np.arange(20,28,.1), lon_lines = np.arange(-82, -79, .2), resolution = 'l') pyart.io.write_cfradial('./data/foo.nc', radar) Explanation: Take a read of the BAMS article by Zirnic and Ryzhkov: http://journals.ametsoc.org/doi/pdf/10.1175/1520-0477%281999%29080%3C0389%3APFWSR%3E2.0.CO%3B2 Should also be a $\delta_{dp}$ signal on top of $\phi_{dp}$.. Lets take a look End of explanation
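A conceptual stand-in for the smoothing step above, not taken from Py-ART itself: a plain moving average along each ray, which is roughly what the window-based smoothing of differential reflectivity does. It assumes zdr is the 2-D (ray, gate) array from the notebook; unlike the Py-ART helper, masked gates get no special treatment here.

# Simple ray-by-ray moving-average filter for illustration only.
import numpy as np

def smooth_rays(field, window=8):
    kernel = np.ones(window) / float(window)
    data = np.asarray(field, dtype=float)
    smoothed = np.empty_like(data)
    for i in range(data.shape[0]):
        # mode='same' keeps the gate dimension unchanged; edge gates are
        # averaged over a partial window.
        smoothed[i, :] = np.convolve(data[i, :], kernel, mode='same')
    return smoothed

# e.g. smooth_zdr_alt = smooth_rays(zdr, window=8)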
12,904
Given the following text description, write Python code to implement the functionality described below step by step Description: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. Step1: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! Step2: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. Step3: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). Step4: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. Step5: Splitting the data into training, testing, and validation sets We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). Step7: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters Step8: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. 
You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of epochs This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. Step9: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. Step10: Thinking about your results Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note
Python Code: %matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt Explanation: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. End of explanation data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() Explanation: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! End of explanation rides[:24*10].plot(x='dteday', y='cnt') Explanation: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. End of explanation dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() Explanation: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). End of explanation quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std Explanation: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. End of explanation # Save the last 21 days test_data = data[-21*24:] data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] Explanation: Splitting the data into training, testing, and validation sets We'll save the last 21 days of the data to use as a test set after we've trained the network. 
We'll use this set to make predictions and compare them with the actual number of riders. End of explanation # Hold out the last 60 days of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). End of explanation class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.input_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, (self.output_nodes, self.hidden_nodes)) self.lr = learning_rate #### Set this to your implemented sigmoid function #### # Activation function is the sigmoid function self.activation_function = lambda x: 1. / (1. + np.exp(-x)) def train(self, inputs_list, targets_list): # Convert inputs list to 2d array inputs = np.array(inputs_list, ndmin=2).T targets = np.array(targets_list, ndmin=2).T #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) hidden_outputs = self.activation_function(hidden_inputs) # TODO: Output layer final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) final_outputs = final_inputs #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error output_errors = targets - final_outputs # TODO: Backpropagated error hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) * hidden_outputs * (1 - hidden_outputs) hidden_grad = np.dot(hidden_errors, inputs.T) # TODO: Update the weights self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T) self.weights_input_to_hidden += self.lr * hidden_grad def run(self, inputs_list): # Run a forward pass through the network inputs = np.array(inputs_list, ndmin=2).T #### Implement the forward pass here #### # TODO: Hidden layer hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) hidden_outputs = self.activation_function(hidden_inputs) # TODO: Output layer final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) final_outputs = final_inputs return final_outputs def MSE(y, Y): return np.mean((y-Y)**2) Explanation: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. 
All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation. Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method. End of explanation import sys ### Set the hyperparameters here ### epochs = 1000 learning_rate = 0.1 hidden_nodes = 10 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for e in range(epochs): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) for record, target in zip(train_features.ix[batch].values, train_targets.ix[batch]['cnt']): network.train(record, target) # Printing out the training progress train_loss = MSE(network.run(train_features), train_targets['cnt'].values) val_loss = MSE(network.run(val_features), val_targets['cnt'].values) sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5]) losses['train'].append(train_loss) losses['validation'].append(val_loss) plt.plot(losses['train'], label='Training loss') plt.plot(losses['validation'], label='Validation loss') plt.legend() plt.ylim(ymax=0.5) Explanation: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of epochs This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. 
If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. End of explanation fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(test_features)*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.ix[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) Explanation: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. End of explanation import unittest inputs = [0.5, -0.2, 0.1] targets = [0.4] test_w_i_h = np.array([[0.1, 0.4, -0.3], [-0.2, 0.5, 0.2]]) test_w_h_o = np.array([[0.3, -0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328, -0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, 0.39775194, -0.29887597], [-0.20185996, 0.50074398, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) Explanation: Thinking about your results Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter Your answer below Before Dec 21 model fits well. From Dec 22 model fits bad, because the amount of data decrease. 
Unit tests Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project. End of explanation
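A small numeric check, not part of the project notebook, of the two derivatives the backpropagation step relies on: the output activation f(x) = x has slope 1, and the sigmoid derivative is sigmoid(x) * (1 - sigmoid(x)).

# Central-difference check of the analytic derivatives used in train().
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = 0.37
eps = 1e-6
numeric_sigmoid_grad = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
analytic_sigmoid_grad = sigmoid(x) * (1 - sigmoid(x))
numeric_identity_grad = ((x + eps) - (x - eps)) / (2 * eps)  # derivative of f(x) = x

print(abs(numeric_sigmoid_grad - analytic_sigmoid_grad) < 1e-8)  # True
print(abs(numeric_identity_grad - 1.0) < 1e-8)                   # True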
12,905
Given the following text description, write Python code to implement the functionality described below step by step Description: Thanksgiving Survey Analysis Every year Thanksgiving is celebrated in United States all around the country. Some people travel to their hometown while others celebrate with friends. In this project, we are going to analyse a survey on Thanksgiving and try to draw some conclusion based on given information. The dataset used in this analysis is a courtesy of FiveThirtyEight, it is released to public and can be found on their GitHub page. Structure of the Dataset The dataset has numerous columns that stand for every question asked in the survey. As explained on FiveThirtyEight's GitHub page (link given above), the columns are Step1: Hypothesis 1 - "The most preferred food for Thanksgiving is turkey." First, let's get rid of rows that answered "No" when asked if they celebrate Thanksgiving Step2: Let's look at all unique answers given for main dish at Thanksgiving Step3: A short code can show us which food is the most preferred one Step4: So as hypothesized, turkey is the most preferred main dish at Thanksgiving dinner. Hypothesis 2 - "Younger people would travel for Thanksgiving more than older people." Let's look first few rows of the age column of the data. Step5: As can be seen above, the age column has intervals instead of actual numbers. The unique answers are Step6: Let's define a function and apply it to "Age" column to cast each answer to a number. (We'll take the average of intervals, and 70 for "60+".) Step7: We need to get rid of missing values Step8: Now we have ages, let's look at another survey question about traveling for Thanksgiving. Step9: Since there are only 4 unique answers, we can calculate a mean age value for each of them. Step10: Now, let's plot the results to get a better understanding. Step11: As we can see average age of people who stay home is larger than the ones who travel. However, the mean values of ages are pretty close to each other, so we can't say there is a strong correlation between age and travel distance. Hypothesis 3 Step12: Let's look at how income data is stored in the dataset. Step13: Again, we have intervals of values instead of precise values. Let's define a function to get an average, (we will have 250000 for "$200,000 and up"). Step14: We need to get rid of the rows with missing values. Step15: We will follow the same process that we did for hypothesis 2. Step16: Let's plot the results
Python Code: # this line is required to see visualizations inline for Jupyter notebook %matplotlib inline # importing modules that we need for analysis import matplotlib.pyplot as plt import pandas as pd import numpy as np # read the data from file and print out first few rows and columns thanksgiving = pd.read_csv("thanksgiving.csv", encoding="Latin-1") thanksgiving.iloc[0:10,0:3] thanksgiving.columns[:10] Explanation: Thanksgiving Survey Analysis Every year Thanksgiving is celebrated in United States all around the country. Some people travel to their hometown while others celebrate with friends. In this project, we are going to analyse a survey on Thanksgiving and try to draw some conclusion based on given information. The dataset used in this analysis is a courtesy of FiveThirtyEight, it is released to public and can be found on their GitHub page. Structure of the Dataset The dataset has numerous columns that stand for every question asked in the survey. As explained on FiveThirtyEight's GitHub page (link given above), the columns are: Do you celebrate Thanksgiving? What is typically the main dish at your Thanksgiving dinner? Other (please specify) How is the main dish typically cooked? Other (please specify) What kind of stuffing/dressing do you typically have? Other (please specify) What type of cranberry sauce do you typically have? Other (please specify) Do you typically have gravy? Which of these side dishes are typically served at your Thanksgiving dinner? Please select all that apply. Brussel sprouts Carrots Cauliflower Corn Cornbread Fruit salad Green beans/green bean casserole Macaroni and cheese Mashed potatoes Rolls/biscuits Vegetable salad Yams/sweet potato casserole Other (please specify) Which type of pie is typically served at your Thanksgiving dinner? Please select all that apply. Apple Buttermilk Cherry Chocolate Coconut cream Key lime Peach Pecan Pumpkin Sweet Potato None Other (please specify) Which of these desserts do you typically have at Thanksgiving dinner? Please select all that apply. Apple cobbler Blondies Brownies Carrot cake Cheesecake Cookies Fudge Ice cream Peach cobbler None Other (please specify) Do you typically pray before or after the Thanksgiving meal? How far will you travel for Thanksgiving? Will you watch any of the following programs on Thanksgiving? Please select all that apply. Macy's Parade What's the age cutoff at your "kids' table" at Thanksgiving? Have you ever tried to meet up with hometown friends on Thanksgiving night? Have you ever attended a "Friendsgiving?" Will you shop any Black Friday sales on Thanksgiving Day? Do you work in retail? Will you employer make you work on Black Friday? How would you describe where you live? Age What is your gender? How much total combined money did all members of your HOUSEHOLD earn last year? US Region Hypotheses Before diving into analysis of the data let's come up with hypotheses. Considering the traditions of Thanksgiving, we can say: The most preferred food for Thanksgiving is turkey. Younger people would travel (to their parents' house) for Thanksgiving more than older people. People with higher income would travel more for Thanksgiving. Setting up Data End of explanation thanksgiving["Do you celebrate Thanksgiving?"].value_counts() thanksgiving = thanksgiving[thanksgiving["Do you celebrate Thanksgiving?"] == "Yes"] thanksgiving["Do you celebrate Thanksgiving?"].value_counts() Explanation: Hypothesis 1 - "The most preferred food for Thanksgiving is turkey." 
First, let's get rid of rows that answered "No" when asked if they celebrate Thanksgiving: End of explanation thanksgiving["What is typically the main dish at your Thanksgiving dinner?"].unique() Explanation: Let's look at all unique answers given for main dish at Thanksgiving: End of explanation thanksgiving["What is typically the main dish at your Thanksgiving dinner?"].value_counts() Explanation: A short code can show us which food is the most preferred one: End of explanation thanksgiving["Age"][:10] Explanation: So as hypothesized, turkey is the most preferred main dish at Thanksgiving dinner. Hypothesis 2 - "Younger people would travel for Thanksgiving more than older people." Let's look first few rows of the age column of the data. End of explanation thanksgiving["Age"].unique() Explanation: As can be seen above, the age column has intervals instead of actual numbers. The unique answers are: End of explanation def age_to_num(string): # if nan, return None if pd.isnull(string): return None first_item = string.split(" ")[0] # if the answer is "60+" return 70 if "+" in first_item: return 70.0 last_item = string.split(" ")[2] #return average of the interval return (int(first_item)+int(last_item))/2 # apply age_to_num function to "Age" column and assign it to new column thanksgiving["num_age"] = thanksgiving["Age"].apply(age_to_num) thanksgiving["num_age"].unique() Explanation: Let's define a function and apply it to "Age" column to cast each answer to a number. (We'll take the average of intervals, and 70 for "60+".) End of explanation thanksgiving = thanksgiving[thanksgiving["num_age"].isnull() == False] thanksgiving["num_age"].describe() Explanation: We need to get rid of missing values: End of explanation thanksgiving["How far will you travel for Thanksgiving?"].unique() Explanation: Now we have ages, let's look at another survey question about traveling for Thanksgiving. End of explanation # for each unique answer, select the rows and calculate the mean value of ages. local_string = 'Thanksgiving is local--it will take place in the town I live in' local_rows = thanksgiving[thanksgiving["How far will you travel for Thanksgiving?"] == local_string] local_age_mean = local_rows["num_age"].mean() fewhours_string = "Thanksgiving is out of town but not too far--it's a drive of a few hours or less" fewhours_rows = thanksgiving[thanksgiving["How far will you travel for Thanksgiving?"] == fewhours_string] fewhours_age_mean = fewhours_rows["num_age"].mean() home_string = "Thanksgiving is happening at my home--I won't travel at all" home_rows = thanksgiving[thanksgiving["How far will you travel for Thanksgiving?"] == home_string] home_age_mean = home_rows["num_age"].mean() faraway_string = 'Thanksgiving is out of town and far away--I have to drive several hours or fly' faraway_rows = thanksgiving[thanksgiving["How far will you travel for Thanksgiving?"] == faraway_string] faraway_age_mean = faraway_rows["num_age"].mean() print("Local: " + str(local_age_mean)) print("Drive of few hours or less: " + str(fewhours_age_mean)) print("Home: " + str(home_age_mean)) print("Drive of several hours or have to fly: " + str(faraway_age_mean)) Explanation: Since there are only 4 unique answers, we can calculate a mean age value for each of them. 
End of explanation x = np.arange(4)+0.75 plt.bar(x,[ fewhours_age_mean, local_age_mean, faraway_age_mean, home_age_mean], width=0.5) plt.xticks([1,2,3,4], ["Few hours", "Local", "Far away", "Home"]) plt.title("Average Age of People for Different Amounts of Travel on Thanksgiving") plt.ylabel("Average Age") plt.xlabel("Travel amount") plt.show() Explanation: Now, let's plot the results to get a better understanding. End of explanation thanksgiving2 = pd.read_csv("thanksgiving.csv", encoding="Latin-1") thanksgiving2 = thanksgiving2 = thanksgiving2[thanksgiving2["Do you celebrate Thanksgiving?"] == "Yes"] Explanation: As we can see average age of people who stay home is larger than the ones who travel. However, the mean values of ages are pretty close to each other, so we can't say there is a strong correlation between age and travel distance. Hypothesis 3 : "People with higher income would travel more for Thanksgiving." First, let's read values from file again, since we removed some rows on previous analysis. End of explanation thanksgiving2["How much total combined money did all members of your HOUSEHOLD earn last year?"].unique() Explanation: Let's look at how income data is stored in the dataset. End of explanation def income_to_num(string): if pd.isnull(string): return None first_item = string.split(" ")[0] # if the answer is "Prefer not to answer" return none if first_item == "Prefer": return None last_item = string.split(" ")[2] #if the answer is "$200,000 and up" return 250000 if last_item == "up": return 250000.0 #remove dollar signs and commas first_item = first_item.replace("$","") first_item = first_item.replace(",","") last_item = last_item.replace("$","") last_item = last_item.replace(",","") #return the average of the interval return (int(first_item)+int(last_item))/2 thanksgiving2["num_income"] = thanksgiving2["How much total combined money did all members of your HOUSEHOLD earn last year?"].apply(income_to_num) Explanation: Again, we have intervals of values instead of precise values. Let's define a function to get an average, (we will have 250000 for "$200,000 and up"). End of explanation thanksgiving2 = thanksgiving2[thanksgiving2["num_income"].isnull() == False] thanksgiving2["num_income"].describe() Explanation: We need to get rid of the rows with missing values. End of explanation # for each unique answer, select the rows and calculate the mean value of income. 
local_string = 'Thanksgiving is local--it will take place in the town I live in' local_rows = thanksgiving2[thanksgiving2["How far will you travel for Thanksgiving?"] == local_string] local_income_mean = local_rows["num_income"].mean() fewhours_string = "Thanksgiving is out of town but not too far--it's a drive of a few hours or less" fewhours_rows = thanksgiving2[thanksgiving2["How far will you travel for Thanksgiving?"] == fewhours_string] fewhours_income_mean = fewhours_rows["num_income"].mean() home_string = "Thanksgiving is happening at my home--I won't travel at all" home_rows = thanksgiving2[thanksgiving2["How far will you travel for Thanksgiving?"] == home_string] home_income_mean = home_rows["num_income"].mean() faraway_string = 'Thanksgiving is out of town and far away--I have to drive several hours or fly' faraway_rows = thanksgiving2[thanksgiving2["How far will you travel for Thanksgiving?"] == faraway_string] faraway_income_mean = faraway_rows["num_income"].mean() print("Local: " + str(local_income_mean)) print("Drive of few hours or less: " + str(fewhours_income_mean)) print("Home: " + str(home_income_mean)) print("Drive of several hours or have to fly: " + str(faraway_income_mean)) Explanation: We will follow the same process that we did for hypothesis 2. End of explanation x = np.arange(4)+0.75 plt.bar(x,[ fewhours_income_mean, local_income_mean, faraway_income_mean, home_income_mean], width=0.5) plt.xticks([1,2,3,4], ["Few hours", "Local", "Far away", "Home"]) plt.title("Average Income of People for Different Amounts of Travel on Thanksgiving") plt.ylabel("Average Income") plt.xlabel("Travel amount") plt.show() Explanation: Let's plot the results: End of explanation
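The four near-identical per-answer blocks above can be collapsed with a groupby. This alternative is not part of the original analysis; it assumes the thanksgiving2 frame with its num_income column, and the same idea works for thanksgiving and num_age in hypothesis 2.

# Group the respondents by travel answer and average the numeric income directly.
travel_col = "How far will you travel for Thanksgiving?"
income_by_travel = thanksgiving2.groupby(travel_col)["num_income"].mean()
print(income_by_travel.sort_values(ascending=False))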
12,906
Given the following text description, write Python code to implement the functionality described below step by step Description: Manipulating pages pikepdf presents the pages in a PDF through the Pdf.pages property, which follows the list protocol. As such page numbers begin at 0. Let's look at a simple PDF that contains four pages. Step1: How many pages? Step2: Thanks to IPython's rich Python object representations you can view the PDF while you work on it if you execute this IPython notebook. Click the View PDF link below to view the file. You can view the PDF after change you make. If you're reading this documentation online or as part of distribution, you won't see the rich representation. Step3: You can also examine individual pages, which we'll explore in the next section. Suffice to say that you can access pages by indexing them and slicing them. Step4: Suppose the file was scanned backwards. We can easily reverse it in place - maybe it was scanned backwards, a common problem with automatic document scanners. Step5: Pretty nice, isn't it? Of course, the pages in this file are in correct order, so let's put them back. Step6: Removing and adding pages is easy too. Step7: We've trimmed down the file to its essential first and last page. Now, let's add some content from another file. Step8: Using counting numbers Because PDF pages are usually numbered in counting numbers (1, 2, 3...), pikepdf provides a convenience accessor .p() that uses counting numbers
Python Code: from pikepdf import Pdf pdf = Pdf.open('../../tests/resources/fourpages.pdf') Explanation: Manipulating pages pikepdf presents the pages in a PDF through the Pdf.pages property, which follows the list protocol. As such page numbers begin at 0. Let's look at a simple PDF that contains four pages. End of explanation len(pdf.pages) Explanation: How many pages? End of explanation pdf Explanation: Thanks to IPython's rich Python object representations you can view the PDF while you work on it if you execute this IPython notebook. Click the View PDF link below to view the file. You can view the PDF after change you make. If you're reading this documentation online or as part of distribution, you won't see the rich representation. End of explanation pdf.pages[-1].MediaBox Explanation: You can also examine individual pages, which we'll explore in the next section. Suffice to say that you can access pages by indexing them and slicing them. End of explanation pdf.pages.reverse() pdf Explanation: Suppose the file was scanned backwards. We can easily reverse it in place - maybe it was scanned backwards, a common problem with automatic document scanners. End of explanation pdf.pages.reverse() Explanation: Pretty nice, isn't it? Of course, the pages in this file are in correct order, so let's put them back. End of explanation del pdf.pages[1:3] # Remove pages 2-3 labeled "second page" and "third page" pdf Explanation: Removing and adding pages is easy too. End of explanation appendix = Pdf.open('../../tests/resources/sandwich.pdf') pdf.pages.extend(appendix.pages) graph = Pdf.open('../../tests/resources/graph.pdf') pdf.pages.insert(1, graph.pages[0]) pdf # pdf.save('output.pdf') Explanation: We've trimmed down the file to its essential first and last page. Now, let's add some content from another file. End of explanation pdf.pages.p(1) # The first page in the document pdf.pages[0] # Also the first page in the document ; Explanation: Using counting numbers Because PDF pages are usually numbered in counting numbers (1, 2, 3...), pikepdf provides a convenience accessor .p() that uses counting numbers: End of explanation
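As a further illustration of the pages API shown above, the sketch below splits a document into one file per page. It is not part of the original pikepdf notebook: the input file reuses the sample above, and the output naming pattern page-N.pdf is an assumption.

from pikepdf import Pdf

src = Pdf.open('../../tests/resources/fourpages.pdf')   # sample file used earlier
for n, page in enumerate(src.pages, start=1):           # counting numbers, like .p()
    single = Pdf.new()
    single.pages.append(page)                           # copy the page into a new PDF
    single.save('page-{}.pdf'.format(n))                # assumed output file names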
12,907
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook shows how to use Q2-DSFDR from the command-line interface. Convert the feature table to a QIIME 2 qza artifact Step1: Select the category of interest to compare using DS-FDR Step2: Output the list of differentially abundant taxa (True indicates statistical significance)
Python Code: !qiime tools import \
    --input-path ../data/deblur-feature-table.biom \
    --type 'FeatureTable[Frequency]' \
    --source-format BIOMV210Format \
    --output-path ../data/dblr_haddad.qza
Explanation: This notebook shows how to use Q2-DSFDR from the command-line interface.
Convert the feature table to a QIIME 2 qza artifact
End of explanation
!qiime dsfdr permutation-fdr \
    --i-table ../data/dblr_haddad.qza \
    --m-metadata-file ../data/metadata_rare2k.txt \
    --m-metadata-column 'exposure_type' \
    --o-reject haddad.dsfdr --verbose
Explanation: Select the category of interest to compare using DS-FDR
End of explanation
!qiime tools export haddad.dsfdr.qza --output-dir haddad.dsfdr.results
Explanation: Output the list of differentially abundant taxa (True indicates statistical significance)
End of explanation
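If you want to inspect the exported rejections programmatically rather than by eye, something like the pandas sketch below can help. This is only a sketch: the file name inside haddad.dsfdr.results and its exact layout (one boolean per feature) are assumptions to verify against the actual export.

import pandas as pd

# Hypothetical follow-up: count how many features were flagged as significant.
# Check the real contents of haddad.dsfdr.results before relying on this path.
reject = pd.read_csv("haddad.dsfdr.results/reject.tsv", sep="\t", index_col=0)
print(reject.iloc[:, 0].sum(), "features flagged as differentially abundant")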
12,908
Given the following text description, write Python code to implement the functionality described below step by step Description: Attitude Control System (ACS) This assignment is broken up into the following sections Step2: Solar Torques Step4: Magnetic Torques Step5: Since both the magnetic torques are less than the solar torques, their sum is also less. Gravitational Gradient Torques
Python Code: import math q = 0.6 P_mars = 2.0 * 10 ** -6 A_left = 7.6 # cm^2 L_left = 131.2 # cm A_right = 6.3 # cm^2 L_right = 126.1 # cm Explanation: Attitude Control System (ACS) This assignment is broken up into the following sections: Mission Attitude Control modes Selection of the ACS system-type Minimum Thrust levels Environmental Torques System Performance Mission Attitude Control modes The following control modes have been identified: Science Mode - Main mode of the system where scientific modules are used. Requires Nadir pointing All instruments are powered Data Transfer Mode - Mode used to transfer data to/from Earth Power removed except Comms Used for large data xfers while still conserving power Energizing Mode - Spacecraft is charging from the Sun's Rays Solar Arrays are pointed within a 5 degree maximum pointing error Systems are powered off to optimize charging rates/times ACS System Type Based on the requirement identified, the three axis system appears to be the only system that will meet all requirements. Thrust Levels Environmental Torques End of explanation def solar_torque(P, A, L, q): Calculates the solar torque (T) based on the Solar Pressure (P), spacecraft Area (A), distance from centroid of surface A (L), and reflective factor (q) This function uses the following formula: T = P * A * L * (1 + q) Parameters: ----------- :param P: Solar Pressure of the orbiting planet (in W/m^2) :param A: Area of the spacecraft side (in m^2) :param L: Distance from the centroid of the surface A (in m) :param q: Reflectance factor between 0 and 1 if not 0 <= q <=1: raise ValueError("q must be between 0 and 1") return P * A * L * (1 + q) T_right = solar_torque(P_mars, A_right / 100, L_right / 100, q) T_left = solar_torque(P_mars, A_left / 100, L_left / 100, q) print("Total Torque = {}".format(T_right + T_left)) Explanation: Solar Torques End of explanation def magnetic_torque(D, B=None, M=None, r=None): Calculates the magnetic torque on a space craft orbiting a planetary object based on the residule dipole (D) of the spacecraft and the planetary object's magnetic field (B). This function uses the following formula: T = 10e-7 * D * B Where: B = 2 * M / r^3 If B isn't defined, it's assumed that M and r will be, otherwise a ValueError is raised. If B is defined, the function uses that value, even when M and/or r is defined. Parameters: ----------- :param D: Residual dipole of the spacecraft (in pole-cm) :param B: Planetary object's magnetic field (in gauss) :param M: Magnetic moment of the planetary object (in emu) :param r: Spacecraft orbital radius (in cm) if B is None and (M is None or r is None): raise ValueError("B or M and r must be defined!") if B is None: B = 2 * M / r ** 3 return 10 ** -7 * D * B mars_r = 3.397 # km mars_orbit_dist = .400 # km mars_B_o = 5 * 10**-8 mars_r_o = mars_r * 10 ** 8 r = mars_r + mars_orbit_dist * 10 ** 8 B = (mars_B_o * mars_r_o ** 3) / (r ** 3) * math.sqrt((3 * math.sin(0)**2 + 1)) B T_m_left = magnetic_torque(A_left, B) T_m_right = magnetic_torque(A_right, B) print(T_m_left, T_m_right) T_m_right < T_right and T_m_left < T_left Explanation: Magnetic Torques End of explanation def gravity_gradient_torque(u, r, I_z, I_y, theta): return 3 * u / r ** 3 * abs(I_z - I_y) * theta mars_u = 324858.8 T_g = gravity_gradient_torque(mars_u, r, L_left / 100, L_right / 100, math.pi / 4) T_g T_g < T_left and T_g < T_right Explanation: Since both the magnetic torques are less than the solar torques, their sum is also less. 
Gravitational Gradient Torques End of explanation
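Because the assignment mixes cm and cm^2 inputs with SI formulas, a unit-explicit recomputation of the solar torque T = P * A * L * (1 + q) is a useful sanity check. The sketch below is not part of the original assignment; it assumes the areas above are in cm^2 and the moment arms in cm, so the area is scaled by 1e-4 and the length by 1e-2 to reach SI units.

# Sanity-check sketch with explicit unit conversions (assumes A_* in cm^2, L_* in cm).
CM2_TO_M2 = 1.0e-4
CM_TO_M = 1.0e-2

def solar_torque_si(P, A_cm2, L_cm, q):
    return P * (A_cm2 * CM2_TO_M2) * (L_cm * CM_TO_M) * (1 + q)

T_left_si = solar_torque_si(P_mars, A_left, L_left, q)
T_right_si = solar_torque_si(P_mars, A_right, L_right, q)
print("Total solar torque (N*m): {:.3e}".format(T_left_si + T_right_si))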
12,909
Given the following text description, write Python code to implement the functionality described below step by step Description: Convolution As the name implies, convolution operations are an important component of convolutional neural networks. The ability for a CNN to accurately match diverse patterns can be attributed to using convolution operations. These operations require complex input which was shown in the previous section. In this section we'll experiment with convolution operations and the parameters which are available to tune them. <p style="text-align Step1: The example code creates two tensors. The input_batch tensor has a similar shape to the image_batch tensor seen in the previous section. This will be the first tensor being convolved and the second tensor will be kernel. Kernel is an important term that is interchangeable with weights, filter, convolution matrix or mask. Since this task is computer vision related, it's useful to use the term kernel because it is being treated as an [image kernel](https Step2: The output is another tensor which is the same rank as the input_batch but includes the number of dimensions found in the kernel. Consider if input_batch represented an image, the image would have a single channel, in this case it could be considered a grayscale image (see Working with Colors). Each element in the tensor would represent one pixel of the image. The pixel in the bottom right corner of the image would have the value of 3.0. Consider the tf.nn.conv2d convolution operation as a combination of the image (represented as input_batch) and the kernel tenser. The convolution of these two tensors create a feature map. Feature map is a broad term except in computer vision where it relates to the output of operations which work with an image kernel. The feature map now represents the convolution of these tensors by adding new layers to the output. The relationship between the input images and the output feature map can be explored with code. Accessing elements from the input batch and the feature map are done using the same index. By accessing the same pixel in both the input and the feature map shows how the input was changed when it convolved with the kernel. In the following case, the lower right pixel in the image was changed to output the value found by multiplying <span class="math-tex" data-type="tex">\(3.0 * 1.0\)</span> and <span class="math-tex" data-type="tex">\(3.0 * 2.0\)</span>. The values correspond to the pixel value and the corresponding value found in the kernel. Step3: In this simplified example, each pixel of every image is multiplied by the corresponding value found in the kernel and then added to a corresponding layer in the feature map. Layer, in this context, is referencing a new dimension in the output. With this example, it's hard to see a value in convolution operations. Strides The value of convolutions in computer vision is their ability to reduce the dimensionality of the input, which is an image in this case. An image's dimensionality (2D image) is its width, height and number of channels. A large image dimensionality requires an exponentially larger amount of time for a neural network to scan over every pixel and judge which ones are important. Reducing dimensionality of an image with convolutions is done by altering the strides of the kernel. The parameter strides, causes a kernel to skip over pixels of an image and not include them in the output. It's not fair to say the pixels are skipped because they still may affect the output. 
The strides parameter highlights how a convolution operation is working with a kernel when a larger image and more complex kernel are used. As a convolution is sliding the kernel over the input, it's using the strides parameter to change how it walks over the input. Instead of going over every element of an input the strides parameter could configure the convolution to skip certain elements. For example, take the convolution of a larger image and a larger kernel. In this case, it's a convolution between a 6 pixel tall, 6 pixel wide and 1 channel deep image (6x6x1) and a (3x3x1) kernel. Step4: The input_batch was combined with the kernel by moving the kernel over the input_batch striding (or skipping) over certain elements. Each time the kernel was moved, it get centered over an element of input_batch. Then the overlapping values are multiplied together and the result is added together. This is how a convolution combines two inputs using what's referred to as pointwise multiplication. It may be easier to visualize using the following figure. In this figure, the same logic is done as what is found in the code. Two tensors convolved together while striding over the input. The strides reduced the dimensionality of the output a large amount while the kernel size allowed the convolution to use all the input values. None of the input data was completely removed from striding but now the input is a smaller tensor. Strides are a way to adjust the dimensionality of input tensors. Reducing dimensionality requires less processing power, and will keep from creating receptive fields which completely overlap. The strides parameter follows the same format as the input tensor [image_batch_size_stride, image_height_stride, image_width_stride, image_channels_stride]. Changing the first or last element of the stride parameter are rare, they'd skip data in a tf.nn.conv2d operation and not take the input into account. The image_height_stride and image_width_stride are useful to alter in reducing input dimensionality. A challenge which comes up often with striding over the input is how to deal with a stride which doesn't evenly end at the edge of the input. The uneven striding will come up often due to image size and kernel size not matching the striding. If the image size, kernel size and strides can't be changed then padding can be added to the image to deal with the uneven area. Padding When a kernel is overlapped on an image it should be set to fit within the bounds of the image. At times, the sizing may not fit and a good alternative is to fill the missing area in the image. Filling the missing area of the image is known as padding the image. TensorFlow will pad the image with zeros or raise an error when the sizes don't allow a kernel to stride over an image without going past its bounds. The amount of zeros or the error state of tf.nn.conv2d is controlled by the parameter padding which has two possible values ('VALID', 'SAME'). SAME Step5: The output created from convolving an image with an edge detection kernel are all the areas where and edge was detected. The code assumes a batch of images is already available (image_batch) with a real image loaded from disk. In this case, the image is an example image found in the Stanford Dogs Dataset. The kernel has three input and three output channels. The channels sync up to RGB values between <span class="math-tex" data-type="tex">\([0, 255]\)</span> with 255 being the maximum intensity. 
The tf.minimum and tf.nn.relu calls are there to keep the convolution values within the range of valid RGB colors of <span class="math-tex" data-type="tex">\([0, 255]\)</span>. There are many other common kernels which can be used in this simplified example. Each will highlight different patterns in an image with different results. The following kernel will sharpen an image by increasing the intensity of color changes. Step6: The values in the kernel were adjusted with the center of the kernel increased in intensity and the areas around the kernel reduced in intensity. The change, matches patterns with intense pixels and increases their intensity outputting an image which is visually sharpened. Note that the corners of the kernel are all 0 and don't affect the output which operates in a plus shaped pattern. These kernels match patterns in images at a rudimentary level. A convolutional neural network matches edges and more by using a complex kernel it learned during training. The starting values for the kernel are usually random and over time they're trained by the CNN's learning layer. When a CNN is complete, it starts running and each image sent in is convolved with a kernel which is then changed based on if the predicted value matches the labeled value of the image. For example, if a Sheepdog picture is considered a Pit Bull by the CNN being trained it will then change the filters a small amount to try and match Sheepdog pictures better. Learning complex patterns with a CNN involves more than a single layer of convolution. Even the example code included a tf.nn.relu layer used to prepare the output for visualization. Convolution layers may occur more than once in a CNN but they'll likely include other layer types as well. These layers combined form the support network required for a successful CNN architecture.
Python Code: # setup-only-ignore import tensorflow as tf import numpy as np # setup-only-ignore sess = tf.InteractiveSession() input_batch = tf.constant([ [ # First Input [[0.0], [1.0]], [[2.0], [3.0]] ], [ # Second Input [[2.0], [4.0]], [[6.0], [8.0]] ] ]) kernel = tf.constant([ [ [[1.0, 2.0]] ] ]) Explanation: Convolution As the name implies, convolution operations are an important component of convolutional neural networks. The ability for a CNN to accurately match diverse patterns can be attributed to using convolution operations. These operations require complex input which was shown in the previous section. In this section we'll experiment with convolution operations and the parameters which are available to tune them. <p style="text-align: center;"><i>Convolution operation convolving two input tensors (input and kernel) into a single output tensor which represents information from each input.</i></p> <br /> Input and Kernel Convolution operations in TensorFlow are done using tf.nn.conv2d in a typical situation. There are other convolution operations available using TensorFlow designed with special use cases. tf.nn.conv2d is the preferred convolution operation to begin experimenting with. For example, we can experiment with convolving two tensors together and inspect the result. End of explanation conv2d = tf.nn.conv2d(input_batch, kernel, strides=[1, 1, 1, 1], padding='SAME') sess.run(conv2d) Explanation: The example code creates two tensors. The input_batch tensor has a similar shape to the image_batch tensor seen in the previous section. This will be the first tensor being convolved and the second tensor will be kernel. Kernel is an important term that is interchangeable with weights, filter, convolution matrix or mask. Since this task is computer vision related, it's useful to use the term kernel because it is being treated as an [image kernel](https://en.wikipedia.org/wiki/Kernel_(image_processing). There is no practical difference in the term when used to describe this functionality in TensorFlow. The parameter in TensorFlow is named filter and it expects a set of weights which will be learned from training. The amount of different weights included in the kernel (filter parameter) will configure the amount of kernels which will be learned. In the example code, there is a single kernel which is the first dimension of the kernel variable. The kernel is built to return a tensor which will include one channel with the original input and a second channel with the original input doubled. In this case, channel is used to describe the elements in a rank 1 tensor (vector). Channel is a term from computer vision which describes the output vector, for example an RGB image has three channels represented as a rank 1 tensor [red, green, blue]. At this time, ignore the strides and padding parameter which will be covered later and focus on the convolution (tf.nn.conv2d) output. End of explanation lower_right_image_pixel = sess.run(input_batch)[0][1][1] lower_right_kernel_pixel = sess.run(conv2d)[0][1][1] lower_right_image_pixel, lower_right_kernel_pixel Explanation: The output is another tensor which is the same rank as the input_batch but includes the number of dimensions found in the kernel. Consider if input_batch represented an image, the image would have a single channel, in this case it could be considered a grayscale image (see Working with Colors). Each element in the tensor would represent one pixel of the image. 
The pixel in the bottom right corner of the image would have the value of 3.0. Consider the tf.nn.conv2d convolution operation as a combination of the image (represented as input_batch) and the kernel tenser. The convolution of these two tensors create a feature map. Feature map is a broad term except in computer vision where it relates to the output of operations which work with an image kernel. The feature map now represents the convolution of these tensors by adding new layers to the output. The relationship between the input images and the output feature map can be explored with code. Accessing elements from the input batch and the feature map are done using the same index. By accessing the same pixel in both the input and the feature map shows how the input was changed when it convolved with the kernel. In the following case, the lower right pixel in the image was changed to output the value found by multiplying <span class="math-tex" data-type="tex">\(3.0 * 1.0\)</span> and <span class="math-tex" data-type="tex">\(3.0 * 2.0\)</span>. The values correspond to the pixel value and the corresponding value found in the kernel. End of explanation input_batch = tf.constant([ [ # First Input (6x6x1) [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]], [[0.1], [1.1], [2.1], [3.1], [4.1], [5.1]], [[0.2], [1.2], [2.2], [3.2], [4.2], [5.2]], [[0.3], [1.3], [2.3], [3.3], [4.3], [5.3]], [[0.4], [1.4], [2.4], [3.4], [4.4], [5.4]], [[0.5], [1.5], [2.5], [3.5], [4.5], [5.5]], ], ]) kernel = tf.constant([ # Kernel (3x3x1) [[[0.0]], [[0.5]], [[0.0]]], [[[0.0]], [[1.0]], [[0.0]]], [[[0.0]], [[0.5]], [[0.0]]] ]) # NOTE: the change in the size of the strides parameter. conv2d = tf.nn.conv2d(input_batch, kernel, strides=[1, 3, 3, 1], padding='SAME') sess.run(conv2d) Explanation: In this simplified example, each pixel of every image is multiplied by the corresponding value found in the kernel and then added to a corresponding layer in the feature map. Layer, in this context, is referencing a new dimension in the output. With this example, it's hard to see a value in convolution operations. Strides The value of convolutions in computer vision is their ability to reduce the dimensionality of the input, which is an image in this case. An image's dimensionality (2D image) is its width, height and number of channels. A large image dimensionality requires an exponentially larger amount of time for a neural network to scan over every pixel and judge which ones are important. Reducing dimensionality of an image with convolutions is done by altering the strides of the kernel. The parameter strides, causes a kernel to skip over pixels of an image and not include them in the output. It's not fair to say the pixels are skipped because they still may affect the output. The strides parameter highlights how a convolution operation is working with a kernel when a larger image and more complex kernel are used. As a convolution is sliding the kernel over the input, it's using the strides parameter to change how it walks over the input. Instead of going over every element of an input the strides parameter could configure the convolution to skip certain elements. For example, take the convolution of a larger image and a larger kernel. In this case, it's a convolution between a 6 pixel tall, 6 pixel wide and 1 channel deep image (6x6x1) and a (3x3x1) kernel. 
End of explanation # setup-only-ignore import matplotlib as mil #mil.use('svg') mil.use("nbagg") from matplotlib import pyplot fig = pyplot.gcf() fig.set_size_inches(4, 4) image_filename = "./images/chapter-05-object-recognition-and-classification/convolution/n02113023_219.jpg" image_filename = "/Users/erikerwitt/Downloads/images/n02085936-Maltese_dog/n02085936_804.jpg" filename_queue = tf.train.string_input_producer( tf.train.match_filenames_once(image_filename)) image_reader = tf.WholeFileReader() _, image_file = image_reader.read(filename_queue) image = tf.image.decode_jpeg(image_file) sess.run(tf.initialize_all_variables()) coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) image_batch = tf.image.convert_image_dtype(tf.expand_dims(image, 0), tf.float32, saturate=False) kernel = tf.constant([ [ [[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]], [[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]], [[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]] ], [ [[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]], [[ 8., 0., 0.], [ 0., 8., 0.], [ 0., 0., 8.]], [[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]] ], [ [[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]], [[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]], [[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]] ] ]) conv2d = tf.nn.conv2d(image_batch, kernel, [1, 1, 1, 1], padding="SAME") activation_map = sess.run(tf.minimum(tf.nn.relu(conv2d), 255)) # setup-only-ignore fig = pyplot.gcf() pyplot.imshow(activation_map[0], interpolation='nearest') #pyplot.show() fig.set_size_inches(4, 4) fig.savefig("./images/chapter-05-object-recognition-and-classification/convolution/example-edge-detection.png") Explanation: The input_batch was combined with the kernel by moving the kernel over the input_batch striding (or skipping) over certain elements. Each time the kernel was moved, it get centered over an element of input_batch. Then the overlapping values are multiplied together and the result is added together. This is how a convolution combines two inputs using what's referred to as pointwise multiplication. It may be easier to visualize using the following figure. In this figure, the same logic is done as what is found in the code. Two tensors convolved together while striding over the input. The strides reduced the dimensionality of the output a large amount while the kernel size allowed the convolution to use all the input values. None of the input data was completely removed from striding but now the input is a smaller tensor. Strides are a way to adjust the dimensionality of input tensors. Reducing dimensionality requires less processing power, and will keep from creating receptive fields which completely overlap. The strides parameter follows the same format as the input tensor [image_batch_size_stride, image_height_stride, image_width_stride, image_channels_stride]. Changing the first or last element of the stride parameter are rare, they'd skip data in a tf.nn.conv2d operation and not take the input into account. The image_height_stride and image_width_stride are useful to alter in reducing input dimensionality. A challenge which comes up often with striding over the input is how to deal with a stride which doesn't evenly end at the edge of the input. The uneven striding will come up often due to image size and kernel size not matching the striding. If the image size, kernel size and strides can't be changed then padding can be added to the image to deal with the uneven area. 
Padding When a kernel is overlapped on an image it should be set to fit within the bounds of the image. At times, the sizing may not fit and a good alternative is to fill the missing area in the image. Filling the missing area of the image is known as padding the image. TensorFlow will pad the image with zeros or raise an error when the sizes don't allow a kernel to stride over an image without going past its bounds. The amount of zeros or the error state of tf.nn.conv2d is controlled by the parameter padding which has two possible values ('VALID', 'SAME'). SAME: The convolution output is the SAME size as the input. This doesn't take the filter's size into account when calculating how to stride over the image. This may stride over more of the image than what exists in the bounds while padding all the missing values with zero. VALID: Take the filter's size into account when calculating how to stride over the image. This will try to keep as much of the kernel inside the image's bounds as possible. There may be padding in some cases but will avoid. It's best to consider the size of the input but if padding is necessary then TensorFlow has the option built in. In most simple scenarios, SAME is a good choice to begin with. VALID is preferential when the input and kernel work well with the strides. For further information, TensorFlow covers this subject well in the convolution documentation. Data Format There's another parameter to tf.nn.conv2d which isn't shown from these examples named data_format. The tf.nn.conv2d docs explain how to change the data format so the input, kernel and strides follow a format other than the format being used thus far. Changing this format is useful if there is an input tensor which doesn't follow the [batch_size, height, width, channel] standard. Instead of changing the input to match, it's possible to change the data_format parameter to use a different layout. data_format: An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. | Data Format | Definition | |:---: | :---: | | N | Number of tensors in a batch, the batch_size. | | H | Height of the tensors in each batch. | | W | Width of the tensors in each batch. | | C | Channels of the tensors in each batch. | Kernels in Depth In TensorFlow the filter parameter is used to specify the kernel convolved with the input. Filters are commonly used in photography to adjust attributes of a picture, such as the amount of sunlight allowed to reach a camera's lens. In photography, filters allow a photographer to drastically alter the picture they're taking. The reason the photographer is able to alter their picture using a filter is because the filter can recognize certain attributes of the light coming in to the lens. For example, a red lens filter will absorb (block) every frequency of light which isn't red allowing only red to pass through the filter. In computer vision, kernels (filters) are used to recognize important attributes of a digital image. They do this by using certain patterns to highlight when features exist in an image. A kernel which will replicate the red filter example image is implemented by using a reduced value for all colors except red. In this case, the reds will stay the same but all other colors matched are reduced. 
The example seen at the start of this chapter uses a kernel designed to do edge detection. Edge detection kernels are common in computer vision applications and could be implemented using basic TensorFlow operations and a single tf.nn.conv2d operation. End of explanation kernel = tf.constant([ [ [[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]], [[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]], [[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]] ], [ [[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]], [[ 5., 0., 0.], [ 0., 5., 0.], [ 0., 0., 5.]], [[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]] ], [ [[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]], [[ -1., 0., 0.], [ 0., -1., 0.], [ 0., 0., -1.]], [[ 0, 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]] ] ]) conv2d = tf.nn.conv2d(image_batch, kernel, [1, 1, 1, 1], padding="SAME") activation_map = sess.run(tf.minimum(tf.nn.relu(conv2d), 255)) # setup-only-ignore fig = pyplot.gcf() pyplot.imshow(activation_map[0], interpolation='nearest') #pyplot.show() fig.set_size_inches(4, 4) fig.savefig("./images/chapter-05-object-recognition-and-classification/convolution/example-sharpen.png") Explanation: The output created from convolving an image with an edge detection kernel are all the areas where and edge was detected. The code assumes a batch of images is already available (image_batch) with a real image loaded from disk. In this case, the image is an example image found in the Stanford Dogs Dataset. The kernel has three input and three output channels. The channels sync up to RGB values between <span class="math-tex" data-type="tex">\([0, 255]\)</span> with 255 being the maximum intensity. The tf.minimum and tf.nn.relu calls are there to keep the convolution values within the range of valid RGB colors of <span class="math-tex" data-type="tex">\([0, 255]\)</span>. There are many other common kernels which can be used in this simplified example. Each will highlight different patterns in an image with different results. The following kernel will sharpen an image by increasing the intensity of color changes. End of explanation # setup-only-ignore filename_queue.close(cancel_pending_enqueues=True) coord.request_stop() coord.join(threads) Explanation: The values in the kernel were adjusted with the center of the kernel increased in intensity and the areas around the kernel reduced in intensity. The change, matches patterns with intense pixels and increases their intensity outputting an image which is visually sharpened. Note that the corners of the kernel are all 0 and don't affect the output which operates in a plus shaped pattern. These kernels match patterns in images at a rudimentary level. A convolutional neural network matches edges and more by using a complex kernel it learned during training. The starting values for the kernel are usually random and over time they're trained by the CNN's learning layer. When a CNN is complete, it starts running and each image sent in is convolved with a kernel which is then changed based on if the predicted value matches the labeled value of the image. For example, if a Sheepdog picture is considered a Pit Bull by the CNN being trained it will then change the filters a small amount to try and match Sheepdog pictures better. Learning complex patterns with a CNN involves more than a single layer of convolution. Even the example code included a tf.nn.relu layer used to prepare the output for visualization. Convolution layers may occur more than once in a CNN but they'll likely include other layer types as well. 
These layers combined form the support network required for a successful CNN architecture. End of explanation
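One detail worth pinning down from the padding discussion is how the output size is computed. The helper below reproduces the usual SAME/VALID size formulas as a sketch for intuition; edge cases should be checked against the TensorFlow documentation.

import math

def conv_output_size(in_size, kernel_size, stride, padding):
    # Standard size formulas: SAME ignores the kernel size, VALID does not.
    if padding == 'SAME':
        return int(math.ceil(float(in_size) / stride))
    elif padding == 'VALID':
        return int(math.ceil(float(in_size - kernel_size + 1) / stride))
    raise ValueError("padding must be 'SAME' or 'VALID'")

for pad in ('SAME', 'VALID'):
    print("{}: {}".format(pad, conv_output_size(in_size=6, kernel_size=3, stride=3, padding=pad)))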
12,910
Given the following text description, write Python code to implement the functionality described below step by step Description: Regression Week 5 Step1: Load in house sales data Dataset is from house sales in King County, the region where the city of Seattle, WA is located. Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features. Import useful functions from previous notebook As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_num_data() from the second notebook of Week 2. Step3: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights Step4: Normalize features In the house dataset, features vary wildly in their relative magnitude Step5: Numpy provides a shorthand for computing 2-norms of each column Step6: To normalize, apply element-wise division Step7: Using the shorthand we just covered, write a short function called normalize_features(feature_matrix), which normalizes columns of a given feature matrix. The function should return a pair (normalized_features, norms), where the second item contains the norms of original features. As discussed in the lectures, we will use these norms to normalize the test data in the same way as we normalized the training data. Step8: To test the function, run the following Step9: Implementing Coordinate Descent with normalized features We seek to obtain a sparse set of weights by minimizing the LASSO cost function SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|). (By convention, we do not include w[0] in the L1 penalty term. We never want to push the intercept to zero.) The absolute value sign makes the cost function non-differentiable, so simple gradient descent is not viable (you would need to implement a method called subgradient descent). Instead, we will use coordinate descent Step10: Don't forget to normalize features Step11: We assign some random set of initial weights and inspect the values of ro[i] Step13: Use predict_output() to make predictions on this data. Compute the values of ro[i] for each feature in this simple model, using the formula given above, using the formula Step14: QUIZ QUESTION Recall that, whenever ro[i] falls between -l1_penalty/2 and l1_penalty/2, the corresponding weight w[i] is sent to zero. Now suppose we were to take one step of coordinate descent on either feature 1 or feature 2. What range of values of l1_penalty would not set w[1] zero, but would set w[2] to zero, if we were to take a step in that coordinate? Step15: QUIZ QUESTION What range of values of l1_penalty would set both w[1] and w[2] to zero, if we were to take a step in that coordinate? Step16: So we can say that ro[i] quantifies the significance of the i-th feature Step17: To test the function, run the following cell Step18: Cyclical coordinate descent Now that we have a function that optimizes the cost function over a single coordinate, let us implement cyclical coordinate descent where we optimize coordinates 0, 1, ..., (d-1) in order and repeat. When do we know to stop? Each time we scan all the coordinates (features) once, we measure the change in weight for each coordinate. If no coordinate changes by more than a specified threshold, we stop. 
For each iteration Step19: Using the following parameters, learn the weights on the sales dataset. Step20: First create a normalized version of the feature matrix, normalized_simple_feature_matrix Step21: Then, run your implementation of LASSO coordinate descent Step22: QUIZ QUESTIONS 1. What is the RSS of the learned model on the normalized dataset? 2. Which features had weight zero at convergence? Step23: Evaluating LASSO fit with more features Let us split the sales dataset into training and test sets. Step24: Let us consider the following set of features. Step25: First, create a normalized feature matrix from the TRAINING data with these features. (Make you store the norms for the normalization, since we'll use them later) Step26: First, learn the weights with l1_penalty=1e7, on the training data. Initialize weights to all zeros, and set the tolerance=1. Call resulting weights weights1e7, you will need them later. Step27: QUIZ QUESTION What features had non-zero weight in this case? Step28: Next, learn the weights with l1_penalty=1e8, on the training data. Initialize weights to all zeros, and set the tolerance=1. Call resulting weights weights1e8, you will need them later. Step29: QUIZ QUESTION What features had non-zero weight in this case? Step30: Finally, learn the weights with l1_penalty=1e4, on the training data. Initialize weights to all zeros, and set the tolerance=5e5. Call resulting weights weights1e4, you will need them later. (This case will take quite a bit longer to converge than the others above.) Step31: QUIZ QUESTION What features had non-zero weight in this case? Step32: Rescaling learned weights Recall that we normalized our feature matrix, before learning the weights. To use these weights on a test set, we must normalize the test data in the same way. Alternatively, we can rescale the learned weights to include the normalization, so we never have to worry about normalizing the test data Step33: To check your results, if you call normalized_weights1e7 the normalized version of weights1e7, then Step34: Evaluating each of the learned models on the test data Let's now evaluate the three models on the test data Step35: Compute the RSS of each of the three normalized weights on the (unnormalized) test_feature_matrix
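Before moving on to the implementation cells, here is a compact reference for the piecewise update rule quoted earlier; it is simply soft-thresholding applied to ro[i]. The sketch mirrors the notebook's ro and l1_penalty names and is meant as a cross-check, not a replacement for the graded function.

def soft_threshold(ro_i, l1_penalty):
    # Piecewise update for a non-intercept weight w[i].
    if ro_i < -l1_penalty / 2.:
        return ro_i + l1_penalty / 2.
    elif ro_i > l1_penalty / 2.:
        return ro_i - l1_penalty / 2.
    else:
        return 0.

for ro_i in (-3., -0.4, 0.2, 2.5):
    print(soft_threshold(ro_i, l1_penalty=1.0))   # -2.5, 0.0, 0.0, 2.0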
Python Code: import graphlab Explanation: Regression Week 5: LASSO (coordinate descent) In this notebook, you will implement your very own LASSO solver via coordinate descent. You will: * Write a function to normalize features * Implement coordinate descent for LASSO * Explore effects of L1 penalty Fire up graphlab create Make sure you have the latest version of graphlab (>= 1.7) End of explanation sales = graphlab.SFrame('kc_house_data.gl/') # In the dataset, 'floors' was defined with type string, # so we'll convert them to int, before using it below sales['floors'] = sales['floors'].astype(int) Explanation: Load in house sales data Dataset is from house sales in King County, the region where the city of Seattle, WA is located. End of explanation import numpy as np # note this allows us to refer to numpy as np instead def get_numpy_data(data_sframe, features, output): data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame # add the column 'constant' to the front of the features list so that we can extract it along with the others: features = ['constant'] + features # this is how you combine two lists # select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant): features_sframe = data_sframe[features] # the following line will convert the features_SFrame into a numpy matrix: feature_matrix = features_sframe.to_numpy() # assign the column of data_sframe associated with the output to the SArray output_sarray output_sarray = data_sframe[output] # the following will convert the SArray into a numpy array by first converting it to a list output_array = output_sarray.to_numpy() return(feature_matrix, output_array) Explanation: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features. Import useful functions from previous notebook As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_num_data() from the second notebook of Week 2. End of explanation def predict_output(feature_matrix, weights): # assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array # create the predictions vector by using np.dot() predictions = np.dot(feature_matrix, weights) return(predictions) Explanation: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights: End of explanation X = np.array([[3.,5.,8.],[4.,12.,15.]]) print X Explanation: Normalize features In the house dataset, features vary wildly in their relative magnitude: sqft_living is very large overall compared to bedrooms, for instance. As a result, weight for sqft_living would be much smaller than weight for bedrooms. This is problematic because "small" weights are dropped first as l1_penalty goes up. To give equal considerations for all features, we need to normalize features as discussed in the lectures: we divide each feature by its 2-norm so that the transformed feature has norm 1. Let's see how we can do this normalization easily with Numpy: let us first consider a small matrix. 
End of explanation norms = np.linalg.norm(X, axis=0) # gives [norm(X[:,0]), norm(X[:,1]), norm(X[:,2])] print norms Explanation: Numpy provides a shorthand for computing 2-norms of each column: End of explanation print X / norms # gives [X[:,0]/norm(X[:,0]), X[:,1]/norm(X[:,1]), X[:,2]/norm(X[:,2])] Explanation: To normalize, apply element-wise division: End of explanation def normalize_features(feature_matrix): norms = np.linalg.norm(feature_matrix, axis=0) return feature_matrix/norms, norms Explanation: Using the shorthand we just covered, write a short function called normalize_features(feature_matrix), which normalizes columns of a given feature matrix. The function should return a pair (normalized_features, norms), where the second item contains the norms of original features. As discussed in the lectures, we will use these norms to normalize the test data in the same way as we normalized the training data. End of explanation features, norms = normalize_features(np.array([[3.,6.,9.],[4.,8.,12.]])) print features # should print # [[ 0.6 0.6 0.6] # [ 0.8 0.8 0.8]] print norms # should print # [5. 10. 15.] Explanation: To test the function, run the following: End of explanation simple_features = ['sqft_living', 'bedrooms'] my_output = 'price' (simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output) Explanation: Implementing Coordinate Descent with normalized features We seek to obtain a sparse set of weights by minimizing the LASSO cost function SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|). (By convention, we do not include w[0] in the L1 penalty term. We never want to push the intercept to zero.) The absolute value sign makes the cost function non-differentiable, so simple gradient descent is not viable (you would need to implement a method called subgradient descent). Instead, we will use coordinate descent: at each iteration, we will fix all weights but weight i and find the value of weight i that minimizes the objective. That is, we look for argmin_{w[i]} [ SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|) ] where all weights other than w[i] are held to be constant. We will optimize one w[i] at a time, circling through the weights multiple times. 1. Pick a coordinate i 2. Compute w[i] that minimizes the cost function SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|) 3. Repeat Steps 1 and 2 for all coordinates, multiple times For this notebook, we use cyclical coordinate descent with normalized features, where we cycle through coordinates 0 to (d-1) in order, and assume the features were normalized as discussed above. The formula for optimizing each coordinate is as follows: ┌ (ro[i] + lambda/2) if ro[i] &lt; -lambda/2 w[i] = ├ 0 if -lambda/2 &lt;= ro[i] &lt;= lambda/2 └ (ro[i] - lambda/2) if ro[i] &gt; lambda/2 where ro[i] = SUM[ [feature_i]*(output - prediction + w[i]*[feature_i]) ]. 
Note that we do not regularize the weight of the constant feature (intercept) w[0], so, for this weight, the update is simply: w[0] = ro[i] Effect of L1 penalty Let us consider a simple model with 2 features: End of explanation simple_feature_matrix, norms = normalize_features(simple_feature_matrix) Explanation: Don't forget to normalize features: End of explanation weights = np.array([1., 4., 1.]) Explanation: We assign some random set of initial weights and inspect the values of ro[i]: End of explanation def compute_ro(feature, errors, weight): return np.dot(feature, errors + weight * feature) def compute_all_ro(feature_matrix, output, weights): :param i index of the feature prediction = predict_output(feature_matrix, weights) errors = output - prediction ro = [] for i in range(len(weights)): feature = feature_matrix[:, i] ro.append(compute_ro(feature, errors, weights[i])) return ro compute_all_ro(simple_feature_matrix, output, weights) Explanation: Use predict_output() to make predictions on this data. Compute the values of ro[i] for each feature in this simple model, using the formula given above, using the formula: ro[i] = SUM[ [feature_i]*(output - prediction + w[i]*[feature_i]) ] Hint: You can get a Numpy vector for feature_i using: simple_feature_matrix[:,i] End of explanation 87939470.772991061 * 2 > 1.64e8 87939470.772991061 * 2 > 1.73e8 Explanation: QUIZ QUESTION Recall that, whenever ro[i] falls between -l1_penalty/2 and l1_penalty/2, the corresponding weight w[i] is sent to zero. Now suppose we were to take one step of coordinate descent on either feature 1 or feature 2. What range of values of l1_penalty would not set w[1] zero, but would set w[2] to zero, if we were to take a step in that coordinate? End of explanation 87939470.772991061 * 2 > 1.9e8 Explanation: QUIZ QUESTION What range of values of l1_penalty would set both w[1] and w[2] to zero, if we were to take a step in that coordinate? End of explanation def lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty): # compute prediction prediction = predict_output(feature_matrix, weights) # compute ro[i] = SUM[ [feature_i]*(output - prediction + weight[i]*[feature_i]) ] errors = output - prediction feature = feature_matrix[:, i] ro_i = compute_ro(feature, errors, weights[i]) if i == 0: # intercept -- do not regularize new_weight_i = ro_i elif ro_i < -l1_penalty/2.: new_weight_i = ro_i + l1_penalty/2 elif ro_i > l1_penalty/2.: new_weight_i = ro_i - l1_penalty/2 else: new_weight_i = 0. return new_weight_i Explanation: So we can say that ro[i] quantifies the significance of the i-th feature: the larger ro[i] is, the more likely it is for the i-th feature to be retained. Single Coordinate Descent Step Using the formula above, implement coordinate descent that minimizes the cost function over a single feature i. Note that the intercept (weight 0) is not regularized. The function should accept feature matrix, output, current weights, l1 penalty, and index of feature to optimize over. The function should return new weight for feature i. 
End of explanation # should print 0.425558846691 import math print lasso_coordinate_descent_step(1, np.array([[3./math.sqrt(13),1./math.sqrt(10)],[2./math.sqrt(13),3./math.sqrt(10)]]), np.array([1., 1.]), np.array([1., 4.]), 0.1) Explanation: To test the function, run the following cell: End of explanation def lasso_cyclical_coordinate_descent(feature_matrix, output, initial_weights, l1_penalty, tolerance): weights = initial_weights changes = [float('inf') for i in range(len(weights))] while any(change > tolerance for change in changes): for i in range(len(weights)): old_weights_i = weights[i] # remember old value of weight[i], as it will be overwritten # the following line uses new values for weight[0], weight[1], ..., weight[i-1] # and old values for weight[i], ..., weight[d-1] weights[i] = lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty) # use old_weights_i to compute change in coordinate changes[i] = abs(old_weights_i - weights[i]) return weights Explanation: Cyclical coordinate descent Now that we have a function that optimizes the cost function over a single coordinate, let us implement cyclical coordinate descent where we optimize coordinates 0, 1, ..., (d-1) in order and repeat. When do we know to stop? Each time we scan all the coordinates (features) once, we measure the change in weight for each coordinate. If no coordinate changes by more than a specified threshold, we stop. For each iteration: 1. As you loop over features in order and perform coordinate descent, measure how much each coordinate changes. 2. After the loop, if the maximum change across all coordinates is falls below the tolerance, stop. Otherwise, go back to step 1. Return weights IMPORTANT: when computing a new weight for coordinate i, make sure to incorporate the new weights for coordinates 0, 1, ..., i-1. One good way is to update your weights variable in-place. See following pseudocode for illustration. ``` for i in range(len(weights)): old_weights_i = weights[i] # remember old value of weight[i], as it will be overwritten # the following line uses new values for weight[0], weight[1], ..., weight[i-1] # and old values for weight[i], ..., weight[d-1] weights[i] = lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty) # use old_weights_i to compute change in coordinate ... ``` End of explanation simple_features = ['sqft_living', 'bedrooms'] my_output = 'price' initial_weights = np.zeros(3) l1_penalty = 1e7 tolerance = 1.0 Explanation: Using the following parameters, learn the weights on the sales dataset. End of explanation (simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output) (normalized_simple_feature_matrix, simple_norms) = normalize_features(simple_feature_matrix) # normalize features Explanation: First create a normalized version of the feature matrix, normalized_simple_feature_matrix End of explanation weights = lasso_cyclical_coordinate_descent(normalized_simple_feature_matrix, output, initial_weights, l1_penalty, tolerance) Explanation: Then, run your implementation of LASSO coordinate descent: End of explanation def compute_RSS(feature_matrix, output, weights): errors = predict_output(feature_matrix, weights) - output return np.dot(errors, errors) RSS = compute_RSS(normalized_simple_feature_matrix, output, weights) '{:.3E}'.format(RSS) weights Explanation: QUIZ QUESTIONS 1. What is the RSS of the learned model on the normalized dataset? 2. Which features had weight zero at convergence? 
End of explanation train_data,test_data = sales.random_split(.8,seed=0) Explanation: Evaluating LASSO fit with more features Let us split the sales dataset into training and test sets. End of explanation all_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'waterfront', 'view', 'condition', 'grade', 'sqft_above', 'sqft_basement', 'yr_built', 'yr_renovated'] (train_matrix, train_output) = get_numpy_data(train_data, all_features, 'price') (test_matrix, test_output) = get_numpy_data(test_data, all_features, 'price') Explanation: Let us consider the following set of features. End of explanation norm_train_matrix, norms = normalize_features(train_matrix) Explanation: First, create a normalized feature matrix from the TRAINING data with these features. (Make you store the norms for the normalization, since we'll use them later) End of explanation initial_weights = np.zeros(1+len(all_features)) # plus the intercept l1_penalty=1e7 tolerance=1 weights1e7 = lasso_cyclical_coordinate_descent(norm_train_matrix, train_output, initial_weights, l1_penalty, tolerance) def find_zero_weight_features(weights, features): zero_indices, = np.where(weights == 0) for i in zero_indices - 1: if i>=0: print i, features[i] def find_none_zero_weight_features(weights, features): zero_indices, = np.where(weights != 0) for i in zero_indices - 1: if i>=0: print i, features[i] Explanation: First, learn the weights with l1_penalty=1e7, on the training data. Initialize weights to all zeros, and set the tolerance=1. Call resulting weights weights1e7, you will need them later. End of explanation find_zero_weight_features(weights1e7, all_features) find_none_zero_weight_features(weights1e7, all_features) Explanation: QUIZ QUESTION What features had non-zero weight in this case? End of explanation initial_weights = np.zeros(1+len(all_features)) # plus the intercept l1_penalty=1e8 tolerance=1 weights1e8 = lasso_cyclical_coordinate_descent(norm_train_matrix, train_output, initial_weights, l1_penalty, tolerance) Explanation: Next, learn the weights with l1_penalty=1e8, on the training data. Initialize weights to all zeros, and set the tolerance=1. Call resulting weights weights1e8, you will need them later. End of explanation find_none_zero_weight_features(weights1e8, all_features) Explanation: QUIZ QUESTION What features had non-zero weight in this case? End of explanation initial_weights = np.zeros(1+len(all_features)) # plus the intercept l1_penalty=1e4 tolerance=5e5 weights1e4 = lasso_cyclical_coordinate_descent(norm_train_matrix, train_output, initial_weights, l1_penalty, tolerance) Explanation: Finally, learn the weights with l1_penalty=1e4, on the training data. Initialize weights to all zeros, and set the tolerance=5e5. Call resulting weights weights1e4, you will need them later. (This case will take quite a bit longer to converge than the others above.) End of explanation find_none_zero_weight_features(weights1e4, all_features) Explanation: QUIZ QUESTION What features had non-zero weight in this case? End of explanation weights1e4_normalized = weights1e4 / norms weights1e7_normalized = weights1e7 / norms weights1e8_normalized = weights1e8 / norms Explanation: Rescaling learned weights Recall that we normalized our feature matrix, before learning the weights. To use these weights on a test set, we must normalize the test data in the same way. 
Alternatively, we can rescale the learned weights to include the normalization, so we never have to worry about normalizing the test data: In this case, we must scale the resulting weights so that we can make predictions with original features: 1. Store the norms of the original features to a vector called norms: features, norms = normalize_features(features) 2. Run Lasso on the normalized features and obtain a weights vector 3. Compute the weights for the original features by performing element-wise division, i.e. weights_normalized = weights / norms Now, we can apply weights_normalized to the test data, without normalizing it! Create a normalized version of each of the weights learned above. (weights1e4, weights1e7, weights1e8). End of explanation print weights1e7_normalized[3] Explanation: To check your results, if you call normalized_weights1e7 the normalized version of weights1e7, then: print normalized_weights1e7[3] should return 161.31745624837794. End of explanation (test_feature_matrix, test_output) = get_numpy_data(test_data, all_features, 'price') Explanation: Evaluating each of the learned models on the test data Let's now evaluate the three models on the test data: End of explanation print compute_RSS(test_feature_matrix, test_output, weights1e4_normalized) print compute_RSS(test_feature_matrix, test_output, weights1e7_normalized) print compute_RSS(test_feature_matrix, test_output, weights1e8_normalized) Explanation: Compute the RSS of each of the three normalized weights on the (unnormalized) test_feature_matrix: End of explanation
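To compare the three penalties side by side, a short summary loop like the one below can help. It is a sketch that assumes the rescaled weights1e4/1e7/1e8 arrays, compute_RSS, and the test matrices defined in the cells above.

models = [('1e4', weights1e4_normalized),
          ('1e7', weights1e7_normalized),
          ('1e8', weights1e8_normalized)]
for name, w in models:
    rss = compute_RSS(test_feature_matrix, test_output, w)
    nonzeros = (w != 0).sum()
    print('l1_penalty=%s nonzeros=%d test RSS=%.3e' % (name, nonzeros, rss))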
12,911
Given the following text description, write Python code to implement the functionality described below step by step Description: lingam.utils In this example, we need to import numpy, pandas, and lingam. Step1: We define utility functions to draw the directed acyclic graph. Step2: print_causal_directions We create test data consisting of 6 variables. Step3: We run booststrap and get the result. Step4: We can get the ranking of the causal directions extracted by get_causal_direction_counts() method. Step5: Then, we import lingam.utils and check the results with the print_causal_directions function. Step6: We can also output by specifying the variable name. Step7: print_dags We use the bootstrap results above to get the ranking of the DAGs extracted. Step8: Then, we import lingam.utils and check the results with the print_dagc function. Step9: make_prior_knowledge In order to perform causal discovery using prior knowledge of causal relations, make_prior_knowledge function that creates a prior knowledge matrix is provided. First, we import lingam.utils to use make_prior_knowledge function. Step10: Exogenous variables If the exogenous variable is known, specify the variable index in the exogenous_variables argument. Step11: Sink variables If the sink variable such as the target variable of the predictive model is already known, specify the variable index in the sink_variables argument. Step12: Directed path If the causal direction between variables is already known, specify the variable index pair in the paths argument. Step13: No directed path If there is no causal direction between variables is already known, specify the variable index pair in the no_paths argument. Step14: Mix all knowledge All prior knowledge can be specified at the same time. A prior knowledge matrix is created with priorities in the order of exogenous_variables, sink_variables, paths, no_paths. Step15: remove_effect If it is considered that there are hidden common causes in X0(categorical variable) and X1 in following DAG, we want to run causal discovery excluding the effects of X0 and X1. Step16: In this cases, we can import lingam.utils and use remove_effect function. Step17: If we run DirectLiNGAM on the dataset that excludes the effects of X0 and X1, we can get the following results Step18: To add causal coefficients from variables whose effects have been removed, regression for each variable may be performed. Step19: get_sink_variables By using the get_sink_variables() method, we can get the sink variables in the adjacent matrix. Step20: get_exo_variables By using the get_exo_variables() method, we can get the exogenous variables in the adjacent matrix. Step21: find_all_paths Using the find_all_paths() method, we can explore all paths from any variable to any variable.
Python Code: import numpy as np import pandas as pd import graphviz import lingam from lingam.utils import make_dot np.set_printoptions(precision=3, suppress=True) np.random.seed(0) Explanation: lingam.utils In this example, we need to import numpy, pandas, and lingam. End of explanation def make_prior_knowledge_graph(prior_knowledge_matrix): d = graphviz.Digraph(engine='dot') labels = [f'x{i}' for i in range(prior_knowledge_matrix.shape[0])] for label in labels: d.node(label, label) dirs = np.where(prior_knowledge_matrix > 0) for to, from_ in zip(dirs[0], dirs[1]): d.edge(labels[from_], labels[to]) dirs = np.where(prior_knowledge_matrix < 0) for to, from_ in zip(dirs[0], dirs[1]): d.edge(labels[from_], labels[to], style='dashed') return d Explanation: We define utility functions to draw the directed acyclic graph. End of explanation x3 = np.random.uniform(size=10000) x0 = 3.0*x3 + np.random.uniform(size=10000) x2 = 6.0*x3 + np.random.uniform(size=10000) x1 = 3.0*x0 + 2.0*x2 + np.random.uniform(size=10000) x5 = 4.0*x0 + np.random.uniform(size=10000) x4 = 8.0*x0 - 1.0*x2 + np.random.uniform(size=10000) X = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T ,columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5']) X.head() Explanation: print_causal_directions We create test data consisting of 6 variables. End of explanation model = lingam.DirectLiNGAM() result = model.bootstrap(X, 100) Explanation: We run booststrap and get the result. End of explanation cdc = result.get_causal_direction_counts(n_directions=8, min_causal_effect=0.01) Explanation: We can get the ranking of the causal directions extracted by get_causal_direction_counts() method. End of explanation from lingam.utils import print_causal_directions print_causal_directions(cdc, 100) Explanation: Then, we import lingam.utils and check the results with the print_causal_directions function. End of explanation labels = ['X1', 'X2', 'X3', 'X4', 'X5', 'X6'] print_causal_directions(cdc, 100, labels=labels) Explanation: We can also output by specifying the variable name. End of explanation dagc = result.get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01) Explanation: print_dags We use the bootstrap results above to get the ranking of the DAGs extracted. End of explanation from lingam.utils import print_dagc print_dagc(dagc, 100) labels = ['X1', 'X2', 'X3', 'X4', 'X5', 'X6'] print_dagc(dagc, 100, labels=labels) Explanation: Then, we import lingam.utils and check the results with the print_dagc function. End of explanation from lingam.utils import make_prior_knowledge Explanation: make_prior_knowledge In order to perform causal discovery using prior knowledge of causal relations, make_prior_knowledge function that creates a prior knowledge matrix is provided. First, we import lingam.utils to use make_prior_knowledge function. End of explanation pk = make_prior_knowledge(n_variables=3, exogenous_variables=[0, 1]) print(pk) make_prior_knowledge_graph(pk) Explanation: Exogenous variables If the exogenous variable is known, specify the variable index in the exogenous_variables argument. End of explanation pk = make_prior_knowledge(n_variables=3, sink_variables=[1, 2]) print(pk) make_prior_knowledge_graph(pk) Explanation: Sink variables If the sink variable such as the target variable of the predictive model is already known, specify the variable index in the sink_variables argument. 
End of explanation pk = make_prior_knowledge(n_variables=3, paths=[[0, 1]]) print(pk) make_prior_knowledge_graph(pk) Explanation: Directed path If the causal direction between variables is already known, specify the variable index pair in the paths argument. End of explanation pk = make_prior_knowledge(n_variables=3, no_paths=[[0, 1]]) print(pk) make_prior_knowledge_graph(pk) Explanation: No directed path If there is no causal direction between variables is already known, specify the variable index pair in the no_paths argument. End of explanation pk = make_prior_knowledge( n_variables=4, exogenous_variables=[3], sink_variables=[0], paths=[[1, 0]], no_paths=[[3, 0]], ) make_prior_knowledge_graph(pk) Explanation: Mix all knowledge All prior knowledge can be specified at the same time. A prior knowledge matrix is created with priorities in the order of exogenous_variables, sink_variables, paths, no_paths. End of explanation m = np.array([[0.0, 0.0, 0.0, 0.0, 0.0], [0.5, 0.0, 0.0, 0.0, 0.0], [4.0,-1.0, 0.0, 0.0, 0.0], [2.0, 0.0,-3.0, 0.0, 0.0], [0.0, 1.0, 2.0, 2.0, 0.0]]) make_dot(m) # x0 = np.random.uniform(size=10000) x0 = np.random.randint(2, size=10000) x1 = 0.5*x0 + np.random.uniform(size=10000) x2 = 4.0*x0 - 1.0*x1 + np.random.uniform(size=10000) x3 = 2.0*x0 - 3.0*x2 + np.random.uniform(size=10000) x4 = 1.0*x1 + 2.0*x2 + 2.0*x3 + np.random.uniform(size=10000) X = pd.DataFrame(np.array([x0, x1, x2, x3, x4]).T ,columns=['x0', 'x1', 'x2', 'x3', 'x4']) X.head() Explanation: remove_effect If it is considered that there are hidden common causes in X0(categorical variable) and X1 in following DAG, we want to run causal discovery excluding the effects of X0 and X1. End of explanation from lingam.utils import remove_effect remove_features=[0, 1] X_removed = remove_effect(X, remove_features=remove_features) Explanation: In this cases, we can import lingam.utils and use remove_effect function. End of explanation model = lingam.DirectLiNGAM() model.fit(X_removed) print(model.causal_order_) print(model.adjacency_matrix_) make_dot(model.adjacency_matrix_) Explanation: If we run DirectLiNGAM on the dataset that excludes the effects of X0 and X1, we can get the following results: End of explanation from sklearn import linear_model from sklearn.utils import check_array def get_reg_coef(X, features, target, gamma=1.0): X = np.copy(check_array(X)) lr = linear_model.LinearRegression() lr.fit(X[:, features], X[:, target]) weight = np.power(np.abs(lr.coef_), gamma) reg = linear_model.LassoLarsIC(criterion='bic') reg.fit(X[:, features] * weight, X[:, target]) return reg.coef_ * weight B = model.adjacency_matrix_.copy() B[2, 0], B[2, 1] = get_reg_coef(X, [0, 1], 2) B[3, 0], B[3, 1], _ = get_reg_coef(X, [0, 1, 2], 3) B[4, 0], B[4, 1], _, _ = get_reg_coef(X, [0, 1, 2, 3], 4) print(B) make_dot(B, lower_limit=0.1) # To align the display, the causal direction of small coefficients is excluded. Explanation: To add causal coefficients from variables whose effects have been removed, regression for each variable may be performed. End of explanation m = np.array([[0.0, 0.0, 0.0, 0.0, 0.0], [0.5, 0.0, 0.0, 0.0, 0.0], [4.0,-1.0, 0.0, 0.0, 0.0], [2.0, 0.0,-3.0, 0.0, 0.0], [0.0, 1.0, 2.0, 2.0, 0.0]]) make_dot(m) from lingam.utils import get_sink_variables get_sink_variables(m) Explanation: get_sink_variables By using the get_sink_variables() method, we can get the sink variables in the adjacent matrix. 
End of explanation m = np.array([[0.0, 0.0, 0.0, 0.0, 0.0], [0.5, 0.0, 0.0, 0.0, 0.0], [4.0,-1.0, 0.0, 0.0, 0.0], [2.0, 0.0,-3.0, 0.0, 0.0], [0.0, 1.0, 2.0, 2.0, 0.0]]) make_dot(m) from lingam.utils import get_exo_variables get_exo_variables(m) Explanation: get_exo_variables By using the get_exo_variables() method, we can get the exogenous variables in the adjacent matrix. End of explanation m = np.array([[0.0, 0.0, 0.0, 0.0, 0.0], [0.5, 0.0, 0.0, 0.0, 0.0], [4.0,-1.0, 0.0, 0.0, 0.0], [2.0, 0.0,-3.0, 0.0, 0.0], [0.0, 1.0, 2.0, 2.0, 0.0]]) make_dot(m) from lingam.utils import find_all_paths paths = find_all_paths(m, 0, 4) pd.DataFrame({'path': paths[0], 'effect': paths[1]}) Explanation: find_all_paths Using the find_all_paths() method, we can explore all paths from any variable to any variable. End of explanation
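In practice, a prior knowledge matrix built with make_prior_knowledge is passed to the estimator itself. A minimal sketch, assuming the five-column DataFrame X from the remove_effect example above and lingam's prior_knowledge keyword:
pk = make_prior_knowledge(n_variables=5, exogenous_variables=[0], sink_variables=[4])
model = lingam.DirectLiNGAM(prior_knowledge=pk)  # constrain the causal search with prior knowledge
model.fit(X)
print(model.causal_order_)
print(model.adjacency_matrix_)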
12,912
Given the following text problem statement, write Python code to implement the functionality described below in the problem statement
Problem:
How can I find the (row, column) indices of the minimum of a numpy array/matrix, given that the minimum might not be unique?
Problem:
import numpy as np

a = np.array([[1, 0], [0, 2]])
# np.argwhere returns every (row, column) index where the condition is True,
# so all occurrences of the minimum are reported, not just the first one.
result = np.argwhere(a == np.min(a))
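For the array above this reports every position of the minimum; a quick check of the output, plus a common alternative when only the first occurrence is wanted:
print(result)
# [[0 1]
#  [1 0]]
row, col = np.unravel_index(np.argmin(a), a.shape)  # first occurrence only: (0, 1)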
12,913
Given the following text description, write Python code to implement the functionality described below step by step Description: Demonstration The following demonstration includes basic and intermediate uses of the LamAna Project library. It is intended to exhaustively reference all API features, therefore some advandced demonstrations will favor technical detail. Tutorial Step1: Goal Step2: That's it! The rest of this demonstration showcases API functionality of the LamAna project. Calling Case attributes Passed in arguments are acessible, but can be displayed as pandas Series and DataFrames. Step3: Reset material order. Changes are relfected in the properties view and stacking order. Step4: Serial resets Step5: Reset the parameters Step6: apply() Geometries and LaminateModels Construct a laminate using geometric, matrial paramaters and geometries. Step7: Access the user input geometries Step8: We can compare Geometry objects with builtin Python operators. This process directly compares GeometryTuples in the Geometry class. Step9: Get all thicknesses for selected layers. Step10: A general and very important object is the LaminateModel. Step11: Sometimes might you want to throw in a bunch of geometry strings from different groups. If there are repeated strings in different groups (set intersections), you can tell Case to only give a unique result. For instane, here we combine two groups of geometry strings, 5-plys and odd-plys. Clearly these two groups overlap, and there are some repeated geometries (one with different conventions). Using the unique keyword, Case only operates on a unique set of Geometry objects (independent of convention), resulting in a unique set of LaminateModels. Step12: DataFrame Access You can get a quick view of the stack using the snapshot method. This gives access to a Construct - a DataFrame converted stack. Step13: We can easily view entire laminate DataFrames using the frames attribute. This gives access to LaminateModels (DataFrame) objects, which extends the stack view so that laminate theory is applied to each row. Step14: NOTE, for even plies, the material is set alternate for each layer. Thus outers layers may be different materials. Step15: Totaling The distributions.Case class has useful properties available for totaling specific layers for a group of laminates as lists. As these properties return lists, these results can be sliced and iterated. Step16: .total property Step17: Geometry Totals The total attribute used in Case actually dervive from attributes for Geometry objects individually. On Geometry objects, they return specific thicknesses instead of lists of thicknesses. Step18: LaminateModel Attributes Access the LaminateModel object directly using the LMs attribute. Step19: Laminates are assumed mirrored at the neutral axis, but dissimilar inner_i thicknesses are allowed. Step20: Separate from the case attributes, Laminates have useful attributes also, such as nplies, p and its own total. Step21: Often the extreme stress values (those at the interfaces) are most important. This is equivalent to p=2. Step22: NOTE Step23: As with Geometry objects, we can compare LaminateModel objects also. ~~This process directly compares two defining components of a LaminateModel object Step24: Use python and pandas native comparison tracebacks that to understand the errors directly by comparing FeatureInput dict and LaminateModel DataFrame. Step25: plot() LT Geometries CAVEAT Step26: We try to quickly plot simple stress distriubtions with native pandas methods. 
We have two variants for displaying distributions Step27: While we get reasonable stress distribution plots rather simply, LamAna offers some plotting methods pertinent to laminates than assisting with visualization. Demo - An example illustration of desired plotting of multiple geometries from distributions. This is image of results from legacy code used for comparison. We can plot the stress distribution for a case of a single geometry. Step28: We can also plot multiple geometries of similar total thickness. Step29: Exporting Saving data is critical for future analysis. LamAna offers two formas for exporting your data and parameters. Parameters used to make calculations such as the FeatureInput information are saved as "dashboards" in different forms. - '.xlsx' Step30: <div class="alert alert-warning">**NOTE** For demonstration purposes, the `temp` and `delete` are activated. This will create temporary files in the OS temp directory and automatically delete them. For practical use, ignore setting these flags.</div> The latter LaminateModel data was saved to an .xlsx file in the default export folder. The filepath is returned (currently suppressed with the ; line). The next level to export data is for a case. This will save all files comprise in a case. If exported to csv format, files are saved seperately. In xlsx format, a single file is made where each LaminateModel data and dashboard are saved as seperate worksheets. Step31: Tutorial Step32: Exploring LamAna Objects This is brief introduction to underlying objects in this package. We begin with an input string that is parsed and converted into a Geometry object. This is part of the input_ module. Step33: This object has a number of handy methods. This information is shipped with parameters and properties in FeatureInput. A FeatureInput is simply a dict. This currently does have not an official class but is it import for other objects. Step34: The following objects are serially inherited and part of the constructs module. These construct the DataFrame represention of a laminate. The code to decouple LaminateModel from Laminate was merged in verions 0.4.13. Step35: The latter cells verify these objects are successfully decoupled. That's all for now. Generating Multiple Cases We've already seen we can generate a case object and plots with three lines of code. However, sometimes it is necessary to generate different cases. These invocations can be tedious with three lines of code per case. Have no fear. A simple way to produce more cases is to instantiate a Cases object. Below we will create a Cases which houses multiples cases that Step36: Cases() accepts a list of geometry strings. Given appropriate default keywords, this lone argument will return a dict-like object of cases with indicies as keys. The model and ps keywords have default values. A Cases() object has some interesting characteristics (this is not a dict) Step37: LamainateModels can be compared using set theory. Unique subsets of LaminateModels can be returned from a mix of repeated geometry strings. We will use the default model and ps values. Step38: Subclassing Custom Default Parameters We observed the benefits of using implicit, default keywords (models, ps) in simplifying the writing of Cases() instantiations. In general, the user can code explicit defaults for load_params and mat_props by subclassing BaseDefaults() from inputs_. 
While subclassing requires some extra Python knowledge, this is a relatively simple process that reduces a significant amount of redundant code, leading to a more effiencient anaytical setting. The BaseDefaults contains a dict various geometry strings and Geometry objects. Rather than defining examples for various geometry plies, the user can call from all or a groupings of geometries. Step39: The latter geometric defaults come out of the box when subclassed from BaseDefaults. If custom geometries are desired, the user can override the geo_inputs dict, which automatically builds the Geo_objects dict. Users can override three categories of defaults parameters Step40: Subclassing Custom Models One of the most powerful feauteres of LamAna is the ability to define customized modifications to the Laminate Theory models. Code for laminate theories (i.e. Classic_LT, Wilson_LT) are are located in the models directory. These models can be simple functions or sublclass from BaseModels in the theories module. Either approach is acceptable (see narrative docs for more details on creating custom models. This ability to add custom code make this library extensibile to use a larger variety of models. Plotting Cases An example of multiple subplots is show below. Using a former case, notice each subplot is indepent, woth separate geometries for each. LamAna treats each subplot as a subset or "caselet" Step41: Each caselet can also be a separate case, plotting multiple geometries for each as accomplished with Case. Step42: See Demo notebooks for more examples of plotting. More on Cases Step43: Applying caselets The term "caselet" is defined in LPEP 003. Most importantly, the various types a caselet represents is handled by Cases and discussed here. In 0.4.4b3+, caselets are contained in lists. LPEP entertains the idea of containing caselets in dicts. Step44: Characteristics Step45: Unique Cases from Intersecting Caselets Cases can check if caselet is unique by comparing the underlying geometry strings. Here we have a non-unique caselets of different types. We get unique results within each caselet using the unique keyword. Notice, different caselets could have similar LaminateModels. Step46: The following cells attempt to print the LM objects. Cases objects unordered and thus print in random orders. It is important to note that once set operations are performed, order is no longer a preserved. This is related to how Python handles hashes. This applies to Cases() in two areas Step47: Selecting From cases, subsets of LaminateModels can be chosen. select is a method that performs on and returns sets of LaminateModels. Plotting functions are not implement for this method directly, however the reulsts can be used to make new cases instances from which .plot() is accessible. Example access techniques using Cases. Access all cases Step48: Selections from latter cases. Step49: Advanced techniques Step50: By default, difference is subtracted as set(ps) - set(nplies). Currently there is no implementation for the converse difference, but set operations still work. Step51: Current logic seems to return a union. Enhancing selection algorithms with set operations Need logic to append LM for the following Step52: In order to compare objects in sets, they must be hashable. The simple requirement equality is include whatever makes the hash of a equal to the hash of b. Ideally, we should hash the Geometry object, but the inner values is a list which is unhashable due to its mutability. 
Conventiently however, strings are not hashable. We can try to hash the geometry input string once they have been converted to General Convention as unique identifiers for the geometry object. This requires some reorganization in Geometry. ~~isolate a converter function _to_gen_convention()~~ privative all functions invisible to the API ~~hash the converted geo_strings~~ ~~privatize _geo_strings. This cannot be altered by the user.~~ Here we see the advantage to using geo_strings as hashables. They are inheirently hashable. UPDATE Step53: Need to make Laminate class hashable. Try to use unique identifiers such as Geometry and p. Step54: Use sets to filter unique geometry objects from Defaults(). Step55: Mixing Geometries See above. Looks like comparing the order of these lists give different results. This test has been quarantine from the repo until a solution is found. Step56: Idiomatic Case Making As we transition to more automated techniques, tf parameters are to be reused multiple times, it can be helpful to store them as default values. Step57: Finally, if building several cases is required for the same parameters, we can use higher-level API tools to help automate the process. Note, for every case that is created, a seperate Case() instantiation and Case.apply() call is required. These techniques obviate such redundancies. Step58: Cases are differentiated by different ps. Step59: Python 3 no longer returns a list for .values() method, so list used to evalate a the dictionary view. While consuming a case's, dict value view with list() works in Python 2 and 3, iteration with loops and comprehensions is a preferred technique for both single and mutiple case processing. After cases are accessed, iteration can access the contetnts of all cases. Iteration is the preferred technique for processing cases. It is most general, cleaner, Py2/3 compatible out of the box and agrees with The Zen of Python Step60: We will demonstrate comparing two techniques for generating equivalent cases.
Python Code: #------------------------------------------------------------------------------ import pandas as pd import lamana as la #import LamAna as la %matplotlib inline #%matplotlib nbagg # PARAMETERS ------------------------------------------------------------------ # Build dicts of geometric and material parameters load_params = {'R' : 12e-3, # specimen radius 'a' : 7.5e-3, # support ring radius 'r' : 2e-4, # radial distance from center loading 'P_a' : 1, # applied load 'p' : 5, # points/layer } # Quick Form: a dict of lists mat_props = {'HA' : [5.2e10, 0.25], 'PSu' : [2.7e9, 0.33], } # Standard Form: a dict of dicts # mat_props = {'Modulus': {'HA': 5.2e10, 'PSu': 2.7e9}, # 'Poissons': {'HA': 0.25, 'PSu': 0.33}} # What geometries to test? # Make tuples of desired geometeries to analyze: outer - {inner...-....}_i - middle # Current Style g1 = ('0-0-2000') # Monolith g2 = ('1000-0-0') # Bilayer g3 = ('600-0-800') # Trilayer g4 = ('500-500-0') # 4-ply g5 = ('400-200-800') # Short-hand; <= 5-ply g6 = ('400-200-400S') # Symmetric g7 = ('400-[200]-800') # General convention; 5-ply g8 = ('400-[100,100]-800') # General convention; 7-plys g9 = ('400-[100,100]-400S') # General and Symmetric convention; 7-plys '''Add to test set''' g13 = ('400-[150,50]-800') # Dissimilar inner_is g14 = ('400-[25,125,50]-800') geos_most = [g1, g2, g3, g4, g5] geos_special = [g6, g7, g8, g9] geos_full = [g1, g2, g3, g4, g5, g6, g7, g8, g9] geos_dissimilar = [g13, g14] # Future Style #geos1 = ((400-400-400),(400-200-800),(400-350-500)) # same total thickness #geos2 = ((400-400-400), (400-500-1600), (400-200-800)) # same outer thickness #import pandas as pd pd.set_option('display.max_columns', 10) pd.set_option('precision', 4) Explanation: Demonstration The following demonstration includes basic and intermediate uses of the LamAna Project library. It is intended to exhaustively reference all API features, therefore some advandced demonstrations will favor technical detail. Tutorial: Basic User Input Startup End of explanation case1 = la.distributions.Case(load_params, mat_props) # instantiate a User Input Case Object through distributions case1.apply(['400-200-800']) case1.plot() Explanation: Goal: Generate a Plot in 3 Lines of Code End of explanation # Original case1.load_params # Series View case1.parameters # Original case1.mat_props # DataFrame View case1.properties # Equivalent Standard Form case1.properties.to_dict() Explanation: That's it! The rest of this demonstration showcases API functionality of the LamAna project. Calling Case attributes Passed in arguments are acessible, but can be displayed as pandas Series and DataFrames. End of explanation case1.materials = ['PSu', 'HA'] case1.properties Explanation: Reset material order. Changes are relfected in the properties view and stacking order. End of explanation case1.materials = ['PSu', 'HA', 'HA'] case1.properties case1.materials # get reorderd list of materials case1._materials case1.apply(geos_full) case1.snapshots[-1] '''Need to bypass pandas abc ordering of indicies.''' Explanation: Serial resets End of explanation mat_props2 = {'HA' : [5.3e10, 0.25], 'PSu' : [2.8e9, 0.33], } case1 = la.distributions.Case(load_params, mat_props2) case1.properties Explanation: Reset the parameters End of explanation case2 = la.distributions.Case(load_params, mat_props) case2.apply(geos_full) # default model Wilson_LT Explanation: apply() Geometries and LaminateModels Construct a laminate using geometric, matrial paramaters and geometries. 
End of explanation case2.Geometries # using an attribute, __repr__ print(case2.Geometries) # uses __str__ case2.Geometries[0] # indexing Explanation: Access the user input geometries End of explanation bilayer = case2.Geometries[1] # (1000.0-[0.0]-0.0) trilayer = case2.Geometries[2] # (600.0-[0.0]-800.0) #bilayer == trilayer bilayer != trilayer Explanation: We can compare Geometry objects with builtin Python operators. This process directly compares GeometryTuples in the Geometry class. End of explanation case2.middle case2.inner case2.inner[-1] case2.inner[-1][0] # List indexing allowed [first[0] for first in case2.inner] # iterate case2.outer Explanation: Get all thicknesses for selected layers. End of explanation case2.LMs Explanation: A general and very important object is the LaminateModel. End of explanation fiveplys = ['400-[200]-800', '350-400-500', '200-100-1400'] oddplys = ['400-200-800', '350-400-500', '400.0-[100.0,100.0]-800.0'] mix = fiveplys + oddplys mix # Non-unique, repeated 5-plys case_ = la.distributions.Case(load_params, mat_props) case_.apply(mix) case_.LMs # Unique case_ = la.distributions.Case(load_params, mat_props) case_.apply(mix, unique=True) case_.LMs Explanation: Sometimes might you want to throw in a bunch of geometry strings from different groups. If there are repeated strings in different groups (set intersections), you can tell Case to only give a unique result. For instane, here we combine two groups of geometry strings, 5-plys and odd-plys. Clearly these two groups overlap, and there are some repeated geometries (one with different conventions). Using the unique keyword, Case only operates on a unique set of Geometry objects (independent of convention), resulting in a unique set of LaminateModels. End of explanation case2.snapshots[-1] Explanation: DataFrame Access You can get a quick view of the stack using the snapshot method. This gives access to a Construct - a DataFrame converted stack. End of explanation '''Consider head command for frames list''' #case2.frames ##with pd.set_option('display.max_columns', None): # display all columns, within this context manager ## case2.frames[5] case2.frames[5].head() '''Extend laminate attributes''' case3 = la.distributions.Case(load_params, mat_props) case3.apply(geos_dissimilar) #case3.frames Explanation: We can easily view entire laminate DataFrames using the frames attribute. This gives access to LaminateModels (DataFrame) objects, which extends the stack view so that laminate theory is applied to each row. End of explanation case4 = la.distributions.Case(load_params, mat_props) case4.apply(['400-[100,100,100]-0']) case4.frames[0][['layer', 'matl', 'type']] ; '''Add functionality to customize material type.''' Explanation: NOTE, for even plies, the material is set alternate for each layer. Thus outers layers may be different materials. End of explanation '''Show Geometry first then case use.''' Explanation: Totaling The distributions.Case class has useful properties available for totaling specific layers for a group of laminates as lists. As these properties return lists, these results can be sliced and iterated. 
End of explanation case2.total case2.total_middle case2.total_middle case2.total_inner_i case2.total_outer case2.total_outer[4:-1] # slicing [inner_i[-1]/2.0 for inner_i in case2.total_inner_i] # iterate Explanation: .total property End of explanation G1 = case2.Geometries[-1] G1 G1.total # laminate thickness (um) G1.total_inner_i # inner_i laminae G1.total_inner_i[0] # inner_i lamina pair sum(G1.total_inner_i) # inner total G1.total_inner # inner total Explanation: Geometry Totals The total attribute used in Case actually dervive from attributes for Geometry objects individually. On Geometry objects, they return specific thicknesses instead of lists of thicknesses. End of explanation case2.LMs[5].Middle case2.LMs[5].Inner_i Explanation: LaminateModel Attributes Access the LaminateModel object directly using the LMs attribute. End of explanation case2.LMs[5].tensile Explanation: Laminates are assumed mirrored at the neutral axis, but dissimilar inner_i thicknesses are allowed. End of explanation LM = case2.LMs[4] LM.LMFrame.tail(7) Explanation: Separate from the case attributes, Laminates have useful attributes also, such as nplies, p and its own total. End of explanation LM.extrema LM.p # number of rows per group LM.nplies # number of plies LM.total # total laminate thickness (m) LM.Geometry '''Overload the min and max special methods.''' LM.max_stress # max interfacial failure stress Explanation: Often the extreme stress values (those at the interfaces) are most important. This is equivalent to p=2. End of explanation LM.min_stress '''Redo tp return series of bool an index for has_attrs''' LM.has_neutaxis LM.has_discont LM.is_special LM.FeatureInput '''Need to fix FeatureInput and Geometry inside LaminateModel''' Explanation: NOTE: this feature gives a different result for p=1 since a single middle cannot report two interfacial values; INDET. End of explanation case2 = la.distributions.Case(load_params, mat_props) case2.apply(geos_full) bilayer_LM = case2.LMs[1] trilayer_LM = case2.LMs[2] trilayer_LM == trilayer_LM #bilayer_LM == trilayer_LM bilayer_LM != trilayer_LM Explanation: As with Geometry objects, we can compare LaminateModel objects also. ~~This process directly compares two defining components of a LaminateModel object: the LM DataFrame (LMFrame) and FeatureInput. If either is False, the equality returns False.~~ End of explanation #bilayer_LM.FeatureInput == trilayer_LM.FeatureInput # gives detailed traceback '''Fix FI DataFrame with dict.''' bilayer_LM.FeatureInput #bilayer_LM.LMFrame == trilayer_LM.LMFrame # gives detailed traceback Explanation: Use python and pandas native comparison tracebacks that to understand the errors directly by comparing FeatureInput dict and LaminateModel DataFrame. End of explanation '''Find a way to remove all but interfacial points.''' Explanation: plot() LT Geometries CAVEAT: it is recommended to use at least p=2 for calculating stress. Less than two points for odd plies is indeterminant in middle rows, which can raise exceptions. End of explanation from lamana.utils import tools as ut from lamana.models import Wilson_LT as wlt dft = wlt.Defaults() #%matplotlib nbagg # Quick plotting case4 = ut.laminator(dft.geos_standard) for case in case4.values(): for LM in case.LMs: df = LM.LMFrame df.plot(x='stress_f (MPa/N)', y='d(m)', title='Unnormalized Distribution') df.plot(x='stress_f (MPa/N)', y='k', title='Normalized Distribution') Explanation: We try to quickly plot simple stress distriubtions with native pandas methods. 
We have two variants for displaying distributions:
- Unnormalized: plotted by the height (`d_`). Visually: thicknesses vary, material slopes are constant.
- Normalized: plotted by the relative fraction level (`k_`). Visually: thicknesses are constant, material slopes vary.
Here we plot with the nbagg matplotlib backend to generate interactive figures.
NOTE: for Normalized plots, slope can vary for a given material.
End of explanation
case3 = la.distributions.Case(load_params, mat_props)
case3.apply(['400-200-800'], model='Wilson_LT')
case3.plot()
Explanation: While we get reasonable stress distribution plots rather simply, LamAna offers some plotting methods pertinent to laminates that assist with visualization.
Demo - An example illustration of desired plotting of multiple geometries from distributions. This is an image of results from legacy code used for comparison.
We can plot the stress distribution for a case of a single geometry.
End of explanation
five_plies = ['350-400-500', '400-200-800', '200-200-1200', '200-100-1400',
              '100-100-1600', '100-200-1400', '300-400-600']
case4 = la.distributions.Case(load_params, mat_props)
case4.apply(five_plies, model='Wilson_LT')
case4.plot()
Explanation: We can also plot multiple geometries of similar total thickness.
End of explanation
LM = case4.LMs[0]
LM.to_xlsx(delete=True)  # or `to_csv()`
Explanation: Exporting
Saving data is critical for future analysis. LamAna offers two formats for exporting your data and parameters. Parameters used to make calculations, such as the FeatureInput information, are saved as "dashboards" in different forms.
- '.xlsx': (default); convenient for storing multiple calculations and dashboards as separate worksheets in an Excel workbook.
- '.csv': universal format; separate files for data and dashboard.
The lowest level to export data is for a LaminateModel object.
End of explanation
case4.to_xlsx(temp=True, delete=True)  # or `to_csv()`
Explanation: <div class="alert alert-warning">**NOTE** For demonstration purposes, the `temp` and `delete` flags are activated. This will create temporary files in the OS temp directory and automatically delete them. For practical use, ignore setting these flags.</div>
The latter LaminateModel data was saved to an .xlsx file in the default export folder. The filepath is returned (currently suppressed with the `;` line).
The next level to export data is for a case. This will save all the files comprised in a case. If exported to csv format, files are saved separately. In xlsx format, a single file is made where each LaminateModel's data and dashboard are saved as separate worksheets.
End of explanation #------------------------------------------------------------------------------ import pandas as pd import lamana as la %matplotlib inline #%matplotlib nbagg # PARAMETERS ------------------------------------------------------------------ # Build dicts of loading parameters and and material properties load_params = {'R' : 12e-3, # specimen radius 'a' : 7.5e-3, # support ring radius 'r' : 2e-4, # radial distance from center loading 'P_a' : 1, # applied load 'p' : 5, # points/layer } # # Quick Form: a dict of lists # mat_props = {'HA' : [5.2e10, 0.25], # 'PSu' : [2.7e9, 0.33],} # Standard Form: a dict of dicts mat_props = {'Modulus': {'HA': 5.2e10, 'PSu': 2.7e9}, 'Poissons': {'HA': 0.25, 'PSu': 0.33}} # What geometries to test? # Make tuples of desired geometeries to analyze: outer - {inner...-....}_i - middle # Current Style g1 = ('0-0-2000') # Monolith g2 = ('1000-0-0') # Bilayer g3 = ('600-0-800') # Trilayer g4 = ('500-500-0') # 4-ply g5 = ('400-200-800') # Short-hand; <= 5-ply g6 = ('400-200-400S') # Symmetric g7 = ('400-[200]-800') # General convention; 5-ply g8 = ('400-[100,100]-800') # General convention; 7-plys g9 = ('400-[100,100]-400S') # General and Symmetric convention; 7-plys '''Add to test set''' g13 = ('400-[150,50]-800') # Dissimilar inner_is g14 = ('400-[25,125,50]-800') geos_most = [g1, g2, g3, g4, g5] geos_special = [g6, g7, g8, g9] geos_full = [g1, g2, g3, g4, g5, g6, g7, g8, g9] geos_dissimilar = [g13, g14] Explanation: Tutorial: Intermediate So far, the barebones objects have been discussed and a lot can be accomplished with the basics. For users who have some experience with Python and Pandas, here are some intermediate techniques to reduce repetitious actions. This section dicusses the use of abstract base classes intended for reducing redundant tasks such as multiple case creation and default parameter definitions. Custom model subclassing is also discussed. End of explanation # Geometry object la.input_.Geometry('100-200-1600') Explanation: Exploring LamAna Objects This is brief introduction to underlying objects in this package. We begin with an input string that is parsed and converted into a Geometry object. This is part of the input_ module. End of explanation # FeatureInput FI = { 'Geometry': la.input_.Geometry('400.0-[200.0]-800.0'), 'Materials': ['HA', 'PSu'], 'Model': 'Wilson_LT', 'Parameters': load_params, 'Properties': mat_props, 'Globals': None, } Explanation: This object has a number of handy methods. This information is shipped with parameters and properties in FeatureInput. A FeatureInput is simply a dict. This currently does have not an official class but is it import for other objects. End of explanation # Stack object la.constructs.Stack(FI) # Laminate object la.constructs.Laminate(FI) # LaminateModel object la.constructs.LaminateModel(FI) Explanation: The following objects are serially inherited and part of the constructs module. These construct the DataFrame represention of a laminate. The code to decouple LaminateModel from Laminate was merged in verions 0.4.13. End of explanation cases1 = la.distributions.Cases(['400-200-800', '350-400-500', '400-200-0', '1000-0-0'], load_params=load_params, mat_props=mat_props, model= 'Wilson_LT', ps=[3,4,5]) cases1 Explanation: The latter cells verify these objects are successfully decoupled. That's all for now. Generating Multiple Cases We've already seen we can generate a case object and plots with three lines of code. However, sometimes it is necessary to generate different cases. 
These invocations can be tedious with three lines of code per case. Have no fear. A simple way to produce more cases is to instantiate a Cases object. Below we will create a Cases which houses multiples cases that: - share similiar loading parameters/material properties and laminate theory model with - different numbers of datapoints, p End of explanation # Gettable cases1[0] # normal dict key selection cases1[-1] # negative indices cases1[-2] # negative indicies # Sliceable cases1[0:2] # range of dict keys cases1[0:3] # full range of dict keys cases1[:] # full range cases1[1:] # start:None cases1[:2] # None:stop cases1[:-1] # None:negative index cases1[:-2] # None:negative index #cases1[0:-1:-2] # start:stop:step; NotImplemented #cases1[::-1] # reverse; NotImplemented # Viewable cases1 cases1.LMs # Iterable for i, case in enumerate(cases1): # __iter__ values print(case) #print(case.LMs) # access LaminateModels # Writable #cases1.to_csv() # write to file # Selectable cases1.select(nplies=[2,4]) # by # plies cases1.select(ps=[3,4]) # by points/DataFrame rows cases1.select(nplies=[2,4], ps=[3,4], how='intersection') # by set operations Explanation: Cases() accepts a list of geometry strings. Given appropriate default keywords, this lone argument will return a dict-like object of cases with indicies as keys. The model and ps keywords have default values. A Cases() object has some interesting characteristics (this is not a dict): if user-defined, tries to import Defaults() to simplify instantiations dict-like storage and access of cases list-like ordering of cases gettable: list-like, get items by index (including negative indicies) sliceable: slices the dict keys of the Cases object viewable: contained LaminateModels iterable: by values (unlike normal dicts, not by keys) writable: write DataFrames to csv files selectable: perform set operations and return unique subsets End of explanation set(geos_most).issubset(geos_full) # confirm repeated items mix = geos_full + geos_most # contains repeated items # Repeated Subset cases2 = la.distributions.Cases(mix, load_params=load_params, mat_props=mat_props) cases2.LMs # Unique Subset cases2 = la.distributions.Cases(mix, load_params=load_params, mat_props=mat_props, unique=True) cases2.LMs Explanation: LamainateModels can be compared using set theory. Unique subsets of LaminateModels can be returned from a mix of repeated geometry strings. We will use the default model and ps values. End of explanation from lamana.input_ import BaseDefaults bdft = BaseDefaults() # geometry String Attributes bdft.geo_inputs # all dict key-values bdft.geos_all # all geo strings bdft.geos_standard # static bdft.geos_sample # active; grows # Geometry Object Attributes; mimics latter bdft.Geo_objects # all dict key-values bdft.Geos_all # all Geo objects # more ... # Custom FeatureInputs #bdft.get_FeatureInput() # quick builds #bdft.get_materials() # convert to std. form Explanation: Subclassing Custom Default Parameters We observed the benefits of using implicit, default keywords (models, ps) in simplifying the writing of Cases() instantiations. In general, the user can code explicit defaults for load_params and mat_props by subclassing BaseDefaults() from inputs_. While subclassing requires some extra Python knowledge, this is a relatively simple process that reduces a significant amount of redundant code, leading to a more effiencient anaytical setting. The BaseDefaults contains a dict various geometry strings and Geometry objects. 
Rather than defining examples for various geometry plies, the user can call from all or a groupings of geometries. End of explanation # Example Defaults from LamAna.models.Wilson_LT class Defaults(BaseDefaults): '''Return parameters for building distributions cases. Useful for consistent testing. Dimensional defaults are inheirited from utils.BaseDefaults(). Material-specific parameters are defined here by he user. - Default geometric and materials parameters - Default FeatureInputs Examples ======== >>>dft = Defaults() >>>dft.load_params {'R' : 12e-3, 'a' : 7.5e-3, 'p' : 1, 'P_a' : 1, 'r' : 2e-4,} >>>dft.mat_props {'Modulus': {'HA': 5.2e10, 'PSu': 2.7e9}, 'Poissons': {'HA': 0.25, 'PSu': 0.33}} >>>dft.FeatureInput {'Geometry' : '400-[200]-800', 'Geometric' : {'R' : 12e-3, 'a' : 7.5e-3, 'p' : 1, 'P_a' : 1, 'r' : 2e-4,}, 'Materials' : {'HA' : [5.2e10, 0.25], 'PSu' : [2.7e9, 0.33],}, 'Custom' : None, 'Model' : Wilson_LT, } ''' def __init__(self): BaseDefaults.__init__(self) '''DEV: Add defaults first. Then adjust attributes.''' # DEFAULTS ------------------------------------------------------------ # Build dicts of geometric and material parameters self.load_params = {'R' : 12e-3, # specimen radius 'a' : 7.5e-3, # support ring radius 'p' : 5, # points/layer 'P_a' : 1, # applied load 'r' : 2e-4, # radial distance from center loading } self.mat_props = {'Modulus': {'HA': 5.2e10, 'PSu': 2.7e9}, 'Poissons': {'HA': 0.25, 'PSu': 0.33}} # ATTRIBUTES ---------------------------------------------------------- # FeatureInput self.FeatureInput = self.get_FeatureInput(self.Geo_objects['standard'][0], load_params=self.load_params, mat_props=self.mat_props, ##custom_matls=None, model='Wilson_LT', global_vars=None) '''Use Classic_LT here''' from lamana.distributions import Cases # Auto load_params and mat_params dft = Defaults() cases3 = Cases(dft.geos_full, model='Wilson_LT') #cases3 = la.distributions.Cases(dft.geos_full, model='Wilson_LT') cases3 '''Refine idiom for importing Cases ''' Explanation: The latter geometric defaults come out of the box when subclassed from BaseDefaults. If custom geometries are desired, the user can override the geo_inputs dict, which automatically builds the Geo_objects dict. Users can override three categories of defaults parameters: geometric variables loading parameters material properties As mentioned, some geometric variables are provided for general laminate dimensions. The other parameters cannot be predicted and need to be defined by the user. Below is an example of a Defaults() subclass. If a custom model has been implemented (see next section), it is convention to place Defaults() and all other custom code within this module. If a custom model is implemented an located in the models directory, Cases will automatically search will the designated model modules, locate the load_params and mat_props attributes and load them automatically for all Cases instantiations. End of explanation cases1.plot(extrema=False) Explanation: Subclassing Custom Models One of the most powerful feauteres of LamAna is the ability to define customized modifications to the Laminate Theory models. Code for laminate theories (i.e. Classic_LT, Wilson_LT) are are located in the models directory. These models can be simple functions or sublclass from BaseModels in the theories module. Either approach is acceptable (see narrative docs for more details on creating custom models. This ability to add custom code make this library extensibile to use a larger variety of models. 
Plotting Cases An example of multiple subplots is show below. Using a former case, notice each subplot is indepent, woth separate geometries for each. LamAna treats each subplot as a subset or "caselet": End of explanation const_total = ['350-400-500', '400-200-800', '200-200-1200', '200-100-1400', '100-100-1600', '100-200-1400',] const_outer = ['400-550-100', '400-500-200', '400-450-300', '400-400-400', '400-350-500', '400-300-600', '400-250-700', '400-200-800', '400-0.5-1199'] const_inner = ['400-400-400', '350-400-500', '300-400-600', '200-400-700', '200-400-800', '150-400-990', '100-400-1000', '50-400-1100',] const_middle = ['100-700-400', '150-650-400', '200-600-400', '250-550-400', '300-400-500', '350-450-400', '400-400-400', '450-350-400', '750-0.5-400'] case1_ = const_total case2_ = const_outer case3_ = const_inner case4_ = const_middle cases_ = [case1_, case2_, case3_, case4_] cases3 = la.distributions.Cases(cases_, load_params=load_params, mat_props=mat_props, model= 'Wilson_LT', ps=[2,3]) cases3.plot(extrema=False) Explanation: Each caselet can also be a separate case, plotting multiple geometries for each as accomplished with Case. End of explanation '''Fix importing cases''' from lamana.distributions import Cases Explanation: See Demo notebooks for more examples of plotting. More on Cases End of explanation from lamana.models import Wilson_LT as wlt dft = wlt.Defaults() %matplotlib inline str_caselets = ['350-400-500', '400-200-800', '400-[200]-800'] list_caselets = [['400-400-400', '400-[400]-400'], ['200-100-1400', '100-200-1400',], ['400-400-400', '400-200-800','350-400-500',], ['350-400-500']] case1 = la.distributions.Case(dft.load_params, dft.mat_props) case2 = la.distributions.Case(dft.load_params, dft.mat_props) case3 = la.distributions.Case(dft.load_params, dft.mat_props) case1.apply(['400-200-800', '400-[200]-800']) case2.apply(['350-400-500', '400-200-800']) case3.apply(['350-400-500', '400-200-800', '400-400-400']) case_caselets = [case1, case2, case3] mixed_caselets = [['350-400-500', '400-200-800',], [['400-400-400', '400-[400]-400'], ['200-100-1400', '100-200-1400',]], [case1, case2,] ] dict_caselets = {0: ['350-400-500', '400-200-800', '200-200-1200', '200-100-1400', '100-100-1600', '100-200-1400'], 1: ['400-550-100', '400-500-200', '400-450-300', '400-400-400', '400-350-500', '400-300-600'], 2: ['400-400-400', '350-400-500', '300-400-600', '200-400-700', '200-400-800', '150-400-990'], 3: ['100-700-400', '150-650-400', '200-600-400', '250-550-400', '300-400-500', '350-450-400'], } cases = Cases(str_caselets) #cases = Cases(str_caselets, combine=True) #cases = Cases(list_caselets) #cases = Cases(list_caselets, combine=True) #cases = Cases(case_caselets) #cases = Cases(case_caselets, combine=True) # collapse to one plot #cases = Cases(str_caselets, ps=[2,5]) #cases = Cases(list_caselets, ps=[2,3,5,7]) #cases = Cases(case_caselets, ps=[2,5]) #cases = Cases([], combine=True) # test raises # For next versions #cases = Cases(dict_caselets) #cases = Cases(mixed_caselets) #cases = Cases(mixed_caselets, combine=True) cases cases.LMs '''BUG: Following cell raises an Exception in Python 2. Comment to pass nb reg test in pytest.''' cases.caselets '''get out tests from code''' '''run tests''' '''test set seletions''' Explanation: Applying caselets The term "caselet" is defined in LPEP 003. Most importantly, the various types a caselet represents is handled by Cases and discussed here. In 0.4.4b3+, caselets are contained in lists. 
LPEP entertains the idea of containing caselets in dicts. End of explanation from lamana.models import Wilson_LT as wlt dft = wlt.Defaults() cases = Cases(dft.geo_inputs['5-ply'], ps=[2,3,4]) len(cases) # test __len__ cases.get(1) # __getitem__ #cases[2] = 'test' # __setitem__; not implemented cases[0] # select cases[0:2] # slice (__getitem__) del cases[1] # __delitem__ cases # test __repr__ print(cases) # test __str__ cases == cases # test __eq__ not cases != cases # test __ne__ for i, case in enumerate(cases): # __iter__ values print(case) #print(case.LMs) cases.LMs # peek inside cases cases.frames # get a list of DataFrames directly cases #cases.to_csv() # write to file Explanation: Characteristics End of explanation str_caselets = ['350-400-500', '400-200-800', '400-[200]-800'] str_caselets2 = [['350-400-500', '350-[400]-500'], ['400-200-800', '400-[200]-800']] list_caselets = [['400-400-400', '400-[400]-400'], ['200-100-1400', '100-200-1400',], ['400-400-400', '400-200-800','350-400-500',], ['350-400-500']] case1 = la.distributions.Case(dft.load_params, dft.mat_props) case2 = la.distributions.Case(dft.load_params, dft.mat_props) case3 = la.distributions.Case(dft.load_params, dft.mat_props) case1.apply(['400-200-800', '400-[200]-800']) case2.apply(['350-400-500', '400-200-800']) case3.apply(['350-400-500', '400-200-800', '400-400-400']) case_caselets = [case1, case2, case3] Explanation: Unique Cases from Intersecting Caselets Cases can check if caselet is unique by comparing the underlying geometry strings. Here we have a non-unique caselets of different types. We get unique results within each caselet using the unique keyword. Notice, different caselets could have similar LaminateModels. End of explanation #----------------------------------------------------------+ # Iterating Over Cases from lamana.models import Wilson_LT as wlt dft = wlt.Defaults() # Multiple cases, Multiple LMs cases = Cases(dft.geos_full, ps=[2,5]) # two cases (p=2,5) for i, case in enumerate(cases): # iter case values() print('Case #: {}'.format(i)) for LM in case.LMs: print(LM) print("\nYou iterated several cases (ps=[2,5]) comprising many LaminateModels.") # A single case, single LM cases = Cases(['400-[200]-800']) # a single case and LM (manual) for i, case_ in enumerate(cases): # iter i and case for LM in case_.LMs: print(LM) print("\nYou processed a case and LaminateModel w/iteration. (Recommended)\n") # Single case, multiple LMs cases = Cases(dft.geos_full) # auto, default p=5 for case in cases: # iter case values() for LM in case.LMs: print(LM) print("\nYou iterated a single case of many LaminateModels.") Explanation: The following cells attempt to print the LM objects. Cases objects unordered and thus print in random orders. It is important to note that once set operations are performed, order is no longer a preserved. This is related to how Python handles hashes. This applies to Cases() in two areas: The unique keyword optionally invoked during instantiation. Any use of set operation via the how keyword within the Cases.select() method. Revamped Idioms Gotcha: Although a Cases instance is a dict, as if 0.4.4b3, it's __iter__ method has been overriden to iterate the values by default (not the keys as in Python). This choice was decided since keys are uninformative integers, while the values (curently cases )are of interest, which saves from typing .items() when interating a Cases instance. python &gt;&gt;&gt; cases = Cases() &gt;&gt;&gt; for i, case in cases.items() # python &gt;&gt;&gt; ... 
print(case) &gt;&gt;&gt; for case in cases: # modified &gt;&gt;&gt; ... print(case) This behavior may change in future versions. End of explanation # Iterating Over Cases from lamana.models import Wilson_LT as wlt dft = wlt.Defaults() #geometries = set(dft.geos_symmetric).union(dft.geos_special + dft.geos_standard + dft.geos_dissimilar) #cases = Cases(geometries, ps=[2,3,4]) cases = Cases(dft.geos_special, ps=[2,3,4]) # Reveal the full listdft.geos_specia # for case in cases: # iter case values() # for LM in case.LMs: # print(LM) # Test union of lists #geometries cases '''Right now a case shares p, size. cases share geometries and size.''' cases[0:2] '''Hard to see where these comem from. Use dict?''' cases.LMs cases.LMs[0:6:2] cases.LMs[0:4] Explanation: Selecting From cases, subsets of LaminateModels can be chosen. select is a method that performs on and returns sets of LaminateModels. Plotting functions are not implement for this method directly, however the reulsts can be used to make new cases instances from which .plot() is accessible. Example access techniques using Cases. Access all cases : cases Access specific cases : cases[0:2] Access all LaminateModels : cases.LMs Access LaminateModels (within a case) : cases.LMs[0:2] Select a subset of LaminateModels from all cases : cases.select(ps=[3,4]) End of explanation cases.select(nplies=[2,4]) cases.select(ps=[2,4]) cases.select(nplies=4) cases.select(ps=3) Explanation: Selections from latter cases. End of explanation cases.select(nplies=4, ps=3) # union; default cases.select(nplies=4, ps=3, how='intersection') # intersection Explanation: Advanced techniques: multiple selections. Set operations have been implemented in the selection method of Cases which enables filtering of unique LaminateModels that meet given conditions for nplies and ps. union: all LMs that meet either conditions (or) intersection: LMs that meet both conditions (and) difference: LMs symmetric difference: End of explanation cases.select(nplies=4, ps=3, how='difference') # difference cases.select(nplies=4) - cases.select(ps=3) # set difference '''How does this work?''' cases.select(nplies=4, ps=3, how='symm diff') # symm difference cases.select(nplies=[2,4], ps=[3,4], how='union') cases.select(nplies=[2,4], ps=[3,4], how='intersection') cases.select(nplies=[2,4], ps=3, how='difference') cases.select(nplies=4, ps=[3,4], how='symmeric difference') Explanation: By default, difference is subtracted as set(ps) - set(nplies). Currently there is no implementation for the converse difference, but set operations still work. End of explanation import numpy as np a = [] b = 1 c = np.int64(1) d = [1,2] e = [1,2,3] f = [3,4] test = 1 test in a #test in b #test is a test is c # if test is a or test is c: # True from lamana.utils import tools as ut ut.compare_set(d, e) ut.compare_set(b, d, how='intersection') ut.compare_set(d, b, how='difference') ut.compare_set(e, f, how='symmertric difference') ut.compare_set(d, e, test='issubset') ut.compare_set(e, d, test='issuperset') ut.compare_set(d, f, test='isdisjoint') set(d) ^ set(e) ut.compare_set(d,e, how='symm') g1 = dft.Geo_objects['5-ply'][0] g2 = dft.Geo_objects['5-ply'][1] cases = Cases(dft.geos_full, ps=[2,5]) # two cases (p=2,5) for i, case in enumerate(cases): # iter case values() for LM in case.LMs: print(LM) Explanation: Current logic seems to return a union. 
Enhancing selection algorithms with set operations Need logic to append LM for the following: all, either, neither (and, or, not or) a, b are int a, b are list a, b are mixed b, a are mixed End of explanation #PYTEST_VALIDATE_IGNORE_OUTPUT hash('400-200-800') #PYTEST_VALIDATE_IGNORE_OUTPUT hash('400-[200]-800') Explanation: In order to compare objects in sets, they must be hashable. The simple requirement for equality is to include whatever makes the hash of a equal to the hash of b. Ideally, we should hash the Geometry object, but the inner values include a list, which is unhashable due to its mutability. Conveniently however, strings are hashable. We can try to hash the geometry input strings once they have been converted to General Convention, as unique identifiers for the geometry object. This requires some reorganization in Geometry. ~~isolate a converter function _to_gen_convention()~~ privatize all functions invisible to the API ~~hash the converted geo_strings~~ ~~privatize _geo_strings. This cannot be altered by the user.~~ Here we see the advantage of using geo_strings as hashables. They are inherently hashable. UPDATE: decided to make a hashable version of the GeometryTuple End of explanation #PYTEST_VALIDATE_IGNORE_OUTPUT hash((case.LMs[0].Geometry, case.LMs[0].p)) case.LMs[0] L = [LM for case in cases for LM in case.LMs] L[0] L[8] #PYTEST_VALIDATE_IGNORE_OUTPUT hash((L[0].Geometry, L[0].p)) #PYTEST_VALIDATE_IGNORE_OUTPUT hash((L[1].Geometry, L[1].p)) set([L[0]]) != set([L[8]]) Explanation: Need to make the Laminate class hashable. Try to use unique identifiers such as Geometry and p. End of explanation from lamana.models import Wilson_LT as wlt dft = wlt.Defaults() mix = dft.Geos_full + dft.Geos_all mix set(mix) Explanation: Use sets to filter unique geometry objects from Defaults(). End of explanation mix = dft.geos_most + dft.geos_standard # 400-[200]-800 common to both cases3a = Cases(mix, combine=True, unique=True) cases3a.LMs load_params['p'] = 5 cases3b5 = la.distributions.Case(load_params, dft.mat_props) cases3b5.apply(mix) cases3b5.LMs[:-1] Explanation: Mixing Geometries See above. Looks like comparing the order of these lists gives different results. This test has been quarantined from the repo until a solution is found. End of explanation '''Add how to build Defaults()''' # Case Building from Defaults import lamana as la from lamana.utils import tools as ut from lamana.models import Wilson_LT as wlt dft = wlt.Defaults() ##dft = ut.Defaults() # user-definable case2 = la.distributions.Case(dft.load_params, dft.mat_props) case2.apply(dft.geos_full) # multi plies #LM = case2.LMs[0] #LM.LMFrame print("\nYou have built a case using user-defined defaults to set geometric \ loading and material parameters.") case2 Explanation: Idiomatic Case Making As we transition to more automated techniques, if parameters are to be reused multiple times, it can be helpful to store them as default values.
End of explanation # Automatic Case Building import lamana as la from lamana.utils import tools as ut #Single Case dft = wlt.Defaults() ##dft = ut.Defaults() case3 = ut.laminator(dft.geos_full) # auto, default p=5 case3 = ut.laminator(dft.geos_full, ps=[5]) # declared #case3 = ut.laminator(dft.geos_full, ps=[1]) # LFrame rollbacks print("\nYou have built a case using higher-level API functions.") case3 # How to get values from a single case (Python 3 compatible) list(case3.values()) Explanation: Finally, if building several cases is required for the same parameters, we can use higher-level API tools to help automate the process. Note, for every case that is created, a separate Case() instantiation and Case.apply() call is required. These techniques obviate such redundancies. End of explanation # Multiple Cases cases1 = ut.laminator(dft.geos_full, ps=[2,3,4,5]) # multi ply, multi p print("\nYou have built many cases using higher-level API functions.") cases1 # How to get values from multiple cases (Python 3 compatible) list(cases1.values()) Explanation: Cases are differentiated by different ps. End of explanation # Iterating Over Cases # Latest style case4 = ut.laminator(['400-[200]-800']) # a single case and LM for i, case_ in case4.items(): # iter p and case for LM in case_.LMs: print(LM) print("\nYou processed a case and LaminateModel w/iteration. (Recommended)\n") case5 = ut.laminator(dft.geos_full) # auto, default p=5 for i, case in case5.items(): # iter p and case with .items() for LM in case.LMs: print(LM) for case in case5.values(): # iter case only with .values() for LM in case.LMs: print(LM) print("\nYou processed many cases using Case object methods.") # Convert case dict to generator case_gen1 = (LM for p, case in case4.items() for LM in case.LMs) # Generator without keys case_gen2 = (LM for case in case4.values() for LM in case.LMs) print("\nYou have captured a case in a generator for later, one-time use.") Explanation: Python 3 no longer returns a list for the .values() method, so list() is used to evaluate the dictionary view. While consuming a case's dict value view with list() works in Python 2 and 3, iteration with loops and comprehensions is a preferred technique for both single and multiple case processing. After cases are accessed, iteration can access the contents of all cases. Iteration is the preferred technique for processing cases. It is most general, cleaner, Py2/3 compatible out of the box and agrees with The Zen of Python: There should be one-- and preferably only one --obvious way to do it. End of explanation # Style Comparisons dft = wlt.Defaults() ##dft = ut.Defaults() case1 = la.distributions.Case(load_params, mat_props) case1.apply(dft.geos_all) cases = ut.laminator(geos=dft.geos_all) case2 = cases # Equivalent calls print(case1) print(case2) print("\nYou have used classic and modern styles to build equivalent cases.") Explanation: We will demonstrate comparing two techniques for generating equivalent cases. End of explanation
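The set-operation keywords exercised above ('union', 'intersection', 'difference', 'symm diff' in Cases.select and compare_set) come down to Python's built-in set algebra. The following is a minimal sketch for illustration, not LamAna's implementation: the helper name and the scalar-to-set wrapping are assumptions of this sketch, chosen to mirror how the notebook mixes ints and lists (nplies=4 versus ps=[2, 4]).

def select_by_set(a, b, how='union'):
    """Sketch of the set logic behind the selection keywords; not LamAna code."""
    set_a = set(a) if isinstance(a, (list, tuple, set)) else {a}  # wrap scalars
    set_b = set(b) if isinstance(b, (list, tuple, set)) else {b}
    operations = {
        'union': set_a | set_b,         # meets either condition
        'intersection': set_a & set_b,  # meets both conditions
        'difference': set_a - set_b,    # in a but not in b
        'symm diff': set_a ^ set_b,     # in exactly one of the two
    }
    return operations[how]

# Quick checks mirroring the compare_set calls above
print(select_by_set([1, 2], [1, 2, 3], how='union'))        # {1, 2, 3}
print(select_by_set(1, [1, 2], how='intersection'))         # {1}
print(select_by_set([1, 2], 1, how='difference'))           # {2}
print(select_by_set([1, 2, 3], [3, 4], how='symm diff'))    # {1, 2, 4}

Inside Cases.select the set elements are LaminateModel objects rather than ints, which is exactly why the notebook spends time above on making Geometry and the laminates hashable.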
12,914
Given the following text description, write Python code to implement the functionality described below step by step Description: Berater Environment v7 (Colab notebook badge) Step1: Environment Step3: Try out Environment Step4: Train model: random has lower total reward than the version with dense customers; total cost when travelling all paths (back and forth) Step5: Visualizing Results Step6: Enjoy model
Python Code: !pip install git+https://github.com/openai/baselines >/dev/null !pip install gym >/dev/null Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/rl/berater-v7.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Berater Environment v7 Changes from v6 per episode set certain rewards to 0 to simulate different customers per consultant next steps consider returning to complete observation (should give better results) like in previous notebooks configure custom network including regularization (https://blog.openai.com/quantifying-generalization-in-reinforcement-learning/) Installation (required for colab) End of explanation import numpy import random import gym from gym.utils import seeding from gym import spaces def state_name_to_int(state): state_name_map = { 'S': 0, 'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5, 'F': 6, 'G': 7, 'H': 8, 'K': 9, 'L': 10, 'M': 11, 'N': 12, 'O': 13 } return state_name_map[state] def int_to_state_name(state_as_int): state_map = { 0: 'S', 1: 'A', 2: 'B', 3: 'C', 4: 'D', 5: 'E', 6: 'F', 7: 'G', 8: 'H', 9: 'K', 10: 'L', 11: 'M', 12: 'N', 13: 'O' } return state_map[state_as_int] class BeraterEnv(gym.Env): The Berater Problem Actions: There are 4 discrete deterministic actions, each choosing one direction metadata = {'render.modes': ['ansi']} showStep = False showDone = True envEpisodeModulo = 100 def __init__(self): # self.map = { # 'S': [('A', 100), ('B', 400), ('C', 200 )], # 'A': [('B', 250), ('C', 400), ('S', 100 )], # 'B': [('A', 250), ('C', 250), ('S', 400 )], # 'C': [('A', 400), ('B', 250), ('S', 200 )] # } self.map = { 'S': [('A', 300), ('B', 100), ('C', 200 )], 'A': [('S', 300), ('B', 100), ('E', 100 ), ('D', 100 )], 'B': [('S', 100), ('A', 100), ('C', 50 ), ('K', 200 )], 'C': [('S', 200), ('B', 50), ('M', 100 ), ('L', 200 )], 'D': [('A', 100), ('F', 50)], 'E': [('A', 100), ('F', 100), ('H', 100)], 'F': [('D', 50), ('E', 100), ('G', 200)], 'G': [('F', 200), ('O', 300)], 'H': [('E', 100), ('K', 300)], 'K': [('B', 200), ('H', 300)], 'L': [('C', 200), ('M', 50)], 'M': [('C', 100), ('L', 50), ('N', 100)], 'N': [('M', 100), ('O', 100)], 'O': [('N', 100), ('G', 300)] } self.action_space = spaces.Discrete(4) # position, and up to 4 paths from that position, non existing path is -1000 and no position change self.observation_space = spaces.Box(low=numpy.array([0,-1000,-1000,-1000,-1000]), high=numpy.array([13,1000,1000,1000,1000]), dtype=numpy.float32) self.reward_range = (-1, 1) self.totalReward = 0 self.stepCount = 0 self.isDone = False self.envReward = 0 self.envEpisodeCount = 0 self.envStepCount = 0 self.reset() self.optimum = self.calculate_customers_reward() def seed(self, seed=None): self.np_random, seed = seeding.np_random(seed) return [seed] def iterate_path(self, state, action): paths = self.map[state] if action < len(paths): return paths[action] else: # sorry, no such action, stay where you are and pay a high penalty return (state, 1000) def step(self, action): destination, cost = self.iterate_path(self.state, action) lastState = self.state customerReward = self.customer_reward[destination] reward = (customerReward - cost) / self.optimum self.state = destination self.customer_visited(destination) done = destination == 'S' and self.all_customers_visited() stateAsInt = state_name_to_int(self.state) self.totalReward += reward self.stepCount += 1 self.envReward += reward self.envStepCount += 1 if self.showStep: print( "Episode: " + ("%4.0f " % 
self.envEpisodeCount) + " Step: " + ("%4.0f " % self.stepCount) + lastState + ' --' + str(action) + '-> ' + self.state + ' R=' + ("% 2.2f" % reward) + ' totalR=' + ("% 3.2f" % self.totalReward) + ' cost=' + ("%4.0f" % cost) + ' customerR=' + ("%4.0f" % customerReward) + ' optimum=' + ("%4.0f" % self.optimum) ) if done and not self.isDone: self.envEpisodeCount += 1 if BeraterEnv.showDone: episodes = BeraterEnv.envEpisodeModulo if (self.envEpisodeCount % BeraterEnv.envEpisodeModulo != 0): episodes = self.envEpisodeCount % BeraterEnv.envEpisodeModulo print( "Done: " + ("episodes=%6.0f " % self.envEpisodeCount) + ("avgSteps=%6.2f " % (self.envStepCount/episodes)) + ("avgTotalReward=% 3.2f" % (self.envReward/episodes) ) ) if (self.envEpisodeCount%BeraterEnv.envEpisodeModulo) == 0: self.envReward = 0 self.envStepCount = 0 self.isDone = done observation = self.getObservation(stateAsInt) info = {"from": self.state, "to": destination} return observation, reward, done, info def getObservation(self, position): result = numpy.array([ position, self.getPathObservation(position, 0), self.getPathObservation(position, 1), self.getPathObservation(position, 2), self.getPathObservation(position, 3) ], dtype=numpy.float32) return result def getPathObservation(self, position, path): source = int_to_state_name(position) paths = self.map[self.state] if path < len(paths): target, cost = paths[path] reward = self.customer_reward[target] result = reward - cost else: result = -1000 return result def customer_visited(self, customer): self.customer_reward[customer] = 0 def all_customers_visited(self): return self.calculate_customers_reward() == 0 def calculate_customers_reward(self): sum = 0 for value in self.customer_reward.values(): sum += value return sum def modulate_reward(self): number_of_customers = len(self.map) - 1 number_per_consultant = int(number_of_customers/2) # number_per_consultant = int(number_of_customers/1.5) self.customer_reward = { 'S': 0 } for customer_nr in range(1, number_of_customers + 1): self.customer_reward[int_to_state_name(customer_nr)] = 0 # every consultant only visits a few random customers samples = random.sample(range(1, number_of_customers + 1), k=number_per_consultant) key_list = list(self.customer_reward.keys()) for sample in samples: self.customer_reward[key_list[sample]] = 1000 def reset(self): self.totalReward = 0 self.stepCount = 0 self.isDone = False self.modulate_reward() self.state = 'S' return self.getObservation(state_name_to_int(self.state)) def render(self): print(self.customer_reward) env = BeraterEnv() print(env.reset()) print(env.customer_reward) Explanation: Environment End of explanation BeraterEnv.showStep = True BeraterEnv.showDone = True env = BeraterEnv() print(env) observation = env.reset() print(observation) for t in range(1000): action = env.action_space.sample() observation, reward, done, info = env.step(action) if done: print("Episode finished after {} timesteps".format(t+1)) break env.close() print(observation) Explanation: Try out Environment End of explanation import tensorflow as tf tf.logging.set_verbosity(tf.logging.ERROR) print(tf.__version__) !rm -r logs !mkdir logs !mkdir logs/berater # https://github.com/openai/baselines/blob/master/baselines/deepq/experiments/train_pong.py # log_dir = logger.get_dir() log_dir = '/content/logs/berater/' import gym from baselines import bench from baselines import logger from baselines.common.vec_env.dummy_vec_env import DummyVecEnv from baselines.common.vec_env.vec_monitor import VecMonitor from baselines.ppo2 
import ppo2 BeraterEnv.showStep = False BeraterEnv.showDone = False env = BeraterEnv() wrapped_env = DummyVecEnv([lambda: BeraterEnv()]) monitored_env = VecMonitor(wrapped_env, log_dir) # https://github.com/openai/baselines/blob/master/baselines/ppo2/ppo2.py # https://github.com/openai/baselines/blob/master/baselines/common/models.py#L30 %time model = ppo2.learn(\ env=monitored_env,\ network='mlp',\ num_hidden=2000,\ num_layers=3,\ ent_coef=0.03,\ total_timesteps=1000000) # %time model = ppo2.learn(\ # env=monitored_env,\ # network='mlp',\ # num_hidden=2000,\ # num_layers=3,\ # ent_coef=0.1,\ # total_timesteps=500000) # model = ppo2.learn( # env=monitored_env,\ # layer_norm=True,\ # network='mlp',\ # num_hidden=2000,\ # activation=tf.nn.relu,\ # num_layers=3,\ # ent_coef=0.03,\ # total_timesteps=1000000) # monitored_env = bench.Monitor(env, log_dir) # https://en.wikipedia.org/wiki/Q-learning#Influence_of_variables # %time model = deepq.learn(\ # monitored_env,\ # seed=42,\ # network='mlp',\ # lr=1e-3,\ # gamma=0.99,\ # total_timesteps=30000,\ # buffer_size=50000,\ # exploration_fraction=0.5,\ # exploration_final_eps=0.02,\ # print_freq=1000) model.save('berater-ppo-v7.pkl') monitored_env.close() Explanation: Train model random has lower total reward than version with dense customers total cost when travelling all paths (back and forth): 2500 additional pernalty for liiegal moves 1000 all rewards: 6000 perfect score??? estimate: half the travel cost and no illegal moves: (6000 - 1250) / 6000 = .79 but: rewards are much more sparse while routes stay the same, maybe expect less additionally: the agent only sees very little of the whole scenario changes with every episode was ok when network can learn fixed scenario End of explanation # !ls -l $log_dir from baselines.common import plot_util as pu results = pu.load_results(log_dir) import matplotlib.pyplot as plt import numpy as np r = results[0] plt.ylim(0, .75) # plt.plot(np.cumsum(r.monitor.l), r.monitor.r) plt.plot(np.cumsum(r.monitor.l), pu.smooth(r.monitor.r, radius=100)) Explanation: Visualizing Results https://github.com/openai/baselines/blob/master/docs/viz/viz.ipynb End of explanation import numpy as np observation = env.reset() env.render() state = np.zeros((1, 2*128)) dones = np.zeros((1)) BeraterEnv.showStep = True BeraterEnv.showDone = False for t in range(1000): actions, _, state, _ = model.step(observation, S=state, M=dones) observation, reward, done, info = env.step(actions[0]) if done: print("Episode finished after {} timesteps".format(t+1)) break env.close() Explanation: Enjoy model End of explanation
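The 'Enjoy model' cell above plays a single episode by hand, while the training notes ask what total reward should count as a good score. Averaging the normalized reward over several episodes gives a steadier answer. A minimal sketch that reuses env, model and BeraterEnv exactly as they are used in the cells above; the episode count and the step cap are arbitrary choices of this sketch, not values from the notebook.

import numpy as np

# Evaluate the trained PPO policy over several episodes; assumes `env`,
# `model` and `BeraterEnv` from the cells above are still in scope.
BeraterEnv.showStep = False
BeraterEnv.showDone = False

n_episodes = 20               # arbitrary sample size for this sketch
episode_rewards = []
for _ in range(n_episodes):
    observation = env.reset()
    state = np.zeros((1, 2 * 128))
    dones = np.zeros((1))
    total_reward = 0.0
    for _ in range(1000):     # step cap, mirroring the loops used above
        actions, _, state, _ = model.step(observation, S=state, M=dones)
        observation, reward, done, info = env.step(actions[0])
        total_reward += reward
        if done:
            break
    episode_rewards.append(total_reward)

print('mean total reward over {} episodes: {:.2f}'.format(n_episodes, np.mean(episode_rewards)))
print('min / max: {:.2f} / {:.2f}'.format(min(episode_rewards), max(episode_rewards)))

A mean in the neighbourhood of the rough 0.79 ceiling quoted in the training notes would suggest the policy is close to that heuristic bound.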
12,915
Given the following text description, write Python code to implement the functionality described below step by step Description: <font color = blue>Primer examen parcial </font> <font color= #8A0829> Simulación matemática.</font> <Strong> Lázaro Alonso </Strong> <Strong> Año </Strong> Step1: Ahora si gráficamos al péndulo en las coordenas $x,y$, se obtiene algo similar a lo siguiente Step2: Actividad 1.1 - Modificar el programa anterior para que el pédulo describa una trayectoria completamente circular. (10 puntos) - Indicar en el caso anterior con código de colores, los momentos en que el péndulo se encuetra por arriba del cero en $y$ y cuando se encuentra por debajo. (10 puntos) - Considere un conjunto de péndulos, con las siguientes longitudes Step3: Problema 2. Ley de Newton de enfriamiento La ley empírica de Newton, relativa al enfriamiento de un objeto, se expresa con la ecuación diferencial lineal de primer orden $$\frac{dT}{dt} = k (T - T_m)$$ donde $k$ es una constante de proporcionalidad, $T$ es la temperatura del objeto para $t>0$ Step4: Actividad 2.1 ¿Cuánto esperar para tomar el café? Primero calentamos agua a 80°C . Posteriormente agregamos café al vaso con el agua caliente. Después realizamos la medición de la temperatura ambiente, la cual fue de 24°C. Simula el sistema en un tiempo de 0 a 120 unidades de tiempo con una constante de proporcionalidad $k=−0.0565$. (10 puntos) Supoga que cada unidad de tiempo corresponde a un minuto. ¿En que tiempo aproximadamente la temperatura es menor a 30°C ? (10 puntos) Busca una constante de proporcionalidad $k$ en un rango ( de −0.2 a 0.2 con incremento de 0.01), para la cual el café tiene una temperatura menor de 30°C en un tiempo a 20 minutos. (10 puntos) Step5: Problema 3. Interés simple vs interés compuesto. Simple Recuerden $C_k$ es la meta $$C_k=C_0(1+ki).$$ Compuesto $$C_k=C_0(1+i)^k.$$ Actividad 3.1 - Realice un análisis comparativo del numero de periodos para llegar a la misma meta, considerenado el mismo interés $i$. Gráfique en la misma figura. (10 puntos) - Considere diferentes escenarios de interés. Usted elija. (10 puntos) - Discuta las ventajas o desventajas de los diferentes modelos. En el supuesto de que ustedes son el banco. (10 puntos)
Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline def theta_t(theta_0, theta_0_dot, g, l, t): omega_0 = np.sqrt(g/l) return theta_0 * np.cos(omega_0 * t) + theta_0_dot * np.sin(omega_0 * t)/omega_0 Explanation: <font color = blue>Primer examen parcial </font> <font color= #8A0829> Simulación matemática.</font> <Strong> Lázaro Alonso </Strong> <Strong> Año </Strong>: 2017 <Strong> Email: </Strong> <font color="blue"> [email protected], [email protected] </font> Estudiante: Diego Parra Pereda ic698296 Indicaciones: Todos aquellos cuya inicial de su primer apellido este entre la A y la L tienen que hacer la Actividad 1.1 del problema 1. Todos aquellos cuya inicial de su primer primer apellido este entre la M y la Z tienen que hacer la Actividad 1.2 del problema 1. Todos aquellos cuya inicial de su primer apellido este entre la M y la Z tienen que hacer el problema 2. Todos aquellos cuya inicial de su primer apellido este entre la A y la L tienen que hacer el problema 3. <font color = red>El objetivo final es que logren juntar 60 puntos. Es decir resolver 2 problemas por completo. </font> Problema 1. Considere un oscilador armónico simple(péndulo), cuya variación en $\theta$ está dada por: \begin{equation} \theta(t) = \theta(0) \cos(\omega_{0} t) + \frac{\dot{\theta}(0)}{\omega_{0}} \sin(\omega_{0} t) \end{equation} Considere la siguiente función que calcula $\theta$ en $t$. Para responder las siguientes actividades. End of explanation t = np.linspace(0,10) fig = plt.figure(figsize = (5,5)) ax = fig.add_subplot(1, 1, 1) x = 2 * np.sin(theta_t(.4, .6, 9.8, 2, t)) y = - 2 * np.cos(theta_t(.4, .6, 9.8, 2, t)) ax.plot(x, y, 'ko', ms = 1.5) ax.plot([0], [0], 'rD') ax.set_xlim(xmin = -2.2, xmax = 2.2) ax.set_ylim(ymin = -2.2, ymax = .2) plt.show() Explanation: Ahora si gráficamos al péndulo en las coordenas $x,y$, se obtiene algo similar a lo siguiente: End of explanation # Respuesta import numpy as np import matplotlib.pyplot as plt %matplotlib inline def theta_t(theta_0, theta_0_dot, g, l, t): omega_0 = np.sqrt(g/l) return theta_0 * np.cos(omega_0 * t) + theta_0_dot * np.sin(omega_0 * t) t = np.linspace(0,10) fig = plt.figure(figsize = (7,5)) ax = fig.add_subplot(1, 1, 1) x = 2 * np.sin(theta_t(1.57, .6, 9.8, 2, t)) y = - 2 * np.cos(theta_t(1.57, .6, 9.8, 2, t)) ax.plot(x, y, 'bd', ms = 5) ax.plot([0], [0], 'gD') ax.set_xlim(xmin = -2.2, xmax = 2.2) ax.set_ylim(ymin = -2.2, ymax = .2) plt.show() # Respuesta import numpy as np import matplotlib.pyplot as plt %matplotlib inline def theta_t(theta_0, theta_0_dot, g, l, t): omega_0 = np.sqrt(g/l) return theta_0 * np.cos(omega_0 * t) + theta_0_dot * np.sin(omega_0 * t) t = np.linspace(0,10) fig = plt.figure(figsize = (7,5)) ax = fig.add_subplot(1, 1, 1) x = 2 * np.sin(theta_t(1.57, 0, 9.8, 2, t)) y = - 2 * np.cos(theta_t(1.57, 0, 9.8, 2, t)) ax.plot(x, y, 'bo', ms = 3) ax.plot([0], [0], 'gD') ax.set_xlim(xmin = -3, xmax = 3) ax.set_ylim(ymin = -3, ymax = .3) plt.show() z=x[x<0] z len(x[x<0]) np.where(x<0) x1=x[np.where(x<0)] x2=x[np.where(x>0)] y2=y[np.where(x>0)] y1=y[np.where(x<0)] fig = plt.figure(figsize = (6,3)) ax = fig.add_subplot(1, 1, 1) ax.plot(x1, y1, 'm*') ax.plot(x2, y2, 'cd') ax.set_xlim(xmin = -3, xmax = 3) ax.set_ylim(ymin = -3, ymax = .3) plt.show() gravedad = np.array ([.5, 1, 2, 4, 8, 16]) def theta_t(theta_0, theta_0_dot, g, l, t): omega_0 = np.sqrt(g/l) return theta_0 * np.cos(omega_0 * t) + theta_0_dot * np.sin(omega_0 * t) for indx, g in enumerate (gravedad): x = 2 * np.sin(theta_t(.4, .6, g, 2, 
t)) y = - 2 * np.cos(theta_t(.4, .6, g, 2, t)) (x) (y) cmap= ['red','blue','green','black','orange',] fig = plt.figure(figsize = (15,10)) for indx, g in enumerate (gravedad): ax = fig.add_subplot(1, 1, 1) x = 2 * np.sin(theta_t(1.57, 0, g, 2, t)) y = - 2 * np.cos(theta_t(1.57, 0, g, 2, t)) plt.scatter(x,y, cmap = 'inferno', s= 35, lw=0) ax.plot(x, y, 'k*', ms = 7) ax.plot([0], [0], 'gD') ax.set_xlim(xmin = -2.2, xmax = 2.2) ax.set_ylim(ymin = -2.2, ymax = .2) plt.show() Explanation: Actividad 1.1 - Modificar el programa anterior para que el pédulo describa una trayectoria completamente circular. (10 puntos) - Indicar en el caso anterior con código de colores, los momentos en que el péndulo se encuetra por arriba del cero en $y$ y cuando se encuentra por debajo. (10 puntos) - Considere un conjunto de péndulos, con las siguientes longitudes: $$ l = [0.5, 1, 2, 4, 8, 16]. $$ Realice una representación similar a la anterior, pero ahora para todos los péndulos representados en la misma gráfica y cada uno con diferente color. Discutir resultados.(10 puntos) Actividad 1.2 - Modificar el programa anterior para que el pédulo describa una trayectoria semi-circular. (10 puntos) - Indicar en el caso anterior con código de colores, los momentos en que el péndulo se encuetra tienes coordenas en $x$ negativas y cuando se encuentra en positivas. (10 puntos) - Considere un conjunto de péndulos con la misma longitudes, pero con diferentes gravedades: $$ g = [0.5, 1, 2, 4, 8, 16]. $$ Realice una representación similar a la anterior, pero ahora para todos los péndulos representados en la misma gráfica y cada uno con diferente color. Discutir resultados. (10 puntos) End of explanation def temperatura(T, Tm, k, t): return k*(T - Tm) # usar odeint para encontrar la solución. (Puede ayudarle la clase de mapa logistico) import numpy as np from scipy.integrate import odeint import matplotlib.pyplot as plt %matplotlib inline Explanation: Problema 2. Ley de Newton de enfriamiento La ley empírica de Newton, relativa al enfriamiento de un objeto, se expresa con la ecuación diferencial lineal de primer orden $$\frac{dT}{dt} = k (T - T_m)$$ donde $k$ es una constante de proporcionalidad, $T$ es la temperatura del objeto para $t>0$ End of explanation def temperatura(T,t): k=-.0565 Tm=34 return k*(T-Tm) t = np.linspace(0,120) T0=80 t = np.linspace(0,120) respuesta = odeint(temperatura,T0,t) plt.plot(t,respuesta) plt.xlabel('$t$', fontsize = 40) plt.ylabel('$T$', fontsize = 40) plt.show() Explanation: Actividad 2.1 ¿Cuánto esperar para tomar el café? Primero calentamos agua a 80°C . Posteriormente agregamos café al vaso con el agua caliente. Después realizamos la medición de la temperatura ambiente, la cual fue de 24°C. Simula el sistema en un tiempo de 0 a 120 unidades de tiempo con una constante de proporcionalidad $k=−0.0565$. (10 puntos) Supoga que cada unidad de tiempo corresponde a un minuto. ¿En que tiempo aproximadamente la temperatura es menor a 30°C ? (10 puntos) Busca una constante de proporcionalidad $k$ en un rango ( de −0.2 a 0.2 con incremento de 0.01), para la cual el café tiene una temperatura menor de 30°C en un tiempo a 20 minutos. (10 puntos) End of explanation # Respuesta Explanation: Problema 3. Interés simple vs interés compuesto. Simple Recuerden $C_k$ es la meta $$C_k=C_0(1+ki).$$ Compuesto $$C_k=C_0(1+i)^k.$$ Actividad 3.1 - Realice un análisis comparativo del numero de periodos para llegar a la misma meta, considerenado el mismo interés $i$. Gráfique en la misma figura. 
(10 puntos) - Considere diferentes escenarios de interés. Usted elija. (10 puntos) - Discuta las ventajas o desventajas de los diferentes modelos. En el supuesto de que ustedes son el banco. (10 puntos) End of explanation
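Problem 3 above is left with only a '# Respuesta' placeholder. Below is a minimal sketch of the requested comparison, built directly from the two formulas in the statement, C_k = C_0(1 + k i) for simple interest and C_k = C_0(1 + i)^k for compound interest; the initial capital, the goal and the interest rates are illustrative assumptions, not values given in the exam.

import numpy as np
import matplotlib.pyplot as plt

def capital_simple(C0, i, k):
    # C_k = C_0 (1 + k i)
    return C0 * (1 + k * i)

def capital_compound(C0, i, k):
    # C_k = C_0 (1 + i)**k
    return C0 * (1 + i) ** k

# Illustrative assumptions, not part of the exam statement
C0, Ck = 10000.0, 20000.0      # initial capital and target
rates = [0.005, 0.01, 0.02]    # interest per period

k = np.arange(0, 201)
plt.figure(figsize=(8, 5))
for i in rates:
    plt.plot(k, capital_simple(C0, i, k), '--', label='simple, i={}'.format(i))
    plt.plot(k, capital_compound(C0, i, k), '-', label='compound, i={}'.format(i))
plt.axhline(Ck, color='k', lw=1)
plt.xlabel('$k$ (periods)')
plt.ylabel('$C_k$')
plt.legend(loc='upper left', fontsize=8)
plt.show()

# Periods needed to reach the same goal, solving each formula for k
for i in rates:
    k_simple = (Ck / C0 - 1) / i                    # from C_k = C_0(1 + k i)
    k_compound = np.log(Ck / C0) / np.log(1 + i)    # from C_k = C_0(1 + i)**k
    print('i={:.3f}: simple ~{:.1f} periods, compound ~{:.1f} periods'.format(i, k_simple, k_compound))

At the same rate, compound interest reaches the goal in fewer periods than simple interest, which is the trade-off the activity asks you to discuss from the bank's point of view.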
12,916
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook will explore the Ridge property data as modeled by FVS and the Ecotrust Growth-Yield-Batch system. Also serves as a demonstration of pandas and associated python libraries. First, we import the necessary libraries Step1: Create a connection ("engine") to the sqlite database produced by GYB and read the entire table into a pandas DataFrame. Step2: Ipython is not the cleanest interface with which to browse large datasets. Luckily you can just take the top couple rows Step3: or write it to excel with df.to_excel('file.txt'). Keep in mind the size of the dataset though, not recommended for the full dataset due to limitations in excel Step4: Basic descriptive statistics Step5: The best feature of both pandas and excel Step6: In testing in the Forest Planner, we noticed that yields were very low initially and that stands were starting the simulation at single-digit trees per acre (TPA). Let's confirm that we've resolved that issue. First subset the query for the grow-only rx in 2013. Step7: Examining the distribution of starting TPA, we see reasonable TPAs
Python Code: %matplotlib inline from matplotlib.pylab import plt import pandas as pd from sqlalchemy import create_engine from matplotlib import cm import seaborn as sns Explanation: This notebook will explore the Ridge property data as modeled by FVS and the Ecotrust Growth-Yield-Batch system. Also serves as a demonstration of pandas and associated python libraries. First, we import the necessary libraries End of explanation engine = create_engine('sqlite:///data.db') df = pd.read_sql_table('trees_fvsaggregate', engine) Explanation: Create a connection ("engine") to the sqlite database produced by GYB and read the entire table into a pandas DataFrame. End of explanation df.head(6) Explanation: Ipython is not the cleanest interface with which to browse large datasets. Luckily you can just take the top couple rows End of explanation df.shape # (rows, columns) Explanation: or write it to excel with df.to_excel('file.txt'). Keep in mind the size of the dataset though, not recommended for the full dataset due to limitations in excel End of explanation df.describe() Explanation: Basic descriptive statistics End of explanation import numpy as np pt_year = pd.pivot_table(df, index=['cond', 'rx', 'offset'], columns=['year'], values=['removed_merch_bdft'], aggfunc=[np.sum], margins=True) pt_year.to_excel("harvest_by_year.xls") pt_year.head() Explanation: The best feature of both pandas and excel: pivot tables. And note that we can write the resulting DataFrame out to an excel file for easier viewing. End of explanation startdf = df.query("year == 2013 and rx == 1") # same result with alternate syntax using .loc startdf = df.loc[(df.year == 2013) & (df.rx == 1)] Explanation: In testing in the Forest Planner, we noticed that yields were very low initially and that stands were starting the simulation at single-digit trees per acre (TPA). Let's confirm that we've resolved that issue. First subset the query for the grow-only rx in 2013. End of explanation startdf.start_tpa.hist() conds = df.cond.unique() conds.sort() conds plt.rcParams['figure.figsize'] = (10.0, 8.0) sns.tsplot(df.loc[(df.offset == 0)], "year", unit="cond", condition="rx", value="after_merch_ft3") from pandas.tools.plotting import scatter_matrix scatter_matrix(df[['after_qmd', 'after_tpa']], alpha=0.2, figsize=(6, 6), diagonal='kde') Explanation: Examining the distribution of starting TPA, we see reasonable TPAs End of explanation
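The histogram above shows the starting-TPA distribution; a few summary numbers make the 'reasonable TPAs' claim explicit. A minimal sketch reusing the df, year, rx, start_tpa and cond names from the cells above; the 10 trees-per-acre cutoff mirrors the single-digit-TPA concern, but the exact threshold is an assumption of this sketch.

# Explicit check of the starting-TPA concern, complementing the histogram above
startdf = df.query("year == 2013 and rx == 1")

print(startdf.start_tpa.describe())

low = startdf[startdf.start_tpa < 10]      # "single-digit TPA" stands
share = len(low) / float(len(startdf))
print("stands starting below 10 TPA: {} of {} ({:.1%})".format(len(low), len(startdf), share))

# Per-condition minimum starting TPA, to spot any remaining outliers
print(startdf.groupby('cond').start_tpa.min().sort_values().head(10))

If the share below the cutoff is essentially zero, the low-starting-TPA issue can be considered resolved for this dataset.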
12,917
Given the following text description, write Python code to implement the functionality described below step by step Description: Step2: (Colab notebook badge) Step3: Step by Step Code Order #1. How to find the order of differencing (d) in ARIMA model p is the order of the AR term q is the order of the MA term d is the number of differencing required to make the time series stationary as the I term Step4: A p-value less than 0.05 (typically ≤ 0.05) is statistically significant. It indicates strong evidence against the null hypothesis, as there is less than a 5% probability that the null hypothesis is correct (and that the results arose by chance). Therefore, we reject the null hypothesis, and accept the alternative hypothesis (there is correlation). The ADF test for our time series is not significant. Why? Example Step5: For the above series, the time series reaches stationarity with two orders of differencing. But to start we use 1 order of differencing as a conservative choice. Let me explain that Step6: #3. How to find the order of the MA term (q) Step7: 4. How to build the ARIMA Model Step8: Notice here the coefficient of the MA2 term is close to zero (-0.0010) and the p-value in the 'P>|z|' column is highly insignificant (0.9). It should ideally be much less than 0.05 for the respective term to be significant. 5. Plot residual errors Let's plot the residuals to ensure there are no patterns (that is, look for constant mean and variance). Step9: 6. Plot Predict Actual vs Fitted When you set dynamic=False, in-sample lagged values are used for prediction. That is, the model gets trained up to the previous value to make the next prediction. This can make a fitted forecast and actuals look artificially good. Step10: 7. Now Create Training and Test Validation We can see that ARIMA is adequately forecasting the seasonal pattern in the series. In terms of model performance, we look at the RMSE (root mean squared error) and the MFE (mean forecast error), and we also prefer the model with the lowest BIC. Step11: 8. Some scores and performance The 20 forecast observations depend on the train/test split: fc,se,conf = fitted.forecast(20, alpha=0.05) # 95% conf
Python Code: #sign:max: MAXBOX8: 03/02/2021 18:34:41 # optimal moving average OMA for market index signals ARIMA study- Max Kleiner # v2 shell argument forecast days - 4 lines compare - ^GDAXI for DAX # pip install pandas-datareader # C:\maXbox\mX46210\DataScience\princeton\AB_NYC_2019.csv AB_NYC_2019.csv #https://medium.com/abzuai/the-qlattice-a-new-machine-learning-model-you-didnt-know-you-needed-c2e037878cd #https://www.kaggle.com/dgomonov/data-exploration-on-nyc-airbnb 41 #https://www.kaggle.com/duygut/airbnb-nyc-price-prediction #https://www.machinelearningplus.com/time-series/arima-model-time-series-forecasting-python/ import numpy as np import matplotlib.pyplot as plt import sys import numpy as np, pandas as pd from statsmodels.tsa.arima_model import ARIMA from statsmodels.graphics.tsaplots import plot_acf, plot_pacf from statsmodels.tsa.stattools import adfuller, acf import matplotlib.pyplot as plt plt.rcParams.update({'figure.figsize':(9,7), 'figure.dpi':120}) # Import data wwwus = pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/wwwusage.csv', names=['value'], header=0) import pandas as pd # Accuracy metrics def forecast_accuracy(forecast, actual): mape = np.mean(np.abs(forecast - actual)/np.abs(actual)) # MAPE me = np.mean(forecast - actual) # ME mae = np.mean(np.abs(forecast - actual)) # MAE mpe = np.mean((forecast - actual)/actual) # MPE rmse = np.mean((forecast - actual)**2)**.5 # RMSE corr = np.corrcoef(forecast, actual)[0,1] # corr mins = np.amin(np.hstack([forecast[:,None], actual[:,None]]), axis=1) maxs = np.amax(np.hstack([forecast[:,None], actual[:,None]]), axis=1) minmax = 1 - np.mean(mins/maxs) # minmax acf1 = acf(fc-test)[1] # ACF1 return({'mape':mape, 'me':me, 'mae': mae, 'mpe': mpe, 'rmse':rmse, 'acf1':acf1, 'corr':corr, 'minmax':minmax}) #wwwus = pd.read_csv(r'C:\maXbox\mX46210\DataScience\princeton\1022dataset.txt', \ # names=['value'], header=0) print(wwwus.head(10).T) #Transposed for column overview #1. How to find the order of differencing (d) in ARIMA model result = adfuller(wwwus.value.dropna()) print('ADF Statistic: %f' % result[0]) print('p-value: %f' % result[1]) # # Original Series fig, axes = plt.subplots(3, 2, sharex=True) axes[0, 0].plot(wwwus.value); axes[0, 0].set_title('Orig Series') plot_acf(wwwus.value, ax=axes[0, 1], lags=60) # 1st Differencing axes[1, 0].plot(wwwus.value.diff()); axes[1, 0].set_title('1st Order Differencing') plot_acf(wwwus.value.diff().dropna(), ax=axes[1, 1], lags=60) # 2nd Differencing axes[2, 0].plot(wwwus.value.diff().diff()); axes[2, 0].set_title('2nd Order Differencing') plot_acf(wwwus.value.diff().diff().dropna(), ax=axes[2, 1], lags=60) plt.show() #2. How to find the order of the AR term (p) plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120}) fig, axes = plt.subplots(1, 2, sharex=True) axes[0].plot(wwwus.value.diff()); axes[0].set_title('1st Differencing') axes[1].set(ylim=(0,5)) plot_pacf(wwwus.value.diff().dropna(), ax=axes[1], lags=100) plt.show() #3. How to find the order of the MA term (q) fig, axes = plt.subplots(1, 2, sharex=True) axes[0].plot(wwwus.value.diff()); axes[0].set_title('1st Differencing') axes[1].set(ylim=(0,1.2)) plot_acf(wwwus.value.diff().dropna(), ax=axes[1] , lags=60) plt.show() # #4. 
How to build the ARIMA Model model = ARIMA(wwwus.value, order=(1,1,2)) model_fit = model.fit(disp=0) print('first fit ',model_fit.summary()) # Plot residual errors residuals = pd.DataFrame(model_fit.resid) plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120}) fig, ax = plt.subplots(1,2) residuals.plot(title="Residuals", ax=ax[0]) residuals.plot(kind='kde', title='Density', ax=ax[1]) plt.show() #5. Plot Predict Actual vs Fitted # When you set dynamic=False in-sample lagged values are used for prediction. plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120}) model_fit.plot_predict(dynamic=False) plt.show() #That is, the model gets trained up until the previous value to make next prediction. This can make a fitted forecast and actuals look artificially good. # Now Create Training and Test train = wwwus.value[:80] test = wwwus.value[80:] #model = ARIMA(train, order=(3, 2, 1)) model = ARIMA(train, order=(2, 2, 3)) fitted = model.fit(disp=-1) print('second fit ',fitted.summary()) # Forecast fc,se,conf = fitted.forecast(20, alpha=0.05) # 95% conf # Make as pandas series fc_series = pd.Series(fc, index=test.index) lower_series = pd.Series(conf[:,0], index=test.index) upper_series = pd.Series(conf[:,1], index=test.index) # Plot plt.figure(figsize=(12,5), dpi=100) plt.plot(train, label='training') plt.plot(test, label='actual') plt.plot(fc_series, label='forecast') plt.fill_between(lower_series.index, lower_series, upper_series, color='k', alpha=.15) plt.title('maXbox4 Forecast vs Actuals ARIMA') plt.legend(loc='upper left', fontsize=8) plt.show() print(forecast_accuracy(fc, test.values)) print('Around 5% MAPE implies a model is about 95% accurate in predicting next 20 observations.') Explanation: <a href="https://colab.research.google.com/github/maxkleiner/maXbox4/blob/master/ARIMA_Predictor21.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> How to find the order of (p,d,q) in ARIMA timeseries model A time series is a sequence where a metric is recorded over regular time intervals. Inspired by https://www.machinelearningplus.com/time-series/arima-model-time-series-forecasting-python/ End of explanation #1. How to find the order of differencing (d) in ARIMA model result = adfuller(wwwus.value.dropna()) print('ADF Statistic: %f' % result[0]) print('p-value: %f' % result[1]) Explanation: Step by Step Code Order #1. How to find the order of differencing (d) in ARIMA model p is the order of the AR term q is the order of the MA term d is the number of differencing required to make the time series stationary as I term End of explanation # Original Series fig, axes = plt.subplots(3, 2, sharex=True) axes[0, 0].plot(wwwus.value); axes[0, 0].set_title('Orig Series') plot_acf(wwwus.value, ax=axes[0, 1], lags=60) # 1st Differencing axes[1, 0].plot(wwwus.value.diff()); axes[1, 0].set_title('1st Order Differencing') plot_acf(wwwus.value.diff().dropna(), ax=axes[1, 1], lags=60) # 2nd Differencing axes[2, 0].plot(wwwus.value.diff().diff()); axes[2, 0].set_title('2nd Order Differencing') plot_acf(wwwus.value.diff().diff().dropna(), ax=axes[2, 1], lags=60) plt.show() Explanation: A p-value less than 0.05 (typically ≤ 0.05) is statistically significant. It indicates strong evidence against the null hypothesis, as there is less than a 5% probability the null hypo is correct (and the results are by random). Therefore, we reject the null hypothesis, and accept the alternative hypothesis (there is correlation). 
Our timeserie is not significant, why? Example: ADF Statistic: -2.464240 p-value: 0.124419 0-Hypothesis non stationary 0.12 > 0.05 -> not significant, therefore we can not reject the 0-hypthesis so our time series is non stationary and we had to differencing it to make it stationary. The purpose of differencing it is to make the time series stationary. End of explanation plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120}) fig, axes = plt.subplots(1, 2, sharex=True) axes[0].plot(wwwus.value.diff()); axes[0].set_title('1st Differencing') axes[1].set(ylim=(0,5)) plot_pacf(wwwus.value.diff().dropna(), ax=axes[1], lags=100) plt.show() Explanation: For the above series, the time series reaches stationarity with two orders of differencing. But we use for the beginning 1 order as a conservative part. Let me explain that: D>2 is not allowed in statsmodels.tsa.arima_model! Maybe d>2 is not allowed means our best bet is to start simple, check if integrating once grants stationarity. If so, we can fit a simple ARIMA model and examine the ACF of the residual values to get a better feel about what orders of differencing to use. Also a drawback, if we integrate more than two times (d>2), we lose n observations, one for each integration. And one of the most common errors in ARIMA modeling is to "overdifference" the series and end up adding extra AR or MA terms to undo the forecast damage, so the author (I assume) decides to raise this exception. #2. How to find the order of the AR term (p) End of explanation fig, axes = plt.subplots(1, 2, sharex=True) axes[0].plot(wwwus.value.diff()); axes[0].set_title('1st Differencing') axes[1].set(ylim=(0,1.2)) plot_acf(wwwus.value.diff().dropna(), ax=axes[1] , lags=90) plt.show() Explanation: #3. How to find the order of the MA term (q) End of explanation model = ARIMA(wwwus.value, order=(1,1,2)) model_fit = model.fit(disp=0) print('first fit ',model_fit.summary()) Explanation: 4. How to build the ARIMA Model End of explanation residuals = pd.DataFrame(model_fit.resid) plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120}) fig, ax = plt.subplots(1,2) residuals.plot(title="Residuals", ax=ax[0]) residuals.plot(kind='kde', title='Density', ax=ax[1]) plt.show() Explanation: Notice here the coefficient of the MA2 term is close to zero (-0.0010 ) and the P-Value in ‘P>|z|’ column is highly insignificant (0.9). It should ideally be less than 0.05 for the respective X to be significant << 0.05. 5. Plot residual errors Let’s plot the residuals to ensure there are no patterns (that is, look for constant mean and variance). End of explanation plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120}) model_fit.plot_predict(dynamic=False) plt.show() Explanation: 6. Plot Predict Actual vs Fitted When you set dynamic=False in-sample lagged values are used for prediction. That is, the model gets trained up until the previous values to make next prediction. This can make a fitted forecast and actuals look artificially good. 
End of explanation train = wwwus.value[:80] test = wwwus.value[80:] #model = ARIMA(train, order=(3, 2, 1)) model = ARIMA(train, order=(2, 2, 3)) fitted = model.fit(disp=-1) print('second fit ',fitted.summary()) # Forecast fc,se,conf = fitted.forecast(20, alpha=0.05) # 95% conf # Make as pandas series fc_series = pd.Series(fc, index=test.index) lower_series = pd.Series(conf[:,0], index=test.index) upper_series = pd.Series(conf[:,1], index=test.index) # Plot plt.figure(figsize=(12,5), dpi=100) plt.plot(train, label='training') plt.plot(test, label='actual') plt.plot(fc_series, label='forecast') plt.fill_between(lower_series.index, lower_series, upper_series, color='k', alpha=.15) plt.title('maXbox4 Forecast vs Actuals ARIMA') plt.legend(loc='upper left', fontsize=8) plt.show() Explanation: 7. Now Create Training and Test Validation We can see that ARIMA is adequately forecasting the seasonal pattern in the series. In terms of the model performance, the RMSE (root mean squared error) and MFE (mean forecast error) and also best in terms of the lowest BIC . End of explanation print(forecast_accuracy(fc, test.values)) print('Around 5% MAPE implies a model is about 95% accurate in predicting next 20 observations.') Explanation: 8. Some scores and performance The 20 observations depends on the train/test set fc,se,conf = fitted.forecast(20, alpha=0.05) # 95% conf End of explanation
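The orders above were read off the ACF and PACF plots and then adjusted by hand (1,1,2 on the full series, 2,2,3 on the train split). A small grid search scored by AIC is a common cross-check for such hand-picked orders. The sketch below is an illustration added here, not part of the original notebook; it reuses the train series and the same statsmodels ARIMA class, the search ranges are arbitrary, and orders that fail to fit are simply skipped.

import warnings
import numpy as np
from statsmodels.tsa.arima_model import ARIMA

warnings.filterwarnings("ignore")    # the old ARIMA class is noisy about convergence

best_aic, best_order = np.inf, None
for p in range(0, 4):
    for d in range(1, 3):            # this ARIMA class rejects d > 2 anyway
        for q in range(0, 4):
            try:
                res = ARIMA(train, order=(p, d, q)).fit(disp=0)
            except Exception:        # skip orders that fail to converge
                continue
            if res.aic < best_aic:
                best_aic, best_order = res.aic, (p, d, q)

print('best (p, d, q) by AIC on the train split: {} (AIC={:.1f})'.format(best_order, best_aic))

AIC only ranks in-sample fit, so whichever order wins should still be validated against the held-out test set with forecast_accuracy, as done above.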
12,918
Given the following text description, write Python code to implement the functionality described below step by step Description: <!--BOOK_INFORMATION--> <img align="left" style="padding-right Step1: Introducing Principal Component Analysis Principal component analysis is a fast and flexible unsupervised method for dimensionality reduction in data, which we saw briefly in Introducing Scikit-Learn. Its behavior is easiest to visualize by looking at a two-dimensional dataset. Consider the following 200 points Step2: By eye, it is clear that there is a nearly linear relationship between the x and y variables. This is reminiscent of the linear regression data we explored in In Depth Step3: The fit learns some quantities from the data, most importantly the "components" and "explained variance" Step4: To see what these numbers mean, let's visualize them as vectors over the input data, using the "components" to define the direction of the vector, and the "explained variance" to define the squared-length of the vector Step5: These vectors represent the principal axes of the data, and the length of the vector is an indication of how "important" that axis is in describing the distribution of the data—more precisely, it is a measure of the variance of the data when projected onto that axis. The projection of each data point onto the principal axes are the "principal components" of the data. If we plot these principal components beside the original data, we see the plots shown here Step6: The transformed data has been reduced to a single dimension. To understand the effect of this dimensionality reduction, we can perform the inverse transform of this reduced data and plot it along with the original data Step7: The light points are the original data, while the dark points are the projected version. This makes clear what a PCA dimensionality reduction means Step8: Recall that the data consists of 8×8 pixel images, meaning that they are 64-dimensional. To gain some intuition into the relationships between these points, we can use PCA to project them to a more manageable number of dimensions, say two Step9: We can now plot the first two principal components of each point to learn about the data Step10: Recall what these components mean Step11: This curve quantifies how much of the total, 64-dimensional variance is contained within the first $N$ components. For example, we see that with the digits the first 10 components contain approximately 75% of the variance, while you need around 50 components to describe close to 100% of the variance. Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations. PCA as Noise Filtering PCA can also be used as a filtering approach for noisy data. The idea is this Step12: Now lets add some random noise to create a noisy dataset, and re-plot it Step13: It's clear by eye that the images are noisy, and contain spurious pixels. Let's train a PCA on the noisy data, requesting that the projection preserve 50% of the variance Step14: Here 50% of the variance amounts to 12 principal components. 
Now we compute these components, and then use the inverse of the transform to reconstruct the filtered digits Step15: This signal preserving/noise filtering property makes PCA a very useful feature selection routine—for example, rather than training a classifier on very high-dimensional data, you might instead train the classifier on the lower-dimensional representation, which will automatically serve to filter out random noise in the inputs. Example Step16: Let's take a look at the principal axes that span this dataset. Because this is a large dataset, we will use RandomizedPCA—it contains a randomized method to approximate the first $N$ principal components much more quickly than the standard PCA estimator, and thus is very useful for high-dimensional data (here, a dimensionality of nearly 3,000). We will take a look at the first 150 components Step17: In this case, it can be interesting to visualize the images associated with the first several principal components (these components are technically known as "eigenvectors," so these types of images are often called "eigenfaces"). As you can see in this figure, they are as creepy as they sound Step18: The results are very interesting, and give us insight into how the images vary Step19: We see that these 150 components account for just over 90% of the variance. That would lead us to believe that using these 150 components, we would recover most of the essential characteristics of the data. To make this more concrete, we can compare the input images with the images reconstructed from these 150 components
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt import seaborn as sns; sns.set() Explanation: <!--BOOK_INFORMATION--> <img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png"> This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub. The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book! <!--NAVIGATION--> < In-Depth: Decision Trees and Random Forests | Contents | In-Depth: Manifold Learning > <a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.09-Principal-Component-Analysis.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a> In Depth: Principal Component Analysis Up until now, we have been looking in depth at supervised learning estimators: those estimators that predict labels based on labeled training data. Here we begin looking at several unsupervised estimators, which can highlight interesting aspects of the data without reference to any known labels. In this section, we explore what is perhaps one of the most broadly used of unsupervised algorithms, principal component analysis (PCA). PCA is fundamentally a dimensionality reduction algorithm, but it can also be useful as a tool for visualization, for noise filtering, for feature extraction and engineering, and much more. After a brief conceptual discussion of the PCA algorithm, we will see a couple examples of these further applications. We begin with the standard imports: End of explanation rng = np.random.RandomState(1) X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T plt.scatter(X[:, 0], X[:, 1]) plt.axis('equal'); Explanation: Introducing Principal Component Analysis Principal component analysis is a fast and flexible unsupervised method for dimensionality reduction in data, which we saw briefly in Introducing Scikit-Learn. Its behavior is easiest to visualize by looking at a two-dimensional dataset. Consider the following 200 points: End of explanation from sklearn.decomposition import PCA pca = PCA(n_components=2) pca.fit(X) Explanation: By eye, it is clear that there is a nearly linear relationship between the x and y variables. This is reminiscent of the linear regression data we explored in In Depth: Linear Regression, but the problem setting here is slightly different: rather than attempting to predict the y values from the x values, the unsupervised learning problem attempts to learn about the relationship between the x and y values. In principal component analysis, this relationship is quantified by finding a list of the principal axes in the data, and using those axes to describe the dataset. 
Using Scikit-Learn's PCA estimator, we can compute this as follows: End of explanation print(pca.components_) print(pca.explained_variance_) Explanation: The fit learns some quantities from the data, most importantly the "components" and "explained variance": End of explanation def draw_vector(v0, v1, ax=None): ax = ax or plt.gca() arrowprops=dict(arrowstyle='->', linewidth=2, shrinkA=0, shrinkB=0) ax.annotate('', v1, v0, arrowprops=arrowprops) # plot data plt.scatter(X[:, 0], X[:, 1], alpha=0.2) for length, vector in zip(pca.explained_variance_, pca.components_): v = vector * 3 * np.sqrt(length) draw_vector(pca.mean_, pca.mean_ + v) plt.axis('equal'); Explanation: To see what these numbers mean, let's visualize them as vectors over the input data, using the "components" to define the direction of the vector, and the "explained variance" to define the squared-length of the vector: End of explanation pca = PCA(n_components=1) pca.fit(X) X_pca = pca.transform(X) print("original shape: ", X.shape) print("transformed shape:", X_pca.shape) Explanation: These vectors represent the principal axes of the data, and the length of the vector is an indication of how "important" that axis is in describing the distribution of the data—more precisely, it is a measure of the variance of the data when projected onto that axis. The projection of each data point onto the principal axes are the "principal components" of the data. If we plot these principal components beside the original data, we see the plots shown here: figure source in Appendix This transformation from data axes to principal axes is an affine transformation, which basically means it is composed of a translation, rotation, and uniform scaling. While this algorithm to find principal components may seem like just a mathematical curiosity, it turns out to have very far-reaching applications in the world of machine learning and data exploration. PCA as dimensionality reduction Using PCA for dimensionality reduction involves zeroing out one or more of the smallest principal components, resulting in a lower-dimensional projection of the data that preserves the maximal data variance. Here is an example of using PCA as a dimensionality reduction transform: End of explanation X_new = pca.inverse_transform(X_pca) plt.scatter(X[:, 0], X[:, 1], alpha=0.2) plt.scatter(X_new[:, 0], X_new[:, 1], alpha=0.8) plt.axis('equal'); Explanation: The transformed data has been reduced to a single dimension. To understand the effect of this dimensionality reduction, we can perform the inverse transform of this reduced data and plot it along with the original data: End of explanation from sklearn.datasets import load_digits digits = load_digits() digits.data.shape Explanation: The light points are the original data, while the dark points are the projected version. This makes clear what a PCA dimensionality reduction means: the information along the least important principal axis or axes is removed, leaving only the component(s) of the data with the highest variance. The fraction of variance that is cut out (proportional to the spread of points about the line formed in this figure) is roughly a measure of how much "information" is discarded in this reduction of dimensionality. This reduced-dimension dataset is in some senses "good enough" to encode the most important relationships between the points: despite reducing the dimension of the data by 50%, the overall relationship between the data points are mostly preserved. 
PCA for visualization: Hand-written digits The usefulness of the dimensionality reduction may not be entirely apparent in only two dimensions, but becomes much more clear when looking at high-dimensional data. To see this, let's take a quick look at the application of PCA to the digits data we saw in In-Depth: Decision Trees and Random Forests. We start by loading the data: End of explanation pca = PCA(2) # project from 64 to 2 dimensions projected = pca.fit_transform(digits.data) print(digits.data.shape) print(projected.shape) digits.target i=int(np.random.random()*1797) plt.imshow(digits.data[i].reshape(8,8),cmap='Blues') digits.target[i] digits.data[i].reshape(8,8) Explanation: Recall that the data consists of 8×8 pixel images, meaning that they are 64-dimensional. To gain some intuition into the relationships between these points, we can use PCA to project them to a more manageable number of dimensions, say two: End of explanation plt.scatter(projected[:, 0], projected[:, 1], c=digits.target, edgecolor='none', alpha=0.5, cmap=plt.cm.get_cmap('Spectral', 10)) plt.xlabel('component 1') plt.ylabel('component 2') plt.colorbar(); Explanation: We can now plot the first two principal components of each point to learn about the data: End of explanation pca = PCA().fit(digits.data) plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); Explanation: Recall what these components mean: the full data is a 64-dimensional point cloud, and these points are the projection of each data point along the directions with the largest variance. Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits in two dimensions, and have done this in an unsupervised manner—that is, without reference to the labels. What do the components mean? We can go a bit further here, and begin to ask what the reduced dimensions mean. This meaning can be understood in terms of combinations of basis vectors. For example, each image in the training set is defined by a collection of 64 pixel values, which we will call the vector $x$: $$ x = [x_1, x_2, x_3 \cdots x_{64}] $$ One way we can think about this is in terms of a pixel basis. That is, to construct the image, we multiply each element of the vector by the pixel it describes, and then add the results together to build the image: $$ {\rm image}(x) = x_1 \cdot{\rm (pixel~1)} + x_2 \cdot{\rm (pixel~2)} + x_3 \cdot{\rm (pixel~3)} \cdots x_{64} \cdot{\rm (pixel~64)} $$ One way we might imagine reducing the dimension of this data is to zero out all but a few of these basis vectors. For example, if we use only the first eight pixels, we get an eight-dimensional projection of the data, but it is not very reflective of the whole image: we've thrown out nearly 90% of the pixels! figure source in Appendix The upper row of panels shows the individual pixels, and the lower row shows the cumulative contribution of these pixels to the construction of the image. Using only eight of the pixel-basis components, we can only construct a small portion of the 64-pixel image. Were we to continue this sequence and use all 64 pixels, we would recover the original image. But the pixel-wise representation is not the only choice of basis. 
We can also use other basis functions, which each contain some pre-defined contribution from each pixel, and write something like $$ image(x) = {\rm mean} + x_1 \cdot{\rm (basis~1)} + x_2 \cdot{\rm (basis~2)} + x_3 \cdot{\rm (basis~3)} \cdots $$ PCA can be thought of as a process of choosing optimal basis functions, such that adding together just the first few of them is enough to suitably reconstruct the bulk of the elements in the dataset. The principal components, which act as the low-dimensional representation of our data, are simply the coefficients that multiply each of the elements in this series. This figure shows a similar depiction of reconstructing this digit using the mean plus the first eight PCA basis functions: figure source in Appendix Unlike the pixel basis, the PCA basis allows us to recover the salient features of the input image with just a mean plus eight components! The amount of each pixel in each component is the corollary of the orientation of the vector in our two-dimensional example. This is the sense in which PCA provides a low-dimensional representation of the data: it discovers a set of basis functions that are more efficient than the native pixel-basis of the input data. Choosing the number of components A vital part of using PCA in practice is the ability to estimate how many components are needed to describe the data. This can be determined by looking at the cumulative explained variance ratio as a function of the number of components: End of explanation def plot_digits(data): fig, axes = plt.subplots(4, 10, figsize=(10, 4), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i, ax in enumerate(axes.flat): ax.imshow(data[i].reshape(8, 8), cmap='binary', interpolation='nearest', clim=(0, 16)) plot_digits(digits.data) Explanation: This curve quantifies how much of the total, 64-dimensional variance is contained within the first $N$ components. For example, we see that with the digits the first 10 components contain approximately 75% of the variance, while you need around 50 components to describe close to 100% of the variance. Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations. PCA as Noise Filtering PCA can also be used as a filtering approach for noisy data. The idea is this: any components with variance much larger than the effect of the noise should be relatively unaffected by the noise. So if you reconstruct the data using just the largest subset of principal components, you should be preferentially keeping the signal and throwing out the noise. Let's see how this looks with the digits data. First we will plot several of the input noise-free data: End of explanation np.random.seed(42) noisy = np.random.normal(digits.data, 4) plot_digits(noisy) Explanation: Now lets add some random noise to create a noisy dataset, and re-plot it: End of explanation pca = PCA(0.50).fit(noisy) pca.n_components_ Explanation: It's clear by eye that the images are noisy, and contain spurious pixels. 
Let's train a PCA on the noisy data, requesting that the projection preserve 50% of the variance: End of explanation components = pca.transform(noisy) filtered = pca.inverse_transform(components) plot_digits(filtered) Explanation: Here 50% of the variance amounts to 12 principal components. Now we compute these components, and then use the inverse of the transform to reconstruct the filtered digits: End of explanation from sklearn.datasets import fetch_lfw_people faces = fetch_lfw_people(min_faces_per_person=60) print(faces.target_names) print(faces.images.shape) Explanation: This signal preserving/noise filtering property makes PCA a very useful feature selection routine—for example, rather than training a classifier on very high-dimensional data, you might instead train the classifier on the lower-dimensional representation, which will automatically serve to filter out random noise in the inputs. Example: Eigenfaces Earlier we explored an example of using a PCA projection as a feature selector for facial recognition with a support vector machine (see In-Depth: Support Vector Machines). Here we will take a look back and explore a bit more of what went into that. Recall that we were using the Labeled Faces in the Wild dataset made available through Scikit-Learn: End of explanation # from sklearn.decomposition import RandomizedPCA from sklearn.decomposition import PCA as RandomizedPCA pca = RandomizedPCA(150) pca.fit(faces.data) Explanation: Let's take a look at the principal axes that span this dataset. Because this is a large dataset, we will use RandomizedPCA—it contains a randomized method to approximate the first $N$ principal components much more quickly than the standard PCA estimator, and thus is very useful for high-dimensional data (here, a dimensionality of nearly 3,000). We will take a look at the first 150 components: End of explanation fig, axes = plt.subplots(3, 8, figsize=(9, 4), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i, ax in enumerate(axes.flat): ax.imshow(pca.components_[i].reshape(62, 47), cmap='bone') Explanation: In this case, it can be interesting to visualize the images associated with the first several principal components (these components are technically known as "eigenvectors," so these types of images are often called "eigenfaces"). As you can see in this figure, they are as creepy as they sound: End of explanation plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); Explanation: The results are very interesting, and give us insight into how the images vary: for example, the first few eigenfaces (from the top left) seem to be associated with the angle of lighting on the face, and later principal vectors seem to be picking out certain features, such as eyes, noses, and lips. 
Let's take a look at the cumulative variance of these components to see how much of the data information the projection is preserving: End of explanation # Compute the components and projected faces pca = RandomizedPCA(150).fit(faces.data) components = pca.transform(faces.data) projected = pca.inverse_transform(components) # Plot the results fig, ax = plt.subplots(2, 10, figsize=(10, 2.5), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i in range(10): ax[0, i].imshow(faces.data[i].reshape(62, 47), cmap='binary_r') ax[1, i].imshow(projected[i].reshape(62, 47), cmap='binary_r') ax[0, 0].set_ylabel('full-dim\ninput') ax[1, 0].set_ylabel('150-dim\nreconstruction'); Explanation: We see that these 150 components account for just over 90% of the variance. That would lead us to believe that using these 150 components, we would recover most of the essential characteristics of the data. To make this more concrete, we can compare the input images with the images reconstructed from these 150 components: End of explanation
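As an aside, the text earlier suggests training a classifier on the lower-dimensional representation rather than on raw pixels. Here is a rough sketch of that idea as a scikit-learn pipeline on the same faces data; the SVC settings (kernel, C, gamma) are illustrative choices of mine, not values taken from the original text.

from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

faces = fetch_lfw_people(min_faces_per_person=60)
Xtrain, Xtest, ytrain, ytest = train_test_split(
    faces.data, faces.target, random_state=42)

# PCA filters noise and reduces the 62x47 = 2914 pixels to 150 features;
# the SVC then classifies in that reduced space.
model = make_pipeline(PCA(n_components=150, whiten=True, random_state=42),
                      SVC(kernel='rbf', C=10, gamma=0.001))
model.fit(Xtrain, ytrain)
print("test accuracy: {:.2f}".format(model.score(Xtest, ytest)))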
12,919
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Part 1 Step2: pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. http Step3: A pandas Series, like a list, doesn't have to be homogenous. Step4: The index of a Series can be arbitrary as well. Step5: Multiple Series objects can be clubbed together to make a pandas DataFrame. The pandas DataFrame is similar to the data.frame object in R. Step6: Think of pandas DataFrames as dicts of Series. Almost all operations that are valid on a Python dictionary will work on a pandas DataFrame. Step7: Index Objects Index objects available in Pandas
Python Code: ---------------------------------------------------------------------- Filename : 01_basic_data_structs.py Date : 12th Dec, 2013 Author : Jaidev Deshpande Purpose : To get started with basic data structures in Pandas Libraries: Pandas 0.12 and its dependencies ---------------------------------------------------------------------- Explanation: Part 1: Data Structures in Pandas End of explanation # imports import pandas as pd from math import pi s = pd.Series(range(10)) print(s) print(s[5]) Explanation: pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. http://pandas.pydata.org There are many useful objects in Pandas: Series DataFrame Panel TimeSeries Series and DataFrame End of explanation s = pd.Series(['foo', None, 3+4j]) Explanation: A pandas Series, like a list, doesn't have to be homogenous. End of explanation inds = ['bar',1, (1, 2)] s.index = inds print(s['bar'], s[1], s[(1, 2)]) Explanation: The index of a Series can be arbitrary as well. End of explanation s1 = pd.Series(range(10)) s2 = pd.Series(range(10,20)) df = pd.DataFrame({'A':s1,'B':s2}) df.head() Explanation: Multiple Series objects can be clubbed together to make a pandas DataFrame. The pandas DataFrame is similar to the data.frame object in R. End of explanation df['C'] = [str(c) for c in range(20, 30)] print(df.head()) print(df['C']) del df['A'] print(df.head(10)) df.update({'B': range(50,60)}) print(df.head()) Explanation: Think of pandas DataFrames as dicts of Series. Almost all operations that are valid on a Python dictionary will work on a pandas DataFrame. End of explanation df.index Explanation: Index Objects Index objects available in Pandas: Index : The most general Pandas index, often created by default Int64Index : Specialized index for integer values MultiIndex : Hierarchical index DatetimeIndex: Nanosecond timestamps that can be used as indexes PeriodIndex : Specialized indices for timespans End of explanation
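As an aside, the explanation above says that almost everything you can do with a Python dictionary also works on a DataFrame. A small self-contained sketch makes that concrete; the frame built here is my own toy example, not data from the notebook.

import pandas as pd

frame = pd.DataFrame({'A': range(5), 'B': range(10, 15)})
print('A' in frame)            # membership test checks column labels -> True
print(list(frame.keys()))      # ['A', 'B'], analogous to dict.keys()
for name in frame:             # iteration yields column labels, like a dict
    print(name, frame[name].tolist())
frame['C'] = frame['A'] * 2    # item assignment adds a new column (a Series)
del frame['B']                 # del removes a column, as shown above
print(frame.head())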
12,920
Given the following text description, write Python code to implement the functionality described below step by step Description: Saving your iPython notebook File -> Save and Checkpoint Can change the name also in that menu. But also possible via clicking the name above. Talk about command mode and edit mode of cells. And the help window. Data Types Step1: A float. Note, that Python just automatically converts the result of division to floats, to be more correct. Those kind of automatic data type changes were a problem in the old times, which is why older systems would rather insist on returning the same kind of data type as the user provided. These days, the focus has shifted on rather doing the math correct and let the system deal with the overhead for this implicit data type change. Step2: The reason why this automatic type conversion is even possible within Python is because it is a so called "dynamically typed" programming languages. As opposed to "statically typed" ones like C(++) and Java. Meaning, in Python this is possible Step3: I just changed the datatype of a without deleting it first. It was just changed to whatever I need it to be. But remember Step4: (read here, if you are interested in all the multi-media display capabilities of the Jupyter notebook.) A note about names and values Step5: Nice (lengthy / thorough) discussion of this Step6: Notice that I put spaces on either side of each mathematical operator. This isn't required, but enhances clarity. Consider the alternative Step7: Example 2 - Find the acceleration due to Earth's gravity (the g in F = mg) Using the gravitation equation above, set $m_2 = 1$ kg $$F(6.37 \times 10^{6}) = 6.67 \times 10^{-11} \cdot \frac{5.97 \times 10^{24} \cdot 1}{(6.37 \times 10^{6})^2}$$ Step8: Q. Why would the above $F(r)$ implementation be inconvenient if we had to do this computation many times, say for different masses? Q. How could we improve this? Step9: Q. What do the "x = y" statements do? Step10: Q. Can you imagine a downside to descriptive variable names? Dealing with long lines of code Split long lines with a backslash (with no space after it, just carriage return) Step11: Reserved Words Using "reserved words" will lead to an error Step12: See p.10 of the textbook for a list of Python's reserved words. Some really common ones are Step13: Q. What will the line below do? Step14: As an approx value, it's good practice to comment about 50\% of your code! But one can reduce that reasonbly, by choosing intelligle variable names. There is another way to specify "block comments" Step15: Notice that that comment was actually printed. That's because it's not technically a comment that is totally ignored, but just a multi-line string object. It is being used in source code for documenting your code. Why does that work? Because that long multi-line string is not being assigned to a variable, so the Python interpreter just throws it away for not being used. But it's very useful for creating documented code! Formatting text and numbers Step16: Hard to read!! (And, note the junk at the end.) Consider %x.yz % inside the quotes - means a "format statement" follows x is the number of characters in the resulting string - Not required y is the number of digits after the decimal point - Not required z is the format (e.g. 
f (float), e (scientific), s (string)) - Required % outside and to the right of the quotes - Separates text from variables -- more on this later - Uses parentheses if there is more than one variable There is a list of print format specifications on p. 12 in the textbook %s string (of ascii characters) %d integer %0xd integer padded with x leading zeros %f decimal notation with six decimals %e or %E compact scientific notation %g or %G compact decimal or scientific notation %xz format z right-justified in a field of width x %-xy same, left-justified %.yz format z with y decimals %x.yz format z with y decimals in a field of width x %% percentage sign The power of the new formatting If you don't care about length of the print Step17: Q. What will the next statement print? Step18: Note when block comments are used, the text appears on 2 lines versus when using the \, the text appears all on 1 line. Step19: Note the difference between %.0f (float) and %i (integer) (rounding vs. truncating) Also note, that the new formatting system actually warns you when you do something that would lose precision
Python Code: 10 / 3 # We provide integers # What will the output be? Explanation: Saving your iPython notebook File -> Save and Checkpoint Can change the name also in that menu. But also possible via clicking the name above. Talk about command mode and edit mode of cells. And the help window. Data Types: Integers vs. Floats End of explanation 1 / 10 + 2.0 # all fine here as well 4 / 2 # even so, mathematically not required, Python returns a float here as well. 4 // 2 # But if you need an integer to be returned, force it with // Explanation: A float. Note, that Python just automatically converts the result of division to floats, to be more correct. Those kind of automatic data type changes were a problem in the old times, which is why older systems would rather insist on returning the same kind of data type as the user provided. These days, the focus has shifted on rather doing the math correct and let the system deal with the overhead for this implicit data type change. End of explanation a = 5 a a = 'astring' a Explanation: The reason why this automatic type conversion is even possible within Python is because it is a so called "dynamically typed" programming languages. As opposed to "statically typed" ones like C(++) and Java. Meaning, in Python this is possible: End of explanation from IPython.display import YouTubeVideo YouTubeVideo('b23wrRfy7SM') Explanation: I just changed the datatype of a without deleting it first. It was just changed to whatever I need it to be. But remember: End of explanation x = 10 y = 2 * x x = 25 y # What is the value of y? If you are surprised, please discuss it. Explanation: (read here, if you are interested in all the multi-media display capabilities of the Jupyter notebook.) A note about names and values End of explanation 6.67e-11 * 5.97e24 * 70 / (6.37e6)**2 # remember: the return of the last line in any cell will be automatically printed Explanation: Nice (lengthy / thorough) discussion of this: http://nedbatchelder.com/text/names.html We haven't yet covered some of the concepts that appear in this blog post so don't panic if something looks unfamiliar. Today: More practice with IPython & a simple formula Recall that to start an Jupyter notebook, simply type (in your Linux shell): $&gt; jupyter notebook or to open a specific file and keep the terminal session free: $&gt; jupyter notebook filename.ipynb &amp; Note: Discuss cell types Code vs Markdown vs raw NB convert briefly Law of gravitation equation $F(r) = G \frac{m_1 m_2}{r^2}$ $G = 6.67 \times 10^{-11} \frac{\text{m}^3}{\text{kg} \cdot \text{s}^2}$ (the gravitational constant) $m_1$ is the mass of the first body in kilograms (kg) $m_2$ is the mass of the second body in kilograms (kg) $r$ is the distance between the centers of the two bodies in meters (m) Example 1 - Find the force of a person standing on earth For a person of mass 70 kg standing on the surface of the Earth (mass $5.97 \times 10^{24}$ kg, radius 6370 km (Earth fact sheet)) the force will be (in units of Newtons, 1 N = 0.225 lbs): $$F(6.37 \times 10^{6}) = 6.67 \times 10^{-11} \cdot \frac{5.97 \times 10^{24} \cdot 70}{(6.37 \times 10^{6})^2}$$ End of explanation 6.67e-11*5.97e24*70/(6.37e6)**2 Explanation: Notice that I put spaces on either side of each mathematical operator. This isn't required, but enhances clarity. 
Consider the alternative: End of explanation 6.67e-11 * 5.97e24 * 1 / (6.37e6)**2 Explanation: Example 2 - Find the acceleration due to Earth's gravity (the g in F = mg) Using the gravitation equation above, set $m_2 = 1$ kg $$F(6.37 \times 10^{6}) = 6.67 \times 10^{-11} \cdot \frac{5.97 \times 10^{24} \cdot 1}{(6.37 \times 10^{6})^2}$$ End of explanation G = 55 G = 6.67e-11 m1 = 5.97e24 m2 = 70 r = 6.37e6 F = G * m1 * m2 / r**2 # white-space for clarity! F # remember: no print needed for the last item of a cell. Explanation: Q. Why would the above $F(r)$ implementation be inconvenient if we had to do this computation many times, say for different masses? Q. How could we improve this? End of explanation G = 6.67e-11 mass_earth = 5.97e24 mass_object = 70 radius = 6.37e6 force = G * mass_earth * mass_object / radius**2 force Explanation: Q. What do the "x = y" statements do? End of explanation force2 = G * massEarth * \ massObject / radius**2 force2 Explanation: Q. Can you imagine a downside to descriptive variable names? Dealing with long lines of code Split long lines with a backslash (with no space after it, just carriage return): End of explanation lambda = 5000 # Some wavelength in Angstroms Explanation: Reserved Words Using "reserved words" will lead to an error: End of explanation # Comments are specified with the pound symbol # # Everything after a # in a line is ignored by Python Explanation: See p.10 of the textbook for a list of Python's reserved words. Some really common ones are: and, break, class, continue, def, del, if, elif, else, except, False, for, from, import, in, is, lambda, None, not, or, pass, return, True, try, while Comments End of explanation print('this') # but not 'that' Explanation: Q. What will the line below do? End of explanation # Comments without ''' ''' or # create an error: This is a comment that takes several lines. # However, in this form it does not, even for multiple lines: # ''' This is a really, super, super, super, super, super, super, super, super, super, super, super, super, super, super, super, super, long comment (not really). ''' # # We will use block comments to document modules later! Explanation: As an approx value, it's good practice to comment about 50\% of your code! But one can reduce that reasonbly, by choosing intelligle variable names. There is another way to specify "block comments": using two sets of 3 quotation marks ''' '''. End of explanation from math import pi # more in today's tutorial # With old style formatting "pi = %.6f" % pi # With new style formatting. # It's longer in this example, but is much more powerful in general. # You decide, which one you want to use. "pi = {:.6f}".format(pi) myPi = 3.92834234 print("The Earth's mass is %.0f kilograms." % myPi) # note the rounding that happens! print("This is myPi: {} is awesome".format(str(int(myPi)))) # converting to int cuts off decimals Explanation: Notice that that comment was actually printed. That's because it's not technically a comment that is totally ignored, but just a multi-line string object. It is being used in source code for documenting your code. Why does that work? Because that long multi-line string is not being assigned to a variable, so the Python interpreter just throws it away for not being used. But it's very useful for creating documented code! Formatting text and numbers End of explanation print(radius, force) # still alive from far above! Explanation: Hard to read!! (And, note the junk at the end.) 
Consider %x.yz % inside the quotes - means a "format statement" follows x is the number of characters in the resulting string - Not required y is the number of digits after the decimal point - Not required z is the format (e.g. f (float), e (scientific), s (string)) - Required % outside and to the right of the quotes - Separates text from variables -- more on this later - Uses parentheses if there is more than one variable There is a list of print format specifications on p. 12 in the textbook %s string (of ascii characters) %d integer %0xd integer padded with x leading zeros %f decimal notation with six decimals %e or %E compact scientific notation %g or %G compact decimal or scientific notation %xz format z right-justified in a field of width x %-xy same, left-justified %.yz format z with y decimals %x.yz format z with y decimals in a field of width x %% percentage sign The power of the new formatting If you don't care about length of the print: The type is being chosen correctly for you. Some more examples End of explanation # If we use triple quotes we don't have to # use \ for multiple lines print('''At the Earth's radius of %.2e meters, the force is %6.0f Newtons.''' % (radius, force)) # Justification print("At the Earth's radius of %.2e meters, \ the force is %-20f Newtons." % (radius, force)) Explanation: Q. What will the next statement print? End of explanation print("At the Earth's radius of %.2e meters, the force is %.0f Newtons." % (radius, force)) print("At the Earth's radius of %.2e meters, the force is %i Newtons." % (radius, force)) Explanation: Note when block comments are used, the text appears on 2 lines versus when using the \, the text appears all on 1 line. End of explanation print("At the Earth's radius of {:.2e} meters, the force is {:.0f} Newtons.".format(radius, force)) print("At the Earth's radius of {:.2e} meters, the force is {:i} Newtons.".format(radius, force)) # Line breaks can also be implemented with \n print('At the Earth radius of %.2e meters,\nthe force is\n%0.0f Newtons.' % (radius, force)) Explanation: Note the difference between %.0f (float) and %i (integer) (rounding vs. truncating) Also note, that the new formatting system actually warns you when you do something that would lose precision: End of explanation
12,921
Given the following text description, write Python code to implement the functionality described below step by step Description: Iris introduction course 4. Joining Cubes Together Learning outcome Step1: 4.1 Merge<a id='merge'></a> When Iris loads data it tries to reduce the number of cubes returned by collecting together multiple fields with shared metadata into a single multidimensional cube. In Iris, this is known as merging. In order to merge two cubes, they must be identical in everything but a scalar dimension, which goes on to become a new data dimension. The diagram below shows how three 2D cubes, which have the same x and y coordinates but different z coordinates, are merged together to create a single 3D cube. The iris.load_raw function can be used as a diagnostic tool to load the individual "fields" that Iris identifies in a given set of filenames before any merge takes place. Let's compare the behaviour of iris.load_raw and the behaviour of the general purpose loading function, iris.load First, we load in a file using iris.load Step2: As you can see iris.load returns a CubeList containing a single 3D cube. Now let's try loading in the file using iris.load_raw Step3: This time, iris has returned six 2D cubes. PP files usually contain multiple 2D fields. iris.load_raw has returned a 2D cube for each of these fields, whereas iris.load has merged the cubes together then returned the resulting 3D cube. When we look in detail at the raw 2D cubes, we find that they are identical in every coordinate except for the scalar forecast_period and time coordinates Step4: To merge a CubeList, we can use the merge or merge_cube methods. The merge method will try to merge together the cubes in the CubeList in order to return a CubeList of as few cubes as possible. The merge_cube method will do the same as merge but will return a single Cube. If the initial CubeList cannot be merged into a single Cube, merge_cube will raise an error, giving a helpful message explaining why the cubes cannot be merged. Let's merge the raw 2D cubes we previously loaded in Step5: merge has returned a cubelist of a single 3D cube. Step6: <div class="alert alert-block alert-warning"> <b><font color='brown'>Exercise Step7: When we look in more detail at our merged cube, we can see that the time coordinate has become a new dimension, as well as gaining another forecast_period auxiliary coordinate Step8: Identifying Merge Problems In order to avoid the Iris merge functionality making inappropriate assumptions about the data, merge is strict with regards to the uniformity of the incoming cubes. For example, if we load the fields from two ensemble members from the GloSea4 model sample data, we see we have 12 fields before any merge takes place Step9: If we try to merge these 12 cubes we get 2 cubes rather than one Step10: When we look in more detail at these two cubes, what is different between the two? (Hint Step11: As mentioned earlier, if merge_cube cannot merge the given CubeList to return a single Cube, it will raise a helpful error message identifying the cause of the failiure. <div class="alert alert-block alert-warning"> <b><font color="brown">Exercise Step12: By inspecting the cubes themselves or using the error message raised when using merge_cube we can see that some cubes are missing the realization coordinate. 
By adding the missing coordinate, we can trigger a merge of the 12 cubes into a single cube, as expected Step13: 4.2 Concatenate<a id='concatenate'></a> We have seen that merge combines a list of cubes with a common scalar coordinate to produce a single cube with a new dimension created from these scalar values. But what happens if you try to combine cubes along a common dimension. Let's create a CubeList with two cubes that have been indexed along the time dimension of the original cube. Step14: These cubes should be able to be joined together; after all, they have both come from the same original cube! However, merge returns two cubes, suggesting that these two cubes cannot be merged Step15: Merge cannot be used to combine common non-scalar coordinates. Instead we must use concatenate. Concatenate joins together ("concatenates") common non-scalar coordinates to produce a single cube with the common dimension extended. In the below diagram, we see how three 3D cubes are concatenated together to produce a 3D cube with an extended t dimension. To concatenate a CubeList, we can use the concatenate or concatenate_cube methods. Similar to merging, concatenate will return a CubeList of as few cubes as possible, whereas concatenate_cube will attempt to return a cube, raising an error with a helpful message where this is not possible. If we apply concatenate to our cubelist, we will see that it returns a CubeList with a single Cube Step16: <div class="alert alert-block alert-warning"> <b><font color='brown'>Exercise Step17: 4.3 Section Review Exercise<a id='exercise'></a> The following exercise is designed to give you experience of solving issues that prevent a merge or concatenate from taking place. Part 1 Identify and resolve the issue preventing the air_potential_temperature cubes from the resources/merge_exercise.1.*.nc files from being joined together into a single cube. a) Use iris.load_raw to load in the air_potential_temperature cubes from the files 'resources/merge_exercise.1.*.nc'. Store the cubes in a variable called raw_cubes. Hint Step18: b) Try merging the loaded cubes into a single cube. Why does this raise an error? Step19: c) Fix the cubes such that they can be merged into a single cube. Hint Step20: Part 2 Identify and resolve the issue preventing the air_potential_temperature cubes from the resources/merge_exercise.5.*.nc files from being joined together into a single cube. a) Use iris.load_raw to load in the air_potential_temperature cubes from the files 'resources/merge_exercise.5.*.nc'. Store the cubes in a variable called raw_cubes. Step21: b) Join the cubes together into a single cube. Should these cubes be merged or concatenated?
Python Code: import iris import numpy as np Explanation: Iris introduction course 4. Joining Cubes Together Learning outcome: by the end of this section, you will be able to apply Iris functionality to combine multiple Iris cubes into a new larger cube. Duration: 30 minutes Overview:<br> 4.1 Merge<br> 4.2 Concatenate<br> 4.3 Exercise<br> 4.4 Summary of the Section Setup End of explanation fname = iris.sample_data_path('GloSea4', 'ensemble_008.pp') cubes = iris.load(fname) print(cubes) Explanation: 4.1 Merge<a id='merge'></a> When Iris loads data it tries to reduce the number of cubes returned by collecting together multiple fields with shared metadata into a single multidimensional cube. In Iris, this is known as merging. In order to merge two cubes, they must be identical in everything but a scalar dimension, which goes on to become a new data dimension. The diagram below shows how three 2D cubes, which have the same x and y coordinates but different z coordinates, are merged together to create a single 3D cube. The iris.load_raw function can be used as a diagnostic tool to load the individual "fields" that Iris identifies in a given set of filenames before any merge takes place. Let's compare the behaviour of iris.load_raw and the behaviour of the general purpose loading function, iris.load First, we load in a file using iris.load: End of explanation fname = iris.sample_data_path('GloSea4', 'ensemble_008.pp') raw_cubes = iris.load_raw(fname) print(raw_cubes) Explanation: As you can see iris.load returns a CubeList containing a single 3D cube. Now let's try loading in the file using iris.load_raw: End of explanation print(raw_cubes[0]) print('--' * 40) print(raw_cubes[1]) Explanation: This time, iris has returned six 2D cubes. PP files usually contain multiple 2D fields. iris.load_raw has returned a 2D cube for each of these fields, whereas iris.load has merged the cubes together then returned the resulting 3D cube. When we look in detail at the raw 2D cubes, we find that they are identical in every coordinate except for the scalar forecast_period and time coordinates: End of explanation merged_cubelist = raw_cubes.merge() print(merged_cubelist) Explanation: To merge a CubeList, we can use the merge or merge_cube methods. The merge method will try to merge together the cubes in the CubeList in order to return a CubeList of as few cubes as possible. The merge_cube method will do the same as merge but will return a single Cube. If the initial CubeList cannot be merged into a single Cube, merge_cube will raise an error, giving a helpful message explaining why the cubes cannot be merged. Let's merge the raw 2D cubes we previously loaded in: End of explanation merged_cube = merged_cubelist[0] print(merged_cube) Explanation: merge has returned a cubelist of a single 3D cube. End of explanation # # edit space for user code ... 
# Explanation: <div class="alert alert-block alert-warning"> <b><font color='brown'>Exercise: </font></b> <p>Try merging <b><font face="courier" color="black">raw_cubes</font></b> using the <b><font face="courier" color="black">merge_cube</font></b> method.</p> </div> End of explanation print(merged_cube.coord('time')) print(merged_cube.coord('forecast_period')) Explanation: When we look in more detail at our merged cube, we can see that the time coordinate has become a new dimension, as well as gaining another forecast_period auxiliary coordinate: End of explanation fname = iris.sample_data_path('GloSea4', 'ensemble_00[34].pp') cubes = iris.load_raw(fname, 'surface_temperature') print(len(cubes)) Explanation: Identifying Merge Problems In order to avoid the Iris merge functionality making inappropriate assumptions about the data, merge is strict with regards to the uniformity of the incoming cubes. For example, if we load the fields from two ensemble members from the GloSea4 model sample data, we see we have 12 fields before any merge takes place: End of explanation incomplete_cubes = cubes.merge() print(incomplete_cubes) Explanation: If we try to merge these 12 cubes we get 2 cubes rather than one: End of explanation print(incomplete_cubes[0]) print('--' * 40) print(incomplete_cubes[1]) Explanation: When we look in more detail at these two cubes, what is different between the two? (Hint: One value changes, another is completely missing) End of explanation # # edit space for user code ... # Explanation: As mentioned earlier, if merge_cube cannot merge the given CubeList to return a single Cube, it will raise a helpful error message identifying the cause of the failiure. <div class="alert alert-block alert-warning"> <b><font color="brown">Exercise: </font></b><p>Try merging the loaded <b><font face="courier" color="black">cubes</font></b> using <b><font face="courier" color="black">merge_cube</font></b> rather than <b><font face="courier" color="black">merge</font></b>.</p> </div> End of explanation for cube in cubes: if not cube.coords('realization'): cube.add_aux_coord(iris.coords.DimCoord(np.int32(3), 'realization')) merged_cube = cubes.merge_cube() print(merged_cube) Explanation: By inspecting the cubes themselves or using the error message raised when using merge_cube we can see that some cubes are missing the realization coordinate. By adding the missing coordinate, we can trigger a merge of the 12 cubes into a single cube, as expected: End of explanation fname = iris.sample_data_path('A1B_north_america.nc') cube = iris.load_cube(fname) cube_1 = cube[:10] cube_2 = cube[10:20] cubes = iris.cube.CubeList([cube_1, cube_2]) print(cubes) Explanation: 4.2 Concatenate<a id='concatenate'></a> We have seen that merge combines a list of cubes with a common scalar coordinate to produce a single cube with a new dimension created from these scalar values. But what happens if you try to combine cubes along a common dimension. Let's create a CubeList with two cubes that have been indexed along the time dimension of the original cube. End of explanation print(cubes.merge()) Explanation: These cubes should be able to be joined together; after all, they have both come from the same original cube! However, merge returns two cubes, suggesting that these two cubes cannot be merged: End of explanation print(cubes.concatenate()) Explanation: Merge cannot be used to combine common non-scalar coordinates. Instead we must use concatenate. 
Concatenate joins together ("concatenates") common non-scalar coordinates to produce a single cube with the common dimension extended. In the below diagram, we see how three 3D cubes are concatenated together to produce a 3D cube with an extended t dimension. To concatenate a CubeList, we can use the concatenate or concatenate_cube methods. Similar to merging, concatenate will return a CubeList of as few cubes as possible, whereas concatenate_cube will attempt to return a cube, raising an error with a helpful message where this is not possible. If we apply concatenate to our cubelist, we will see that it returns a CubeList with a single Cube: End of explanation # # edit space for user code ... # Explanation: <div class="alert alert-block alert-warning"> <b><font color='brown'>Exercise: </font></b> <p>Try concatenating <b><font face="courier" color="black">cubes</font></b> using the <b><font face="courier" color="black">concatenate_cube</font></b> method. </div> End of explanation # EDIT for user code ... # SAMPLE SOLUTION : Un-comment and execute the following to see a possible solution ... # %load solutions/iris_exercise_4.3.1a Explanation: 4.3 Section Review Exercise<a id='exercise'></a> The following exercise is designed to give you experience of solving issues that prevent a merge or concatenate from taking place. Part 1 Identify and resolve the issue preventing the air_potential_temperature cubes from the resources/merge_exercise.1.*.nc files from being joined together into a single cube. a) Use iris.load_raw to load in the air_potential_temperature cubes from the files 'resources/merge_exercise.1.*.nc'. Store the cubes in a variable called raw_cubes. Hint: Constraints can be given to the load_raw function as you would with the other load functions. End of explanation # user code ... # SAMPLE SOLUTION # %load solutions/iris_exercise_4.3.1b Explanation: b) Try merging the loaded cubes into a single cube. Why does this raise an error? End of explanation # user code ... # SAMPLE SOLUTION # %load solutions/iris_exercise_4.3.1c Explanation: c) Fix the cubes such that they can be merged into a single cube. Hint: You can use del to remove an item from a dictionary. End of explanation # user code ... # SAMPLE SOLUTION # %load solutions/iris_exercise_4.3.2a Explanation: Part 2 Identify and resolve the issue preventing the air_potential_temperature cubes from the resources/merge_exercise.5.*.nc files from being joined together into a single cube. a) Use iris.load_raw to load in the air_potential_temperature cubes from the files 'resources/merge_exercise.5.*.nc'. Store the cubes in a variable called raw_cubes. End of explanation # user code ... # SAMPLE SOLUTION # %load solutions/iris_exercise_4.3.2b Explanation: b) Join the cubes together into a single cube. Should these cubes be merged or concatenated? End of explanation
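As a self-contained aside (not from the course materials), the merge and concatenate behaviours described in this section can be reproduced without the sample data files: five 2D cubes that differ only in a scalar 'day' coordinate merge into one 3D cube, and slices along that new dimension concatenate back together. All of the names below ('toy_field', 'day', 'x', 'y') are my own.

import numpy as np
from iris.coords import DimCoord
from iris.cube import Cube, CubeList

def make_field(day):
    cube = Cube(np.zeros((3, 4)), long_name='toy_field', units='K')
    cube.add_dim_coord(DimCoord(np.arange(3.0), long_name='y'), 0)
    cube.add_dim_coord(DimCoord(np.arange(4.0), long_name='x'), 1)
    # the scalar coordinate that differs between cubes; merge promotes it
    cube.add_aux_coord(DimCoord(np.float64(day), long_name='day'))
    return cube

fields = CubeList([make_field(day) for day in range(5)])
merged = fields.merge_cube()               # scalar 'day' becomes a new leading dimension
print(merged.shape)                        # (5, 3, 4)

parts = CubeList([merged[:2], merged[2:]])
print(parts.concatenate_cube().shape)      # the common 'day' dimension is rejoined: (5, 3, 4)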
12,922
Given the following text description, write Python code to implement the functionality described below step by step Description: Step4: The inspect module provides functions for learning about live objects, classes, instances, and methods. The functions in this module can be used to retrieve the original source code for a function, look the arguments to a method on the stack, and extract the sort of information useful for producing library documentation for source code. Example code Step5: The first kind of introspection probes live objects to learn about them. Use getmembers() to discover the member attributes of object. The types of members that might be returned depend on the type of object scanned. Modules can contain classes and functions; classes can contain methods and attributes; and so on. The arguments to getmembers() are an object to scan (a module, class, or instance) and an optional predicate function that is used to filter the objects returned. The return value is a list of tuples with two values Step6: The predicate argument can be used to filter the types of objects returned. Step7: Inspecting Classes Classes are scanned using getmembers() in the same way as modules, though the types of members are different. Step8: To find the methods of a class, use the isfunction() predicate. The ismethod() predicate only recognizes bound methods of instances. Step9: The output for B includes the override for get_name() as well as the new method, and the inherited init() method implemented in A. Step10: Inspecting Instances Introspecting instances works in the same way as other objects. Step11: Documentation Strings The docstring for an object can be retrieved with getdoc(). The return value is the doc attribute with tabs expanded to spaces and with indentation made uniform. Step12: The second line of the docstring is indented when it is retrieved through the attribute directly, but moved to the left margin by getdoc(). In addition to the actual docstring, it is possible to retrieve the comments from the source file where an object is implemented, if the source is available. The getcomments() function looks at the source of the object and finds comments on lines preceding the implementation. Step13: Retrieving Source If the .py file is available for a module, the original source code for the class or method can be retrieved using getsource() and getsourcelines(). Step14: To retrieve the source for a single method, pass the method reference to getsource(). Step15: Use getsourcelines() instead of getsource() to retrieve the lines of source split into individual strings. Step16: Method and Function Signatures In addition to the documentation for a function or method, it is possible to ask for a complete specification of the arguments the callable takes, including default values. The signature() function returns a Signature instance containing information about the arguments to the function. Step17: The function arguments are available through the parameters attribute of the Signature. parameters is an ordered dictionary mapping the parameter names to Parameter instances describing the argument. In this example, the first argument to the function, arg1, does not have a default value, while arg2 does. The Signature for a function can be used by decorators or other functions to validate inputs, provide different defaults, etc. 
Writing a suitably generic and reusable validation decorator has one special challenge, though, because it can be complicated to match up incoming arguments with their names for functions that accept a combination of named and positional arguments. The bind() and bind_partial() methods provide the necessary logic to handle the mapping. They return a BoundArguments instance populated with the arguments associated with the names of the arguments of a specified function. Step18: Class Hierarchies inspect includes two methods for working directly with class hierarchies. The first, getclasstree(), creates a tree-like data structure based on the classes it is given and their base classes. Each element in the list returned is either a tuple with a class and its base classes, or another list containing tuples for subclasses. Step19: The output from this example is the tree of inheritance for the A, B, C, and D classes. D appears twice, since it inherits from both C and A. If getclasstree() is called with unique set to a true value, the output is different. Step20: Method Resolution Order The other function for working with class hierarchies is getmro(), which returns a tuple of classes in the order they should be scanned when resolving an attribute that might be inherited from a base class using the Method Resolution Order (MRO). Each class in the sequence appears only once. Step21: The Stack and Frames In addition to introspection of code objects, inspect includes functions for inspecting the runtime environment while a program is being executed. Most of these functions work with the call stack, and operate on call frames. Frame objects hold the current execution context, including references to the code being run, the operation being executed, as well as the values of local and global variables. Typically such information is used to build tracebacks when exceptions are raised. It can also be useful for logging or when debugging programs, since the stack frames can be interrogated to discover the argument values passed into the functions. currentframe() returns the frame at the top of the stack (for the current function). Step22: Using stack(), it is also possible to access all of the stack frames from the current frame to the first caller. This example is similar to the one shown earlier, except it waits until reaching the end of the recursion to print the stack information. Step23: There are other functions for building lists of frames in different contexts, such as when an exception is being processed. See the documentation for trace(), getouterframes(), and getinnerframes() for more details. Command Line Interface The inspect module also includes a command line interface for getting details about objects without having to write out the calls in a separate Python program. The input is a module name and optional object from within the module. The default output is the source code for the named object. Using the --details argument causes metadata to be printed instead of the source.
Python Code: # %load example.py def module_level_function(arg1, arg2='default', *args, **kwargs): This function is declared in the module. local_variable = arg1 * 2 return local_variable class A(object): The A class. def __init__(self, name): self.name = name def get_name(self): "Returns the name of the instance." return self.name instance_of_a = A('sample_instance') class B(A): This is the B class. It is derived from A. # This method is not part of A. def do_something(self): Does some work def get_name(self): "Overrides version from A" return 'B(' + self.name + ')' Explanation: The inspect module provides functions for learning about live objects, classes, instances, and methods. The functions in this module can be used to retrieve the original source code for a function, look the arguments to a method on the stack, and extract the sort of information useful for producing library documentation for source code. Example code End of explanation import inspect import example for name, data in inspect.getmembers(example): if name.startswith('__'): continue print('{} : {!r}'.format(name, data)) Explanation: The first kind of introspection probes live objects to learn about them. Use getmembers() to discover the member attributes of object. The types of members that might be returned depend on the type of object scanned. Modules can contain classes and functions; classes can contain methods and attributes; and so on. The arguments to getmembers() are an object to scan (a module, class, or instance) and an optional predicate function that is used to filter the objects returned. The return value is a list of tuples with two values: the name of the member, and the type of the member. The inspect module includes several such predicate functions with names like ismodule(), isclass(), etc. End of explanation import inspect import example for name, data in inspect.getmembers(example, inspect.isclass): print('{} : {!r}'.format(name, data)) Explanation: The predicate argument can be used to filter the types of objects returned. End of explanation import inspect from pprint import pprint import example pprint(inspect.getmembers(example.A), width=65) Explanation: Inspecting Classes Classes are scanned using getmembers() in the same way as modules, though the types of members are different. End of explanation import inspect from pprint import pprint import example pprint(inspect.getmembers(example.A, inspect.isfunction)) Explanation: To find the methods of a class, use the isfunction() predicate. The ismethod() predicate only recognizes bound methods of instances. End of explanation import inspect from pprint import pprint import example pprint(inspect.getmembers(example.B, inspect.isfunction)) Explanation: The output for B includes the override for get_name() as well as the new method, and the inherited init() method implemented in A. End of explanation import inspect from pprint import pprint import example a = example.A(name='inspect_getmembers') pprint(inspect.getmembers(a, inspect.ismethod)) Explanation: Inspecting Instances Introspecting instances works in the same way as other objects. End of explanation import inspect import example print('B.__doc__:') print(example.B.__doc__) print() print('getdoc(B):') print(inspect.getdoc(example.B)) Explanation: Documentation Strings The docstring for an object can be retrieved with getdoc(). The return value is the doc attribute with tabs expanded to spaces and with indentation made uniform. 
End of explanation import inspect import example print(inspect.getcomments(example.B.do_something)) Explanation: The second line of the docstring is indented when it is retrieved through the attribute directly, but moved to the left margin by getdoc(). In addition to the actual docstring, it is possible to retrieve the comments from the source file where an object is implemented, if the source is available. The getcomments() function looks at the source of the object and finds comments on lines preceding the implementation. End of explanation import inspect import example print(inspect.getsource(example.A)) Explanation: Retrieving Source If the .py file is available for a module, the original source code for the class or method can be retrieved using getsource() and getsourcelines(). End of explanation import inspect import example print(inspect.getsource(example.A.get_name)) Explanation: To retrieve the source for a single method, pass the method reference to getsource(). End of explanation import inspect import pprint import example pprint.pprint(inspect.getsourcelines(example.A.get_name)) Explanation: Use getsourcelines() instead of getsource() to retrieve the lines of source split into individual strings. End of explanation import inspect import example sig = inspect.signature(example.module_level_function) print('module_level_function{}'.format(sig)) print('\nParameter details:') for name, param in sig.parameters.items(): if param.kind == inspect.Parameter.POSITIONAL_ONLY: print(' {} (positional-only)'.format(name)) elif param.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD: if param.default != inspect.Parameter.empty: print(' {}={!r}'.format(name, param.default)) else: print(' {}'.format(name)) elif param.kind == inspect.Parameter.VAR_POSITIONAL: print(' *{}'.format(name)) elif param.kind == inspect.Parameter.KEYWORD_ONLY: if param.default != inspect.Parameter.empty: print(' {}={!r} (keyword-only)'.format( name, param.default)) else: print(' {} (keyword-only)'.format(name)) elif param.kind == inspect.Parameter.VAR_KEYWORD: print(' **{}'.format(name)) Explanation: Method and Function Signatures In addition to the documentation for a function or method, it is possible to ask for a complete specification of the arguments the callable takes, including default values. The signature() function returns a Signature instance containing information about the arguments to the function. End of explanation import inspect import example sig = inspect.signature(example.module_level_function) bound = sig.bind( 'this is arg1', 'this is arg2', 'this is an extra positional argument', extra_named_arg='value', ) print('Arguments:') for name, value in bound.arguments.items(): print('{} = {!r}'.format(name, value)) print('\nCalling:') print(example.module_level_function(*bound.args, **bound.kwargs)) Explanation: The function arguments are available through the parameters attribute of the Signature. parameters is an ordered dictionary mapping the parameter names to Parameter instances describing the argument. In this example, the first argument to the function, arg1, does not have a default value, while arg2 does. The Signature for a function can be used by decorators or other functions to validate inputs, provide different defaults, etc. Writing a suitably generic and reusable validation decorator has one special challenge, though, because it can be complicated to match up incoming arguments with their names for functions that accept a combination of named and positional arguments. 
The bind() and bind_partial() methods provide the necessary logic to handle the mapping. They return a BoundArguments instance populated with the arguments associated with the names of the arguments of a specified function. End of explanation import inspect import example class C(example.B): pass class D(C, example.A): pass def print_class_tree(tree, indent=-1): if isinstance(tree, list): for node in tree: print_class_tree(node, indent + 1) else: print(' ' * indent, tree[0].__name__) return if __name__ == '__main__': print('A, B, C, D:') print_class_tree(inspect.getclasstree( [example.A, example.B, C, D]) ) Explanation: Class Hierarchies inspect includes two methods for working directly with class hierarchies. The first, getclasstree(), creates a tree-like data structure based on the classes it is given and their base classes. Each element in the list returned is either a tuple with a class and its base classes, or another list containing tuples for subclasses. End of explanation import inspect import example print_class_tree(inspect.getclasstree( [example.A, example.B, C, D], unique=True, )) Explanation: The output from this example is the tree of inheritance for the A, B, C, and D classes. D appears twice, since it inherits from both C and A. If getclasstree() is called with unique set to a true value, the output is different. End of explanation import inspect import example class C(object): pass class C_First(C, example.B): pass class B_First(example.B, C): pass print('B_First:') for c in inspect.getmro(B_First): print(' {}'.format(c.__name__)) print() print('C_First:') for c in inspect.getmro(C_First): print(' {}'.format(c.__name__)) Explanation: Method Resolution Order The other function for working with class hierarchies is getmro(), which returns a tuple of classes in the order they should be scanned when resolving an attribute that might be inherited from a base class using the Method Resolution Order (MRO). Each class in the sequence appears only once. End of explanation import inspect import pprint def recurse(limit, keyword='default', *, kwonly='must be named'): local_variable = '.' * limit keyword = 'changed value of argument' frame = inspect.currentframe() print('line {} of {}'.format(frame.f_lineno, frame.f_code.co_filename)) print('locals:') pprint.pprint(frame.f_locals) print() if limit <= 0: return recurse(limit - 1) return local_variable if __name__ == '__main__': recurse(2) Explanation: The Stack and Frames In addition to introspection of code objects, inspect includes functions for inspecting the runtime environment while a program is being executed. Most of these functions work with the call stack, and operate on call frames. Frame objects hold the current execution context, including references to the code being run, the operation being executed, as well as the values of local and global variables. Typically such information is used to build tracebacks when exceptions are raised. It can also be useful for logging or when debugging programs, since the stack frames can be interrogated to discover the argument values passed into the functions. currentframe() returns the frame at the top of the stack (for the current function). End of explanation import inspect import pprint def show_stack(): for level in inspect.stack(): print('{}[{}]\n -> {}'.format( level.frame.f_code.co_filename, level.lineno, level.code_context[level.index].strip(), )) pprint.pprint(level.frame.f_locals) print() def recurse(limit): local_variable = '.' 
* limit if limit <= 0: show_stack() return recurse(limit - 1) return local_variable if __name__ == '__main__': recurse(2) Explanation: Using stack(), it is also possible to access all of the stack frames from the current frame to the first caller. This example is similar to the one shown earlier, except it waits until reaching the end of the recursion to print the stack information. End of explanation !python3 -m inspect -d example !python3 -m inspect -d example:A !python3 -m inspect example:A.get_name Explanation: There are other functions for building lists of frames in different contexts, such as when an exception is being processed. See the documentation for trace(), getouterframes(), and getinnerframes() for more details. Command Line Interface The inspect module also includes a command line interface for getting details about objects without having to write out the calls in a separate Python program. The input is a module name and optional object from within the module. The default output is the source code for the named object. Using the --details argument causes metadata to be printed instead of the source. End of explanation
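A short added note: because a frame object refers to the local variables of every frame above it, keeping one around can create reference cycles that delay garbage collection, and the inspect documentation recommends dropping the reference explicitly once you are done. A minimal sketch of that pattern (the helper name is my own):

import inspect

def caller_name():
    frame = inspect.currentframe().f_back   # the caller's frame
    try:
        return frame.f_code.co_name
    finally:
        del frame                           # break the potential reference cycle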
12,923
Given the following text description, write Python code to implement the functionality described below step by step Description: facebook-scraper This is a short introduction to using the scraper to fully scrape a public FB page Requirements You need to register yourself as a developer on Facebook You create an App on your Facebook developer page You go to Graph Explorer to generate an Access Token with the permissions you want (I recommend getting all of them for this purpose to avoid errors later) Notes You will absolutely need to introduce the ACCESS_TOKEN but APP_ID and APP_ID_SECRET are only required in order to extend your ACCESS_TOKEN. If you are fine working with a short lived ACCESS_TOKEN and renewing that ACCESS_TOKEN manually on your Facebook developers page, then you can leave APP_ID and APP_ID_SECRET empty PAGE_ID Step1: Producer/Consummer Manager The prodcons module, builds on a Producer/Consummer multithreaded approach to issue batch requests to the FB API and process the corresponding responses, saving them to the respective .CSV files Step2: Extending ACCESS_TOKEN (Must have APP_ID and APP_ID_SECRET setup) This function extends the ACCESS_TOKEN and automatically replaces it in the mgr object NOTE Step3: Start scraping threads Just call the start() function from the Manager and wait until it is completed. A line is printed to indicate how far the scraping has reached (i.e. how many posts, reactions, comments, etc... have been received and stored in the .CSV file structure) Step4: Add scraping jobs From the mgr object, just add the group or post (what is available at the moment) that you would like to scrape
Python Code: import fb_scraper.prodcons APP_ID = '' APP_ID_SECRET = '' ACCESS_TOKEN = '' Explanation: facebook-scraper This is a short introduction to using the scraper to fully scrape a public FB page Requirements You need to register yourself as a developer on Facebook You create an App on your Facebook developer page You go to Graph Explorer to generate an Access Token with the permissions you want (I recommend getting all of them for this purpose to avoid errors later) Notes You will absolutely need to introduce the ACCESS_TOKEN but APP_ID and APP_ID_SECRET are only required in order to extend your ACCESS_TOKEN. If you are fine working with a short lived ACCESS_TOKEN and renewing that ACCESS_TOKEN manually on your Facebook developers page, then you can leave APP_ID and APP_ID_SECRET empty PAGE_ID: The ID of the Public page you will scrape (for instance: '1889414787955466'). You will usually see this on the URL on your browser. Sometimes, however, a name is provided. The name WILL NOT work, you need to figure out the ID. (There are plenty of websites that do this, I use https://www.wallflux.com/facebook_id/) End of explanation mgr = fb_scraper.prodcons.Manager( access_token=ACCESS_TOKEN, api_key=APP_ID, api_secret=APP_ID_SECRET ) Explanation: Producer/Consummer Manager The prodcons module, builds on a Producer/Consummer multithreaded approach to issue batch requests to the FB API and process the corresponding responses, saving them to the respective .CSV files End of explanation mgr.graph.extend_token() Explanation: Extending ACCESS_TOKEN (Must have APP_ID and APP_ID_SECRET setup) This function extends the ACCESS_TOKEN and automatically replaces it in the mgr object NOTE: Copy-paste it on your application setup to start using the extended token in the future End of explanation mgr.start() Explanation: Start scraping threads Just call the start() function from the Manager and wait until it is completed. A line is printed to indicate how far the scraping has reached (i.e. how many posts, reactions, comments, etc... have been received and stored in the .CSV file structure) End of explanation mgr.scrape_post('XXXXXXXXXXXXXX') # Where 'XXXXXXXXXXXXXXX' is the FULL post ID, i.e. GROUPID_POSTID mgr.scrape_group('XXXXXXXXXXXXXX') # Where 'XXXXXXXXXXXXXXX' is the Group ID Explanation: Add scraping jobs From the mgr object, just add the group or post (what is available at the moment) that you would like to scrape End of explanation
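For orientation, the separate cells above can be read as one short script. The sketch below only re-assembles calls already shown (Manager construction, optional token extension, starting the worker threads, queueing a group job); the credential values are placeholders, the group id reuses the example id quoted in the description, and start() is assumed to run the producer/consumer threads in the background, as the notebook's cell ordering suggests.

import fb_scraper.prodcons

APP_ID = ''          # optional: only needed to extend the access token
APP_ID_SECRET = ''   # optional: only needed to extend the access token
ACCESS_TOKEN = ''    # required: short-lived token from Graph Explorer
GROUP_ID = '1889414787955466'   # example page/group id from the description

mgr = fb_scraper.prodcons.Manager(
    access_token=ACCESS_TOKEN,
    api_key=APP_ID,
    api_secret=APP_ID_SECRET,
)

if APP_ID and APP_ID_SECRET:
    mgr.graph.extend_token()    # swap in a long-lived token (optional)

mgr.start()                     # start the producer/consumer threads
mgr.scrape_group(GROUP_ID)      # queue the scraping job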
12,924
Given the following text description, write Python code to implement the functionality described below step by step Description: For high dpi displays. Step1: 0. General note This notebook shows an example of how to conduct equation of state fitting for the pressure-volume-temperature data using pytheos. Advantage of using pytheos is that you can apply different pressure scales and different equations without much coding. We use data on SiC from Nisr et al. (2017, JGR-Planet). 1. Global setup Step2: 2. Setups for fitting with two different gold pressure scales Equations of state of gold from Fei et al. (2007, PNAS) and Dorogokupets and Dewaele (2007, HPR). These equations are provided from pytheos as built-in classes. Step3: Because we use Birch-Murnaghan eos version of Fei2007 and Dorogokupets2007 used Vinet eos, we create a dictionary to provide different static compression eos for the different pressure scales used. Step4: Initial guess Step5: Physical constants for different materials Step6: 3. Setup data (3C) Data file is in csv format. Step7: pytheos provides plot.thermal_data function to show the data distribution in the P-V and P-T spaces. Step8: 4. Data fitting with constq equation (3C) The cell below shows fitting using constant q assumptions for the thermal part of eos. Normally weight for each data point can be calculated from $\sigma(P)$. In this case, using uncertainties, we can easily propagate the temperature and volume uncertainties to get the value. Step9: The warning message above is because the static EOS does not need temperature. lmfit generates warning if an assigned independent variable is not used in fitting for any components. 5. Data fitting with Dorogokupets2007 equation (3C) The cell below shows fitting using Altschuler equation for the thermal part of eos. Step10: 6. Data fitting with Speziale equation (3C) Speziale et al. (2000) presented a different way to describe the behavior of the Gruniense parameter.
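Since the description contrasts the third-order Birch-Murnaghan form used with the Fei et al. (2007) gold scale and the Vinet form used by Dorogokupets and Dewaele (2007), the standard isothermal expressions are quoted here for reference (textbook forms, not extracted from the pytheos source):

$$P_{\mathrm{BM3}}(V) = \frac{3}{2} K_0 \left[ \left(\frac{V_0}{V}\right)^{7/3} - \left(\frac{V_0}{V}\right)^{5/3} \right] \left\{ 1 + \frac{3}{4}\left(K_0' - 4\right)\left[ \left(\frac{V_0}{V}\right)^{2/3} - 1 \right] \right\}$$

$$P_{\mathrm{Vinet}}(V) = 3 K_0 \, \frac{1 - x}{x^{2}} \, \exp\!\left[ \frac{3}{2}\left(K_0' - 1\right)\left(1 - x\right) \right], \qquad x = \left(\frac{V}{V_0}\right)^{1/3}$$

where $K_0$ and $K_0'$ are the zero-pressure bulk modulus and its pressure derivative.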
Python Code: %config InlineBackend.figure_format = 'retina' Explanation: For high dpi displays. End of explanation import numpy as np import uncertainties as uct import pandas as pd from uncertainties import unumpy as unp import matplotlib.pyplot as plt import pytheos as eos Explanation: 0. General note This notebook shows an example of how to conduct equation of state fitting for the pressure-volume-temperature data using pytheos. Advantage of using pytheos is that you can apply different pressure scales and different equations without much coding. We use data on SiC from Nisr et al. (2017, JGR-Planet). 1. Global setup End of explanation au_eos = {'Fei2007': eos.gold.Fei2007bm3(), 'Dorogokupets2007': eos.gold.Dorogokupets2007()} Explanation: 2. Setups for fitting with two different gold pressure scales Equations of state of gold from Fei et al. (2007, PNAS) and Dorogokupets and Dewaele (2007, HPR). These equations are provided from pytheos as built-in classes. End of explanation st_model = {'Fei2007': eos.BM3Model, 'Dorogokupets2007': eos.VinetModel} k0_3c = {'Fei2007': 241.2, 'Dorogokupets2007': 243.0} k0p_3c = {'Fei2007': 2.84, 'Dorogokupets2007': 2.68} k0_6h = {'Fei2007': 243.1, 'Dorogokupets2007': 245.5} k0p_6h = {'Fei2007': 2.79, 'Dorogokupets2007': 2.59} Explanation: Because we use Birch-Murnaghan eos version of Fei2007 and Dorogokupets2007 used Vinet eos, we create a dictionary to provide different static compression eos for the different pressure scales used. End of explanation gamma0 = 1.06 q = 1. theta0 = 1200. Explanation: Initial guess: End of explanation v0 = {'3C': 82.8042, '6H': 124.27} n_3c = 2.; z_3c = 4. n_6h = 2.; z_6h = 6. Explanation: Physical constants for different materials End of explanation data = pd.read_csv('./data/3C-HiTEOS-final.csv') data.head() data.columns v_std = unp.uarray( data['V(Au)'], data['sV(Au)']) temp = unp.uarray(data['T(3C)'], data['sT(3C)']) v = unp.uarray(data['V(3C)'], data['sV(3C)']) Explanation: 3. Setup data (3C) Data file is in csv format. End of explanation for key, value in au_eos.items(): # iterations for different pressure scales p = au_eos[key].cal_p(v_std, temp) eos.plot.thermal_data({'p': p, 'v': v, 'temp': temp}, title=key) Explanation: pytheos provides plot.thermal_data function to show the data distribution in the P-V and P-T spaces. End of explanation for key, value in au_eos.items(): # iteration for different pressure scales # calculate pressure p = au_eos[key].cal_p(v_std, temp) # add prefix to the parameters. # this is important to distinguish thermal and static parameters eos_st = st_model[key](prefix='st_') eos_th = eos.ConstqModel(n_3c, z_3c, prefix='th_') # define initial values for parameters params = eos_st.make_params(v0=v0['3C'], k0=k0_3c[key], k0p=k0p_3c[key]) params += eos_th.make_params(v0=v0['3C'], gamma0=gamma0, q=q, theta0=theta0) # construct PVT eos # here we take advantage of lmfit to combine any formula of static and thermal eos's pvteos = eos_st + eos_th # fix static parameters and some other well known parameters params['th_v0'].vary=False; params['th_gamma0'].vary=False; params['th_theta0'].vary=False params['st_v0'].vary=False; params['st_k0'].vary=False; params['st_k0p'].vary=False # calculate weights. 
setting it None results in unweighted fitting weights = 1./unp.std_devs(p) #None fit_result = pvteos.fit(unp.nominal_values(p), params, v=unp.nominal_values(v), temp=unp.nominal_values(temp), weights=weights) print('********'+key) print(fit_result.fit_report()) # plot fitting results eos.plot.thermal_fit_result(fit_result, p_err=unp.std_devs(p), v_err=unp.std_devs(v), title=key) Explanation: 4. Data fitting with constq equation (3C) The cell below shows fitting using constant q assumptions for the thermal part of eos. Normally weight for each data point can be calculated from $\sigma(P)$. In this case, using uncertainties, we can easily propagate the temperature and volume uncertainties to get the value. End of explanation gamma_inf = 0.4 beta = 1. for key, value in au_eos.items(): # calculate pressure p = au_eos[key].cal_p(v_std, temp) # add prefix to the parameters. this is important to distinguish thermal and static parameters eos_st = st_model[key](prefix='st_') eos_th = eos.Dorogokupets2007Model(n_3c, z_3c, prefix='th_') # define initial values for parameters params = eos_st.make_params(v0=v0['3C'], k0=k0_3c[key], k0p=k0p_3c[key]) params += eos_th.make_params(v0=v0['3C'], gamma0=gamma0, gamma_inf=gamma_inf, beta=beta, theta0=theta0) # construct PVT eos pvteos = eos_st + eos_th # fix static parameters and some other well known parameters params['th_v0'].vary=False; params['th_theta0'].vary=False; params['th_gamma0'].vary=False; params['st_v0'].vary=False; params['st_k0'].vary=False; params['st_k0p'].vary=False # calculate weights. setting it None results in unweighted fitting weights = None #1./unp.std_devs(p) #None fit_result = pvteos.fit(unp.nominal_values(p), params, v=unp.nominal_values(v), temp=unp.nominal_values(temp))#, weights=weights) print('********'+key) print(fit_result.fit_report()) # plot fitting results eos.plot.thermal_fit_result(fit_result, p_err=unp.std_devs(p), v_err=unp.std_devs(v), title=key) Explanation: The warning message above is because the static EOS does not need temperature. lmfit generates warning if an assigned independent variable is not used in fitting for any components. 5. Data fitting with Dorogokupets2007 equation (3C) The cell below shows fitting using Altschuler equation for the thermal part of eos. End of explanation q0 = 1. q1 = 1. for key, value in au_eos.items(): # calculate pressure p = au_eos[key].cal_p(v_std, temp) # add prefix to the parameters. this is important to distinguish thermal and static parameters eos_st = st_model[key](prefix='st_') eos_th = eos.SpezialeModel(n_3c, z_3c, prefix='th_') # define initial values for parameters params = eos_st.make_params(v0=v0['3C'], k0=k0_3c[key], k0p=k0p_3c[key]) params += eos_th.make_params(v0=v0['3C'], gamma0=gamma0, q0=q0, q1=q1, theta0=theta0) # construct PVT eos pvteos = eos_st + eos_th # fix static parameters and some other well known parameters params['th_v0'].vary=False; params['th_theta0'].vary=False; params['th_gamma0'].vary=False; params['st_v0'].vary=False; params['st_k0'].vary=False; params['st_k0p'].vary=False # calculate weights. setting it None results in unweighted fitting weights = None #1./unp.std_devs(p) #None fit_result = pvteos.fit(unp.nominal_values(p), params, v=unp.nominal_values(v), temp=unp.nominal_values(temp))#, weights=weights) print('********'+key) print(fit_result.fit_report()) # plot fitting results eos.plot.thermal_fit_result(fit_result, p_err=unp.std_devs(p), v_err=unp.std_devs(v), title=key) Explanation: 6. 
Data fitting with Speziale equation (3C) Speziale et al. (2000) presented a different way to describe the behavior of the Grüneisen parameter. End of explanation
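For reference, the two volume dependences of the Grüneisen parameter compared in this notebook are commonly written as follows (standard published forms — the constant-q law and the Speziale et al. (2000) parameterization — quoted here rather than taken from the pytheos code):

$$\gamma(V) = \gamma_0 \left(\frac{V}{V_0}\right)^{q} \qquad \text{(constant } q\text{)}$$

$$\gamma(V) = \gamma_0 \exp\!\left[ \frac{q_0}{q_1} \left( \left(\frac{V}{V_0}\right)^{q_1} - 1 \right) \right] \qquad \text{(Speziale et al., 2000)}$$

so the fitted parameters $q_0$ and $q_1$ control the value of $q$ and its volume dependence, respectively.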
12,925
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction Brief Overview Is a training set something immutable and unexpandable? Active learning relates to situations where the answer is no. The training set size can be increased, but, of course, labelling of new examples is not costless. Pool-based setup of active learning assumes that, given a model and a training set, there is also a fixed and known $n$-element set of initially unlabelled examples and the goal is to select $k$, $k < n$, examples from there such that disclosure of their target variables produces the most significant impact on model quality. There are other setups of active learning problems as well (e.g., how to synthesize feature representations of objects to be studied), but all of them are beyond the scope of this demo. In this notebook, one can find answers to the following questions Step1: Notebook-level Settings Step2: User-defined Settings Step4: Dataset Generation In this section, a synthetic dataset that is used by further examples is created. You can skip details of this section if you are interested only in user interface of active learning utilities. If this is the case, go to "Step-by-Step Tutorial" section. Step5: Step-by-Step Tutorial As of now, the most convenient and comprehensive user interface is provided by CombinedSamplerFromPool class. Its instances can exploit accumulated knowledge about decision boundary of the model and can make exploratory actions. Various approaches to exploitation and exploration are supported. Initialization Class CombinedSamplerFromPool have two initialization arguments Step6: The difference between these two ways becomes more clear if tools must be something more complicated than just one estimator. For example, tools can be a committee. Step7: So a sole estimator is passed only in the second case, whereas in the first case a committee of estimators is passed. One subtle issue is that formulas for confidence, margin, entropy, and divergence make rigorous sense only when predicted by classifier probabilities are true probabilities, i.e., numerical quantifications of uncertainty. However, some classifiers return just ordinal degrees of their internal assurance in class labels. Although such numbers are called probabilities, they are not probabilities. To go over this obstacle, it is supposed to calibrate predicted probabilities with Platt calibration or with isotonic regression. A class that can run any of these options is provided by sklearn package. Step8: An argument named scorers_probabilities Now, go to scorers_probabilities argument. It must be a list of floats. Step9: In the above example, $\varepsilon$-greedy strategy is implemented. After enough data are gathered, it still performs plenty of exploratory actions and this is a drawback of this strategy (at least in static environments). To fix it, gradual decrease of exploration probability is needed. It can be done by calls of set_scorers_probabilities method. Step10: Usage Usage of a created instance is as simple as the next cell. Step13: Illustrative End-to-End Example Here $\varepsilon$-greedy strategy is compared with a benchmark based on random selection from a pool. Step15: To conclude, it can be seen that there is a noticable gain from usage of active learning instead of selecting objects randomly. 
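One exploratory option described in this section, the 'density' scorer, is not exercised anywhere in the accompanying code. Below is a hedged sketch of how it might be wired up, based directly on the description (which names sklearn.mixture.GaussianMixture or sklearn.neighbors.KernelDensity as admissible tools); the bandwidth, the synthetic arrays and the number of picked objects are illustrative choices only.

import numpy as np
from sklearn.neighbors import KernelDensity
from dsawl.active_learning import CombinedSamplerFromPool

X_labelled = np.random.uniform(-2, 2, size=(30, 2))   # stand-in for X_train_initial
X_pool = np.random.uniform(-2, 2, size=(100, 2))      # stand-in for X_new

density_sampler = CombinedSamplerFromPool(scorers=['density'])
kde = KernelDensity(bandwidth=0.5).fit(X_labelled)
density_sampler.set_tools(tools=kde, scorer_id=0)

# Objects lying in low-density regions (potential outliers) are ranked first.
indices = density_sampler.pick_new_objects(X_pool, n_to_pick=3)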
Customized Extensions Now suppose that none of the pre-defined strings is an appropriate choice for someone, because this user's priorities are unusual. Exploration is important, and it is also more desirable to disclose a label of the first class than a label of the second or third class. Moreover, sampling objects exactly near the decision boundary is not important. Sounds strange, doesn't it? However, it is easy to meet these specifications with the dsawl package. Below, it is shown how to extend the standard functionality with your own code. First of all, define a customized scoring function. Step16: Then make a scorer. A single classifier is involved, so it can be UncertaintyScorerForClassification. Step17: And now everything is ready for the applied code.
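The section above notes that the exploration probability of the ε-greedy sampler should be decreased over time through set_scorers_probabilities, but no concrete schedule is given. The helper below is a minimal hedged sketch of one possible schedule (exponential decay of ε after every picked object); the function name, the decay constant and the starting value are illustrative and not part of dsawl.

def pick_with_decaying_epsilon(sampler, X_pool, n_to_pick, eps0=0.10, decay=0.95):
    # sampler is assumed to be a CombinedSamplerFromPool built with
    # [exploitation_scorer, 'random'], as in the epsilon-greedy example above.
    picked = []
    eps = eps0
    for _ in range(n_to_pick):
        sampler.set_scorers_probabilities([1.0 - eps, eps])
        picked.extend(sampler.pick_new_objects(X_pool, n_to_pick=1))
        eps *= decay    # gradually shift weight from exploration to exploitation
    return picked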
Python Code: import math from copy import copy from typing import List import numpy as np import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns from sklearn.base import BaseEstimator from sklearn.metrics import accuracy_score from sklearn.ensemble import RandomForestClassifier from sklearn.calibration import CalibratedClassifierCV from dsawl.active_learning import CombinedSamplerFromPool from dsawl.active_learning.utils import make_committee Explanation: Introduction Brief Overview Is a training set something immutable and unexpandable? Active learning relates to situations where the answer is no. The training set size can be increased, but, of course, labelling of new examples is not costless. Pool-based setup of active learning assumes that, given a model and a training set, there is also a fixed and known $n$-element set of initially unlabelled examples and the goal is to select $k$, $k < n$, examples from there such that disclosure of their target variables produces the most significant impact on model quality. There are other setups of active learning problems as well (e.g., how to synthesize feature representations of objects to be studied), but all of them are beyond the scope of this demo. In this notebook, one can find answers to the following questions: * How to use implementations of active learning strategies from dsawl package? * How do $\varepsilon$-greedy active learning perform relatively random selection? References An article that contains review of approaches to active learning: Yang, 2017; An article about EG-Active algorithm: Bouneffouf, 2014. General Preparations Import Statements End of explanation np.random.seed(361) sns.set() Explanation: Notebook-level Settings End of explanation # It is not a good practice to store binary files # (like PNG images) in a Git repository, but for # your local use you can set it to `True`. draw_plots = False Explanation: User-defined Settings End of explanation dimensionality = 2 lower_bound = -2 upper_bound = 2 pool_size = 300 X_train_initial = np.array( [[1, -1], [2, -2], [3, -3], [-1, -1], [-2, -2], [-3, -3], [0, 1], [0, 2], [0, 3]] ) X_new = np.random.uniform( lower_bound, upper_bound, size=(pool_size, dimensionality) ) X_hold_out = np.random.uniform( lower_bound, upper_bound, size=(pool_size, dimensionality) ) def compute_target(X: np.ndarray) -> np.ndarray: Compute class label for a simple classification problem where 2D plane is split into three regions by rays such that they start from the origin and an angle between any pair of them has 120 degrees. 
:param X: coordinates of points from the plane :return: labels of regions where points are located def compute_target_for_row(x: np.ndarray) -> int: if x[0] > 0: return 1 if x[1] - math.tan(math.radians(30)) * x[0] > 0 else 2 else: return 1 if x[1] + math.tan(math.radians(30)) * x[0] > 0 else 3 y = np.apply_along_axis(compute_target_for_row, axis=1, arr=X) return y y_train_initial = compute_target(X_train_initial) y_new = compute_target(X_new) y_hold_out = compute_target(X_hold_out) if draw_plots: fig = plt.figure(figsize=(15, 15)) ax = fig.add_subplot(111) for label, color in zip(range(1, 4), ['b', 'r', 'g']): curr_X = X_train_initial[y_train_initial == label, :] ax.scatter(curr_X[:, 0], curr_X[:, 1], c=color, marker='D') for label, color in zip(range(1, 4), ['b', 'r', 'g']): curr_X = X_new[y_new == label, :] ax.scatter(curr_X[:, 0], curr_X[:, 1], c=color) Explanation: Dataset Generation In this section, a synthetic dataset that is used by further examples is created. You can skip details of this section if you are interested only in user interface of active learning utilities. If this is the case, go to "Step-by-Step Tutorial" section. End of explanation sampler = CombinedSamplerFromPool(scorers=['confidence']) clf = RandomForestClassifier() clf.fit(X_train_initial, y_train_initial) sampler.set_tools(tools=clf, scorer_id=0) sampler = CombinedSamplerFromPool(scorers=['confidence']) sampler.update_tools( X_train=X_train_initial, y_train=y_train_initial, est=RandomForestClassifier(), scorer_id=0 ) Explanation: Step-by-Step Tutorial As of now, the most convenient and comprehensive user interface is provided by CombinedSamplerFromPool class. Its instances can exploit accumulated knowledge about decision boundary of the model and can make exploratory actions. Various approaches to exploitation and exploration are supported. Initialization Class CombinedSamplerFromPool have two initialization arguments: scorers and scorers_probabilities. Let us discuss both of them. An argument named scorers An argument named scorers defines a list of internal entities that rank new objects by usefullness of their labels. The more valuable a label of an object is, the higher the rank should be. As for technical implementation, all scoring entities are instances of classes that inherit from these one class: dsawl.active_learning.pool_based_sampling.BaseScorer. Any instance that satisfies the above condition can be an element of scorers. However, the easiest and the safest way to pass value of scorers is to pass list of strings that can be recognized as names of pre-defined scorers. 
If it is a classification problem, supported strings are: * 'confidence' — the $i$-th object has score $-(\max_{j} \hat{p}{ij})$ where $\hat{p}{ij}$ is estimated (predicted) probability that the $i$-th object is an object of $j$-th class; * 'margin' — the $i$-th object has score $-(\max_{j} \hat{p}{ij} - \max{j \ne \hat{y}i} \hat{p}{ij})$ where $\hat{y}i$ is predicted class of the $i$-th object, i.e., $\hat{y}_i = \arg \max{j} \hat{p}{ij}$; * 'entropy' — the $i$-th object has score $\sum{j} \hat{p}{ij} \log \hat{p}{ij}$; * 'divergence' — the $i$-th object has score $\sum_{k}D_{KL}(\hat{p}{ijk} \, \Vert \, \overline{p}{ij})$ where there is a committee (i.e., list) of classifiers indiced by $k$, $\hat{p}{ijk}$ is predicted by the $k$-th classifier probability that the $i$-th object is an object of $j$-th class, $\overline{p}{ij}$ is the average of all $\hat{p}{ijk}$ over $k$, and $D{KL}$ is Kullback-Leibler divergence between $\hat{p}{ijk}$ and $\overline{p}{ij}$ (both $\hat{p}{ijk}$ and $\overline{p}{ij}$ are considered to be distributions of class label $j$). Note that for a binary classification problem, the first three options result in the same ranking. If it is a regression problem, supported strings are: * 'predictions_variance' — the $i$-th object has score $\mathrm{Var}k \hat{y}{ik}$ where there is a committee of regressors indiced by $k$, $\hat{y}_{ik}$ is predicted by the $k$-th regressor target value for the $i$-th object and variance is taken over $k$; * 'target_variance' — the $i$-th object has score that is equal to an estimate of target's variance at it: $\max(\hat{y^2}_i - \hat{y}_i^2, 0)$ where there is a pair of regressors and the first one returns $\hat{y}_i$, i.e., prediction of the target itself, whereas the second one returns $\hat{y^2}_i$, i.e., prediction of the squared target. Finally, there are two strings for making exploratory actions: * 'random' — all objects are ranked randomly; * 'density' — the $i$-th object has score equal to negative logarithm of estimated density of data distribution at the corresponding to the $i$-th object point; such scoring is designed for outliers exploration. All of the above strings define scoring function, but do not define tools of scorers. The meaning of the word 'tools' depends on subclass of BaseScorer class: * if a string is 'confidence', 'margin', or 'entropy', tools are a classifier; * if a string is 'divergence', tools are a committee of classifiers; * if a string is 'predictions_variance', tools are a committee of regressors; * if a string is 'target_variance', tools are a pair of regressors; * if a string is 'random', tools are None; * if a string is 'density', tools are a density estimator (such as sklearn.mixture.GaussianMixture or sklearn.neighbors.KernelDensity). If scorer is created based on string, tools must be passed explicitly. It can be done either with set_tools method (properly trained tools are required) or with update_tools method (just one bare estimator is needed, but training data must be provided too). Below cells show two equivalent ways of passing and setting scorers. 
End of explanation sampler = CombinedSamplerFromPool(['divergence']) clf = RandomForestClassifier() committee = make_committee(clf, X_train_initial, y_train_initial) sampler.set_tools(committee, scorer_id=0) sampler = CombinedSamplerFromPool(['divergence']) sampler.update_tools(X_train_initial, y_train_initial, RandomForestClassifier(), scorer_id=0) Explanation: The difference between these two ways becomes more clear if tools must be something more complicated than just one estimator. For example, tools can be a committee. End of explanation sampler = CombinedSamplerFromPool(['confidence']) clf = CalibratedClassifierCV(RandomForestClassifier()) clf.fit(X_train_initial, y_train_initial) sampler.set_tools(clf, scorer_id=0) Explanation: So a sole estimator is passed only in the second case, whereas in the first case a committee of estimators is passed. One subtle issue is that formulas for confidence, margin, entropy, and divergence make rigorous sense only when predicted by classifier probabilities are true probabilities, i.e., numerical quantifications of uncertainty. However, some classifiers return just ordinal degrees of their internal assurance in class labels. Although such numbers are called probabilities, they are not probabilities. To go over this obstacle, it is supposed to calibrate predicted probabilities with Platt calibration or with isotonic regression. A class that can run any of these options is provided by sklearn package. End of explanation epsilon_greedy_sampler = CombinedSamplerFromPool( ['margin', 'random'], [0.95, 0.05] ) epsilon_greedy_sampler.update_tools( X_train_initial, y_train_initial, RandomForestClassifier(), scorer_id=0 ) Explanation: An argument named scorers_probabilities Now, go to scorers_probabilities argument. It must be a list of floats. End of explanation epsilon_greedy_sampler.set_scorers_probabilities([0.99, 0.01]) Explanation: In the above example, $\varepsilon$-greedy strategy is implemented. After enough data are gathered, it still performs plenty of exploratory actions and this is a drawback of this strategy (at least in static environments). To fix it, gradual decrease of exploration probability is needed. It can be done by calls of set_scorers_probabilities method. End of explanation indices = sampler.pick_new_objects(X_new, n_to_pick=3) X_new[indices, :] Explanation: Usage Usage of a created instance is as simple as the next cell. End of explanation # Random Forest usually does not warp probabilities. clf = RandomForestClassifier(n_estimators=20, random_state=361) max_n_points_to_explore = 100 scorers = ['margin', 'random'] scorers_probabilities = [0.9, 0.1] def report_accuracy_of_benchmark( n_new_points: int, clf: BaseEstimator, X_train_initial: np.ndarray, y_train_inital: np.ndarray, X_new: np.ndarray, y_new: np.ndarray, X_hold_out: np.ndarray, y_hold_out: np.ndarray ) -> float: Compute accuracy of approach where `n_new_points` objects are picked from a pool at random, without active learning. 
X_train = np.vstack((X_train_initial, X_new[:n_new_points, :])) y_train = np.hstack((y_train_initial, y_new[:n_new_points])) clf.fit(X_train, y_train) y_hold_out_hat = clf.predict(X_hold_out) return accuracy_score(y_hold_out, y_hold_out_hat) def report_accuracy_of_epsilon_greedy_strategy( n_new_points: int, clf: BaseEstimator, scorers: List[str], scorers_probabilities: List[float], X_train_initial: np.ndarray, y_train_inital: np.ndarray, X_new: np.ndarray, y_new: np.ndarray, X_hold_out: np.ndarray, y_hold_out: np.ndarray ) -> float: Compute accuracy of epsilon-greedy approach to active learning. X_train = copy(X_train_initial) y_train = copy(y_train_inital) clf.fit(X_train, y_train) sampler = CombinedSamplerFromPool( scorers, scorers_probabilities ) sampler.set_tools(clf, scorer_id=0) for i in range(n_new_points): indices = sampler.pick_new_objects(X_new, n_to_pick=1) X_train = np.vstack((X_train, X_new[indices, :])) y_train = np.hstack((y_train, y_new[indices])) sampler.update_tools(X_train, y_train, scorer_id=0) X_new = np.delete(X_new, indices, axis=0) y_new = np.delete(y_new, indices) clf = sampler.get_tools(0) y_hold_out_hat = clf.predict(X_hold_out) return accuracy_score(y_hold_out, y_hold_out_hat) benchmark_scores = [ report_accuracy_of_benchmark( n, clf, X_train_initial, y_train_initial, X_new, y_new, X_hold_out, y_hold_out ) for n in range(1, max_n_points_to_explore + 1) ] sum(benchmark_scores) epsilon_greedy_scores = [ report_accuracy_of_epsilon_greedy_strategy( n, clf, scorers, scorers_probabilities, X_train_initial, y_train_initial, X_new, y_new, X_hold_out, y_hold_out ) for n in range(1, max_n_points_to_explore + 1) ] sum(epsilon_greedy_scores) if draw_plots: fig = plt.figure(figsize=(15, 15)) ax = fig.add_subplot(111) ax.plot(benchmark_scores) ax.plot(epsilon_greedy_scores, c='g') Explanation: Illustrative End-to-End Example Here $\varepsilon$-greedy strategy is compared with a benchmark based on random selection from a pool. End of explanation def compute_bayesian_scores( predicted_probabilities: np.ndarray, class_of_interest: int = 0 ) -> np.ndarray: Sample labels of objects from corresponding to them predicted distributions and return binary indicators of having a label of the class of interest. :param predicted_probabilities: predicted by the classifier probabilities of classes for each of the new objects, shape = (n_new_objects, n_classes); it is recommended to pass calibrated probabilities :param class_of_interest: ordinal number of class of interest, i.e., index of column with this class probabilities :return: indicators that labels sampled from predicted distributions are labels of the class of interest n_classes = predicted_probabilities.shape[1] sampled_labels = [] for distribution in predicted_probabilities: sampled_labels.append(np.random.choice(n_classes, p=distribution)) sampled_labels = np.array(sampled_labels) result = (sampled_labels == class_of_interest).astype(int) return result Explanation: To conclude, it can be seen that there is a noticable gain from usage of active learning instead of selecting objects randomly. Customized Extensions Now suppose that any of pre-defined strings is not an appropriate choice for someone, because priorities of this user are unusual. Exploration is important, and another important thing is that it is more desirable to disclose the first class label than to disclose a label of second or third class. Moreover, sampling objects exactly near the decision boundary is not important. Sounds strange, does not it? 
However, it is easy to meet this specifications with dsawl package. Below, it is shown how to extend standard functionality with your own code. First of all, define customized scoring function. End of explanation from dsawl.active_learning import UncertaintyScorerForClassification scorer = UncertaintyScorerForClassification( scoring_fn=compute_bayesian_scores ) Explanation: Then make a scorer. A single classifier is involved, so it can be UncertaintyScorerForClassification. End of explanation sampler = CombinedSamplerFromPool(scorers=[scorer]) clf = RandomForestClassifier() clf.fit(X_train_initial, y_train_initial) sampler.set_tools(tools=clf, scorer_id=0) indices = sampler.pick_new_objects(X_new, n_to_pick=3) X_new[indices, :] Explanation: And now all is ready for applied code. End of explanation
12,926
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href="http Step1: Topographic grids For this tutorial we will consider one topographic surface. Here it is plotted in three dimensions. Step2: Initalizing and running the FlowAccumulator To instantiate the FlowAccumulator, you must pass it the minimum of a model grid that has a field called 'topographic__elevation'. Alternatively, you can pass it the name of another field name at node, or an array with length number of nodes. This is the surface over which flow is first directed and then accumulated. FlowAccumulator will create and use a FlowDirector to calculate flow directions. The default FlowDirector is FlowDirectorSteepest, which is the same as D4 in the special case of a raster grid. There are a few different ways to specify which FlowDirector you want FlowAccumulator to use. The next section will go over these options. FlowAccumulator can take a constant or spatially variable input called runoff_rate, which it uses to calculate discharge. Alternatively, if there is an at_node field called water__unit_flux_in and no value is specified as the runoff_rate, FlowAccumulator will use the values stored in water__unit_flux_in. In addition to directing flow and accumulating it in one step, FlowAccumulator can also deal with depression finding internally. This can be done by passing a DepressionFinder to the keyword argument depression_finder. The default behavior is to not deal with depressions internally. Finally, if the FlowDirector you are using takes any keyword arguments, those can be passed to the FlowAccumulator. For example, FlowDirectorMFD has to option to use diagonals in addition to links and to proportion flow based on either the slope or the the square root of slope. Step3: The FlowAccumulator has two public methods Step4: We can illustrate the receiver node FlowDirectionSteepest has assigned to each donor node using a plotting function in Landlab called drainage_plot. We will see many of these plots in this tutorial so let's take a moment to walk through the plot and what it contains. The background image (white to black) shows the values of topographic elevation of the underlying surface or any other at_node field we choose to plot. The colors of the dots inside of each pixel show the locations of the nodes and the type of node. The arrows show the direction of flow, and the color shows the proportion of flow that travels along that link. An X on top of a node indicates that node is a local sink and flows to itself. Note that in Landlab Boundary Nodes, or nodes that are on the edge of a grid, do not have area and do not contribute flow to nodes. These nodes can either be Fixed Gradient Nodes, Fixed Value Nodes, or Closed Nodes. With the exception of Closed Nodes the boundary nodes can receive flow. An important step in all flow direction and accumulation is setting the proper boundary condition. Refer to the boundary condition tutorials for more information. Step5: In this drainage plot, we can see that all of the flow is routed down the steepest link. A plot of the drainage area would illustrate how the flow would move. Next let's make a similar plot except that instead of plotting the topographic elevation as the background, we will plot the drainage area. Step6: If we print out the drainage area, we can see that its maximum reaches 64, which is the total area of the interior of the grid. 
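Setting appropriate boundary conditions is flagged in this tutorial as an important prerequisite for flow routing, but it is not demonstrated in the accompanying cells. As a hedged sketch (assuming the standard RasterModelGrid boundary helper; see the boundary-condition tutorials for the authoritative usage), closing three edges so that accumulated flow can only leave through the bottom edge might look like this:

from landlab import RasterModelGrid

mg = RasterModelGrid((10, 10))
mg.add_zeros('topographic__elevation', at='node')
# Arguments are (right, top, left, bottom); True closes that edge.
mg.set_closed_boundaries_at_grid_edges(True, True, True, False)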
Step7: This is the same number as the number of core nodes. This makes sense becaue these are the only nodes in Landlab that have area, and in our model grid they each have an area of one. Step8: We can rain on the surface, store that rain in the field water__unit_flux_in, and then re-run the FlowAccumulator. As an example, we will 'rain' a uniformly distributed random number between 0 and 1 on every node. Since we already ran the FlowAccumulator, under the hood our grid already has a field called water__unit_flux_in and we need to set the clobber keyword to True. Step9: Next, we re-run the FlowAccumulator and plot the discharge. Step10: The basic pattern of drainage is the same but the values for the surface water discharge are different than for drainage area. Alternative ways to specify the FlowDirector FlowAccumulator allows the FlowDirector to be specified one of four ways Step11: Providing a DepressionFinder Just as with providing the FlowDirector, the DepressionFinder can be provided multiple ways. While there are presently four different FlowDirectors in Landlab, there is only one DepressionFinder. As a string of the full name of the DepressionFinder (e.g., 'DepressionFinderAndRouter') As the class name of the DepressionFinder component. As an instantiated version of a DepressionFinder component. NOTE Step12: Methods for specifying can be mixed, such that the following is permissible. Step13: Using the DepressionFinder with FlowAccumulator To conclude this tutorial, we examine an example of a Hexagonal Model grid with a depression. Step14: We will put a depression in the middle of the topography, and then see what the drainage plot looks like. Step15: As you can see, the flow gets stuck in the hole. We'd like the flow in the hole to move out and to the boundary. To route the flow out of the hole, we have two options. 1. Run the FlowAccumulator and then the DepressionFinder 2. Run them together in FlowAccumulator. The options look like the following and they are equivalent.
Python Code: %matplotlib inline # import plotting tools from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt from matplotlib import cm from matplotlib.ticker import LinearLocator, FormatStrFormatter import matplotlib as mpl # import numpy import numpy as np # import necessary landlab components from landlab import RasterModelGrid, HexModelGrid from landlab.components import FlowAccumulator from landlab.components import (FlowDirectorD8, FlowDirectorDINF, FlowDirectorMFD, FlowDirectorSteepest) from landlab.components import DepressionFinderAndRouter # import landlab plotting functionality from landlab.plot.drainage_plot import drainage_plot # create a plotting routine to make a 3d plot of our surface. def surf_plot(mg, surface='topographic__elevation', title='Surface plot of topography'): fig = plt.figure() ax = fig.gca(projection='3d') # Plot the surface. Z = mg.at_node[surface].reshape(mg.shape) color = cm.gray((Z - Z.min()) / (Z.max() - Z.min())) surf = ax.plot_surface(mg.x_of_node.reshape(mg.shape), mg.y_of_node.reshape(mg.shape), Z, rstride=1, cstride=1, facecolors=color, linewidth=0., antialiased=False) ax.view_init(elev=35, azim=-120) ax.set_xlabel('X axis') ax.set_ylabel('Y axis') ax.set_zlabel('Elevation') plt.title(title) plt.show() Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a> Introduction to the FlowAccumulator Landlab directs flow and accumulates it using two types of components: FlowDirectors use the topography to determine how flow moves between adjacent nodes. For every node in the grid it determines the nodes to receive flow and the proportion of flow to send from one node to its receiver. The FlowAccumulator uses the direction and proportion of flow moving between each node and (optionally) water runoff to calculate drainage area and discharge. In this tutorial we will go over how to initialize and run the FlowAccumulator. For tutorials on how to initialize and run a FlowDirector and a brief comparison between the different flow direction algorithms or for more detailed examples that contrast the differences between each flow direction algorithm, refer to the other tutorials in this section. First, we import the necessary python modules and make a small plotting routine. End of explanation mg = RasterModelGrid((10, 10)) _ = mg.add_field('topographic__elevation', 3. * mg.x_of_node**2 + mg.y_of_node**2, at='node') surf_plot(mg, title='Grid 1') Explanation: Topographic grids For this tutorial we will consider one topographic surface. Here it is plotted in three dimensions. End of explanation fa = FlowAccumulator(mg) # this is the same as writing: fa = FlowAccumulator(mg, surface='topographic__elevation', flow_director='FlowDirectorSteepest', runoff_rate=None, depression_finder=None) Explanation: Initalizing and running the FlowAccumulator To instantiate the FlowAccumulator, you must pass it the minimum of a model grid that has a field called 'topographic__elevation'. Alternatively, you can pass it the name of another field name at node, or an array with length number of nodes. This is the surface over which flow is first directed and then accumulated. FlowAccumulator will create and use a FlowDirector to calculate flow directions. The default FlowDirector is FlowDirectorSteepest, which is the same as D4 in the special case of a raster grid. There are a few different ways to specify which FlowDirector you want FlowAccumulator to use. The next section will go over these options. 
FlowAccumulator can take a constant or spatially variable input called runoff_rate, which it uses to calculate discharge. Alternatively, if there is an at_node field called water__unit_flux_in and no value is specified as the runoff_rate, FlowAccumulator will use the values stored in water__unit_flux_in. In addition to directing flow and accumulating it in one step, FlowAccumulator can also deal with depression finding internally. This can be done by passing a DepressionFinder to the keyword argument depression_finder. The default behavior is to not deal with depressions internally. Finally, if the FlowDirector you are using takes any keyword arguments, those can be passed to the FlowAccumulator. For example, FlowDirectorMFD has to option to use diagonals in addition to links and to proportion flow based on either the slope or the the square root of slope. End of explanation fa.run_one_step() (da, q) = fa.accumulate_flow() Explanation: The FlowAccumulator has two public methods: run_one_step() and accumulate_flow(). Both use the values of the surface provided to identify flow directions (and in the case of directing to more than one receiver, proportions) and then calculate discharge and drainage area. Both store the same information about receivers, proportions, and other calculated values to the model grid as fields. The difference is that run_one_step() does not return any values, while accumulate_flow() returns the drainage area and discharge as variables. End of explanation plt.figure() drainage_plot(mg) Explanation: We can illustrate the receiver node FlowDirectionSteepest has assigned to each donor node using a plotting function in Landlab called drainage_plot. We will see many of these plots in this tutorial so let's take a moment to walk through the plot and what it contains. The background image (white to black) shows the values of topographic elevation of the underlying surface or any other at_node field we choose to plot. The colors of the dots inside of each pixel show the locations of the nodes and the type of node. The arrows show the direction of flow, and the color shows the proportion of flow that travels along that link. An X on top of a node indicates that node is a local sink and flows to itself. Note that in Landlab Boundary Nodes, or nodes that are on the edge of a grid, do not have area and do not contribute flow to nodes. These nodes can either be Fixed Gradient Nodes, Fixed Value Nodes, or Closed Nodes. With the exception of Closed Nodes the boundary nodes can receive flow. An important step in all flow direction and accumulation is setting the proper boundary condition. Refer to the boundary condition tutorials for more information. End of explanation plt.figure() drainage_plot(mg, 'drainage_area') Explanation: In this drainage plot, we can see that all of the flow is routed down the steepest link. A plot of the drainage area would illustrate how the flow would move. Next let's make a similar plot except that instead of plotting the topographic elevation as the background, we will plot the drainage area. End of explanation print(mg.at_node['drainage_area'].reshape(mg.shape)) Explanation: If we print out the drainage area, we can see that its maximum reaches 64, which is the total area of the interior of the grid. End of explanation print(mg.number_of_core_nodes) Explanation: This is the same number as the number of core nodes. This makes sense becaue these are the only nodes in Landlab that have area, and in our model grid they each have an area of one. 
End of explanation rain = 1. + 5. * np.random.rand(mg.number_of_nodes) plt.imshow(rain.reshape(mg.shape), origin='lower', cmap='PuBu', vmin=0) plt.colorbar() plt.show() _ = mg.add_field('water__unit_flux_in', rain, at='node', clobber=True) Explanation: We can rain on the surface, store that rain in the field water__unit_flux_in, and then re-run the FlowAccumulator. As an example, we will 'rain' a uniformly distributed random number between 0 and 1 on every node. Since we already ran the FlowAccumulator, under the hood our grid already has a field called water__unit_flux_in and we need to set the clobber keyword to True. End of explanation fa.run_one_step() plt.figure() drainage_plot(mg, 'surface_water__discharge', title='Discharge') Explanation: Next, we re-run the FlowAccumulator and plot the discharge. End of explanation # option 1: Full name of FlowDirector fa = FlowAccumulator(mg, surface='topographic__elevation', flow_director='FlowDirectorSteepest') # option 2: Short name of FlowDirector fa = FlowAccumulator(mg, surface='topographic__elevation', flow_director='Steepest') # option 3: Uninstantiated FlowDirector Component fa = FlowAccumulator(mg, surface='topographic__elevation', flow_director=FlowDirectorSteepest) # option 4: Instantiated FlowDirector Component fd = FlowDirectorSteepest(mg) fa = FlowAccumulator(mg, surface='topographic__elevation', flow_director=fd) Explanation: The basic pattern of drainage is the same but the values for the surface water discharge are different than for drainage area. Alternative ways to specify the FlowDirector FlowAccumulator allows the FlowDirector to be specified one of four ways: 1. As a string of the full name of the FlowDirector (e.g., 'FlowDirectorSteepest' or 'FlowDirectorD8' ) 2. As a string of the short name of the FlowDirector method (e.g., 'Steepest' or 'D8') 3. As the class name for the desired FlowDirector component. 4. As an instantiated version of a FlowDirector component. Thus, the following four ways to instantiate a FlowAccumulator are equivalent. End of explanation # option 1: Full name of FlowDirector fa = FlowAccumulator(mg, surface='topographic__elevation', flow_director='FlowDirectorD8', depression_finder='DepressionFinderAndRouter') # option 2: Uninstantiated FlowDirector Component fa = FlowAccumulator(mg, surface='topographic__elevation', flow_director=FlowDirectorD8, depression_finder='DepressionFinderAndRouter') # option 3: Instantiated FlowDirector Component fd = FlowDirectorD8(mg) df = DepressionFinderAndRouter(mg) fa = FlowAccumulator(mg, surface='topographic__elevation', flow_director=fd, depression_finder=df) Explanation: Providing a DepressionFinder Just as with providing the FlowDirector, the DepressionFinder can be provided multiple ways. While there are presently four different FlowDirectors in Landlab, there is only one DepressionFinder. As a string of the full name of the DepressionFinder (e.g., 'DepressionFinderAndRouter') As the class name of the DepressionFinder component. As an instantiated version of a DepressionFinder component. NOTE: The current Landlab depression finder only works with FlowDirectorSteepest and FlowDirectorD8 no matter how the depression finder is run. This is because the depression finder presently only works with route-to-one methods. Thus, the following three ways to instantiated a DepressionFinder are equivalent. 
End of explanation df = DepressionFinderAndRouter(mg) fa = FlowAccumulator(mg, surface='topographic__elevation', flow_director='D8', depression_finder=df) Explanation: Methods for specifying can be mixed, such that the following is permissible. End of explanation hmg = HexModelGrid((9, 5)) _ = hmg.add_field('topographic__elevation', hmg.x_of_node + hmg.y_of_node, at='node') fa = FlowAccumulator(hmg, flow_director='MFD') fa.run_one_step() plt.figure() drainage_plot(hmg) plt.figure() drainage_plot(hmg, 'drainage_area') Explanation: Using the DepressionFinder with FlowAccumulator To conclude this tutorial, we examine an example of a Hexagonal Model grid with a depression. End of explanation hmg_hole = HexModelGrid((9, 5)) z = hmg_hole.add_field('topographic__elevation', hmg_hole.x_of_node + np.round(hmg_hole.y_of_node), at='node') hole_nodes = [21, 22, 23, 30, 31, 39, 40] z[hole_nodes] = z[hole_nodes] * 0.1 fa = FlowAccumulator(hmg_hole, flow_director='Steepest') fa.run_one_step() plt.figure() drainage_plot(hmg_hole) plt.figure() drainage_plot(hmg_hole, 'drainage_area') Explanation: We will put a depression in the middle of the topography, and then see what the drainage plot looks like. End of explanation # OPTION 1 fa = FlowAccumulator(hmg_hole, flow_director='Steepest') fa.run_one_step() df = DepressionFinderAndRouter(hmg_hole) df.map_depressions() # OPTION 2 fa = FlowAccumulator(hmg_hole, flow_director='Steepest', depression_finder='DepressionFinderAndRouter') fa.run_one_step() plt.figure() drainage_plot(hmg_hole, 'drainage_area') Explanation: As you can see, the flow gets stuck in the hole. We'd like the flow in the hole to move out and to the boundary. To route the flow out of the hole, we have two options. 1. Run the FlowAccumulator and then the DepressionFinder 2. Run them together in FlowAccumulator. The options look like the following and they are equivalent. End of explanation
12,927
Given the following text description, write Python code to implement the functionality described below step by step Description: An MNIST example for tensorflow-cloud on Google Colab This colab shows an example for using Keras to build a simple ConvNet model for MNIST, and utilize tensorflow-cloud to train the model on GCP. The example demonstrates the workflow of tensorflow-cloud. For the model definition part it is completely identical to what you would do for training locally (or on Colab); and with a simple call of tfc.run(), the training job can be moved to GCP. Setup and authentication Note that the set up and authentication steps may be needed every time if using hosted colab session. Local runtime saves this trouble. PIP Install Packages and dependencies Install tensorflow-cloud package. Please comment out first line after running this cell. Step1: Note Step2: Authenticate your GCP account Follow https Step3: Specify Cloud Storage bucket To create bucket, follow Step4: Testing This section include code for preparing data and training. These can be run with out without GCP and tensorflow_cloud. Before using GCP run, it is adviced to first test out here, possibly with smaller data size. Once ready, this section do not need to be changed. Prepare data Step5: Create and train model locally Step6: Save to GCS bucket When moving on to training on GCP, the trained model will be lost after training is complete unless it is saved on a cloud location. Step7: Using tensorflow_cloud Training on GCP After above cell is tested, run following cell to use GCP for training. Step8: Evaluate the model.
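The storage bucket used below for the Docker image build and for saving the trained model must already exist; the notebook only links to instructions for creating it. A hedged sketch of one way to create it from the same environment (assuming the Cloud SDK's gsutil tool is available after authentication, as it is on Colab) follows; the bucket name is the same placeholder used in the notebook.

import os

BUCKET_NAME = "[gcs-bucket-name]"    # placeholder, as in the notebook
COMPUTE_REGION = "us-central1"

# gsutil ships with the Cloud SDK; this mirrors the os.system/gcloud pattern
# already used in the authentication cell.
os.system(f"gsutil mb -l {COMPUTE_REGION} gs://{BUCKET_NAME}")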
Python Code: import os import sys try: import tensorflow_cloud as tfc except: os.system('pip install -U --quiet tensorflow-cloud') import tensorflow_cloud as tfc import tensorflow_datasets as tfds import tensorflow as tf print(tf.__version__) Explanation: An MNIST example for tensorflow-cloud on Google Colab This colab shows an example for using Keras to build a simple ConvNet model for MNIST, and utilize tensorflow-cloud to train the model on GCP. The example demonstrates the workflow of tensorflow-cloud. For the model definition part it is completely identical to what you would do for training locally (or on Colab); and with a simple call of tfc.run(), the training job can be moved to GCP. Setup and authentication Note that the set up and authentication steps may be needed every time if using hosted colab session. Local runtime saves this trouble. PIP Install Packages and dependencies Install tensorflow-cloud package. Please comment out first line after running this cell. End of explanation PROJECT_ID = "[gcp-project-id]" #@param {type:"string"} COMPUTE_REGION = "us-central1" #@param {type:"string"} Explanation: Note: Try installing using sudo, if the above command throw any permission errors. Restart runtime session if cannot import. Set up your GCP Project Id Enter your Project Id in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. End of explanation # Upload the downloaded JSON file that contains your key. if 'google.colab' in sys.modules: from google.colab import files keyfile_upload = files.upload() keyfile = list(keyfile_upload.keys())[0] os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = keyfile os.system(f'gcloud auth activate-service-account --key-file {keyfile}') Explanation: Authenticate your GCP account Follow https://github.com/tensorflow/cloud/blob/master/README.md#setup-instructions to get json key. Then proceed: End of explanation BUCKET_NAME = "[gcs-bucket-name]" #@param {type:"string"} MODEL_PATH = "examples-colab" #@param {type:"string"} Explanation: Specify Cloud Storage bucket To create bucket, follow: https://cloud.google.com/ai-platform/docs/getting-started-keras#create_a_bucket The bucket will both be used for creating docker image and for saving results. End of explanation tfds.disable_progress_bar() # Download the dataset datasets, info = tfds.load(name="mnist", with_info=True, as_supervised=True) mnist_train, mnist_test = datasets["train"], datasets["test"] # Setup input pipeline num_train_examples = info.splits["train"].num_examples num_test_examples = info.splits["test"].num_examples BUFFER_SIZE = 10000 BATCH_SIZE = 64 def scale(image, label): image = tf.cast(image, tf.float32) image /= 255 return image, label train_dataset = mnist_train.map(scale).cache() train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE) eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE) Explanation: Testing This section include code for preparing data and training. These can be run with out without GCP and tensorflow_cloud. Before using GCP run, it is adviced to first test out here, possibly with smaller data size. Once ready, this section do not need to be changed. 
Prepare data End of explanation # Create the model model = tf.keras.Sequential( [ tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation="relu"), tf.keras.layers.Dense(10, activation="softmax"), ] ) model.compile( loss="sparse_categorical_crossentropy", optimizer=tf.keras.optimizers.Adam(), metrics=["accuracy"], ) # Function for decaying the learning rate. # You can define any decay function you need. def decay(epoch): if epoch < 3: return 1e-3 elif epoch >= 3 and epoch < 7: return 1e-4 else: return 1e-5 # Callback for printing the LR at the end of each epoch. class PrintLR(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): print( "\nLearning rate for epoch {} is {}".format( epoch + 1, model.optimizer.lr.numpy() ) ) callbacks = [tf.keras.callbacks.LearningRateScheduler(decay), PrintLR()] model.fit(train_dataset, epochs=12, callbacks=callbacks) Explanation: Create and train model locally End of explanation if BUCKET_NAME: print('saving to GCS location...') model.save(f'gs://{BUCKET_NAME}/{MODEL_PATH}') else: print('saving to local path') model.save(MODEL_PATH) Explanation: Save to GCS bucket When moving on to training on GCP, the trained model will be lost after training is complete unless it is saved on a cloud location. End of explanation # requirements file for extra pip dependencies f = open('requirements.txt', 'w') f.write('tensorflow-datasets\n') f.write('pandas') f.close() # Calling `run` from within a script with contains the Keras model. # Comment out this line for a local run to debug. tfc.run( entry_point=None, distribution_strategy="auto", requirements_txt="requirements.txt", chief_config=tfc.MachineConfig( cpu_cores=8, memory=30, accelerator_type=tfc.AcceleratorType.NVIDIA_TESLA_T4, accelerator_count=2, ), docker_config=tfc.DockerConfig( image_build_bucket=BUCKET_NAME, ), worker_count=0 ) Explanation: Using tensorflow_cloud Training on GCP After above cell is tested, run following cell to use GCP for training. End of explanation model = tf.keras.models.load_model(f'gs://{BUCKET_NAME}/{MODEL_PATH}') model.evaluate(eval_dataset) Explanation: Evaluate the model. End of explanation
12,928
Given the following text description, write Python code to implement the functionality described below step by step Description: ABC calibration of $I_\text{CaL}$ in standardised model to unified dataset. Step1: Initial set-up Load experiments used for unified dataset calibration Step2: Plot steady-state and tau functions of original model (pretty much meaningless for standardised) Step3: Combine model and experiments to produce Step4: Set up prior ranges for each parameter in the model. See the modelfile for further information on specific parameters. Prepending `log_' has the effect of setting the parameter in log space. Step5: Run ABC calibration Step6: Analysis of results
Python Code: import os, tempfile import logging import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns import numpy as np from ionchannelABC import theoretical_population_size from ionchannelABC import IonChannelDistance, EfficientMultivariateNormalTransition, IonChannelAcceptor from ionchannelABC.experiment import setup from ionchannelABC.visualization import plot_sim_results, plot_kde_matrix_custom import myokit from pyabc import Distribution, RV, History, ABCSMC from pyabc.epsilon import MedianEpsilon from pyabc.sampler import MulticoreEvalParallelSampler, SingleCoreSampler from pyabc.populationstrategy import ConstantPopulationSize Explanation: ABC calibration of $I_\text{CaL}$ in standardised model to unified dataset. End of explanation from experiments.ical_li import (li_act_and_tau, li_inact_1000, li_inact_kin_80, li_recov) modelfile = 'models/standardised_ical.mmt' Explanation: Initial set-up Load experiments used for unified dataset calibration: - Steady-state activation [Li1997] - Activation time constant [Li1997] - Steady-state inactivation [Li1997] - Inactivation time constant (fast+slow) [Li1997] - Recovery time constant (fast+slow) [Li1997] End of explanation from ionchannelABC.visualization import plot_variables sns.set_context('poster') V = np.arange(-80, 40, 0.01) sta_par_map = {'di': 'ical.d_ss', 'fi1': 'ical.f_ss', 'fi2': 'ical.f_ss', 'dt': 'ical.tau_d', 'ft1': 'ical.tau_f1', 'ft2': 'ical.tau_f2'} f, ax = plot_variables(V, sta_par_map, 'models/standardised_ical.mmt', figshape=(3,2)) Explanation: Plot steady-state and tau functions of original model (pretty much meaningless for standardised) End of explanation observations, model, summary_statistics = setup(modelfile, li_act_and_tau, li_inact_1000, li_inact_kin_80, li_recov) assert len(observations)==len(summary_statistics(model({}))) Explanation: Combine model and experiments to produce: - observations dataframe - model function to run experiments and return traces - summary statistics function to accept traces End of explanation limits = {'log_ical.p_1': (-7, 3), 'ical.p_2': (1e-7, 0.4), 'log_ical.p_3': (-7, 3), 'ical.p_4': (1e-7, 0.4), 'log_ical.p_5': (-7, 3), 'ical.p_6': (1e-7, 0.4), 'log_ical.p_7': (-7, 3), 'ical.p_8': (1e-7, 0.4), 'log_ical.A': (0., 3.)} prior = Distribution(**{key: RV("uniform", a, b - a) for key, (a,b) in limits.items()}) # Test this works correctly with set-up functions assert len(observations) == len(summary_statistics(model(prior.rvs()))) Explanation: Set up prior ranges for each parameter in the model. See the modelfile for further information on specific parameters. Prepending `log_' has the effect of setting the parameter in log space. 
End of explanation db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "standardised_ical.db")) logging.basicConfig() abc_logger = logging.getLogger('ABC') abc_logger.setLevel(logging.DEBUG) eps_logger = logging.getLogger('Epsilon') eps_logger.setLevel(logging.DEBUG) pop_size = theoretical_population_size(2, len(limits)) print("Theoretical minimum population size is {} particles".format(pop_size)) abc = ABCSMC(models=model, parameter_priors=prior, distance_function=IonChannelDistance( exp_id=list(observations.exp_id), variance=list(observations.variance), delta=0.05), population_size=ConstantPopulationSize(2000), summary_statistics=summary_statistics, transitions=EfficientMultivariateNormalTransition(), eps=MedianEpsilon(initial_epsilon=100), sampler=MulticoreEvalParallelSampler(n_procs=16), acceptor=IonChannelAcceptor()) obs = observations.to_dict()['y'] obs = {str(k): v for k, v in obs.items()} abc_id = abc.new(db_path, obs) history = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01) history = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01) Explanation: Run ABC calibration End of explanation history = History(db_path) df, w = history.get_distribution(m=0) df.describe() sns.set_context('poster') mpl.rcParams['font.size'] = 14 mpl.rcParams['legend.fontsize'] = 14 g = plot_sim_results(modelfile, li_act_and_tau, li_inact_1000, li_inact_kin_80, li_recov, df=df, w=w) plt.tight_layout() import pandas as pd N = 100 sta_par_samples = df.sample(n=N, weights=w, replace=True) sta_par_samples = sta_par_samples.set_index([pd.Index(range(N))]) sta_par_samples = sta_par_samples.to_dict(orient='records') sns.set_context('poster') mpl.rcParams['font.size'] = 14 mpl.rcParams['legend.fontsize'] = 14 V = np.arange(-140, 50, 0.01) sta_par_map = {'di': 'ical.d_ss', 'fi1': 'ical.f_ss', 'fi2': 'ical.f_ss', 'dt': 'ical.tau_d', 'ft1': 'ical.tau_f1', 'ft2': 'ical.tau_f2'} f, ax = plot_variables(V, sta_par_map, 'models/standardised_ical.mmt', [sta_par_samples], figshape=(3,2)) m,_,_ = myokit.load(modelfile) sns.set_context('paper') g = plot_kde_matrix_custom(df, w, limits=limits) plt.tight_layout() Explanation: Analysis of results End of explanation
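One detail worth keeping in mind when reading the analysis of results above: the posterior returned by history.get_distribution is a set of weighted particles (the same df and w later passed to the plotting helpers and to df.sample(..., weights=w)), so point estimates should also use the weights. The helper below is a minimal sketch of a weighted summary; its name and the choice of statistics are assumptions, not part of the original notebook.

import numpy as np
import pandas as pd

def weighted_posterior_summary(df: pd.DataFrame, w) -> pd.DataFrame:
    """Weighted mean and standard deviation for each parameter column of an ABC posterior."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                                   # normalise particle weights
    means = df.mul(w, axis=0).sum()                   # weighted mean per parameter
    var = (df.sub(means, axis=1) ** 2).mul(w, axis=0).sum()
    return pd.DataFrame({"mean": means, "sd": np.sqrt(var)})

# Hypothetical usage with the objects computed above:
# weighted_posterior_summary(df, w)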
12,929
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1>Cesta ke kořenům</h1> <p>Moto Step1: <p>Vykreslení dat do grafu zajistí <t>plot(x,y)</t> Step2: <p>Fakt je tam... jen není vidět.</p> Step3: <p>Žádáme-li víc bodů, musíme je uzavřít do hranatých závorek.</p> Step4: <p>Zjevně bude potřeba upravit měřítka na osách Step5: <p>Poznámky Step6: <p>Vlastnosti seznamu Step7: <p> Pythoní obdoba udělej něco pro několik prvků je</p> <p> $$y_n = 1 + x_n^2, \quad \mathrm{pro} \ n = 1, N$$ </p> Step8: <h2>Vektory (pole)</h2> <p>Funkci $y = f(x)$ vykreslíme tak, že vygenerujeme sadu (vektor) bodů $\pmb{x} = [x_1, x_2, \ldots ,x_N]$ a vypočteme $y_n = f(x_n)$ pro $n=1,2,\ldots, N$. </p> <p>Jenže jak vygenerovat vektor?</p> Step9: <p><a href="http Step10: <p>$\pmb{y} = f(\pmb{x}) = 1 + \pmb{x}^2$</p> Step11: <h3>Vlastnosti vektorů</h3> <ul> <li>Jsou odvozené z matematických vlastností -- lze je sčítat, násobit (dvěma způsoby), mají velikost,..</li> <li>Počet prvků je neměnný a daný při definici.</li> <li>Možné definice Step12: <p>A právě toto je trik, který potřebujeme na vygenerování $x_n$.</p> Step13: <p>(Už vidíme, proč má funkce <t>plot(x,y)</t> jako default spojení čarami.</p> <p>Násobení vektorů ve mně vyvolává pnutí Step14: <h3>Obecné vlastnosti vektorových struktur</h3> <ul> <li>Sdružení mnoha prvků dohromady (kompaktnejší zápis, skoro neomezené délky)</li> <li>Nad prvky jsou definovány různe operace (+,-,*,x,..).</li> </ul> <h2>Kreslení grafů funkcí</h2> <p>Zkusíme několik speciálních funkcí (takových, které se nedají zapsat konečným počtem operací).</p> Step15: <p><a href="https Step16: <h2>Definice vlastních funkcí</h2> <pre>def jmeno(argumenty) Step17: <h2>Fourierova řada</h2> <p>Nyní máme vše připraveno pro kreslení složených funkcí. Například aproximace pily Step18: <h2>Rozlišení dalekohledu</h2> <img src="https Step19: <p>Hledání minima může být snadno nahrazeno řešením rovnice $f(x) = J_1(x) / x = 0$</p> Step20: <p>Výsledek řešení rovnice $f(x) = J_1(x)/x = 0$ je pro $x=3.8317 \pm 0.0001$.</p> Step21: <h2>Půlení intervalu</h2> <p>Numerické řešení rovnice $f(x) = 0$ je založeno na počátečním odhadu krajů intervalu $a,b$ pro které musí platit $f(a) \, f(b) < 0$. Ke správnému řešení se propracujeme pomocí půlení intervalu a testování předchozí podmínky. Tedy pro bod v polovině $$x = \frac{a + b}{2}$$ a víme, že kořen je v tom intervalu, pokud $f(a) f(x) < 0$ nebo $f(b) f(x) < 0$. Step22: <p>Potřebujeme nějak učinně opakovat výpočty a rozhodovat se na základě hodnot funkce.</p> <h2>Cykly</h2> <p>Cyklické opakování je možné zařídit prostřednictvím příkazu</p> <pre>for k in [vektor] Step23: <h2>Rozhodovací podmínky</h2> <p>Občas je potřeba se rozhodnout v závislosti na hodnotě nějaké podmínky. To je možné prostřednictvím příkazu Step24: <h2>Automatické půlení</h2> Step25: <h2>Knihovní funkce</h2> <a href="https
Python Code: import matplotlib.pyplot as plt # plt je vseobecne uzivana zkratka, grafy si kreslime primo do notebookove stranky: %matplotlib inline Explanation: <h1>Cesta ke kořenům</h1> <p>Moto: panda v koruně pevného stromu</p> <ul> <li>Grafy bodů</li> <li>Seznamy</li> <li>Vektory v numpy</li> <li>Grafy funkcí</li> <li>Definice vlastních funkcí</li> <li>Rozlišení dalekohledu</li> </ul> <p>Panda za nás prováděla hodně věcí automaticky -- typické pro Python. Řešení problémů je obvykle představováno kombinací několika základních kamenů. Příkladem budiž kreslení grafů.</p> <h2>Kreslení bodů</h2> <p> <a href="http://matplotlib.org/">Matplotlib.org</a> je grafický modul (sada nástrojů pro kreslení grafů). Zpřístupní se pomocí kouzelného slůvka <t>import</t>: </p> End of explanation plt.plot(1,2) # bod v x = 1, y = 2 Explanation: <p>Vykreslení dat do grafu zajistí <t>plot(x,y)</t>:</p> End of explanation plt.plot(1,2,'*') # muj oblibeny symbol je pentagram Explanation: <p>Fakt je tam... jen není vidět.</p> End of explanation plt.plot([1,2,3],[11,12,13],'+') # pokud nedefinujeme kreslici symbol, spoji se carou plt.plot? Explanation: <p>Žádáme-li víc bodů, musíme je uzavřít do hranatých závorek.</p> End of explanation plt.xlim(0.9,3.1) plt.ylim(10,15) plt.plot([1,2,3],[11,12,13],'*') Explanation: <p>Zjevně bude potřeba upravit měřítka na osách: </p> End of explanation x = [1,2,3] # vycet y = [] # prazdny seznam Explanation: <p>Poznámky:</p> <ul> <li>Modul pandas nám zajistil načtení bodů ze souboru. Plot ho umí vykreslit.</li> <li><b>Důležité:</b> plot(x,y) umí kreslit jen body nikoliv přímo funkce.</li> </ul> <h2>Seznam (list)</h2> <p>Chceme-li vykreslit mnoho bodů, je nutné je zapsat do seznamu. Seznam je něco jako pojmenovaný sloupec v tabulkovém procesoru.</p> <p>Vytváří se výčtem nebo vygenerováním nebo ...:</p> End of explanation x.append(4) # pridani prvku s hodnotou 4 nakonec print(x) x.pop() # odebrani posledniho print(x) print(x[2]) # vypsani s indexem 2 -- pozice 3 print(len(x)) # zjisteni delky x = [1,2,3] # prvni sloupecek y = [] y.append(1 + x[0]**2) y.append(1 + x[1]**2) y.append(1 + x[2]**2) # kapanek neprehledne plt.xlim(0.9,3.1) plt.ylim(0,12) plt.plot(x,y,'*') # presna obdoba Explanation: <p>Vlastnosti seznamu:</p> <ul> <li>Neomezený počet prvků.</li> <li>Prvky lze libovolně přidávat a ubírat.</li> <li>Jednotlivé prvky jsou dostupné jako jmeno[]</li> <li>První prvek má index nula: x[0]</li> <li><a href="https://docs.python.org/3/tutorial/datastructures.html">python.org</a> </ul> End of explanation y = [1 + x[n]**2 for n in range(0,len(x))] print(y) plt.xlim(0.9,3.1) plt.ylim(0,12) plt.plot(x,y,'*') Explanation: <p> Pythoní obdoba udělej něco pro několik prvků je</p> <p> $$y_n = 1 + x_n^2, \quad \mathrm{pro} \ n = 1, N$$ </p> End of explanation import numpy as np Explanation: <h2>Vektory (pole)</h2> <p>Funkci $y = f(x)$ vykreslíme tak, že vygenerujeme sadu (vektor) bodů $\pmb{x} = [x_1, x_2, \ldots ,x_N]$ a vypočteme $y_n = f(x_n)$ pro $n=1,2,\ldots, N$. </p> <p>Jenže jak vygenerovat vektor?</p> End of explanation x = np.array([1,2,3,4,5]) # vektor x Explanation: <p><a href="http://numpy.org">numpy</a> definuje operace s vektory, maticemi, umí je násobit, hledat vlastní čísla,...</p> <p>Definice vektoru pomocí listu (seznamu):</p> End of explanation y = 1 + x**2 # vektor y je vypocitany z vektoru x, prvek po prvku.... 
plt.plot(x,y,'*') Explanation: <p>$\pmb{y} = f(\pmb{x}) = 1 + \pmb{x}^2$</p> End of explanation x = np.zeros(5) # petiprvkovy vektor plny nul print(x) x = np.linspace(1,5,50) # vektor s prvky 1 az 5, 50 prvku print(x) print(x[0],x[1],x[50-1],(5-1)/49) Explanation: <h3>Vlastnosti vektorů</h3> <ul> <li>Jsou odvozené z matematických vlastností -- lze je sčítat, násobit (dvěma způsoby), mají velikost,..</li> <li>Počet prvků je neměnný a daný při definici.</li> <li>Možné definice: výčtem, počtem prvků, intervalem $(a .. b)$, odvozenením z listu, ...</li> <li>Prvky jsou přístupné pomocí indexů v hranatých závorkách: x[1]</li> <li>Prvky jsou indexované od nuly: první prvek x[0], druhý x[1], ... x[N-1] -- mimořádně záludné(!)</li> </ul> End of explanation y = 1 + x**2 plt.plot(x,y) Explanation: <p>A právě toto je trik, který potřebujeme na vygenerování $x_n$.</p> End of explanation x = np.array([0,1,0]) # jednotkovy vektor podel y osy y = np.array([1,0,0]) # jednotkovy vektor podel x osy s1 = np.inner(x,y) # inner() je skalarni soucin s2 = np.cross(x,y) # cross() je vektorovy soucin print("Skalární součin ",x,"*",y,"=",s1) print("Vektorový součin ",x,"x",y,"=",s2) Explanation: <p>(Už vidíme, proč má funkce <t>plot(x,y)</t> jako default spojení čarami.</p> <p>Násobení vektorů ve mně vyvolává pnutí:</p> End of explanation import scipy.special as spec Explanation: <h3>Obecné vlastnosti vektorových struktur</h3> <ul> <li>Sdružení mnoha prvků dohromady (kompaktnejší zápis, skoro neomezené délky)</li> <li>Nad prvky jsou definovány různe operace (+,-,*,x,..).</li> </ul> <h2>Kreslení grafů funkcí</h2> <p>Zkusíme několik speciálních funkcí (takových, které se nedají zapsat konečným počtem operací).</p> End of explanation x = np.linspace(0,10) y = np.sin(x) z = np.exp(-x) plt.plot(x,y) plt.plot(x,z) x = np.linspace(0,30) # bude to zubate bessel = spec.j1(x) # funkci je tolik, ze si kazdy muze zvolit svou vlastni... plt.plot(x,bessel) x = np.linspace(0,30,1000) # zvolime jemnejsi deleni intervalu bessel = spec.j1(x) plt.plot(x,bessel) Explanation: <p><a href="https://docs.scipy.org/doc/scipy/reference/special.html">Speciální</a> funkce ve <a href="https://www.scipy.org">scipy.org</a>. Jde o knihovnu se základními bohatými matematickými funkcemi na numerické integrování, řešení rovnic, ...</p> End of explanation # priklad def fun(x): print("Argument:",x) return x + 1 x = 3 print("Fun with fun:",fun(x)) Explanation: <h2>Definice vlastních funkcí</h2> <pre>def jmeno(argumenty): prikaz(y) return vysledek </pre> End of explanation def fourier(x,n): t = 2*np.pi*x # skalar s = np.array([np.sin(t*k) for k in range(0,n) if k >0]) a = np.array([(-1)**k/k for k in range(0,n) if k >0]) return sum(a*s) x = np.linspace(0,1.5,1000) # tady nelze pouzit y = fourier(x), protoze mame jen skalarni argument y = [fourier(x[i],n=5) for i in range(0,1000)] # volbou n menime presnost aproximace plt.plot(x,y) Explanation: <h2>Fourierova řada</h2> <p>Nyní máme vše připraveno pro kreslení složených funkcí. 
Například aproximace pily:</p> $$ f(x) \approx \sum_{k=1}^N \frac{(-1)^k}{k} \sin(2\pi k x)$$ <a href="https://en.wikipedia.org/wiki/Sawtooth_wave">Pila na wiki</a> End of explanation x = np.linspace(0,20,1000) y = (spec.j1(x)/x)**2 plt.plot(x,y) Explanation: <h2>Rozlišení dalekohledu</h2> <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/1/14/Airy-pattern.svg/220px-Airy-pattern.svg.png"></img> <p><a href="https://en.wikipedia.org/wiki/Airy_disk">wiki</a>.</p> <p>Rozlišením je konvenčně myšlena poloha prvního minima intensity $$I(x) = I_0 \left( \frac{J_1(x)}{x} \right)^2,$$ kde $$x=\frac{2\pi}{\lambda} R \sin \theta$$.</p> End of explanation y = spec.j1(x)/x plt.grid() # pridani mrizky do obrazku plt.plot(x,y) plt.xlim(3,5) # zmensime interval plt.grid() plt.plot(x,y) plt.xlim(3.5,4.0) # znova zmensime interval plt.grid() plt.plot(x,y) plt.xlim(3.8,3.9) # a pokracujem ... plt.ylim(-0.1,0.1) plt.grid() plt.plot(x,y) plt.xlim(3.82,3.85) # a porad pokracujem ... plt.ylim(-0.01,0.01) plt.grid() plt.plot(x,y) plt.xlim(3.83,3.835) # porad nekoncime... plt.ylim(-1e-3,1e-3) plt.grid() plt.plot(x,y) plt.xlim(3.831,3.832) # tak naposledy.... plt.ylim(-1e-4,1e-4) plt.grid() plt.plot(x,y) Explanation: <p>Hledání minima může být snadno nahrazeno řešením rovnice $f(x) = J_1(x) / x = 0$</p> End of explanation R = 0.1 # [m] lam = 550e-9 # [m] rad = 180 / np.pi # radian je asi 57.3 stupne theta = 3.8317 / (2*np.pi / lam * (R / 2) ) * rad * 3600 print("Rozlišení dalekohledu o průměru",100*R,"cm ve viditelném světle je",theta,"arcsec.") # vomit print("Rozlišení dalekohledu o průměru {0:.0f} cm ve viditelném světle je {1:.1f} arcsec.".format(100*R,theta)) Explanation: <p>Výsledek řešení rovnice $f(x) = J_1(x)/x = 0$ je pro $x=3.8317 \pm 0.0001$.</p> End of explanation def f(x): return spec.j1(x)/x a = 3 b = 5 print(f(a)*f(b)) x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) b = x x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) a = x x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) a = x x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) b = x x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) a = x x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) b = x x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) a = x x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) b = x x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) b = x x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) a = x x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) a = x x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) a = x x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) b = x x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) a = x x = (a + b) / 2 print(x,f(x)*f(a),f(x)*f(b)) #for k in range(0,17): # x = (a + b)/2 # if Explanation: <h2>Půlení intervalu</h2> <p>Numerické řešení rovnice $f(x) = 0$ je založeno na počátečním odhadu krajů intervalu $a,b$ pro které musí platit $f(a) \, f(b) < 0$. Ke správnému řešení se propracujeme pomocí půlení intervalu a testování předchozí podmínky. Tedy pro bod v polovině $$x = \frac{a + b}{2}$$ a víme, že kořen je v tom intervalu, pokud $f(a) f(x) < 0$ nebo $f(b) f(x) < 0$. 
End of explanation # priklad cyklu for i in range(0,3): x = 1 + i**2 print("#",i," x=",x) Explanation: <p>Potřebujeme nějak učinně opakovat výpočty a rozhodovat se na základě hodnot funkce.</p> <h2>Cykly</h2> <p>Cyklické opakování je možné zařídit prostřednictvím příkazu</p> <pre>for k in [vektor]: prikaz(y) </pre> End of explanation # priklad podminky a = 1 b = 2 if (a > b): print("Podmínka splněna:", a > b) else: print("Podmínka nesplněna:",a > b) Explanation: <h2>Rozhodovací podmínky</h2> <p>Občas je potřeba se rozhodnout v závislosti na hodnotě nějaké podmínky. To je možné prostřednictvím příkazu:</p> <pre>if podminka: prikaz(y), pokud je splnena else: prikaz(y), pokud neni splnena </pre> End of explanation a = 3 b = 5 for k in range(0,20): x = (a + b) / 2 if (f(x)*f(a) < 0): b = x else: a = x print("#",k," x=",x) def pulky(f,a,b): for k in range(0,20): x = (a + b) / 2 if (f(x)*f(a) < 0): b = x else: a = x return x print("Funkce na pulky {0:.4f}.".format(pulky(f,3,5))) Explanation: <h2>Automatické půlení</h2> End of explanation import scipy.optimize x = scipy.optimize.brentq(f,3,5) print("Brent's method for x=",x) Explanation: <h2>Knihovní funkce</h2> <a href="https://docs.scipy.org/doc/scipy/reference/optimize.html#root-finding">Root finding</a> End of explanation
12,930
Given the following text description, write Python code to implement the functionality described below step by step Description: groupby With groupby, you can group data in a DataFrame and apply calculations on those groups in various ways. This Cheatbook (Cheatsheet + Notebook) introduces you to the core functionality of pandas' groupby function. Here can find the executable Jupyter Notebook version to directly play around with it! References Here you can find out more about this function. API Reference Pandas Grouper and Agg Functions Explained Understanding the Transform Function in Pandas Example Scenario This is an excerpt of a file list from a directory with the following information as separate columns / Series Step1: When to use it groupby is a great way to summarize data in a specific way to build a more higher-level view on your data (e.g., to go from code level to module level). E.g., in our scenario, we could count the number of files per directory. Let's take a look at this use case step by step. Basic Principles You can use the groupby function on our DataFrame df. As parameter, you can put in the name (or a list of names) of the Series you want to group. In our case, we want to group the directories / the Series dir. Step2: This gives you a GroupBy object. We can take a look at the built groups by inspecting the groups object of the GroupBy object. Step3: The groups object shows you the groups and their members, using their indexes. Aggregating Values Now we have built some groups, but now what? The next step is to decide what we want to do with the values that belong to a group. This means we need to tell the GroupBy object how we want to group the values. We can apply a multitude of aggregating functions here, e.g. count Step4: first Step5: max Step6: sum Step7: This gives us the number of bytes of all files that reside in a directory. Note that there is no more file Series because it doesn't contain any values we could sum up. So this Series was thrown away. We can also apply dedicated functions on each group using e.g., agg Step8: We can then group this data in a more sophisticated way by using two Series for our groups. We sum up the numeric values (= the bytes) for each file for each group. Step9: Last, we want to calculate the ratio of the files' bytes for each extension. We first calculate the overall size for each extension in each directory by using transform. The transform function doesn't compute results for each value of a group. Instead, it provides results for all values of a group. Step10: In our case, we summed up all the files' bytes of the file extensions per directory. We can add this new information to our existing DataFrame. Step11: Now we are able to calculate the ratio.
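Since the description above mentions agg (applying several aggregating functions at once) without showing it, the following is a hedged sketch of that pattern on the same toy file table; the DataFrame is simply re-created here so the snippet runs on its own, and the chosen aggregations are arbitrary.

import pandas as pd

df = pd.DataFrame({
    "file": ["hello.java", "tutorial.md", "controller.java", "build.sh", "deploy.sh"],
    "dir": ["src", "docs", "src", "src", "src"],
    "bytes": [54, 124, 36, 78, 62],
})

# One pass over the groups, several aggregations per group:
print(df.groupby("dir")["bytes"].agg(["count", "sum", "mean", "max"]))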
Python Code: import pandas as pd df = pd.DataFrame({ "file" : ['hello.java', 'tutorial.md', 'controller.java', "build.sh", "deploy.sh"], "dir" : ["src", "docs", "src", "src", "src"], "bytes" : [54, 124, 36, 78, 62] }) df Explanation: groupby With groupby, you can group data in a DataFrame and apply calculations on those groups in various ways. This Cheatbook (Cheatsheet + Notebook) introduces you to the core functionality of pandas' groupby function. Here can find the executable Jupyter Notebook version to directly play around with it! References Here you can find out more about this function. API Reference Pandas Grouper and Agg Functions Explained Understanding the Transform Function in Pandas Example Scenario This is an excerpt of a file list from a directory with the following information as separate columns / Series: file: The name of the file dir: The name of the directory where the file lives in bytes: The size of the file in bytes This data is stored into a pandas' DataFrame named df. End of explanation df.groupby('dir') Explanation: When to use it groupby is a great way to summarize data in a specific way to build a more higher-level view on your data (e.g., to go from code level to module level). E.g., in our scenario, we could count the number of files per directory. Let's take a look at this use case step by step. Basic Principles You can use the groupby function on our DataFrame df. As parameter, you can put in the name (or a list of names) of the Series you want to group. In our case, we want to group the directories / the Series dir. End of explanation df.groupby('dir').groups Explanation: This gives you a GroupBy object. We can take a look at the built groups by inspecting the groups object of the GroupBy object. End of explanation df.groupby('dir').count() Explanation: The groups object shows you the groups and their members, using their indexes. Aggregating Values Now we have built some groups, but now what? The next step is to decide what we want to do with the values that belong to a group. This means we need to tell the GroupBy object how we want to group the values. We can apply a multitude of aggregating functions here, e.g. count: count the number of entries of each group End of explanation df.groupby('dir').first() Explanation: first: take the first entry of each group End of explanation df.groupby('dir').max() Explanation: max: take the entry with the highest value End of explanation df.groupby('dir').sum() Explanation: sum: sum up all values within one group End of explanation df['ext'] = df["file"].str.split(".").str[-1] df Explanation: This gives us the number of bytes of all files that reside in a directory. Note that there is no more file Series because it doesn't contain any values we could sum up. So this Series was thrown away. We can also apply dedicated functions on each group using e.g., agg: apply a variety of aggregating functions on the groups (e.g., building the sum as well as counting the values at once) apply: apply a custom function on each group to execute calculations as you like transform: calculate summarizing values for each group (e.g., the sum of all entries for each group) We'll see these operations later on! More Advanced Use Cases Let's dig deeper into our example scenario. We want to find out which kind of files occupy what space in which directory. For this, we extract the files' extensions from the file series. We use the string split function to split by the . 
sign and keep just the last piece of the split file name (which is the file's extension). End of explanation dir_ext_bytes = df.groupby(['dir', 'ext']).sum() dir_ext_bytes Explanation: We can then group this data in a more sophisticated way by using two Series for our groups. We sum up the numeric values (= the bytes) for each file for each group. End of explanation bytes_per_dir = dir_ext_bytes.groupby('dir').transform('sum') bytes_per_dir Explanation: Last, we want to calculate the ratio of the files' bytes for each extension. We first calculate the overall size for each extension in each directory by using transform. The transform function doesn't compute results for each value of a group. Instead, it provides results for all values of a group. End of explanation dir_ext_bytes['all'] = bytes_per_dir dir_ext_bytes Explanation: In our case, we summed up all the files' bytes of the file extensions per directory. We can add this new information to our existing DataFrame. End of explanation dir_ext_bytes['ratio'] = dir_ext_bytes['bytes'] / dir_ext_bytes['all'] dir_ext_bytes Explanation: Now we are able to calculate the ratio. End of explanation
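The cheatbook above also lists apply as a way to run a custom function on each group, but never demonstrates it; here is a hedged sketch of that pattern on the same toy table (the "largest file per directory" question is just an illustrative choice, and the helper name is an assumption).

import pandas as pd

df = pd.DataFrame({
    "file": ["hello.java", "tutorial.md", "controller.java", "build.sh", "deploy.sh"],
    "dir": ["src", "docs", "src", "src", "src"],
    "bytes": [54, 124, 36, 78, 62],
})

def largest_file(group: pd.DataFrame) -> pd.Series:
    """Return the file name and size of the biggest file in one directory group."""
    return group.loc[group["bytes"].idxmax(), ["file", "bytes"]]

print(df.groupby("dir").apply(largest_file))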
12,931
Given the following text description, write Python code to implement the functionality described below step by step Description: Goal A basic, full run of the SIPSim pipeline with the whole bacterial genome dataset to see Step1: Init Step2: Creating a community file 2 communities control vs treatment Step3: Plotting community rank abundances Step4: Simulating gradient fractions Step5: Plotting fractions Step6: Simulating fragments Step7: Number of amplicons per taxon Step8: Converting fragments to kde object Step9: Checking ampfrag info Step10: Adding diffusion Step11: Adding DBL 'contamination' DBL = diffusive boundary layer Step12: Comparing DBL+diffusion to diffusion Step13: Making an incorp config file 10% of taxa with 100% atom excess 13C Step14: Adding isotope incorporation to BD distribution Step15: Plotting stats on BD shift from isotope incorporation Step16: Simulating an OTU table Step17: Plotting taxon abundances Step18: Simulating PCR bias Step19: Plotting change in relative abundances Step20: Notes The PCR raises the relative abundances most for low-abundance taxa Results in a more even rank-abundance distribution Step21: Subsampling from the OTU table simulating sequencing of the DNA pool Step22: Plotting seq count distribution Step23: Plotting abundance distributions (paper figure) Step24: Making a wide OTU table Step25: Making metadata (phyloseq Step26: Community analysis Phyloseq Step27: DESeq2 Step28: Plotting results of DESeq2 Step29: Notes
Python Code: workDir = '/home/nick/notebook/SIPSim/dev/bac_genome1147/Meselson_diff/validation/' genomeDir = '/var/seq_data/ncbi_db/genome/Jan2016/bac_complete_spec-rep1_rn/' R_dir = '/home/nick/notebook/SIPSim/lib/R/' #figureDir = '/home/nick/notebook/SIPSim/figures/bac_genome_n1147/' bandwidth = 0.8 DBL_scaling = 0.5 subsample_dist = 'lognormal' subsample_mean = 9.432 subsample_scale = 0.5 subsample_min = 10000 subsample_max = 30000 Explanation: Goal A basic, full run of the SIPSim pipeline with the whole bacterial genome dataset to see: Is the pipeline functional? Check of output at each stage of pipeline Note: using diffusion method from Meselson etla., 1957 Setting variables End of explanation import glob from os.path import abspath import nestly from IPython.display import Image import os %load_ext rpy2.ipython %load_ext pushnote %%R library(ggplot2) library(dplyr) library(tidyr) library(gridExtra) if not os.path.isdir(workDir): os.makedirs(workDir) if not os.path.isdir(figureDir): os.makedirs(figureDir) %cd $workDir # Determining min/max BD that ## min G+C cutoff min_GC = 13.5 ## max G+C cutoff max_GC = 80 ## max G+C shift max_13C_shift_in_BD = 0.036 min_range_BD = min_GC/100.0 * 0.098 + 1.66 max_range_BD = max_GC/100.0 * 0.098 + 1.66 max_range_BD = max_range_BD + max_13C_shift_in_BD print 'Min BD: {}'.format(min_range_BD) print 'Max BD: {}'.format(max_range_BD) Explanation: Init End of explanation !SIPSim communities \ $genomeDir/genome_index.txt \ --n_comm 2 \ > comm.txt Explanation: Creating a community file 2 communities control vs treatment End of explanation %%R -w 750 -h 300 tbl = read.delim('comm.txt', sep='\t') tbl$library = as.character(tbl$library) tbl$library = ifelse(tbl$library == 1, 'Control', 'Treatment') ggplot(tbl, aes(rank, rel_abund_perc, color=library, group=taxon_name)) + geom_point() + scale_y_log10() + scale_color_discrete('Community') + labs(x='Rank', y='Relative abundance (%)') + theme_bw() + theme( text=element_text(size=16) ) Explanation: Plotting community rank abundances End of explanation !SIPSim gradient_fractions \ --BD_min $min_range_BD \ --BD_max $max_range_BD \ comm.txt \ > fracs.txt Explanation: Simulating gradient fractions End of explanation %%R -w 600 -h 300 tbl = read.delim('fracs.txt', sep='\t') ggplot(tbl, aes(fraction, fraction_size)) + geom_bar(stat='identity') + facet_grid(library ~ .) 
+ labs(y='fraction size') + theme_bw() + theme( text=element_text(size=16) ) %%R -w 300 -h 250 tbl$library = as.character(tbl$library) ggplot(tbl, aes(library, fraction_size)) + geom_boxplot() + labs(y='fraction size') + theme_bw() + theme( text=element_text(size=16) ) Explanation: Plotting fractions End of explanation # estimated coverage mean_frag_size = 9000.0 mean_amp_len = 300.0 n_frags = 10000 coverage = round(n_frags * mean_amp_len / mean_frag_size, 1) msg = 'Average coverage from simulating {} fragments: {}X' print msg.format(n_frags, coverage) !SIPSim fragments \ $genomeDir/genome_index.txt \ --fp $genomeDir \ --fr ../../../515F-806R.fna \ --fld skewed-normal,9000,2500,-5 \ --flr None,None \ --nf 10000 \ --np 24 \ 2> ampFrags.log \ > ampFrags.pkl Explanation: Simulating fragments End of explanation !grep "Number of amplicons: " ampFrags.log | \ perl -pe 's/.+ +//' | hist !printf "Number of taxa with >=1 amplicon: " !grep "Number of amplicons: " ampFrags.log | \ perl -ne "s/^.+ +//; print unless /^0$/" | wc -l Explanation: Number of amplicons per taxon End of explanation !SIPSim fragment_KDE \ ampFrags.pkl \ > ampFrags_kde.pkl Explanation: Converting fragments to kde object End of explanation !SIPSim KDE_info \ -s ampFrags_kde.pkl \ > ampFrags_kde_info.txt %%R # loading df = read.delim('ampFrags_kde_info.txt', sep='\t') df.kde1 = df %>% filter(KDE_ID == 1) df.kde1 %>% head(n=3) BD_GC50 = 0.098 * 0.5 + 1.66 %%R -w 500 -h 250 # plotting p.amp = ggplot(df.kde1, aes(median)) + geom_histogram(binwidth=0.001) + geom_vline(xintercept=BD_GC50, linetype='dashed', color='red', alpha=0.7) + labs(x='Median buoyant density') + theme_bw() + theme( text = element_text(size=16) ) p.amp Explanation: Checking ampfrag info End of explanation !SIPSim diffusion \ --bw $bandwidth \ --np 24 \ -m Meselson \ ampFrags_kde.pkl \ > ampFrags_kde_dif.pkl \ 2> ampFrags_kde_dif.log Explanation: Adding diffusion End of explanation !SIPSim DBL \ --comm comm.txt \ --commx $DBL_scaling \ --np 24 \ ampFrags_kde_dif.pkl \ > ampFrags_kde_dif_DBL.pkl \ 2> ampFrags_kde_dif_DBL.log # checking output !tail -n 5 ampFrags_kde_dif_DBL.log Explanation: Adding DBL 'contamination' DBL = diffusive boundary layer End of explanation # none !SIPSim KDE_info \ -s ampFrags_kde.pkl \ > ampFrags_kde_info.txt # diffusion !SIPSim KDE_info \ -s ampFrags_kde_dif.pkl \ > ampFrags_kde_dif_info.txt # diffusion + DBL !SIPSim KDE_info \ -s ampFrags_kde_dif_DBL.pkl \ > ampFrags_kde_dif_DBL_info.txt %%R inFile = 'ampFrags_kde_info.txt' df.raw = read.delim(inFile, sep='\t') df.raw$stage = 'raw' inFile = 'ampFrags_kde_dif_info.txt' df.dif = read.delim(inFile, sep='\t') df.dif$stage = 'diffusion' inFile = 'ampFrags_kde_dif_DBL_info.txt' df.DBL = read.delim(inFile, sep='\t') df.DBL$stage = 'diffusion +\nDBL' df = rbind(df.raw, df.dif, df.DBL) df.dif = '' df.DBL = '' df %>% head(n=3) %%R -w 350 -h 300 df$stage = factor(df$stage, levels=c('raw', 'diffusion', 'diffusion +\nDBL')) ggplot(df, aes(stage)) + geom_boxplot(aes(y=min), color='red') + geom_boxplot(aes(y=median), color='darkgreen') + geom_boxplot(aes(y=max), color='blue') + scale_y_continuous(limits=c(1.3, 2)) + labs(y = 'Buoyant density (g ml^-1)') + theme_bw() + theme( text = element_text(size=16), axis.title.x = element_blank() ) Explanation: Comparing DBL+diffusion to diffusion End of explanation !SIPSim incorpConfigExample \ --percTaxa 10 \ --percIncorpUnif 100 \ > PT10_PI100.config # checking output !head PT10_PI100.config Explanation: Making an incorp config file 10% of taxa with 100% atom 
excess 13C End of explanation !SIPSim isotope_incorp \ --comm comm.txt \ --np 24 \ --shift ampFrags_BD-shift.txt \ ampFrags_kde_dif_DBL.pkl \ PT10_PI100.config \ > ampFrags_kde_dif_DBL_incorp.pkl \ 2> ampFrags_kde_dif_DBL_incorp.log # checking log !tail -n 5 ampFrags_kde_dif_DBL_incorp.log Explanation: Adding isotope incorporation to BD distribution End of explanation %%R inFile = 'ampFrags_BD-shift.txt' df = read.delim(inFile, sep='\t') %>% mutate(library = library %>% as.character) %%R -h 275 -w 375 inFile = 'ampFrags_BD-shift.txt' df = read.delim(inFile, sep='\t') %>% mutate(library = library %>% as.character) df.s = df %>% mutate(incorporator = ifelse(min > 0.001, TRUE, FALSE), incorporator = ifelse(is.na(incorporator), 'NA', incorporator), library = ifelse(library == '1', 'control', 'treatment')) %>% group_by(library, incorporator) %>% summarize(n_incorps = n()) # summary of number of incorporators df.s %>% filter(library == 'treatment') %>% mutate(n_incorps / sum(n_incorps)) %>% as.data.frame %>% print # plotting ggplot(df.s, aes(library, n_incorps, fill=incorporator)) + geom_bar(stat='identity') + labs(y = 'Count', title='Number of incorporators\n(according to BD shift)') + theme_bw() + theme( text = element_text(size=16) ) Explanation: Plotting stats on BD shift from isotope incorporation End of explanation !SIPSim OTU_table \ --abs 1e9 \ --np 20 \ ampFrags_kde_dif_DBL_incorp.pkl \ comm.txt \ fracs.txt \ > OTU_n2_abs1e9.txt \ 2> OTU_n2_abs1e9.log # checking log !tail -n 5 OTU_n2_abs1e9.log Explanation: Simulating an OTU table End of explanation %%R ## BD for G+C of 0 or 100 BD.GCp0 = 0 * 0.098 + 1.66 BD.GCp50 = 0.5 * 0.098 + 1.66 BD.GCp100 = 1 * 0.098 + 1.66 %%R -w 700 -h 350 # plotting absolute abundances # loading file df = read.delim('OTU_n2_abs1e9.txt', sep='\t') df.s = df %>% group_by(library, BD_mid) %>% summarize(total_count = sum(count)) ## plot p = ggplot(df.s, aes(BD_mid, total_count)) + #geom_point() + geom_area(stat='identity', alpha=0.3, position='dodge') + geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) + labs(x='Buoyant density', y='Total abundance') + facet_grid(library ~ .) + theme_bw() + theme( text = element_text(size=16) ) p %%R -w 700 -h 350 # plotting number of taxa at each BD df.nt = df %>% filter(count > 0) %>% group_by(library, BD_mid) %>% summarize(n_taxa = n()) ## plot p = ggplot(df.nt, aes(BD_mid, n_taxa)) + #geom_point() + geom_area(stat='identity', alpha=0.3, position='dodge') + #geom_histogram(stat='identity') + geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) + labs(x='Buoyant density', y='Number of taxa') + facet_grid(library ~ .) + theme_bw() + theme( text = element_text(size=16), legend.position = 'none' ) p %%R -w 700 -h 350 # plotting relative abundances ## plot p = ggplot(df, aes(BD_mid, count, fill=taxon)) + geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) + labs(x='Buoyant density', y='Absolute abundance') + facet_grid(library ~ .) 
+ theme_bw() + theme( text = element_text(size=16), legend.position = 'none' ) p + geom_area(stat='identity', position='dodge', alpha=0.5) %%R -w 700 -h 350 p + geom_area(stat='identity', position='fill') + labs(x='Buoyant density', y='Relative abundance') Explanation: Plotting taxon abundances End of explanation !SIPSim OTU_PCR \ OTU_n2_abs1e9.txt \ --debug \ > OTU_n2_abs1e9_PCR.txt Explanation: Simulating PCR bias End of explanation %%R -w 800 -h 300 # loading file F = 'OTU_n2_abs1e9_PCR.txt' df.SIM = read.delim(F, sep='\t') %>% mutate(molarity_increase = final_molarity / init_molarity * 100) p1 = ggplot(df.SIM, aes(init_molarity, final_molarity)) + geom_point(shape='O', alpha=0.5) + labs(x='Initial molarity', y='Final molarity') + theme_bw() + theme( text = element_text(size=16) ) p2 = ggplot(df.SIM, aes(init_molarity, molarity_increase)) + geom_point(shape='O', alpha=0.5) + scale_y_log10() + labs(x='Initial molarity', y='% increase in molarity') + theme_bw() + theme( text = element_text(size=16) ) grid.arrange(p1, p2, ncol=2) %%R -w 800 -h 450 # plotting rank abundances df.SIM = df.SIM %>% group_by(library, fraction) %>% mutate(rel_init_molarity = init_molarity / sum(init_molarity), rel_final_molarity = final_molarity / sum(final_molarity), init_molarity_rank = row_number(rel_init_molarity), final_molarity_rank = row_number(rel_final_molarity)) %>% ungroup() p1 = ggplot(df.SIM, aes(init_molarity_rank, rel_init_molarity, color=BD_mid, group=BD_mid)) + geom_line(alpha=0.5) + scale_y_log10(limits=c(1e-7, 0.1)) + scale_x_reverse() + scale_color_gradient('Buoyant\ndensity') + labs(x='Rank', y='Relative abundance', title='pre-PCR') + theme_bw() + theme( text = element_text(size=16) ) p2 = ggplot(df.SIM, aes(final_molarity_rank, rel_final_molarity, color=BD_mid, group=BD_mid)) + geom_line(alpha=0.5) + scale_y_log10(limits=c(1e-7, 0.1)) + scale_x_reverse() + scale_color_gradient('Buoyant\ndensity') + labs(x='Rank', y='Relative abundance', title='post-PCR') + theme_bw() + theme( text = element_text(size=16) ) grid.arrange(p1, p2, ncol=1) Explanation: Plotting change in relative abundances End of explanation # PCR w/out --debug !SIPSim OTU_PCR \ OTU_n2_abs1e9.txt \ > OTU_n2_abs1e9_PCR.txt Explanation: Notes The PCR raises the relative abundances most for low-abundance taxa Results in a more even rank-abundance distribution End of explanation !SIPSim OTU_subsample \ --dist $subsample_dist \ --dist_params mean:$subsample_mean,sigma:$subsample_scale \ --min_size $subsample_min \ --max_size $subsample_max \ OTU_n2_abs1e9_PCR.txt \ > OTU_n2_abs1e9_PCR_subNorm.txt Explanation: Subsampling from the OTU table simulating sequencing of the DNA pool End of explanation %%R -w 300 -h 250 df = read.csv('OTU_n2_abs1e9_PCR_subNorm.txt', sep='\t') df.s = df %>% group_by(library, fraction) %>% summarize(total_count = sum(count)) %>% ungroup() %>% mutate(library = as.character(library)) ggplot(df.s, aes(library, total_count)) + geom_boxplot() + labs(y='Number of sequences\nper fraction') + theme_bw() + theme( text = element_text(size=16) ) Explanation: Plotting seq count distribution End of explanation %%R # loading file df.abs = read.delim('OTU_n2_abs1e9.txt', sep='\t') df.sub = read.delim('OTU_n2_abs1e9_PCR_subNorm.txt', sep='\t') lib.reval = c('1' = 'control', '2' = 'treatment') df.abs = mutate(df.abs, library = plyr::revalue(as.character(library), lib.reval)) df.sub = mutate(df.sub, library = plyr::revalue(as.character(library), lib.reval)) %%R -w 700 -h 800 # plotting absolute abundances ## plot p = 
ggplot(df.abs, aes(BD_mid, count, fill=taxon)) + geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) + labs(x='Buoyant density') + facet_grid(library ~ .) + theme_bw() + theme( text = element_text(size=16), axis.title.y = element_text(vjust=1), axis.title.x = element_blank(), legend.position = 'none', plot.margin=unit(c(1,1,0.1,1), "cm") ) p1 = p + geom_area(stat='identity', position='dodge', alpha=0.5) + labs(y='Total community\n(absolute abundance)') # plotting absolute abundances of subsampled ## plot p = ggplot(df.sub, aes(BD_mid, count, fill=taxon)) + geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) + labs(x='Buoyant density') + facet_grid(library ~ .) + theme_bw() + theme( text = element_text(size=16), legend.position = 'none' ) p2 = p + geom_area(stat='identity', position='dodge', alpha=0.5) + labs(y='Subsampled community\n(absolute abundance)') + theme( axis.title.y = element_text(vjust=1), axis.title.x = element_blank(), plot.margin=unit(c(0.1,1,0.1,1), "cm") ) # plotting relative abundances of subsampled p3 = p + geom_area(stat='identity', position='fill') + geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) + labs(y='Subsampled community\n(relative abundance)') + theme( axis.title.y = element_text(vjust=1), plot.margin=unit(c(0.1,1,1,1.35), "cm") ) # combining plots grid.arrange(p1, p2, p3, ncol=1) Explanation: Plotting abundance distributions (paper figure) End of explanation !SIPSim OTU_wideLong -w \ OTU_n2_abs1e9_PCR_subNorm.txt \ > OTU_n2_abs1e9_PCR_subNorm_w.txt Explanation: Making a wide OTU table End of explanation !SIPSim OTU_sampleData \ OTU_n2_abs1e9_PCR_subNorm.txt \ > OTU_n2_abs1e9_PCR_subNorm_meta.txt Explanation: Making metadata (phyloseq: sample_data) End of explanation # making phyloseq object from OTU table !SIPSimR phyloseq_make \ OTU_n2_abs1e9_PCR_subNorm_w.txt \ -s OTU_n2_abs1e9_PCR_subNorm_meta.txt \ > OTU_n2_abs1e9_PCR_subNorm.physeq ## making ordination !SIPSimR phyloseq_ordination \ OTU_n2_abs1e9_PCR_subNorm.physeq \ OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.pdf ## filtering phyloseq object to just taxa/samples of interest (eg., BD-min/max) !SIPSimR phyloseq_edit \ OTU_n2_abs1e9_PCR_subNorm.physeq \ --BD_min 1.71 --BD_max 1.75 --occur 0.25 \ > OTU_n2_abs1e9_PCR_subNorm_filt.physeq ## making ordination !SIPSimR phyloseq_ordination \ OTU_n2_abs1e9_PCR_subNorm_filt.physeq \ OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.pdf # making png figures !convert OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.pdf OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.png !convert OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.pdf OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.png Image(filename='OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.png') Image(filename='OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.png') Explanation: Community analysis Phyloseq End of explanation ## DESeq2 !SIPSimR phyloseq_DESeq2 \ --log2 0.25 \ --hypo greater \ OTU_n2_abs1e9_PCR_subNorm_filt.physeq \ > OTU_n2_abs1e9_PCR_subNorm_DESeq2 ## Confusion matrix !SIPSimR DESeq2_confuseMtx \ --padj 0.1 \ ampFrags_BD-shift.txt \ OTU_n2_abs1e9_PCR_subNorm_DESeq2 %%R -w 500 -h 250 byClass = read.delim('DESeq2-cMtx_byClass.txt', sep='\t') %>% filter(library == 2) ggplot(byClass, aes(variables, values)) + geom_bar(stat='identity') + labs(y='Value') + theme_bw() + theme( text = element_text(size=16), axis.title.x = element_blank(), axis.text.x = element_text(angle=45, hjust=1) ) Explanation: DESeq2 End of explanation %%R clsfy = function(guess,known){ if(is.na(guess) | is.na(known)){ return(NA) } if(guess == TRUE){ if(guess == 
known){ return('True positive') } else { return('False positive') } } else if(guess == FALSE){ if(guess == known){ return('True negative') } else { return('False negative') } } else { stop('Error: true or false needed') } } %%R df = read.delim('DESeq2-cMtx_data.txt', sep='\t') df = df %>% filter(! is.na(log2FoldChange), library == 2) %>% mutate(taxon = reorder(taxon, -log2FoldChange), cls = mapply(clsfy, incorp.pred, incorp.known)) df %>% head(n=3) %%R -w 800 -h 350 df.TN = df %>% filter(cls == 'True negative') df.TP = df %>% filter(cls == 'True positive') df.FP = df %>% filter(cls == 'False negative') ggplot(df, aes(taxon, log2FoldChange, color=cls, ymin=log2FoldChange - lfcSE, ymax=log2FoldChange + lfcSE)) + geom_pointrange(size=0.4, alpha=0.5) + geom_pointrange(data=df.TP, size=0.4, alpha=0.3) + geom_pointrange(data=df.FP, size=0.4, alpha=0.3) + labs(x = 'Taxon', y = 'Log2 fold change') + theme_bw() + theme( text = element_text(size=16), panel.grid.major.x = element_blank(), panel.grid.minor.x = element_blank(), legend.title=element_blank(), axis.text.x = element_blank(), legend.position = 'bottom' ) Explanation: Plotting results of DESeq2 End of explanation %%R df.ds = read.delim('DESeq2-cMtx_data.txt', sep='\t') df.comm = read.delim('comm.txt', sep='\t') df.j = inner_join(df.ds, df.comm, c('taxon' = 'taxon_name', 'library' = 'library')) df.ds = df.comm = NULL df.j %>% head(n=3) %%R -h 500 -w 600 df.j.f = df.j %>% filter(! is.na(log2FoldChange), library == 2) %>% mutate(cls = mapply(clsfy, incorp.pred, incorp.known)) y.lab = 'Pre-fractionation\nabundance (%)' p1 = ggplot(df.j.f, aes(padj, rel_abund_perc, color=cls)) + geom_point(alpha=0.7) + scale_y_log10() + labs(x='P-value (adjusted)', y=y.lab) + theme_bw() + theme( text = element_text(size=16), legend.position = 'bottom' ) p2 = ggplot(df.j.f, aes(cls, rel_abund_perc)) + geom_boxplot() + scale_y_log10() + labs(y=y.lab) + theme_bw() + theme( text = element_text(size=16) ) grid.arrange(p1, p2, ncol=1) %%R -h 300 # plotting ggplot(df.j.f, aes(log2FoldChange, rel_abund_perc, color=cls)) + geom_point(alpha=0.7) + scale_y_log10() + labs(x='log2 fold change', y=y.lab) + theme_bw() + theme( text = element_text(size=16) ) Explanation: Notes: Red circles = true positives False positives should increase with taxon GC Higher GC moves 100% incorporators too far to the right the gradient for the 'heavy' BD range of 1.71-1.75 Lines indicate standard errors. sensitivity ~ pre-frac relative_abundance Enrichment of TP for abundant incorporators? What is the abundance distribution of TP and FP? Are more abundant incorporators being detected more than low abundant taxa End of explanation
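As a small companion to the notes above, this is a hedged sketch of how the predicted versus known incorporator calls could be turned into confusion-matrix counts and a sensitivity/specificity summary in Python; the column names mirror incorp.pred and incorp.known from the R code, while the function name and the renaming step are assumptions.

import pandas as pd

def confusion_summary(calls: pd.DataFrame) -> dict:
    """Count TP/FP/TN/FN and derive sensitivity and specificity from boolean call columns."""
    pred = calls["incorp_pred"].astype(bool)
    known = calls["incorp_known"].astype(bool)
    tp = int((pred & known).sum())
    fp = int((pred & ~known).sum())
    tn = int((~pred & ~known).sum())
    fn = int((~pred & known).sum())
    return {
        "TP": tp, "FP": fp, "TN": tn, "FN": fn,
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
    }

# Hypothetical usage on the DESeq2 comparison table written above:
# calls = (pd.read_csv("DESeq2-cMtx_data.txt", sep="\t")
#            .rename(columns={"incorp.pred": "incorp_pred", "incorp.known": "incorp_known"})
#            .dropna(subset=["incorp_pred", "incorp_known"]))
# print(confusion_summary(calls))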
12,932
Given the following text description, write Python code to implement the functionality described below step by step Description: Probabilidades La matemática es la lógica de la certeza mientras que la probabilidad es la lógica de la incerteza, dice Joseph K. Blitzstein condensando el pensamiento de cientos de personas antes que el. Entender como pensar en presencia de incertezas es central en Ciencia de Datos. Esta incerteza proviene de diversas fuentes, incluyendo datos incompletos, errores de medición, límites de los diseños experimentales, dificultad de observar ciertos eventos, aproximaciones, etc. En este capítulo veremos una introducción breve a algunos conceptos centrales en probabilidad que nos dará el lenguaje para comprender mejor los fundamentos de varios métodos y procedimientos que veremos más adelante, para quienes tengan interés en profundizar en el tema recomiendo leer el libro Introduction to Probability de Joseph K. Blitzstein y Jessica Hwang. Empecemos por el concepto de probabilidad, existen al menos tres grandes definiciones de probabilidad Step1: Distribución binomial Es la distribución de probabilidad discreta que cuenta el número de éxitos en una secuencia de $n$ ensayos de Bernoulli (experimentos si/no) independientes entre sí, con una probabilidad fija $p$ de ocurrencia del éxito entre los ensayos. Cuando $n=1$ esta distribución se reduce a la distribución de Bernoulli. $$p(x \mid n,p) = \frac{n!}{x!(n-x)!}p^x(1-p)^{n-x}$$ El término $p^x(1-p)^{n-x}$ indica la probabilidad de obtener $x$ éxitos en $n$ intentos. Este término solo tiene en cuenta el número total de éxitos obtenidos pero no la secuencia en la que aparecieron. El primer término conocido como coeficiente binomial calcula todas las posibles combinaciones de $n$ en $x$, es decir el número de subconjuntos de $x$ elementos escogidos de un conjunto con $n$ elementos. Step2: Distribución de Poisson Es una distribución de probabilidad discreta que expresa la probabilidad que $x$ eventos sucedan en un intervalo fijo de tiempo (o espacio o volumen) cuando estos eventos suceden con una taza promedio $\mu$ y de forma independiente entre si. Se la utiliza para modelar eventos con probabilidades pequeñas (sucesos raros) como accidentes de tráfico o decaimiento radiactivo. $$ p(x \mid \mu) = \frac{\mu^{x} e^{-\mu}}{x!} $$ Tando la media como la varianza de esta distribución están dadas por $\mu$. A medida que $\mu$ aumenta la distribución de Poisson se aproxima a una distribución Gaussiana (aunque sigue siendo discreta). La distribución de Poisson tiene estrecha relación con otra distribución de probabilidad, la binomial. Una distribución binomial puede ser aproximada con una distribución de Poisson, cuando $n >> p$, es decir, cuando la cantidad de "éxitos" ($p$) es baja respecto de la cantidad de "intentos" (p) entonces $Poisson(np) \approx Binon(n, p)$. Por esta razón la distribución de Poisson también se conoce como "ley de los pequeños números" o "ley de los eventos raros". Ojo que esto no implica que $\mu$ deba ser pequeña, quien es pequeño/raro es $p$ respecto de $n$. Step3: Variables aleatorias y distribuciones de probabilidad continuas Hasta ahora hemos visto variables aleatorias discretas y distribuciones de masa de probabilidad. Existe otro tipo de variable aleatoria que es muy usado y son las llamadas variables aleatorias continuas, ya que toman valores en $\mathbb{R}$. 
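Before moving on to continuous variables, the binomial-Poisson approximation described above is easy to check numerically; the sketch below compares the two pmfs with scipy.stats for a small p and a large n (the particular n, p and range of x are arbitrary illustrative choices).

import numpy as np
from scipy import stats

n, p = 500, 0.01                 # many trials, rare "success": Poisson(n*p) should be close
x = np.arange(0, 16)

binom_pmf = stats.binom(n, p).pmf(x)
poisson_pmf = stats.poisson(n * p).pmf(x)

for k, b, q in zip(x, binom_pmf, poisson_pmf):
    print(f"x={k:2d}  binomial={b:.4f}  poisson={q:.4f}")
print("max absolute difference:", np.abs(binom_pmf - poisson_pmf).max())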
La diferencia más importante entre variables aleatoria discretas y continuas es que para las continuas $P(X=x) = 0$, es decir, la probabilidad de cualquier valor es exactamente 0. En las gráficas anteriores, para variables discretas, es la altura de las lineas lo que define la probabilidad de cada evento. Si sumamos las alturas siempre obtenemos 1, es decir la suma total de las probabilidades. En una distribución continua no tenemos lineas si no que tenemos una curva continua, la altura de esa curva es la densidad de probabilidad. Si queremos averiguar cuanto más probable es el valor $x_1$ respecto de $x_2$ basta calcular Step4: Distribución Gaussiana (o normal) Es quizá la distribución más conocida. Por un lado por que muchos fenómenos pueden ser descriptos (aproximadamente) usando esta distribución. Por otro lado por que posee ciertas propiedades matemáticas que facilitan trabajar con ella de forma analítica. Es por ello que muchos de los resultados de la estadística frecuentista se basan en asumir una distribución Gaussiana. La distribución Gaussiana queda definida por dos parámetros, la media $\mu$ y la desviación estándar $\sigma$. Una distribución Gaussiana con $\mu = 0$ y $\sigma = 1$ es conocida como la distribución Gaussiana estándar. $$ p(x \mid \mu,\sigma) = \frac{1}{\sigma \sqrt{ 2 \pi}} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2}} $$ Step5: Distribución t de Student Históricamente esta distribución surgió para estimar la media de una población normalmente distribuida cuando el tamaño de la muestra es pequeño. En estadística Bayesiana su uso más frecuente es el de generar modelos robustos a datos aberrantes. $$p(x \mid \nu,\mu,\sigma) = \frac{\Gamma(\frac{\nu + 1}{2})}{\Gamma(\frac{\nu}{2})\sqrt{\pi\nu}\sigma} \left(1+\frac{1}{\nu}\left(\frac{x-\mu}{\sigma}\right)^2\right)^{-\frac{\nu+1}{2}} $$ donde $\Gamma$ es la función gamma y donde $\nu$ es un parámetro llamado grados de libertad en la mayoría de los textos aunque también se le dice grado de normalidad, ya que a medida que $\nu$ aumenta la distribución se aproxima a una Gaussiana. En el caso extremo de $\lim_{\nu\to\infty}$ la distribución es exactamente igual a una Gaussiana. En el otro extremo, cuando $\nu=1$, (aunque en realidad $\nu$ puede tomar valores por debajo de 1) estamos frente a una distribución de Cauchy. Es similar a una Gaussiana pero las colas decrecen muy lentamente, eso provoca que en teoría esta distribución no poseen una media o varianza definidas. Es decir, es posible calcular a partir de un conjunto de datos una media, pero si los datos provienen de una distribución de Cauchy, la dispersión alrededor de la media será alta y esta dispersión no disminuirá a medida que aumente el tamaño de la muestra. La razón de este comportamiento extraño es que en distribuciones como la Cauchy están dominadas por lo que sucede en las colas de la distribución, contrario a lo que sucede por ejemplo con la distribución Gaussiana. Para esta distribución $\sigma$ no es la desviación estándar, que como ya se dijo podría estar indefinida, $\sigma$ es la escala. A medida que $\nu$ aumenta la escala converge a la desviación estándar de una distribución Gaussiana. Step6: Distribución exponencial La distribución exponencial se define solo para $x > 0$. Esta distribución se suele usar para describir el tiempo que transcurre entre dos eventos que ocurren de forma continua e independiente a una taza fija. El número de tales eventos para un tiempo fijo lo da la distribución de Poisson. 
$$ p(x \mid \lambda) = \lambda e^{-\lambda x} $$ La media y la desviación estándar de esta distribución están dadas por $\frac{1}{\lambda}$ Scipy usa una parametrización diferente donde la escala es igual a $\frac{1}{\lambda}$ Step7: Distribución de Laplace También llamada distribución doble exponencial, ya que puede pensarse como una distribucion exponencial "más su imagen especular". Esta distribución surge de medir la diferencia entre dos variables exponenciales (idénticamente distribuidas). $$p(x \mid \mu, b) = \frac{1}{2b} \exp \left{ - \frac{|x - \mu|}{b} \right}$$ Step8: Distribución beta Es una distribución definida en el intervalo [0, 1]. Se usa para modelar el comportamiento de variables aleatorias limitadas a un intervalo finito. Es útil para modelar proporciones o porcentajes. $$ p(x \mid \alpha, \beta)= \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1}(1-x)^{\beta-1} $$ El primer término es simplemente una constante de normalización que asegura que la integral de la $pdf$ de 1. $\Gamma$ es la función gamma. Cuando $\alpha=1$ y $\beta=1$ la distribución beta se reduce a la distribución uniforme. Si queremos expresar la distribución beta en función de la media y la dispersión alrededor de la media podemos hacerlo de la siguiente forma. $$\alpha = \mu \kappa$$ $$\beta = (1 − \mu) \kappa$$ Siendo $\mu$ la media y $\kappa$ una parámetro llamado concentración a media que $\kappa$ aumenta la dispersión disminuye. Notese, además que $\kappa = \alpha + \beta$. Step9: Distribución Gamma Scipy parametriza a la distribución gamma usando un parámetro $\alpha$ y uno $\theta$, usando estos parámetros la $pdf$ es Step10: Distribución acumulada La $pdf$ (o la $pmf$) son formas comunes de representar y trabajar con variables aleatorias, pero no son las únicas formas posibles. Existen otras representaciones equivalentes. Por ejemplo la función de distribución acumulada ($cdf$ en inglés). Al integrar una $pdf$ se obtiene la correspondiente $cdf$, y al derivar la $cdf$ se obtiene la $pdf$. La integral de la $pdf$ es llamada función de distribución acumulada ($cdf$) Step11: La siguiente figura tomada del libro Think Stats resume las relaciones entre la $cdf$, $pdf$ y $pmf$. <img src='imagenes/cmf_pdf_pmf.png' width=600 > Distribuciones empíricas versus teóricas Un método gráfico para comparar si un conjunto de datos se ajusta a una distribución teórica es comparar los valores esperados de la distribución teórica en el eje $x$ y en el eje $y$ los valores de los datos ordenados de menor a mayor. Si la distribución empírica fuese exactamente igual a la teórica los puntos caerían sobre la linea recta a $45^{\circ}$, es decir la linea donde $y = x$. Step12: Límites Los dos teoremás más conocidos y usados en probabilidad son la ley de los grande números y el teorema del límite central. Ambos nos dicen que le sucede a la media muestral a medida que el tamaño de la muestra aumenta. La ley de los grandes números El valor promedio calculado para una muestra converge al valor esperado (media) de dicha distribución. Esto no es cierto para algunas distribuciones como la distribución de Cauchy (la cual no tiene media ni varianza finita). La ley de los grandes números se suele malinterpretar y dar lugar a la paradoja del apostador. Un ejemplo de esta paradoja es creer que conviene apostar en la lotería/quiniela a un número atrasado, es decir un número que hace tiempo que no sale. 
El razonamiento, erróneo, es que como todos los números tienen la misma probabilidad a largo plazo si un número viene atrasado entonces hay alguna especie de fuerza que aumenta la probabilidad de ese número en los próximo sorteos para así re-establecer la equiprobabilidad de los números. Step13: El teorema central del límite El teorema central del límite (también llamado teorema del límite central) establece que si tomamos $n$ valores (de forma independiente) de una distribución arbitraria la media $\bar X$ de esos valores se distribuirá aproximadamente como una Gaussiana a medida que ${n \rightarrow \infty}$
Python Code: distri = stats.randint(1, 7) # límite inferior, límite superior + 1 x = np.arange(0, 8) x_pmf = distri.pmf(x) # la pmf evaluada para todos los "x" media, varianza = distri.stats(moments='mv') plt.vlines(x, 0, x_pmf, colors='C0', lw=5, label='$\mu$ = {:3.1f}\n$\sigma$ = {:3.1f}'.format(float(media), float(varianza)**0.5)) plt.xlabel('x') plt.ylabel('$p(x)$') plt.legend(frameon=True); Explanation: Probabilidades La matemática es la lógica de la certeza mientras que la probabilidad es la lógica de la incerteza, dice Joseph K. Blitzstein condensando el pensamiento de cientos de personas antes que el. Entender como pensar en presencia de incertezas es central en Ciencia de Datos. Esta incerteza proviene de diversas fuentes, incluyendo datos incompletos, errores de medición, límites de los diseños experimentales, dificultad de observar ciertos eventos, aproximaciones, etc. En este capítulo veremos una introducción breve a algunos conceptos centrales en probabilidad que nos dará el lenguaje para comprender mejor los fundamentos de varios métodos y procedimientos que veremos más adelante, para quienes tengan interés en profundizar en el tema recomiendo leer el libro Introduction to Probability de Joseph K. Blitzstein y Jessica Hwang. Empecemos por el concepto de probabilidad, existen al menos tres grandes definiciones de probabilidad: Decimos que una moneda tiene probabilidad 0,5 (o 50%) de caer cara, por que asumimos que ninguno de los dos eventos, {cara, ceca}, tiene preferencia sobre el otro. Es decir, pensamos que ambos eventos son equi-probables. Esto se conoce como definición clásica o naíf. Es la misma que usamos para decir que la probabilidad de obtener 3 al arrojar un dado es de $\frac{1}{6}$, o que la probabilidad de tener una hija es de 0,5. Esta definición se lleva a las patadas con preguntas como ¿Cuál es la probabilidad de existencia de vida en Marte?, claramente 0,5 es una sobreestimación, ya que el evento vida y el evento no-vida no son igualmente probables. Otra forma de ver a una probabilidad es bajo el prisma frecuentista. En esta concepción de probabilidad, en vez de asumir que los eventos son igualmente probables, diseñamos un experimento (en el sentido amplio de la palabra) y contamos cuantas veces observamos el evento que nos interesa $x$ respecto del total de intentos $n$. Entonces podemos aproximar la probabilidad mediante la frecuencia relativa $\frac{n_x}{n}$, según este procedimiento la probabilidad de obtener 3 al arrojar un dado no es necesariamente de $\frac{1}{6}$ si no que bien podría ser $\frac{1}{3}$. Esta noción de probabilidad se suele asociar con la idea de la existencia de un número correcto al que nos aproximamos a medida que aumentan los intentos $n$. Por lo tanto, podemos definir formalmente probabilidad como: $$p(x) = \lim_{n \rightarrow \infty} \frac{n_x}{n}$$ La definición frecuentista de probabilidad tiene el inconveniente de no ser muy útil para pensar en problemas que ocurren una sola vez. Por ejemplo, ¿Cuál es la probabilidad que mañana llueva? Estrictamente solo hay un mañana y o bien lloverá o bien no. Los frecuentistas suelen evadir este problema recurriendo a experimentos imaginarios. En ese caso podríamos intentar estimar la probabilidad de lluvia para mañana imaginando que hay una cantidad muy grande de mañanas y luego contando en cuantos de esos mañanas llueve y en cuantos no. Esta ficción científica es perfectamente válida y muy útil. 
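A quick simulation sketch makes this limit concrete (the die weights below are hypothetical, chosen only so that P(3) = 1/3 as in the example above; numpy is assumed to be imported as np, as in the code cells of this notebook):
import numpy as np
caras = np.arange(1, 7)
pesos = np.array([2, 2, 5, 2, 2, 2]) / 15    # hypothetical loaded die with P(3) = 1/3
n = 100000
tiradas = np.random.choice(caras, size=n, p=pesos)
print(np.mean(tiradas == 3))    # the relative frequency n_x / n, which approaches 1/3 as n grows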
La tercer forma de pensar una probabilidad se refiere a cuantificar la incertidumbre que tenemos sobre la posibilidad que un evento suceda. Si el evento es imposible entonces la probabilidad de ese evento será exactamente 0, si en cambio el evento sucede siempre entonces la probabilidad de ese evento será de 1. Todos los valores intermedios reflejan grados de certeza/incerteza. Desde este punto de vista es natural preguntarse cual es la probabilidad que la masa de Saturno sea $x$ kg, o hablar sobre la probabilidad de lluvia durante el 25 de Mayo de 1810, o la probabilidad de que mañana amanezca. Esta tercer interpretación del concepto de probabilidad es llamado Bayesiana y se puede pensar como una versión que incluye, como casos especiales, a las definiciones frecuentista y clásica. Independientemente de la interpretación del concepto de probabilidad la teoría de probabilidades nos ofrece un marco único, coherente y riguroso para trabajar con probabilidades. Probabilidades y conjuntos El marco matemático para trabajar con las probabilidades se construye alrededor de los conjuntos matemáticos. El espacio muestral $\mathcal{X}$ es el conjunto de todos los posibles resultados de un experimento. Un evento $A$ es un subconjunto de $\mathcal{X}$. Decimos que $A$ ha ocurrido si al realizar un experimento obtenemos como resultado $A$. Si tuviéramos un típico dado de 6 caras tendríamos que: $$\mathcal{X} = {1, 2, 3, 4, 5, 6}$$ Podemos definir al evento $A$ como: $$A = {2}$$ Si queremos indicar la probabilidad de que $A$ ocurra escribimos $P(A=2)$, es común usar una forma abreviada, simplemente $P(A)$. Recordemos que esta probabilidad no tiene por que ser $\frac{1}{6}$. Es importante notar que podriamos haber definido al evento $A$ usando más de un elemento de $\mathcal{X}$, por ejemplo cualquier número impar (siempre dentro de $\mathcal{X}$) $A = {1, 3, 5}$, o tal vez $A = {4,5,6}$, todo dependerá del problema que tengamos interés en resolver. Entonces, tenemos que los eventos son subconjuntos de un espacio muestral definido y las probabilidades son números asociados a la posibilidad que esos eventos ocurran, ya sea que esa "posibilidad" la definamos: a partir de asumir todos los eventos equiprobables como la fracción de eventos favorables respecto del total de eventos como el grado de certeza de obtener tal evento Axiomas de Kolmogorov Una aproximación a la formalización del concepto de probabilidad son los axiomas de Kolmogorov. Esta no es la única vía, una alternativa es el teorema de Cox que suele ser preferida por quienes suscriben a la definición Bayesiana de probabilidad. Nosotros veremos los axiomas de Kolmogorov por ser los más comúnmente empleados, pero es importante aclarar que ambas aproximaciones conducen, esencialmente, al mismo marco probabilístico. 
La probabilidad de un evento es un número real mayor o igual a cero $$P(A)\in \mathbb {R} ,P(A)\geq 0\qquad \forall A \in \mathcal{X}$$ La probabilidad que algo ocurra es 1, queda implícito que todo lo que puede suceder está contenido en $\mathcal{X}$ $$P(\mathcal{X}) = 1$$ Si los eventos $A_1, A_2, ..., A_j$ son mutuamente excluyentes entonces $$P(A_1 \cup A_2 \cup \cdots A_j) = \bigcup {i=1}^{j}P(A{i}) = \sum {i=1}^{j}P(A{i})$$ Si obtengo un 1 en un dado no puedo obtener simultaneamente otro número, por lo tanto la probabilidad de obtener, por ej 1 o 3, o 6 es igual $P(1) + P(3) + P(6)$ De estos tres axiomas se desprende que las probabilidades están restringidas al intervalo [0, 1], es decir números que van entre 0 y 1 (incluyendo ambos extremos). Probabilidad condicional Dado dos eventos $A$ y $B$ siendo $P(B) > 0$, la probabilidad $A$ dado $B$, que se simboliza como $P(A \mid B)$ Es definida como: $$P(A \mid B) = \frac{P(A, B)}{P(B)}$$ $P(A, B)$ es la probabilidad que ocurran los eventos $A$ y $B$, también se suele escribir como $P(A \cap B)$ (el símbolo $\cap$ indica intersección de conjuntos), la probabilidad de la intersección de los eventos $A$ y $B$. $P(A \mid B)$ es lo que se conoce como probabilidad condicional, y es la probabilidad de que ocurra el evento A condicionada por el hecho que sabemos que B ha ocurrido. Por ejemplo la probabilidad que una vereda esté mojada es diferente de la probabilidad que tal vereda esté mojada dado que está lloviendo. Una probabilidad condicional se puede vizualizar como la reducción del espacio muestral. Para ver esto de forma más clara vamos a usar una figura adaptada del libro Introduction to Probability de Joseph K. Blitzstein & Jessica Hwang. En ella se puede ver como pasamos de tener los eventos $A$ y $B$ en el espacio muestral $\mathcal{X}$, en el primer cuadro, a tener $P(A \mid B)$ en el último cuadro donde el espacio muestral se redujo de $\mathcal{X}$ a $B$. <center> <img src='imagenes/cond.png' width=500 > </center> El concepto de probabilidad condicional está en el corazón de la estadística y es central para pensar en como debemos actualizar el conocimiento que tenemos de un evento a la luz de nuevos datos, veremos más sobre esto en el curso "Análisis Bayesiano de datos" y en "Aprendizaje automático y minería de datos". Por ahora dejamos este tema con la siguiente aclaración. Todas las probabilidades son condicionales (respecto de algún supuesto o modelo) aún cuando no lo expresemos explícitamente, no existen probabilidades sin contexto. Variables aleatorias discretas y distribuciones de probabilidad Una variable aleatoria es una función que asocia numéros reales $\mathbb{R}$ con un espacio muestral. Podríamos definir entonces una variable aleatoria $C$ cuyo espacio muestral es ${rojo, verde, azul}$. Si los eventos de interés fuesen rojo, verde, azul, entonces podríamos codificarlos de la siguiente forma: C(rojo) = 0, C(verde)=1, C(azul)=2 Esta codificación es útil ya que en general es más facil operar con números que con strings, ya sea que las operaciones las hagamos manualmente o con una computadora. Una variable es aleatoria en el sentido de que en cada experimento es posible obtener un evento distinto sin que la sucesión de eventos siga un patrón determinista. Por ejemplo si preguntamos cual es el valor de $C$ tres veces seguida podríamos obtener, rojo, rojo, azul o quizá azul, verde, azul, etc. Es importante destacar que la variable NO puede tomar cualquier posible, en nuestro ejemplo solo son posibles 3 valores. 
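A tiny sketch of this encoding (numpy as np is the only assumption): represent C by the codes 0, 1, 2, draw from it repeatedly, and check that no value outside the three codes ever appears.
import numpy as np
codigos = {0: 'rojo', 1: 'verde', 2: 'azul'}    # C(rojo)=0, C(verde)=1, C(azul)=2
c = np.random.choice([0, 1, 2], size=10)        # ten draws of C (equal weights here just for the sketch)
print([codigos[k] for k in c])                  # e.g. rojo, rojo, azul, ...
print(set(c) <= {0, 1, 2})                      # True: only the three coded values ever occur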
Otra confusión muy común es creer que aleatorio implica que todos los eventos tienen igual probabilidad. Pero esto no es cierto, bien podría darse el siguiente ejemplo: $$P(C=rojo) = \frac{1}{2}, P(C=verde) = \frac{1}{4}, P(C=azul) = \frac{1}{4}$$ La equiprobabilidad de los eventos es solo un caso especial. Prácticamente la totalidad de los problemas de interés requiere lidiar con solo dos tipos de variables aleatorias: Discretas Continuas Una variable aleatoria discreta es una variable que puede tomar valores discretos, los cuales forman un conjunto finito (o infinito numerable). En nuestro ejemplo $C$ es discreta ya que solo puede tomar 3 valores, sin posibilidad de valores intermedios entre ellos, no es posible obtener el valor verde-rojizo! Si en vez de "rótulos" hubiéramos usado el espectro continuo de longitudes onda visibles otro sería el caso, ya que podríamos haber definido a $C={400 \text{ nm} ... 750\text{ nm}}$ y en este caso no hay dudas que sería posible obtener un valor a mitad de camino entre rojo ($\approx 700 \text{ nm}$) y verde ($\approx 530 \text{ nm}$), de hecho podemos encontrar infinitos valores entre ellos. Este sería el ejemplo de una variable aleatoria continua. Una variable aleatoria tiene una lista asociada con la probabilidad de cada evento. El nombre formal de esta lista es disribución de probabilidad, en el caso particular de variables aleatorias discretas se le suele llamar también función de masa de probabilidad (o pmf por su sigla en inglés). Es importante destacar que la $pmf$ es una función que devuelve probabilidades, por lo tanto siempre obtendremos valores comprendidos entre [0, 1] y cuya suma total (sobre todos los eventos) dará 1. En principio nada impide que uno defina su propia distribución de probabilidad. Pero existen algunas distribuciones de probabilidad tan comúnmente usadas que tienen nombre "propio" por lo que conviene saber que existen. El siguiente listado no es exhaustivo ni tiene como propósito que memoricen las distribuciones y sus propiedades, solo que ganen cierta familiaridad con las mismas. Si en el futuro necesitan utilizar alguna $pmf$ pueden volver a esta notebook (o pueden revisar Wikipedia!!!) En las siguientes gráficas la altura de las barras azules indican la probabilidad de cada valor de $x$. Se indican, además, la media ($\mu$) y desviación estándar ($\sigma$) de las distribuciones, es importante destacar que estos valores NO son calculados a partir de datos si no que son los valores exactos (calculados analíticamente) que le corresponden a cada distribución. Distribución uniforme discreta Es una distribución que asigna igual probabilidad a un conjunto finitos de valores, su $pmf$ es: $$p(k \mid a, b)={\frac {1}{b - a + 1}}$$ Para valores de $k$ en el intervalo [a, b], fuera de este intervalo $p(k) = 0$ Podemos usar esta distribución para modelar, por ejemplo un dado no cargado. 
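A one-line check of this pmf with scipy, mirroring the stats.randint call in the accompanying code cell: for a = 1 and b = 6 every face gets probability 1/6 and the probabilities sum to 1.
a, b = 1, 6
dado = stats.randint(a, b + 1)                # same parametrization as stats.randint above
print(dado.pmf(np.arange(a, b + 1)))          # each value equals 1/(b - a + 1) = 1/6
print(dado.pmf(np.arange(a, b + 1)).sum())    # 1.0, as any pmf must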
End of explanation n = 4 # número de intentos p = 0.5 # probabilidad de "éxitos" distri = stats.binom(n, p) x = np.arange(0, n + 1) x_pmf = distri.pmf(x) # la pmf evaluada para todos los x media, varianza = distri.stats(moments='mv') plt.vlines(x, 0, x_pmf, colors='C0', lw=5, label='$\mu$ = {:3.1f}\n$\sigma^2$ = {:3.1f}'.format(float(media), float(varianza**0.5))) plt.xlabel('x') plt.ylabel('$p(x)$') plt.legend(frameon=True); Explanation: Distribución binomial Es la distribución de probabilidad discreta que cuenta el número de éxitos en una secuencia de $n$ ensayos de Bernoulli (experimentos si/no) independientes entre sí, con una probabilidad fija $p$ de ocurrencia del éxito entre los ensayos. Cuando $n=1$ esta distribución se reduce a la distribución de Bernoulli. $$p(x \mid n,p) = \frac{n!}{x!(n-x)!}p^x(1-p)^{n-x}$$ El término $p^x(1-p)^{n-x}$ indica la probabilidad de obtener $x$ éxitos en $n$ intentos. Este término solo tiene en cuenta el número total de éxitos obtenidos pero no la secuencia en la que aparecieron. El primer término conocido como coeficiente binomial calcula todas las posibles combinaciones de $n$ en $x$, es decir el número de subconjuntos de $x$ elementos escogidos de un conjunto con $n$ elementos. End of explanation distri = stats.poisson(2.3) # occurrencia media del evento x = np.arange(0, 10) x_pmf = distri.pmf(x) # la pmf evaluada para todos los x media, varianza = distri.stats(moments='mv') plt.vlines(x, 0, x_pmf, colors='C0', lw=5, label='$\mu$ = {:3.1f}\n$\sigma^2$ = {:3.1f}'.format(float(media), float(varianza**0.5))) plt.xlabel('x') plt.ylabel('$p(x)$') plt.legend(frameon=True); Explanation: Distribución de Poisson Es una distribución de probabilidad discreta que expresa la probabilidad que $x$ eventos sucedan en un intervalo fijo de tiempo (o espacio o volumen) cuando estos eventos suceden con una taza promedio $\mu$ y de forma independiente entre si. Se la utiliza para modelar eventos con probabilidades pequeñas (sucesos raros) como accidentes de tráfico o decaimiento radiactivo. $$ p(x \mid \mu) = \frac{\mu^{x} e^{-\mu}}{x!} $$ Tando la media como la varianza de esta distribución están dadas por $\mu$. A medida que $\mu$ aumenta la distribución de Poisson se aproxima a una distribución Gaussiana (aunque sigue siendo discreta). La distribución de Poisson tiene estrecha relación con otra distribución de probabilidad, la binomial. Una distribución binomial puede ser aproximada con una distribución de Poisson, cuando $n >> p$, es decir, cuando la cantidad de "éxitos" ($p$) es baja respecto de la cantidad de "intentos" (p) entonces $Poisson(np) \approx Binon(n, p)$. Por esta razón la distribución de Poisson también se conoce como "ley de los pequeños números" o "ley de los eventos raros". Ojo que esto no implica que $\mu$ deba ser pequeña, quien es pequeño/raro es $p$ respecto de $n$. End of explanation distri = stats.uniform(0, 1) # distribución uniforme entre a=0 y b=1 x = np.linspace(-0.5, 1.5, 200) x_rvs = distri.rvs(500) # muestrear 500 valores de la distribución x_pdf = distri.pdf(x) # la pdf evaluada para todos los x media, varianza = distri.stats(moments='mv') plt.plot (x, x_pdf, lw=5, label='$\mu$ = {:3.1f}\n$\sigma$ = {:3.1f}'.format(float(media), float(varianza)**0.5)) plt.hist(x_rvs, density=True) plt.xlabel('x') plt.ylabel('$p(x)$') plt.legend(frameon=True); Explanation: Variables aleatorias y distribuciones de probabilidad continuas Hasta ahora hemos visto variables aleatorias discretas y distribuciones de masa de probabilidad. 
Existe otro tipo de variable aleatoria que es muy usado y son las llamadas variables aleatorias continuas, ya que toman valores en $\mathbb{R}$. La diferencia más importante entre variables aleatoria discretas y continuas es que para las continuas $P(X=x) = 0$, es decir, la probabilidad de cualquier valor es exactamente 0. En las gráficas anteriores, para variables discretas, es la altura de las lineas lo que define la probabilidad de cada evento. Si sumamos las alturas siempre obtenemos 1, es decir la suma total de las probabilidades. En una distribución continua no tenemos lineas si no que tenemos una curva continua, la altura de esa curva es la densidad de probabilidad. Si queremos averiguar cuanto más probable es el valor $x_1$ respecto de $x_2$ basta calcular: $$\frac{pdf(x_1)}{pdf(x_2)}$$ Donde $pdf$ es la función de densidad de probabilidad (por su sigla en inglés). Y es análoga a la $pmf$ que vimos para variables discretas. Una diferencia importante es que la $pdf(x)$ puede ser mayor a 1. Para obtener una probabilidad a partir de una pdf debemos integrar en un intervalo dado, ya que es el área bajo la curva y no la altura lo que nos da la probabilidad, es decir es esta integral la que debe dar 1. $$P(a \lt X \lt b) = \int_a^b pdf(x) dx$$ En muchos textos es común usar $p$ para referirse a la probabilidad de un evento en particular o a la $pmf$ o a la $pdf$, esperando que la diferencia se entienda por contexto. A continuación veremos varias distribuciones continuas. La curva azul representa la $pdf$, mientras que el histograma (en naranja) representan muestras tomadas a partir de cada distribución. Al igual que con los ejemplos anteriores de distribuciones discretas. Se indican la media ($\mu$) y desviación estándar ($\sigma$) de las distribuciones, también en este caso recalcamos que estos valores NO son calculados a partir de datos si no que son los valores exactos (calculados analíticamente) que le corresponden a cada distribución. Distribución uniforme Aún siendo simple, la distribución uniforme es muy usada en estadística, por ej para representar nuestra ignorancia sobre el valor que pueda tomar un parámetro. La distribución uniforme tiene entropía cero (todos los estados son igualmente probables). $$ p(x \mid a,b)=\begin{cases} \frac{1}{b-a} & para\ a \le x \le b \ 0 & \text{para el resto} \end{cases} $$ End of explanation distri = stats.norm(loc=0, scale=1) # media cero y desviación standard 1 x = np.linspace(-4, 4, 100) x_rvs = distri.rvs(500) # muestrear 500 valores de la distribución x_pdf = distri.pdf(x) # la pdf evaluada para todos los x media, varianza = distri.stats(moments='mv') plt.plot (x, x_pdf, lw=5, label='$\mu$ = {:3.1f}\n$\sigma$ = {:3.1f}'.format(float(media), float(varianza)**0.5)) plt.hist(x_rvs, density=True) plt.xlabel('x') plt.ylabel('$p(x)$') plt.legend(frameon=True); Explanation: Distribución Gaussiana (o normal) Es quizá la distribución más conocida. Por un lado por que muchos fenómenos pueden ser descriptos (aproximadamente) usando esta distribución. Por otro lado por que posee ciertas propiedades matemáticas que facilitan trabajar con ella de forma analítica. Es por ello que muchos de los resultados de la estadística frecuentista se basan en asumir una distribución Gaussiana. La distribución Gaussiana queda definida por dos parámetros, la media $\mu$ y la desviación estándar $\sigma$. Una distribución Gaussiana con $\mu = 0$ y $\sigma = 1$ es conocida como la distribución Gaussiana estándar. 
$$ p(x \mid \mu,\sigma) = \frac{1}{\sigma \sqrt{ 2 \pi}} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2}} $$ End of explanation distri = stats.t(loc=0, scale=2, df=4) # media 0, escala 2, grados de libertad 4 x = np.linspace(-10, 10, 100) x_rvs = distri.rvs(500) # muestrear 500 valores de la distribución x_pdf = distri.pdf(x) # la pdf evaluada para todos los x media, varianza = distri.stats(moments='mv') plt.plot (x, x_pdf, lw=5, label='$\mu$ = {:3.1f}\n$\sigma$ = {:3.1f}'.format(float(media), float(varianza)**0.5)) plt.hist(x_rvs, density=True) plt.xlabel('x') plt.ylabel('$p(x)$') plt.legend(frameon=True); Explanation: Distribución t de Student Históricamente esta distribución surgió para estimar la media de una población normalmente distribuida cuando el tamaño de la muestra es pequeño. En estadística Bayesiana su uso más frecuente es el de generar modelos robustos a datos aberrantes. $$p(x \mid \nu,\mu,\sigma) = \frac{\Gamma(\frac{\nu + 1}{2})}{\Gamma(\frac{\nu}{2})\sqrt{\pi\nu}\sigma} \left(1+\frac{1}{\nu}\left(\frac{x-\mu}{\sigma}\right)^2\right)^{-\frac{\nu+1}{2}} $$ donde $\Gamma$ es la función gamma y donde $\nu$ es un parámetro llamado grados de libertad en la mayoría de los textos aunque también se le dice grado de normalidad, ya que a medida que $\nu$ aumenta la distribución se aproxima a una Gaussiana. En el caso extremo de $\lim_{\nu\to\infty}$ la distribución es exactamente igual a una Gaussiana. En el otro extremo, cuando $\nu=1$, (aunque en realidad $\nu$ puede tomar valores por debajo de 1) estamos frente a una distribución de Cauchy. Es similar a una Gaussiana pero las colas decrecen muy lentamente, eso provoca que en teoría esta distribución no poseen una media o varianza definidas. Es decir, es posible calcular a partir de un conjunto de datos una media, pero si los datos provienen de una distribución de Cauchy, la dispersión alrededor de la media será alta y esta dispersión no disminuirá a medida que aumente el tamaño de la muestra. La razón de este comportamiento extraño es que en distribuciones como la Cauchy están dominadas por lo que sucede en las colas de la distribución, contrario a lo que sucede por ejemplo con la distribución Gaussiana. Para esta distribución $\sigma$ no es la desviación estándar, que como ya se dijo podría estar indefinida, $\sigma$ es la escala. A medida que $\nu$ aumenta la escala converge a la desviación estándar de una distribución Gaussiana. End of explanation distri = stats.expon(scale=3) # escala 3, lambda = 1/3 x = np.linspace(0, 25, 100) x_rvs = distri.rvs(500) # muestrear 500 valores de la distribución x_pdf = distri.pdf(x) # la pdf evaluada para todos los x media, varianza = distri.stats(moments='mv') plt.plot (x, x_pdf, lw=5, label='$\mu$ = {:3.1f}\n$\sigma$ = {:3.1f}'.format(float(media), float(varianza)**0.5)) plt.hist(x_rvs, density=True) plt.xlabel('x') plt.ylabel('$p(x)$') plt.legend(frameon=True); Explanation: Distribución exponencial La distribución exponencial se define solo para $x > 0$. Esta distribución se suele usar para describir el tiempo que transcurre entre dos eventos que ocurren de forma continua e independiente a una taza fija. El número de tales eventos para un tiempo fijo lo da la distribución de Poisson. 
$$ p(x \mid \lambda) = \lambda e^{-\lambda x} $$ La media y la desviación estándar de esta distribución están dadas por $\frac{1}{\lambda}$ Scipy usa una parametrización diferente donde la escala es igual a $\frac{1}{\lambda}$ End of explanation distri = stats.laplace(0, 0.7) # escala 3, lambda = 1/3 x = np.linspace(-5, 5, 500) x_rvs = distri.rvs(500) # muestrear 500 valores de la distribución x_pdf = distri.pdf(x) # la pdf evaluada para todos los x media, varianza = distri.stats(moments='mv') plt.plot (x, x_pdf, lw=5, label='$\mu$ = {:3.1f}\n$\sigma$ = {:3.1f}'.format(float(media), float(varianza)**0.5)) plt.hist(x_rvs, density=True) plt.xlabel('x') plt.ylabel('$p(x)$') plt.legend(frameon=True); Explanation: Distribución de Laplace También llamada distribución doble exponencial, ya que puede pensarse como una distribucion exponencial "más su imagen especular". Esta distribución surge de medir la diferencia entre dos variables exponenciales (idénticamente distribuidas). $$p(x \mid \mu, b) = \frac{1}{2b} \exp \left{ - \frac{|x - \mu|}{b} \right}$$ End of explanation distri = stats.beta(5, 2) # alfa=5, beta=2 x = np.linspace(0, 1, 100) x_rvs = distri.rvs(500) # muestrear 500 valores de la distribución x_pdf = distri.pdf(x) # la pdf evaluada para todos los x media, varianza = distri.stats(moments='mv') plt.plot (x, x_pdf, lw=5, label='$\mu$ = {:3.1f}\n$\sigma$ = {:3.1f}'.format(float(media), float(varianza)**0.5)) plt.hist(x_rvs, density=True) plt.xlabel('x') plt.ylabel('$p(x)$') plt.legend(frameon=True); Explanation: Distribución beta Es una distribución definida en el intervalo [0, 1]. Se usa para modelar el comportamiento de variables aleatorias limitadas a un intervalo finito. Es útil para modelar proporciones o porcentajes. $$ p(x \mid \alpha, \beta)= \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1}(1-x)^{\beta-1} $$ El primer término es simplemente una constante de normalización que asegura que la integral de la $pdf$ de 1. $\Gamma$ es la función gamma. Cuando $\alpha=1$ y $\beta=1$ la distribución beta se reduce a la distribución uniforme. Si queremos expresar la distribución beta en función de la media y la dispersión alrededor de la media podemos hacerlo de la siguiente forma. $$\alpha = \mu \kappa$$ $$\beta = (1 − \mu) \kappa$$ Siendo $\mu$ la media y $\kappa$ una parámetro llamado concentración a media que $\kappa$ aumenta la dispersión disminuye. Notese, además que $\kappa = \alpha + \beta$. End of explanation distri = stats.gamma(a=3, scale=0.5) # alfa 3, theta 0.5 x = np.linspace(0, 8, 100) x_rvs = distri.rvs(500) # muestrear 500 valores de la distribución x_pdf = distri.pdf(x) # la pdf evaluada para todos los x media, varianza = distri.stats(moments='mv') plt.plot (x, x_pdf, lw=5, label='$\mu$ = {:3.1f}\n$\sigma$ = {:3.1f}'.format(float(media), float(varianza)**0.5)) plt.hist(x_rvs, normed=True) plt.xlabel('x') plt.ylabel('$p(x)$') plt.legend(frameon=True); Explanation: Distribución Gamma Scipy parametriza a la distribución gamma usando un parámetro $\alpha$ y uno $\theta$, usando estos parámetros la $pdf$ es: $$ p(x \mid \alpha, \theta) = \frac{1}{\Gamma(\alpha) \theta^\alpha} x^{\alpha \,-\, 1} e^{-\frac{x}{\theta}} $$ Una parametrización más común en estadística Bayesiana usa los parámetros $\alpha$ y $\beta$, siendo $\beta = \frac{1}{\theta}$. En este caso la pdf queda como: $$ p(x \mid \alpha, \beta) = \frac{\beta^{\alpha}x^{\alpha-1}e^{-\beta x}}{\Gamma(\alpha)} $$ La distribución gamma se reduce a la exponencial cuando $\alpha=1$. 
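A quick numerical check of this last reduction (it relies only on the stats and np imports already used throughout):
x = np.linspace(0, 8, 100)
theta = 0.5
print(np.allclose(stats.gamma(a=1, scale=theta).pdf(x),
                  stats.expon(scale=theta).pdf(x)))    # True: gamma with alpha = 1 is the exponential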
End of explanation _, ax = plt.subplots(2,1, figsize=(6, 8), sharex=True) x_values = np.linspace(-4, 4, 200) values = [(0., .2), (0., 1.), (0., 2.), (-2., .5)] color = ['C0', 'C1', 'C2', 'C3'] for val, c in zip(values, color): pdf = stats.norm(*val).pdf(x_values) cdf = stats.norm(*val).cdf(x_values) ax[0].plot(x_values, pdf, lw=3, color=c, label='$\mu$ = {}, $\sigma$ = {}'.format(*val)) ax[1].plot(x_values, cdf, lw=3, color=c) ax[0].set_ylabel('$pdf$', fontsize=14, rotation=0, labelpad=20) ax[0].legend() ax[1].set_ylabel('$cdf$', fontsize=14, rotation=0, labelpad=20) ax[1].set_xlabel('$x$'); Explanation: Distribución acumulada La $pdf$ (o la $pmf$) son formas comunes de representar y trabajar con variables aleatorias, pero no son las únicas formas posibles. Existen otras representaciones equivalentes. Por ejemplo la función de distribución acumulada ($cdf$ en inglés). Al integrar una $pdf$ se obtiene la correspondiente $cdf$, y al derivar la $cdf$ se obtiene la $pdf$. La integral de la $pdf$ es llamada función de distribución acumulada ($cdf$): \begin{equation} cdf(x) = \int_{-\infty}^{x} pdf(x) d(x) \end{equation} En algunas situaciones se prefiere hablar de la función de supervivencia: \begin{equation} S(x) = 1 - cdf \end{equation} A continuación un ejemplo de la $pdf$ y $cdf$ para 4 distribuciones de la familia Gaussiana. End of explanation muestra = np.random.normal(0, 1, 100) dist = stats.norm(0, 1), stats.laplace(scale=0.7) x = np.linspace(-4, 4, 100) dist_pdf = dist[0].pdf(x), dist[1].pdf(x) _, ax = plt.subplots(2, 2, figsize=(8, 8)) for i in range(2): osm, osr = stats.probplot(muestra, fit=False, dist=dist[i]) ax[0,i].plot(osm, osm) ax[0,i].plot(osm, osr, 'o') ax[0,i].set_xlabel('Valores esperados') ax[0,i].set_ylabel('Valores observados') ax[1, i].plot(x, dist_pdf[i], lw=3) ax[1, i].hist(muestra, density=True) ax[1, i].set_ylim(0, np.max(dist_pdf) * 1.1) Explanation: La siguiente figura tomada del libro Think Stats resume las relaciones entre la $cdf$, $pdf$ y $pmf$. <img src='imagenes/cmf_pdf_pmf.png' width=600 > Distribuciones empíricas versus teóricas Un método gráfico para comparar si un conjunto de datos se ajusta a una distribución teórica es comparar los valores esperados de la distribución teórica en el eje $x$ y en el eje $y$ los valores de los datos ordenados de menor a mayor. Si la distribución empírica fuese exactamente igual a la teórica los puntos caerían sobre la linea recta a $45^{\circ}$, es decir la linea donde $y = x$. End of explanation tamaño_muestra = 200 muestras = range(1, tamaño_muestra) dist = stats.uniform(0, 1) media_verdadera = dist.stats(moments='m') for _ in range(3): muestra = dist.rvs(tamaño_muestra) media_estimada = [muestra[:i].mean() for i in muestras] plt.plot(muestras, media_estimada, lw=1.5) plt.hlines(media_verdadera, 0, tamaño_muestra, linestyle='--', color='k') plt.ylabel("media", fontsize=14) plt.xlabel("# de muestras", fontsize=14); Explanation: Límites Los dos teoremás más conocidos y usados en probabilidad son la ley de los grande números y el teorema del límite central. Ambos nos dicen que le sucede a la media muestral a medida que el tamaño de la muestra aumenta. La ley de los grandes números El valor promedio calculado para una muestra converge al valor esperado (media) de dicha distribución. Esto no es cierto para algunas distribuciones como la distribución de Cauchy (la cual no tiene media ni varianza finita). La ley de los grandes números se suele malinterpretar y dar lugar a la paradoja del apostador. 
Un ejemplo de esta paradoja es creer que conviene apostar en la lotería/quiniela a un número atrasado, es decir un número que hace tiempo que no sale. El razonamiento, erróneo, es que como todos los números tienen la misma probabilidad a largo plazo si un número viene atrasado entonces hay alguna especie de fuerza que aumenta la probabilidad de ese número en los próximo sorteos para así re-establecer la equiprobabilidad de los números. End of explanation np.random.seed(4) plt.figure(figsize=(9,6)) iters = 2000 distri = stats.expon(scale=1) mu, var = distri.stats(moments='mv') for i, n in enumerate([1, 5, 100]): sample = np.mean(distri.rvs((n, iters)), axis=0) plt.subplot(2, 3, i+1) sd = (var/n)**0.5 x = np.linspace(mu - 4 * sd, mu + 4 * sd, 200) plt.plot(x, stats.norm(mu, sd).pdf(x)) plt.hist(sample, density=True, bins=20) plt.title('n = {}'.format(n)) plt.subplot(2, 3, i+4) osm, osr = stats.probplot(sample, dist=stats.norm(mu, (var/n)**0.5), fit=False) plt.plot(osm, osm) plt.plot(osm, osr, 'o') plt.xlabel('Valores esperados') plt.ylabel('Valores observados') plt.tight_layout() Explanation: El teorema central del límite El teorema central del límite (también llamado teorema del límite central) establece que si tomamos $n$ valores (de forma independiente) de una distribución arbitraria la media $\bar X$ de esos valores se distribuirá aproximadamente como una Gaussiana a medida que ${n \rightarrow \infty}$: $$\bar X_n \dot\sim \mathcal{N} \left(\mu, \frac{\sigma^2}{n}\right)$$ Donde $\mu$ y $\sigma^2$ son la media y varianza poblacionales. Para que el teorema del límite central se cumpla se deben cumplir los siguientes supuestos: Las variables se muestrean de forma independiente Las variables provienen de la misma distribución La media y la desviación estándar de la distribución tiene que ser finitas Los criterios 1 y 2 se pueden relajar bastante y aún así obtendremos aproximadamente una Gaussiana, pero del criterio 3 no hay forma de escapar. Para distribuciones como la distribución de Cauchy, que no posen media ni varianza definida este teorema no se aplica. El promedio de $N$ valores provenientes de una distribución Cauchy no siguen una Gaussiana sino una distribución de Cauchy. El teorema del límite central explica la prevalencia de la distribución Gaussiana en la naturaleza. Muchos de los fenómenos que estudiamos se pueden explicar como fluctuaciones alrededor de una media, o ser el resultado de la suma de muchos factores diferentes. Además, las Gaussianas son muy comunes en probabilidad, estadística y machine learning ya que que esta familia de distribuciones son más simples de manipular matemáticamente que muchas otras distribuciones. A continuación vemos una simulación que nos muestra el teorema del límite central en acción. End of explanation
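As a complement to the simulation above, a short sketch of the Cauchy caveat (again only stats and np are assumed): sample means of a Cauchy distribution keep fluctuating instead of settling, because the distribution has no finite mean or variance.
np.random.seed(0)
cauchy = stats.cauchy(loc=0, scale=1)
for n in [10, 1000, 100000]:
    print(n, cauchy.rvs(n).mean())    # the printed means do not converge as n grows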
12,933
Given the following text description, write Python code to implement the functionality described below step by step Description: Logic functions Step1: Truth value testing Q1. Let x be an arbitrary array. Return True if none of the elements of x is zero. Remind that 0 evaluates to False in python. Step2: Q2. Let x be an arbitrary array. Return True if any of the elements of x is non-zero. Step3: Array contents Q3. Predict the result of the following code. Step4: Q4. Predict the result of the following code. Step5: Q5. Predict the result of the following code. Step6: Array type testing Q6. Predict the result of the following code. Step7: Q7. Predict the result of the following code. Step8: Q8. Predict the result of the following code. Step9: Logical operations Q9. Predict the result of the following code. Step10: Comparison Q10. Predict the result of the following code. Step11: Q11. Write numpy comparison functions such that they return the results as you see. Step12: Q12. Predict the result of the following code.
Python Code: import numpy as np np.__version__ Explanation: Logic functions End of explanation x = np.array([1,2,3]) # x = np.array([1,0,3]) # Explanation: Truth value testing Q1. Let x be an arbitrary array. Return True if none of the elements of x is zero. Remind that 0 evaluates to False in python. End of explanation x = np.array([1,0,0]) # x = np.array([0,0,0]) # Explanation: Q2. Let x be an arbitrary array. Return True if any of the elements of x is non-zero. End of explanation x = np.array([1, 0, np.nan, np.inf]) #print np.isfinite(x) Explanation: Array contents Q3. Predict the result of the following code. End of explanation x = np.array([1, 0, np.nan, np.inf]) #print np.isinf(x) Explanation: Q4. Predict the result of the following code. End of explanation x = np.array([1, 0, np.nan, np.inf]) #print np.isnan(x) Explanation: Q5. Predict the result of the following code. End of explanation x = np.array([1+1j, 1+0j, 4.5, 3, 2, 2j]) #print np.iscomplex(x) Explanation: Array type testing Q6. Predict the result of the following code. End of explanation x = np.array([1+1j, 1+0j, 4.5, 3, 2, 2j]) #print np.isreal(x) Explanation: Q7. Predict the result of the following code. End of explanation #print np.isscalar(3) #print np.isscalar([3]) #print np.isscalar(True) Explanation: Q8. Predict the result of the following code. End of explanation #print np.logical_and([True, False], [False, False]) #print np.logical_or([True, False, True], [True, False, False]) #print np.logical_xor([True, False, True], [True, False, False]) #print np.logical_not([True, False, 0, 1]) Explanation: Logical operations Q9. Predict the result of the following code. End of explanation #print np.allclose([3], [2.999999]) #print np.array_equal([3], [2.999999]) Explanation: Comparison Q10. Predict the result of the following code. End of explanation x = np.array([4, 5]) y = np.array([2, 5]) # # # # Explanation: Q11. Write numpy comparison functions such that they return the results as you see. End of explanation #print np.equal([1, 2], [1, 2.000001]) #print np.isclose([1, 2], [1, 2.000001]) Explanation: Q12. Predict the result of the following code. End of explanation
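One optional follow-up to Q12, as a hint rather than part of the original question set: np.isclose exposes the tolerances that explain its difference from np.equal; its documented defaults are rtol=1e-05 and atol=1e-08.
#print np.isclose([1, 2], [1, 2.000001], rtol=1e-05, atol=1e-08)    # default tolerances -> [ True  True]
#print np.isclose([1, 2], [1, 2.000001], rtol=0, atol=1e-07)        # tighter tolerance  -> [ True False]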
12,934
Given the following text description, write Python code to implement the functionality described below step by step Description: My tutorial This section describes a tool or operation that is desirable for someone. The title above should describe what is happening, and this paragraph explains in what situation the tool or process being described is useful. It sets the stage for what we're about to do. If we expect that they've completed previous tutorials, such as formatting their data or preprocessing it in some way, we should link here to those previous tutorials (or at the very least name them). Step 1 Step1: Step 2 Step2: It's also important to summarize what we've done, so that the user can Summarizing these results and those that require more intimate knowledge of the data, we come up with the following
Python Code: from scipy import misc as scm import os.path as op import matplotlib.pyplot as plt % matplotlib inline datadir = '/tmp/113_1/' im = scm.imread(op.join(datadir,'0090.png')) plt.imshow(im, cmap='gray') plt.show() Explanation: My tutorial This section describes a tool or operation that is desirable for someone. The title above should describe what is happening, and this paragraph explains in what situation the tool or process being described is useful. It sets the stage for what we're about to do. If we expect that they've completed previous tutorials, such as formatting their data or preprocessing it in some way, we should link here to those previous tutorials (or at the very least name them). Step 1: Getting set-up This sections should describe what a user needs to have in front of them in order for the tutorial to work, including providing example data that should be: a) publicly accessible, b) "small", and c) be able to be processed quickly. This should explain the properties of the data needed, and either display an image of the demo data inline, or have a code block below showing the demo data, like is currently below. End of explanation import os import numpy as np files = os.listdir(datadir) # get a list of all files in the dataset print 'X image size: ', im.shape[1] # second dimension is X in our png print 'Y image size: ', im.shape[0] # first dimension is Y in our png print 'Z image size: ', len(files) # we get Z by counting the number of images in our directory print 'Time range: (0, 0)' # default value if the data is not time series dtype = im.dtype print 'Data type: ', dtype try: im_min = np.iinfo(dtype).max im_max = np.iinfo(dtype).min except: im_min = np.finfo(dtype).max im_max = np.finfo(dtype).min for f in files: # get range by checking each slice min and max temp_im = scm.imread(op.join(datadir, f)) im_min = np.min(temp_im) if np.min(temp_im) < im_min else im_min # update image stack min im_max = np.max(temp_im) if np.max(temp_im) > im_max else im_max # update image stack max print 'Window range: (%f, %f)' % (im_min, im_max) Explanation: Step 2: Doing a thing to the data Here we should enumerate the goals of the pre-processing steps that need to be taken initially. Whether this is organization or documentation of the data, or computing some trasformation, this step is generally taking the fresh, "raw"-ish data you provided and the user is expected to have, and sets it up so that in the third step they can do real processing. As an example, in the case of ndstore, when creating datasets/projects/channels we need to learn the following features about our data prior to beginning: - {x, y, z} image size - time range - data type - window range An external link to documentation which explain things, like this one for the above example, is always helpful for users who wish to have more than the superficial and functional picture you're currently providing. We, again, should have a code block that does some analysis. the one below gets some items from that list above. End of explanation print "more code here, as always" Explanation: It's also important to summarize what we've done, so that the user can Summarizing these results and those that require more intimate knowledge of the data, we come up with the following: | property | value | |:--------- |:------ | | dataset name | kki2009_demo | | x size | 182 | | y size | 218 | | z size | 182 | | time range | (0, 0) | | data type | uint8 | | window range | (0, 255) | Step 3: Doing the thing This is usually the real deal. 
You've set the stage, preprocessed as needed, and are now ready for the real task. Here you should give a detailed description of the next steps, link to the relevant documentation, and provide some way for the user to validate that the steps they just performed worked as expected.
End of explanation
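One possible validation sketch for this step (purely illustrative; it assumes the datadir, files, im, dtype, im_min and im_max names defined earlier in this tutorial are still in scope):
check = scm.imread(op.join(datadir, files[0]))    # re-read one slice and compare against the summary table
assert check.shape == im.shape, "unexpected slice size"
assert check.dtype == dtype, "unexpected data type"
assert im_min <= check.min() and check.max() <= im_max, "value outside the computed window range"
print "validation checks passed"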
12,935
Given the following text description, write Python code to implement the functionality described below step by step Description: Think Bayes This notebook presents example code and exercise solutions for Think Bayes. Copyright 2016 Allen B. Downey MIT License Step2: Here's a problem from Joyce, "How probabilities reflect evidence" Step3: Here's the uniform prior Step4: Here's Jacob's update after 5 blue marbles. Step5: Here's Emily's update after an additional 12 blue and 3 green. Step6: What should Jacob believe about Bnext? Step7: Let's make it a function Step8: Here's what Jacob believes. Step9: And Emily. Step10: Suppose we draw a blue marble from the same urn and show it to Jacob and Emily. How much do their beliefs about Bnext change? Here's the effect on Jacob. Step11: And on Emily. Step12: Suppose we draw a green marble from the same urn and show it to Jacob and Emily. How much do their beliefs about Bnext change?
Python Code: # Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # import classes from thinkbayes2 from thinkbayes2 import Pmf, Suite import thinkplot as tplt Explanation: Think Bayes This notebook presents example code and exercise solutions for Think Bayes. Copyright 2016 Allen B. Downey MIT License: https://opensource.org/licenses/MIT End of explanation class Urns(Suite): def Likelihood(self, data, hypo): Computes the likelihood of the data under the hypothesis. data: 'B' or 'G' hypo: urn index from 0..3 prob_blue = hypo / 3 if data == 'B': return prob_blue else: return 1-prob_blue Explanation: Here's a problem from Joyce, "How probabilities reflect evidence": Four Urns: Jacob and Emily both start out knowing that the urn U was randomly chosen from a set of four urns {urn0, urn1, urn2, urn3} where urn_i contains three balls, i of which are blue and 3-i of which are green. Since the choice of U was random both subjects assign equal credence to the four hypotheses about its contents: c(U = urn_i) = 1/4. Moreover, both treat these hypotheses as statements about the objective chance of drawing a blue ball from U, so that knowledge of U = urn_i ‘screen offs’ any sampling data in the sense that c(Bnext |E & U = urn_i) = c(Bnext | U = urni), where Bnext says that the next ball drawn from the urn will be blue and E is a proposition that describes any prior series of random draws with replacement from U. Finally, Jacob and Emily regard random drawing with replacement as an exchangeable process, so that any series of draws that produces m blue balls and n green balls is as likely as any other such series, irrespective of order. Use BmGn to denote the generic event in which m blue balls and n green balls are drawn at random and with replacement form U. Against this backdrop of shared evidence, suppose Jacob sees five balls drawn at random and with replacement from U and observes that all are blue, so his evidence is B5G0. Emily, who sees Jacob’s evidence, looks at fifteen additional draws of which twelve come up blue, so her evidence is B17G3. What should Emily and Jacob think about Bnext? Here's a class that represents a suite of hypotheses about the urns: End of explanation prior = Urns([0, 1, 2, 3]) tplt.Hist(prior) tplt.decorate(xlabel='Urn index (i)', ylabel='PMF') Explanation: Here's the uniform prior: End of explanation jacob = prior.Copy() B5G0 = 'B'*5 for data in B5G0: jacob.Update(data) jacob.Print() tplt.Hist(prior, color='gray') tplt.Hist(jacob) tplt.decorate(xlabel='Urn index (i)', ylabel='PMF') Explanation: Here's Jacob's update after 5 blue marbles. End of explanation emily = jacob.Copy() B12G3 = 'B'*12 + 'G'*3 for data in B12G3: emily.Update(data) emily.Print() tplt.preplot(cols=2) tplt.Hist(jacob, label='Jacob') tplt.decorate(xlabel='Urn index (i)', ylabel='PMF') tplt.subplot(2) tplt.Hist(emily, label='Emily') tplt.decorate(xlabel='Urn index (i)', ylabel='PMF') Explanation: Here's Emily's update after an additional 12 blue and 3 green. End of explanation total = 0 for i, prob_i in jacob.Items(): print(i, prob_i) prob_blue = i/3 total += prob_i * prob_blue total Explanation: What should Jacob believe about Bnext? 
End of explanation def prob_b_next(suite): total = 0 for i, prob_i in suite.Items(): prob_blue = i/3 total += prob_i * prob_blue return total Explanation: Let's make it a function: End of explanation prob_b_next(jacob) Explanation: Here's what Jacob believes. End of explanation prob_b_next(emily) Explanation: And Emily. End of explanation print(prob_b_next(jacob)) jacob.Update('B') print(prob_b_next(jacob)) Explanation: Suppose we draw a blue marble from the same urn and show it to Jacob and Emily. How much do their beliefs about Bnext change? Here's the effect on Jacob. End of explanation print(prob_b_next(emily)) emily.Update('B') print(prob_b_next(emily)) Explanation: And on Emily. End of explanation print(prob_b_next(jacob)) jacob.Update('G') print(prob_b_next(jacob)) print(prob_b_next(emily)) emily.Update('G') print(prob_b_next(emily)) Explanation: Suppose we draw a green marble from the same urn and show it to Jacob and Emily. How much do their beliefs about Bnext change? End of explanation
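A short follow-up comparison, rebuilt from the prior with the B5G0 and B12G3 strings defined above so the updates already applied to jacob and emily are not reused: the same extra draw should move Jacob's prediction more than Emily's, since Emily's beliefs rest on more data.
jacob2 = prior.Copy()
for data in B5G0:
    jacob2.Update(data)
emily2 = jacob2.Copy()
for data in B12G3:
    emily2.Update(data)
before_j, before_e = prob_b_next(jacob2), prob_b_next(emily2)
jacob2.Update('G')
emily2.Update('G')
print(abs(prob_b_next(jacob2) - before_j), abs(prob_b_next(emily2) - before_e))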
12,936
Given the following text description, write Python code to implement the functionality described below step by step Description: SINGA Model Classes <img src="http Step1: Common layers Step2: Dense Layer Step3: Convolution Layer Step4: Pooling Layer Step5: Branch layers Step6: Metric and Loss Step7: Optimizer Step8: FeedForwardNet
Python Code: from singa import tensor, device, layer #help(layer.Layer) layer.engine='singacpp' Explanation: SINGA Model Classes <img src="http://singa.apache.org/en/_static/images/singav1-sw.png" width="500px"/> Layer Typically, the life cycle of a layer instance includes: 1. construct layer without input_sample_shapes, goto 2; or, construct layer with input_sample_shapes, goto 3; call setup to create the parameters and setup other meta fields; initialize the parameters of the layer call forward or access layer members call backward and get parameters for update End of explanation from singa.layer import Dense, Conv2D, MaxPooling2D, Activation, BatchNormalization, Softmax Explanation: Common layers End of explanation dense = Dense('dense', 3, input_sample_shape=(2,)) #dense.param_names() w, b = dense.param_values() print(w.shape, b.shape) w.gaussian(0, 0.1) b.set_value(0) x = tensor.Tensor((2,2)) x.uniform(-1, 1) y = dense.forward(True, x) tensor.to_numpy(y) gx, [gw, gb] = dense.backward(True, y) print(gx.shape, gw.shape, gb.shape) Explanation: Dense Layer End of explanation conv = Conv2D('conv', 4, 3, 1, input_sample_shape=(3, 6, 6)) print(conv.get_output_sample_shape()) Explanation: Convolution Layer End of explanation pool = MaxPooling2D('pool', 3, 2, input_sample_shape=(4, 6, 6)) print(pool.get_output_sample_shape()) Explanation: Pooling Layer End of explanation from singa.layer import Split, Merge, Slice, Concat split = Split('split', 2, input_sample_shape=(4, 6, 6)) print(split.get_output_sample_shape()) merge = Merge('merge', input_sample_shape=(4, 6, 6)) print(merge.get_output_sample_shape()) sli = Slice('slice', 1, [2], input_sample_shape=(4, 6, 6)) print(sli.get_output_sample_shape()) concat = Concat('concat', 1, input_sample_shapes=[(3, 6, 6), (1, 6, 6)]) print(concat.get_output_sample_shape()) Explanation: Branch layers End of explanation from singa import metric import numpy as np x = tensor.Tensor((3, 5)) x.uniform(0, 1) # randomly genearte the prediction activation x = tensor.softmax(x) # normalize the prediction into probabilities print(tensor.to_numpy(x)) y = tensor.from_numpy(np.array([0, 1, 3], dtype=np.int)) # set the truth f = metric.Accuracy() acc = f.evaluate(x, y) # averaged accuracy over all 3 samples in x print(acc) from singa import loss x = tensor.Tensor((3, 5)) x.uniform(0, 1) # randomly genearte the prediction activation y = tensor.from_numpy(np.array([0, 1, 3], dtype=np.int)) # set the truth f = loss.SoftmaxCrossEntropy() l = f.forward(True, x, y) # l is tensor with 3 loss values g = f.backward() # g is a tensor containing all gradients of x w.r.t l print(l.l1()) print(tensor.to_numpy(g)) Explanation: Metric and Loss End of explanation from singa import optimizer sgd = optimizer.SGD(lr=0.01, momentum=0.9, weight_decay=1e-4) p = tensor.Tensor((3,5)) p.uniform(-1, 1) g = tensor.Tensor((3,5)) g.gaussian(0, 0.01) sgd.apply(1, g, p, 'param') # use the global lr=0.1 for epoch 1 sgd.apply_with_lr(2, 0.03, g, p, 'param') # use lr=0.03 for epoch 2 Explanation: Optimizer End of explanation from singa import net as ffnet layer.engine = 'singacpp' net = ffnet.FeedForwardNet(loss.SoftmaxCrossEntropy(), metric.Accuracy()) net.add(layer.Conv2D('conv1', 32, 5, 1, input_sample_shape=(3,32,32,))) net.add(layer.Activation('relu1')) net.add(layer.MaxPooling2D('pool1', 3, 2)) net.add(layer.Flatten('flat')) net.add(layer.Dense('dense', 10)) # init parameters for p in net.param_values(): if len(p.shape) == 0: p.set_value(0) else: p.gaussian(0, 0.01) print(net.param_names()) 
layer.engine = 'cudnn' net = ffnet.FeedForwardNet(loss.SoftmaxCrossEntropy(), metric.Accuracy()) net.add(layer.Conv2D('conv1', 32, 5, 1, input_sample_shape=(3,32,32,))) net.add(layer.Activation('relu1')) net.add(layer.MaxPooling2D('pool1', 3, 2)) net.add(layer.Flatten('flat')) net.add(layer.Dense('dense', 10)) # init parameters for p in net.param_values(): if len(p.shape) == 0: p.set_value(0) else: p.gaussian(0, 0.01) # move net onto gpu dev = device.create_cuda_gpu() net.to_device(dev) Explanation: FeedForwardNet End of explanation
12,937
Given the following text description, write Python code to implement the functionality described below step by step Description: Section 7.3 Step1: Load data Step3: Final data format Step4: How long did this take to run?
Python Code: user_agent_email = "REPLACE THIS WITH YOUR EMAIL plz kthxbye" import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import numpy as np import glob import pickle import numpy as np import mwapi %matplotlib inline import datetime start = datetime.datetime.now() Explanation: Section 7.3: Sample diffs This is a data analysis script for creating tables of sample diffs for validation as described in section 7.3, which you can run based entirely off the files in this GitHub repository. It loads datasets/parsed_dataframes/df_all_comments_parsed_2016.pickle.xz and creates the following files: datasets/sample_tables/[language]_ns0_sample_dict.pickle analysis/main/sample_tables/[language]/ns0/[language]_ns0_sample_all.html analysis/main/sample_tables/[language]/ns0/[language]_ns0_sample_[bottype].html This entire notebook can be run from the beginning with Kernel -> Restart & Run All in the menu bar. On a laptop running a Core i5-2540M processor, it takes about 45 minutes to run, as it collects data from the Wikipedia API. IF YOU RUN THIS, YOU MUST REPLACE user_agent_email WITH YOUR E-MAIL End of explanation !unxz --keep --force ../../datasets/parsed_dataframes/df_all_comments_parsed_2016.pickle.xz with open("../../datasets/parsed_dataframes/df_all_comments_parsed_2016.pickle", "rb") as f: df_all = pickle.load(f) Explanation: Load data End of explanation df_all[0:2].transpose() %%bash rm -rf sample_tables mkdir sample_tables declare -a arr=("de" "en" "es" "fr" "ja" "pt" "zh") for i in "${arr[@]}" do mkdir sample_tables/$i/ mkdir sample_tables/$i/ns0 # or do whatever with individual element of the array done find sample_tables/ import mwapi import difflib session = {} for lang in df_all['language'].unique(): session[lang] = mwapi.Session('https://' + str(lang) + '.wikipedia.org', user_agent="Research script by " + user_agent_email) def get_revision(rev_id, language): try: rev_get = session[language].get(action='query', prop='revisions', rvprop="content", revids=rev_id) rev_pages = rev_get['query']['pages'] for row in rev_pages.items(): return(row[1]['revisions'][0]['*']) except: return np.nan def get_diff(row): #print(row) try: reverted_content = row['reverted_content'].split("\n") reverting_content = row['reverting_content'].split("\n") diff = difflib.unified_diff(reverted_content, reverting_content) return '<br/>'.join(list(diff)) except: return np.nan def get_diff_api(row): #print(row) rev_id = row['rev_id'] reverting_id = row['reverting_id'] #print(rev_id, reverting_id) rev_get = session.get(action='compare', fromrev=rev_id, torev=reverting_id) #print(rev_get) return rev_get['compare']['*'] !mkdir ../../datasets/sample_tables !mkdir sample_tables def get_lang_diffs(lang): print("-----------") print(lang) print("-----------") import os pd.options.display.max_colwidth = -1 df_lang_ns0 = df_all.query("language == '" + lang + "' and page_namespace == 0").copy() #df_lang_ns0['bottype'].unique() df_lang_ns0_sample_dict = {} for bottype in df_lang_ns0['bottype'].unique(): print(bottype) type_df = df_lang_ns0[df_lang_ns0['bottype']==bottype] if len(type_df) > 10000: type_df_sample = type_df.sample(round(len(type_df)/100)) elif len(type_df) > 100: type_df_sample = type_df.sample(100) else: type_df_sample = type_df.copy() type_df_sample['reverting_content'] = type_df_sample['reverting_id'].apply(get_revision, language=lang) type_df_sample['reverted_content'] = type_df_sample['rev_id'].apply(get_revision, language=lang) type_df_sample['diff'] = type_df_sample.apply(get_diff, 
axis=1) df_lang_ns0_sample_dict[bottype] = type_df_sample with open("../../datasets/sample_tables/df_" + lang + "_ns0_sample_dict.pickle", "wb") as f: pickle.dump(df_lang_ns0_sample_dict, f) for bottype, bottype_df in df_lang_ns0_sample_dict.items(): bottype_file = bottype.replace(" ", "_") bottype_file = bottype_file.replace("/", "_") filename = "sample_tables/" + lang + "/ns0/" + lang + "_ns0_sample_" + bottype_file + ".html" bottype_df[['reverting_id','reverting_user_text', 'rev_user_text', 'reverting_comment', 'diff']].to_html(filename, escape=False) with open(filename, 'r+') as f: content = f.read() f.seek(0, 0) f.write("<a name='" + bottype + "'><h1>" + bottype + "</h1></a>\r\n") f.write(content) call_s = "cat sample_tables/" + lang + "/ns0/*.html > sample_tables/" + lang + "/ns0/" + lang + "_ns0_sample_all.html" os.system(call_s) with open("sample_tables/" + lang + "/ns0/" + lang + "_ns0_sample_all.html", 'r+') as f: content = f.read() f.seek(0, 0) f.write("<head><meta charset='UTF-8'></head>\r\n<body>") f.write(<style> .dataframe { border:1px solid #C0C0C0; border-collapse:collapse; padding:5px; table-layout:fixed; } .dataframe th { border:1px solid #C0C0C0; padding:5px; background:#F0F0F0; } .dataframe td { border:1px solid #C0C0C0; padding:5px; } </style>) f.write("<table class='dataframe'>") f.write("<thead><tr><th>Bot type</th><th>Total count in " + lang + "wiki ns0</th><th>Number of sample diffs</th>") for bottype, bottype_df in df_lang_ns0_sample_dict.items(): len_df = str(len(df_lang_ns0[df_lang_ns0['bottype']==bottype])) len_sample = str(len(bottype_df)) toc_str = "<tr><td><a href='#" + bottype + "'>" + bottype + "</a></td>\r\n" toc_str += "<td>" + len_df + "</td>" toc_str += "<td>" + len_sample + "</td></tr>" f.write(toc_str) f.write("</table>") f.write(content) for lang in df_all['language'].unique(): get_lang_diffs(lang) Explanation: Final data format End of explanation end = datetime.datetime.now() time_to_run = end - start minutes = int(time_to_run.seconds/60) seconds = time_to_run.seconds % 60 print("Total runtime: ", minutes, "minutes, ", seconds, "seconds") Explanation: How long did this take to run? End of explanation
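One optional hardening note, offered as a suggestion rather than part of the analysis itself: the per-type pages reuse the raw bot type in the <a name=...> anchors, which may contain spaces or slashes; a small helper keeps the file names and href targets consistent.
import re
def slugify(bottype):
    # keep letters, digits and underscores; collapse everything else to a single "_"
    return re.sub(r'[^0-9A-Za-z]+', '_', bottype).strip('_')
print(slugify("bot revert / other"))    # hypothetical label, prints "bot_revert_other"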
12,938
Given the following text description, write Python code to implement the functionality described below step by step Description: Sparse Linear Inverse with EM Learning In the sparse linear inverse demo, we saw how to set up a solve a simple sparse linear inverse problem using the vamp method in the vampyre package. Specifically, we solved for a vector $x$ from linear measurements of the form $y=Ax+w$. Critical in demo was that the vamp method had to be supplied a description of the statistics on the components on $x$ and the noise variance $w$. In many practical cases though, these are not known. In the demo, we show how to simultaneously learn $x$ and the distribution on $x$ with EM learning. The example here is taken from the following paper which introduced the combination of VAMP with EM learning Step1: Generating Synthetic Data Next, we will generate the synthetic sparse data. Recall, that in the sparse linear inverse problem, we want to estimate a vector $z_0$ from measurements $$ y = Az_0 + w, $$ for some known linear transform $A$. The vector $w$ represents noise. The sparse vector $z_0$ is described probabilistically. We will use a slightly different model than in the sparse linear inverse demo, and describe the sparse vector $z_0$ as a Gaussian mixture model Step2: Next, we generate a random matrix. Before, we generated the random matrix with Gaussian iid entries. In this example, to make the problem more challenging, we will use a more ill-conditioned random matrix. The method rand_rot_invariant creates a random matrix with a specific condition number. Step3: Finally, we add noise at the desired SNR Step4: Set up the solvers As in the sparse inverse demo, the VAMP estimator requires that we specify two probability distributions Step5: To evaluate the EM method, we will compare it against an oracle that knows the true density. We thus create two estimators for the prior Step6: We also create two estimators for the likelihood $p(y|z1,wvar)$. For the oracle estimator, the parameter wvar is set to its true value; for the EM estimator it is set to its initial estimate wvar_init. Step7: Running the solvers for the oracle and EM case We first run the solver for the oracle case and measure the MSE per iteration. Step8: Next, we run the EM estimator. We see we obtain a similar final MSE. Step9: We plot the two MSEs as a function of the iteration number.
Python Code: # Import vampyre import os import sys vp_path = os.path.abspath('../../') if not vp_path in sys.path: sys.path.append(vp_path) import vampyre as vp # Import the other packages import numpy as np import matplotlib import matplotlib.pyplot as plt %matplotlib inline Explanation: Sparse Linear Inverse with EM Learning In the sparse linear inverse demo, we saw how to set up a solve a simple sparse linear inverse problem using the vamp method in the vampyre package. Specifically, we solved for a vector $x$ from linear measurements of the form $y=Ax+w$. Critical in demo was that the vamp method had to be supplied a description of the statistics on the components on $x$ and the noise variance $w$. In many practical cases though, these are not known. In the demo, we show how to simultaneously learn $x$ and the distribution on $x$ with EM learning. The example here is taken from the following paper which introduced the combination of VAMP with EM learning: Fletcher, Alyson K., and Philip Schniter. Learning and free energies for vector approximate message passing, Proc. IEEE Acoustics, Speech and Signal Processing (ICASSP), 2017. Importing the Package First, as in the sparse linear inverse demo we load vampyre and other packages. End of explanation # Dimensions nz0 = 1000 nz1 = 500 ncol = 10 zshape0 = (nz0,ncol) zshape1 = (nz1,ncol) # Parameters for the two components varc_lo = 1e-4 # variance of the low variance component varc_hi = 1 # variance of the high variance component prob_hi = 0.1 # probability of the high variance component prob_lo = 1-prob_hi meanc = np.array([0,0]) probc = np.array([prob_lo, prob_hi]) varc = np.array([varc_lo, varc_hi]) nc = len(probc) # Generate random data following the GMM model zlen = np.prod(zshape0) ind = np.random.choice(nc,zlen,p=probc) u = np.random.randn(zlen) z0 = u*np.sqrt(varc[ind]) + meanc[ind] z0 = z0.reshape(zshape0) Explanation: Generating Synthetic Data Next, we will generate the synthetic sparse data. Recall, that in the sparse linear inverse problem, we want to estimate a vector $z_0$ from measurements $$ y = Az_0 + w, $$ for some known linear transform $A$. The vector $w$ represents noise. The sparse vector $z_0$ is described probabilistically. We will use a slightly different model than in the sparse linear inverse demo, and describe the sparse vector $z_0$ as a Gaussian mixture model: Each component of the vector $z_0$ is distributed as being randomly one of two components: $$ z_{0j} \sim \begin{cases} N(0,\sigma^2_H) & \mbox{with prob } P_H, \ N(0,\sigma^2_L) & \mbox{with prob } P_L, \end{cases} $$ where $\sigma^2_H$ represents a high variance and $\sigma^2_L$ a low variance. Thus, with some probability $p_L$, the component is small (close to zero) and probability $p_H$ it is large. End of explanation cond_num = 100 # Condition number A = vp.trans.rand_rot_invariant_mat(nz1,nz0,cond_num=cond_num) z1 = A.dot(z0) Explanation: Next, we generate a random matrix. Before, we generated the random matrix with Gaussian iid entries. In this example, to make the problem more challenging, we will use a more ill-conditioned random matrix. The method rand_rot_invariant creates a random matrix with a specific condition number. 
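To make the condition-number knob concrete, here is an illustrative NumPy-only construction (a sketch; not necessarily how rand_rot_invariant_mat is implemented internally): draw random orthogonal factors and space the singular values so that their largest/smallest ratio equals the requested condition number.
def rand_cond_matrix(nrow, ncol, cond_num):
    # random orthogonal factors
    U, _ = np.linalg.qr(np.random.randn(nrow, nrow))
    V, _ = np.linalg.qr(np.random.randn(ncol, ncol))
    r = min(nrow, ncol)
    s = np.logspace(0, -np.log10(cond_num), r)   # singular values from 1 down to 1/cond_num
    S = np.zeros((nrow, ncol))
    S[:r, :r] = np.diag(s)
    return U.dot(S).dot(V.T)

A_demo = rand_cond_matrix(nz1, nz0, cond_num)
print(np.linalg.cond(A_demo))   # should come out close to cond_num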
End of explanation snr = 40 # SNR in dB yvar = np.mean(np.abs(z1)**2) wvar = yvar*np.power(10, -0.1*snr) y = z1 + np.random.normal(0,np.sqrt(wvar), zshape1) Explanation: Finally, we add noise at the desired SNR End of explanation # Initial estimate for the noise wvar_init = np.mean(np.abs(y)**2) # Intiial estimates for the component means, variances and probabilities meanc_init = np.array([0,0]) prob_hi_init = np.minimum(nz1/nz0/2,0.95) prob_lo_init = 1-prob_hi_init var_hi_init = yvar/np.mean(np.abs(A)**2)/nz0/prob_hi_init var_lo_init = 1e-4 probc_init = np.array([prob_lo_init, prob_hi_init]) varc_init = np.array([var_lo_init, var_hi_init]) Explanation: Set up the solvers As in the sparse inverse demo, the VAMP estimator requires that we specify two probability distributions: * Prior: $p(z_0|\theta_0)$; * Likelihood: $p(y|z_0,\theta_1)$. In this case, both densities depend on parameters: $\theta_0$ and $\theta_1$. For the prior, the parameters $\theta_0$ represent the parameters of the components (probc,meanc,varc). For the likelihood, the unknown parameter $\theta_1$ is the output variance wvar. EM estimation is a method that allows to learn the values of the parameters $\theta_0$ and $\theta_1$ while also estimating the vector $z_0$. EM estimation is an iterative technique and requires that we specify initial estimates for the unknown parameters: wvar,probc,meanc,varc. We will use the initialization in the paper above. End of explanation # Estimator with EM, initialized to the above values est_in_em = vp.estim.GMMEst(shape=zshape0,\ zvarmin=1e-6,tune_gmm=True,probc=probc_init,meanc=meanc_init, varc=varc_init,name='GMM input') # No auto-tuning. Set estimators with the true values est_in_oracle = vp.estim.GMMEst(shape=zshape0, probc=probc, meanc=meanc, varc=varc, tune_gmm=False,name='GMM input') Explanation: To evaluate the EM method, we will compare it against an oracle that knows the true density. We thus create two estimators for the prior: one for the oracle that is set to the true GMM parameters with tuning disabled (tune_gmm=False); and one for the EM estimator where the parameters are set to the initial estimators and tuning enabled (tune_gmm=True). End of explanation Aop = vp.trans.MatrixLT(A,zshape0) b = np.zeros(zshape1) map_est = False est_out_em = vp.estim.LinEst(Aop,y,wvar=wvar_init,map_est=map_est,tune_wvar=True, name='Linear+AWGN') est_out_oracle = vp.estim.LinEst(Aop,y,wvar=wvar,map_est=map_est,tune_wvar=False, name='Linear+AWGN') Explanation: We also create two estimators for the likelihood $p(y|z1,wvar)$. For the oracle estimator, the parameter wvar is set to its true value; for the EM estimator it is set to its initial estimate wvar_init. End of explanation # Create the message handler msg_hdl = vp.estim.MsgHdlSimp(map_est=map_est, shape=zshape0) # Create the solver nit = 40 solver = vp.solver.Vamp(est_in_oracle, est_out_oracle,msg_hdl,hist_list=['zhat'],nit=nit) # Run the solver solver.solve() # Get the estimation history zhat_hist = solver.hist_dict['zhat'] nit2 = len(zhat_hist) zpow = np.mean(np.abs(z0)**2) mse_oracle = np.zeros(nit2) for it in range(nit2): zhati = zhat_hist[it] zerr = np.mean(np.abs(zhati-z0)**2) mse_oracle[it] = 10*np.log10(zerr/zpow) # Print final MSE print("Final MSE (oracle) = {0:f} dB".format(mse_oracle[-1])) Explanation: Running the solvers for the oracle and EM case We first run the solver for the oracle case and measure the MSE per iteration. 
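Both runs below report the same error measure. Factoring it into a small helper makes the convention explicit (a sketch; the cells below keep the inline version):
def nmse_db(zhat, z0):
    # normalized MSE in dB: 10*log10( E|zhat - z0|^2 / E|z0|^2 )
    return 10*np.log10(np.mean(np.abs(zhat - z0)**2)/np.mean(np.abs(z0)**2))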
End of explanation # Create the message handler msg_hdl = vp.estim.MsgHdlSimp(map_est=map_est, shape=zshape0) # Create the solver solver = vp.solver.Vamp(est_in_em, est_out_em, msg_hdl,hist_list=['zhat'],nit=nit) # Run the solver solver.solve() # Get the estimation history zhat_hist = solver.hist_dict['zhat'] nit2 = len(zhat_hist) zpow = np.mean(np.abs(z0)**2) mse_em = np.zeros(nit2) for it in range(nit2): zhati = zhat_hist[it] zerr = np.mean(np.abs(zhati-z0)**2) mse_em[it] = 10*np.log10(zerr/zpow) # Print final MSE print("Final MSE (EM) = {0:f} dB".format(mse_em[-1])) Explanation: Next, we run the EM estimator. We see we obtain a similar final MSE. End of explanation t = np.arange(nit2) plt.plot(t,mse_oracle,'o-') plt.plot(t,mse_em,'s-') plt.grid() plt.xlabel('Iteration') plt.ylabel('MSE (dB)') plt.legend(['Oracle', 'EM']) plt.show() Explanation: We plot the two MSEs as a function of the iteration number. End of explanation
12,939
Given the following text description, write Python code to implement the functionality described below step by step Description: Naive Bayes by Chiyuan Zhang This notebook illustrates <a href="http Step1: A helper function is defined to generate samples Step2: Then we train the GNB model with SHOGUN Step3: Run classification over the whole area to generate color regions Step4: Plot figure
Python Code: %matplotlib inline import os SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') import numpy as np import pylab as pl np.random.seed(0) n_train = 300 models = [{'mu': [8, 0], 'sigma': np.array([[np.cos(-np.pi/4),-np.sin(-np.pi/4)], [np.sin(-np.pi/4), np.cos(-np.pi/4)]]).dot(np.diag([1,4]))}, {'mu': [0, 0], 'sigma': np.array([[np.cos(-np.pi/4),-np.sin(-np.pi/4)], [np.sin(-np.pi/4), np.cos(-np.pi/4)]]).dot(np.diag([1,4]))}, {'mu': [-8,0], 'sigma': np.array([[np.cos(-np.pi/4),-np.sin(-np.pi/4)], [np.sin(-np.pi/4), np.cos(-np.pi/4)]]).dot(np.diag([1,4]))}] Explanation: Naive Bayes by Chiyuan Zhang This notebook illustrates <a href="http://en.wikipedia.org/wiki/Multiclass_classification">multiclass</a> learning using <a href="http://en.wikipedia.org/wiki/Naive_Bayes_classifier">Naive Bayes</a> in Shogun. A semi-formal introduction to <a href="http://en.wikipedia.org/wiki/Logistic_regression">Logistic Regression</a> is provided at the end. $$ P\left( Y=k | X = x \right) = \frac{P(X=x|Y=k)P(Y=k)}{P(X=x)} $$ The prediction is then made by $$ y = \operatorname*{argmax}_{k\in{1,\ldots,K}}\; P(Y=k|X=x) $$ Since $P(X=x)$ is a constant factor for all $P(Y=k|X=x)$, $k=1,\ldots,K$, there is no need to compute it. In SHOGUN, GaussianNaiveBayes implements the Naive Bayes algorithm. It is prefixed with "Gaussian" because the probability model for $P(X=x|Y=k)$ for each $k$ is taken to be a multi-variate Gaussian distribution. Furthermore, each dimension of the feature vector $X$ is assumed to be independent. The Naive independence assumption enables us the learn the model by estimating the parameters for each feature dimension independently, thus the whole learning algorithm runs very quickly. And this is also the reason for its name. However, this assumption can be very restrictive. In this demo, we show a simple 2D example. There are 3 linearly separable classes. The scattered points are training samples with colors indicating their labels. The filled area indicate the hypothesis learned by the GaussianNaiveBayes. The training samples are actually generated from three Gaussian distributions. But since the covariance for those Gaussian distributions are not diagonal (i.e. there are rotations), the GNB algorithm cannot handle them properly. 
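To make the decision rule above concrete, here is a compact NumPy-only sketch of the same computation (a hypothetical helper, not Shogun's implementation): for each class it adds the log prior to the sum of per-dimension Gaussian log-likelihoods and takes the argmax.
def gnb_predict(X, means, variances, priors):
    # X: (n_samples, n_features); means, variances: (n_classes, n_features); priors: (n_classes,)
    log_lik = -0.5*(np.log(2*np.pi*variances)[None, :, :]
                    + (X[:, None, :] - means[None, :, :])**2 / variances[None, :, :]).sum(axis=2)
    return np.argmax(np.log(priors)[None, :] + log_lik, axis=1)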
We first init the models for generating samples for this demo: End of explanation def gen_samples(n_samples): X_all = np.zeros((2, 0)) Y_all = np.zeros(0) for i, model in enumerate(models): Y = np.zeros(n_samples) + i+1 X = np.array(model['sigma']).dot(np.random.randn(2, n_samples)) + np.array(model['mu']).reshape((2,1)) X_all = np.hstack((X_all, X)) Y_all = np.hstack((Y_all, Y)) return (X_all, Y_all) Explanation: A helper function is defined to generate samples: End of explanation from shogun import GaussianNaiveBayes from shogun import features from shogun import MulticlassLabels X_train, Y_train = gen_samples(n_train) machine = GaussianNaiveBayes() machine.put('features', features(X_train)) machine.put('labels', MulticlassLabels(Y_train)) machine.train() Explanation: Then we train the GNB model with SHOGUN: End of explanation delta = 0.1 x = np.arange(-20, 20, delta) y = np.arange(-20, 20, delta) X,Y = np.meshgrid(x,y) Z = machine.apply_multiclass(features(np.vstack((X.flatten(), Y.flatten())))).get_labels() Explanation: Run classification over the whole area to generate color regions: End of explanation pl.figure(figsize=(8,5)) pl.contourf(X, Y, Z.reshape(X.shape), np.arange(0, len(models)+1)) pl.scatter(X_train[0,:],X_train[1,:], c=Y_train) pl.axis('off') pl.tight_layout() Explanation: Plot figure: End of explanation
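As an extra sanity check (a sketch reusing the same calls as above), the trained classifier can also be scored on its own training samples:
pred_train = machine.apply_multiclass(features(X_train)).get_labels()
print("training accuracy: %.3f" % np.mean(pred_train == Y_train))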
12,940
Given the following text description, write Python code to implement the functionality described below step by step Description: Below path is a shared directory, swap to own Step1: Replication of 'csv_to_hdf5.py' Original repo used some bizarre tuple method of reading in data to save in a hdf5 file using fuel. The following does the same approach in that module, only using pandas and saving in a bcolz format (w/ training data as example) Step2: The array of long/lat coordinates per trip (row) is read in as a string. The function ast.literal_eval(x) evaluates the string into the expression it represents (safely). This happens below Step3: Split into latitude/longitude Step4: Further Feature Engineering After converting 'csv_to_hdf5.py' functionality to pandas, I saved that array and then simply constructed the rest of the features as specified in the paper using pandas. I didn't bother seeing how the author did it as it was extremely obtuse and involved the fuel module. Step5: The paper discusses how many categorical variables there are per category. The following all check out Step6: Self-explanatory Step7: Quarter hour of the day, i.e. 1 of the 4*24 = 96 quarter hours of the day Step8: Self-explanatory Step9: Target coords are the last in the sequence (final position). If there are no positions, or only 1, then mark as invalid w/ nan in order to drop later Step10: This function creates the continuous inputs, which are the concatened k first and k last coords in a sequence, as discussed in the paper. If there aren't at least 2* k coords excluding the target, then the k first and k last overlap. In this case the sequence (excluding target) is padded at the end with the last coord in the sequence. The paper mentioned they padded front and back but didn't specify in what manner. Also marks any invalid w/ na's Step11: Drop na's Step12: End to end feature transformation Step13: Pre-calculated below on train set Step14: MEANSHIFT Meanshift clustering as performed in the paper Step15: Clustering performed on the targets Step16: Can use the commented out code for a estimate of bandwidth, which causes clustering to converge much quicker. This is not mentioned in the paper but is included in the code. In order to get results similar to the paper's, they manually chose the uncommented bandwidth Step17: This takes some time Step18: This is very close to the number of clusters mentioned in the paper Step19: Formatting Features for Bcolz iterator / garbage Step20: MODEL Load training data and cluster centers Step21: Validation cuts Step22: The equirectangular loss function mentioned in the paper. Note Step23: The following returns a fully-connected model as mentioned in the paper. Takes as input k as defined before, and the cluster centers. Inputs Step24: As mentioned, construction of repeated cluster longs/lats for input Iterator for in memory train pandas dataframe. I did this as opposed to bcolz iterator due to the pre-processing Step25: Of course, k in the model needs to match k from feature construction. We again use 5 as they did in the paper Step26: Paper used SGD opt w/ following paramerters Step27: original Step28: new valid Step29: It works, but it seems to converge unrealistically quick and the loss values are not the same. The paper does not mention what it's using as "error" in it's results. I assume the same equirectangular? Not very clear. The difference in values could be due to the missing Earth-radius factor Kaggle Entry Step30: To-do Step31: hd5f files
Python Code: data_path = "/data/datasets/taxi/" Explanation: Below path is a shared directory, swap to own End of explanation meta = pd.read_csv(data_path+'metaData_taxistandsID_name_GPSlocation.csv', header=0) meta.head() train = pd.read_csv(data_path+'train/train.csv', header=0) train.head() train['ORIGIN_CALL'] = pd.Series(pd.factorize(train['ORIGIN_CALL'])[0]) + 1 train['ORIGIN_STAND']=pd.Series([0 if pd.isnull(x) or x=='' else int(x) for x in train["ORIGIN_STAND"]]) train['TAXI_ID'] = pd.Series(pd.factorize(train['TAXI_ID'])[0]) + 1 train['DAY_TYPE'] = pd.Series([ord(x[0]) - ord('A') for x in train['DAY_TYPE']]) Explanation: Replication of 'csv_to_hdf5.py' Original repo used some bizarre tuple method of reading in data to save in a hdf5 file using fuel. The following does the same approach in that module, only using pandas and saving in a bcolz format (w/ training data as example) End of explanation polyline = pd.Series([ast.literal_eval(x) for x in train['POLYLINE']]) Explanation: The array of long/lat coordinates per trip (row) is read in as a string. The function ast.literal_eval(x) evaluates the string into the expression it represents (safely). This happens below End of explanation train['LATITUDE'] = pd.Series([np.array([point[1] for point in poly],dtype=np.float32) for poly in polyline]) train['LONGITUDE'] = pd.Series([np.array([point[0] for point in poly],dtype=np.float32) for poly in polyline]) utils.save_array(data_path+'train/train.bc', train.as_matrix()) utils.save_array(data_path+'train/meta_train.bc', meta.as_matrix()) Explanation: Split into latitude/longitude End of explanation train = pd.DataFrame(utils.load_array(data_path+'train/train.bc'), columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID', 'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE']) train.head() Explanation: Further Feature Engineering After converting 'csv_to_hdf5.py' functionality to pandas, I saved that array and then simply constructed the rest of the features as specified in the paper using pandas. I didn't bother seeing how the author did it as it was extremely obtuse and involved the fuel module. End of explanation train['ORIGIN_CALL'].max() train['ORIGIN_STAND'].max() train['TAXI_ID'].max() Explanation: The paper discusses how many categorical variables there are per category. The following all check out End of explanation train['DAY_OF_WEEK'] = pd.Series([datetime.datetime.fromtimestamp(t).weekday() for t in train['TIMESTAMP']]) Explanation: Self-explanatory End of explanation train['QUARTER_HOUR'] = pd.Series([int((datetime.datetime.fromtimestamp(t).hour*60 + datetime.datetime.fromtimestamp(t).minute)/15) for t in train['TIMESTAMP']]) Explanation: Quarter hour of the day, i.e. 1 of the 4*24 = 96 quarter hours of the day End of explanation train['WEEK_OF_YEAR'] = pd.Series([datetime.datetime.fromtimestamp(t).isocalendar()[1] for t in train['TIMESTAMP']]) Explanation: Self-explanatory End of explanation train['TARGET'] = pd.Series([[l[1][0][-1], l[1][1][-1]] if len(l[1][0]) > 1 else numpy.nan for l in train[['LONGITUDE','LATITUDE']].iterrows()]) Explanation: Target coords are the last in the sequence (final position). 
If there are no positions, or only 1, then mark as invalid w/ nan in order to drop later End of explanation def start_stop_inputs(k): result = [] for l in train[['LONGITUDE','LATITUDE']].iterrows(): if len(l[1][0]) < 2 or len(l[1][1]) < 2: result.append(numpy.nan) elif len(l[1][0][:-1]) >= 2*k: result.append(numpy.concatenate([l[1][0][0:k],l[1][0][-(k+1):-1],l[1][1][0:k],l[1][1][-(k+1):-1]]).flatten()) else: l1 = numpy.lib.pad(l[1][0][:-1], (0,20-len(l[1][0][:-1])), mode='edge') l2 = numpy.lib.pad(l[1][1][:-1], (0,20-len(l[1][1][:-1])), mode='edge') result.append(numpy.concatenate([l1[0:k],l1[-k:],l2[0:k],l2[-k:]]).flatten()) return pd.Series(result) train['COORD_FEATURES'] = start_stop_inputs(5) train.shape train.dropna().shape Explanation: This function creates the continuous inputs, which are the concatened k first and k last coords in a sequence, as discussed in the paper. If there aren't at least 2* k coords excluding the target, then the k first and k last overlap. In this case the sequence (excluding target) is padded at the end with the last coord in the sequence. The paper mentioned they padded front and back but didn't specify in what manner. Also marks any invalid w/ na's End of explanation train = train.dropna() utils.save_array(data_path+'train/train_features.bc', train.as_matrix()) Explanation: Drop na's End of explanation train = pd.read_csv(data_path+'train/train.csv', header=0) test = pd.read_csv(data_path+'test/test.csv', header=0) def start_stop_inputs(k, data, test): result = [] for l in data[['LONGITUDE','LATITUDE']].iterrows(): if not test: if len(l[1][0]) < 2 or len(l[1][1]) < 2: result.append(np.nan) elif len(l[1][0][:-1]) >= 2*k: result.append(np.concatenate([l[1][0][0:k],l[1][0][-(k+1):-1],l[1][1][0:k],l[1][1][-(k+1):-1]]).flatten()) else: l1 = np.lib.pad(l[1][0][:-1], (0,4*k-len(l[1][0][:-1])), mode='edge') l2 = np.lib.pad(l[1][1][:-1], (0,4*k-len(l[1][1][:-1])), mode='edge') result.append(np.concatenate([l1[0:k],l1[-k:],l2[0:k],l2[-k:]]).flatten()) else: if len(l[1][0]) < 1 or len(l[1][1]) < 1: result.append(np.nan) elif len(l[1][0]) >= 2*k: result.append(np.concatenate([l[1][0][0:k],l[1][0][-k:],l[1][1][0:k],l[1][1][-k:]]).flatten()) else: l1 = np.lib.pad(l[1][0], (0,4*k-len(l[1][0])), mode='edge') l2 = np.lib.pad(l[1][1], (0,4*k-len(l[1][1])), mode='edge') result.append(np.concatenate([l1[0:k],l1[-k:],l2[0:k],l2[-k:]]).flatten()) return pd.Series(result) Explanation: End to end feature transformation End of explanation lat_mean = 41.15731 lat_std = 0.074120656 long_mean = -8.6161413 long_std = 0.057200309 def feature_ext(data, test=False): data['ORIGIN_CALL'] = pd.Series(pd.factorize(data['ORIGIN_CALL'])[0]) + 1 data['ORIGIN_STAND']=pd.Series([0 if pd.isnull(x) or x=='' else int(x) for x in data["ORIGIN_STAND"]]) data['TAXI_ID'] = pd.Series(pd.factorize(data['TAXI_ID'])[0]) + 1 data['DAY_TYPE'] = pd.Series([ord(x[0]) - ord('A') for x in data['DAY_TYPE']]) polyline = pd.Series([ast.literal_eval(x) for x in data['POLYLINE']]) data['LATITUDE'] = pd.Series([np.array([point[1] for point in poly],dtype=np.float32) for poly in polyline]) data['LONGITUDE'] = pd.Series([np.array([point[0] for point in poly],dtype=np.float32) for poly in polyline]) if not test: data['TARGET'] = pd.Series([[l[1][0][-1], l[1][1][-1]] if len(l[1][0]) > 1 else np.nan for l in data[['LONGITUDE','LATITUDE']].iterrows()]) data['LATITUDE'] = pd.Series([(t-lat_mean)/lat_std for t in data['LATITUDE']]) data['LONGITUDE'] = pd.Series([(t-long_mean)/long_std for t in data['LONGITUDE']]) 
data['COORD_FEATURES'] = start_stop_inputs(5, data, test) data['DAY_OF_WEEK'] = pd.Series([datetime.datetime.fromtimestamp(t).weekday() for t in data['TIMESTAMP']]) data['QUARTER_HOUR'] = pd.Series([int((datetime.datetime.fromtimestamp(t).hour*60 + datetime.datetime.fromtimestamp(t).minute)/15) for t in data['TIMESTAMP']]) data['WEEK_OF_YEAR'] = pd.Series([datetime.datetime.fromtimestamp(t).isocalendar()[1] for t in data['TIMESTAMP']]) data = data.dropna() return data train = feature_ext(train) test = feature_ext(test, test=True) test.head() utils.save_array(data_path+'train/train_features.bc', train.as_matrix()) utils.save_array(data_path+'test/test_features.bc', test.as_matrix()) train.head() Explanation: Pre-calculated below on train set End of explanation train = pd.DataFrame(utils.load_array(data_path+'train/train_features.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID', 'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'DAY_OF_WEEK', 'QUARTER_HOUR', "WEEK_OF_YEAR", "TARGET", "COORD_FEATURES"]) Explanation: MEANSHIFT Meanshift clustering as performed in the paper End of explanation y_targ = np.vstack(train["TARGET"].as_matrix()) from sklearn.cluster import MeanShift, estimate_bandwidth Explanation: Clustering performed on the targets End of explanation #bw = estimate_bandwidth(y_targ, quantile=.1, n_samples=1000) bw = 0.001 Explanation: Can use the commented out code for a estimate of bandwidth, which causes clustering to converge much quicker. This is not mentioned in the paper but is included in the code. In order to get results similar to the paper's, they manually chose the uncommented bandwidth End of explanation ms = MeanShift(bandwidth=bw, bin_seeding=True, min_bin_freq=5) ms.fit(y_targ) cluster_centers = ms.cluster_centers_ Explanation: This takes some time End of explanation cluster_centers.shape utils.save_array(data_path+"cluster_centers_bw_001.bc", cluster_centers) Explanation: This is very close to the number of clusters mentioned in the paper End of explanation train = pd.DataFrame(utils.load_array(data_path+'train/train_features.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID', 'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'TARGET', 'COORD_FEATURES', "DAY_OF_WEEK", "QUARTER_HOUR", "WEEK_OF_YEAR"]) cluster_centers = utils.load_array(data_path+"cluster_centers_bw_001.bc") long = np.array([c[0] for c in cluster_centers]) lat = np.array([c[1] for c in cluster_centers]) X_train, X_val = train_test_split(train, test_size=0.2, random_state=42) def get_features(data): return [np.vstack(data['COORD_FEATURES'].as_matrix()), np.vstack(data['ORIGIN_CALL'].as_matrix()), np.vstack(data['TAXI_ID'].as_matrix()), np.vstack(data['ORIGIN_STAND'].as_matrix()), np.vstack(data['QUARTER_HOUR'].as_matrix()), np.vstack(data['DAY_OF_WEEK'].as_matrix()), np.vstack(data['WEEK_OF_YEAR'].as_matrix()), np.array([long for i in range(0,data.shape[0])]), np.array([lat for i in range(0,data.shape[0])])] def get_target(data): return np.vstack(data["TARGET"].as_matrix()) X_train_features = get_features(X_train) X_train_target = get_target(X_train) utils.save_array(data_path+'train/X_train_features.bc', get_features(X_train)) Explanation: Formatting Features for Bcolz iterator / garbage End of explanation train = pd.DataFrame(utils.load_array(data_path+'train/train_features.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID', 'TIMESTAMP', 'DAY_TYPE', 
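Since the fit is slow at this bandwidth, one way to iterate faster while experimenting (a sketch, not something the paper does) is to cluster a random subsample of the targets first and only run the full fit once the settings look reasonable:
idx = np.random.choice(y_targ.shape[0], min(50000, y_targ.shape[0]), replace=False)
ms_quick = MeanShift(bandwidth=bw, bin_seeding=True, min_bin_freq=5)
ms_quick.fit(y_targ[idx])
print(ms_quick.cluster_centers_.shape)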
'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'TARGET', 'COORD_FEATURES', "DAY_OF_WEEK", "QUARTER_HOUR", "WEEK_OF_YEAR"]) Explanation: MODEL Load training data and cluster centers End of explanation cuts = [ 1376503200, # 2013-08-14 18:00 1380616200, # 2013-10-01 08:30 1381167900, # 2013-10-07 17:45 1383364800, # 2013-11-02 04:00 1387722600 # 2013-12-22 14:30 ] print(datetime.datetime.fromtimestamp(1376503200)) train.shape val_indices = [] index = 0 for index, row in train.iterrows(): time = row['TIMESTAMP'] latitude = row['LATITUDE'] for ts in cuts: if time <= ts and time + 15 * (len(latitude) - 1) >= ts: val_indices.append(index) break index += 1 X_valid = train.iloc[val_indices] valid.head() for d in valid['TIMESTAMP']: print(datetime.datetime.fromtimestamp(d)) X_train = train.drop(train.index[[val_indices]]) cluster_centers = utils.load_array(data_path+"/data/cluster_centers_bw_001.bc") long = np.array([c[0] for c in cluster_centers]) lat = np.array([c[1] for c in cluster_centers]) utils.save_array(data_path+'train/X_train.bc', X_train.as_matrix()) utils.save_array(data_path+'valid/X_val.bc', X_valid.as_matrix()) X_train = pd.DataFrame(utils.load_array(data_path+'train/X_train.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID', 'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'TARGET', 'COORD_FEATURES', "DAY_OF_WEEK", "QUARTER_HOUR", "WEEK_OF_YEAR"]) X_val = pd.DataFrame(utils.load_array(data_path+'valid/X_val.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID', 'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'TARGET', 'COORD_FEATURES', "DAY_OF_WEEK", "QUARTER_HOUR", "WEEK_OF_YEAR"]) Explanation: Validation cuts End of explanation def equirectangular_loss(y_true, y_pred): deg2rad = 3.141592653589793 / 180 long_1 = y_true[:,0]*deg2rad long_2 = y_pred[:,0]*deg2rad lat_1 = y_true[:,1]*deg2rad lat_2 = y_pred[:,1]*deg2rad return 6371*K.sqrt(K.square((long_1 - long_2)*K.cos((lat_1 + lat_2)/2.)) +K.square(lat_1 - lat_2)) def embedding_input(name, n_in, n_out, reg): inp = Input(shape=(1,), dtype='int64', name=name) return inp, Embedding(n_in, n_out, input_length=1, W_regularizer=l2(reg))(inp) Explanation: The equirectangular loss function mentioned in the paper. Note: Very important that y[0] is longitude and y[1] is latitude. Omitted the radius of the earth constant "R" as it does not affect minimization and units were not given in the paper. End of explanation def taxi_mlp(k, cluster_centers): shp = cluster_centers.shape[0] nums = Input(shape=(4*k,)) center_longs = Input(shape=(shp,)) center_lats = Input(shape=(shp,)) emb_names = ['client_ID', 'taxi_ID', "stand_ID", "quarter_hour", "day_of_week", "week_of_year"] emb_ins = [57106, 448, 64, 96, 7, 52] emb_outs = [10 for i in range(0,6)] regs = [0 for i in range(0,6)] embs = [embedding_input(e[0], e[1]+1, e[2], e[3]) for e in zip(emb_names, emb_ins, emb_outs, regs)] x = merge([nums] + [Flatten()(e[1]) for e in embs], mode='concat') x = Dense(500, activation='relu')(x) x = Dense(shp, activation='softmax')(x) y = merge([merge([x, center_longs], mode='dot'), merge([x, center_lats], mode='dot')], mode='concat') return Model(input = [nums]+[e[0] for e in embs] + [center_longs, center_lats], output = y) Explanation: The following returns a fully-connected model as mentioned in the paper. Takes as input k as defined before, and the cluster centers. 
Inputs: Embeddings for each category, concatenated w/ the 4*k continous variable representing the first/last k coords as mentioned above. Embeddings have no regularization, as it was not mentioned in paper, though are easily equipped to include. Paper mentions global normalization. Didn't specify exactly how they did that, whether thay did it sequentially or whatnot. I just included a batchnorm layer for the continuous inputs. After concatenation, 1 hidden layer of 500 neurons as called for in paper. Finally, output layer has as many outputs as there are cluster centers, w/ a softmax activation. Call this output P. The prediction is the weighted sum of each cluster center c_i w/ corresponding predicted prob P_i. To facilitate this, dotted output w/ cluster latitudes and longitudes separately. (this happens at variable y), then concatenated into single tensor. NOTE!!: You will see that I have the cluster center coords as inputs. Ideally, This function should store the cluster longs/lats as a constant to be used in the model, but I could not figure out. As a consequence, I pass them in as a repeated input. End of explanation def data_iter(data, batch_size, cluster_centers): long = [c[0] for c in cluster_centers] lat = [c[1] for c in cluster_centers] i = 0 N = data.shape[0] while True: yield ([np.vstack(data['COORD_FEATURES'][i:i+batch_size].as_matrix()), np.vstack(data['ORIGIN_CALL'][i:i+batch_size].as_matrix()), np.vstack(data['TAXI_ID'][i:i+batch_size].as_matrix()), np.vstack(data['ORIGIN_STAND'][i:i+batch_size].as_matrix()), np.vstack(data['QUARTER_HOUR'][i:i+batch_size].as_matrix()), np.vstack(data['DAY_OF_WEEK'][i:i+batch_size].as_matrix()), np.vstack(data['WEEK_OF_YEAR'][i:i+batch_size].as_matrix()), np.array([long for i in range(0,batch_size)]), np.array([lat for i in range(0,batch_size)])], np.vstack(data["TARGET"][i:i+batch_size].as_matrix())) i += batch_size x=Lambda(thing)([x,long,lat]) Explanation: As mentioned, construction of repeated cluster longs/lats for input Iterator for in memory train pandas dataframe. I did this as opposed to bcolz iterator due to the pre-processing End of explanation model = taxi_mlp(5, cluster_centers) Explanation: Of course, k in the model needs to match k from feature construction. 
We again use 5 as they did in the paper End of explanation model.compile(optimizer=SGD(0.01, momentum=0.9), loss=equirectangular_loss, metrics=['mse']) X_train_feat = get_features(X_train) X_train_target = get_target(X_train) X_val_feat = get_features(X_valid) X_val_target = get_target(X_valid) tqdm = TQDMNotebookCallback() checkpoint = ModelCheckpoint(filepath=data_path+'models/tmp/weights.{epoch:03d}.{val_loss:.8f}.hdf5', save_best_only=True) batch_size=256 Explanation: Paper used SGD opt w/ following paramerters End of explanation model.fit(X_train_feat, X_train_target, nb_epoch=1, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0) model.fit(X_train_feat, X_train_target, nb_epoch=30, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0) model = load_model(data_path+'models/weights.0.0799.hdf5', custom_objects={'equirectangular_loss':equirectangular_loss}) model.fit(X_train_feat, X_train_target, nb_epoch=100, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0) model.save(data_path+'models/current_model.hdf5') Explanation: original End of explanation model.fit(X_train_feat, X_train_target, nb_epoch=1, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0) model.fit(X_train_feat, X_train_target, nb_epoch=400, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0) model.save(data_path+'/models/current_model.hdf5') len(X_val_feat[0]) Explanation: new valid End of explanation best_model = load_model(data_path+'models/weights.308.0.03373993.hdf5', custom_objects={'equirectangular_loss':equirectangular_loss}) best_model.evaluate(X_val_feat, X_val_target) test = pd.DataFrame(utils.load_array(data_path+'test/test_features.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID', 'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'COORD_FEATURES', "DAY_OF_WEEK", "QUARTER_HOUR", "WEEK_OF_YEAR"]) test['ORIGIN_CALL'] = pd.read_csv(data_path+'real_origin_call.csv', header=None) test['TAXI_ID'] = pd.read_csv(data_path+'real_taxi_id.csv',header=None) X_test = get_features(test) b = np.sort(X_test[1],axis=None) test_preds = np.round(best_model.predict(X_test), decimals=6) d = {0:test['TRIP_ID'], 1:test_preds[:,1], 2:test_preds[:,0]} kaggle_out = pd.DataFrame(data=d) kaggle_out.to_csv(data_path+'submission.csv', header=['TRIP_ID','LATITUDE', 'LONGITUDE'], index=False) def hdist(a, b): deg2rad = 3.141592653589793 / 180 lat1 = a[:, 1] * deg2rad lon1 = a[:, 0] * deg2rad lat2 = b[:, 1] * deg2rad lon2 = b[:, 0] * deg2rad dlat = abs(lat1-lat2) dlon = abs(lon1-lon2) al = np.sin(dlat/2)**2 + np.cos(lat1) * np.cos(lat2) * (np.sin(dlon/2)**2) d = np.arctan2(np.sqrt(al), np.sqrt(1-al)) hd = 2 * 6371 * d return hd val_preds = best_model.predict(X_val_feat) trn_preds = model.predict(X_train_feat) er = hdist(val_preds, X_val_target) er.mean() K.equal() Explanation: It works, but it seems to converge unrealistically quick and the loss values are not the same. The paper does not mention what it's using as "error" in it's results. I assume the same equirectangular? Not very clear. 
The difference in values could be due to the missing Earth-radius factor Kaggle Entry End of explanation cuts = [ 1376503200, # 2013-08-14 18:00 1380616200, # 2013-10-01 08:30 1381167900, # 2013-10-07 17:45 1383364800, # 2013-11-02 04:00 1387722600 # 2013-12-22 14:30 ] np.any([train['TIMESTAMP'].map(lambda x: x in cuts)]) train['TIMESTAMP'] np.any(train['TIMESTAMP']==1381167900) times = train['TIMESTAMP'].as_matrix() X_train.columns times count = 0 for index, row in X_val.iterrows(): for ts in cuts: time = row['TIMESTAMP'] latitude = row['LATITUDE'] if time <= ts and time + 15 * (len(latitude) - 1) >= ts: count += 1 one = count count + one import h5py h = h5py.File(data_path+'original/data.hdf5', 'r') evrData=h['/Configure:0000/Run:0000/CalibCycle:0000/EvrData::DataV3/NoDetector.0:Evr.0/data'] c = np.load(data_path+'original/arrival-clusters.pkl') Explanation: To-do: simple to extend to validation data Uh oh... training data not representative of test End of explanation from fuel.utils import find_in_data_path from fuel.datasets import H5PYDataset original_path = '/data/bckenstler/data/taxi/original/' train_set = H5PYDataset(original_path+'data.hdf5', which_sets=('train',),load_in_memory=True) valid_set = H5PYDataset(original_path+'valid.hdf5', which_sets=('cuts/test_times_0',),load_in_memory=True) print(train_set.num_examples) print(valid_set.num_examples) data = train_set.data_sources data[0] valid_data = valid_set.data_sources valid_data[4][0] stamps = valid_data[-3] stamps[0] for i in range(0,304): print(np.any([t==int(stamps[i]) for t in X_val['TIMESTAMP']])) type(X_train['TIMESTAMP'][0]) type(stamps[0]) check = [s in stamps for s in X_val['TIMESTAMP']] for s in X_val['TIMESTAMP']: print(datetime.datetime.fromtimestamp(s)) for s in stamps: print(datetime.datetime.fromtimestamp(s)) ids = valid_data[-1] type(ids[0]) ids X_val Explanation: hd5f files End of explanation
12,941
Given the following text description, write Python code to implement the functionality described below step by step Description: Twitter Data Analysis Predict inter-tweet times for a single user An implementation of the "Question", "Model", "Validate" process for data science. Step1: Load Data Step2: Feature Selection For this analysis, we are going to focus on the times intervals between tweets as our major feature. This is captured in the variable timeUntilNext Step3: Explore Data Create initial histogram Step4: From these histograms, we see that the data looks like an exponential distribution. In industry, exponentials are used to model rare events like heart attacks, when a request will come into the file system and so on. $$ y = \frac{1}{\beta} \,e^{\textstyle\frac{-y}{\beta}} $$ What we have to do is find the parameter $\beta$ and the probability density function (pdf) from which the observed data is most likely to have been produced. $X = $ time until next tweet $$f{X = t} = \frac{1}{\beta} \, e^{\textstyle\frac{-t}{\beta}} $$ We are using the $\beta$ as the number of seconds until the person tweets again. Fit Exponential Step5: Covariance Matrix gives one entry for each pair of parameters we are trying to fit for. It gives the number of times $\beta$ changed over many bootstrap samples This number amounts to the variance of that number The higher the value, the more that $\beta$ changed, the less certain we are, about the parameters value Step6: The covariance values for our data is very small, that means we can have strong confidence in the parameter. Evaluate the exponential function Step7: Confidence bounds and data How sure are we that $\beta$ is correct? Well, that depends on the number of datapoints we have! We can quantify the uncertainty of our $\beta$ by using Chebyshev's inequality $ P{ |\bar{X} - \beta^| > \epsilon } \leq 1/4n\epsilon^{2} $, which states that the absolute value of the deviation of our estimated $\beta^$ from the actual value of $\beta$, the probability of that being greater than $\epsilon$ is $1/4n\epsilon^{2}$. If we solve this equation for $n$, you will see that we would need a larger amount of datapoints than the ones we have. As such, lets see if we can get a tighter bound. Luckily we can using Hoeffding's Inequality. It is still subject to $n$, the number of datapoints we have, but it is much better. $ P{ |\bar{X} - \beta^*| > \epsilon } \leq 2e^{-2n \epsilon^{2}} $ Step8: This shows that 50% of the time, we are overestimating our inter-tweet prediction time by around 19 seconds Evaluate absolute difference of values Step9: Observe effect of adding offset Note that much of the histogram occurs before zero. Perhaps by adding an offset, we can improve the accuracy of our model. To do that, we are going to fit a more generalized exponential function. Fit more generalized exponential Ajdusted model $$ y = a \, (\frac{1}{\beta} \, e^{\textstyle\frac{-t}{\beta}}) + c $$ Step10: Evaluate adjusted exponential Step11: Examining the sources of prediction error Depection of variance in data Step12: Obtaining time-to-tweet vs. delta-t data points Step13: Obtain bounds on bootstrapped 95% confidence interval Step14: This graph shows that as time goes on the confidence band increases. This is just another way of saying, that as time goes on, we are more and more uncertain of how long it will be until they tweet next. Any estimator we build using this data will be averaging this spread. Fundamentally, this data suffers from inherent variance. 
Question Step15: Two variables plus gaussian noise To investigate the impact of uncaptured features on prediction, lets extend our test model from $f(x) = 2x + 3 + noise $ to a two feature model $f(x) = 2x + 2y+ noise $ We will not change the training data to reflect two features. Step16: Feature selection Proposed list of features to consider for influence upon intertweet time Step17: Obtain your own data files here! Step18: Capture time of mentions Step19: Mention distance for each @gruber tweet For each gruber tweet, let's find the tweet belonging to either @siracusa or @marcoarment which is the closest, in time, to mention @gruber Step20: Extract remaining features Step21: Evaluate features by mutual information gain Step22: 1) Time of day Step23: 2) Contains mention Step24: 3) Contains URL Step25: 4) Length of tweet Step26: 5) Contains hashtags Step27: 6) Is reply Step28: 7) Mention distance Step29: Fitting a new model Obtain data points - each point is of the form (delta-t, mention distance, text-length of last tweet) - label with time until next tweet kNN Step30: Performance on training set Step31: Performance on test set
Python Code: %pylab inline # Import libraries from __future__ import print_function import scipy import numpy as np import pandas as pd import matplotlib.pyplot as pyplt import seaborn as sns pyplt.rcParams['figure.figsize'] = (4, 3) import datetime from datetime import datetime from datetime import timedelta from datetime import time def extract_features(tweet_list): returnList = [] for tweet in tweet_list: timeOfDay = extractTimeOfDay(tweet) containsMention = extractContainsMention(tweet) containsURL = extractContainsURL(tweet) tweetLength = extractTweetLength(tweet) containsHashtag = extractContainsHashtag(tweet) isReply = extractIsReply(tweet) featureTuple = (timeOfDay, containsMention, containsURL, tweetLength, containsHashtag, isReply) returnList.append(featureTuple) return returnList def extractTimeOfDay(tweet): ''' Returns: 'morning, afternoon, evening, night' ''' returnString = None createdAtStr = None morningLeftEdge = time(11, 0, 0) afternoonLeftEdge = time(17, 0, 0) eveningLeftEdge = time(22, 0, 0) midnight = time(23, 59, 59) nightLeftEdge = time(2, 0, 0) if "created_at" in tweet: createdAtStr = tweet["created_at"] createdDate = datetime.strptime(createdAtStr[:-11], "%a %b %d %H:%M:%S") asTime = time(createdDate.hour, createdDate.minute, createdDate.second) if morningLeftEdge<=asTime and asTime<=afternoonLeftEdge: returnString = "morning" elif afternoonLeftEdge<=asTime and asTime<=eveningLeftEdge: returnString = "afternoon" elif asTime>=eveningLeftEdge and asTime<=midnight: returnString = "evening" elif asTime<=nightLeftEdge: returnString = "evening" elif nightLeftEdge<=asTime and asTime<=morningLeftEdge: returnString = "night" return returnString def extractContainsMention(tweet): returnFloat = 0.0 if "user_mentions" in tweet: numMentions = len(tweet["user_mentions"]) if numMentions>0: returnFloat = 1.0 return returnFloat def extractContainsURL(tweet): returnFloat = 0.0 if "urls" in tweet: if len(tweet["urls"])>0: returnFloat = 1.0 return returnFloat def extractTweetLength(tweet): tweetText = tweet["text"] return len(tweetText) def extractContainsHashtag(tweet): returnFloat = 0.0 if "hashtags" in tweet: if len(tweet["hashtags"])>0: returnFloat = 1.0 return returnFloat def extractIsReply(tweet): returnFloat = 0.0 if "in_reply_to_screen_name" in tweet: if tweet["in_reply_to_screen_name"] is not None: returnFloat = 1.0 return returnFloat Explanation: Twitter Data Analysis Predict inter-tweet times for a single user An implementation of the "Question", "Model", "Validate" process for data science. End of explanation tweetsDF = pd.io.json.read_json("new_gruber_tweets.json") tweetsDF.columns Explanation: Load Data End of explanation createdDF = tweetsDF.ix[0:, ["created_at"]] createdTextDF = tweetsDF.ix[0:, ["created_at", "text"]] createdTextVals = createdTextDF.values #turn it into an array tweetTimes = [] for i,row in createdDF.iterrows(): tweetTimes.append(row["created_at"]) tweetTimes.sort() timeUntilNext = [] for i in xrange(1, len(tweetTimes) - 1): timeDiff = (tweetTimes[i] - tweetTimes[i-1]).seconds timeUntilNext.append(timeDiff) Explanation: Feature Selection For this analysis, we are going to focus on the times intervals between tweets as our major feature. 
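Two details are worth flagging in the loop above: timedelta.seconds returns only the seconds component, so any gap longer than a day wraps around, and the range stops one pair short of the end. With a reasonably recent pandas the same gaps can be computed in one line (a sketch; the rest of the notebook keeps the original list):
gaps = pd.Series(tweetTimes).diff().dropna()
time_until_next = gaps.dt.total_seconds().values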
This is captured in the variable timeUntilNext End of explanation timeToNextSeries = pd.Series(timeUntilNext) pyplt.rcParams['figure.figsize'] = (8, 3) timeToNextSeries.hist(bins=30, normed=True, color="blue", label="Intertweet times") pyplt.legend(loc='upper center', shadow=True, fontsize='x-large') pyplt.xlabel('Seconds until next tweet', fontsize=14); Explanation: Explore Data Create initial histogram End of explanation from scipy.optimize import curve_fit def fitFunc(t, b): return b * numpy.exp(-b * t) count, division = np.histogram(timeUntilNext, bins=100, normed=True) fitParams, fitCov = curve_fit(fitFunc, division[0:len(division)-1], count, p0=1e-4) print ("beta parameter", fitParams, "actual parameter is {} seconds".format(1/fitParams[0])) Explanation: From these histograms, we see that the data looks like an exponential distribution. In industry, exponentials are used to model rare events like heart attacks, when a request will come into the file system and so on. $$ y = \frac{1}{\beta} \,e^{\textstyle\frac{-y}{\beta}} $$ What we have to do is find the parameter $\beta$ and the probability density function (pdf) from which the observed data is most likely to have been produced. $X = $ time until next tweet $$f{X = t} = \frac{1}{\beta} \, e^{\textstyle\frac{-t}{\beta}} $$ We are using the $\beta$ as the number of seconds until the person tweets again. Fit Exponential End of explanation fitCov Explanation: Covariance Matrix gives one entry for each pair of parameters we are trying to fit for. It gives the number of times $\beta$ changed over many bootstrap samples This number amounts to the variance of that number The higher the value, the more that $\beta$ changed, the less certain we are, about the parameters value End of explanation t = division[0:len(division)-1] timeToNextSeries.hist(bins=50, normed=True, color="blue", label="Intertweet times") pyplt.plot(t, fitFunc(t, fitParams[0]), color="yellow", label="Exponential function") pyplt.legend(loc='upper center', shadow=True, fontsize='x-large') pyplt.xlabel('Seconds until next tweet', fontsize=14); Explanation: The covariance values for our data is very small, that means we can have strong confidence in the parameter. Evaluate the exponential function End of explanation exp_diffs = [] # difference between our estimate and the actual for t in timeUntilNext: exp_diffs.append(t - (1/fitParams[0])) #signed difference of the actual from the predicted # signed error pd.Series(exp_diffs).hist(bins=50, label="Errors") pyplt.legend(loc='upper center', shadow=True, fontsize='x-large') pyplt.xlabel('Seconds until next tweet', fontsize=14) pyplt.ylabel('Counts', fontsize=14); pd.Series(exp_diffs).describe() pd.Series(exp_diffs).describe()['50%'] / 60 Explanation: Confidence bounds and data How sure are we that $\beta$ is correct? Well, that depends on the number of datapoints we have! We can quantify the uncertainty of our $\beta$ by using Chebyshev's inequality $ P{ |\bar{X} - \beta^| > \epsilon } \leq 1/4n\epsilon^{2} $, which states that the absolute value of the deviation of our estimated $\beta^$ from the actual value of $\beta$, the probability of that being greater than $\epsilon$ is $1/4n\epsilon^{2}$. If we solve this equation for $n$, you will see that we would need a larger amount of datapoints than the ones we have. As such, lets see if we can get a tighter bound. Luckily we can using Hoeffding's Inequality. It is still subject to $n$, the number of datapoints we have, but it is much better. 
$ P{ |\bar{X} - \beta^*| > \epsilon } \leq 2e^{-2n \epsilon^{2}} $ End of explanation import math exp_diffs = [] abs_diffs = [] for t in timeUntilNext: exp_diffs.append(t - (1/fitParams[0])) abs_diffs.append(math.fabs(t - (1/fitParams[0]))) #signed absolute error pd.Series(abs_diffs).hist(label="Absolute Errors") pyplt.legend(loc='upper center', shadow=True, fontsize='x-large') pyplt.xlabel('Seconds until next tweet', fontsize=14) pyplt.ylabel('Counts', fontsize=14); pd.Series(abs_diffs).describe() Explanation: This shows that 50% of the time, we are overestimating our inter-tweet prediction time by around 19 seconds Evaluate absolute difference of values End of explanation def fitFunc_gen(t, a, b, c): return a * (b) * numpy.exp(-b * t) + c fitParams_gen, fitCov_gen = curve_fit(fitFunc_gen, division[0:len(division)-1], count, p0=[0, 3e-4, 0]) a,b,c = fitParams_gen print ("a = {}\nb = {}\nc = {}\n".format(a, b, c)) print (fitCov_gen) print ("Expectation of t, E[t] = {} seconds\n".format((1 / (fitParams_gen[1])) * fitParams_gen[0] + fitParams_gen[2])) Explanation: Observe effect of adding offset Note that much of the histogram occurs before zero. Perhaps by adding an offset, we can improve the accuracy of our model. To do that, we are going to fit a more generalized exponential function. Fit more generalized exponential Ajdusted model $$ y = a \, (\frac{1}{\beta} \, e^{\textstyle\frac{-t}{\beta}}) + c $$ End of explanation t = division[0:len(division)-1] timeToNextSeries.hist(bins=50, normed=True, color="blue", label="Intertweet times") pyplt.plot(t, fitFunc(t, fitParams[0]), color="yellow", label="Exponential fit") pyplt.plot(t, fitFunc_gen(t, fitParams_gen[0], fitParams_gen[1], fitParams_gen[2]), color="red", label="Gen. exponential fit") pyplt.legend(loc='upper center', shadow=True, fontsize='x-large') pyplt.xlabel('Seconds until next tweet', fontsize=14) pyplt.ylabel('', fontsize=14); exp_gen_diffs = [] exp_gen_abs = [] for t in timeUntilNext: exp_gen_diffs.append((t-1/fitParams_gen[1]) * fitParams_gen[0] + fitParams_gen[1]) exp_gen_abs.append(math.fabs((t-1/fitParams_gen[1]) * fitParams_gen[0] + fitParams_gen[1])) pd.Series(exp_gen_abs).hist(label="Absolute Errors") pyplt.legend(loc='upper center', shadow=True, fontsize='x-large') pyplt.xlabel('Seconds until next tweet', fontsize=14) pyplt.ylabel('Counts', fontsize=14); pd.Series(exp_gen_diffs).describe() pd.Series(exp_gen_abs).describe() Explanation: Evaluate adjusted exponential End of explanation tweetsDF = pd.io.json.read_json("new_gruber_tweets.json") Explanation: Examining the sources of prediction error Depection of variance in data End of explanation step_size = 10 data_points = [] for v in timeUntilNext: bin_left_edges = np.arange(0, v, step_size) for l_edge in bin_left_edges: tempNewPoint = [l_edge, v-l_edge] data_points.append(tempNewPoint) data_points.sort() deltat_100 = [v[1] for v in data_points if v[0]==100] deltat_150 = [v[1] for v in data_points if v[0]==150] deltat_10 = [v[1] for v in data_points if v[0]==10] pd.Series(deltat_10).hist(bins=30, alpha=0.5, color="cyan", label="Intertweet times after 10 sec") pd.Series(deltat_150).hist(bins=30, alpha=0.5, color="red", label="Intertweet times after 150 sec") pyplt.legend(loc='upper center', shadow=True, fontsize='x-large') pyplt.xlabel('Seconds until next tweet', fontsize=14) pyplt.ylabel('Counts', fontsize=14); deltatToStd = [] deltaToDist = [] for i in np.arange(0, 100, 10): tempDeltas = [v[1] for v in data_points if v[0] == i] tempStd = scipy.std(tempDeltas) 
deltatToStd.append([i, tempStd]) deltaToDist.append([i, tempDeltas]) xVals = [v[0] for v in deltatToStd] yVals = [v[1] for v in deltatToStd] scipy.std(deltat_150) _= pyplt.plot(xVals, yVals, label="std dev (sec)") _= pyplt.legend(loc='upper left', fontsize='x-large') _= pyplt.xlabel('Number of seconds elapsed since last tweet', fontsize=14) _= pyplt.ylabel('Number of seconds till next tweet', fontsize=14) Explanation: Obtaining time-to-tweet vs. delta-t data points End of explanation deltaToBounds = [] for v in deltaToDist: topBound = numpy.percentile(v[1], 95) bottomBound = numpy.percentile(v[1], 5) deltaToBounds.append([v[0], (topBound, bottomBound)]) _= pyplt.plot(xVals, [e[1][0] for e in deltaToBounds], color="red") _= pyplt.plot(xVals, [e[1][1] for e in deltaToBounds], color="red") pyplt.fill_between(xVals, [e[1][0] for e in deltaToBounds], [e[1][1] for e in deltaToBounds], alpha=0.4, color="orange") _= pyplt.xlabel('Number of seconds elapsed since last tweet', fontsize=14) _= pyplt.ylabel('Number of seconds till next tweet', fontsize=14) Explanation: Obtain bounds on bootstrapped 95% confidence interval End of explanation dataPoints_1 = [] x = np.arange(0, 100, 10) for j in xrange(100): points = [(i, i*2 + 3 + numpy.random.normal(scale=50.0)) for i in x] dataPoints_1.extend(points) pointToVals = [] pointToBounds = [] for i in np.arange(0, 100, 10): valsForDataPoint = [v for v in dataPoints_1 if v[0]==i] pointToVals.append(valsForDataPoint) upperBound = numpy.percentile(valsForDataPoint, 95) lowerBound = numpy.percentile(valsForDataPoint, 5) pointToBounds.append([i, (upperBound, lowerBound)]) pyplt.plot(x, [v[1][0] for v in pointToBounds]) pyplt.plot(x, [v[1][1] for v in pointToBounds]) pyplt.plot(x, [[i*2+3] for i in x], color="red", label="true model") pyplt.fill_between(x, [v[1][0] for v in pointToBounds], [v[1][1] for v in pointToBounds], color="orange", alpha=0.4) pyplt.legend(loc=2, prop={'size':18}) pyplt.xlabel('Number of seconds elapsed since last tweet', fontsize=10); Explanation: This graph shows that as time goes on the confidence band increases. This is just another way of saying, that as time goes on, we are more and more uncertain of how long it will be until they tweet next. Any estimator we build using this data will be averaging this spread. Fundamentally, this data suffers from inherent variance. Question: What is the cause of the spread we see in the data? There are two possible causes of this spread It could be because the underlying process that generates the data is random. If that is the case, the best we can do is use our exponential function to model it. On the other hand, the spread could be caused because of factors we have not modeled. i.e., uncaptured features. If it is 2) we can reduce the effect of noise in our data by finding the factors on which inter-tweet time depends. 
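A quick way to relate the single-parameter exponential to this picture (a sketch): a memoryless model predicts that the mean remaining wait does not depend on how long we have already waited, so printing the per-slice means collected above is a fast check of that assumption:
for elapsed, remaining in deltaToDist:
    print("{}s elapsed -> mean remaining {:.0f}s".format(elapsed, np.mean(remaining)))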
Impact of unmeasured features One variable plus gaussian noise End of explanation dataPoints_2 = [] x = np.arange(0, 100, 10) y = np.arange(50, 150, 10) for j in xrange(100): yVal = random.choice(y) points = [(i, i*2 + yVal*2 + 3 + numpy.random.normal(scale=50.0)) for i in x] dataPoints_2.extend(points) avgYAtX = {} for i in x: lineVals = [(i*2 + yVal*2 + 3 ) for yVal in np.arange(50, 150, 10)] avgY = reduce(lambda x,y: x+y, lineVals)/len(lineVals) avgYAtX[i] = avgY pointToVals = [] pointToBounds = [] for i in np.arange(0, 100, 10): valsForDataPoint = [v for v in dataPoints_2 if v[0]==i] pointToVals.append(valsForDataPoint) upperBound = numpy.percentile(valsForDataPoint, 95) lowerBound = numpy.percentile(valsForDataPoint, 5) pointToBounds.append([i, (upperBound, lowerBound)]) pyplt.plot(x, [v[1][0] for v in pointToBounds]) pyplt.plot(x, [v[1][1] for v in pointToBounds]) pyplt.plot(x, [avgYAtX[i] for i in x], color="red", label="true model (best estimate)") pyplt.fill_between(x, [v[1][0] for v in pointToBounds], [v[1][1] for v in pointToBounds], color="orange", alpha=0.4) pyplt.legend(loc=2, prop={'size':18}) pyplt.xlabel('Number of seconds elapsed since last tweet', fontsize=10); Explanation: Two variables plus gaussian noise To investigate the impact of uncaptured features on prediction, lets extend our test model from $f(x) = 2x + 3 + noise $ to a two feature model $f(x) = 2x + 2y+ noise $ We will not change the training data to reflect two features. End of explanation import twitter_tools from twitter_tools import * import json Explanation: Feature selection Proposed list of features to consider for influence upon intertweet time: - mention distance - time of day - contains mention - contains URL - length of tweet (num chars) - contains hashtags - is_reply We now need to form the features described above. The most non-trivial one to create is "mention distance", as defined in lecture. We do this now. Create mention distance End of explanation tweetFile = open("new_gruber_tweets.json") jsonFile = json.load(tweetFile) tweetFile.close() gruberTweetsDF = pandas.io.json.read_json("new_gruber_tweets.json") siracusaTweetsDF = pandas.io.json.read_json("new_siracusa_tweets.json") armentTweetsDF = pandas.io.json.read_json("new_arment_tweets.json") gruberTimeDiffs = [] gruberTweetTimes = [] gruberTimeToDiff = {} gruberTimeToText = {} siracusaMentionTimes = [] armentMentionTimes = [] gruberCreatedDF = gruberTweetsDF.ix[0:, ["created_at"]] gruberCreatedTextDF = gruberTweetsDF.ix[0:, ["created_at", "text"]] createdTextVals = gruberCreatedTextDF.values for i, row in gruberCreatedDF.iterrows(): gruberTweetTimes.append(row["created_at"]) gruberTweetTimes.sort() for i in xrange(1, len(gruberTweetTimes)): timeDiff = (gruberTweetTimes[i]-gruberTweetTimes[i-1]).seconds gruberTimeDiffs.append(timeDiff) gruberTimeToDiff[gruberTweetTimes[i]] = timeDiff gruberTimeToText[gruberTweetTimes[i]] = gruberCreatedTextDF[gruberCreatedTextDF["created_at"]==gruberTweetTimes[i]] Explanation: Obtain your own data files here! 
End of explanation nearestMentionToTimeDiff = [] tweetIndexToNearestMention = {} def findTweetFollowingTime(timeStamp, tweetTimes): returnTweetTime = None for t in tweetTimes: if t>timeStamp: returnTweetTime = t break return returnTweetTime def findTweetPreceedingTime(timeStamp, tweetTimes): returnTweetTime = None i = len(tweetTimes)-1 while i>=0: t = tweetTimes[i] if t<timeStamp: returnTweetTime = t break i-=1 return returnTweetTime siracusaTimeOfGruberMentions = [] armentTimeOfGruberMentions = [] for i, row in armentTweetsDF.iterrows(): if "user_mentions" in row: if type(row["user_mentions"]) == list: if len([e for e in row["user_mentions"] if e["screen_name"]=="gruber"])>0: armentTimeOfGruberMentions.append(row["created_at"]) for i, row in siracusaTweetsDF.iterrows(): if "user_mentions" in row: if type(row["user_mentions"]) == list: if len([e for e in row["user_mentions"] if e["screen_name"]=="gruber"])>0: siracusaTimeOfGruberMentions.append(row["created_at"]) Explanation: Capture time of mentions End of explanation gruberTweetTimes.sort() siracusaTimeOfGruberMentions.sort() armentTimeOfGruberMentions.sort() for i in xrange(len(gruberTweetTimes)): t = gruberTweetTimes[i] t_next = None if i+1<len(gruberTweetTimes): t_next = gruberTweetTimes[i+1] #print "t_next: %s" % t_next t_s = findTweetFollowingTime(t, siracusaTimeOfGruberMentions) t_s_prev = findTweetPreceedingTime(t, siracusaTimeOfGruberMentions) #print "t_s: %s" % t_s t_a = findTweetFollowingTime(t, armentTimeOfGruberMentions) t_a_prev = findTweetPreceedingTime(t, armentTimeOfGruberMentions) sDiff = None aDiff = None if t_s_prev is not None and t_s is not None: sDiff = math.fabs((t_s - t).seconds) if sDiff >math.fabs((t-t_s_prev).seconds): sDiff = math.fabs((t-t_s_prev).seconds) if t_a_prev is not None and t_a is not None: aDiff = math.fabs((t_a - t).seconds) if aDiff > math.fabs((t-t_a_prev).seconds): aDiff = math.fabs((t - t_a_prev).seconds) closestMention = None if sDiff is not None: closestMention = sDiff elif aDiff is not None: closestMention = aDiff if aDiff is not None and sDiff is not None: if aDiff < sDiff: closestMention = aDiff if closestMention is not None: nearestMentionToTimeDiff.append((closestMention, (t_next-t).seconds)) tweetIndexToNearestMention[i] = closestMention Explanation: Mention distance for each @gruber tweet For each gruber tweet, let's find the tweet belonging to either @siracusa or @marcoarment which is the closest, in time, to mention @gruber End of explanation features_list = extract_features(jsonFile) featuresWithLabel = [] for i in range(len(gruberTimeDiffs)): timeDiff = gruberTimeDiffs[i] if timeDiff<4000: label = "short" else: label = "long" featuresForTweet = features_list[i] nearestMention = 0 if i in tweetIndexToNearestMention: nearestMention = tweetIndexToNearestMention[i] completeItem = [] completeItem.append(label) completeItem.extend(list(featuresForTweet)) completeItem.append(nearestMention) featuresWithLabel.append(completeItem) Explanation: Extract remaining features End of explanation from info_gain import * Explanation: Evaluate features by mutual information gain End of explanation valsY = ["short", "long"] binsY = None joint_list = [(v[1], v[0]) for v in featuresWithLabel] valsX = ["morning", "afternoon", "evening", "night"] binsX = None jpTable = compute_joint_prob(joint_list, valsX, valsY, None, None) print "Entropy loss for Time of Day feature is {}".format(entropy_loss(jpTable, valsX, valsY)) Explanation: 1) Time of day End of explanation joint_list = [(v[2], v[0]) for v in 
featuresWithLabel] valsX = None binsX = [[0, .9], [1.0, 100]] jpTable = compute_joint_prob(joint_list, valsX, valsY, bins1=binsX) print "Entropy loss for Contain Mention feature is {}".format(entropy_loss(jpTable, [0.0, 1.0], valsY)) Explanation: 2) Contains mention End of explanation joint_list = [(v[3], v[0]) for v in featuresWithLabel] valsX = None binsX = [[0, .9], [1.0, 100]] jpTable = compute_joint_prob(joint_list, valsX, valsY, bins1=binsX) print "Entropy loss for Contains URL feature is {}".format(entropy_loss(jpTable, [0.0, 1.0], valsY)) Explanation: 3) Contains URL End of explanation joint_list = [(v[4], v[0]) for v in featuresWithLabel] valsX = None binsX = [[0, 14], [14, 28], [28, 42], [42, 56], [56, 70], [70, 84], [84, 98], [98, 112], [112, 126]] jpTable = compute_joint_prob(joint_list, valsX, valsY, bins1=binsX) print "Entropy loss for Contains URL feature is {}".format(entropy_loss(jpTable, [v[4] for v in featuresWithLabel], valsY)) Explanation: 4) Length of tweet End of explanation joint_list = [(v[5], v[0]) for v in featuresWithLabel] valsX = None binsX = [[0, .9], [1.0, 100]] jpTable = compute_joint_prob(joint_list, valsX, valsY, bins1=binsX) print "Entropy loss for Contains Hashtags feature is {}".format(entropy_loss(jpTable, [0.0, 1.0], valsY)) Explanation: 5) Contains hashtags End of explanation joint_list = [(v[6], v[0]) for v in featuresWithLabel] valsX = None binsX = [[0, .9], [1.0, 100]] jpTable = compute_joint_prob(joint_list, valsX, valsY, bins1=binsX) print "Entropy loss for Contains Hashtags feature is {}".format(entropy_loss(jpTable, [0.0, 1.0], valsY)) Explanation: 6) Is reply End of explanation joint_list = [(v[7], v[0]) for v in featuresWithLabel] valsX = None binsX = [[0, 1000], [1000, 5000], [5000, 8000], [8000, 10000], [10000, 20000], [20000, 30000], [30000, 60000], [60000, 80000], [80000, 10000], [100000, 800000]] jpTable = compute_joint_prob(joint_list, valsX, valsY, bins1=binsX) print "Entropy loss for Contains Hashtags feature is {}".format(entropy_loss(jpTable, [0.0, 1.0], valsY)) Explanation: 7) Mention distance End of explanation # Form data points to be used with kNN model step_size = 10 knn_data_points = [] tweet_index = 0 for v in timeUntilNext: bin_left_edges = np.arange(0, v, step_size) features_for_tweet = features_list[tweet_index] if tweet_index in tweetIndexToNearestMention: for l_edge in bin_left_edges: newDeltaT = l_edge mentionDist = tweetIndexToNearestMention[tweet_index] + newDeltaT textLength = features_for_tweet[3] label = v-l_edge newPoint = [newDeltaT, mentionDist, textLength, label] knn_data_points.append(newPoint) tweet_index+=1 import sklearn from sklearn.neighbors import KNeighborsRegressor knn = KNeighborsRegressor(5) knn_3 = KNeighborsRegressor(3) print len(knn_data_points) print .70*len(knn_data_points) print .30*len(knn_data_points) import random trainingPoints = [random.choice(knn_data_points) for i in xrange(966880)] trainingX = [(v[0], v[1], v[2]) for v in trainingPoints] trainingY = [v[3] for v in trainingPoints] m_3 = knn_3.fit(trainingX, trainingY) m = knn.fit(trainingX, trainingY) Explanation: Fitting a new model Obtain data points - each point is of the form (delta-t, mention distance, text-length of last tweet) - label with time until next tweet kNN End of explanation y_ = m.predict(trainingX) y = m_3.predict(trainingX) train_diffs = [] for i in xrange(len(trainingY)): train_diffs.append(trainingY[i] - y_[i]) pandas.Series([math.fabs(x) for x in train_diffs]).describe() _= pandas.Series(train_diffs).hist(bins=50, 
color="red", label="Performance on training set") _= pyplt.xlabel('', fontsize=14) _= pyplt.ylabel('Counts', fontsize=14) _= pyplt.legend(loc=2, prop={'size':18}) Explanation: Performance on training set End of explanation testPoints = [random.choice(knn_data_points) for i in xrange(414377)] testX = [(v[0], v[1], v[2]) for v in testPoints] testY = [v[3] for v in testPoints] test_pred = m_3.predict(testX) test_diffs = [] for i in xrange(len(testY)): test_diffs.append(math.fabs(testY[i] - test_pred[i])) pandas.Series(test_diffs).describe() testSeries = pandas.Series(test_diffs) _=testSeries.hist(bins=50, label="Performance on training set") _= pyplt.xlabel('Seconds until next tweet', fontsize=14) _= pyplt.ylabel('Counts', fontsize=14) _= pyplt.legend(loc=2, prop={'size':18}) Explanation: Performance on test set End of explanation
12,942
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: Sklearn Principal Component Analysis - PCA Example
Python Code::
import pandas as pd
from sklearn.decomposition import PCA

# Step 1: Initialise and fit PCA for 4 dimensions
pca = PCA(n_components=4)
pca.fit(X_train)

# Step 2: Transform data
X_train = pd.DataFrame(pca.transform(X_train))
X_test = pd.DataFrame(pca.transform(X_test))

# Step 3: Print out explained variance ratio
print(pca.explained_variance_ratio_)
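A variant worth knowing about: instead of hard-coding four components, scikit-learn's PCA accepts a float n_components, in which case it keeps just enough components to explain that fraction of the variance. The sketch below assumes the same X_train as above.
import numpy as np
from sklearn.decomposition import PCA

# Keep enough components to explain ~95% of the variance
pca_95 = PCA(n_components=0.95)
pca_95.fit(X_train)
print(pca_95.n_components_)                         # number of components kept
print(np.cumsum(pca_95.explained_variance_ratio_))  # cumulative variance curve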
12,943
Given the following text description, write Python code to implement the functionality described below step by step Description: Files Python uses file objects to interact with external files on your computer. These file objects can be any sort of file you have on your computer, whether it be an audio file, a text file, emails, Excel documents, etc. Note Step1: Python Opening a file We can open a file with the open() function. The open function also takes in arguments (also called parameters). Lets see how this is used Step2: This happens because you can imagine the reading "cursor" is at the end of the file after having read it. So there is nothing left to read. We can reset the "cursor" like this Step3: In order to not have to reset every time, we can also use the readlines method. Use caution with large files, since everything will be held in memory. We will learn how to iterate over large files later in the course. Step4: Writing to a File By default, using the open() function will only allow us to read the file, we need to pass the argument 'w' to write over the file. For example Step5: Iterating through a File Lets get a quick preview of a for loop by iterating over a text file. First let's make a new text file with some iPython Magic Step6: Now we can use a little bit of flow to tell the program to for through every line of the file and do something Step7: Don't worry about fully understanding this yet, for loops are coming up soon. But we'll break down what we did above. We said that for every line in this text file, go ahead and print that line. Its important to note a few things here
Python Code: %%writefile test.txt Hello, this is a quick test file Explanation: Files Python uses file objects to interact with external files on your computer. These file objects can be any sort of file you have on your computer, whether it be an audio file, a text file, emails, Excel documents, etc. Note: You will probably need to install certain libraries or modules to interact with those various file types, but they are easily available. (We will cover downloading modules later on in the course). Python has a built-in open function that allows us to open and play with basic file types. First we will need a file though. We're going to use some iPython magic to create a text file! iPython Writing a File End of explanation # Open the text.txt we made earlier my_file = open('test.txt') # We can now read the file my_file.read() # But what happens if we try to read it again? my_file.read() Explanation: Python Opening a file We can open a file with the open() function. The open function also takes in arguments (also called parameters). Lets see how this is used: End of explanation # Seek to the start of file (index 0) my_file.seek(0) # Now read again my_file.read() Explanation: This happens because you can imagine the reading "cursor" is at the end of the file after having read it. So there is nothing left to read. We can reset the "cursor" like this: End of explanation # Readlines returns a list of the lines in the file. my_file.readlines() Explanation: In order to not have to reset every time, we can also use the readlines method. Use caution with large files, since everything will be held in memory. We will learn how to iterate over large files later in the course. End of explanation # Add a second argument to the function, 'w' which stands for write my_file = open('test.txt','w+') # Write to the file my_file.write('This is a new line') # Read the file my_file.read() Explanation: Writing to a File By default, using the open() function will only allow us to read the file, we need to pass the argument 'w' to write over the file. For example: End of explanation %%writefile test.txt First Line Second Line Explanation: Iterating through a File Lets get a quick preview of a for loop by iterating over a text file. First let's make a new text file with some iPython Magic: End of explanation for line in open('test.txt'): print line Explanation: Now we can use a little bit of flow to tell the program to for through every line of the file and do something: End of explanation # Pertaining to the first point above for asdf in open('test.txt'): print asdf Explanation: Don't worry about fully understanding this yet, for loops are coming up soon. But we'll break down what we did above. We said that for every line in this text file, go ahead and print that line. Its important to note a few things here: 1.) We could have called the 'line' object anything (see example below). 2.) By not calling .read() on the file, the whole text file was not stored in memory. 3.) Notice the indent on the second line for print. This whitespace is required in Python. We'll learn a lot more about this later, but up next: Sets and Booleans! End of explanation
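One habit worth layering on top of the lesson above (this aside is not part of the original notebook): a file opened with open() stays open until it is closed, so either call close() explicitly or use a with block, which closes the file automatically even if an error occurs. A minimal sketch reusing the test.txt created above:
# Close explicitly when done...
my_file = open('test.txt')
print(my_file.read())
my_file.close()

# ...or let a `with` block handle the closing automatically
with open('test.txt') as my_file:
    for line in my_file:
        print(line)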
12,944
Given the following text description, write Python code to implement the functionality described below step by step Description: Extending Development Patterns with Tails Getting Started This tutorial focuses on extending the developent patterns beyond the tail. Note that a lot of the examples shown here might not be applicable in a real world scenario, and is only meant to demonstrate some of the functionalities included in the package. The user should always exercise their best actuarial judgement, and follow any applicable laws, the Code of Professional Conduct, and applicable Actuarial Standards of Practice. Be sure to make sure your packages are updated. For more info on how to update your pakages, visit Keeping Packages Updated. Step1: Basic Tail Fitting Tails are another class of transformers. Similar to the Development estimator, they come with fit, transform and fit_transform methods. Also, like our Development estimator, you can define a tail in the absence of data or if you believe development will continue beyond your latest evaluation period. Step2: Upon fitting data, we get updated ldf_ and cdf_ attributes that extend beyond the length of the triangle. Notice how the tail includes extra development periods (age 147) beyond the end of the triangle (age 135) at which point an age-to-ultimate tail factor is applied. Step3: These extra twelve months (147 - 135, or one year) of development patterns are included as it is typical to want to track IBNR run-off over a 1-year time horizon from the valuation date. The one-year extension is currently fixed at one year and there is no ability to extend it even further. However, a subsequent version of chainladder will look to address this issue. Curve Fitting Curve fitting takes selected development patterns and extrapolates them using either an exponential or inverse_power fit. In most cases, the inverse_power produces a thicker (more conservative) tail. Step4: When fitting a tail, by default, all of the data will be used; however, we can specify which period of development patterns we want to begin including in the curve fitting process with fit_period. Patterns will also be generated for 100 periods beyond the end of the triangle by default, or we can specify how far beyond the triangle to project the tail factor to before dropping the age-to-age factor down to 1.0 using extrap_periods. Note that even though we can extrapolate the curve many years beyond the end of the triangle for computational purposes, the resultant development factors will compress all ldf_ beyond one year into a single age-ultimate factor. Step5: In this example, we ignore the first five development patterns for curve fitting, and we allow our tail extrapolation to go 50 quarters beyond the end of the triangle. Note that both fit_period and extrap_periods follow the development_grain of the triangle being fit. Chaining Multiple Transformers It is very common to need to get the development factors first then apply a tail curve to extend our development pattern. chainladder transformers take Triangle objects as inputs, but the returned objects are also Triangle objects with their transform method. To chain multiple transformers together, we must invoke the transform method on each transformer similar to how sklearn approaches its own tranformers. Step6: We can also invoke the methods without chaining the operations together. Step7: Chaining multiple transformers together is a very common pattern in chainladder. 
Like its inspiration sklearn, we can create an overall estimator known as a Pipeline that combines multiple transformers and optional predictors in one estimator. Step8: Pipeline keeps references to each step with its named_steps argument.
Python Code: # Black linter, optional %load_ext lab_black import pandas as pd import numpy as np import chainladder as cl import os print("pandas: " + pd.__version__) print("numpy: " + np.__version__) print("chainladder: " + cl.__version__) Explanation: Extending Development Patterns with Tails Getting Started This tutorial focuses on extending the developent patterns beyond the tail. Note that a lot of the examples shown here might not be applicable in a real world scenario, and is only meant to demonstrate some of the functionalities included in the package. The user should always exercise their best actuarial judgement, and follow any applicable laws, the Code of Professional Conduct, and applicable Actuarial Standards of Practice. Be sure to make sure your packages are updated. For more info on how to update your pakages, visit Keeping Packages Updated. End of explanation quarterly = cl.load_sample("quarterly") quarterly["paid"] Explanation: Basic Tail Fitting Tails are another class of transformers. Similar to the Development estimator, they come with fit, transform and fit_transform methods. Also, like our Development estimator, you can define a tail in the absence of data or if you believe development will continue beyond your latest evaluation period. End of explanation tail = cl.TailCurve() tail.fit(quarterly) print("Triangle latest", quarterly.development.max()) tail.fit(quarterly).ldf_["paid"] Explanation: Upon fitting data, we get updated ldf_ and cdf_ attributes that extend beyond the length of the triangle. Notice how the tail includes extra development periods (age 147) beyond the end of the triangle (age 135) at which point an age-to-ultimate tail factor is applied. End of explanation exp = cl.TailCurve(curve="exponential").fit(quarterly["paid"]) exp.tail_ inv = cl.TailCurve(curve="inverse_power").fit(quarterly["paid"]) inv.tail_ Explanation: These extra twelve months (147 - 135, or one year) of development patterns are included as it is typical to want to track IBNR run-off over a 1-year time horizon from the valuation date. The one-year extension is currently fixed at one year and there is no ability to extend it even further. However, a subsequent version of chainladder will look to address this issue. Curve Fitting Curve fitting takes selected development patterns and extrapolates them using either an exponential or inverse_power fit. In most cases, the inverse_power produces a thicker (more conservative) tail. End of explanation quarterly["incurred"] cl.TailCurve(fit_period=(12, None), extrap_periods=50).fit(quarterly).ldf_["incurred"] cl.TailCurve(fit_period=(1, None), extrap_periods=50).fit(quarterly).ldf_["incurred"] Explanation: When fitting a tail, by default, all of the data will be used; however, we can specify which period of development patterns we want to begin including in the curve fitting process with fit_period. Patterns will also be generated for 100 periods beyond the end of the triangle by default, or we can specify how far beyond the triangle to project the tail factor to before dropping the age-to-age factor down to 1.0 using extrap_periods. Note that even though we can extrapolate the curve many years beyond the end of the triangle for computational purposes, the resultant development factors will compress all ldf_ beyond one year into a single age-ultimate factor. 
End of explanation print("First attempt:") try: cl.TailCurve().fit(cl.Development().fit(quarterly)) print("This doesn't pass") except: print("This fails because we did not transform the triangle") print("\nSecond attempt:") try: cl.TailCurve().fit(cl.Development().fit_transform(quarterly)) print("This passes because we transformed the triangle") except: print("This doesn't fail") Explanation: In this example, we ignore the first five development patterns for curve fitting, and we allow our tail extrapolation to go 50 quarters beyond the end of the triangle. Note that both fit_period and extrap_periods follow the development_grain of the triangle being fit. Chaining Multiple Transformers It is very common to need to get the development factors first then apply a tail curve to extend our development pattern. chainladder transformers take Triangle objects as inputs, but the returned objects are also Triangle objects with their transform method. To chain multiple transformers together, we must invoke the transform method on each transformer similar to how sklearn approaches its own tranformers. End of explanation dev = cl.Development().fit_transform(quarterly) tail = cl.TailCurve().fit(dev) tail.cdf_["paid"] Explanation: We can also invoke the methods without chaining the operations together. End of explanation sequence = [ ("simple_dev", cl.Development(average="simple")), ("inverse_power_tail", cl.TailCurve(curve="inverse_power")), ] pipe = cl.Pipeline(steps=sequence).fit(quarterly) Explanation: Chaining multiple transformers together is a very common pattern in chainladder. Like its inspiration sklearn, we can create an overall estimator known as a Pipeline that combines multiple transformers and optional predictors in one estimator. End of explanation print(pipe.named_steps.simple_dev) print(pipe.named_steps.inverse_power_tail) Explanation: Pipeline keeps references to each step with its named_steps argument. End of explanation
12,945
Given the following text description, write Python code to implement the functionality described below step by step Description: Beyond Least Squares Measuring the size of the error with different norms We define the error as \begin{eqnarray} e = y - Aw \end{eqnarray} Least Squares measures the Euclidian norm of the error \begin{eqnarray} E(w) = \frac{1}{2}e^\top e = \frac{1}{2} \|e\|_2^2 \end{eqnarray} here \begin{eqnarray} \|e\|_2 & = & \left(e_1^2 + e_2^2 + \dots + e_N^2\right)^{\frac{1}{2}} \end{eqnarray} Another possibility is measuring the error with other norms, such as the absolute error \begin{eqnarray} \|e\|_1 & = & \left|e_1\right| + \left|e_2\right| + \dots + \left|e_N\right| \end{eqnarray} and more general $p$ norms \begin{eqnarray} \|e\|_p & = & \left( \left|e_1\right|^p + \left|e_2\right|^p + \dots + \left|e_N\right|^p\right)^{\frac{1}{p}} \end{eqnarray} Regularization Measuring the size of the parameter vector The idea is introducing a penalty for the parameter values $w$ that are away from the origin $$ F_{(2,2)}(w) = \| y - Aw \|_2^2 + \lambda \| w \|_2^2 $$ Lasso penalty $$ F_{(2,1)}(w) = \| y - Aw \|_2^2 + \lambda \| w \|_1 $$ $\ell_1$ Cost function $$ F_{(1,1)}(w) = \| y - Aw \|_1 + \lambda \| w \|_1 $$ A genaral mixed norm $$ F_{(p,q)}(w) = \| y - Aw \|_p^p + \lambda \| w \|_q^q $$ Aside A norm $\|\cdot\| Step1: Overcomplete Representations and regularization Suppose we are given a point $y$ in two dimensional space and want to represent this point with a linear combination of $N$ vectors $a_i$ for $i=1\dots N$, where $N>2$. We let $A$ be the $2 \times N$ matrix $$ A = [a_1, a_2,\dots, a_N] $$ To represent $y$, we need to solve the set of equations $$y = Aw$$ Clearly, there could be more than one solution, in fact in general there are an infinite number of solutions $w^*$ to this problem, so minimization of the error is not sufficient. To find a particular solution we may require an additional property from the solution $w^*$, such as having a small norm $\|w\|$. To achieve this, we can try to minimize $$ E_{(2,2)}(w) = \| y - Aw \|_2^2 + \lambda \| w \|_2^2 $$ Step2: Below, you can experiment by selecting different norms, $$ E_{(p,q)}(w) = \| y - Aw \|_p^p + \lambda \| w \|_q^q $$ Step3: Outlier detection with Basis Regression Set up some data with outliers Step4: A model for outlier detection A design matrix with a union of bases * Smooth low order polynomials for regular behavior $A_{\text smooth}$ * Spikes at every time point to model outliers $A_{\text spike}$ $$ y \approx A_{\text smooth} w_{\text smooth} + A_{\text spike} w_{\text spike} $$ Leads to an interpretable decomposition in terms of actual signal and noise Write in the generic form $y = Aw$ by $$ A = \left[A_{\text smooth}\; A_{\text spike}\right] \left( \begin{array}{c} w_{\text smooth} \ w_{\text spike} \end{array} \right) $$ Minimize the following objective $$ E(w_{\text smooth}, w_{\text spike}) = \|y - A_{\text smooth} w_{\text smooth} - A_{\text spike} w_{\text spike} \|{2}^2 + \lambda \|w{\text spike}\|_1 $$ Step5: Changepoint detection Step6: Feature selection Step7: Well log data
Python Code: # A toy data set with outliers x = np.matrix('[0,1,2,3,4,5]').T y = np.matrix('[2,4,6,-1,10,12]').T # Degree of the fitted polynomial degree = 1 N = len(x) A = np.hstack((np.power(x,i) for i in range(degree+1))) xx = np.matrix(np.arange(-1,6,0.1)).T A2 = np.hstack((np.power(xx,i) for i in range(degree+1))) # Norm parameter for p in np.arange(1,5,0.5): # Construct the problem. w = cvx.Variable(degree+1) objective = cvx.Minimize(cvx.norm(A*w - y, p)) #constraints = [0 <= x, x <= 10] #prob = Problem(objective, constraints) prob = cvx.Problem(objective) # The optimal objective is returned by prob.solve(). result = prob.solve() # The optimal value for x is stored in x.value. print(w.value) # The optimal Lagrange multiplier for a constraint # is stored in constraint.dual_value. #print(constraints[0].dual_value) plt.figure() plt.plot(x.T.tolist(), y.T.tolist(), 'o') plt.plot(xx, A2*w.value, '-') plt.title('p = '+str(p)) plt.show() Explanation: Beyond Least Squares Measuring the size of the error with different norms We define the error as \begin{eqnarray} e = y - Aw \end{eqnarray} Least Squares measures the Euclidian norm of the error \begin{eqnarray} E(w) = \frac{1}{2}e^\top e = \frac{1}{2} \|e\|_2^2 \end{eqnarray} here \begin{eqnarray} \|e\|_2 & = & \left(e_1^2 + e_2^2 + \dots + e_N^2\right)^{\frac{1}{2}} \end{eqnarray} Another possibility is measuring the error with other norms, such as the absolute error \begin{eqnarray} \|e\|_1 & = & \left|e_1\right| + \left|e_2\right| + \dots + \left|e_N\right| \end{eqnarray} and more general $p$ norms \begin{eqnarray} \|e\|_p & = & \left( \left|e_1\right|^p + \left|e_2\right|^p + \dots + \left|e_N\right|^p\right)^{\frac{1}{p}} \end{eqnarray} Regularization Measuring the size of the parameter vector The idea is introducing a penalty for the parameter values $w$ that are away from the origin $$ F_{(2,2)}(w) = \| y - Aw \|_2^2 + \lambda \| w \|_2^2 $$ Lasso penalty $$ F_{(2,1)}(w) = \| y - Aw \|_2^2 + \lambda \| w \|_1 $$ $\ell_1$ Cost function $$ F_{(1,1)}(w) = \| y - Aw \|_1 + \lambda \| w \|_1 $$ A genaral mixed norm $$ F_{(p,q)}(w) = \| y - Aw \|_p^p + \lambda \| w \|_q^q $$ Aside A norm $\|\cdot\|: \mathbb{C}^m \rightarrow \mathbb{R}$: (Nonnegativity) $\|x\| \geq 0$, $\|x\| = 0 \Leftrightarrow x = 0$ (Triangle Inequality) $\|x+y\| \leq \|x\| + \| y \|$ (Scaling) $\|\alpha x\| = \left|\alpha\right|\|x\|$ for a scalar $\alpha$, When the cost functions are convex, we can compute the global optimal solution. A general tool for convex optimization is cvx. Measuring the error with different norms Below example illustrates the effect of choosing a different norms as penalty (cost) functions. Using norms with $p$ close to $1$ has the effect of providing robustness agains outliers. The square function causes large deviations to have an even larger impact on the total error. End of explanation N = 7 #th = np.arange(0, np.pi-np.pi/N, np.pi/N) th = 2*np.pi*np.random.rand(N) A = np.vstack((np.cos(th), np.sin(th))) y = np.mat('[1.2;2.1]') fig = plt.figure(figsize=(8,8)) for i in range(len(th)): plt.arrow(0,0,A[0,i],A[1,i]) plt.plot(y[0],y[1],'ok') plt.gca().set_xlim((-3,3)) plt.gca().set_ylim((-3,3)) plt.show() Explanation: Overcomplete Representations and regularization Suppose we are given a point $y$ in two dimensional space and want to represent this point with a linear combination of $N$ vectors $a_i$ for $i=1\dots N$, where $N>2$. 
We let $A$ be the $2 \times N$ matrix $$ A = [a_1, a_2,\dots, a_N] $$ To represent $y$, we need to solve the set of equations $$y = Aw$$ Clearly, there could be more than one solution, in fact in general there are an infinite number of solutions $w^*$ to this problem, so minimization of the error is not sufficient. To find a particular solution we may require an additional property from the solution $w^*$, such as having a small norm $\|w\|$. To achieve this, we can try to minimize $$ E_{(2,2)}(w) = \| y - Aw \|_2^2 + \lambda \| w \|_2^2 $$ End of explanation ## Regularization lam = 0.02 p = 2 q = 1 def Visualize_Basis(A,w=None, x=None,ylim=[-0.5, 1.1]): K = A.shape[1] if x is None: x = np.arange(0,A.shape[0]) if w is None: plt.figure(figsize=(6,2*K)) #plt.show() for i in range(K): plt.subplot(K,1,i+1) plt.stem(x,A[:,i]) plt.gcf().gca().set_xlim([-1, K+2]) plt.gcf().gca().set_ylim(ylim) plt.gcf().gca().axis('off') plt.show() else: # if w is not None plt.figure(figsize=(6,2*K)) for i in range(K): plt.subplot(K,2,2*i+1) plt.stem(x,A[:,i]) plt.gcf().gca().set_xlim([-1, K+2]) plt.gcf().gca().set_ylim(ylim) plt.gcf().gca().axis('off') plt.subplot(K,2,2*i+2) plt.stem(x,A[:,i]*w[i]) plt.gcf().gca().set_xlim([-1, K+2]) if np.abs(w[i])<1: plt.gcf().gca().set_ylim(ylim) plt.gcf().gca().axis('off') plt.show() # Construct the problem. w = cvx.Variable(len(th)) objective = cvx.Minimize(cvx.norm(A*w - y, p)**p + lam*cvx.norm(w, q)**q) constraints = [] prob = cvx.Problem(objective, constraints) # The optimal objective is returned by prob.solve(). result = prob.solve() figw = 12 fig = plt.figure(figsize=(figw,figw)) ws = np.array(w.value) v = np.zeros(2) for i in range(len(th)): dx = A[0,i]*ws[i] dy = A[1,i]*ws[i] plt.arrow(v[0],v[1], dx[0], dy[0],color='red') v[0] = v[0]+dx v[1] = v[1]+dy plt.arrow(0,0, dx[0], dy[0]) plt.arrow(0,0,A[0,i],A[1,i],linestyle=':') plt.plot(y[0],y[1],'ok') plt.gca().set_xlim((-1,3)) plt.gca().set_ylim((-1,3)) plt.show() fig = plt.figure(figsize=(figw,1)) plt.stem(ws, markerfmt='.b', basefmt='b:') plt.axes().set_xlim((-1,N)) plt.gca().axis('off') plt.show() Visualize_Basis(A,w=ws,ylim=[-2,2]) Explanation: Below, you can experiment by selecting different norms, $$ E_{(p,q)}(w) = \| y - Aw \|_p^p + \lambda \| w \|_q^q $$ End of explanation import scipy as sc import numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pylab as plt df_arac = pd.read_csv(u'data/arac.csv',sep=';') BaseYear = 1995 x = np.matrix(df_arac.Year[31:]).T-BaseYear y = np.matrix(df_arac.Car[31:]).T/1000000.0 # Introduce some artificial outliers y[-3] = y[-3]-3 y[4] = y[4]+5 plt.plot(x+BaseYear, y, 'o') plt.xlabel('Year') plt.ylabel('Number of Cars (Millions)') plt.xticks(range(BaseYear, BaseYear+len(x)+3,2)) plt.show() Explanation: Outlier detection with Basis Regression Set up some data with outliers End of explanation def triag_ones(N): A = np.zeros((N,N)) for i in range(N): A[i:,i] = np.ones(N-i) return A xx = np.matrix(np.arange(1995,2018,0.5)).T-BaseYear N = len(x) degree = 4 B = np.hstack((np.power(x,i) for i in range(degree+1))) # Make an orthogonal basis that spans the same column space Q, R, = np.linalg.qr(B) # Append an extra identity basis for outliers A = np.hstack((Q, np.eye(N))) B2 = np.hstack((np.power(xx,i) for i in range(degree+1))) A2 = B2*R.I Visualize_Basis(A,x=x) lam = 0.02 # Construct the problem. 
w = cvx.Variable(degree+1+N) p = 2 q = 1 objective = cvx.Minimize(cvx.norm(A*w - y, p)**p + lam*cvx.norm(w[degree+1:], q)**q) constraints = [] prob = cvx.Problem(objective, constraints) # The optimal objective is returned by prob.solve(). result = prob.solve() plt.figure(figsize=(10,5)) #plt.subplot(2,1,1) plt.plot(x, y, 'o') plt.plot(x, A*w.value, 'g:') plt.plot(xx, A2*w.value[0:degree+1,0], 'r-') fig.gca().set_xlim((0,25)) plt.show() fig = plt.figure(figsize=(10,5)) #plt.subplot(2,1,2) # The optimal value for w is stored in w.value. plt.stem(x,w.value[degree+1:],basefmt=':') fig.gca().set_xlim((0,25)) plt.show() Visualize_Basis(A,x=x,w=np.array(w.value), ylim=[-0.5, 1.1]) Explanation: A model for outlier detection A design matrix with a union of bases * Smooth low order polynomials for regular behavior $A_{\text smooth}$ * Spikes at every time point to model outliers $A_{\text spike}$ $$ y \approx A_{\text smooth} w_{\text smooth} + A_{\text spike} w_{\text spike} $$ Leads to an interpretable decomposition in terms of actual signal and noise Write in the generic form $y = Aw$ by $$ A = \left[A_{\text smooth}\; A_{\text spike}\right] \left( \begin{array}{c} w_{\text smooth} \ w_{\text spike} \end{array} \right) $$ Minimize the following objective $$ E(w_{\text smooth}, w_{\text spike}) = \|y - A_{\text smooth} w_{\text smooth} - A_{\text spike} w_{\text spike} \|{2}^2 + \lambda \|w{\text spike}\|_1 $$ End of explanation x = np.matrix(df_arac.Year[31:]).T-BaseYear y = np.matrix(df_arac.Truck[31:]).T/1000000.0 plt.plot(x+BaseYear, y, 'o') plt.xlabel('Year') plt.ylabel('Number of Trucks(Millions)') plt.show() degree = 1 lam = 1 p = 1 q = 1 xx = np.matrix(np.arange(1995,2018,0.5)).T-BaseYear N = len(x) B = np.hstack((np.power(x,i) for i in range(degree+1))) # Make an orthogonal basis that spans the same column space Q, R, = np.linalg.qr(B) # Append an extra identity basis for outliers A = np.hstack((Q, triag_ones(N))) B2 = np.hstack((np.power(xx,i) for i in range(degree+1))) A2 = B2*R.I # Construct the problem. w = cvx.Variable(degree+1+N) objective = cvx.Minimize(cvx.norm(A*w - y, p)**p + lam*cvx.norm(w[degree+1:], q)**q) constraints = [] prob = cvx.Problem(objective, constraints) # The optimal objective is returned by prob.solve(). result = prob.solve() plt.figure(figsize=(10,5)) #plt.subplot(2,1,1) plt.plot(x, y, 'o') plt.plot(x, A*w.value, ':') plt.plot(xx, A2*w.value[0:degree+1,0], '-') fig.gca().set_xlim((0,25)) plt.show() fig = plt.figure(figsize=(10,5)) #plt.subplot(2,1,2) # The optimal value for w is stored in w.value. plt.stem(x,w.value[degree+1:]) fig.gca().set_xlim((0,25)) # Visualize the Basis K = A.shape[1] plt.show() plt.figure(figsize=(6,2*K)) for i in range(K): plt.subplot(K,2,2*i+1) plt.stem(x,A[:,i]) plt.gcf().gca().set_xlim([0, K+2]) plt.gcf().gca().set_ylim([-0.5, 1.1]) plt.gcf().gca().axis('off') plt.subplot(K,2,2*i+2) plt.stem(x,A[:,i]*w.value[i,0]) plt.gcf().gca().set_xlim([0, K+2]) if np.abs(w.value[i,0])<1: plt.gcf().gca().set_ylim([-0.5, 1.1]) plt.gcf().gca().axis('off') plt.show() %run plot_normballs.py Explanation: Changepoint detection End of explanation p = 2 q = 1 lam = 0.1 K = 200 N = 100 R = 10 w_true = np.zeros(K) idx = np.random.choice(range(K), size=R) w_true[idx] = 2*np.random.randn(K,1) A = np.random.randn(N, K) y = 0.0*np.random.randn(N) + A.dot(w_true) # Construct the problem. 
w = cvx.Variable(K,1) objective = cvx.Minimize(cvx.norm(A*w - y, p)**p + lam*cvx.norm(w, q)**q) prob = cvx.Problem(objective, constraints) # The optimal objective is returned by prob.solve(). result = prob.solve() plt.stem(w.value) plt.stem(range(K),w_true.T,':',markerfmt='wo') plt.show() #print w.value #print w_true Explanation: Feature selection End of explanation import pandas as pd lam = 1.2 p = 2 q = 1 df_welllog = pd.read_csv(u'data/well-log.csv',names=['y']) y = np.array(df_welllog.y)[::4]/100000. N = len(y) A = triag_ones(N) K = N # Construct the problem. w = cvx.Variable(K,1) objective = cvx.Minimize(cvx.norm(A*w - y, p)**p + lam*cvx.norm(w, q)**q) prob = cvx.Problem(objective, constraints) # The optimal objective is returned by prob.solve(). result = prob.solve() thr = 0.01 plt.figure(figsize=(12,4)) plt.plot(A.dot(w.value),'r') plt.plot(y) plt.xlim((-5,N)) plt.figure(figsize=(12,4)) idx = np.where(np.abs(w.value)>thr)[0] plt.stem(idx,w.value[idx]) plt.xlim((-5,N)) plt.show() Explanation: Well log data End of explanation
12,946
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: Uisng Random Forest Regression
Python Code:: from sklearn.ensemble import RandomForestRegressor model = RandomForestRegressor() model.fit(X_train, Y_train) pred = model.predict(X_test)
12,947
Given the following text description, write Python code to implement the functionality described below step by step Description: Step3: Implementing MapReduce The Pool class can be used to create a simple single-server MapReduce implementation. Although it does not give the full benefits of distributed processing, it does illustrate how easy it is to break some problems down into distributable units of work. In a MapReduce-based system, input data is broken down into chunks for processing by different worker instances. Each chunk of input data is mapped to an intermediate state using a simple transformation. The intermediate data is then collected together and partitioned based on a key value so that all of the related values are together. Finally, the partitioned data is reduced to a result set. Step6: The following example script uses SimpleMapReduce to counts the “words” in the reStructuredText source for this article, ignoring some of the markup.
Python Code: import collections import itertools import multiprocessing class SimpleMapReduce: def __init__(self, map_func, reduce_func, num_workers=None): map_func Function to map inputs to intermediate data. Takes as argument one input value and returns a tuple with the key and a value to be reduced. reduce_func Function to reduce partitioned version of intermediate data to final output. Takes as argument a key as produced by map_func and a sequence of the values associated with that key. num_workers The number of workers to create in the pool. Defaults to the number of CPUs available on the current host. self.map_func = map_func self.reduce_func = reduce_func self.pool = multiprocessing.Pool(num_workers) def partition(self, mapped_values): Organize the mapped values by their key. Returns an unsorted sequence of tuples with a key and a sequence of values. reduce_func partitioned_data = collections.defaultdict(list) for key, value in mapped_values: partitioned_data[key].append(value) return partitioned_data.items() def __call__(self, inputs, chunksize=1): Process the inputs through the map and reduce functions given. inputs An iterable containing the input data to be processed. chunksize=1 The portion of the input data to hand to each worker. This can be used to tune performance during the mapping phase. map_responses = self.pool.map( self.map_func, inputs, chunksize=chunksize, ) partitioned_data = self.partition( itertools.chain(*map_responses) ) reduced_values = self.pool.map( self.reduce_func, partitioned_data, ) return reduced_values Explanation: Implementing MapReduce The Pool class can be used to create a simple single-server MapReduce implementation. Although it does not give the full benefits of distributed processing, it does illustrate how easy it is to break some problems down into distributable units of work. In a MapReduce-based system, input data is broken down into chunks for processing by different worker instances. Each chunk of input data is mapped to an intermediate state using a simple transformation. The intermediate data is then collected together and partitioned based on a key value so that all of the related values are together. Finally, the partitioned data is reduced to a result set. End of explanation import multiprocessing import string SimpleMapReduce def file_to_words(filename): Read a file and return a sequence of (word, occurences) values. STOP_WORDS = set([ 'a', 'an', 'and', 'are', 'as', 'be', 'by', 'for', 'if', 'in', 'is', 'it', 'of', 'or', 'py', 'rst', 'that', 'the', 'to', 'with', ]) TR = str.maketrans({ p: ' ' for p in string.punctuation }).rst print('{} reading {}'.format( multiprocessing.current_process().name, filename)) output = [] with open(filename, 'rt') as f: for line in f: # Skip comment lines. if line.lstrip().startswith('..'): continue line = line.translate(TR) # Strip punctuation for word in line.split(): word = word.lower() if word.isalpha() and word not in STOP_WORDS: output.append((word, 1)) return output.rst def count_words(item): Convert the partitioned data for a word to a tuple containing the word and the number of occurences. 
word, occurences = item return (word, sum(occurences)) if __name__ == '__main__': import operator import glob input_files = glob.glob('*.rst') mapper = SimpleMapReduce(file_to_words, count_words) word_counts = mapper(input_files) word_counts.sort(key=operator.itemgetter(1)) word_counts.reverse() print('\nTOP 20 WORDS BY FREQUENCY\n') top20 = word_counts[:20] longest = max(len(word) for word, count in top20) for word, count in top20: print('{word:<{len}}: {count:5}'.format( len=longest + 1, word=word, count=count) ) Explanation: The following example script uses SimpleMapReduce to counts the “words” in the reStructuredText source for this article, ignoring some of the markup. End of explanation
12,948
Given the following text description, write Python code to implement the functionality described below step by step Description: Subreddit Mapping using t-SNE This was my first effort at subreddit mapping to test if the idea was vaiable. It turns out that this was mostly quite similar to the final analysis, but I spent a while exploring some other options as well. Step1: I hadn't bothered to look if the relevant scikit-learn functions actually accepted sparse matrices when I was just playing, so I did the row normalization myself by hand. Step2: Again with the hand-rolled normalisation. It was not hard in this case. Step3: Instead of LargeVis we can just use t-SNE. Some caveats Step4: Clustering looks pretty much the same as it did in the final version. I played with parameters a little here, and also looked at leaf clustering as the cluster extraction method. In practice, however, the standard Excess of Mass approach was more than adequate. Step5: Onto the Bokeh plotting. This was still just experimenting with mapping and clustering so I hadn't honed down the plot code much. I don't do nice colormapping, for instance, but instead plot the noise and cluster points separately. There is also no adjustment of alpha channels based on zoom levels. It was good enough to view the map and mouse over regions to see how well things worked. Step6: The final real test was simply print out the contents of the clusters and see if they made sense. For the most part they are pretty good, but they are less good than what LargeVis provided, with more clusters for which there aren't clear topics. Feel free to do exactly this for the LargeVis version and you'll see what I mean.
Python Code: import pandas as pd import scipy.sparse as ss import numpy as np from sklearn.decomposition import TruncatedSVD import sklearn.manifold import tsne import re raw_data = pd.read_csv('subreddit-overlap') raw_data.head() subreddit_popularity = raw_data.groupby('t2_subreddit')['NumOverlaps'].sum() subreddits = np.array(subreddit_popularity.sort_values(ascending=False).index) index_map = dict(np.vstack([subreddits, np.arange(subreddits.shape[0])]).T) count_matrix = ss.coo_matrix((raw_data.NumOverlaps, (raw_data.t2_subreddit.map(index_map), raw_data.t1_subreddit.map(index_map))), shape=(subreddits.shape[0], subreddits.shape[0]), dtype=np.float64) count_matrix Explanation: Subreddit Mapping using t-SNE This was my first effort at subreddit mapping to test if the idea was vaiable. It turns out that this was mostly quite similar to the final analysis, but I spent a while exploring some other options as well. End of explanation conditional_prob_matrix = count_matrix.tocsr() row_sums = np.array(conditional_prob_matrix.sum(axis=1))[:,0] row_indices, col_indices = conditional_prob_matrix.nonzero() conditional_prob_matrix.data /= row_sums[row_indices] reduced_vectors = TruncatedSVD(n_components=500, random_state=0).fit_transform(conditional_prob_matrix) Explanation: I hadn't bothered to look if the relevant scikit-learn functions actually accepted sparse matrices when I was just playing, so I did the row normalization myself by hand. End of explanation reduced_vectors /= np.sqrt((reduced_vectors**2).sum(axis=1))[:, np.newaxis] Explanation: Again with the hand-rolled normalisation. It was not hard in this case. End of explanation seed_state = np.random.RandomState(0) subreddit_map = tsne.bh_sne(reduced_vectors[:10000], perplexity=50.0, random_state=seed_state) subreddit_map_df = pd.DataFrame(subreddit_map, columns=('x', 'y')) subreddit_map_df['subreddit'] = subreddits[:10000] subreddit_map_df.head() Explanation: Instead of LargeVis we can just use t-SNE. Some caveats: the tnse package is still quite a bit faster than t-SNE in scikit-learn, but iot only works with python 2. End of explanation import hdbscan clusterer = hdbscan.HDBSCAN(min_samples=5, min_cluster_size=20).fit(subreddit_map) cluster_ids = clusterer.labels_ subreddit_map_df['cluster'] = cluster_ids Explanation: Clustering looks pretty much the same as it did in the final version. I played with parameters a little here, and also looked at leaf clustering as the cluster extraction method. In practice, however, the standard Excess of Mass approach was more than adequate. 
End of explanation from bokeh.plotting import figure, show, output_notebook, output_file from bokeh.models import HoverTool, ColumnDataSource, value from bokeh.models.mappers import LinearColorMapper from bokeh.palettes import viridis from collections import OrderedDict output_notebook() color_mapper = LinearColorMapper(palette=viridis(256), low=0, high=cluster_ids.max()) color_dict = {'field': 'cluster', 'transform': color_mapper} plot_data_clusters = ColumnDataSource(subreddit_map_df[subreddit_map_df.cluster >= 0]) plot_data_noise = ColumnDataSource(subreddit_map_df[subreddit_map_df.cluster < 0]) tsne_plot = figure(title=u'A Map of Subreddits', plot_width = 700, plot_height = 700, tools= (u'pan, wheel_zoom, box_zoom,' u'box_select, resize, reset'), active_scroll=u'wheel_zoom') tsne_plot.add_tools( HoverTool(tooltips = OrderedDict([('subreddit', '@subreddit'), ('cluster', '@cluster')]))) # draw clusters tsne_plot.circle(u'x', u'y', source=plot_data_clusters, fill_color=color_dict, line_alpha=0.002, fill_alpha=0.1, size=10, hover_line_color=u'black') # draw noise tsne_plot.circle(u'x', u'y', source=plot_data_noise, fill_color=u'gray', line_alpha=0.002, fill_alpha=0.05, size=10, hover_line_color=u'black') # configure visual elements of the plot tsne_plot.title.text_font_size = value(u'16pt') tsne_plot.xaxis.visible = False tsne_plot.yaxis.visible = False tsne_plot.grid.grid_line_color = None tsne_plot.outline_line_color = None show(tsne_plot); Explanation: Onto the Bokeh plotting. This was still just experimenting with mapping and clustering so I hadn't honed down the plot code much. I don't do nice colormapping, for instance, but instead plot the noise and cluster points separately. There is also no adjustment of alpha channels based on zoom levels. It was good enough to view the map and mouse over regions to see how well things worked. End of explanation def is_nsfw(subreddit): return re.search(r'(nsfw|gonewild)', subreddit) for cid in range(cluster_ids.max() + 1): subreddits = subreddit_map_df.subreddit[cluster_ids == cid] if np.any(subreddits.map(is_nsfw)): subreddits = ' ... Censored ...' else: subreddits = subreddits.values print '\nCluster {}:\n{}\n'.format(cid, subreddits) Explanation: The final real test was simply print out the contents of the clusters and see if they made sense. For the most part they are pretty good, but they are less good than what LargeVis provided, with more clusters for which there aren't clear topics. Feel free to do exactly this for the LargeVis version and you'll see what I mean. End of explanation
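A quick sanity check, not in the original notebook, that is worth running before reading the per-cluster listings: how much of the map was labelled noise and how large the clusters are. It assumes cluster_ids from the HDBSCAN cell above.
# Noise fraction and cluster size distribution
noise_count = np.sum(cluster_ids == -1)
print('noise points: {} of {}'.format(noise_count, cluster_ids.shape[0]))
cluster_sizes = np.bincount(cluster_ids[cluster_ids >= 0])
print('number of clusters: {}'.format(len(cluster_sizes)))
print('five largest cluster sizes: {}'.format(sorted(cluster_sizes)[-5:]))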
12,949
Given the following text description, write Python code to implement the functionality described below step by step Description: Classifying movie reviews Step1: The argument num_words=10000 means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size. The variables train_data and test_data are lists of reviews, each review being a list of word indices (encoding a sequence of words). train_labels and test_labels are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive" Step2: Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000 Step3: For kicks, here's how you can quickly decode one of these reviews back to English words Step4: Preparing the data We cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that Step5: Here's what our samples look like now Step6: We should also vectorize our labels, which is straightforward Step7: Now our data is ready to be fed into a neural network. Building our network Our input data is simply vectors, and our labels are scalars (1s and 0s) Step8: Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), is it best to use the binary_crossentropy loss. It isn't the only viable choice Step9: We are passing our optimizer, loss function and metrics as strings, which is possible because rmsprop, binary_crossentropy and accuracy are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. This former can be done by passing an optimizer class instance as the optimizer argument Step10: The latter can be done by passing function objects as the loss or metrics arguments Step11: Validating our approach In order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data Step12: We will now train our model for 20 epochs (20 iterations over all samples in the x_train and y_train tensors), in mini-batches of 512 samples. At this same time we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the validation_data argument Step13: On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data. Note that the call to model.fit() returns a History object. This object has a member history, which is a dictionary containing data about everything that happened during training. Let's take a look at it Step14: It contains 4 entries Step15: The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network. As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. 
That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy Step16: Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new data After having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the predict method
Python Code: from keras.datasets import imdb (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000) Explanation: Classifying movie reviews: a binary classification example This notebook contains the code samples found in Chapter 3, Section 5 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments. Two-class classification, or binary classification, may be the most widely applied kind of machine learning problem. In this example, we will learn to classify movie reviews into "positive" reviews and "negative" reviews, just based on the text content of the reviews. The IMDB dataset We'll be working with "IMDB dataset", a set of 50,000 highly-polarized reviews from the Internet Movie Database. They are split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting in 50% negative and 50% positive reviews. Why do we have these two separate training and test sets? You should never test a machine learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean that it will perform well on data it has never seen, and what you actually care about is your model's performance on new data (since you already know the labels of your training data -- obviously you don't need your model to predict those). For instance, it is possible that your model could end up merely memorizing a mapping between your training samples and their targets -- which would be completely useless for the task of predicting targets for data never seen before. We will go over this point in much more detail in the next chapter. Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary. The following code will load the dataset (when you run it for the first time, about 80MB of data will be downloaded to your machine): End of explanation train_data[0] train_labels[0] Explanation: The argument num_words=10000 means that we will only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows us to work with vector data of manageable size. The variables train_data and test_data are lists of reviews, each review being a list of word indices (encoding a sequence of words). train_labels and test_labels are lists of 0s and 1s, where 0 stands for "negative" and 1 stands for "positive": End of explanation max([max(sequence) for sequence in train_data]) Explanation: Since we restricted ourselves to the top 10,000 most frequent words, no word index will exceed 10,000: End of explanation # word_index is a dictionary mapping words to an integer index word_index = imdb.get_word_index() # We reverse it, mapping integer indices to words reverse_word_index = dict([(value, key) for (key, value) in word_index.items()]) # We decode the review; note that our indices were offset by 3 # because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown". 
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]]) decoded_review Explanation: For kicks, here's how you can quickly decode one of these reviews back to English words: End of explanation import numpy as np def vectorize_sequences(sequences, dimension=10000): # Create an all-zero matrix of shape (len(sequences), dimension) results = np.zeros((len(sequences), dimension)) for i, sequence in enumerate(sequences): results[i, sequence] = 1. # set specific indices of results[i] to 1s return results # Our vectorized training data x_train = vectorize_sequences(train_data) # Our vectorized test data x_test = vectorize_sequences(test_data) Explanation: Preparing the data We cannot feed lists of integers into a neural network. We have to turn our lists into tensors. There are two ways we could do that: We could pad our lists so that they all have the same length, and turn them into an integer tensor of shape (samples, word_indices), then use as first layer in our network a layer capable of handling such integer tensors (the Embedding layer, which we will cover in detail later in the book). We could one-hot-encode our lists to turn them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence [3, 5] into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones. Then we could use as first layer in our network a Dense layer, capable of handling floating point vector data. We will go with the latter solution. Let's vectorize our data, which we will do manually for maximum clarity: End of explanation x_train[0] Explanation: Here's what our samples look like now: End of explanation # Our vectorized labels y_train = np.asarray(train_labels).astype('float32') y_test = np.asarray(test_labels).astype('float32') Explanation: We should also vectorize our labels, which is straightforward: End of explanation from keras import models from keras import layers model = models.Sequential() model.add(layers.Dense(16, activation='relu', input_shape=(10000,))) model.add(layers.Dense(16, activation='relu')) model.add(layers.Dense(1, activation='sigmoid')) Explanation: Now our data is ready to be fed into a neural network. Building our network Our input data is simply vectors, and our labels are scalars (1s and 0s): this is the easiest setup you will ever encounter. A type of network that performs well on such a problem would be a simple stack of fully-connected (Dense) layers with relu activations: Dense(16, activation='relu') The argument being passed to each Dense layer (16) is the number of "hidden units" of the layer. What's a hidden unit? It's a dimension in the representation space of the layer. You may remember from the previous chapter that each such Dense layer with a relu activation implements the following chain of tensor operations: output = relu(dot(W, input) + b) Having 16 hidden units means that the weight matrix W will have shape (input_dimension, 16), i.e. the dot product with W will project the input data onto a 16-dimensional representation space (and then we would add the bias vector b and apply the relu operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you are allowing the network to have when learning internal representations". 
Having more hidden units (a higher-dimensional representation space) allows your network to learn more complex representations, but it makes your network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data). There are two key architecture decisions to be made about such a stack of dense layers: How many layers to use. How many "hidden units" to choose for each layer. In the next chapter, you will learn formal principles to guide you in making these choices. For the time being, you will have to trust us with the following architecture choice: two intermediate layers with 16 hidden units each, and a third layer which will output the scalar prediction regarding the sentiment of the current review. The intermediate layers will use relu as their "activation function", and the final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1, indicating how likely the sample is to have the target "1", i.e. how likely the review is to be positive). A relu (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid "squashes" arbitrary values into the [0, 1] interval, thus outputting something that can be interpreted as a probability. Here's what our network looks like: And here's the Keras implementation, very similar to the MNIST example you saw previously: End of explanation model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) Explanation: Lastly, we need to pick a loss function and an optimizer. Since we are facing a binary classification problem and the output of our network is a probability (we end our network with a single-unit layer with a sigmoid activation), it is best to use the binary_crossentropy loss. It isn't the only viable choice: you could use, for instance, mean_squared_error. But crossentropy is usually the best choice when you are dealing with models that output probabilities. Crossentropy is a quantity from the field of Information Theory that measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and our predictions. Here's the step where we configure our model with the rmsprop optimizer and the binary_crossentropy loss function. Note that we will also monitor accuracy during training. End of explanation from keras import optimizers model.compile(optimizer=optimizers.RMSprop(lr=0.001), loss='binary_crossentropy', metrics=['accuracy']) Explanation: We are passing our optimizer, loss function and metrics as strings, which is possible because rmsprop, binary_crossentropy and accuracy are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function.
This former can be done by passing an optimizer class instance as the optimizer argument: End of explanation from keras import losses from keras import metrics model.compile(optimizer=optimizers.RMSprop(lr=0.001), loss=losses.binary_crossentropy, metrics=[metrics.binary_accuracy]) Explanation: The latter can be done by passing function objects as the loss or metrics arguments: End of explanation x_val = x_train[:10000] partial_x_train = x_train[10000:] y_val = y_train[:10000] partial_y_train = y_train[10000:] Explanation: Validating our approach In order to monitor during training the accuracy of the model on data that it has never seen before, we will create a "validation set" by setting apart 10,000 samples from the original training data: End of explanation history = model.fit(partial_x_train, partial_y_train, epochs=20, batch_size=512, validation_data=(x_val, y_val)) Explanation: We will now train our model for 20 epochs (20 iterations over all samples in the x_train and y_train tensors), in mini-batches of 512 samples. At this same time we will monitor loss and accuracy on the 10,000 samples that we set apart. This is done by passing the validation data as the validation_data argument: End of explanation history_dict = history.history history_dict.keys() Explanation: On CPU, this will take less than two seconds per epoch -- training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data. Note that the call to model.fit() returns a History object. This object has a member history, which is a dictionary containing data about everything that happened during training. Let's take a look at it: End of explanation import matplotlib.pyplot as plt acc = history.history['acc'] val_acc = history.history['val_acc'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(acc) + 1) # "bo" is for "blue dot" plt.plot(epochs, loss, 'bo', label='Training loss') # b is for "solid blue line" plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() plt.clf() # clear figure acc_values = history_dict['acc'] val_acc_values = history_dict['val_acc'] plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() Explanation: It contains 4 entries: one per metric that was being monitored, during training and during validation. Let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy: End of explanation model = models.Sequential() model.add(layers.Dense(16, activation='relu', input_shape=(10000,))) model.add(layers.Dense(16, activation='relu')) model.add(layers.Dense(1, activation='sigmoid')) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=4, batch_size=512) results = model.evaluate(x_test, y_test) results Explanation: The dots are the training loss and accuracy, while the solid lines are the validation loss and accuracy. Note that your own results may vary slightly due to a different random initialization of your network. As you can see, the training loss decreases with every epoch and the training accuracy increases with every epoch. 
That's what you would expect when running gradient descent optimization -- the quantity you are trying to minimize should get lower with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we were warning against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you are seeing is "overfitting": after the second epoch, we are over-optimizing on the training data, and we ended up learning representations that are specific to the training data and do not generalize to data outside of the training set. In this case, to prevent overfitting, we could simply stop training after three epochs. In general, there is a range of techniques you can leverage to mitigate overfitting, which we will cover in the next chapter. Let's train a new network from scratch for four epochs, then evaluate it on our test data: End of explanation model.predict(x_test) Explanation: Our fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, one should be able to get close to 95%. Using a trained network to generate predictions on new data After having trained a network, you will want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the predict method: End of explanation
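Added note (not an original cell): a small, hedged sketch of how the predicted probabilities returned by predict could be turned into hard labels and checked against the test labels; the 0.5 cut-off is an assumption, not something the text prescribes.
import numpy as np
probs = model.predict(x_test)                              # probabilities in [0, 1], one per review
pred_labels = (probs.reshape(-1) > 0.5).astype('float32')  # assumed threshold of 0.5
print(np.mean(pred_labels == y_test))                      # should roughly agree with model.evaluate's accuracy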
12,950
Given the following text description, write Python code to implement the functionality described below step by step Description: Fitting Models Exercise 2 Imports Step1: Fitting a decaying oscillation For this problem you are given a raw dataset in the file decay_osc.npz. This file contains three arrays Step2: Now, using curve_fit to fit this model and determine the estimates and uncertainties for the parameters
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt Explanation: Fitting Models Exercise 2 Imports End of explanation f = np.load('decay_osc.npz', mmap_mode='r') list(f) ydata = f['ydata'] dy = f['dy'] tdata = f['tdata'] plt.figure(figsize=(10,5)) plt.errorbar(tdata, ydata, dy, fmt='.k', ecolor='lightgray') plt.xlabel("t", fontsize=14) plt.ylabel("y", fontsize=14) ax = plt.gca() ax.spines['right'].set_color('none') ax.spines['top'].set_color('none') ax.spines['bottom'].set_color('#a2a7ff') ax.spines['left'].set_color('#a2a7ff') ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() plt.title("Decaying Oscillations Plot with Error") plt.show() assert True # leave this to grade the data import and raw data plot Explanation: Fitting a decaying oscillation For this problem you are given a raw dataset in the file decay_osc.npz. This file contains three arrays: tdata: an array of time values ydata: an array of y values dy: the absolute uncertainties (standard deviations) in y Your job is to fit the following model to this data: $$ y(t) = A e^{-\lambda t} \cos{\omega t + \delta} $$ First, import the data using NumPy and make an appropriately styled error bar plot of the raw data. End of explanation def model(t, A, lamb, omega, delta): return A*np.exp(-lamb*t)*np.cos(omega*t) + delta theta_best, theta_cov = opt.curve_fit(model, tdata, ydata, sigma=dy) print('A = {0:.3f} +/- {1:.3f}'.format(theta_best[0],np.sqrt(theta_cov[0,0]))) print('lambda = {0:.3f} +/- {1:.3f}'.format(theta_best[1],np.sqrt(theta_cov[1,1]))) print('omega = {0:.3f} +/- {1:.3f}'.format(theta_best[2],np.sqrt(theta_cov[2,2]))) print('delta = {0:.3f} +/- {1:.3f}'.format(theta_best[3],np.sqrt(theta_cov[3,3]))) Y = theta_best[0]*np.exp(-theta_best[1]*tdata)*np.cos(theta_best[2]*tdata) + theta_best[3] plt.figure(figsize=(10,5)) plt.plot(tdata,Y) plt.errorbar(tdata, ydata, dy, fmt='.k', ecolor='lightgray') plt.xlabel("t", fontsize=14) plt.ylabel("y", fontsize=14) ax = plt.gca() ax.spines['right'].set_color('none') ax.spines['top'].set_color('none') ax.spines['bottom'].set_color('#a2a7ff') ax.spines['left'].set_color('#a2a7ff') ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() plt.title("Curve Fit for Decaying Oscillation Plot with Error") plt.show() assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors Explanation: Now, using curve_fit to fit this model and determine the estimates and uncertainties for the parameters: Print the parameters estimates and uncertainties. Plot the raw and best fit model. You will likely have to pass an initial guess to curve_fit to get a good fit. Treat the uncertainties in $y$ as absolute errors by passing absolute_sigma=True. End of explanation
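The explanation above suggests passing an initial guess and absolute_sigma=True to curve_fit. As an added, hedged sketch (the function name and p0 values are illustrative assumptions), this is how that variant could look, using the model exactly as stated in the exercise, y = A e^(-lambda t) cos(omega t + delta):
def decay_model(t, A, lamb, omega, delta):
    # model with the phase delta inside the cosine, as written in the exercise statement
    return A * np.exp(-lamb * t) * np.cos(omega * t + delta)

p0 = (5.0, 0.1, 1.0, 0.0)   # rough starting values for (A, lambda, omega, delta), assumed
theta_best, theta_cov = opt.curve_fit(decay_model, tdata, ydata, sigma=dy,
                                      p0=p0, absolute_sigma=True)
theta_err = np.sqrt(np.diag(theta_cov))   # parameter uncertainties from the covariance diagonal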
12,951
Given the following text description, write Python code to implement the functionality described below step by step Description: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. Step1: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! Step2: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. Step3: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). Step4: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. Step5: Splitting the data into training, testing, and validation sets We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). Step7: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters Step8: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. 
The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of epochs This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. Step9: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. Step10: OPTIONAL
Python Code: %matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt Explanation: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. End of explanation data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() Explanation: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! End of explanation rides[:24*10].plot(x='dteday', y='cnt') Explanation: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. End of explanation dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() Explanation: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). End of explanation quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std Explanation: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. 
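As an added illustration (not an original cell): the saved factors make it easy to undo the scaling later, which is exactly what the prediction plot near the end of this notebook does. For example:
mean, std = scaled_features['cnt']
unscaled = scaled_values * std + mean   # 'scaled_values' is a placeholder for any scaled prediction or target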
End of explanation # Save data for approximately the last 21 days test_data = data[-21*24:] # Now remove the test data from the data set data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] Explanation: Splitting the data into training, testing, and validation sets We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. End of explanation # Hold out the last 60 days or so of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). End of explanation class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.input_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, (self.output_nodes, self.hidden_nodes)) self.lr = learning_rate #### TODO: Set self.activation_function to your implemented sigmoid function #### # # Note: in Python, you can define a function with a lambda expression, # as shown below. self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation. ### If the lambda code above is not something you're familiar with, # You can uncomment out the following three lines and put your # implementation there instead. # #def sigmoid(x): # return 0 # Replace 0 with your sigmoid calculation here #self.activation_function = sigmoid def train(self, inputs_list, targets_list): # Convert inputs list to 2d array inputs = np.array(inputs_list, ndmin=2).T targets = np.array(targets_list, ndmin=2).T #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer - Replace these values with your calculations. hidden_inputs = np.dot(self.weights_input_to_hidden,inputs) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer - Replace these values with your calculations. final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs) # signals into final output layer final_outputs = final_inputs # signals from final output layer #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error - Replace this value with your calculations. output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output. # TODO: Backpropagated error - Replace these values with your calculations. 
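        # Added comment: the error is propagated back by sending output_errors through
        # weights_hidden_to_output and scaling by the sigmoid derivative,
        # hidden_outputs * (1 - hidden_outputs); that is what the next three lines compute.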
hidden_errors = np.dot(output_errors,self.weights_hidden_to_output) hidden_grad = hidden_outputs * (1.0 - hidden_outputs) # hidden layer gradients hidden_error_term = hidden_grad * hidden_errors.T # TODO: Update the weights - Replace these values with your calculations. self.weights_hidden_to_output += self.lr * output_errors * hidden_outputs.T # update hidden-to-output weights with gradient descent step self.weights_input_to_hidden += self.lr * hidden_error_term * inputs.T # update input-to-hidden weights with gradient descent step def run(self, inputs_list): # Run a forward pass through the network inputs = np.array(inputs_list, ndmin=2).T #### Implement the forward pass here #### # TODO: Hidden layer - replace these values with the appropriate calculations. hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer - Replace these values with the appropriate calculations. final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs) # signals into final output layer final_outputs = final_inputs # signals from final output layer return final_outputs def MSE(y, Y): return np.mean((y-Y)**2) Explanation: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation. Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method. 
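A quick worked version of the hint above (added for clarity, not part of the original project text): the output activation is $f(x) = x$, so its derivative is $f'(x) = 1$. The error term at the output node is therefore just (target - prediction) multiplied by 1, which is why the backward pass can use the raw output error directly when updating the hidden-to-output weights.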
End of explanation import sys ### Set the hyperparameters here ### epochs = 1500 learning_rate = 0.01 hidden_nodes = 8 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for e in range(epochs): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) for record, target in zip(train_features.ix[batch].values, train_targets.ix[batch]['cnt']): network.train(record, target) # Printing out the training progress train_loss = MSE(network.run(train_features), train_targets['cnt'].values) val_loss = MSE(network.run(val_features), val_targets['cnt'].values) sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5]) losses['train'].append(train_loss) losses['validation'].append(val_loss) plt.plot(losses['train'], label='Training loss') plt.plot(losses['validation'], label='Validation loss') plt.legend() plt.ylim(ymax=1) Explanation: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of epochs This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. 
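A hedged sketch of how the guidance above could be turned into a quick comparison (illustrative only; the candidate values and the short 100-iteration run are assumptions, not project requirements):
for n_hidden in [4, 8, 16]:
    net = NeuralNetwork(N_i, n_hidden, output_nodes, learning_rate)
    for _ in range(100):
        batch = np.random.choice(train_features.index, size=128)
        for record, target in zip(train_features.ix[batch].values, train_targets.ix[batch]['cnt']):
            net.train(record, target)
    # lower validation MSE suggests a better hidden-layer size for this data
    print(n_hidden, MSE(net.run(val_features), val_targets['cnt'].values))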
End of explanation fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(test_features)*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.ix[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) Explanation: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. End of explanation import unittest inputs = [0.5, -0.2, 0.1] targets = [0.4] test_w_i_h = np.array([[0.1, 0.4, -0.3], [-0.2, 0.5, 0.2]]) test_w_h_o = np.array([[0.3, -0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328, -0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, 0.39775194, -0.29887597], [-0.20185996, 0.50074398, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) Explanation: OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric). Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter Your answer below The model predicted as well a good part of the results, but by the end of the year it probably missed the holidays. Unit tests Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project. End of explanation
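One extra, optional check (added here, not part of the graded project): the same MSE helper can score the held-out test set, e.g. MSE(network.run(test_features), test_targets['cnt'].values), which gives a single number to compare against the validation loss curve.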
12,952
Given the following text description, write Python code to implement the functionality described below step by step Description: <h3>Step 1 Step1: <h3>Step 2 Step2: <h3>Step 3 Step3: <h3>Step 4 Step4: <h4>Problem Step5: <h2>JSON</h2> <li>The python library - json - deals with converting text to and from JSON Step6: <h3>json.loads recursively decodes a string in JSON format into equivalent python objects</h3> <li>data_string's outermost element is converted into a python list <li>the first element of that list is converted into a dictionary <li>the key of that dictionary is converted into a string <li>the value of that dictionary is converted into a list of two integer elements Step7: <h3>json.loads will throw an exception if the format is incorrect</h3> Step8: <h2>requests library and JSON</h2> Step9: <h3>Exception checking!</h3> Step10: <h2>Problem 1 Step11: <h2>Problem 2 Step13: <h1>XML</h1> <li>The python library - lxml - deals with converting an xml string to python objects and vice versa</li> Step14: <h3>Iterating over an XML tree</h3> <li>Use an iterator. <li>The iterator will generate every tree element for a given subtree Step15: <h4>Or just use the child in subtree construction Step16: <h4>Accessing the tag</h4> Step17: <h4>Using the iterator to get specific tags<h4> <li>In the below example, only the author tags are accessed <li>For each author tag, the .find function accesses the First_Name and Last_Name tags <li>The .find function only looks at the children, not other descendants, so be careful! <li>The .text attribute prints the text in a leaf node Step18: <h4>Problem Step19: <h4>Using values of attributes as filters</h4> <li>Example Step20: <h4>Problem
Python Code: import requests Explanation: <h3>Step 1: Import the requests library</h3> End of explanation response = requests.get("http://www.epicurious.com/search/Tofu+Chili") Explanation: <h3>Step 2: Send an HTTP request, get the response, and save in a variable</h3> End of explanation print(response.status_code) Explanation: <h3>Step 3: Check the response status code to see if everything went as planned</h3> <li>status code 200: the request response cycle was successful <li>any other status code: it didn't work (e.g., 404 = page not found) End of explanation response.content.decode('utf-8') Explanation: <h3>Step 4: Get the content of the response</h3> <li>Convert to utf-8 if necessary End of explanation url = "https://en.wikipedia.org/wiki/main_page" #The rest of your code should go below this line wiki_response = requests.get(url) print(wiki_response.status_code) wiki_text = wiki_response.content.decode('utf-8') print(wiki_text.find('Did you know')) Explanation: <h4>Problem: Get the contents of Wikipedia's main page and look for the string "Did you know" in it</h4> End of explanation import json data_string = '[{"b": [2, 4], "c": 3.0, "a": "A"}]' python_data = json.loads(data_string) print(python_data) Explanation: <h2>JSON</h2> <li>The python library - json - deals with converting text to and from JSON End of explanation print(type(data_string),type(python_data)) print(type(python_data[0]),python_data[0]) print(type(python_data[0]['b']),python_data[0]['b']) Explanation: <h3>json.loads recursively decodes a string in JSON format into equivalent python objects</h3> <li>data_string's outermost element is converted into a python list <li>the first element of that list is converted into a dictionary <li>the key of that dictionary is converted into a string <li>the value of that dictionary is converted into a list of two integer elements End of explanation #Wrong #json.loads("'Hello'") #Correct json.loads('"Hello"') import json data_string = json.dumps(python_data) print(type(data_string)) print(data_string) Explanation: <h3>json.loads will throw an exception if the format is incorrect</h3> End of explanation address="Columbia University, New York, NY" url="https://maps.googleapis.com/maps/api/geocode/json?address=%s" % (address) response = requests.get(url).json() print(type(response)) Explanation: <h2>requests library and JSON</h2> End of explanation address="Columbia University, New York, NY" url="https://maps.googleapis.com/maps/api/geocode/json?address=%s" % (address) try: response = requests.get(url) if not response.status_code == 200: print("HTTP error",response.status_code) else: try: response_data = response.json() except: print("Response not in valid JSON format") except: print("Something went wrong with requests.get") print(type(response_data)) response_data['results'][0]['geometry']['location'] Explanation: <h3>Exception checking!</h3> End of explanation def get_lat_lng(address_string): #python code goes here url="https://maps.googleapis.com/maps/api/geocode/json?address=%s" % (address) import requests response = requests.get(url) if not response.status_code == 200: print("error http response code is :%s" % response.status_code) else: try: response_data = response.json() except: print("response not in valid JSON format") lat = response_data['results'][0]['geometry']['location']['lat'] lng = response_data['results'][0]['geometry']['location']['lng'] return lat, lng Explanation: <h2>Problem 1: Write a function that takes an address as an argument and returns a (latitude, 
longitude) tuple</h2> End of explanation address="Columbia" url="https://maps.googleapis.com/maps/api/geocode/json?address=%s" % (address) try: response = requests.get(url) if not response.status_code == 200: print("HTTP error",response.status_code) else: try: response_data = response.json() except: print("Response not in valid JSON format") except: print("Something went wrong with requests.get") print(type(response_data)) import json json.dumps(response_data) def get_lat_lng(address_string): #python code goes here url="https://maps.googleapis.com/maps/api/geocode/json?address=%s" % (address) import requests try: response = requests.get(url) if not response.status_code == 200: print('request error code : %s ' % response_status_code) else: try: response_data = response.json() except: print('response data is not a valid JSON format') except: print('request went wrong') geos = [] for geo in response_data['results']: geos.append([geo['geometry']['location']['lat'], geo['geometry']['location']['lng']]) return geos get_lat_lng('Columbia') Explanation: <h2>Problem 2: Extend the function so that it takes a possibly incomplete address as an argument and returns a list of tuples of the form (complete address, latitude, longitude)</h2> End of explanation data_string = <Bookstore> <Book ISBN="ISBN-13:978-1599620787" Price="15.23" Weight="1.5"> <Title>New York Deco</Title> <Authors> <Author Residence="New York City"> <First_Name>Richard</First_Name> <Last_Name>Berenholtz</Last_Name> </Author> </Authors> </Book> <Book ISBN="ISBN-13:978-1579128562" Price="15.80"> <Remark> Five Hundred Buildings of New York and over one million other books are available for Amazon Kindle. </Remark> <Title>Five Hundred Buildings of New York</Title> <Authors> <Author Residence="Beijing"> <First_Name>Bill</First_Name> <Last_Name>Harris</Last_Name> </Author> <Author Residence="New York City"> <First_Name>Jorg</First_Name> <Last_Name>Brockmann</Last_Name> </Author> </Authors> </Book> </Bookstore> from lxml import etree root = etree.XML(data_string) print(root.tag,type(root.tag)) print(etree.tostring(root, pretty_print=True).decode("utf-8")) Explanation: <h1>XML</h1> <li>The python library - lxml - deals with converting an xml string to python objects and vice versa</li> End of explanation for element in root.iter(): print(element.text) Explanation: <h3>Iterating over an XML tree</h3> <li>Use an iterator. <li>The iterator will generate every tree element for a given subtree End of explanation for child in root: print(child) Explanation: <h4>Or just use the child in subtree construction End of explanation for child in root: print(child.tag) Explanation: <h4>Accessing the tag</h4> End of explanation for element in root.iter("Author"): print(element.find('First_Name').text,element.find('Last_Name').text) Explanation: <h4>Using the iterator to get specific tags<h4> <li>In the below example, only the author tags are accessed <li>For each author tag, the .find function accesses the First_Name and Last_Name tags <li>The .find function only looks at the children, not other descendants, so be careful! 
<li>The .text attribute prints the text in a leaf node End of explanation for element in root.iter("Author"): print(element.find('Last_Name').text) # right solution for xpath for element in root.findall("Book/Authors/Author"): print(element.find('Last_Name').text) Explanation: <h4>Problem: Find the last names of all authors in the tree “root” using xpath</h4> End of explanation root.find('Book[@Weight="1.5"]/Authors/Author/First_Name').text root.find('Book/Remark').text Explanation: <h4>Using values of attributes as filters</h4> <li>Example: Find the first name of the author of a book that weighs 1.5 oz End of explanation for element in root.findall('Book/Authors/Author[@Residence="New York City"]'): print(element.find('First_Name').text, element.find("Last_Name").text) Explanation: <h4>Problem: Print first and last names of all authors who live in New York City</h4> End of explanation
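For comparison, a hedged alternative (not in the original notebook) that runs the same attribute-filtered query through lxml's xpath() method, which can return the text nodes directly:
first_names = root.xpath('Book/Authors/Author[@Residence="New York City"]/First_Name/text()')
last_names = root.xpath('Book/Authors/Author[@Residence="New York City"]/Last_Name/text()')
print(list(zip(first_names, last_names)))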
12,953
Given the following text description, write Python code to implement the functionality described below step by step Description: Skip-gram word2vec In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. A really good conceptual overview of word2vec from Chris McCormick First word2vec paper from Mikolov et al. NIPS paper with improvements for word2vec also from Mikolov et al. An implementation of word2vec from Thushan Ganegedara TensorFlow word2vec tutorial Word embeddings When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit. Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension. <img src='assets/tokenize_lookup.png' width=500> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning. Word2Vec The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. <img src="assets/word2vec_architectures.png" width="500"> In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. First up, importing packages. 
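A tiny illustration of the lookup shortcut described above (added for clarity; the toy matrix and index are made up):
import numpy as np
W = np.random.randn(5, 3)                  # toy "embedding" matrix: 5 words, 3 hidden units
one_hot = np.array([0., 0., 1., 0., 0.])   # one-hot vector for word index 2
print(np.allclose(one_hot @ W, W[2]))      # True: the matrix multiplication just selects row 2 of W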
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space. Step2: Preprocessing Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. Step3: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words. Step4: Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it. Exercise Step5: Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al. Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory. Step7: Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. After training, we can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise Step8: Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. 
So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. Exercise Step9: Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise Step10: Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. Step11: Training Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words. Step12: Restore the trained network if you need to Step13: Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
Python Code: import time import numpy as np import tensorflow as tf import utils Explanation: Skip-gram word2vec In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. A really good conceptual overview of word2vec from Chris McCormick First word2vec paper from Mikolov et al. NIPS paper with improvements for word2vec also from Mikolov et al. An implementation of word2vec from Thushan Ganegedara TensorFlow word2vec tutorial Word embeddings When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit. Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension. <img src='assets/tokenize_lookup.png' width=500> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning. Word2Vec The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. <img src="assets/word2vec_architectures.png" width="500"> In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. First up, importing packages. 
End of explanation from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import zipfile dataset_folder_path = 'data' dataset_filename = 'text8.zip' dataset_name = 'Text8 Dataset' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(dataset_filename): with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar: urlretrieve( 'http://mattmahoney.net/dc/text8.zip', dataset_filename, pbar.hook) if not isdir(dataset_folder_path): with zipfile.ZipFile(dataset_filename) as zip_ref: zip_ref.extractall(dataset_folder_path) with open('data/text8') as f: text = f.read() Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space. End of explanation words = utils.preprocess(text) print(words[:30]) print("Total words: {}".format(len(words))) print("Unique words: {}".format(len(set(words)))) Explanation: Preprocessing Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. End of explanation vocab_to_int, int_to_vocab = utils.create_lookup_tables(words) int_words = [vocab_to_int[word] for word in words] Explanation: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words. End of explanation ## Your code here train_words = # The final subsampled word list Explanation: Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it. Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words. End of explanation def get_target(words, idx, window_size=5): ''' Get a list of words in a window around an index. 
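        One possible approach, sketched here as an assumption rather than the
        notebook's reference solution: draw R = np.random.randint(1, window_size + 1)
        and return words[max(idx - R, 0):idx] + words[idx + 1:idx + R + 1].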
''' # Your code here return Explanation: Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al.: "Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels." Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window. End of explanation def get_batches(words, batch_size, window_size=5): ''' Create a generator of word batches as a tuple (inputs, targets) ''' n_batches = len(words)//batch_size # only full batches words = words[:n_batches*batch_size] for idx in range(0, len(words), batch_size): x, y = [], [] batch = words[idx:idx+batch_size] for ii in range(len(batch)): batch_x = batch[ii] batch_y = get_target(batch, ii, window_size) y.extend(batch_y) x.extend([batch_x]*len(batch_y)) yield x, y Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory. End of explanation train_graph = tf.Graph() with train_graph.as_default(): inputs = labels = Explanation: Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. After training, we can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1. End of explanation n_vocab = len(int_to_vocab) n_embedding = # Number of embedding features with train_graph.as_default(): embedding = # create embedding weight matrix here embed = # use tf.nn.embedding_lookup to get the hidden layer output Explanation: Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. 
Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform. End of explanation # Number of negative labels to sample n_sampled = 100 with train_graph.as_default(): softmax_w = # create softmax weight matrix here softmax_b = # create softmax biases here # Calculate the loss using negative sampling loss = tf.nn.sampled_softmax_loss cost = tf.reduce_mean(loss) optimizer = tf.train.AdamOptimizer().minimize(cost) Explanation: Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works. End of explanation with train_graph.as_default(): ## From Thushan Ganegedara's implementation valid_size = 16 # Random set of words to evaluate similarity on. valid_window = 100 # pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent valid_examples = np.array(random.sample(range(valid_window), valid_size//2)) valid_examples = np.append(valid_examples, random.sample(range(1000,1000+valid_window), valid_size//2)) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True)) normalized_embedding = embedding / norm valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset) similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding)) # If the checkpoints directory doesn't exist: !mkdir checkpoints Explanation: Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. 
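The check itself is easy to sketch with plain NumPy (made-up shapes, just an illustration of the idea): once the rows are L2-normalized, a dot product is the cosine similarity, and argsort picks out the nearest words, which is what the training loop below does with the sim matrix.
import numpy as np
emb = np.random.rand(10, 4)                          # stand-in for the embedding matrix
emb /= np.linalg.norm(emb, axis=1, keepdims=True)    # normalize each row
sim = emb[3] @ emb.T                                 # cosine similarity of word 3 vs. all words
print((-sim).argsort()[1:4])                         # indices of its 3 nearest neighbours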
End of explanation epochs = 10 batch_size = 1000 window_size = 10 with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: iteration = 1 loss = 0 sess.run(tf.global_variables_initializer()) for e in range(1, epochs+1): batches = get_batches(train_words, batch_size, window_size) start = time.time() for x, y in batches: feed = {inputs: x, labels: np.array(y)[:, None]} train_loss, _ = sess.run([cost, optimizer], feed_dict=feed) loss += train_loss if iteration % 100 == 0: end = time.time() print("Epoch {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Avg. Training loss: {:.4f}".format(loss/100), "{:.4f} sec/batch".format((end-start)/100)) loss = 0 start = time.time() if iteration % 1000 == 0: ## From Thushan Ganegedara's implementation # note that this is expensive (~20% slowdown if computed every 500 steps) sim = similarity.eval() for i in range(valid_size): valid_word = int_to_vocab[valid_examples[i]] top_k = 8 # number of nearest neighbors nearest = (-sim[i, :]).argsort()[1:top_k+1] log = 'Nearest to %s:' % valid_word for k in range(top_k): close_word = int_to_vocab[nearest[k]] log = '%s %s,' % (log, close_word) print(log) iteration += 1 save_path = saver.save(sess, "checkpoints/text8.ckpt") embed_mat = sess.run(normalized_embedding) Explanation: Training Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words. End of explanation with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) embed_mat = sess.run(embedding) Explanation: Restore the trained network if you need to: End of explanation %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt from sklearn.manifold import TSNE viz_words = 500 tsne = TSNE() embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :]) fig, ax = plt.subplots(figsize=(14, 14)) for idx in range(viz_words): plt.scatter(*embed_tsne[idx, :], color='steelblue') plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7) Explanation: Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. End of explanation
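If you want to reuse the trained vectors in another model later, one simple option is to persist the matrix together with the vocabulary mapping (a convenience suggestion, not part of the original notebook):
np.save('embed_mat.npy', embed_mat)    # the normalized embedding matrix computed above
# keep int_to_vocab alongside it (pickle/json) so the row indices stay meaningful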
12,954
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2018 The TensorFlow Authors. Step1: Ragged Tensors <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Overview Your data comes in many shapes; your tensors should too. Ragged tensors are the TensorFlow equivalent of nested variable-length lists. They make it easy to store and process data with non-uniform shapes, including Step3: There are also a number of methods and operations that are specific to ragged tensors, including factory methods, conversion methods, and value-mapping operations. For a list of supported ops, see the tf.ragged package documentation. As with normal tensors, you can use Python-style indexing to access specific slices of a ragged tensor. For more information, see the section on Indexing below. Step4: And just like normal tensors, you can use Python arithmetic and comparison operators to perform elementwise operations. For more information, see the section on Overloaded Operators below. Step5: If you need to perform an elementwise transformation to the values of a RaggedTensor, you can use tf.ragged.map_flat_values, which takes a function plus one or more arguments, and applies the function to transform the RaggedTensor's values. Step6: Constructing a ragged tensor The simplest way to construct a ragged tensor is using tf.ragged.constant, which builds the RaggedTensor corresponding to a given nested Python list Step7: Ragged tensors can also be constructed by pairing flat values tensors with row-partitioning tensors indicating how those values should be divided into rows, using factory classmethods such as tf.RaggedTensor.from_value_rowids, tf.RaggedTensor.from_row_lengths, and tf.RaggedTensor.from_row_splits. tf.RaggedTensor.from_value_rowids If you know which row each value belongs in, then you can build a RaggedTensor using a value_rowids row-partitioning tensor Step8: tf.RaggedTensor.from_row_lengths If you know how long each row is, then you can use a row_lengths row-partitioning tensor Step9: tf.RaggedTensor.from_row_splits If you know the index where each row starts and ends, then you can use a row_splits row-partitioning tensor Step10: See the tf.RaggedTensor class documentation for a full list of factory methods. What you can store in a ragged tensor As with normal Tensors, the values in a RaggedTensor must all have the same type; and the values must all be at the same nesting depth (the rank of the tensor) Step11: Example use case The following example demonstrates how RaggedTensors can be used to construct and combine unigram and bigram embeddings for a batch of variable-length queries, using special markers for the beginning and end of each sentence. For more details on the ops used in this example, see the tf.ragged package documentation. Step12: Ragged tensors Step13: The method tf.RaggedTensor.bounding_shape can be used to find a tight bounding shape for a given RaggedTensor Step14: Ragged vs sparse tensors A ragged tensor should not be thought of as a type of sparse tensor, but rather as a dense tensor with an irregular shape. As an illustrative example, consider how array operations such as concat, stack, and tile are defined for ragged vs. sparse tensors. 
Concatenating ragged tensors joins each row to form a single row with the combined length Step15: But concatenating sparse tensors is equivalent to concatenating the corresponding dense tensors, as illustrated by the following example (where Ø indicates missing values) Step16: For another example of why this distinction is important, consider the definition of “the mean value of each row” for an op such as tf.reduce_mean. For a ragged tensor, the mean value for a row is the sum of the row’s values divided by the row’s width. But for a sparse tensor, the mean value for a row is the sum of the row’s values divided by the sparse tensor’s overall width (which is greater than or equal to the width of the longest row). Overloaded operators The RaggedTensor class overloads the standard Python arithmetic and comparison operators, making it easy to perform basic elementwise math Step17: Since the overloaded operators perform elementwise computations, the inputs to all binary operations must have the same shape, or be broadcastable to the same shape. In the simplest broadcasting case, a single scalar is combined elementwise with each value in a ragged tensor Step18: For a discussion of more advanced cases, see the section on Broadcasting. Ragged tensors overload the same set of operators as normal Tensors Step19: Indexing a 3-D ragged tensor with 2 ragged dimensions Step20: RaggedTensors supports multidimensional indexing and slicing, with one restriction Step21: Evaluating ragged tensors Eager execution In eager execution mode, ragged tensors are evaluated immediately. To access the values they contain, you can Step22: Use Python indexing. If the tensor piece you select contains no ragged dimensions, then it will be returned as an EagerTensor. You can then use the numpy() method to access the value directly. Step23: Decompose the ragged tensor into its components, using the tf.RaggedTensor.values and tf.RaggedTensor.row_splits properties, or row-partitioning methods such as tf.RaggedTensor.row_lengths() and tf.RaggedTensor.value_rowids(). Step24: Graph execution In graph execution mode, ragged tensors can be evaluated using session.run(), just like standard tensors. Step25: The resulting value will be a tf.ragged.RaggedTensorValue instance. To access the values contained in a RaggedTensorValue, you can Step26: Decompose the ragged tensor into its components, using the tf.ragged.RaggedTensorValue.values and tf.ragged.RaggedTensorValue.row_splits properties. Step27: Broadcasting Broadcasting is the process of making tensors with different shapes have compatible shapes for elementwise operations. For more background on broadcasting, see Step28: Here are some examples of shapes that do not broadcast Step29: RaggedTensor encoding Ragged tensors are encoded using the RaggedTensor class. Internally, each RaggedTensor consists of Step30: Multiple ragged dimensions A ragged tensor with multiple ragged dimensions is encoded by using a nested RaggedTensor for the values tensor. Each nested RaggedTensor adds a single ragged dimension. Step31: The factory function tf.RaggedTensor.from_nested_row_splits may be used to construct a RaggedTensor with multiple ragged dimensions directly, by providing a list of row_splits tensors Step32: Uniform Inner Dimensions Ragged tensors with uniform inner dimensions are encoded by using a multidimensional tf.Tensor for values. 
Step33: Alternative row-partitioning schemes The RaggedTensor class uses row_splits as the primary mechanism to store information about how the values are partitioned into rows. However, RaggedTensor also provides support for four alternative row-partitioning schemes, which can be more convenient to use depending on how your data is formatted. Internally, RaggedTensor uses these additional schemes to improve efficiency in some contexts. <dl> <dt>Row lengths</dt> <dd>`row_lengths` is a vector with shape `[nrows]`, which specifies the length of each row.</dd> <dt>Row starts</dt> <dd>`row_starts` is a vector with shape `[nrows]`, which specifies the start offset of each row. Equivalent to `row_splits[ Step34: The RaggedTensor class defines methods which can be used to construct each of these row-partitioning tensors.
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2018 The TensorFlow Authors. End of explanation import math import tensorflow.compat.v1 as tf Explanation: Ragged Tensors <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/guide/ragged_tensors.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/ragged_tensors.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> Note: This is an archived TF1 notebook. These are configured to run in TF2's compatibility mode but will run in TF1 as well. To use TF1 in Colab, use the %tensorflow_version 1.x magic. Setup End of explanation digits = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []]) words = tf.ragged.constant([["So", "long"], ["thanks", "for", "all", "the", "fish"]]) print(tf.add(digits, 3)) print(tf.reduce_mean(digits, axis=1)) print(tf.concat([digits, [[5, 3]]], axis=0)) print(tf.tile(digits, [1, 2])) print(tf.strings.substr(words, 0, 2)) Explanation: Overview Your data comes in many shapes; your tensors should too. Ragged tensors are the TensorFlow equivalent of nested variable-length lists. They make it easy to store and process data with non-uniform shapes, including: Variable-length features, such as the set of actors in a movie. Batches of variable-length sequential inputs, such as sentences or video clips. Hierarchical inputs, such as text documents that are subdivided into sections, paragraphs, sentences, and words. Individual fields in structured inputs, such as protocol buffers. What you can do with a ragged tensor Ragged tensors are supported by more than a hundred TensorFlow operations, including math operations (such as tf.add and tf.reduce_mean), array operations (such as tf.concat and tf.tile), string manipulation ops (such as tf.substr), and many others: End of explanation print(digits[0]) # First row print(digits[:, :2]) # First two values in each row. print(digits[:, -2:]) # Last two values in each row. Explanation: There are also a number of methods and operations that are specific to ragged tensors, including factory methods, conversion methods, and value-mapping operations. For a list of supported ops, see the tf.ragged package documentation. As with normal tensors, you can use Python-style indexing to access specific slices of a ragged tensor. For more information, see the section on Indexing below. End of explanation print(digits + 3) print(digits + tf.ragged.constant([[1, 2, 3, 4], [], [5, 6, 7], [8], []])) Explanation: And just like normal tensors, you can use Python arithmetic and comparison operators to perform elementwise operations. For more information, see the section on Overloaded Operators below. 
End of explanation times_two_plus_one = lambda x: x * 2 + 1 print(tf.ragged.map_flat_values(times_two_plus_one, digits)) Explanation: If you need to perform an elementwise transformation to the values of a RaggedTensor, you can use tf.ragged.map_flat_values, which takes a function plus one or more arguments, and applies the function to transform the RaggedTensor's values. End of explanation sentences = tf.ragged.constant([ ["Let's", "build", "some", "ragged", "tensors", "!"], ["We", "can", "use", "tf.ragged.constant", "."]]) print(sentences) paragraphs = tf.ragged.constant([ [['I', 'have', 'a', 'cat'], ['His', 'name', 'is', 'Mat']], [['Do', 'you', 'want', 'to', 'come', 'visit'], ["I'm", 'free', 'tomorrow']], ]) print(paragraphs) Explanation: Constructing a ragged tensor The simplest way to construct a ragged tensor is using tf.ragged.constant, which builds the RaggedTensor corresponding to a given nested Python list: End of explanation print(tf.RaggedTensor.from_value_rowids( values=[3, 1, 4, 1, 5, 9, 2, 6], value_rowids=[0, 0, 0, 0, 2, 2, 2, 3])) Explanation: Ragged tensors can also be constructed by pairing flat values tensors with row-partitioning tensors indicating how those values should be divided into rows, using factory classmethods such as tf.RaggedTensor.from_value_rowids, tf.RaggedTensor.from_row_lengths, and tf.RaggedTensor.from_row_splits. tf.RaggedTensor.from_value_rowids If you know which row each value belongs in, then you can build a RaggedTensor using a value_rowids row-partitioning tensor: End of explanation print(tf.RaggedTensor.from_row_lengths( values=[3, 1, 4, 1, 5, 9, 2, 6], row_lengths=[4, 0, 3, 1])) Explanation: tf.RaggedTensor.from_row_lengths If you know how long each row is, then you can use a row_lengths row-partitioning tensor: End of explanation print(tf.RaggedTensor.from_row_splits( values=[3, 1, 4, 1, 5, 9, 2, 6], row_splits=[0, 4, 4, 7, 8])) Explanation: tf.RaggedTensor.from_row_splits If you know the index where each row starts and ends, then you can use a row_splits row-partitioning tensor: End of explanation print(tf.ragged.constant([["Hi"], ["How", "are", "you"]])) # ok: type=string, rank=2 print(tf.ragged.constant([[[1, 2], [3]], [[4, 5]]])) # ok: type=int32, rank=3 try: tf.ragged.constant([["one", "two"], [3, 4]]) # bad: multiple types except ValueError as exception: print(exception) try: tf.ragged.constant(["A", ["B", "C"]]) # bad: multiple nesting depths except ValueError as exception: print(exception) Explanation: See the tf.RaggedTensor class documentation for a full list of factory methods. What you can store in a ragged tensor As with normal Tensors, the values in a RaggedTensor must all have the same type; and the values must all be at the same nesting depth (the rank of the tensor): End of explanation queries = tf.ragged.constant([['Who', 'is', 'Dan', 'Smith'], ['Pause'], ['Will', 'it', 'rain', 'later', 'today']]) # Create an embedding table. num_buckets = 1024 embedding_size = 4 embedding_table = tf.Variable( tf.truncated_normal([num_buckets, embedding_size], stddev=1.0 / math.sqrt(embedding_size))) # Look up the embedding for each word. word_buckets = tf.strings.to_hash_bucket_fast(queries, num_buckets) word_embeddings = tf.ragged.map_flat_values( tf.nn.embedding_lookup, embedding_table, word_buckets) # ① # Add markers to the beginning and end of each sentence. marker = tf.fill([queries.nrows(), 1], '#') padded = tf.concat([marker, queries, marker], axis=1) # ② # Build word bigrams & look up embeddings. 
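# The join below pairs every padded token with its right-hand neighbour, so a padded row such as
# ['#', 'Who', 'is', 'Dan', 'Smith', '#'] becomes ['#+Who', 'Who+is', 'is+Dan', 'Dan+Smith', 'Smith+#'];
# these bigram strings are then hashed and embedded the same way as the unigrams above.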
bigrams = tf.string_join([padded[:, :-1], padded[:, 1:]], separator='+') # ③ bigram_buckets = tf.strings.to_hash_bucket_fast(bigrams, num_buckets) bigram_embeddings = tf.ragged.map_flat_values( tf.nn.embedding_lookup, embedding_table, bigram_buckets) # ④ # Find the average embedding for each sentence all_embeddings = tf.concat([word_embeddings, bigram_embeddings], axis=1) # ⑤ avg_embedding = tf.reduce_mean(all_embeddings, axis=1) # ⑥ print(avg_embedding) Explanation: Example use case The following example demonstrates how RaggedTensors can be used to construct and combine unigram and bigram embeddings for a batch of variable-length queries, using special markers for the beginning and end of each sentence. For more details on the ops used in this example, see the tf.ragged package documentation. End of explanation tf.ragged.constant([["Hi"], ["How", "are", "you"]]).shape Explanation: Ragged tensors: definitions Ragged and uniform dimensions A ragged tensor is a tensor with one or more ragged dimensions, which are dimensions whose slices may have different lengths. For example, the inner (column) dimension of rt=[[3, 1, 4, 1], [], [5, 9, 2], [6], []] is ragged, since the column slices (rt[0, :], ..., rt[4, :]) have different lengths. Dimensions whose slices all have the same length are called uniform dimensions. The outermost dimension of a ragged tensor is always uniform, since it consists of a single slice (and so there is no possibility for differing slice lengths). In addition to the uniform outermost dimension, ragged tensors may also have uniform inner dimensions. For example, we might store the word embeddings for each word in a batch of sentences using a ragged tensor with shape [num_sentences, (num_words), embedding_size], where the parentheses around (num_words) indicate that the dimension is ragged. Ragged tensors may have multiple ragged dimensions. For example, we could store a batch of structured text documents using a tensor with shape [num_documents, (num_paragraphs), (num_sentences), (num_words)] (where again parentheses are used to indicate ragged dimensions). Ragged tensor shape restrictions The shape of a ragged tensor is currently restricted to have the following form: A single uniform dimension Followed by one or more ragged dimensions Followed by zero or more uniform dimensions. Note: These restrictions are a consequence of the current implementation, and we may relax them in the future. Rank and ragged rank The total number of dimensions in a ragged tensor is called its rank, and the number of ragged dimensions in a ragged tensor is called its ragged rank. In graph execution mode (i.e., non-eager mode), a tensor's ragged rank is fixed at creation time: it can't depend on runtime values, and can't vary dynamically for different session runs. A potentially ragged tensor is a value that might be either a tf.Tensor or a tf.RaggedTensor. The ragged rank of a tf.Tensor is defined to be zero. RaggedTensor shapes When describing the shape of a RaggedTensor, ragged dimensions are indicated by enclosing them in parentheses. For example, as we saw above, the shape of a 3-D RaggedTensor that stores word embeddings for each word in a batch of sentences can be written as [num_sentences, (num_words), embedding_size]. 
The RaggedTensor.shape attribute returns a tf.TensorShape for a ragged tensor, where ragged dimensions have size None: End of explanation print(tf.ragged.constant([["Hi"], ["How", "are", "you"]]).bounding_shape()) Explanation: The method tf.RaggedTensor.bounding_shape can be used to find a tight bounding shape for a given RaggedTensor: End of explanation ragged_x = tf.ragged.constant([["John"], ["a", "big", "dog"], ["my", "cat"]]) ragged_y = tf.ragged.constant([["fell", "asleep"], ["barked"], ["is", "fuzzy"]]) print(tf.concat([ragged_x, ragged_y], axis=1)) Explanation: Ragged vs sparse tensors A ragged tensor should not be thought of as a type of sparse tensor, but rather as a dense tensor with an irregular shape. As an illustrative example, consider how array operations such as concat, stack, and tile are defined for ragged vs. sparse tensors. Concatenating ragged tensors joins each row to form a single row with the combined length: End of explanation sparse_x = ragged_x.to_sparse() sparse_y = ragged_y.to_sparse() sparse_result = tf.sparse.concat(sp_inputs=[sparse_x, sparse_y], axis=1) print(tf.sparse.to_dense(sparse_result, '')) Explanation: But concatenating sparse tensors is equivalent to concatenating the corresponding dense tensors, as illustrated by the following example (where Ø indicates missing values): End of explanation x = tf.ragged.constant([[1, 2], [3], [4, 5, 6]]) y = tf.ragged.constant([[1, 1], [2], [3, 3, 3]]) print(x + y) Explanation: For another example of why this distinction is important, consider the definition of “the mean value of each row” for an op such as tf.reduce_mean. For a ragged tensor, the mean value for a row is the sum of the row’s values divided by the row’s width. But for a sparse tensor, the mean value for a row is the sum of the row’s values divided by the sparse tensor’s overall width (which is greater than or equal to the width of the longest row). Overloaded operators The RaggedTensor class overloads the standard Python arithmetic and comparison operators, making it easy to perform basic elementwise math: End of explanation x = tf.ragged.constant([[1, 2], [3], [4, 5, 6]]) print(x + 3) Explanation: Since the overloaded operators perform elementwise computations, the inputs to all binary operations must have the same shape, or be broadcastable to the same shape. In the simplest broadcasting case, a single scalar is combined elementwise with each value in a ragged tensor: End of explanation queries = tf.ragged.constant( [['Who', 'is', 'George', 'Washington'], ['What', 'is', 'the', 'weather', 'tomorrow'], ['Goodnight']]) print(queries[1]) print(queries[1, 2]) # A single word print(queries[1:]) # Everything but the first row print(queries[:, :3]) # The first 3 words of each query print(queries[:, -2:]) # The last 2 words of each query Explanation: For a discussion of more advanced cases, see the section on Broadcasting. Ragged tensors overload the same set of operators as normal Tensors: the unary operators -, ~, and abs(); and the binary operators +, -, *, /, //, %, **, &amp;, |, ^, &lt;, &lt;=, &gt;, and &gt;=. Note that, as with standard Tensors, binary == is not overloaded; you can use tf.equal() to check elementwise equality. Indexing Ragged tensors support Python-style indexing, including multidimensional indexing and slicing. The following examples demonstrate ragged tensor indexing with a 2-D and a 3-D ragged tensor. 
Indexing a 2-D ragged tensor with 1 ragged dimension End of explanation rt = tf.ragged.constant([[[1, 2, 3], [4]], [[5], [], [6]], [[7]], [[8, 9], [10]]]) print(rt[1]) # Second row (2-D RaggedTensor) print(rt[3, 0]) # First element of fourth row (1-D Tensor) print(rt[:, 1:3]) # Items 1-3 of each row (3-D RaggedTensor) print(rt[:, -1:]) # Last item of each row (3-D RaggedTensor) Explanation: Indexing a 3-D ragged tensor with 2 ragged dimensions End of explanation ragged_sentences = tf.ragged.constant([ ['Hi'], ['Welcome', 'to', 'the', 'fair'], ['Have', 'fun']]) print(ragged_sentences.to_tensor(default_value='')) print(ragged_sentences.to_sparse()) x = [[1, 3, -1, -1], [2, -1, -1, -1], [4, 5, 8, 9]] print(tf.RaggedTensor.from_tensor(x, padding=-1)) st = tf.SparseTensor(indices=[[0, 0], [2, 0], [2, 1]], values=['a', 'b', 'c'], dense_shape=[3, 3]) print(tf.RaggedTensor.from_sparse(st)) Explanation: RaggedTensors supports multidimensional indexing and slicing, with one restriction: indexing into a ragged dimension is not allowed. This case is problematic because the indicated value may exist in some rows but not others. In such cases, it's not obvious whether we should (1) raise an IndexError; (2) use a default value; or (3) skip that value and return a tensor with fewer rows than we started with. Following the guiding principles of Python ("In the face of ambiguity, refuse the temptation to guess" ), we currently disallow this operation. Tensor Type Conversion The RaggedTensor class defines methods that can be used to convert between RaggedTensors and tf.Tensors or tf.SparseTensors: End of explanation rt = tf.ragged.constant([[1, 2], [3, 4, 5], [6], [], [7]]) print(rt.to_list()) Explanation: Evaluating ragged tensors Eager execution In eager execution mode, ragged tensors are evaluated immediately. To access the values they contain, you can: Use the tf.RaggedTensor.to_list() method, which converts the ragged tensor to a Python list. End of explanation print(rt[1].numpy()) Explanation: Use Python indexing. If the tensor piece you select contains no ragged dimensions, then it will be returned as an EagerTensor. You can then use the numpy() method to access the value directly. End of explanation print(rt.values) print(rt.row_splits) Explanation: Decompose the ragged tensor into its components, using the tf.RaggedTensor.values and tf.RaggedTensor.row_splits properties, or row-partitioning methods such as tf.RaggedTensor.row_lengths() and tf.RaggedTensor.value_rowids(). End of explanation with tf.Session() as session: rt = tf.ragged.constant([[1, 2], [3, 4, 5], [6], [], [7]]) rt_value = session.run(rt) Explanation: Graph execution In graph execution mode, ragged tensors can be evaluated using session.run(), just like standard tensors. End of explanation print(rt_value.to_list()) Explanation: The resulting value will be a tf.ragged.RaggedTensorValue instance. To access the values contained in a RaggedTensorValue, you can: Use the tf.ragged.RaggedTensorValue.to_list() method, which converts the RaggedTensorValue to a Python list. End of explanation print(rt_value.values) print(rt_value.row_splits) tf.enable_eager_execution() # Resume eager execution mode. Explanation: Decompose the ragged tensor into its components, using the tf.ragged.RaggedTensorValue.values and tf.ragged.RaggedTensorValue.row_splits properties. 
End of explanation # x (2D ragged): 2 x (num_rows) # y (scalar) # result (2D ragged): 2 x (num_rows) x = tf.ragged.constant([[1, 2], [3]]) y = 3 print(x + y) # x (2d ragged): 3 x (num_rows) # y (2d tensor): 3 x 1 # Result (2d ragged): 3 x (num_rows) x = tf.ragged.constant( [[10, 87, 12], [19, 53], [12, 32]]) y = [[1000], [2000], [3000]] print(x + y) # x (3d ragged): 2 x (r1) x 2 # y (2d ragged): 1 x 1 # Result (3d ragged): 2 x (r1) x 2 x = tf.ragged.constant( [[[1, 2], [3, 4], [5, 6]], [[7, 8]]], ragged_rank=1) y = tf.constant([[10]]) print(x + y) # x (3d ragged): 2 x (r1) x (r2) x 1 # y (1d tensor): 3 # Result (3d ragged): 2 x (r1) x (r2) x 3 x = tf.ragged.constant( [ [ [[1], [2]], [], [[3]], [[4]], ], [ [[5], [6]], [[7]] ] ], ragged_rank=2) y = tf.constant([10, 20, 30]) print(x + y) Explanation: Broadcasting Broadcasting is the process of making tensors with different shapes have compatible shapes for elementwise operations. For more background on broadcasting, see: Numpy: Broadcasting tf.broadcast_dynamic_shape tf.broadcast_to The basic steps for broadcasting two inputs x and y to have compatible shapes are: If x and y do not have the same number of dimensions, then add outer dimensions (with size 1) until they do. For each dimension where x and y have different sizes: If x or y have size 1 in dimension d, then repeat its values across dimension d to match the other input's size. Otherwise, raise an exception (x and y are not broadcast compatible). Where the size of a tensor in a uniform dimension is a single number (the size of slices across that dimension); and the size of a tensor in a ragged dimension is a list of slice lengths (for all slices across that dimension). Broadcasting examples End of explanation # x (2d ragged): 3 x (r1) # y (2d tensor): 3 x 4 # trailing dimensions do not match x = tf.ragged.constant([[1, 2], [3, 4, 5, 6], [7]]) y = tf.constant([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) try: x + y except tf.errors.InvalidArgumentError as exception: print(exception) # x (2d ragged): 3 x (r1) # y (2d ragged): 3 x (r2) # ragged dimensions do not match. x = tf.ragged.constant([[1, 2, 3], [4], [5, 6]]) y = tf.ragged.constant([[10, 20], [30, 40], [50]]) try: x + y except tf.errors.InvalidArgumentError as exception: print(exception) # x (3d ragged): 3 x (r1) x 2 # y (3d ragged): 3 x (r1) x 3 # trailing dimensions do not match x = tf.ragged.constant([[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10]]]) y = tf.ragged.constant([[[1, 2, 0], [3, 4, 0], [5, 6, 0]], [[7, 8, 0], [9, 10, 0]]]) try: x + y except tf.errors.InvalidArgumentError as exception: print(exception) Explanation: Here are some examples of shapes that do not broadcast: End of explanation rt = tf.RaggedTensor.from_row_splits( values=[3, 1, 4, 1, 5, 9, 2], row_splits=[0, 4, 4, 6, 7]) print(rt) Explanation: RaggedTensor encoding Ragged tensors are encoded using the RaggedTensor class. Internally, each RaggedTensor consists of: A values tensor, which concatenates the variable-length rows into a flattened list. A row_splits vector, which indicates how those flattened values are divided into rows. In particular, the values for row rt[i] are stored in the slice rt.values[rt.row_splits[i]:rt.row_splits[i+1]]. 
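Using the same values and row_splits as the encoding example, the slicing rule can be checked by hand in plain Python (a sketch, not notebook code):
values = [3, 1, 4, 1, 5, 9, 2]
row_splits = [0, 4, 4, 6, 7]
rows = [values[row_splits[i]:row_splits[i + 1]] for i in range(len(row_splits) - 1)]
print(rows)   # [[3, 1, 4, 1], [], [5, 9], [2]]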
End of explanation rt = tf.RaggedTensor.from_row_splits( values=tf.RaggedTensor.from_row_splits( values=[10, 11, 12, 13, 14, 15, 16, 17, 18, 19], row_splits=[0, 3, 3, 5, 9, 10]), row_splits=[0, 1, 1, 5]) print(rt) print("Shape: {}".format(rt.shape)) print("Number of ragged dimensions: {}".format(rt.ragged_rank)) Explanation: Multiple ragged dimensions A ragged tensor with multiple ragged dimensions is encoded by using a nested RaggedTensor for the values tensor. Each nested RaggedTensor adds a single ragged dimension. End of explanation rt = tf.RaggedTensor.from_nested_row_splits( flat_values=[10, 11, 12, 13, 14, 15, 16, 17, 18, 19], nested_row_splits=([0, 1, 1, 5], [0, 3, 3, 5, 9, 10])) print(rt) Explanation: The factory function tf.RaggedTensor.from_nested_row_splits may be used to construct a RaggedTensor with multiple ragged dimensions directly, by providing a list of row_splits tensors: End of explanation rt = tf.RaggedTensor.from_row_splits( values=[[1, 3], [0, 0], [1, 3], [5, 3], [3, 3], [1, 2]], row_splits=[0, 3, 4, 6]) print(rt) print("Shape: {}".format(rt.shape)) print("Number of ragged dimensions: {}".format(rt.ragged_rank)) Explanation: Uniform Inner Dimensions Ragged tensors with uniform inner dimensions are encoded by using a multidimensional tf.Tensor for values. End of explanation values = [3, 1, 4, 1, 5, 9, 2, 6] print(tf.RaggedTensor.from_row_splits(values, row_splits=[0, 4, 4, 7, 8, 8])) print(tf.RaggedTensor.from_row_lengths(values, row_lengths=[4, 0, 3, 1, 0])) print(tf.RaggedTensor.from_row_starts(values, row_starts=[0, 4, 4, 7, 8])) print(tf.RaggedTensor.from_row_limits(values, row_limits=[4, 4, 7, 8, 8])) print(tf.RaggedTensor.from_value_rowids( values, value_rowids=[0, 0, 0, 0, 2, 2, 2, 3], nrows=5)) Explanation: Alternative row-partitioning schemes The RaggedTensor class uses row_splits as the primary mechanism to store information about how the values are partitioned into rows. However, RaggedTensor also provides support for four alternative row-partitioning schemes, which can be more convenient to use depending on how your data is formatted. Internally, RaggedTensor uses these additional schemes to improve efficiency in some contexts. <dl> <dt>Row lengths</dt> <dd>`row_lengths` is a vector with shape `[nrows]`, which specifies the length of each row.</dd> <dt>Row starts</dt> <dd>`row_starts` is a vector with shape `[nrows]`, which specifies the start offset of each row. Equivalent to `row_splits[:-1]`.</dd> <dt>Row limits</dt> <dd>`row_limits` is a vector with shape `[nrows]`, which specifies the stop offset of each row. Equivalent to `row_splits[1:]`.</dd> <dt>Row indices and number of rows</dt> <dd>`value_rowids` is a vector with shape `[nvals]`, corresponding one-to-one with values, which specifies each value's row index. In particular, the row `rt[row]` consists of the values `rt.values[j]` where `value_rowids[j]==row`. \ `nrows` is an integer that specifies the number of rows in the `RaggedTensor`. 
In particular, `nrows` is used to indicate trailing empty rows.</dd> </dl> For example, the following ragged tensors are equivalent: End of explanation rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []]) print(" values: {}".format(rt.values)) print(" row_splits: {}".format(rt.row_splits)) print(" row_lengths: {}".format(rt.row_lengths())) print(" row_starts: {}".format(rt.row_starts())) print(" row_limits: {}".format(rt.row_limits())) print("value_rowids: {}".format(rt.value_rowids())) Explanation: The RaggedTensor class defines methods which can be used to construct each of these row-partitioning tensors. End of explanation
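A handy consequence of these definitions (NumPy sketch, reusing the row lengths from this section): the schemes are interconvertible with a cumulative sum.
import numpy as np
row_lengths = np.array([4, 0, 3, 1, 0])
row_splits = np.concatenate([[0], np.cumsum(row_lengths)])
print(row_splits)   # [0 4 4 7 8 8]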
12,955
Given the following text description, write Python code to implement the functionality described below step by step Description: Introducing curiosity-driven learning Random exploration is not enough In the last tutorials, we compared the motor and goal babbling strategies on the simple arm environment. We saw that the problem with motor babbling is that it fails to effciently cover the sensory space in a reasonable amount of time. Due to the inherent redundancy of most robotic systems (e.g. many joint configurations lead to the same hand position on a multi-dimensional arm), random sampling in the motor space $M$ is therefore not appropriate if one want to learn an inverse model because the robot/agent will waiste a lot of time by trying all the motor configurations, whereas only a subset is generally useful to cover the reachable sensory space uniformly. In contrast, in the goal babbling strategy the agent randomly samples goals in the sensory space, then try to reach them using the current knowledge in its sensorimotor model. Although the agent will be poor at reaching these goals at the beginning, due to the poor quality of its sensorimotor model, it will on average cover the sensory space much more efficiently than in motor babbling, hence learning an inverse model more efficiently. Well, random goal babbling ourperforms random motor babbling in highly redundant sensorimotor systems. But there is still a couple of problems. To figure out what these problems can be, let's analyze random goal babbling in more details. Goal babbling with high sensory ranges As already done in the previous tutorial, we will study an experiment with a a high-dimensional arm environment, a random goal babbling interest model, and a nearest neighbor sensorimotor model. But this time we will use another 'simple_arm' configuration, called 'high_dim_high_s_range', which is the same as the 'high_dimensional' configuration used previously, except that the sensory space ranges are doubled Step1: We see that, whereas the 'high_dimensional' configuration specify sensory bounds are at the edges of the sensory space, the 'high_dim_high_s_range' has larger ones (between -2 and 2 on both hand coordinates). We will use this 'high_dim_high_s_range' configuration to reflect the fact that in the framework of Developmental Robotics, there is no reason to provide the reachable sensory space ranges directly to the agent (to make a child analogy, an infant probably does not now at birth that he is unable to reach hand positions at more than a few dozen of centimeters from its shoulder.) Instead the agent should discover these bounds by its own interaction with the environment. Let's see how random goal babbling performs with high sensory ranges. As usual, create the experiment Step2: Set the evaluation time steps Step3: and run the experiment Step4: Now we plot the randomly chosen goals and reached hand positions (in red and blue respectively) Step5: We observe that Step6: Until now we always used 'random', sampling uniformly in the choice space ranges (either the motor or the sensory space according to the babbling mode). We are using goal babbling in this tutoral, but this interest model also work with motor babbling. Discretized progress Another interest model available in Explauto is called discretized progress. This model divides the choice space into a grid and maintain an empirical measure of the learning progress in each cell. To do so, each cell keeps an history of the recent learning errors it observed. 
Given a choice $x\in X$ (i.e. a sample point in the choice space $X$, which corresponds to the sensory space $S$ in case of goal babbling) and an inference $y\in Y$ (the forward or inverse prediction performed by the sensorimotor model for the choice $x$, in case of goal babbling $Y=M$ and the sensorimotor model performs an inverse prediction), the associated learning error corresponds to the distance between $xy$ (the concatenation of $x$ and $y$, reordered as a vector in $M\times S$) and $ms$ (where $m$ is the executed motor command and $s$ the observed sensory consequence). (TODO Step7: Run it Step8: And compare the learning curves Step9: It seems that the Tree interest model gives better learning results, but more simulations and more test points would be necesary to see if the results are statistically significant. We can display an animation progressively showing the reached hand positions at regular time steps of the simulation, by running the below. Note Step10: Color code is the following Step11: After the experiment, we can re-construct the Tree interest model to plot it as it was at the end of the experiment
Python Code: from __future__ import print_function from explauto.environment import environments env_cls, env_configs, _ = environments['simple_arm'] print("'high_dimensional' configuration sensory bounds:") print('s_mins = {} ; s_maxs = {}'.format(env_configs['high_dimensional']['s_mins'], env_configs['high_dimensional']['s_maxs'])) print("'high_dim_high_s_range' configuration sensory bounds:") print('s_mins = {} ; s_maxs = {}'.format(env_configs['high_dim_high_s_range']['s_mins'], env_configs['high_dim_high_s_range']['s_maxs'])) Explanation: Introducing curiosity-driven learning Random exploration is not enough In the last tutorials, we compared the motor and goal babbling strategies on the simple arm environment. We saw that the problem with motor babbling is that it fails to effciently cover the sensory space in a reasonable amount of time. Due to the inherent redundancy of most robotic systems (e.g. many joint configurations lead to the same hand position on a multi-dimensional arm), random sampling in the motor space $M$ is therefore not appropriate if one want to learn an inverse model because the robot/agent will waiste a lot of time by trying all the motor configurations, whereas only a subset is generally useful to cover the reachable sensory space uniformly. In contrast, in the goal babbling strategy the agent randomly samples goals in the sensory space, then try to reach them using the current knowledge in its sensorimotor model. Although the agent will be poor at reaching these goals at the beginning, due to the poor quality of its sensorimotor model, it will on average cover the sensory space much more efficiently than in motor babbling, hence learning an inverse model more efficiently. Well, random goal babbling ourperforms random motor babbling in highly redundant sensorimotor systems. But there is still a couple of problems. To figure out what these problems can be, let's analyze random goal babbling in more details. Goal babbling with high sensory ranges As already done in the previous tutorial, we will study an experiment with a a high-dimensional arm environment, a random goal babbling interest model, and a nearest neighbor sensorimotor model. But this time we will use another 'simple_arm' configuration, called 'high_dim_high_s_range', which is the same as the 'high_dimensional' configuration used previously, except that the sensory space ranges are doubled: End of explanation from explauto.experiment import Experiment, make_settings s = make_settings(environment='simple_arm', environment_config = 'high_dim_high_s_range', babbling_mode='goal', interest_model='random', sensorimotor_model='nearest_neighbor') expe = Experiment.from_settings(s) Explanation: We see that, whereas the 'high_dimensional' configuration specify sensory bounds are at the edges of the sensory space, the 'high_dim_high_s_range' has larger ones (between -2 and 2 on both hand coordinates). We will use this 'high_dim_high_s_range' configuration to reflect the fact that in the framework of Developmental Robotics, there is no reason to provide the reachable sensory space ranges directly to the agent (to make a child analogy, an infant probably does not now at birth that he is unable to reach hand positions at more than a few dozen of centimeters from its shoulder.) Instead the agent should discover these bounds by its own interaction with the environment. Let's see how random goal babbling performs with high sensory ranges. 
As usual, create the experiment: End of explanation expe.evaluate_at([100, 200, 400, 1000], s.default_testcases) Explanation: Set the evaluation time steps: End of explanation expe.run() Explanation: and run the experiment: End of explanation %pylab inline rcParams['figure.figsize'] = (12.0, 10.0) ax = axes() title(('Random goal babbling')) expe.log.scatter_plot(ax, (('sensori', [0, 1]),)) expe.log.scatter_plot(ax, (('choice', [0, 1]),), marker='.', color='red') legend(['reached hand positions', 'chosen goals']) Explanation: Now we plot the randomly chosen goals and reached hand positions (in red and blue respectively): End of explanation from explauto.interest_model import interest_models print('Available interest models: {}'.format(interest_models.keys())) Explanation: We observe that: * The goals, which are sampled uniformly in the sensory space, can be very far from the actually reachable hand positions. This is due to the high sensory space ranges we use here (between -2 and 2 on each coordinate). A huge part of the chosen goals are therefore outside of the reachable space. * In consequence, the agent mainly reaches hand positions at the boundaries of the reachable space, because it rarely set goals inside the reachable aera. To solve this problem, we introduce curiosity-driven exploration based on the maximization of the learning progress as another possible interest model. Addition of Curiosity: The discretized progress and Tree interest models Look at the available interest models: End of explanation from explauto.experiment import ExperimentPool xps = ExperimentPool.from_settings_product(environments=[('simple_arm', 'high_dim_high_s_range')], babblings=['goal'], interest_models=[('random', 'default'), ('discretized_progress', 'default'), ('tree', 'default')], sensorimotor_models=[('nearest_neighbor', 'default')], evaluate_at=[200, 300, 500, 1000, 2000, 3000, 5000], same_testcases=True) Explanation: Until now we always used 'random', sampling uniformly in the choice space ranges (either the motor or the sensory space according to the babbling mode). We are using goal babbling in this tutoral, but this interest model also work with motor babbling. Discretized progress Another interest model available in Explauto is called discretized progress. This model divides the choice space into a grid and maintain an empirical measure of the learning progress in each cell. To do so, each cell keeps an history of the recent learning errors it observed. Given a choice $x\in X$ (i.e. a sample point in the choice space $X$, which corresponds to the sensory space $S$ in case of goal babbling) and an inference $y\in Y$ (the forward or inverse prediction performed by the sensorimotor model for the choice $x$, in case of goal babbling $Y=M$ and the sensorimotor model performs an inverse prediction), the associated learning error corresponds to the distance between $xy$ (the concatenation of $x$ and $y$, reordered as a vector in $M\times S$) and $ms$ (where $m$ is the executed motor command and $s$ the observed sensory consequence). (TODO: refer to the introductive section which will explain this in more details). The learning progress is then defined as the opposite the covariance between time (relative to a particular cell) and learning error (i.e. the agent is progressing in that cell if the covariance between time and error is negative). 
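A minimal sketch of that progress measure for a single cell (illustration only; the actual bookkeeping is done inside Explauto's interest models):
import numpy as np
errors = [0.9, 0.7, 0.6, 0.4, 0.3]     # recent errors observed in one cell, oldest first
t = np.arange(len(errors))
progress = -np.cov(t, errors)[0, 1]    # minus the time/error covariance: positive when errors shrink
print(progress)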
The discretized progress interest model then sample a cell according to the learning progress measure (favoring cells displaying high progresses) and finally sample a random point in that chosen cell. Tree If the number of sensori dimensions is large (>2), the discretization described before won't be feasible as the number of regions is exponential in the number of dimensions. An idea (e.g. described in SAGG-RIAC), is to split a region of the sensori domain in sub-regions only when needed: when the number of points in that region become greater than a given constant (100 point per region in these simulations). Based on the kd-tree algorithms, our implementation allows different rules to split a region. In the paper above was suggested to define a good split of the region $r$ in 2 sub-regions $r_1$ and $r_2$ as a split that maximizes $card(r_1)card(r_2)progress(r_1)*progress(r_2)$, where $card(r_i)$ is the number of points in the sub-region $i$, and $progress(r_i)$ is the absolute derivative of the competence on the points of the sub-region $r_i$. We define here the competence on a goal point $g$ where the observed sensori consequence is $s$, as $~e^{-power\times||g-s||}$, with $~power=10~$ in our simulations. As the empirical progress is measured in each region of the tree, the model can then choose a new goal to try to reach, in order to maximize the expected future progress. A possibility is to sample a point in the region with max progress (the greedy approach), but the model might then never discover interesting regions that had an under-estimated progress. A better way is to sample in a random region sometimes (epsilon-greedy approach), or to sample in a region with a probability weighted by the progress, and give the weights to a softmax function with a temperature parameter $T$ that allows to adjust exploration vs exploitation ($T=1$ in our simulations): $P_{r_i} = \frac{e^{\frac{progress(r_i)\times volume(r_i)}{T}}}{\sum_j e^{\frac{progress(r_j)\times volume(r_j)}{T}}}$ We also weight the probabilities with the volumes of the regions to ensure the continuity of the probability of a point being chosen in a region before and after its split. Comparisons So let's compare those new interest models with the random one we used before. First we create an ExperimentPool to compare random and progress based exploration in goal babbling, using the same environment as above: End of explanation logs = xps.run() Explanation: Run it: End of explanation %pylab inline ax = axes() for log in xps.logs: log.plot_learning_curve(ax) legend([s.interest_model for s in xps.settings]) Explanation: And compare the learning curves: End of explanation # animations does't work with inline plotting, hence: %pylab inline rcParams['figure.figsize'] = (21.0, 7.0) #ion() #mng = get_current_fig_manager() #mng.resize(*mng.window.maxsize()) last_t = 0 for t in linspace(100, xps.logs[0].eval_at[-1], 5): t = int(t) for i, (config, log) in enumerate(zip(xps.settings, xps.logs)): ax = subplot(1, 3, i+1) log.scatter_plot(ax, (('choice', [0, 1]),), t=range(t), marker='.', color='red') log.scatter_plot(ax, (('sensori', [0, 1]),), t=range(t), color='blue') log.scatter_plot(ax, (('testcases', [0, 1]),), color='green') title(config.interest_model + ' ' + config.babbling_mode) draw() last_t = t Explanation: It seems that the Tree interest model gives better learning results, but more simulations and more test points would be necesary to see if the results are statistically significant. 
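If per-testcase errors were extracted for the two conditions, such a check could look like the following (the arrays are hypothetical placeholders, not values produced by the code above):
import numpy as np
from scipy import stats
errors_random = np.array([0.12, 0.15, 0.11, 0.18, 0.14])   # placeholder final errors, random goals
errors_tree = np.array([0.09, 0.10, 0.08, 0.12, 0.11])     # placeholder final errors, tree model
t_stat, p_value = stats.ttest_ind(errors_random, errors_tree)
print(p_value)   # a small p-value would support a genuine difference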
We can display an animation progressively showing the reached hand positions at regular time steps of the simulation, by running the below. Note: if you're only visualizing this notebook on nbviewer.ipython.org, you won't be able to see the animation (you will only see the end of the simulation below). To see it, you need to install Explauto as well as a recent version of Ipython Notebook. End of explanation %pylab inline rcParams['figure.figsize'] = (20.0, 16.0) clf() last_t = 0 #mng = get_current_fig_manager() #mng.resize(*mng.window.maxsize()) for t in linspace(100, xps.logs[0].eval_at[-1], 20): t = int(t) for i, (config, log) in enumerate(zip(xps.settings, xps.logs)): ax = subplot(1, 3, i+1) log.scatter_plot(ax, (('sensori', [0, 1]),), range(0, t), marker='.', markersize=0.3, color = 'black') log.density_plot(ax, (('choice', [0, 1]),), range(last_t, t), width_x=1, width_y=1) title(config.interest_model + ' ' + config.babbling_mode) draw() last_t = t Explanation: Color code is the following: -RED: chosen goals, -BLUE: reached hand position, -GREEN: points where the competence is tested. We observe that, whereas the random strategy sample points in the entire sensory space, the discretized_progress and Tree interest model (right panel) behaves smarter. By choosing goals maximizing the learning progress, those strategy focuses more on the reachable aera (reached points are more uniformly distributed in the reachable area), and favors regions which were not explored before. This exploration is therefore an active strategy, where goals are chosen autonomously and adaptively in order to improve the quality of the sensorimotor model. Explauto also allows plotting heat maps by using the density_plot method. The fancy animation below shows reached hand positions by white dots as well as chosen goals on a slicing time window using a heat map (here again, if you are on ipython.notebook.com you should only see the final image of the animation). End of explanation %pylab inline rcParams['figure.figsize'] = (15.0, 12.0) ax = axes() from explauto.interest_model.tree import InterestTree, interest_models # We pick goals and reached points data_g = array(xps.logs[2].logs['choice']) # goals data_s = array(xps.logs[2].logs['sensori']) # reached n = len(data_g) xy = zeros((xps.logs[2].conf.ndims,)) ms = zeros((xps.logs[2].conf.ndims,)) # Exploratory dimensions expl_dims = xps.logs[2].conf.s_dims # We create an empty interest tree interest_tree = InterestTree(xps.logs[2].conf, xps.logs[2].conf.s_dims, **interest_models['tree'][1]['default']) # We add points one by one for i in range(n): xy[expl_dims] = data_g[i] ms[expl_dims] = data_s[i] interest_tree.update(xy, ms) # We plot the tree representation interest_tree.tree.plot(ax, True, True, True, interest_tree.progress(), 20) # Plot stuff plt.xlim((interest_tree.tree.bounds_x[0, 0], interest_tree.tree.bounds_x[1, 0])) plt.ylim((interest_tree.tree.bounds_x[0, 1], interest_tree.tree.bounds_x[1, 1])) import matplotlib.colorbar as cbar cax, _ = cbar.make_axes(ax) cb = cbar.ColorbarBase(cax, cmap=plt.cm.jet) cb.set_label('Normalized Competence Progress') Explanation: After the experiment, we can re-construct the Tree interest model to plot it as it was at the end of the experiment: End of explanation
12,956
Given the following text description, write Python code to implement the functionality described below step by step Description: A preliminary look at sensor data The general idea of the project is to get a handle on how the house heats and cools so that we can better program the thermostat. To gather data, I've assembled and programmed 5 probes using inexpensive hardware (Wemos D1 Mini ESP8266 Wifi boards and SHT30 temperature/humidity sensors). The intent is to move the probes around the house to help us tune the thermostat. Here, I look at several hours of data from an initial test run. The probes were colocated, but not identically oriented. The initial version of software connected probes to the house WiFi at startup, but not if WiFi dropped out. And there was a hiccup, and all of the probes stopped reporting. Fortunately, I was able to get a decent data sample. The probes report temperature and humidity readings every 30 seconds or so, along with the probe's WiFi MAC address. The web server on a spare laptop collects the data, adds a timestamp, and appends to a .csv file. (Eventually, data will go into a database, but flat files are fine for getting started.) Here's what we're starting with Step1: Some prelimaries. Import code, and configure chart sizes to be larger than the default. Step2: Load the .csv into a pandas DataFrame, adding column names. Step3: A quick plot to get a rough idea of how the sensors differ. Step4: Aside from a wider spread in sensor values than I'd like, and higher temperatures (the room wasn't that hot!), this is roughly what I expected for the temperature pattern. It was a hot, humid day. The bedroom starts off warm, cools when I turned on A/C at 9pm, then oscillates during the night as the A/C kicks in on a scheduled setting. I didn't know what to expect for humidity. To get per-sensor plots, the data needs to be reorganized so that each probe is in a different column. This'll need to be done for temperature and humidity independently. Temperature first, since that's what I'm interested in. Step5: This is roughly what's needed, except for the NaN (missing) values. Resampling the data into 2 minute buckets deals with those. Step6: The first thing that jumps out is that one of the sensor ~5 degrees lower than the others. The SHT30 sensors are inexpensive; it might be a manufacturing problem, or I might have damaged one while soldering on the headers. (Or maybe it's the sane one, and the other four are measuring hot.) There also seems to be a 20-30 minute warmup period. I suspect here that a probe, being basically a small computer with stuff attached, is generating its own heat, and the chart shows the slow warm-up. That might also explain why temperatures were higher than expected. Let's try adding 5F to the suspect sensor's temperature reading to bring it in line with the others. Step7: That looks promising. Next, reorganize the data so that we plot humidity. I'm not as interested in humidity, since it's not as easily controlled, but hey, it's data! Step8: That same sensor is the outlier. Eyeballing the graph, that sensor's humidity reading looks high by about 9 units.
Python Code: !head -5 temps.csv Explanation: A preliminary look at sensor data The general idea of the project is to get a handle on how the house heats and cools so that we can better program the thermostat. To gather data, I've assembled and programmed 5 probes using inexpensive hardware (Wemos D1 Mini ESP8266 Wifi boards and SHT30 temperature/humidity sensors). The intent is to move the probes around the house to help us tune the thermostat. Here, I look at several hours of data from an initial test run. The probes were colocated, but not identically oriented. The initial version of software connected probes to the house WiFi at startup, but not if WiFi dropped out. And there was a hiccup, and all of the probes stopped reporting. Fortunately, I was able to get a decent data sample. The probes report temperature and humidity readings every 30 seconds or so, along with the probe's WiFi MAC address. The web server on a spare laptop collects the data, adds a timestamp, and appends to a .csv file. (Eventually, data will go into a database, but flat files are fine for getting started.) Here's what we're starting with End of explanation %matplotlib inline import matplotlib matplotlib.rcParams['figure.figsize'] = (12, 5) import pandas as pd Explanation: Some prelimaries. Import code, and configure chart sizes to be larger than the default. End of explanation df = pd.read_csv('temps.csv', header=None, names=['time', 'mac', 'f', 'h'], parse_dates=[0]) df.head() Explanation: Load the .csv into a pandas DataFrame, adding column names. End of explanation df.plot(); Explanation: A quick plot to get a rough idea of how the sensors differ. End of explanation per_sensor_f = df.pivot(index='time', columns='mac', values='f') per_sensor_f.head() Explanation: Aside from a wider spread in sensor values than I'd like, and higher temperatures (the room wasn't that hot!), this is roughly what I expected for the temperature pattern. It was a hot, humid day. The bedroom starts off warm, cools when I turned on A/C at 9pm, then oscillates during the night as the A/C kicks in on a scheduled setting. I didn't know what to expect for humidity. To get per-sensor plots, the data needs to be reorganized so that each probe is in a different column. This'll need to be done for temperature and humidity independently. Temperature first, since that's what I'm interested in. End of explanation downsampled_f = per_sensor_f.resample('2T').mean() downsampled_f.head() downsampled_f.plot(); Explanation: This is roughly what's needed, except for the NaN (missing) values. Resampling the data into 2 minute buckets deals with those. End of explanation downsampled_f['5C:CF:7F:33:F7:F8'] += 5.0 downsampled_f.plot(); Explanation: The first thing that jumps out is that one of the sensor ~5 degrees lower than the others. The SHT30 sensors are inexpensive; it might be a manufacturing problem, or I might have damaged one while soldering on the headers. (Or maybe it's the sane one, and the other four are measuring hot.) There also seems to be a 20-30 minute warmup period. I suspect here that a probe, being basically a small computer with stuff attached, is generating its own heat, and the chart shows the slow warm-up. That might also explain why temperatures were higher than expected. Let's try adding 5F to the suspect sensor's temperature reading to bring it in line with the others. 
End of explanation per_sensor_h = df.pivot(index='time', columns='mac', values='h') downsampled_h = per_sensor_h.resample('2T').mean() downsampled_h.plot(); Explanation: That looks promising. Next, reorganize the data so that we plot humidity. I'm not as interested in humidity, since it's not as easily controlled, but hey, it's data! End of explanation downsampled_h['5C:CF:7F:33:F7:F8'] -= 9.0 downsampled_h.plot(); Explanation: That same sensor is the outlier. Eyeballing the graph, that sensor's humidity reading looks high by about 9 units. End of explanation
12,957
Given the following text description, write Python code to implement the functionality described below step by step Description: Comparing surrogate models Tim Head, July 2016. Reformatted by Holger Nahrstaedt 2020 .. currentmodule Step1: Toy model We will use the Step2: This shows the value of the two-dimensional branin function and the three minima. Objective The objective of this example is to find one of these minima in as few iterations as possible. One iteration is defined as one call to the Step3: Note that this can take a few minutes.
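Before the code below, a quick note on the functools.partial pattern it relies on: partial pre-binds keyword arguments such as base_estimator="RF" or the noise level, producing a callable with the same signature minus those arguments. A tiny self-contained illustration follows; the minimize_stub function is a made-up stand-in, not part of skopt.

from functools import partial

def minimize_stub(func, dimensions, n_calls=10, base_estimator="GP", random_state=None):
    # Made-up stand-in with the same calling pattern as the skopt minimizers.
    return {"base_estimator": base_estimator, "n_calls": n_calls, "seed": random_state}

rf_minimize = partial(minimize_stub, base_estimator="RF")   # pre-bind one keyword
print(rf_minimize(None, [(-5.0, 10.0)], n_calls=3, random_state=0))
# {'base_estimator': 'RF', 'n_calls': 3, 'seed': 0}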
Python Code: print(__doc__) import numpy as np np.random.seed(123) import matplotlib.pyplot as plt Explanation: Comparing surrogate models Tim Head, July 2016. Reformatted by Holger Nahrstaedt 2020 .. currentmodule:: skopt Bayesian optimization or sequential model-based optimization uses a surrogate model to model the expensive to evaluate function func. There are several choices for what kind of surrogate model to use. This notebook compares the performance of: gaussian processes, extra trees, and random forests as surrogate models. A purely random optimization strategy is also used as a baseline. End of explanation from skopt.benchmarks import branin as _branin def branin(x, noise_level=0.): return _branin(x) + noise_level * np.random.randn() from matplotlib.colors import LogNorm def plot_branin(): fig, ax = plt.subplots() x1_values = np.linspace(-5, 10, 100) x2_values = np.linspace(0, 15, 100) x_ax, y_ax = np.meshgrid(x1_values, x2_values) vals = np.c_[x_ax.ravel(), y_ax.ravel()] fx = np.reshape([branin(val) for val in vals], (100, 100)) cm = ax.pcolormesh(x_ax, y_ax, fx, norm=LogNorm(vmin=fx.min(), vmax=fx.max())) minima = np.array([[-np.pi, 12.275], [+np.pi, 2.275], [9.42478, 2.475]]) ax.plot(minima[:, 0], minima[:, 1], "r.", markersize=14, lw=0, label="Minima") cb = fig.colorbar(cm) cb.set_label("f(x)") ax.legend(loc="best", numpoints=1) ax.set_xlabel("X1") ax.set_xlim([-5, 10]) ax.set_ylabel("X2") ax.set_ylim([0, 15]) plot_branin() Explanation: Toy model We will use the :class:benchmarks.branin function as toy model for the expensive function. In a real world application this function would be unknown and expensive to evaluate. End of explanation from functools import partial from skopt import gp_minimize, forest_minimize, dummy_minimize func = partial(branin, noise_level=2.0) bounds = [(-5.0, 10.0), (0.0, 15.0)] n_calls = 60 def run(minimizer, n_iter=5): return [minimizer(func, bounds, n_calls=n_calls, random_state=n) for n in range(n_iter)] # Random search dummy_res = run(dummy_minimize) # Gaussian processes gp_res = run(gp_minimize) # Random forest rf_res = run(partial(forest_minimize, base_estimator="RF")) # Extra trees et_res = run(partial(forest_minimize, base_estimator="ET")) Explanation: This shows the value of the two-dimensional branin function and the three minima. Objective The objective of this example is to find one of these minima in as few iterations as possible. One iteration is defined as one call to the :class:benchmarks.branin function. We will evaluate each model several times using a different seed for the random number generator. Then compare the average performance of these models. This makes the comparison more robust against models that get "lucky". End of explanation from skopt.plots import plot_convergence plot = plot_convergence(("dummy_minimize", dummy_res), ("gp_minimize", gp_res), ("forest_minimize('rf')", rf_res), ("forest_minimize('et)", et_res), true_minimum=0.397887, yscale="log") plot.legend(loc="best", prop={'size': 6}, numpoints=1) Explanation: Note that this can take a few minutes. End of explanation
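For readers who want the comparison numbers without the plot: each convergence trace boils down to a cumulative minimum per run, averaged over the repeated runs, which is essentially what the convergence plot summarizes. A small NumPy sketch with hypothetical objective values:

import numpy as np

# Hypothetical objective values: 5 repeated runs x 60 evaluations each,
# standing in for the function values recorded by each optimizer run above.
rng = np.random.default_rng(42)
func_vals = rng.uniform(0.4, 10.0, size=(5, 60))

best_so_far = np.minimum.accumulate(func_vals, axis=1)   # best value found so far, per run
mean_curve = best_so_far.mean(axis=0)                    # averaged over the repeated runs
print(mean_curve[0].round(3), "->", mean_curve[-1].round(3))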
12,958
Given the following text description, write Python code to implement the functionality described below step by step Description: Multi-Layer Perceptron, MNIST In this notebook, we will train an MLP to classify images from the MNIST database hand-written digit database. The process will be broken down into the following steps Step1: Load and Visualize the Data Downloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the batch_size if you want to load more data at a time. This cell will create DataLoaders for each of our datasets. Step2: Visualize a Batch of Training Data The first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data. Step3: View an Image in More Detail Step4: Define the Network Architecture The architecture will be responsible for seeing as input a 784-dim Tensor of pixel values for each image, and producing a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example uses two hidden layers and dropout to avoid overfitting. Step5: Specify Loss Function and Optimizer It's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross entropy function applies a softmax funtion to the output layer and then calculates the log loss. Step6: Train the Network The steps for training/learning from a batch of data are described in the comments below Step7: Test the Trained Network Finally, we test our best model on previously unseen test data and evaluate it's performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class as well as looking at its overall loss and accuracy. model.eval() model.eval() will set all the layers in your model to evaluation mode. This affects layers like dropout layers that turn "off" nodes during training with some probability, but should allow every node to be "on" for evaluation! Step8: Visualize Sample Test Results This cell displays test images and their labels in this format
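One small addition to the evaluation step described above: the test code further down tallies per-class accuracy with Python lists, and an equivalent, more compact tensor-based tally is sketched below. The prediction and label values are made up for illustration only.

import torch

# Made-up predictions and labels standing in for the tensors produced in the test loop.
preds = torch.tensor([0, 1, 1, 2, 2, 2, 0, 1])
labels = torch.tensor([0, 1, 0, 2, 2, 1, 0, 1])
num_classes = 3

correct_per_class = torch.bincount(labels[preds == labels], minlength=num_classes)
total_per_class = torch.bincount(labels, minlength=num_classes)
print(correct_per_class.float() / total_per_class.clamp(min=1).float())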
Python Code: # import libraries import torch import numpy as np Explanation: Multi-Layer Perceptron, MNIST In this notebook, we will train an MLP to classify images from the MNIST database hand-written digit database. The process will be broken down into the following steps: Load and visualize the data Define a neural network Train the model Evaluate the performance of our trained model on a test dataset! Before we begin, we have to import the necessary libraries for working with data and PyTorch. End of explanation from torchvision import datasets import torchvision.transforms as transforms # number of subprocesses to use for data loading num_workers = 0 # how many samples per batch to load batch_size = 20 # convert data to torch.FloatTensor transform = transforms.ToTensor() # choose the training and test datasets train_data = datasets.MNIST(root='data', train=True, download=True, transform=transform) test_data = datasets.MNIST(root='data', train=False, download=True, transform=transform) # prepare data loaders train_loader = torch.utils.data.DataLoader(dataset=train_data, batch_size=batch_size, num_workers=num_workers) test_loader = torch.utils.data.DataLoader(dataset=test_data, batch_size=batch_size, num_workers=num_workers) Explanation: Load and Visualize the Data Downloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the batch_size if you want to load more data at a time. This cell will create DataLoaders for each of our datasets. End of explanation import matplotlib.pyplot as plt %matplotlib inline # obtain one batch of training images dataiter = iter(train_loader) images, labels = dataiter.next() images = images.numpy() # plot the images in the batch, along with the corresponding labels fig = plt.figure(figsize=(25, 4)) for idx in np.arange(20): ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[]) ax.imshow(np.squeeze(images[idx]), cmap='gray') # print out the correct label for each image # .item() gets the value contained in a Tensor ax.set_title(str(labels[idx].item())) Explanation: Visualize a Batch of Training Data The first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data. 
End of explanation img = np.squeeze(images[1]) fig = plt.figure(figsize = (12,12)) ax = fig.add_subplot(111) ax.imshow(img, cmap='gray') width, height = img.shape thresh = img.max()/2.5 for x in range(width): for y in range(height): val = round(img[x][y],2) if img[x][y] !=0 else 0 ax.annotate(str(val), xy=(y,x), horizontalalignment='center', verticalalignment='center', color='white' if img[x][y]<thresh else 'black') Explanation: View an Image in More Detail End of explanation import torch.nn as nn import torch.nn.functional as F ## TODO: Define the NN architecture class Net(nn.Module): def __init__(self): super(Net, self).__init__() # Linear layer (784 -> 128 hidden nodes) self.fc1 = nn.Linear(in_features=(28 * 28), out_features=128) self.fc2 = nn.Linear(in_features=128, out_features=10) def forward(self, x): # flatten image input x = x.view(-1, 28 * 28) # add hidden layer, with relu activation function x = F.relu(self.fc1(x)) x = F.softmax(self.fc2(x), dim=1) return x # initialize the NN model = Net() print(model) Explanation: Define the Network Architecture The architecture will be responsible for seeing as input a 784-dim Tensor of pixel values for each image, and producing a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example uses two hidden layers and dropout to avoid overfitting. End of explanation ## TODO: Specify loss and optimization functions # specify loss function criterion = nn.CrossEntropyLoss() # specify optimizer optimizer = torch.optim.Adam(params=model.parameters(), lr=0.001) optimizer Explanation: Specify Loss Function and Optimizer It's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross entropy function applies a softmax funtion to the output layer and then calculates the log loss. End of explanation # Number of epochs to train the model n_epochs = 50 # suggest training between 20-50 epochs # Prep model for training model.train() for epoch in range(n_epochs): # Monitor training loss train_loss = 0.0 ################### # train the model # ################### for data, target in train_loader: # Clear the gradients of all optimized variables optimizer.zero_grad() # Forward pass: compute predicted outputs by passing inputs to the model output = model(data) # Calculate the loss loss = criterion(output, target) # Backward pass: compute gradient of the loss with respect to model parameters loss.backward() # Perform a single optimization step (parameter update) optimizer.step() # Update running training loss train_loss += loss.item() * data.size(0) # Print training statistics # Calculate average loss over an epoch train_loss = train_loss/len(train_loader.dataset) print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch+1, train_loss)) Explanation: Train the Network The steps for training/learning from a batch of data are described in the comments below: 1. Clear the gradients of all optimized variables 2. Forward pass: compute predicted outputs by passing inputs to the model 3. Calculate the loss 4. Backward pass: compute gradient of the loss with respect to model parameters 5. Perform a single optimization step (parameter update) 6. Update average training loss The following loop trains for 30 epochs; feel free to change this number. For now, we suggest somewhere between 20-50 epochs. As you train, take a look at how the values for the training loss decrease over time. 
We want it to decrease while also avoiding overfitting the training data. End of explanation # initialize lists to monitor test loss and accuracy test_loss = 0.0 class_correct = list(0. for i in range(10)) class_total = list(0. for i in range(10)) # Prep model for *evaluation* model.eval() for data, target in test_loader: # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the loss loss = criterion(output, target) # update test loss test_loss += loss.item()*data.size(0) # convert output probabilities to predicted class _, pred = torch.max(output, 1) # compare predictions to true label correct = np.squeeze(pred.eq(target.data.view_as(pred))) # calculate test accuracy for each object class for i in range(batch_size): label = target.data[i] class_correct[label] += correct[i].item() class_total[label] += 1 # calculate and print avg test loss test_loss = test_loss/len(test_loader.dataset) print('Test Loss: {:.6f}\n'.format(test_loss)) for i in range(10): if class_total[i] > 0: print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % ( str(i), 100 * class_correct[i] / class_total[i], np.sum(class_correct[i]), np.sum(class_total[i]))) else: print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i])) print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % ( 100. * np.sum(class_correct) / np.sum(class_total), np.sum(class_correct), np.sum(class_total))) Explanation: Test the Trained Network Finally, we test our best model on previously unseen test data and evaluate it's performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class as well as looking at its overall loss and accuracy. model.eval() model.eval() will set all the layers in your model to evaluation mode. This affects layers like dropout layers that turn "off" nodes during training with some probability, but should allow every node to be "on" for evaluation! End of explanation # obtain one batch of test images dataiter = iter(test_loader) images, labels = dataiter.next() # get sample outputs output = model(images) # convert output probabilities to predicted class _, preds = torch.max(output, 1) # prep images for display images = images.numpy() # plot the images in the batch, along with predicted and true labels fig = plt.figure(figsize=(25, 4)) for idx in np.arange(20): ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[]) ax.imshow(np.squeeze(images[idx]), cmap='gray') ax.set_title("{} ({})".format(str(preds[idx].item()), str(labels[idx].item())), color=("green" if preds[idx]==labels[idx] else "red")) Explanation: Visualize Sample Test Results This cell displays test images and their labels in this format: predicted (ground-truth). The text will be green for accurately classified examples and red for incorrect predictions. End of explanation
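One detail worth flagging in the network defined earlier in this example: nn.CrossEntropyLoss already applies log-softmax internally (as the text notes), so returning F.softmax(...) from forward() normalizes twice, which typically slows learning; the code also defines a single hidden layer even though the text mentions two hidden layers and dropout. Below is a sketch of a variant that returns raw logits and matches that description; the hidden size 512 and dropout rate 0.2 are arbitrary choices, not values from the original.

import torch.nn as nn
import torch.nn.functional as F

class NetLogits(nn.Module):
    # Returns raw class scores (logits); nn.CrossEntropyLoss applies
    # log-softmax + NLL internally, so no softmax belongs in forward().
    def __init__(self, hidden=512, p_drop=0.2):   # sizes are arbitrary choices
        super(NetLogits, self).__init__()
        self.fc1 = nn.Linear(28 * 28, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, 10)
        self.dropout = nn.Dropout(p_drop)

    def forward(self, x):
        x = x.view(-1, 28 * 28)
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        return self.fc3(x)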
12,959
Given the following text description, write Python code to implement the functionality described below step by step Description: Table of Contents Objective Step1: Objective Step2: Figure out what makeARGB is doing Step3: Make a semi-transparent rectangle (image) Step4: What is np.vstack.transpose() doing? Step5: Answer Step6: Simple coordinate transformation We want to go from calculation coordinates (x,y,z) to pyqtgraph coordinates (xx,yy,zz). For the electric field the transformation is
Python Code: %%javascript IPython.load_extensions('calico-document-tools'); !date from pyqtgraph.Qt import QtCore, QtGui import pyqtgraph.opengl as gl import pyqtgraph as pg import numpy as np Explanation: Table of Contents Objective: propagating plane wave visualization How to get docstrings for a class definition Figure out what makeARGB is doing Make a semi-transparent rectangle (image) What is np.vstack.transpose() doing? Make vertical lines Simple coordinate transformation End of explanation help(pg.opengl.GLLinePlotItem) help(pg.opengl.GLGridItem) help(pg.QtGui.QGraphicsRectItem) Explanation: Objective: propagating plane wave visualization To do - DONE 2/27/15 - Propagating e-field - TRIED 2/27/15, ARROWS DON'T SEEM TO WORK IN 3D, JUST 2D. include some propagating arrows - DONE 2/27/15 - propagating h-field - TRIED 2/27/15, DOESN'T LOOK GOOD - semi-transparent plane in x-y through which plane wave propagates - DONE 2/27/15 - Add more visible axes - DONE 2/27/15 - What is np.vstack doing? - TRIED 2/25/15, NO FACILITY TO DO THIS - Fill to zero? - DONE 2/28/15 - If not, make my own vertical lines to zero? - TRIED 2/28/15, NO DOCUMENTATION INDICATING HOW TO DO THIS - Add labels (E, H, z) - DONE 2/28/15, DOESN'T LOOK GREAT IF NOT LINEAR POLARIZATION - Change efield function to set arbitrary polarization state - Add ability to change propagation velocity How to get docstrings for a class definition End of explanation image_shape = (4,4) uniform_values = np.ones(image_shape) * 255 uniform_image = pg.makeARGB(uniform_values) print uniform_values print uniform_image from pyqtgraph.Qt import QtCore, QtGui import pyqtgraph.opengl as gl import pyqtgraph as pg import numpy as np app = QtGui.QApplication([]) w = gl.GLViewWidget() w.opts['distance'] = 200 w.show() w.setWindowTitle('pyqtgraph example: GLImageItem') ## create volume data set to slice three images from shape = (100,100,70) data = np.random.normal(size=shape) #data += pg.gaussianFilter(np.random.normal(size=shape), (15,15,15))*15 ## slice out three planes, convert to RGBA for OpenGL texture levels = (-0.08, 0.08) tex1 = pg.makeRGBA(data[shape[0]/2], levels=levels)[0] # yz plane tex2 = pg.makeRGBA(data[:,shape[1]/2], levels=levels)[0] # xz plane tex3 = pg.makeRGBA(data[:,:,shape[2]/2], levels=levels)[0] # xy plane #tex1[:,:,3] = 128 tex2[:,:,3] = 128 #tex3[:,:,3] = 128 ## Create three image items from textures, add to view v1 = gl.GLImageItem(tex1) v1.translate(-shape[1]/2, -shape[2]/2, 0) v1.rotate(90, 0,0,1) v1.rotate(-90, 0,1,0) #w.addItem(v1) v2 = gl.GLImageItem(tex1) v2.translate(-shape[0]/2, -shape[2]/2, 0) v2.rotate(-90, 1,0,0) w.addItem(v2) v3 = gl.GLImageItem(tex3) v3.translate(-shape[0]/2, -shape[1]/2, 0) #w.addItem(v3) ax = gl.GLAxisItem() w.addItem(ax) ## Start Qt event loop unless running in interactive mode. 
if __name__ == '__main__': import sys if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'): QtGui.QApplication.instance().exec_() print shape[0], shape[1], shape[2] print len(data[shape[0]/2]), len(data[:,shape[1]/2]) shape = (5,4,3) data = np.random.normal(size=shape) print data print data[shape[0]/2] print data[:,shape[1]/2] print data[:,:,shape[2]/2] tex = pg.makeRGBA(data[shape[2]/2])[0] print tex image_shape = (3,5) uniform_values = np.ones(image_shape) * 255 uniform_image = pg.makeARGB(uniform_values)[0] uniform_image[:,:,3] = 128 print uniform_image Explanation: Figure out what makeARGB is doing End of explanation from pyqtgraph.Qt import QtCore, QtGui import pyqtgraph.opengl as gl import pyqtgraph as pg import numpy as np app = QtGui.QApplication([]) w = gl.GLViewWidget() w.opts['distance'] = 20 w.show() w.setWindowTitle('pyqtgraph example: GLImageItem') ## create volume data set to slice three images from shape = (100,100,70) data = np.random.normal(size=shape) #data += pg.gaussianFilter(np.random.normal(size=shape), (15,15,15))*15 ## make images image_shape = (6,6) uniform_values = np.ones(image_shape) * 255 uniform_image = pg.makeARGB(uniform_values)[0] uniform_image[:,:,1] = 128 uniform_image_transparent = pg.makeARGB(uniform_values)[0] uniform_image_transparent[:,:,3] = 128 ## Create image items from textures, add to view v2 = gl.GLImageItem(uniform_image) v2.translate(-image_shape[0]/2, -image_shape[1]/2, 0) v2.rotate(90, 1,0,0) v2.translate(0, -2, 0) w.addItem(v2) v1 = gl.GLImageItem(uniform_image_transparent) v1.translate(-image_shape[0]/2, -image_shape[1]/2, 0) v1.rotate(90, 1,0,0) w.addItem(v1) ax = gl.GLAxisItem() w.addItem(ax) ## Start Qt event loop unless running in interactive mode. if __name__ == '__main__': import sys if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'): QtGui.QApplication.instance().exec_() Explanation: Make a semi-transparent rectangle (image) End of explanation x = np.linspace(0,2,3) y = np.linspace(10,12,3) z = np.linspace(20,22,3) print x, '\n', y, '\n', z, '\n' pts = np.vstack([x,y,z]) print pts, '\n' pts = pts.transpose() print pts Explanation: What is np.vstack.transpose() doing? 
End of explanation x = np.linspace(0,3,4) y = np.linspace(10,13,4) z = np.linspace(20,23,4) #print x, '\n', y, '\n', z, '\n' pts = np.vstack([x,y,z]) #print pts, '\n' pts = pts.transpose() print pts print pts.shape pts2 = np.zeros(shape=(2*pts.shape[0], pts.shape[1])) print pts2 print pts2.shape for i in range(pts.shape[0]): pts2[2*i,2] = pts[i,2] pts2[2*i + 1,:] = pts[i,:] print pts2 # Function to create new array from old # where new array is formatted to prepare to # draw lines perpendicular from z-axis to # curve defined by input array def preptomakelines(pts): pts2 = np.zeros(shape=(2*pts.shape[0], pts.shape[1])) for i in range(pts.shape[0]): pts2[2*i,2] = pts[i,2] pts2[2*i + 1,:] = pts[i,:] return pts2 pts2 = preptomakelines(pts) print pts, '\n\n', pts2 Explanation: Answer: take row vectors x, y, & z and concatenate them as column vectors in a 2D matrix Make vertical lines End of explanation x = np.linspace(0,3,4) y = np.linspace(10,13,4) z = np.linspace(20,23,4) pts = np.vstack([x,y,z]) pts = pts.transpose() print pts temp2Darray = [[0, 0, 1], [1, 0, 0], [0, 1, 0]] rot_efield_coord = np.array(temp2Darray) print rot_efield_coord pts_efield_coord = np.dot(pts, rot_efield_coord) print pts_efield_coord temp2Darray = [[1, 0, 0], [0, 0, 1], [0, 1, 0]] rot_hfield_coord = np.array(temp2Darray) print rot_hfield_coord pts_hfield_coord = np.dot(pts, rot_hfield_coord) print pts_hfield_coord print pts pts = np.dot(pts, rot_efield_coord) print pts Explanation: Simple coordinate transformation We want to go from calculation coordinates (x,y,z) to pyqtgraph coordinates (xx,yy,zz). For the electric field the transformation is: x -> zz y -> xx z -> yy This is the same as rotate -90 degrees about the y axis, then rotate 90 degrees about the z-axis. For the magnetic field the transformation is: x -> xx y -> zz z -> yy End of explanation
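Two small checks that may make the array manipulations above easier to trust: np.vstack([...]).transpose() builds the same (N, 3) point array as np.column_stack, and the rot_efield_coord matrix used above is just a column permutation. A standalone sketch:

import numpy as np

x = np.linspace(0, 3, 4)
y = np.linspace(10, 13, 4)
z = np.linspace(20, 23, 4)

# vstack + transpose and column_stack build the same (N, 3) array of points.
pts = np.vstack([x, y, z]).transpose()
assert np.array_equal(pts, np.column_stack((x, y, z)))

# The E-field coordinate swap (x, y, z) -> (zz, xx, yy) is a column permutation.
rot_efield_coord = np.array([[0, 0, 1],
                             [1, 0, 0],
                             [0, 1, 0]])
assert np.array_equal(np.dot(pts, rot_efield_coord), pts[:, [1, 2, 0]])
print(np.dot(pts, rot_efield_coord))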
12,960
Given the following text description, write Python code to implement the functionality described below step by step Description: Define problem, hparams, model, encoder and decoder Definition of this model (as well as many more) can be found on tensor2tensor github page. Step1: Define path to checkpoint In this demo we are using a pretrained model. Instructions for training your own model can be found in the tutorial on tensor2tensor page. Step2: Define transcribe function Step3: Decoding prerecorded examples You can upload any .wav files. They will be transcribed if frame rate matches Librispeeche's frame rate (16kHz). Step5: Recording your own examples
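Before the notebook code that follows, here is a minimal standalone illustration of what autobatching means here: jax.vmap turns a function written for a single example into one that maps over a leading batch axis, matching a hand-written loop. The toy log-density below is made up and is not the logistic-regression model used in the notebook.

import jax
import jax.numpy as jnp
import numpy as np

def log_density(theta):
    # Toy single-example log-density (not the notebook's model).
    return -0.5 * jnp.sum(theta ** 2)

batch = jnp.asarray(np.random.RandomState(0).randn(8, 5))

auto = jax.vmap(log_density)(batch)                       # mapped over the leading axis
manual = jnp.stack([log_density(row) for row in batch])   # the hand-written equivalent
print(np.allclose(auto, manual))                          # True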
Python Code: problem_name = "librispeech_clean" asr_problem = problems.problem(problem_name) encoders = asr_problem.feature_encoders(None) model_name = "transformer" hparams_set = "transformer_librispeech_tpu" hparams = trainer_lib.create_hparams(hparams_set,data_dir=data_dir, problem_name=problem_name) asr_model = registry.model(model_name)(hparams, Modes.PREDICT) def encode(x): waveforms = encoders["waveforms"].encode(x) encoded_dict = asr_problem.preprocess_example({"waveforms":waveforms, "targets":[]}, Modes.PREDICT, hparams) return {"inputs" : tf.expand_dims(encoded_dict["inputs"], 0), "targets" : tf.expand_dims(encoded_dict["targets"], 0)} def decode(integers): integers = list(np.squeeze(integers)) if 1 in integers: integers = integers[:integers.index(1)] return encoders["targets"].decode(np.squeeze(integers)) Explanation: Define problem, hparams, model, encoder and decoder Definition of this model (as well as many more) can be found on tensor2tensor github page. End of explanation # Copy the pretrained checkpoint locally ckpt_name = "transformer_asr_180214" gs_ckpt = os.path.join(gs_ckpt_dir, ckpt_name) print(gs_ckpt) !gsutil cp -R {gs_ckpt} {checkpoint_dir} ckpt_path = tf.train.latest_checkpoint(os.path.join(checkpoint_dir, ckpt_name)) ckpt_path Explanation: Define path to checkpoint In this demo we are using a pretrained model. Instructions for training your own model can be found in the tutorial on tensor2tensor page. End of explanation # Restore and transcribe! def transcribe(inputs): encoded_inputs = encode(inputs) with tfe.restore_variables_on_create(ckpt_path): model_output = asr_model.infer(encoded_inputs, beam_size=2, alpha=0.6, decode_length=1)["outputs"] return decode(model_output) def play_and_transcribe(inputs): waveforms = encoders["waveforms"].encode(inputs) IPython.display.display(IPython.display.Audio(data=waveforms, rate=16000)) return transcribe(inputs) Explanation: Define transcribe function End of explanation uploaded = google.colab.files.upload() prerecorded_messages = [] for fn in uploaded.keys(): print('User uploaded file "{name}" with length {length} bytes'.format( name=fn, length=len(uploaded[fn]))) mem_file = cStringIO.StringIO(uploaded[fn]) save_filename = os.path.join(tmp_dir, fn) with open(save_filename, 'w') as fd: mem_file.seek(0) shutil.copyfileobj(mem_file, fd) prerecorded_messages.append(save_filename) for inputs in prerecorded_messages: outputs = play_and_transcribe(inputs) print("Inputs: %s" % inputs) print("Outputs: %s" % outputs) Explanation: Decoding prerecorded examples You can upload any .wav files. They will be transcribed if frame rate matches Librispeeche's frame rate (16kHz). End of explanation # Records webm file and converts def RecordNewAudioSample(filename=None, webm_filename=None): Args: filename - string, path for storing wav file webm_filename - string, path for storing webm file Returns: string - path where wav file was saved. (=filename if specified) # Create default filenames in tmp_dir if not specified. if not filename: filename = os.path.join(tmp_dir, "recording.wav") if not webm_filename: webm_filename = os.path.join(tmp_dir, "recording.webm") # Record webm file form colab. audio = google.colab._message.blocking_request('user_media', {"audio":True, "video":False, "duration":-1}, timeout_sec=600) #audio = frontend.RecordMedia(True, False) # Convert the recording into in_memory file. music_mem_file = cStringIO.StringIO( base64.decodestring(audio[audio.index(',')+1:])) # Store webm recording in webm_filename. 
Storing is necessary for conversion. with open(webm_filename, 'w') as fd: music_mem_file.seek(0) shutil.copyfileobj(music_mem_file, fd) # Open stored file and save it as wav with sample_rate=16000. pydub.AudioSegment.from_file(webm_filename, codec="opus" ).set_frame_rate(16000).export(out_f=filename, format="wav") return filename # Record the sample my_sample_filename = RecordNewAudioSample() print my_sample_filename print play_and_transcribe(my_sample_filename) Explanation: Recording your own examples End of explanation
12,961
Given the following text description, write Python code to implement the functionality described below step by step Description: CORDEX ESGF submission form .. outdated .. needs adaption to future use .. General Information Data to be submitted for ESGF data publication must follow the rules outlined in the Cordex Archive Design Document <br /> (https Step1: please provide information on the contact person for this CORDEX data submission request Type of submission please specify the type of this data submission Step2: Requested general information Please provide model and institution info as well as an example of a file name institution The value of this field has to equal the value of the optional NetCDF attribute 'institution' (long version) in the data files if the latter is used. Step3: institute_id The value of this field has to equal the value of the global NetCDF attribute 'institute_id' in the data files and must equal the 4th directory level. It is needed before the publication process is started in order that the value can be added to the relevant CORDEX list of CV1 if not yet there. Note that 'institute_id' has to be the first part of 'model_id' Step4: model_id The value of this field has to be the value of the global NetCDF attribute 'model_id' in the data files. It is needed before the publication process is started in order that the value can be added to the relevant CORDEX list of CV1 if not yet there. Note that it must be composed by the 'institute_id' follwed by the RCM CORDEX model name, separated by a dash. It is part of the file name and the directory structure. Step5: experiment_id and time_period Experiment has to equal the value of the global NetCDF attribute 'experiment_id' in the data files. Time_period gives the period of data for which the publication request is submitted. If you intend to submit data from multiple experiments you may add one line for each additional experiment or send in additional publication request sheets. Step6: Example file name Please provide an example file name of a file in your data collection, this name will be used to derive the other Step7: information on the grid_mapping the NetCDF/CF name of the data grid ('rotated_latitude_longitude', 'lambert_conformal_conic', etc.), i.e. either that of the native model grid, or 'latitude_longitude' for the regular -XXi grids Step8: Does the grid configuration exactly follow the specifications in ADD2 (Table 1) in case the native grid is 'rotated_pole'? If not, comment on the differences; otherwise write 'yes' or 'N/A'. If the data is not delivered on the computational grid it has to be noted here as well. Step9: Please provide information on quality check performed on the data you plan to submit Please answer 'no', 'QC1', 'QC2-all', 'QC2-CORDEX', or 'other'. 'QC1' refers to the compliancy checker that can be downloaded at http Step10: Terms of use Please give the terms of use that shall be asigned to the data. The options are 'unrestricted' and 'non-commercial only'. For the full text 'Terms of Use' of CORDEX data refer to http Step11: Information on directory structure and data access path (and other information needed for data transport and data publication) If there is any directory structure deviation from the CORDEX standard please specify here. Otherwise enter 'compliant'. Please note that deviations MAY imply that data can not be accepted. 
Step12: Give the path where the data reside, for example Step13: Exclude variable list In each CORDEX file there may be only one variable which shall be published and searchable at the ESGF portal (target variable). In order to facilitate publication, all non-target variables are included in a list used by the publisher to avoid publication. A list of known non-target variables is [time, time_bnds, lon, lat, rlon ,rlat ,x ,y ,z ,height, plev, Lambert_Conformal, rotated_pole]. Please enter other variables into the left field if applicable (e.g. grid description variables), otherwise write 'N/A'. Step14: Uniqueness of tracking_id and creation_date In case any of your files is replacing a file already published, it must not have the same tracking_id nor the same creation_date as the file it replaces. Did you make sure that that this is not the case ? Reply 'yes'; otherwise adapt the new file versions. Step15: Variable list list of variables submitted -- please remove the ones you do not provide Step16: Check your submission form Please evaluate the following cell to check your submission form. In case of errors, please go up to the corresponden information cells and update your information accordingly. Step17: Save your form your form will be stored (the form name consists of your last name plut your keyword) Step18: officially submit your form the form will be submitted to the DKRZ team to process you also receive a confirmation email with a reference to your online form for future modifications
Python Code: # Evaluate this cell to identifiy your form from dkrz_forms import form_widgets, form_handler, checks form_infos = form_widgets.show_selection() # Evaluate this cell to generate your personal form instance form_info = form_infos[form_widgets.FORMS.value] sf = form_handler.init_form(form_info) form = sf.sub.entity_out.report Explanation: CORDEX ESGF submission form .. outdated .. needs adaption to future use .. General Information Data to be submitted for ESGF data publication must follow the rules outlined in the Cordex Archive Design Document <br /> (https://verc.enes.org/data/projects/documents/cordex-archive-design) Thus file names have to follow the pattern:<br /> VariableName_Domain_GCMModelName_CMIP5ExperimentName_CMIP5EnsembleMember_RCMModelName_RCMVersionID_Frequency[_StartTime-EndTime].nc <br /> Example: tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc The directory structure in which these files are stored follow the pattern:<br /> activity/product/Domain/Institution/ GCMModelName/CMIP5ExperimentName/CMIP5EnsembleMember/ RCMModelName/RCMVersionID/Frequency/VariableName <br /> Example: CORDEX/output/AFR-44/MPI-CSC/MPI-M-MPI-ESM-LR/rcp26/r1i1p1/MPI-CSC-REMO2009/v1/mon/tas/tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc Notice: If your model is not yet registered, please contact contact [email protected] specifying: Full institution name, Short institution name (acronym), Contact person and e-mail, RCM Name (acronym), Terms of Use (unrestricted or non-commercial only) and the CORDEX domains in which you are interested. At some CORDEX ESGF data centers a 'data submission form' is in use in order to improve initial information exchange between data providers and the data center. The form has to be filled before the publication process can be started. In case you have questions pleas contact the individual data centers: o at DKRZ: [email protected] o at SMHI: [email protected] Start submission procedure The submission is based on this interactive document consisting of "cells" you can modify and then evaluate evaluation of cells is done by selecting the cell and then press the keys "Shift" + "Enter" <br /> please evaluate the following cell to initialize your form End of explanation sf.submission_type = "..." # example: sf.submission_type = "initial_version" Explanation: please provide information on the contact person for this CORDEX data submission request Type of submission please specify the type of this data submission: - "initial_version" for first submission of data - "new _version" for a re-submission of previousliy submitted data - "retract" for the request to retract previously submitted data End of explanation sf.institution = "..." # example: sf.institution = "Alfred Wegener Institute" Explanation: Requested general information Please provide model and institution info as well as an example of a file name institution The value of this field has to equal the value of the optional NetCDF attribute 'institution' (long version) in the data files if the latter is used. End of explanation sf.institute_id = "..." # example: sf.institute_id = "AWI" Explanation: institute_id The value of this field has to equal the value of the global NetCDF attribute 'institute_id' in the data files and must equal the 4th directory level. It is needed before the publication process is started in order that the value can be added to the relevant CORDEX list of CV1 if not yet there. 
Note that 'institute_id' has to be the first part of 'model_id' End of explanation sf.model_id = "..." # example: sf.model_id = "AWI-HIRHAM5" Explanation: model_id The value of this field has to be the value of the global NetCDF attribute 'model_id' in the data files. It is needed before the publication process is started in order that the value can be added to the relevant CORDEX list of CV1 if not yet there. Note that it must be composed by the 'institute_id' follwed by the RCM CORDEX model name, separated by a dash. It is part of the file name and the directory structure. End of explanation sf.experiment_id = "..." # example: sf.experiment_id = "evaluation" # ["value_a","value_b"] in case of multiple experiments sf.time_period = "..." # example: sf.time_period = "197901-201412" # ["time_period_a","time_period_b"] in case of multiple values Explanation: experiment_id and time_period Experiment has to equal the value of the global NetCDF attribute 'experiment_id' in the data files. Time_period gives the period of data for which the publication request is submitted. If you intend to submit data from multiple experiments you may add one line for each additional experiment or send in additional publication request sheets. End of explanation sf.example_file_name = "..." # example: sf.example_file_name = "tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc" # Please run this cell as it is to check your example file name structure # to_do: implement submission_form_check_file function - output result (attributes + check_result) form_handler.cordex_file_info(sf,sf.example_file_name) Explanation: Example file name Please provide an example file name of a file in your data collection, this name will be used to derive the other End of explanation sf.grid_mapping_name = "..." # example: sf.grid_mapping_name = "rotated_latitude_longitude" Explanation: information on the grid_mapping the NetCDF/CF name of the data grid ('rotated_latitude_longitude', 'lambert_conformal_conic', etc.), i.e. either that of the native model grid, or 'latitude_longitude' for the regular -XXi grids End of explanation sf.grid_as_specified_if_rotated_pole = "..." # example: sf.grid_as_specified_if_rotated_pole = "yes" Explanation: Does the grid configuration exactly follow the specifications in ADD2 (Table 1) in case the native grid is 'rotated_pole'? If not, comment on the differences; otherwise write 'yes' or 'N/A'. If the data is not delivered on the computational grid it has to be noted here as well. End of explanation sf.data_qc_status = "..." # example: sf.data_qc_status = "QC2-CORDEX" sf.data_qc_comment = "..." # any comment of quality status of the files Explanation: Please provide information on quality check performed on the data you plan to submit Please answer 'no', 'QC1', 'QC2-all', 'QC2-CORDEX', or 'other'. 'QC1' refers to the compliancy checker that can be downloaded at http://cordex.dmi.dk. 'QC2' refers to the quality checker developed at DKRZ. If your answer is 'other' give some informations. End of explanation sf.terms_of_use = "..." # example: sf.terms_of_use = "unrestricted" Explanation: Terms of use Please give the terms of use that shall be asigned to the data. The options are 'unrestricted' and 'non-commercial only'. For the full text 'Terms of Use' of CORDEX data refer to http://cordex.dmi.dk/joomla/images/CORDEX/cordex_terms_of_use.pdf End of explanation sf.directory_structure = "..." 
# example: sf.directory_structure = "compliant" Explanation: Information on directory structure and data access path (and other information needed for data transport and data publication) If there is any directory structure deviation from the CORDEX standard please specify here. Otherwise enter 'compliant'. Please note that deviations MAY imply that data can not be accepted. End of explanation sf.data_path = "..." # example: sf.data_path = "mistral.dkrz.de:/mnt/lustre01/work/bm0021/k204016/CORDEX/archive/" sf.data_information = "..." # ...any info where data can be accessed and transfered to the data center ... " Explanation: Give the path where the data reside, for example: blizzard.dkrz.de:/scratch/b/b364034/. If not applicable write N/A and give data access information in the data_information string End of explanation sf.exclude_variables_list = "..." # example: sf.exclude_variables_list=["bnds", "vertices"] Explanation: Exclude variable list In each CORDEX file there may be only one variable which shall be published and searchable at the ESGF portal (target variable). In order to facilitate publication, all non-target variables are included in a list used by the publisher to avoid publication. A list of known non-target variables is [time, time_bnds, lon, lat, rlon ,rlat ,x ,y ,z ,height, plev, Lambert_Conformal, rotated_pole]. Please enter other variables into the left field if applicable (e.g. grid description variables), otherwise write 'N/A'. End of explanation sf.uniqueness_of_tracking_id = "..." # example: sf.uniqueness_of_tracking_id = "yes" Explanation: Uniqueness of tracking_id and creation_date In case any of your files is replacing a file already published, it must not have the same tracking_id nor the same creation_date as the file it replaces. Did you make sure that that this is not the case ? Reply 'yes'; otherwise adapt the new file versions. 
End of explanation sf.variable_list_day = [ "clh","clivi","cll","clm","clt","clwvi", "evspsbl","evspsblpot", "hfls","hfss","hurs","huss","hus850", "mrfso","mrro","mrros","mrso", "pr","prc","prhmax","prsn","prw","ps","psl", "rlds","rlus","rlut","rsds","rsdt","rsus","rsut", "sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund", "tas","tasmax","tasmin","tauu","tauv","ta200","ta500","ta850","ts", "uas","ua200","ua500","ua850", "vas","va200","va500","va850","wsgsmax", "zg200","zg500","zmla" ] sf.variable_list_mon = [ "clt", "evspsbl", "hfls","hfss","hurs","huss","hus850", "mrfso","mrro","mrros","mrso", "pr","psl", "rlds","rlus","rlut","rsds","rsdt","rsus","rsut", "sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund", "tas","tasmax","tasmin","ta200", "ta500","ta850", "uas","ua200","ua500","ua850", "vas","va200","va500","va850", "zg200","zg500" ] sf.variable_list_sem = [ "clt", "evspsbl", "hfls","hfss","hurs","huss","hus850", "mrfso","mrro","mrros","mrso", "pr","psl", "rlds","rlus","rlut","rsds","rsdt","rsus","rsut", "sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund", "tas","tasmax","tasmin","ta200","ta500","ta850", "uas","ua200","ua500","ua850", "vas","va200","va500","va850", "zg200","zg500" ] sf.variable_list_fx = [ "areacella", "mrsofc", "orog", "rootd", "sftgif","sftlf" ] Explanation: Variable list list of variables submitted -- please remove the ones you do not provide: End of explanation # simple consistency check report for your submission form report = checks.check_report(sf,"sub") checks.display_report(report) Explanation: Check your submission form Please evaluate the following cell to check your submission form. In case of errors, please go up to the corresponden information cells and update your information accordingly. End of explanation form_handler.save_form(sf,"..my comment..") # edit my comment info #evaluate this cell if you want a reference to the saved form emailed to you # (only available if you access this form via the DKRZ form hosting service) form_handler.email_form_info() Explanation: Save your form your form will be stored (the form name consists of your last name plut your keyword) End of explanation form_handler.form_submission(sf) Explanation: officially submit your form the form will be submitted to the DKRZ team to process you also receive a confirmation email with a reference to your online form for future modifications End of explanation
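As an informal aid (not the official QC1/QC2 checkers referenced above), the file-name pattern and directory layout quoted at the top of this form can be checked and assembled with a few lines of string handling; the example name is the one given earlier in the form.

def parse_cordex_name(filename):
    # Components per the pattern quoted at the top of this form:
    # Variable_Domain_GCM_Experiment_Ensemble_RCM_Version_Frequency[_Start-End].nc
    if not filename.endswith('.nc'):
        raise ValueError('not a .nc file: %r' % filename)
    parts = filename[:-3].split('_')
    if len(parts) not in (8, 9):
        raise ValueError('unexpected number of components: %r' % parts)
    keys = ['variable', 'domain', 'gcm', 'experiment', 'ensemble',
            'rcm', 'version', 'frequency']
    info = dict(zip(keys, parts[:8]))
    info['period'] = parts[8] if len(parts) == 9 else None
    return info

def cordex_directory(info, institute_id):
    # activity/product/Domain/Institution/GCM/Experiment/Ensemble/RCM/Version/Frequency/Variable
    return '/'.join(['CORDEX', 'output', info['domain'], institute_id,
                     info['gcm'], info['experiment'], info['ensemble'],
                     info['rcm'], info['version'], info['frequency'], info['variable']])

example = 'tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc'
info = parse_cordex_name(example)
print(info['variable'], info['frequency'], info['period'])
print(cordex_directory(info, 'MPI-CSC'))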
12,962
Given the following text description, write Python code to implement the functionality described below step by step Description: Autobatching log-densities example This notebook demonstrates a simple Bayesian inference example where autobatching makes user code easier to write, easier to read, and less likely to include bugs. Inspired by a notebook by @davmre. Step1: Generate a fake binary classification dataset Step2: Write the log-joint function for the model We'll write a non-batched version, a manually batched version, and an autobatched version. Non-batched Step3: Manually batched Step4: Autobatched with vmap It just works. Step5: Self-contained variational inference example A little code is copied from above. Set up the (batched) log-joint function Step6: Define the ELBO and its gradient Step8: Optimize the ELBO using SGD Step9: Display the results Coverage isn't quite as good as we might like, but it's not bad, and nobody said variational inference was exact.
Python Code: import functools import itertools import re import sys import time from matplotlib.pyplot import * import jax from jax import lax import jax.numpy as jnp import jax.scipy as jsp from jax import random import numpy as np import scipy as sp Explanation: Autobatching log-densities example This notebook demonstrates a simple Bayesian inference example where autobatching makes user code easier to write, easier to read, and less likely to include bugs. Inspired by a notebook by @davmre. End of explanation np.random.seed(10009) num_features = 10 num_points = 100 true_beta = np.random.randn(num_features).astype(jnp.float32) all_x = np.random.randn(num_points, num_features).astype(jnp.float32) y = (np.random.rand(num_points) < sp.special.expit(all_x.dot(true_beta))).astype(jnp.int32) y Explanation: Generate a fake binary classification dataset End of explanation def log_joint(beta): result = 0. # Note that no `axis` parameter is provided to `jnp.sum`. result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=1.)) result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta)))) return result log_joint(np.random.randn(num_features)) # This doesn't work, because we didn't write `log_prob()` to handle batching. try: batch_size = 10 batched_test_beta = np.random.randn(batch_size, num_features) log_joint(np.random.randn(batch_size, num_features)) except ValueError as e: print("Caught expected exception " + str(e)) Explanation: Write the log-joint function for the model We'll write a non-batched version, a manually batched version, and an autobatched version. Non-batched End of explanation def batched_log_joint(beta): result = 0. # Here (and below) `sum` needs an `axis` parameter. At best, forgetting to set axis # or setting it incorrectly yields an error; at worst, it silently changes the # semantics of the model. result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=1.), axis=-1) # Note the multiple transposes. Getting this right is not rocket science, # but it's also not totally mindless. (I didn't get it right on the first # try.) result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta.T).T)), axis=-1) return result batch_size = 10 batched_test_beta = np.random.randn(batch_size, num_features) batched_log_joint(batched_test_beta) Explanation: Manually batched End of explanation vmap_batched_log_joint = jax.vmap(log_joint) vmap_batched_log_joint(batched_test_beta) Explanation: Autobatched with vmap It just works. End of explanation @jax.jit def log_joint(beta): result = 0. # Note that no `axis` parameter is provided to `jnp.sum`. result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=10.)) result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta)))) return result batched_log_joint = jax.jit(jax.vmap(log_joint)) Explanation: Self-contained variational inference example A little code is copied from above. Set up the (batched) log-joint function End of explanation def elbo(beta_loc, beta_log_scale, epsilon): beta_sample = beta_loc + jnp.exp(beta_log_scale) * epsilon return jnp.mean(batched_log_joint(beta_sample), 0) + jnp.sum(beta_log_scale - 0.5 * np.log(2*np.pi)) elbo = jax.jit(elbo) elbo_val_and_grad = jax.jit(jax.value_and_grad(elbo, argnums=(0, 1))) Explanation: Define the ELBO and its gradient End of explanation def normal_sample(key, shape): Convenience function for quasi-stateful RNG. 
new_key, sub_key = random.split(key) return new_key, random.normal(sub_key, shape) normal_sample = jax.jit(normal_sample, static_argnums=(1,)) key = random.PRNGKey(10003) beta_loc = jnp.zeros(num_features, jnp.float32) beta_log_scale = jnp.zeros(num_features, jnp.float32) step_size = 0.01 batch_size = 128 epsilon_shape = (batch_size, num_features) for i in range(1000): key, epsilon = normal_sample(key, epsilon_shape) elbo_val, (beta_loc_grad, beta_log_scale_grad) = elbo_val_and_grad( beta_loc, beta_log_scale, epsilon) beta_loc += step_size * beta_loc_grad beta_log_scale += step_size * beta_log_scale_grad if i % 10 == 0: print('{}\t{}'.format(i, elbo_val)) Explanation: Optimize the ELBO using SGD End of explanation figure(figsize=(7, 7)) plot(true_beta, beta_loc, '.', label='Approximated Posterior Means') plot(true_beta, beta_loc + 2*jnp.exp(beta_log_scale), 'r.', label='Approximated Posterior $2\sigma$ Error Bars') plot(true_beta, beta_loc - 2*jnp.exp(beta_log_scale), 'r.') plot_scale = 3 plot([-plot_scale, plot_scale], [-plot_scale, plot_scale], 'k') xlabel('True beta') ylabel('Estimated beta') legend(loc='best') Explanation: Display the results Coverage isn't quite as good as we might like, but it's not bad, and nobody said variational inference was exact. End of explanation
12,963
Given the following text description, write Python code to implement the functionality described below step by step Description: 如何爬取Facebook粉絲頁資料 (posts) ? 基本上是透過 Facebook Graph API 去取得粉絲頁的資料,但是使用 Facebook Graph API 還需要取得權限,有兩種方法 Step1: 第一步 - 要先取得應用程式的帳號,密碼 (app_id, app_secret) 第二步 - 輸入要分析的粉絲團的 id (page_id) [教學]如何申請建立 Facebook APP ID 應用程式ID Step2: 爬取的基本概念是送request透過Facebook Graph API來取得資料 而request就是一個url,這個url會根據你的設定(你要拿的欄位)而回傳你需要的資料 但是在爬取大型粉絲頁時,很可能會因為你送的request太多了,就發生錯誤 這邊的解決方法很簡單用一個while迴圈,發生錯誤就休息5秒,5秒鐘後,再重新送request 基本上由5個function來完成: request_until_succeed 來確保完成爬取 getFacebookPageFeedData 來產生post的各種資料(message,link,created_time,type,name,id...) getReactionsForStatus 來獲得該post的各reaction數目(like, angry, sad ...) processFacebookPageFeedStatus 是處理getFacebookPageFeedData得到的各種資料,把它們結構化 scrapeFacebookPageFeedStatus 為主程式 Step3: url = base + node + fields + parameters base Step4: 生成status_link ,此連結可以回到該臉書上的post status_published = status_published + datetime.timedelta(hours=8) 根據所在時區 TW +8 Step5: 假設一個粉絲頁,有250個posts 第一次用 getFacebookPageFeedData 得到 url 送入 request_until_succeed 得到第一個dictionary dictionary中有兩個key,一個是data(100筆資料都在其中) 而另一個是next(下一個100筆的url在裡面,把它送出去會在得到另一個dictionary,裡面又含兩個key,一樣是data和next) 第一次送的 request data Step6: 5234篇post共花了20分鐘,把結果存成csv交給下一章去分析 all_statuses[0] 為 column name all_statuses[1
Python Code: # 載入python 套件 import requests import datetime import time import pandas as pd Explanation: 如何爬取Facebook粉絲頁資料 (posts) ? 基本上是透過 Facebook Graph API 去取得粉絲頁的資料,但是使用 Facebook Graph API 還需要取得權限,有兩種方法 : 第一種是取得 Access Token 第二種是建立 Facebook App的應用程式,用該應用程式的帳號,密碼當作權限 兩者的差別在於第一種會有時效限制,必須每隔一段時間去更新Access Token,才能使用 Access Token 本文是採用第二種方法 要先取得應用程式的帳號,密碼 app_id, app_secret End of explanation # 分析的粉絲頁的id page_id = "appledaily.tw" app_id = "" app_secret = "" access_token = app_id + "|" + app_secret Explanation: 第一步 - 要先取得應用程式的帳號,密碼 (app_id, app_secret) 第二步 - 輸入要分析的粉絲團的 id (page_id) [教學]如何申請建立 Facebook APP ID 應用程式ID End of explanation # 判斷response有無正常 正常 200,若無隔五秒鐘之後再試 def request_until_succeed(url): success = False while success is False: try: req = requests.get(url) if req.status_code == 200: success = True except Exception as e: print(e) time.sleep(5) print("Error for URL %s: %s" % (url, datetime.datetime.now())) print("Retrying.") return req Explanation: 爬取的基本概念是送request透過Facebook Graph API來取得資料 而request就是一個url,這個url會根據你的設定(你要拿的欄位)而回傳你需要的資料 但是在爬取大型粉絲頁時,很可能會因為你送的request太多了,就發生錯誤 這邊的解決方法很簡單用一個while迴圈,發生錯誤就休息5秒,5秒鐘後,再重新送request 基本上由5個function來完成: request_until_succeed 來確保完成爬取 getFacebookPageFeedData 來產生post的各種資料(message,link,created_time,type,name,id...) getReactionsForStatus 來獲得該post的各reaction數目(like, angry, sad ...) processFacebookPageFeedStatus 是處理getFacebookPageFeedData得到的各種資料,把它們結構化 scrapeFacebookPageFeedStatus 為主程式 End of explanation # 取得Facebook data def getFacebookPageFeedData(page_id, access_token, num_statuses): # Construct the URL string; see http://stackoverflow.com/a/37239851 for # Reactions parameters base = "https://graph.facebook.com/v2.6" node = "/%s/posts" % page_id fields = "/?fields=message,link,created_time,type,name,id," + \ "comments.limit(0).summary(true),shares,reactions" + \ ".limit(0).summary(true)" parameters = "&limit=%s&access_token=%s" % (num_statuses, access_token) url = base + node + fields + parameters # 取得data data = request_until_succeed(url).json() return data # 取得該篇文章的 reactions like,love,wow,haha,sad,angry數目 def getReactionsForStatus(status_id, access_token): # See http://stackoverflow.com/a/37239851 for Reactions parameters # Reactions are only accessable at a single-post endpoint base = "https://graph.facebook.com/v2.6" node = "/%s" % status_id reactions = "/?fields=" \ "reactions.type(LIKE).limit(0).summary(total_count).as(like)" \ ",reactions.type(LOVE).limit(0).summary(total_count).as(love)" \ ",reactions.type(WOW).limit(0).summary(total_count).as(wow)" \ ",reactions.type(HAHA).limit(0).summary(total_count).as(haha)" \ ",reactions.type(SAD).limit(0).summary(total_count).as(sad)" \ ",reactions.type(ANGRY).limit(0).summary(total_count).as(angry)" parameters = "&access_token=%s" % access_token url = base + node + reactions + parameters # 取得data data = request_until_succeed(url).json() return data Explanation: url = base + node + fields + parameters base : 可以設定Facebook Graph API的版本,這邊設定v2.6 node : 分析哪個粉絲頁的post 由page_id去設定 fields : 你要取得資料的種類 parameters : 權限設定和每次取多少筆(num_statuses) End of explanation def processFacebookPageFeedStatus(status, access_token): # 要去確認抓到的資料是否為空 status_id = status['id'] status_type = status['type'] if 'message' not in status.keys(): status_message = '' else: status_message = status['message'] if 'name' not in status.keys(): link_name = '' else: link_name = status['name'] link = status_id.split('_') # 此連結可以回到該臉書上的post status_link = 'https://www.facebook.com/'+link[0]+'/posts/'+link[1] status_published = 
datetime.datetime.strptime(status['created_time'],'%Y-%m-%dT%H:%M:%S+0000') # 根據所在時區 TW +8 status_published = status_published + datetime.timedelta(hours=8) status_published = status_published.strftime('%Y-%m-%d %H:%M:%S') # 要去確認抓到的資料是否為空 if 'reactions' not in status: num_reactions = 0 else: num_reactions = status['reactions']['summary']['total_count'] if 'comments' not in status: num_comments = 0 else: num_comments = status['comments']['summary']['total_count'] if 'shares' not in status: num_shares = 0 else: num_shares = status['shares']['count'] def get_num_total_reactions(reaction_type, reactions): if reaction_type not in reactions: return 0 else: return reactions[reaction_type]['summary']['total_count'] # 取得該篇文章的 reactions like,love,wow,haha,sad,angry數目 reactions = getReactionsForStatus(status_id, access_token) num_loves = get_num_total_reactions('love', reactions) num_wows = get_num_total_reactions('wow', reactions) num_hahas = get_num_total_reactions('haha', reactions) num_sads = get_num_total_reactions('sad', reactions) num_angrys = get_num_total_reactions('angry', reactions) num_likes = get_num_total_reactions('like', reactions) # 回傳tuple形式的資料 return (status_id, status_message, link_name, status_type, status_link, status_published, num_reactions, num_comments, num_shares, num_likes, num_loves, num_wows, num_hahas, num_sads, num_angrys) Explanation: 生成status_link ,此連結可以回到該臉書上的post status_published = status_published + datetime.timedelta(hours=8) 根據所在時區 TW +8 End of explanation def scrapeFacebookPageFeedStatus(page_id, access_token): # all_statuses 用來儲存的list,先放入欄位名稱 all_statuses = [('status_id', 'status_message', 'link_name', 'status_type', 'status_link', 'status_published', 'num_reactions', 'num_comments', 'num_shares', 'num_likes', 'num_loves', 'num_wows', 'num_hahas', 'num_sads', 'num_angrys')] has_next_page = True num_processed = 0 # 計算處理多少post scrape_starttime = datetime.datetime.now() print("Scraping %s Facebook Page: %s\n" % (page_id, scrape_starttime)) statuses = getFacebookPageFeedData(page_id, access_token, 100) while has_next_page: for status in statuses['data']: # 確定有 reaction 再把結構化後的資料存入 all_statuses if 'reactions' in status: all_statuses.append(processFacebookPageFeedStatus(status,access_token)) # 觀察爬取進度,每處理100篇post,就輸出時間, num_processed += 1 if num_processed % 100 == 0: print("%s Statuses Processed: %s" % (num_processed, datetime.datetime.now())) # 每超過100個post就會有next,可以從next中取得下100篇, 直到沒有next if 'paging' in statuses.keys(): statuses = request_until_succeed(statuses['paging']['next']).json() else: has_next_page = False print("\nDone!\n%s Statuses Processed in %s" % \ (num_processed, datetime.datetime.now() - scrape_starttime)) return all_statuses all_statuses = scrapeFacebookPageFeedStatus(page_id, access_token) Explanation: 假設一個粉絲頁,有250個posts 第一次用 getFacebookPageFeedData 得到 url 送入 request_until_succeed 得到第一個dictionary dictionary中有兩個key,一個是data(100筆資料都在其中) 而另一個是next(下一個100筆的url在裡面,把它送出去會在得到另一個dictionary,裡面又含兩個key,一樣是data和next) 第一次送的 request data: 第100筆資料 next: 下100筆資料的url 第二次送的 request data: 第101-200筆資料 next: 下100筆資料的url 第三次送的 request data: 第201- 250筆資料 next: 無 (因為沒有下一百筆了) 總共送3次request 由於Facebook限制每次最多抓100篇posts,因此當粉絲頁超過100篇時, 就會有 next 的 url,必須送出此url在獲得下100篇,由 has_next_page 來決定 是否下100篇 num_processed是用來計算處理多少posts,每處理100筆就輸出時間 最後會把結果輸出成csv,供後續章節繼續分析和預測 End of explanation df = pd.DataFrame(all_statuses[1:], columns=all_statuses[0]) df.head() path = 'post/'+page_id+'_post.csv' df.to_csv(path,index=False,encoding='utf8') Explanation: 5234篇post共花了20分鐘,把結果存成csv交給下一章去分析 
all_statuses[0] holds the column names and all_statuses[1:] holds the structured records End of explanation
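A hedged sketch of the two patterns described above — retrying a request every 5 seconds until it succeeds, and following the paging "next" URL 100 posts at a time. The URL handling is generic; the 'data' and 'paging' keys follow the Graph API responses shown above, and max_retries is an added safeguard not present in the original.

```python
import time
import requests

def request_until_succeed(url, max_retries=10):
    # retry with a 5-second pause between attempts
    for _ in range(max_retries):
        try:
            resp = requests.get(url)
            if resp.status_code == 200:
                return resp.json()
        except requests.RequestException as exc:
            print("Error for URL %s: %s" % (url, exc))
        time.sleep(5)
    raise RuntimeError("Giving up on %s" % url)

def iterate_pages(first_url):
    # yield each page of posts, following 'paging' -> 'next' until it is absent
    payload = request_until_succeed(first_url)
    while True:
        yield payload.get("data", [])
        next_url = payload.get("paging", {}).get("next")
        if not next_url:
            break
        payload = request_until_succeed(next_url)
```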
12,964
Given the following text description, write Python code to implement the functionality described below step by step Description: UK schools cluster analysis This notebook explores some potential correlations between the features of our UK school datasets and then performs an agglomerative clustering saving the labeling results on disk for further visualisation. Step1: Load schools features Load UK schools dataset cleaned with cleaning.ipynb and sample 5 data points. Step2: Describe dataset Step3: Select only numerical features Step4: Show correlation matrix Step5: Preprocess data and standard scaling Step6: Show names of numerical features Step7: K-Means clustering Perform an agglomerative clustering and visualise a 3D scatter plot for 3 features. Namely Step8: Save features with cluster labels Save result of clustering to disk for further visualisation in a Plotly dash.
Python Code: import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn import preprocessing from sklearn.cluster import KMeans, AgglomerativeClustering from mpl_toolkits.mplot3d import Axes3D %matplotlib inline Explanation: UK schools cluster analysis This notebook explores some potential correlations between the features of our UK school datasets and then performs an agglomerative clustering saving the labeling results on disk for further visualisation. End of explanation schools = pd.read_csv('/project/uk-schools-clustering/data/derived/2016-2017_england.csv') schools.head(5) schools.columns.tolist() Explanation: Load schools features Load UK schools dataset cleaned with cleaning.ipynb and sample 5 data points. End of explanation schools.describe() Explanation: Describe dataset End of explanation X=np.array(schools[schools.columns[-19:]]).astype(float) header = schools.columns Explanation: Select only numerical features End of explanation fig = plt.figure(figsize=(12,8)) correlationMatrix = np.corrcoef(X, rowvar=0) plt.pcolor(correlationMatrix, cmap = 'hot', vmin=-1, vmax=1) plt.colorbar() plt.yticks(np.arange(0.5, 19), range(0,19)) plt.xticks(np.arange(0.5, 19), range(0,19)) plt.show() Explanation: Show correlation matrix End of explanation scaler = preprocessing.StandardScaler().fit(X) X_scaled = scaler.transform(X) Explanation: Preprocess data and standard scaling End of explanation header = schools.columns[-19:] header Explanation: Show names of numerical features End of explanation features = ['idaci score', 'on free meal', 'english not first language'] estimator = AgglomerativeClustering(n_clusters=2, linkage='average', affinity='cosine') x_index = header.tolist().index(features[0]) y_index = header.tolist().index(features[1]) z_index = header.tolist().index(features[2]) fig = plt.figure(1, figsize=(8, 7)) plt.clf() ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=28, azim=134) plt.cla() estimator.fit(X_scaled) labels = estimator.labels_ ax.scatter(X_scaled[:, x_index], X_scaled[:, y_index], X_scaled[:, z_index], c=labels.astype(np.float)) ax.w_xaxis.set_ticklabels([]) ax.w_yaxis.set_ticklabels([]) ax.w_zaxis.set_ticklabels([]) ax.set_xlabel(features[0]) ax.set_ylabel(features[1]) ax.set_zlabel(features[2]) Explanation: K-Means clustering Perform an agglomerative clustering and visualise a 3D scatter plot for 3 features. Namely: * IDACI score * Number of pupils on free meal * Number of pupils which english is not the first language End of explanation X_with_labels = np.insert(X, 19, labels, axis=1) column_names = header.tolist() column_names.append('cluster') clustered_schools = pd.DataFrame(X_with_labels, columns=column_names) clustered_schools['cluster'] = clustered_schools.cluster.astype(int) clustered_schools clustered_schools.insert(loc=0, column='name', value=schools['name']) clustered_schools.insert(loc=0, column='urn', value=schools['urn']) clustered_schools.to_csv('/project/uk-schools-clustering/data/derived/2016-2017_england_clusters.csv', index=False) Explanation: Save features with cluster labels Save result of clustering to disk for further visualisation in a Plotly dash. End of explanation
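A small self-contained sketch of the same pipeline on synthetic data (the column names mirror three of the school features, but the values are random): scale the numeric features, cluster, and attach the labels back onto the DataFrame.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
fake = pd.DataFrame({
    "idaci score": rng.rand(200),
    "on free meal": rng.poisson(30, 200),
    "english not first language": rng.poisson(10, 200),
})

X_scaled = StandardScaler().fit_transform(fake.values)   # zero mean, unit variance
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
fake["cluster"] = labels
print(fake["cluster"].value_counts())
```

KMeans is used here only to keep the sketch short; the notebook itself relies on AgglomerativeClustering with cosine affinity for the 3D scatter plot.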
12,965
Given the following text description, write Python code to implement the functionality described below step by step Description: Quelques rappels sur les chaînes de caractères Les chaînes de caractères s'écrivent entre guillemets (quotes en anglais), simples ou doubles. Elles peuvent être comparées entre elles avec les opérateurs classiques de comparaison Step1: Exercices Calculer le nombre d'occurrences d'un caractère dans une chaîne, de deux façons différentes. Calculer le nombre de caractères en minuscules dans une chaîne. Déterminer l'indice de la première occurrence d'un caractère dans une chaîne. Déterminier l'indice maximum d'un caractère dans une chaîne, c'est-à-dire de sa dernière occurrence. Inverser une chaîne de caractères. Calculer le nombre de mots dans une chaîne de caractères. Step2: Exercice maison (facile) Step4: Exercice maison Step5: Exercice maison Step6: Exercice maison
Python Code: txt1 = "Ceci est un texte" txt2 = 'ceci est un autre texte' print("A" < txt1) print("B" < txt2) print("A" >"a") print("Z" < "a" and "z" < "é") print(txt1 + txt2) print(len(txt1)) print(len(txt2)) print(txt1[2]) Explanation: Quelques rappels sur les chaînes de caractères Les chaînes de caractères s'écrivent entre guillemets (quotes en anglais), simples ou doubles. Elles peuvent être comparées entre elles avec les opérateurs classiques de comparaison : ==, !=, <, <=, >, >=. Attention : l'ordre des chaînes ressemble à l'ordre alphabétique, mais comporte des différences. Par exemple, les majuscules sont avant les minuscules ; les caractères accentués sont "en dehors" de l'alphabet. La concaténation (ou assemblage) de deux chaînes s'effecture grâce à l'opérateur +. La fonction len permet d'obtenir le nombre de caractères (longueur) d'une chaîne. Dans une chaîne, chaque caractère a un indice. Le premier caractère se situe à l'indice 0, le second à l'indice 1, etc. Pour accéder au caractère à l'indice i, dans une chaîne ch, on utilise la notation entre crochets ch[i]. Les chaînes sont des objets immutables : on ne modifie jamais l'intérieur d'une chaîne, on en crée une autre. Le code suivant illustre ces propos. End of explanation # Corrigé de l'exo 1 avec un for def nbEfor(chaine): ''' :entree chaine: une chaine de caractères dans laquelle on va compter le nb de e :sortie compteur: le nb de e dans la chaine ''' compteur = 0 for i in range (len(chaine)): if chaine[i] == 'e': compteur +=1 return compteur nbEfor("Bonjour comment allez vous ce matin ?") Explanation: Exercices Calculer le nombre d'occurrences d'un caractère dans une chaîne, de deux façons différentes. Calculer le nombre de caractères en minuscules dans une chaîne. Déterminer l'indice de la première occurrence d'un caractère dans une chaîne. Déterminier l'indice maximum d'un caractère dans une chaîne, c'est-à-dire de sa dernière occurrence. Inverser une chaîne de caractères. Calculer le nombre de mots dans une chaîne de caractères. End of explanation # Corrigé de l'exo 1 avec un while def nbEwhile(chaine): ''' :entree chaine: une chaine de caractères dans laquelle on va compter le nb de e :sortie compteur: le nb de e dans la chaine ''' compteur = 0 i = 0 while (i < len(chaine)): if chaine[i] == 'e': compteur +=1 i +=1 return compteur nbEwhile("Je veux un café !") # Exercice : qu'est-ce qui ne fonctionne pas dans cet algorithme (supposé répondre à l'exo 1) ? def nbEBis(chaine): compteur = 0 for lettre in chaine: if lettre == 'e': compteur +=1 return compteur nbEBis("Moi ça roule !") Explanation: Exercice maison (facile) : Modifiez le code précédent de sorte à ce que l'on passe le caractère à rechercher en paramètre... 
End of explanation if "c" in "abc": print("Le caractère est dans la chaîne") if "d" in "abc": print("Le caractère est dans la chaîne") else: print("Le caractère n'est pas dans la chaîne...") # Corrigé de l'exo 2 def compteurMinuscules(chaine): ''' :entree chaine: une chaîne de caractères :sortie nombre: le nombre de minuscules dans la chaîne ''' nombre = 0 for i in range(len(chaine)): if chaine[i] <= "z" and chaine[i] >= "a": nombre +=1 return nombre compteurMinuscules("J'écris TouT PeTIT i") # Corrigé de l'exo 3 def premiereOccurence(chaine, car): :entree chaine: une chaine de caractère :entree car: le caractère recherché :sortie occu: la position de la premiere occurence du caractere :post-cond: occu = -1 si le caractere n'est pas dans la chaîne i=0 occu = -1 while(occu == -1 and i < len(chaine)): if(chaine[i] == car): occu = i i +=1 return occu print(premiereOccurence("Bonjour", "j")) print(premiereOccurence("Bonjour", "e")) # Corrigé de l'exo 4 def derniereOccurence(chaine, car): ''' :entree chaine: une chaîne de caractères :entre car: le caractère recherché :sortie occu: l'indice de la dernière apparition du caractère dans la chaîne :post-cond: occu = -1 si le caractère n'est pas présent... ''' retour =-1 for pos in range (len(chaine)): if chaine[pos] == car: retour = pos return retour print(derniereOccurence("Hello world", "l")) print(derniereOccurence("Hello world", "d")) print(derniereOccurence("Hello world", "f")) Explanation: Exercice maison : identifiez le problème du code précédent et corrigez-le. Remarque sur le code ci-dessous : la recherche d'un caractère dans une chaîne est rendue très facile en Python... mais nous faisons de l'algo ici... nous cherchons donc des solutions génériques et transposables à d'autres langages de programmation. C'est pourquoi nous allons éviter de prendre ce genre de raccourci ! End of explanation #Corrigé de l'exercice 5 def miroir(chaine): ''' :entre la chaine de caractère :sortie la chaine de caractere avec les caracteres inversés ''' stri="" for i in range(0,len(chaine)): stri+=chaine[len(chaine)-i-1] return stri print(miroir("coucou vous")) Explanation: Exercice maison : Réécrivez cette méthode de recherche de la dernière occurrence en parcourant la liste à partir de la fin. End of explanation #Corrigé de l'exercice 6 ch=str(input("Entrez votre caractere")) def mots (ch): ''' :entree:une chaine de caracteres :sortie: nbr de mots ''' mot=int() for i in range (0,len(ch)-1): if ch[i]!=' ' and ch[i+1]==' ': mot+=1 return (mot+1) print (mots(ch)) Explanation: Exercice maison : Pour chaque algorithme vu dans ce TD, donner la complexité moyenne et la complexité dans le pire des cas. End of explanation
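One possible solution sketch for the take-home exercise above (finding the last occurrence by scanning the string from the end); this is not part of the original corrigés.

```python
def derniere_occurrence_inverse(chaine, car):
    # walk the indices from the end and stop at the first match
    for i in range(len(chaine) - 1, -1, -1):
        if chaine[i] == car:
            return i
    return -1  # character not found

print(derniere_occurrence_inverse("Hello world", "l"))  # 9
print(derniere_occurrence_inverse("Hello world", "f"))  # -1
```

In the worst case it still visits every character (O(n)), but it returns as soon as the character is found near the end of the string.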
12,966
Given the following text description, write Python code to implement the functionality described below step by step Description: Chainer basic module introduction Advanced memo is written as "Note". You can skip reading this for the first time reading. In this tutorial, basic chainer modules are introduced and explained * Variable * Link * Function * Chain For other chainer modules are explained in later tutorial. Initial setup Below is typecal import statement of chainer modules. Ref Step1: Check chainer version Step2: Variable Chainer variable can be created by Variable constructor, which creates chainer.Variable class object. When I write Variable, it means chainer's class for Variable. Please do not confuse with the usual noun of "variable". Note Step3: In the above code, numpy data type is explicitly set as dtype=np.float32. If we don't set data type, np.float64 may be used as default type in 64-bit environment. However such a precision is usually "too much" and not necessary in machine learning. It is better to use lower precision for computational speed & memory usage. attribute Chainer Variable has following attributes data dtype shape ndim size grad They are very similar to numpy.ndarray. You can access following attributes. Step4: One exception is data attribute, chainer Variable's data refers numpy.ndarray Step5: Function We want to process some calculation to Variable. Variable can be calculated using arithmetric operation (Ex. +, -, *, /) method which is subclass of chainer.Function (Ex. F.sigmoid, F.relu) Step6: Only basic calculation can be done with arithmetric operations. Chainer provides a set of widely used functions via chainer.functions, for example sigmoid function or ReLU (Rectified Linear Unit) function which is popularly used as activation function in deep learning. Step8: Note Step10: Link Link is similar to Function, but it owns internal parameter. This internal parameter is tuned during training of machine learning. Link is similar notion of Layer in caffe. Chainer provides layers which is introduced in popular papers via chainer.links. For example, Linear layer, Convolutional layer. Let's see the example, (below explanation is almost same with official tutorial) Step11: Note that internal parameter W is initialized with a random value. So every time you execute above code, the result will be different (try and check it!). This Linear layer will take 3-dimensional vectors [x0, x1, x2...] (Variable class) as input and outputs 2-dimensional vectors [y0, y1, y2...] (Variable class). In equation form, $$ y_i = W * x_i + b $$ where i = 0, 1, 2... denotes each "minibatch" of input/output. [Note] See source code of Linear class, you can easily understand it is just calling F.linear by <pre> return linear.linear(x, self.W, self.b) </pre> Step12: Let me emphasize the difference between Link and Function. Function``s input-output relationship is fixed. On the other hand,Link` module has internal parameter and the function behavior can be changed by modifying (tuning) this internal parameter. Step13: The value of output y is different compared to above code, even though we input same value of x. These internal parameters are "tuned" during training in machine learning. Usually, we do not need to set these internal parameter W or b manually, chainer will automatically update these internal parameters during training through back propagation. Chain Chain is to construct neural networks. It usually consists of several combination of Link and Function modules. 
Let's see an example. Step14: Based on the official doc, the Chain class provides the following functionality * parameter management * CPU/GPU migration support * save/load features to provide convenient reusability of your neural network code. Memo
Python Code: # Initial setup following import numpy as np import chainer from chainer import cuda, Function, gradient_check, report, training, utils, Variable from chainer import datasets, iterators, optimizers, serializers from chainer import Link, Chain, ChainList import chainer.functions as F import chainer.links as L from chainer.training import extensions Explanation: Chainer basic module introduction Advanced memo is written as "Note". You can skip reading this for the first time reading. In this tutorial, basic chainer modules are introduced and explained * Variable * Link * Function * Chain For other chainer modules are explained in later tutorial. Initial setup Below is typecal import statement of chainer modules. Ref: - http://docs.chainer.org/en/stable/tutorial/basic.html End of explanation print(chainer.__version__) Explanation: Check chainer version End of explanation from chainer import Variable # creating numpy array # this is `numpy.ndarray` class a = np.asarray([1., 2., 3.], dtype=np.float32) # chainer variable is created from `numpy.ndarray` or `cuda.ndarray` (explained later) x = Variable(a) print('a: ', a, ', type: ', type(a)) print('x: ', x, ', type: ', type(x)) Explanation: Variable Chainer variable can be created by Variable constructor, which creates chainer.Variable class object. When I write Variable, it means chainer's class for Variable. Please do not confuse with the usual noun of "variable". Note: the reason why chainer need to use own Variable, Function class for the calculation instead of just using numpy is because back propagation is necessary during deep learning training. Variable holds its "calculation history" information and Function has backward method which is differencial function in order to process back propagation. See below for more details Chainer Tutorial <a href="http://docs.chainer.org/en/stable/tutorial/basic.html#forward-backward-computation">Forward/Backward Computation</a> End of explanation # These attributes return the same print('attributes', 'numpy.ndarray a', 'chcainer.Variable x') print('dtype', a.dtype, x.dtype) print('shape', a.shape, x.shape) print('ndim', a.ndim, x.ndim) print('size', a.size, x.size) # Variable class has debug_print function, to show this Variable's properties. x.debug_print() Explanation: In the above code, numpy data type is explicitly set as dtype=np.float32. If we don't set data type, np.float64 may be used as default type in 64-bit environment. However such a precision is usually "too much" and not necessary in machine learning. It is better to use lower precision for computational speed & memory usage. attribute Chainer Variable has following attributes data dtype shape ndim size grad They are very similar to numpy.ndarray. You can access following attributes. End of explanation # x = Variable(a) # `a` can be accessed via `data` attribute from chainer `Variable` print('x.data is a : ', x.data is a) # True -> means the reference of x.data and a are same. print('x.data: ', x.data) Explanation: One exception is data attribute, chainer Variable's data refers numpy.ndarray End of explanation # Arithmetric operation example x = Variable(np.array([1, 2, 3], dtype=np.float32)) y = Variable(np.array([5, 6, 7], dtype=np.float32)) # usual arithmetric operator (this case `*`) can be used for calculation of `Variable` z = x * y print('z: ', z.data, ', type: ', type(z)) Explanation: Function We want to process some calculation to Variable. Variable can be calculated using arithmetric operation (Ex. 
+, -, *, /) method which is subclass of chainer.Function (Ex. F.sigmoid, F.relu) End of explanation # Functoin operation example import chainer.functions as F x = Variable(np.array([-1.5, -0.5, 0, 1, 2], dtype=np.float32)) sigmoid_x = F.sigmoid(x) # sigmoid function. F.sigmoid is subclass of `Function` relu_x = F.relu(x) # ReLU function. F.relu is subclass of `Function` print('x: ', x.data, ', type: ', type(x)) print('sigmoid_x: ', sigmoid_x.data, ', type: ', type(sigmoid_x)) print('relu_x: ', relu_x.data, ', type: ', type(relu_x)) Explanation: Only basic calculation can be done with arithmetric operations. Chainer provides a set of widely used functions via chainer.functions, for example sigmoid function or ReLU (Rectified Linear Unit) function which is popularly used as activation function in deep learning. End of explanation %matplotlib inline import matplotlib.pyplot as plt def plot_chainer_function(f, xmin=-5, xmax=5, title=None): draw graph of chainer `Function` `f` :param f: function to be plotted :type f: chainer.Function :param xmin: int or float, minimum value of x axis :param xmax: int or float, maximum value of x axis :return: a = np.arange(xmin, xmax, step=0.1) x = Variable(a) y = f(x) plt.clf() plt.figure() # x and y are `Variable`, their value can be accessed via `data` attribute plt.plot(x.data, y.data, 'r') if title is not None: plt.title(title) plt.show() plot_chainer_function(F.sigmoid, title='Sigmoid') plot_chainer_function(F.relu, title='ReLU') Explanation: Note: You can find capital letter of Function like F.Sigmoid or F.ReLU. Basically, these capital letter is actual class implmentation of Function while small letter method is getter method of these capital lettered instance. It is recommended to use small letter method when you use F.xxx. Just a side note, sigmoid and ReLU function are non-linear function whose form is like this. End of explanation import chainer.links as L in_size = 3 # input vector's dimension out_size = 2 # output vector's dimension linear_layer = L.Linear(in_size, out_size) # L.linear is subclass of `Link` linear_layer has 2 internal parameters `W` and `b`, which are `Variable` print('W: ', linear_layer.W.data, ', shape: ', linear_layer.W.shape) print('b: ', linear_layer.b.data, ', shape: ', linear_layer.b.shape) Explanation: Link Link is similar to Function, but it owns internal parameter. This internal parameter is tuned during training of machine learning. Link is similar notion of Layer in caffe. Chainer provides layers which is introduced in popular papers via chainer.links. For example, Linear layer, Convolutional layer. Let's see the example, (below explanation is almost same with official tutorial) End of explanation x0 = np.array([1, 0, 0], dtype=np.float32) x1 = np.array([1, 1, 1], dtype=np.float32) x = Variable(np.array([x0, x1], dtype=np.float32)) y = linear_layer(x) print('W: ', linear_layer.W.data) print('b: ', linear_layer.b.data) print('x: ', x.data) # input is x0 & x1 print('y: ', y.data) # output is y0 & y1 Explanation: Note that internal parameter W is initialized with a random value. So every time you execute above code, the result will be different (try and check it!). This Linear layer will take 3-dimensional vectors [x0, x1, x2...] (Variable class) as input and outputs 2-dimensional vectors [y0, y1, y2...] (Variable class). In equation form, $$ y_i = W * x_i + b $$ where i = 0, 1, 2... denotes each "minibatch" of input/output. 
[Note] See source code of Linear class, you can easily understand it is just calling F.linear by <pre> return linear.linear(x, self.W, self.b) </pre> End of explanation # Force update (set) internal parameters linear_layer.W.data = np.array([[1, 2, 3], [0, 0, 0]], dtype=np.float32) linear_layer.b.data = np.array([3, 5], dtype=np.float32) # below is same code with above cell, but output data y will be different x0 = np.array([1, 0, 0], dtype=np.float32) x1 = np.array([1, 1, 1], dtype=np.float32) x = Variable(np.array([x0, x1], dtype=np.float32)) y = linear_layer(x) print('W: ', linear_layer.W.data) print('b: ', linear_layer.b.data) print('x: ', x.data) # input is x0 & x1 print('y: ', y.data) # output is y0 & y1 Explanation: Let me emphasize the difference between Link and Function. Function``s input-output relationship is fixed. On the other hand,Link` module has internal parameter and the function behavior can be changed by modifying (tuning) this internal parameter. End of explanation from chainer import Chain, Variable # Defining your own neural networks using `Chain` class class MyChain(Chain): def __init__(self): super(MyChain, self).__init__() with self.init_scope(): self.l1 = L.Linear(2, 2) self.l2 = L.Linear(2, 1) def __call__(self, x): h = self.l1(x) return self.l2(h) x = Variable(np.array([[1, 2], [3, 4]], dtype=np.float32)) model = MyChain() y = model(x) print('x: ', x.data) # input is x0 & x1 print('y: ', y.data) # output is y0 & y1 Explanation: The value of output y is different compared to above code, even though we input same value of x. These internal parameters are "tuned" during training in machine learning. Usually, we do not need to set these internal parameter W or b manually, chainer will automatically update these internal parameters during training through back propagation. Chain Chain is to construct neural networks. It usually consists of several combination of Link and Function modules. Let's see example, End of explanation from chainer import Chain, Variable # Defining your own neural networks using `Chain` class class MyChain(Chain): def __init__(self): super(MyChain, self).__init__( l1=L.Linear(2, 2), l2=L.Linear(2, 1) ) def __call__(self, x): h = self.l1(x) return self.l2(h) x = Variable(np.array([[1, 2], [3, 4]], dtype=np.float32)) model = MyChain() y = model(x) print('x: ', x.data) # input is x0 & x1 print('y: ', y.data) # output is y0 & y1 Explanation: Based on the official doc, Chain class provides following functionality * parameter management * CPU/GPU migration support * save/load features to provide convinient reusability of your neural network code. Memo: Above init_scope() method is introduced in chainer v2, and Link class instances are initialized inside this scope. In chainer v1, Chain was initialized as follows. Concretely, Link class instances are initialized in the argument of super method. For backward compatibility, you may use this type of initialization in chainer v2 as well. End of explanation
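A plain-NumPy sketch of the affine map described above, y_i = W * x_i + b, applied to a minibatch of row vectors. The shapes are chosen for illustration and make no claim about Chainer's internal parameter layout.

```python
import numpy as np

W = np.random.randn(3, 2).astype(np.float32)   # maps 3-dim inputs to 2-dim outputs
b = np.random.randn(2).astype(np.float32)

x = np.array([[1, 0, 0], [1, 1, 1]], dtype=np.float32)  # minibatch of two inputs
y = x @ W + b                                           # one output row per input row
print(y.shape)  # (2, 2)
```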
12,967
Given the following text description, write Python code to implement the functionality described below step by step Description: 5368175 function calls (5360007 primitive calls) in 17.618 seconds Ordered by Step1: Computing occupancy statistics Need to compute a bunch of output stats to use for visualization, metamodeling and to evaluate scenarios. Step2: Create dictionary to append to result data frame Each new record (a scenario and rep) is stored in a dictionary. Then append the dictionary to the results dataframe. Step3: Misc debugging cells
Python Code: hm.run_hillmaker(scenario_name,stops_df,in_fld_name, out_fld_name,cat_fld_name,start_analysis,end_analysis,tot_fld_name,bin_size_mins,categories=includecats,outputpath='./testing') occ_df = pd.read_csv(fn_occ_summary) bydt_df = pd.read_csv(fn_bydatetime) def num_gt_0(column): return (column != 0).sum() def get_stats(group, stub=''): return {stub+'count': group.count(), stub+'mean': group.mean(), stub+'min': group.min(), stub+'num_gt_0': num_gt_0(group), stub+'max': group.max(), stub+'stdev': group.std(), stub+'p01': group.quantile(0.01), stub+'p025': group.quantile(0.025), stub+'p05': group.quantile(0.05), stub+'p25': group.quantile(0.25), stub+'p50': group.quantile(0.5), stub+'p75': group.quantile(0.75), stub+'p90': group.quantile(0.9), stub+'p95': group.quantile(0.95), stub+'p975': group.quantile(0.975), stub+'p99': group.quantile(0.99)} pp_occ = bydt_df[(bydt_df['category'] == 'PP')]['occupancy'] pp_occ.describe() get_stats(pp_occ) ldr_occ = bydt_df[(bydt_df['category'] == 'LDR')]['occupancy'] ldr_occ.describe() get_stats(ldr_occ) plt.hist(pp_occ.values,12) plt.hist(ldr_occ.values,20) bydt_df.head() sns.tsplot(pp_occ); pp_occ.head() occ_df Explanation: 5368175 function calls (5360007 primitive calls) in 17.618 seconds Ordered by: internal time List reduced from 955 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 70524 4.363 0.000 8.160 0.000 hillpylib.py:97(dt_floor) 17631 1.436 0.000 5.527 0.000 hillpylib.py:146(occ_frac) 70524 1.129 0.000 1.235 0.000 offsets.py:198(apply) 1 0.871 0.871 16.276 16.276 bydatetime.py:25(make_bydatetime) 70526 0.800 0.000 1.124 0.000 offsets.py:177(_determine_offset) 17631 0.749 0.000 0.780 0.000 hillpylib.py:136(numbins) 70524 0.648 0.000 2.123 0.000 offsets.py:44(wrapper) 68659 0.494 0.000 0.494 0.000 {method 'set_value' of 'pandas.index.IndexEngine' objects} 70968 0.476 0.000 0.480 0.000 {method 'get_value' of 'pandas.index.IndexEngine' objects} 68659 0.466 0.000 1.868 0.000 indexing.py:1518(setitem) End of explanation ldr_occ_stats = get_stats(ldr_occ) pp_occ_stats = get_stats(pp_occ) grp_all = stops_df.groupby(['Unit']) blocked_uncond_stats = grp_all['Entered_TriedToEnter'].apply(get_stats,'delay_') blocked_uncond_stats grp_blocked = stops_df[(stops_df['Entered_TriedToEnter'] > 0)].groupby(['Unit']) blocked_cond_stats = grp_blocked['Entered_TriedToEnter'].apply(get_stats,'delay_') blocked_cond_stats blocked_cond_stats.index blocked_cond_stats[(1,'LDR','test_mean')] Explanation: Computing occupancy statistics Need to compute a bunch of output stats to use for visualization, metamodeling and to evaluate scenarios. 
End of explanation newrec = {} newrec['scenario'] = scenario_num newrec['rep'] = rep_num newrec['occ_mean_ldr'] = ldr_occ_stats['mean'] newrec['occ_p05_ldr'] = ldr_occ_stats['p05'] newrec['occ_p25_ldr'] = ldr_occ_stats['p25'] newrec['occ_p50_ldr'] = ldr_occ_stats['p50'] newrec['occ_p75_ldr'] = ldr_occ_stats['p75'] newrec['occ_p95_ldr'] = ldr_occ_stats['p95'] newrec['occ_mean_pp'] = pp_occ_stats['mean'] newrec['occ_p05_pp'] = pp_occ_stats['p05'] newrec['occ_p25_pp'] = pp_occ_stats['p25'] newrec['occ_p50_pp'] = pp_occ_stats['p50'] newrec['occ_p75_pp'] = pp_occ_stats['p75'] newrec['occ_p95_pp'] = pp_occ_stats['p95'] newrec['pct_waitq_ldr'] = blocked_uncond_stats[('LDR','delay_num_gt_0')]/blocked_uncond_stats[('LDR','delay_count')] newrec['waitq_ldr_mean'] = blocked_cond_stats[('LDR','delay_mean')] newrec['waitq_ldr_p95'] = blocked_cond_stats[('LDR','delay_p95')] newrec['pct_blocked_ldr'] = blocked_uncond_stats[('PP','delay_num_gt_0')]/blocked_uncond_stats[('PP','delay_count')] newrec['blocked_ldr_mean'] = blocked_cond_stats[('PP','delay_mean')] newrec['blocked_ldr_p95'] = blocked_cond_stats[('PP','delay_p95')] print(newrec) Explanation: Create dictionary to append to result data frame Each new record (a scenario and rep) is stored in a dictionary. Then append the dictionary to the results dataframe. End of explanation a_start = pd.Timestamp(start_analysis) a_end = pd.Timestamp(end_analysis) print(a_start,a_end) left_PP_df = df[(df['EnteredTS'] < a_start) & (a_start <= df['ExitedTS']) & (df['ExitedTS'] < a_end) & (df['Unit'] == 'PP')] right_PP_df = df[(a_start <= df['EnteredTS']) & (df['EnteredTS'] < a_end) & (df['ExitedTS'] >= a_end) & (df['Unit'] == 'PP')] print(right_PP_df.shape) right_PP_df[:][['EnteredTS','ExitedTS']] print(left_PP_df.shape) left_PP_df[:][['EnteredTS','ExitedTS']] Explanation: Misc debugging cells End of explanation
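A NumPy-only sketch of the two post-processing steps used above — rotating grid-relative velocity components by the grid angle and taking the vector magnitude. The values are synthetic, and the sign convention of the rotation depends on how the grid angle is defined.

```python
import numpy as np

u = np.array([[1.0, 0.5], [0.2, 0.0]])
v = np.array([[0.0, 0.5], [0.4, 1.0]])
angle = np.deg2rad([[30.0, 30.0], [45.0, 45.0]])  # grid angle converted to radians

u_rot = u * np.cos(angle) - v * np.sin(angle)
v_rot = u * np.sin(angle) + v * np.cos(angle)
speed = np.hypot(u_rot, v_rot)   # magnitude is unchanged by the rotation
print(speed)
```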
12,968
Given the following text description, write Python code to implement the functionality described below step by step Description: Conditional Probability Solution First we'll modify the code to have some fixed purchase probability regardless of age, say 40% Step1: Next we will compute P(E|F) for some age group, let's pick 30 year olds again Step2: Now we'll compute P(E)
Python Code: from numpy import random random.seed(0) totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} totalPurchases = 0 for _ in range(100000): ageDecade = random.choice([20, 30, 40, 50, 60, 70]) purchaseProbability = 0.4 totals[ageDecade] += 1 if (random.random() < purchaseProbability): totalPurchases += 1 purchases[ageDecade] += 1 Explanation: Conditional Probability Solution First we'll modify the code to have some fixed purchase probability regardless of age, say 40%: End of explanation PEF = float(purchases[30]) / float(totals[30]) print "P(purchase | 30s): ", PEF Explanation: Next we will compute P(E|F) for some age group, let's pick 30 year olds again: End of explanation PE = float(totalPurchases) / 100000.0 print "P(Purchase):", PE Explanation: Now we'll compute P(E) End of explanation
12,969
Given the following text description, write Python code to implement the functionality described below step by step Description: pysgrid only works with raw netCDF4 (for now!) Step1: The sgrid object Step2: The object knows about sgrid conventions Step3: Being generic is nice! This is an improvement up on my first design ;-) ... Step4: (Don't be scared, you do not need the sgrid object to get the variables. This just shows that there is a one-to-one mapping from the sgrid object to the netCDF4 object.) Step5: ... but we need a better way to deal with the slice of the slice! Step6: Some thing for the angle information Step7: Average velocity vectors to cell centers Step8: Rotate vectors by angles Step9: Speed Step10: Lon, lat of the center grid (This is kind of clunky... or maybe I just do not get the sgrid concept beyond the ROMS world.) Step11: Plotting
Python Code: from netCDF4 import Dataset url = ('http://geoport.whoi.edu/thredds/dodsC/clay/usgs/users/' 'jcwarner/Projects/Sandy/triple_nest/00_dir_NYB05.ncml') nc = Dataset(url) Explanation: pysgrid only works with raw netCDF4 (for now!) End of explanation import pysgrid # The object creation is a little bit slow. Can we defer some of the loading/computations? sgrid = pysgrid.from_nc_dataset(nc) sgrid # We need a better __repr__ and __str__ !!! Explanation: The sgrid object End of explanation sgrid.edge1_coordinates, sgrid.edge1_dimensions, sgrid.edge1_padding u_var = sgrid.u u_var.center_axis, u_var.node_axis v_var = sgrid.v v_var.center_axis, v_var.node_axis Explanation: The object knows about sgrid conventions End of explanation u_var.center_slicing v_var.center_slicing Explanation: Being generic is nice! This is an improvement up on my first design ;-) ... End of explanation u_velocity = nc.variables[u_var.variable] v_velocity = nc.variables[v_var.variable] Explanation: (Don't be scared, you do not need the sgrid object to get the variables. This just shows that there is a one-to-one mapping from the sgrid object to the netCDF4 object.) End of explanation from datetime import datetime, timedelta from netCDF4 import date2index t_var = nc.variables['ocean_time'] start = datetime(2012, 10, 30, 0, 0) time_idx = date2index(start, t_var, select='nearest') v_idx = 0 # Slice of the slice! u_data = u_velocity[time_idx, v_idx, u_var.center_slicing[-2], u_var.center_slicing[-1]] v_data = v_velocity[time_idx, v_idx, v_var.center_slicing[-2], v_var.center_slicing[-1]] Explanation: ... but we need a better way to deal with the slice of the slice! End of explanation angle = sgrid.angle angles = nc.variables[angle.variable][angle.center_slicing] Explanation: Some thing for the angle information End of explanation from pysgrid.processing_2d import avg_to_cell_center u_avg = avg_to_cell_center(u_data, u_var.center_axis) v_avg = avg_to_cell_center(v_data, v_var.center_axis) Explanation: Average velocity vectors to cell centers End of explanation from pysgrid.processing_2d import rotate_vectors u_rot, v_rot = rotate_vectors(u_avg, v_avg, angles) Explanation: Rotate vectors by angles End of explanation from pysgrid.processing_2d import vector_sum uv_vector_sum = vector_sum(u_rot, v_rot) Explanation: Speed End of explanation grid_cell_centers = sgrid.centers # Array of lon, lat pairs. lon_var_name, lat_var_name = sgrid.face_coordinates sg_lon = getattr(sgrid, lon_var_name) sg_lat = getattr(sgrid, lat_var_name) lon_data = grid_cell_centers[..., 0][sg_lon.center_slicing] lat_data = grid_cell_centers[..., 1][sg_lat.center_slicing] Explanation: Lon, lat of the center grid (This is kind of clunky... or maybe I just do not get the sgrid concept beyond the ROMS world.) 
End of explanation %matplotlib inline import numpy as np import matplotlib.pyplot as plt import cartopy.crs as ccrs from cartopy.io import shapereader from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER def make_map(projection=ccrs.PlateCarree(), figsize=(9, 9)): fig, ax = plt.subplots(figsize=figsize, subplot_kw=dict(projection=projection)) gl = ax.gridlines(draw_labels=True) gl.xlabels_top = gl.ylabels_right = False gl.xformatter = LONGITUDE_FORMATTER gl.yformatter = LATITUDE_FORMATTER return fig, ax sub = 10 scale = 0.06 fig, ax = make_map() kw = dict(scale=1.0/scale, pivot='middle', width=0.003, color='black') q = plt.quiver(lon_data[::sub, ::sub], lat_data[::sub, ::sub], u_rot[::sub, ::sub], v_rot[::sub, ::sub], zorder=2, **kw) cs = plt.pcolormesh(lon_data[::sub, ::sub], lat_data[::sub, ::sub], uv_vector_sum[::sub, ::sub], zorder=1, cmap=plt.cm.rainbow) _ = ax.coastlines('10m') Explanation: Plotting End of explanation
12,970
Given the following text description, write Python code to implement the functionality described below step by step Description: Parameter identification example Here is a simple toy model that we use to demonstrate the working of the inference package $\emptyset \xrightarrow[]{k_1} X \; \; \; \; X \xrightarrow[]{d_1} \emptyset$ Run the MCMC algorithm to identify parameters from the experimental data In this demonstration, we will try to use multiple trajectories of data taken under multiple initial conditions and different length of time points? Step1: Using Gaussian prior for k1 Step2: Using uniform priors and estimating both k1 and d1 and use the pid => parameter inference object directly. Step3: Check mcmc_results.csv for the results of the MCMC procedure and perform your own analysis. You can also plot the results as follows
Python Code: %matplotlib inline %config InlineBackend.figure_format = "retina" from matplotlib import rcParams rcParams["savefig.dpi"] = 100 rcParams["figure.dpi"] = 100 rcParams["font.size"] = 20 Explanation: Parameter identification example Here is a simple toy model that we use to demonstrate the working of the inference package $\emptyset \xrightarrow[]{k_1} X \; \; \; \; X \xrightarrow[]{d_1} \emptyset$ Run the MCMC algorithm to identify parameters from the experimental data In this demonstration, we will try to use multiple trajectories of data taken under multiple initial conditions and different length of time points? End of explanation %matplotlib inline import bioscrape as bs from bioscrape.types import Model from bioscrape.inference import py_inference import numpy as np import pylab as plt import pandas as pd # Import a bioscrape/SBML model M = Model(sbml_filename = 'toy_sbml_model.xml') # Import data from CSV # Import a CSV file for each experiment run df = pd.read_csv('test_data.csv', delimiter = '\t', names = ['X','time'], skiprows = 1) M.set_species({'X':df['X'][0]}) # Create prior for parameters prior = {'d1' : ['gaussian', 0.2, 200]} sampler, pid = py_inference(Model = M, exp_data = df, measurements = ['X'], time_column = ['time'], nwalkers = 5, init_seed = 0.15, nsteps = 1500, sim_type = 'deterministic', params_to_estimate = ['d1'], prior = prior) Explanation: Using Gaussian prior for k1 End of explanation %matplotlib inline import bioscrape as bs from bioscrape.types import Model from bioscrape.inference import py_inference import numpy as np import pylab as plt import pandas as pd # Import a bioscrape/SBML model M = Model(sbml_filename = 'toy_sbml_model.xml') # Import data from CSV # Import a CSV file for each experiment run df = pd.read_csv('test_data.csv', delimiter = '\t', names = ['X','time'], skiprows = 1) M.set_species({'X':df['X'][0]}) prior = {'d1' : ['uniform', 0, 10], 'k1' : ['uniform', 0, 100]} sampler, pid = py_inference(Model = M, exp_data = df, measurements = ['X'], time_column = ['time'], nwalkers = 20, init_seed = 0.15, nsteps = 5500, sim_type = 'deterministic', params_to_estimate = ['d1', 'k1'], prior = prior) Explanation: Using uniform priors and estimating both k1 and d1 and use the pid => parameter inference object directly. End of explanation from bioscrape.simulator import py_simulate_model M_fit = Model(sbml_filename = 'toy_sbml_model.xml') M_fit.set_species({'X':df['X'][0]}) timepoints = pid.timepoints flat_samples = sampler.get_chain(discard=200, thin=15, flat=True) inds = np.random.randint(len(flat_samples), size=200) for ind in inds: sample = flat_samples[ind] for pi, pi_val in zip(pid.params_to_estimate, sample): M_fit.set_parameter(pi, pi_val) plt.plot(timepoints, py_simulate_model(timepoints, Model= M_fit)['X'], "C1", alpha=0.1) # plt.errorbar(, y, yerr=yerr, fmt=".k", capsize=0) # plt.plot(timepoints, list(pid.exp_data['X']), label = 'data') plt.plot(timepoints, py_simulate_model(timepoints, Model = M)['X'], "k", label="original model") plt.legend(fontsize=14) plt.xlabel("Time") plt.ylabel("[X]"); flat_samples = sampler.get_chain(discard = 200, thin = 15,flat = True) flat_samples Explanation: Check mcmc_results.csv for the results of the MCMC procedure and perform your own analysis. You can also plot the results as follows End of explanation
12,971
Given the following text description, write Python code to implement the functionality described below step by step Description: numpy.vectorize Step1: Multi-core processing Step2: Single core Step3: Threads ```python %%time args = [(x, i) for i, x in enumerate(data)] def plot_one_(arg) Step4: Parallel comprehensions with joblib Step5: Blocking and non-blocking calls
Python Code: def in_unit_circle(x, y): if x**2 + y**2 < 1: return 1 else: return 0 @numba.vectorize('int64(float64, float64)',target='cpu') def in_unit_circle_serial(x, y): if x**2 + y**2 < 1: return 1 else: return 0 @numba.vectorize('int64(float64, float64)',target='parallel') def in_unit_circle_multicore(x, y): if x**2 + y**2 < 1: return 1 else: return 0 n = int(1e7) xs, ys = np.random.random((2, n)) %%time 4 * np.sum(in_unit_circle(x, y) for x, y in zip(xs, ys))/n %%time 4 * np.sum(in_unit_circle_serial(xs, ys))/n %%time 4 * np.sum(in_unit_circle_multicore(xs, ys))/n Explanation: numpy.vectorize End of explanation def plot_one(data, name): xs, ys = data.T plt.scatter(xs, ys, s=1, edgecolor=None) plt.savefig('%s.png' % name) return name data = np.random.random((10, 10000, 2)) Explanation: Multi-core processing End of explanation %%time for i, M in enumerate(data): plot_one(M, i) Explanation: Single core End of explanation %%time args = [(x, i) for i, x in enumerate(data)] with mp.Pool() as pool: pool.starmap(plot_one, args) %%time args = [(x, i) for i, x in enumerate(data)] with mp.Pool() as pool: results = pool.starmap_async(plot_one, args) Explanation: Threads ```python %%time args = [(x, i) for i, x in enumerate(data)] def plot_one_(arg): return plot_one(*arg) with ThreadPoolExecutor() as pool: pool.map(plot_one_, args) ``` Processes End of explanation %%time Parallel(n_jobs=-1)(delayed(plot_one)(x, i) for i, x in enumerate(data)) pass Explanation: Parallel comprehensions with joblib End of explanation def f(x): import time time.sleep(np.random.randint(0, 5)) return x %%time with mp.Pool(processes=4) as pool: result = pool.map(f, range(10)) result %%time pool = mp.Pool(processes=4) result = pool.map_async(f, range(10)) if result.ready() and result.successful(): print(result.get()) else: print(result.wait()) Explanation: Blocking and non-blocking calls End of explanation
12,972
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2020 The TensorFlow Authors. Step1: TensorFlow の NumPy API <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: NumPy 動作の有効化 tnp を NumPy として使用するには、TensorFlow の NumPy の動作を有効にしてください。 Step3: この呼び出しによって、TensorFlow での型昇格が可能になり、リテラルからテンソルに変換される場合に、型推論も Numpy の標準により厳格に従うように変更されます。 注意 Step4: 型昇格 TensorFlow NumPy API には、リテラルを ND 配列に変換するためと ND 配列入力で型昇格を実行するための明確に定義されたセマンティクスがあります。詳細については、np.result_type をご覧ください。 TensorFlow API は tf.Tensor 入力を変更せずそのままにし、それに対して型昇格を実行しませんが、TensorFlow NumPy API は NumPy 型昇格のルールに従って、すべての入力を昇格します。次の例では、型昇格を行います。まず、さまざまな型の ND 配列入力で加算を実行し、出力の型を確認します。これらの型昇格は、TensorFlow API では行えません。 Step5: 最後に、ndarray.asarray を使ってリテラルをND 配列に変換し、結果の型を確認します。 Step6: リテラルを ND 配列に変換する際、NumPy は tnp.int64 や tnp.float64 といった幅広い型を優先します。一方、tf.convert_to_tensor は、tf.int32 と tf.float32 の型を優先して定数を tf.Tensor に変換します。TensorFlow NumPy API は、整数に関しては NumPy の動作に従っています。浮動小数点数については、experimental_enable_numpy_behavior の prefer_float32 引数によって、tf.float64 よりも tf.float32 を優先するかどうかを制御することができます(デフォルトは False です)。以下に例を示します。 Step7: ブロードキャスティング TensorFlow と同様に、NumPy は「ブロードキャスト」値の豊富なセマンティクスを定義します。詳細については、NumPy ブロードキャストガイドを確認し、これを TensorFlow ブロードキャストセマンティクスと比較してください。 Step8: インデックス NumPy は、非常に洗練されたインデックス作成ルールを定義しています。NumPy インデックスガイドを参照してください。以下では、インデックスとして ND 配列が使用されていることに注意してください。 Step10: サンプルモデル 次に、モデルを作成して推論を実行する方法を見てみます。この簡単なモデルは、relu レイヤーとそれに続く線形射影を適用します。後のセクションでは、TensorFlow のGradientTapeを使用してこのモデルの勾配を計算する方法を示します。 Step11: TensorFlow NumPy および NumPy TensorFlow NumPy は、完全な NumPy 仕様のサブセットを実装します。シンボルは、今後追加される予定ですが、近い将来にサポートされなくなる体系的な機能があります。これらには、NumPy C API サポート、Swig 統合、Fortran ストレージ優先順位、ビュー、stride_tricks、およびいくつかのdtype(np.recarrayや<code> np.object</code>)が含まれます。詳細については、 <a>TensorFlow NumPy API ドキュメント</a>をご覧ください。 NumPy 相互運用性 TensorFlow ND 配列は、NumPy 関数と相互運用できます。これらのオブジェクトは、__array__インターフェースを実装します。NumPy はこのインターフェースを使用して、関数の引数を処理する前にnp.ndarray値に変換します。 同様に、TensorFlow NumPy 関数は、np.ndarray などのさまざまなタイプの入力を受け入れることができます。これらの入力は、<code>ndarray.asarray</code> を呼び出すことにより、ND 配列に変換されます。 ND 配列をnp.ndarrayとの間で変換すると、実際のデータコピーがトリガーされる場合があります。詳細については、バッファコピーのセクションを参照してください。 Step12: バッファコピー TensorFlow NumPy を NumPy コードと混在させると、データコピーがトリガーされる場合があります。これは、TensorFlow NumPy のメモリアライメントに関する要件が NumPy の要件よりも厳しいためです。 np.ndarrayが TensorFlow Numpy に渡されると、アライメント要件を確認し、必要に応じてコピーがトリガーされます。ND 配列 CPU バッファを NumPy に渡す場合、通常、バッファはアライメント要件を満たし、NumPy はコピーを作成する必要はありません。 ND 配列は、ローカル CPU メモリ以外のデバイスに配置されたバッファを参照できます。このような場合、NumPy 関数を呼び出すと、必要に応じてネットワークまたはデバイス全体でコピーが作成されます。 このため、NumPy API 呼び出しとの混合は通常、注意して行い、ユーザーはデータのコピーのオーバーヘッドに注意する必要があります。TensorFlow NumPy 呼び出しを TensorFlow 呼び出しとインターリーブすることは一般的に安全であり、データのコピーを避けられます。 詳細については、TensorFlow の相互運用性のセクションをご覧ください。 演算子の優先順位 TensorFlow NumPy は、NumPy よりも優先順位の高い__array_priority__を定義します。つまり、ND 配列とnp.ndarrayの両方を含む演算子の場合、前者が優先されます。np.ndarray入力は ND 配列に変換され、演算子の TensorFlow NumPy 実装が呼び出されます。 Step13: TF NumPy と TensorFlow TensorFlow NumPy は TensorFlow の上に構築されているため、TensorFlow とシームレスに相互運用できます。 tf.Tensor と ND 配列 ND 配列は tf.Tensor のエイリアスであるため、実際のデータのコピーを呼び出さずに混在させることが可能です。 Step14: TensorFlow 相互運用性 ND 配列は tf.Tensor のエイリアスにすぎないため、TensorFlow API に渡すことができます。前述のように、このような相互運用では、アクセラレータやリモートデバイスに配置されたデータであっても、データのコピーは行われません。 逆に言えば、tf.Tensor オブジェクトを、データのコピーを実行せずに、tf.experimental.numpy API に渡すことができます。 Step17: 勾配とヤコビアン Step18: トレースコンパイル Step19: ベクトル化:tf.vectorized_map TensorFlow には、並列ループのベクトル化のサポートが組み込まれているため、10 倍から 100 倍のスピードアップが可能です。これらのスピードアップは、tf.vectorized_map API を介して実行でき、TensorFlow 
NumPy にも適用されます。 w.r.t. (対応する入力バッチ要素)バッチで各出力の勾配を計算すると便利な場合があります。このような計算は、以下に示すように tf.vectorized_map を使用して効率的に実行できます。 Step20: デバイスに配置する TensorFlow NumPy は、CPU、GPU、TPU、およびリモートデバイスに演算を配置できます。デバイスにおける配置には標準の TensorFlow メカニズムを使用します。以下の簡単な例は、すべてのデバイスを一覧表示してから、特定のデバイスに計算を配置する方法を示しています。 ここでは取り上げませんが、TensorFlow には、デバイス間で計算を複製し、集合的な削減を実行するための API もあります。 デバイスをリストする 使用するデバイスを見つけるには、tf.config.list_logical_devices およびtf.config.list_physical_devices を使用します。 Step21: 演算の配置:tf.device デバイスに演算を配置するには、tf.device スコープでデバイスを呼び出します。 Step22: デバイス間での ND 配列のコピー Step25: パフォーマンスの比較 TensorFlow NumPy は、CPU、GPU、TPU にディスパッチできる高度に最適化された TensorFlow カーネルを使用します。TensorFlow は、演算の融合など、多くのコンパイラ最適化も実行し、パフォーマンスとメモリを向上します。詳細については、Grappler を使用した TensorFlow グラフの最適化をご覧ください。 ただし、TensorFlow では、NumPy と比較してディスパッチ演算のオーバーヘッドが高くなります。小規模な演算(約 10 マイクロ秒未満)で構成されるワークロードの場合、これらのオーバーヘッドがランタイムを支配する可能性があり、NumPy はより優れたパフォーマンスを提供する可能性があります。その他の場合、一般的に TensorFlow を使用するとパフォーマンスが向上するはずです。 以下のベンチマークを実行して、さまざまな入力サイズでの NumPy と TensorFlow Numpy のパフォーマンスを比較します。
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2020 The TensorFlow Authors. End of explanation import matplotlib.pyplot as plt import numpy as np import tensorflow as tf import tensorflow.experimental.numpy as tnp import timeit print("Using TensorFlow version %s" % tf.__version__) Explanation: TensorFlow の NumPy API <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/tf_numpy"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/tf_numpy.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/tf_numpy.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/tf_numpy.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td> </table> 概要 TensorFlow では、tf.experimental.numpyを利用してNumPy API のサブセットを実装します。これにより、TensorFlow により高速化された NumPy コードを実行し、TensorFlow のすべて API にもアクセスできます。 セットアップ End of explanation tnp.experimental_enable_numpy_behavior() Explanation: NumPy 動作の有効化 tnp を NumPy として使用するには、TensorFlow の NumPy の動作を有効にしてください。 End of explanation # Create an ND array and check out different attributes. ones = tnp.ones([5, 3], dtype=tnp.float32) print("Created ND array with shape = %s, rank = %s, " "dtype = %s on device = %s\n" % ( ones.shape, ones.ndim, ones.dtype, ones.device)) # `ndarray` is just an alias to `tf.Tensor`. print("Is `ones` an instance of tf.Tensor: %s\n" % isinstance(ones, tf.Tensor)) # Try commonly used member functions. 
print("ndarray.T has shape %s" % str(ones.T.shape)) print("narray.reshape(-1) has shape %s" % ones.reshape(-1).shape) Explanation: この呼び出しによって、TensorFlow での型昇格が可能になり、リテラルからテンソルに変換される場合に、型推論も Numpy の標準により厳格に従うように変更されます。 注意: この呼び出しは、tf.experimental.numpy モジュールだけでなく、TensorFlow 全体の動作を変更します。 TensorFlow NumPy ND 配列 ND 配列と呼ばれる tf.experimental.numpy.ndarray は、特定のデバイスに配置されたある dtype の多次元の密な配列を表します。tf.Tensor のエイリアスです。ndarray.T、ndarray.reshape、ndarray.ravel などの便利なメソッドについては、ND 配列クラスをご覧ください。 まず、ND 配列オブジェクトを作成してから、さまざまなメソッドを呼び出します。 End of explanation print("Type promotion for operations") values = [tnp.asarray(1, dtype=d) for d in (tnp.int32, tnp.int64, tnp.float32, tnp.float64)] for i, v1 in enumerate(values): for v2 in values[i + 1:]: print("%s + %s => %s" % (v1.dtype.name, v2.dtype.name, (v1 + v2).dtype.name)) Explanation: 型昇格 TensorFlow NumPy API には、リテラルを ND 配列に変換するためと ND 配列入力で型昇格を実行するための明確に定義されたセマンティクスがあります。詳細については、np.result_type をご覧ください。 TensorFlow API は tf.Tensor 入力を変更せずそのままにし、それに対して型昇格を実行しませんが、TensorFlow NumPy API は NumPy 型昇格のルールに従って、すべての入力を昇格します。次の例では、型昇格を行います。まず、さまざまな型の ND 配列入力で加算を実行し、出力の型を確認します。これらの型昇格は、TensorFlow API では行えません。 End of explanation print("Type inference during array creation") print("tnp.asarray(1).dtype == tnp.%s" % tnp.asarray(1).dtype.name) print("tnp.asarray(1.).dtype == tnp.%s\n" % tnp.asarray(1.).dtype.name) Explanation: 最後に、ndarray.asarray を使ってリテラルをND 配列に変換し、結果の型を確認します。 End of explanation tnp.experimental_enable_numpy_behavior(prefer_float32=True) print("When prefer_float32 is True:") print("tnp.asarray(1.).dtype == tnp.%s" % tnp.asarray(1.).dtype.name) print("tnp.add(1., 2.).dtype == tnp.%s" % tnp.add(1., 2.).dtype.name) tnp.experimental_enable_numpy_behavior(prefer_float32=False) print("When prefer_float32 is False:") print("tnp.asarray(1.).dtype == tnp.%s" % tnp.asarray(1.).dtype.name) print("tnp.add(1., 2.).dtype == tnp.%s" % tnp.add(1., 2.).dtype.name) Explanation: リテラルを ND 配列に変換する際、NumPy は tnp.int64 や tnp.float64 といった幅広い型を優先します。一方、tf.convert_to_tensor は、tf.int32 と tf.float32 の型を優先して定数を tf.Tensor に変換します。TensorFlow NumPy API は、整数に関しては NumPy の動作に従っています。浮動小数点数については、experimental_enable_numpy_behavior の prefer_float32 引数によって、tf.float64 よりも tf.float32 を優先するかどうかを制御することができます(デフォルトは False です)。以下に例を示します。 End of explanation x = tnp.ones([2, 3]) y = tnp.ones([3]) z = tnp.ones([1, 2, 1]) print("Broadcasting shapes %s, %s and %s gives shape %s" % ( x.shape, y.shape, z.shape, (x + y + z).shape)) Explanation: ブロードキャスティング TensorFlow と同様に、NumPy は「ブロードキャスト」値の豊富なセマンティクスを定義します。詳細については、NumPy ブロードキャストガイドを確認し、これを TensorFlow ブロードキャストセマンティクスと比較してください。 End of explanation x = tnp.arange(24).reshape(2, 3, 4) print("Basic indexing") print(x[1, tnp.newaxis, 1:3, ...], "\n") print("Boolean indexing") print(x[:, (True, False, True)], "\n") print("Advanced indexing") print(x[1, (0, 0, 1), tnp.asarray([0, 1, 1])]) # Mutation is currently not supported try: tnp.arange(6)[1] = -1 except TypeError: print("Currently, TensorFlow NumPy does not support mutation.") Explanation: インデックス NumPy は、非常に洗練されたインデックス作成ルールを定義しています。NumPy インデックスガイドを参照してください。以下では、インデックスとして ND 配列が使用されていることに注意してください。 End of explanation class Model(object): Model with a dense and a linear layer. def __init__(self): self.weights = None def predict(self, inputs): if self.weights is None: size = inputs.shape[1] # Note that type `tnp.float32` is used for performance. 
stddev = tnp.sqrt(size).astype(tnp.float32) w1 = tnp.random.randn(size, 64).astype(tnp.float32) / stddev bias = tnp.random.randn(64).astype(tnp.float32) w2 = tnp.random.randn(64, 2).astype(tnp.float32) / 8 self.weights = (w1, bias, w2) else: w1, bias, w2 = self.weights y = tnp.matmul(inputs, w1) + bias y = tnp.maximum(y, 0) # Relu return tnp.matmul(y, w2) # Linear projection model = Model() # Create input data and compute predictions. print(model.predict(tnp.ones([2, 32], dtype=tnp.float32))) Explanation: サンプルモデル 次に、モデルを作成して推論を実行する方法を見てみます。この簡単なモデルは、relu レイヤーとそれに続く線形射影を適用します。後のセクションでは、TensorFlow のGradientTapeを使用してこのモデルの勾配を計算する方法を示します。 End of explanation # ND array passed into NumPy function. np_sum = np.sum(tnp.ones([2, 3])) print("sum = %s. Class: %s" % (float(np_sum), np_sum.__class__)) # `np.ndarray` passed into TensorFlow NumPy function. tnp_sum = tnp.sum(np.ones([2, 3])) print("sum = %s. Class: %s" % (float(tnp_sum), tnp_sum.__class__)) # It is easy to plot ND arrays, given the __array__ interface. labels = 15 + 2 * tnp.random.randn(1, 1000) _ = plt.hist(labels) Explanation: TensorFlow NumPy および NumPy TensorFlow NumPy は、完全な NumPy 仕様のサブセットを実装します。シンボルは、今後追加される予定ですが、近い将来にサポートされなくなる体系的な機能があります。これらには、NumPy C API サポート、Swig 統合、Fortran ストレージ優先順位、ビュー、stride_tricks、およびいくつかのdtype(np.recarrayや<code> np.object</code>)が含まれます。詳細については、 <a>TensorFlow NumPy API ドキュメント</a>をご覧ください。 NumPy 相互運用性 TensorFlow ND 配列は、NumPy 関数と相互運用できます。これらのオブジェクトは、__array__インターフェースを実装します。NumPy はこのインターフェースを使用して、関数の引数を処理する前にnp.ndarray値に変換します。 同様に、TensorFlow NumPy 関数は、np.ndarray などのさまざまなタイプの入力を受け入れることができます。これらの入力は、<code>ndarray.asarray</code> を呼び出すことにより、ND 配列に変換されます。 ND 配列をnp.ndarrayとの間で変換すると、実際のデータコピーがトリガーされる場合があります。詳細については、バッファコピーのセクションを参照してください。 End of explanation x = tnp.ones([2]) + np.ones([2]) print("x = %s\nclass = %s" % (x, x.__class__)) Explanation: バッファコピー TensorFlow NumPy を NumPy コードと混在させると、データコピーがトリガーされる場合があります。これは、TensorFlow NumPy のメモリアライメントに関する要件が NumPy の要件よりも厳しいためです。 np.ndarrayが TensorFlow Numpy に渡されると、アライメント要件を確認し、必要に応じてコピーがトリガーされます。ND 配列 CPU バッファを NumPy に渡す場合、通常、バッファはアライメント要件を満たし、NumPy はコピーを作成する必要はありません。 ND 配列は、ローカル CPU メモリ以外のデバイスに配置されたバッファを参照できます。このような場合、NumPy 関数を呼び出すと、必要に応じてネットワークまたはデバイス全体でコピーが作成されます。 このため、NumPy API 呼び出しとの混合は通常、注意して行い、ユーザーはデータのコピーのオーバーヘッドに注意する必要があります。TensorFlow NumPy 呼び出しを TensorFlow 呼び出しとインターリーブすることは一般的に安全であり、データのコピーを避けられます。 詳細については、TensorFlow の相互運用性のセクションをご覧ください。 演算子の優先順位 TensorFlow NumPy は、NumPy よりも優先順位の高い__array_priority__を定義します。つまり、ND 配列とnp.ndarrayの両方を含む演算子の場合、前者が優先されます。np.ndarray入力は ND 配列に変換され、演算子の TensorFlow NumPy 実装が呼び出されます。 End of explanation x = tf.constant([1, 2]) print(x) # `asarray` and `convert_to_tensor` here are no-ops. tnp_x = tnp.asarray(x) print(tnp_x) print(tf.convert_to_tensor(tnp_x)) # Note that tf.Tensor.numpy() will continue to return `np.ndarray`. print(x.numpy(), x.numpy().__class__) Explanation: TF NumPy と TensorFlow TensorFlow NumPy は TensorFlow の上に構築されているため、TensorFlow とシームレスに相互運用できます。 tf.Tensor と ND 配列 ND 配列は tf.Tensor のエイリアスであるため、実際のデータのコピーを呼び出さずに混在させることが可能です。 End of explanation # ND array passed into TensorFlow function. tf_sum = tf.reduce_sum(tnp.ones([2, 3], tnp.float32)) print("Output = %s" % tf_sum) # `tf.Tensor` passed into TensorFlow NumPy function. 
tnp_sum = tnp.sum(tf.ones([2, 3])) print("Output = %s" % tnp_sum) Explanation: TensorFlow 相互運用性 ND 配列は tf.Tensor のエイリアスにすぎないため、TensorFlow API に渡すことができます。前述のように、このような相互運用では、アクセラレータやリモートデバイスに配置されたデータであっても、データのコピーは行われません。 逆に言えば、tf.Tensor オブジェクトを、データのコピーを実行せずに、tf.experimental.numpy API に渡すことができます。 End of explanation def create_batch(batch_size=32): Creates a batch of input and labels. return (tnp.random.randn(batch_size, 32).astype(tnp.float32), tnp.random.randn(batch_size, 2).astype(tnp.float32)) def compute_gradients(model, inputs, labels): Computes gradients of squared loss between model prediction and labels. with tf.GradientTape() as tape: assert model.weights is not None # Note that `model.weights` need to be explicitly watched since they # are not tf.Variables. tape.watch(model.weights) # Compute prediction and loss prediction = model.predict(inputs) loss = tnp.sum(tnp.square(prediction - labels)) # This call computes the gradient through the computation above. return tape.gradient(loss, model.weights) inputs, labels = create_batch() gradients = compute_gradients(model, inputs, labels) # Inspect the shapes of returned gradients to verify they match the # parameter shapes. print("Parameter shapes:", [w.shape for w in model.weights]) print("Gradient shapes:", [g.shape for g in gradients]) # Verify that gradients are of type ND array. assert isinstance(gradients[0], tnp.ndarray) # Computes a batch of jacobians. Each row is the jacobian of an element in the # batch of outputs w.r.t. the corresponding input batch element. def prediction_batch_jacobian(inputs): with tf.GradientTape() as tape: tape.watch(inputs) prediction = model.predict(inputs) return prediction, tape.batch_jacobian(prediction, inputs) inp_batch = tnp.ones([16, 32], tnp.float32) output, batch_jacobian = prediction_batch_jacobian(inp_batch) # Note how the batch jacobian shape relates to the input and output shapes. print("Output shape: %s, input shape: %s" % (output.shape, inp_batch.shape)) print("Batch jacobian shape:", batch_jacobian.shape) Explanation: 勾配とヤコビアン: tf.GradientTape TensorFlow の GradientTape は、TensorFlow と TensorFlow NumPy コードを介してバックプロパゲーションに使用できます。 サンプルモデルセクションで作成されたモデルを使用して、勾配とヤコビアンを計算します。 End of explanation inputs, labels = create_batch(512) print("Eager performance") compute_gradients(model, inputs, labels) print(timeit.timeit(lambda: compute_gradients(model, inputs, labels), number=10) * 100, "ms") print("\ntf.function compiled performance") compiled_compute_gradients = tf.function(compute_gradients) compiled_compute_gradients(model, inputs, labels) # warmup print(timeit.timeit(lambda: compiled_compute_gradients(model, inputs, labels), number=10) * 100, "ms") Explanation: トレースコンパイル: tf.function Tensorflow の tf.function は、コードを「トレースコンパイル」し、これらのトレースを最適化してパフォーマンスを大幅に向上させます。グラフと関数の概要を参照してください。 また、tf.function を使用して、TensorFlow NumPy コードを最適化することもできます。以下は、スピードアップを示す簡単な例です。tf.function コードの本文には、TensorFlow NumPy API への呼び出しが含まれていることに注意してください。 End of explanation @tf.function def vectorized_per_example_gradients(inputs, labels): def single_example_gradient(arg): inp, label = arg return compute_gradients(model, tnp.expand_dims(inp, 0), tnp.expand_dims(label, 0)) # Note that a call to `tf.vectorized_map` semantically maps # `single_example_gradient` over each row of `inputs` and `labels`. # The interface is similar to `tf.map_fn`. # The underlying machinery vectorizes away this map loop which gives # nice speedups. 
return tf.vectorized_map(single_example_gradient, (inputs, labels)) batch_size = 128 inputs, labels = create_batch(batch_size) per_example_gradients = vectorized_per_example_gradients(inputs, labels) for w, p in zip(model.weights, per_example_gradients): print("Weight shape: %s, batch size: %s, per example gradient shape: %s " % ( w.shape, batch_size, p.shape)) # Benchmark the vectorized computation above and compare with # unvectorized sequential computation using `tf.map_fn`. @tf.function def unvectorized_per_example_gradients(inputs, labels): def single_example_gradient(arg): inp, label = arg return compute_gradients(model, tnp.expand_dims(inp, 0), tnp.expand_dims(label, 0)) return tf.map_fn(single_example_gradient, (inputs, labels), fn_output_signature=(tf.float32, tf.float32, tf.float32)) print("Running vectorized computation") print(timeit.timeit(lambda: vectorized_per_example_gradients(inputs, labels), number=10) * 100, "ms") print("\nRunning unvectorized computation") per_example_gradients = unvectorized_per_example_gradients(inputs, labels) print(timeit.timeit(lambda: unvectorized_per_example_gradients(inputs, labels), number=10) * 100, "ms") Explanation: ベクトル化:tf.vectorized_map TensorFlow には、並列ループのベクトル化のサポートが組み込まれているため、10 倍から 100 倍のスピードアップが可能です。これらのスピードアップは、tf.vectorized_map API を介して実行でき、TensorFlow NumPy にも適用されます。 w.r.t. (対応する入力バッチ要素)バッチで各出力の勾配を計算すると便利な場合があります。このような計算は、以下に示すように tf.vectorized_map を使用して効率的に実行できます。 End of explanation print("All logical devices:", tf.config.list_logical_devices()) print("All physical devices:", tf.config.list_physical_devices()) # Try to get the GPU device. If unavailable, fallback to CPU. try: device = tf.config.list_logical_devices(device_type="GPU")[0] except IndexError: device = "/device:CPU:0" Explanation: デバイスに配置する TensorFlow NumPy は、CPU、GPU、TPU、およびリモートデバイスに演算を配置できます。デバイスにおける配置には標準の TensorFlow メカニズムを使用します。以下の簡単な例は、すべてのデバイスを一覧表示してから、特定のデバイスに計算を配置する方法を示しています。 ここでは取り上げませんが、TensorFlow には、デバイス間で計算を複製し、集合的な削減を実行するための API もあります。 デバイスをリストする 使用するデバイスを見つけるには、tf.config.list_logical_devices およびtf.config.list_physical_devices を使用します。 End of explanation print("Using device: %s" % str(device)) # Run operations in the `tf.device` scope. # If a GPU is available, these operations execute on the GPU and outputs are # placed on the GPU memory. with tf.device(device): prediction = model.predict(create_batch(5)[0]) print("prediction is placed on %s" % prediction.device) Explanation: 演算の配置:tf.device デバイスに演算を配置するには、tf.device スコープでデバイスを呼び出します。 End of explanation with tf.device("/device:CPU:0"): prediction_cpu = tnp.copy(prediction) print(prediction.device) print(prediction_cpu.device) Explanation: デバイス間での ND 配列のコピー: tnp.copy 特定のデバイススコープで tnp.copy を呼び出すと、データがそのデバイスに既に存在しない限り、そのデバイスにデータがコピーされます。 End of explanation def benchmark(f, inputs, number=30, force_gpu_sync=False): Utility to benchmark `f` on each value in `inputs`. times = [] for inp in inputs: def _g(): if force_gpu_sync: one = tnp.asarray(1) f(inp) if force_gpu_sync: with tf.device("CPU:0"): tnp.copy(one) # Force a sync for GPU case _g() # warmup t = timeit.timeit(_g, number=number) times.append(t * 1000. / number) return times def plot(np_times, tnp_times, compiled_tnp_times, has_gpu, tnp_times_gpu): Plot the different runtimes. 
plt.xlabel("size") plt.ylabel("time (ms)") plt.title("Sigmoid benchmark: TF NumPy vs NumPy") plt.plot(sizes, np_times, label="NumPy") plt.plot(sizes, tnp_times, label="TF NumPy (CPU)") plt.plot(sizes, compiled_tnp_times, label="Compiled TF NumPy (CPU)") if has_gpu: plt.plot(sizes, tnp_times_gpu, label="TF NumPy (GPU)") plt.legend() # Define a simple implementation of `sigmoid`, and benchmark it using # NumPy and TensorFlow NumPy for different input sizes. def np_sigmoid(y): return 1. / (1. + np.exp(-y)) def tnp_sigmoid(y): return 1. / (1. + tnp.exp(-y)) @tf.function def compiled_tnp_sigmoid(y): return tnp_sigmoid(y) sizes = (2 ** 0, 2 ** 5, 2 ** 10, 2 ** 15, 2 ** 20) np_inputs = [np.random.randn(size).astype(np.float32) for size in sizes] np_times = benchmark(np_sigmoid, np_inputs) with tf.device("/device:CPU:0"): tnp_inputs = [tnp.random.randn(size).astype(np.float32) for size in sizes] tnp_times = benchmark(tnp_sigmoid, tnp_inputs) compiled_tnp_times = benchmark(compiled_tnp_sigmoid, tnp_inputs) has_gpu = len(tf.config.list_logical_devices("GPU")) if has_gpu: with tf.device("/device:GPU:0"): tnp_inputs = [tnp.random.randn(size).astype(np.float32) for size in sizes] tnp_times_gpu = benchmark(compiled_tnp_sigmoid, tnp_inputs, 100, True) else: tnp_times_gpu = None plot(np_times, tnp_times, compiled_tnp_times, has_gpu, tnp_times_gpu) Explanation: パフォーマンスの比較 TensorFlow NumPy は、CPU、GPU、TPU にディスパッチできる高度に最適化された TensorFlow カーネルを使用します。TensorFlow は、演算の融合など、多くのコンパイラ最適化も実行し、パフォーマンスとメモリを向上します。詳細については、Grappler を使用した TensorFlow グラフの最適化をご覧ください。 ただし、TensorFlow では、NumPy と比較してディスパッチ演算のオーバーヘッドが高くなります。小規模な演算(約 10 マイクロ秒未満)で構成されるワークロードの場合、これらのオーバーヘッドがランタイムを支配する可能性があり、NumPy はより優れたパフォーマンスを提供する可能性があります。その他の場合、一般的に TensorFlow を使用するとパフォーマンスが向上するはずです。 以下のベンチマークを実行して、さまざまな入力サイズでの NumPy と TensorFlow Numpy のパフォーマンスを比較します。 End of explanation
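The closing paragraph above notes that for workloads made of very small operations (under roughly 10 microseconds) TensorFlow's dispatch overhead can dominate, so plain NumPy may be faster. As an illustrative addition (not part of the original notebook), here is a minimal sketch that isolates the per-call cost of one tiny add, reusing the np, tnp and timeit imports from this notebook; the array size and repeat count are arbitrary choices:
small_np = np.ones(10, dtype=np.float32)
small_tnp = tnp.ones([10], dtype=tnp.float32)

# Time a single tiny element-wise add; at this size dispatch overhead dominates the runtime.
reps = 10000
t_np = timeit.timeit(lambda: np.add(small_np, 1.0), number=reps)
t_tnp = timeit.timeit(lambda: tnp.add(small_tnp, 1.0), number=reps)

print("NumPy tiny add: %.2f us per call" % (t_np / reps * 1e6))
print("TF NumPy tiny add: %.2f us per call" % (t_tnp / reps * 1e6))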
12,973
Given the following text description, write Python code to implement the functionality described below step by step Description: The atmosphere and its layers The World Meteorological Organization (WMO) defines the atmosphere as Step1: Comparing coesa62 and coesa76 Also known as U.S. Standard Atmosphere, the atmospheric model coesa76 is just an update of its predecessor coesa62. The difference is that geopotential heights diverge for higher altitudes. Let us plot the temperature against increasing altitude for both atmospheric models. Step2: Temperature, pressure and density distributions One of the advantages of COESA76 is that it extends up to 1000 kilometers. The behaviour of these magnitudes against geometrical altitude can be checked in the following figure. A logarithmic scale is applied for pressure and density to better see their decay for high altitude values.
Python Code: from poliastro.atmosphere import COESA62, COESA76 from astropy import units as u import numpy as np import matplotlib.pyplot as plt Explanation: The atmosphere and its layers The World Meteorological Organization (WMO) defines the atmosphere as: A hypotetical vertical distribution of atmospheric temperature, pressure and density which by international agreement and for historical reasons, is roughly representative of year-round, midlatitude conditions. In fact, the atmosphere is the mean that makes the link between the ground and space and it is crucial when studying perturbations since drag affects LEO satellites. Therefore, it was necessary to develop some mathematical model based that could predict all the different conditions stated in WMO atmosphere definitoin for given altitudes. Along history different models have been developed: ISA: up to 11 km. ISA-ICAO: up to 80 km. COESA 1962: up to 700 km. COESA 1976: up to 1000 km. Jacchia-Roberts Since some of them are implemented in poliastro, let us compare the differences among them. End of explanation # We build the atmospheric instances coesa62 = COESA62() coesa76 = COESA76() # Create the figure fig, ax = plt.subplots(figsize=(10,10)) ax.set_title("U.S Standard Atmospheres") # Collect all atmospheric models and define their plotting properties atm_models = {coesa62: ["--r", "r", "Coesa 1962"], coesa76: ["-b", "b", "Coesa 1976"]} # Solve atmospheric temperature for each of the models for atm in atm_models: z_span = np.linspace(0, 86, 100) * u.km T_span = np.array([]) * u.K for z in z_span: # We discard density and pressure T = atm.temperature(z) T_span = np.append(T_span, T) # Temperature plot ax.plot(T_span, z_span, atm_models[atm][0], label=atm_models[atm][-1]) ax.plot(atm.Tb_levels[:8], atm.zb_levels[:8], atm_models[atm][1] + "o") ax.set_xlim(150, 300) ax.set_ylim(0, 100) ax.set_xlabel("Temperature $[K]$") ax.set_ylabel("Altitude $[km]$") ax.legend() # Add some information on the plot ax.annotate( "Tropopause", xy=(coesa76.Tb_levels[1].value, coesa76.zb_levels[1].value), xytext=(coesa76.Tb_levels[1].value + 10, coesa76.zb_levels[1].value + 5), arrowprops=dict(arrowstyle="simple", facecolor="black") ) ax.annotate( "Stratopause", xy=(coesa76.Tb_levels[4].value, coesa76.zb_levels[4].value), xytext=(coesa76.Tb_levels[4].value - 25, coesa76.zb_levels[4].value + 5), arrowprops=dict(arrowstyle="simple", facecolor="black") ) ax.annotate( "Mesopause", xy=(coesa76.Tb_levels[7].value, coesa76.zb_levels[7].value), xytext=(coesa76.Tb_levels[7].value + 10, coesa76.zb_levels[7].value + 5), arrowprops=dict(arrowstyle="simple", facecolor="black") ) # Layers in the atmosphere for h in [11.019, 47.350, 86]: ax.axhline(h, color='k', linestyle='--', xmin=0.0, xmax=0.35) ax.axhline(h, color='k', linestyle='-', xmin=0.0, xmax=0.15) layer_names = {"TROPOSPHERE": 5, "STRATOSPHERE": 30, "MESOSPHERE": 65, "THERMOSPHERE": 90} for name in layer_names: ax.annotate( name, xy=(152, layer_names[name]), xytext=(152, layer_names[name]), ) Explanation: Comparing coesa62 and coesa76 Also known as U.S. Standard Atmosphere, the atmospheric model coesa76 is just an update of his little brother coesa62. The difference is that geopotential heights diverge for higher altitudes. Let us plot the Temperature as increasing altitude for both atmospheric models. 
End of explanation # We create the basis for the figure fig, axs = plt.subplots(1, 3, figsize=(12, 5)) fig.suptitle("State variables against altitude", fontweight="bold") fig.text(0.04, 0.5, 'Altitude [km]', va='center', rotation='vertical') # Complete altitude range and initialization of state variables sets alt_span = np.linspace(0, 1000, 1001) * u.km T_span = np.array([]) * u.K p_span = np.array([]) * u.Pa rho_span = np.array([]) * u.kg / u.m ** 3 # We solve for each property at given altitude for alt in alt_span: T, p, rho = coesa76.properties(alt) T_span = np.append(T_span, T) p_span = np.append(p_span, p.to(u.Pa)) rho_span = np.append(rho_span, rho) # Temperature plot axs[0].set_title("Temperature") axs[0].set_xlabel("T [K]") axs[0].set_xlabel("Altitude [K]") axs[0].plot(T_span, alt_span) # Pressure plot axs[1].set_title("Pressure") axs[1].set_xlabel("p [Pa]") axs[1].plot(p_span, alt_span) axs[1].set_xscale('log') # Density plot axs[2].set_title("Density") axs[2].set_xlabel(r"$\rho$ [kg/m3]") axs[2].plot(rho_span, alt_span) axs[2].set_xscale('log') Explanation: Temperature, pressure and density distrubutions One of the advantages of COESA76 is that it extends up to 1000 kilometers. The behaviour of previous magnitudes againts geometrical altitude can be checked in the following figure. A logarithmic scale is applied for pressure and density to better see their decay for high altitude values. End of explanation
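As an illustrative addition to the notebook above: the plotting loop calls coesa76.properties over the whole altitude range, and the same call also works for a single altitude. A minimal sketch reusing the coesa76 instance and the astropy units already defined above (the 400 km altitude is an arbitrary example value, not from the original):
alt = 400 * u.km

# Query temperature, pressure and density at one geometrical altitude.
T, p, rho = coesa76.properties(alt)
print("Temperature at {}: {}".format(alt, T))
print("Pressure at {}: {}".format(alt, p.to(u.Pa)))
print("Density at {}: {}".format(alt, rho))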
12,974
Given the following text description, write Python code to implement the functionality described below step by step Description: Stopword Removal from Media Unit & Annotation In this tutorial, we will show how dimensionality reduction can be applied over both the media units and the annotations of a crowdsourcing task, and how this impacts the results of the CrowdTruth quality metrics. We start with an open-ended extraction task, where the crowd was asked to highlight words or phrases in a text that identify or refer to people in a video. The task was executed on Figure Eight. This is how the task looked to the workers Step1: Notice the diverse behavior of the crowd workers. While most annotated each word individually, the worker on row 5 annotated chunks of the sentence together as single multi-word phrases. Also, when no answer was picked by the worker, the value in the cell is NaN. A basic pre-processing configuration Our basic pre-processing configuration attempts to normalize the different ways of performing the crowd annotations. We set remove_empty_rows = False to keep the empty rows from the crowd. This configuration option will set all empty cell values to correspond to a NONE token in the annotation vector. We build the annotation vector to have one component for each word in the sentence. To do this, we break up multiple-word annotations into a list of single words in the processJudgments call Step2: Now we can pre-process the data and run the CrowdTruth metrics Step3: Removing stopwords from Media Units and Annotations A more complex dimensionality reduction technique involves removing the stopwords from both the media units and the crowd annotations. Stopwords (i.e. words that are very common in the English language) do not usually contain much useful information. Also, the behavior of the crowd w.r.t. them is inconsistent - some workers omit them, some annotate them. The first step is to build a function that removes stopwords from strings. We will use the stopwords corpus in the nltk package to get the list of words. We want to build a function that can be reused for both the text in the media units and in the annotations column. Also, we need to be careful about omitting punctuation. The function remove_stop_words does all of these things Step4: In the new configuration class ConfigDimRed, we apply the function we just built to both the column that contains the media unit text (inputColumns[2]) and the column containing the crowd annotations (outputColumns[0]) Step5: Now we can pre-process the data and run the CrowdTruth metrics Step6: Effect on CrowdTruth metrics Finally, we can compare the effect of the stopword removal on the CrowdTruth sentence quality score. Step7: The red line in the plot runs through the diagonal. All sentences above the line have a higher sentence quality score when the stopwords were removed. The plot shows that removing the stopwords improved the quality for a majority of the sentences. Surprisingly though, some sentences decreased in quality. This effect can be understood when plotting the worker quality scores.
Python Code: import pandas as pd test_data = pd.read_csv("data/person-video-highlight.csv") test_data["taggedinsubtitles"][0:30] Explanation: Stopword Removal from Media Unit & Annotation In this tutorial, we will show how dimensionality reduction can be applied over both the media units and the annotations of a crowdsourcing task, and how this impacts the results of the CrowdTruth quality metrics. We start with an open-ended extraction task, where the crowd was asked to highlight words or phrases in a text that identify or refer to people in a video. The task was executed on Figure Eight. This is how the task looked like to the workers: A sample dataset for this task is available in this file, containing raw output from the crowd on FigureEight. Download the file and place it in a folder named data that has the same root as this notebook. The answers from the crowd are stored in the taggedinsubtitles column. End of explanation import crowdtruth from crowdtruth.configuration import DefaultConfig class Config(DefaultConfig): inputColumns = ["ctunitid", "videolocation", "subtitles"] outputColumns = ["taggedinsubtitles"] open_ended_task = True annotation_separator = "," remove_empty_rows = False def processJudgments(self, judgments): # build annotation vector just from words judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply( lambda x: str(x).replace(' ',self.annotation_separator)) # normalize vector elements judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply( lambda x: str(x).replace('[','')) judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply( lambda x: str(x).replace(']','')) judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply( lambda x: str(x).replace('"','')) return judgments Explanation: Notice the diverse behavior of the crowd workers. While most annotated each word individually, the worker on row 5 annotated chunks of the sentence together in one word phrase. Also, when no answer was picked by the worker, the value in the cell is NaN. A basic pre-processing configuration Our basic pre-processing configuration attempts to normalize the different ways of performing the crowd annotations. We set remove_empty_rows = False to keep the empty rows from the crowd. This configuration option will set all empty cell values to correspond to a NONE token in the annotation vector. We build the annotation vector to have one component for each word in the sentence. 
To do this, we break up multiple-word annotations into a list of single words in the processJudgments call: judgments[self.outputColumns[0]] = judgments[self.outputColumns[0]].apply( lambda x: str(x).replace(' ',self.annotation_separator)) The final configuration class Config is this: End of explanation data_with_stopwords, config_with_stopwords = crowdtruth.load( file = "data/person-video-highlight.csv", config = Config() ) processed_results_with_stopwords = crowdtruth.run( data_with_stopwords, config_with_stopwords ) Explanation: Now we can pre-process the data and run the CrowdTruth metrics: End of explanation import nltk from nltk.corpus import stopwords import string stopword_set = set(stopwords.words('english')) stopword_set.update(['s']) def remove_stop_words(words_string, sep): ''' words_string: string containing all words sep: separator character for the words in words_string ''' words_list = words_string.replace("'", sep).split(sep) corrected_words_list = "" for word in words_list: if word.translate(None, string.punctuation) not in stopword_set: if corrected_words_list != "": corrected_words_list += sep corrected_words_list += word return corrected_words_list Explanation: Removing stopwords from Media Units and Annotations A more complex dimensionality reduction technique involves removing the stopwords from both the media units and the crowd annotations. Stopwords (i.e. words that are very common in the English language) do not usually contain much useful information. Also, the behavior of the crowds w.r.t them is inconsistent - some workers omit them, some annotate them. The first step is to build a function that removes stopwords from strings. We will use the stopwords corpus in the nltk package to get the list of words. We want to build a function that can be reused for both the text in the media units and in the annotations column. Also, we need to be careful about omitting punctuation. 
The function remove_stop_words does all of these things: End of explanation import pandas as pd class ConfigDimRed(Config): def processJudgments(self, judgments): judgments = Config.processJudgments(self, judgments) # remove stopwords from input sentence for idx in range(len(judgments[self.inputColumns[2]])): judgments.at[idx, self.inputColumns[2]] = remove_stop_words( judgments[self.inputColumns[2]][idx], " ") for idx in range(len(judgments[self.outputColumns[0]])): judgments.at[idx, self.outputColumns[0]] = remove_stop_words( judgments[self.outputColumns[0]][idx], self.annotation_separator) if judgments[self.outputColumns[0]][idx] == "": judgments.at[idx, self.outputColumns[0]] = self.none_token return judgments Explanation: In the new configuration class ConfigDimRed, we apply the function we just built to both the column that contains the media unit text (inputColumns[2]), and the column containing the crowd annotations (outputColumns[0]): End of explanation data_without_stopwords, config_without_stopwords = crowdtruth.load( file = "data/person-video-highlight.csv", config = ConfigDimRed() ) processed_results_without_stopwords = crowdtruth.run( data_without_stopwords, config_without_stopwords ) Explanation: Now we can pre-process the data and run the CrowdTruth metrics: End of explanation %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.scatter( processed_results_with_stopwords["units"]["uqs"], processed_results_without_stopwords["units"]["uqs"], ) plt.plot([0, 1], [0, 1], 'red', linewidth=1) plt.title("Sentence Quality Score") plt.xlabel("with stopwords") plt.ylabel("without stopwords") Explanation: Effect on CrowdTruth metrics Finally, we can compare the effect of the stopword removal on the CrowdTruth sentence quality score. End of explanation plt.scatter( processed_results_with_stopwords["workers"]["wqs"], processed_results_without_stopwords["workers"]["wqs"], ) plt.plot([0, 0.6], [0, 0.6], 'red', linewidth=1) plt.title("Worker Quality Score") plt.xlabel("with stopwords") plt.ylabel("without stopwords") Explanation: The red line in the plot runs through the diagonal. All sentences above the line have a higher sentence quality score when the stopwords were removed. The plot shows that removing the stopwords improved the quality for a majority of the sentences. Surprisingly though, some sentences decreased in quality. This effect can be understood when plotting the worker quality scores. End of explanation
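One caveat worth noting as an addition to the tutorial above: the remove_stop_words helper calls word.translate(None, string.punctuation), which is the Python 2 form of str.translate and raises a TypeError under Python 3. A hedged sketch of an equivalent Python 3 helper follows (same nltk stopword set and separator convention as the original; the name remove_stop_words_py3 and the translation-table variable are new names introduced only for this sketch):
import string
from nltk.corpus import stopwords

stopword_set = set(stopwords.words('english'))
stopword_set.update(['s'])
# str.maketrans builds the table once; in Python 3, translate() takes a single table argument.
punct_table = str.maketrans('', '', string.punctuation)

def remove_stop_words_py3(words_string, sep):
    # Split on the separator (and apostrophes, as the original helper does).
    words_list = words_string.replace("'", sep).split(sep)
    kept = [word for word in words_list
            if word.translate(punct_table) not in stopword_set]
    return sep.join(kept)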
12,975
Given the following text description, write Python code to implement the functionality described below step by step Description: Running EnergyPlus from Eppy It would be great if we could run EnergyPlus directly from our IDF, wouldn't it? Well, here's how we can. Step1: If you are in a terminal, you will see something like this
Python Code: # you would normally install eppy by doing
# python setup.py install
# or
# pip install eppy
# or
# easy_install eppy

# if you have not done so, uncomment the following three lines
import sys
# pathnameto_eppy = 'c:/eppy'
pathnameto_eppy = '../'
sys.path.append(pathnameto_eppy)

from eppy.modeleditor import IDF
iddfile = "/Applications/EnergyPlus-8-3-0/Energy+.idd"
IDF.setiddname(iddfile)
idfname = "/Applications/EnergyPlus-8-3-0/ExampleFiles/BasicsFiles/Exercise1A.idf"
epwfile = "/Applications/EnergyPlus-8-3-0/WeatherData/USA_IL_Chicago-OHare.Intl.AP.725300_TMY3.epw"
idf = IDF(idfname, epwfile)
idf.run()
Explanation: Running EnergyPlus from Eppy It would be great if we could run EnergyPlus directly from our IDF, wouldn't it? Well, here's how we can.
End of explanation
help(idf.run)
Explanation: If you are in a terminal, you will see something like this:
It's as simple as that to run using the EnergyPlus defaults, but all the EnergyPlus command line interface options are also supported. To get a description of the options available, as well as the defaults, you can call the Python built-in help function on the IDF.run method and it will print a full description of the options to the console.
End of explanation
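Since help(idf.run) prints the full list of options, a hedged sketch of passing a couple of them is shown below. The keyword names output_directory and readvars are assumptions based on the EnergyPlus command-line interface that eppy mirrors, so confirm them against the help output before relying on this:
# Option names below are assumptions mirroring the EnergyPlus CLI; verify with help(idf.run).
idf.run(output_directory="eppy_run_output", readvars=True)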
12,976
Given the following text description, write Python code to implement the functionality described below step by step Description: 切片 为了计算 seq[start Step1: 对列表使用 + 与 * 要连接多个同一列表副本,只需要将列表乘上一个整数 Step2: 用 * 构建内含多个列表的列表 如果我们想初始化列表中有一定数量的列表,最适合使用列表生成式,例如下面就可以表示井字的棋盘列表,里面有 3 个长度为 3 的列表 Step3: 上面很吸引人,并且是一种标准的做法,不过要注意,如果你在 a*n 语句中,如果 a 元素是对其它可变对象的引用的话,就要注意了,因为这个式子可能出乎你的意料,例如,你想用 my_list = [[]] * 3 初始化一个由列表组成的列表,但是我们其实得到的列表中包含的三个元素是 3 个引用,这 3 个引用指向同一个列表。 下面也是创建三个元素的列表,每个元素是一个列表,有三个项目,这是一种看起来很简洁,但是是错误的做法 Step4: 上面的程序本质上相当于 Step5: 列表加法 += 和 = 的行为会根据第一个运算元不同而有很大的不同。为了简化讨论,我们主要讨论 +=,但是它的概念也可以套用到 =(乘等于,显示有问题) 上。让 += 可以工作的是 __iadd__(代表 ”in-place addition“ 就地加法)。但是,如果没有实现 __iadd__ 方法的话,就会去调用 __add__ 方法,运算式 a += b 就和 a = a + b 效果一样,也就是 先算出 a + b,产生一个新的变量,将新的变量赋值给 a,换句话说,赋值给 a 的可能是一个新的内存地址的变量,变量名会不会被关联到新的对象完全取决于是否有 __iadd__ 决定。 一般来说,可变序列都实现了 __iadd__ 方法,因此 += 是就地加法,而不可变序列根本不支持这个操作,对这个方法的实现也就无从谈起。 这种 += 的概念也可以应用到 *=,它是用 __imul__重写的,关于这两个方法,第 13 章会谈到。 下面是一个例子,展现 乘等于在可变和不可变序列上的作用 Step6: Note Step7: 看到虽然报错了,但是 t 的值还是被改变了。我们看一下反汇编代码 Step8: 前面是初始化变量,看 17 开头的 INPLACE_ADD,执行 += 操作,这步成功了,然后 19 STORE_SUBSCR 执行 t[2] = [3, 4, 5, 6],但是由于 tuple 不可变,这步失败了,但是由于列表 执行的 += 操作是调用 __iadd__,不会改变内存地址,所以 list 内容已经改变了。 所以上面的相当于 Step9: list.sort 与 sorted() 内部函数 list.sort 会将原来的 list 排序,不会产生新的 list 副本,并返回一个 None 值告诉我们,已经改变了目标列表,没有创建一个新的列表,这是一种很重要的 Python API 惯例,当函数或方法改变目标时,返回一个 None 值。在 random.shuffle() 函数中也可以看到相同的用法。 相比之下,内建函数 sorted 会建立一个新的 list,并将它返回,事实上,它会接收所有可迭代对象,包括生成器,无论要给它排序的可迭代对象是什么,都会返回一个新的列表,不管 list.sort() 还是 sorted() 都有两个参数 reverse 参数 Step10: 当我们将列表排序后,就可以非常有效率的搜索它们,幸运的是,Python 标准库中的 bisect 模块已经提供标准的二分搜索法了。我们来讨论一下它的基本功能,包括方便的 bisect.insort() 函数,我们可以用它来确保排序后的列表仍然保持已经排序的状态 使用 bisect 来管理有效列表 bisect 主要提供两个函数 bisect() 和 insoct(),它们使用二分搜索,可以在任何有序列表中快速的查找和插入元素 使用 bisect 来搜索 bisect(haystack, needle) 会对 haystack(干草垛) 中(必须是已排序的列表)的 needle(针) 做二分搜索,找出可以插入 needle 的位置的索引,并保持 haystack 的顺序,也就是该位置左面的数都小于等于 needle,你可以使用 bisect(haystack, needle) 的结果来作为 haystack.insert(index, needle) 的参数,但是 insort() 可以做这两步,速度更快 Step11: bisect 函数有两种微调方式,再插入时,可以使用一对索引,lo 和 hi 来缩小索引范围,lo 的默认值是 0, hi 的默认值是列表的 len() 其次,bisect() 其实是 bisect_right() 的别名,它还有一种姐妹函数,叫做 bisect_left(),从上面看出,当列表和插入元素不相同时,看不出来差别,但是如果有相同元素时,bisect() 会在相同的最一个元素右面插入,bisect_left() 会在相同的第一个元素左面插入。 bisect 有一个有趣的用法,就是执行数值表格查询,例如,将考试的分数转成字母 Step12: 这段程序出自 bisect 模块文件,在搜索冗长的数字列表时,用 bisect 来取代 index 方法,可以加快查询速度 使用 bisect.insort 来插入 insort(seq, item) 会将 item 插入 seq,让 seq 保持升序排列 Step13: 和 bisect 一样,inosrt 可以使用 lo,hi 参数来搜索子列表。另外还有一个 insort_left 函数,使用 bisect_left 查找插入点 当列表不适用时 list 类型很好,但有时根据特定需求会有更好的选择,例如,如果你要存储一千万浮点数,那么 数组(array)会比较有效率,因为数组存的不是浮点数对象,而是像 C 语言一样保存它们的机器值。另一方面,如果你经常在列表尾端加入以及移除元素,将它当成栈或队列使用, deque(double-ended queue 双端队列) 的工作速度会较快 数组 如果列表中只存放数字,那么 array.array 会比列表更有效率,它支持所有的可变列表操作(包括 .pop, .insert 和 .extend)以及额外的方法,可以快速将内容存储到硬盘,例如 .frombytes 和 .tofile Python 数组与 C 中的数组一样精简,当建立数组时,需要提供一个 类型码(typecode)来指定在底层存储时使用的 C 语言类型,例如,b 是 signed char 的 类型码,如果建立一个 array('b'),那么每一个元素都会被存成一个 byte,并被解释成整数,范围是 -128 到 127。对于大型数字列表来说,节省了很多内存。且 Python 不允许任何不符合数组类型的数字放进去。 下面展示了创建 1000 万浮点数数组,如何存到文件中并读取到数组中。 Step14: 看到 array.tofile() 和 array.fromfile() 都很简单,执行速度也很快,事实证明,array.fromfile 载入这些数据只花了 0.1 秒,大约比从文本文件中读取数据快了 60 倍 Note Step15: 内存视图(Memoryview) memoryview 是一个内置类,可以让你在不复制内容的情况下下操作同一个数组的不同切片,本质上,memoryview 是一个泛化和去数学化的 Numpy 数组,在不需要复制内存情况下,可以再数据库结构之间共享内存,其中数据可以任何格式,例如 PIL 图像,SQLlite 数据库,Numpy 数组等,这个功能对处理大型数据集合时候非常重要。 memoryview.cast 使用类似数组模块的标记法,可以让你使用不同的方式读取同一块内存数据,而且内容字节不会随意移动。memoryview.cast 会将同一块内从打包成一个全新 memoryview 对象返回,听起来像 C 语言中的类型转换。 更改数组中的一个 bytes,来改变某个元素的值: Step16: 看到 memoryview 
可以将数据用另一种类型读写,还是很方便的,在第 4 章会看到一个 memoryview 和 struct 操作二进制序列的例子 这时候,我们自然想到了如何处理数组中的数据,答案是使用 Numpy 和 Scipy Numpy 和 Scipy Numpy 和 Scipy 实现了数组和矩阵运算,使得 Python 成为了应用科学计算的主流。 Scipy 是一种基于 Numpy 的包,提供了很多科学计算方法,包括线性代数,数值计算,统计等,Scipy 即快速又可靠,是一个很好的科学计算包 Step17: 双向队列(Deque) 和其它的队列 .append() 和 .pop() 方法可以让你的列表当队列使用,每次进行 append() 和 pop(0),就可以产生 LIFO,但是如果你在最左端插入和移除元素,就很耗费资源,因为整个列表都要移位。 collections.deque(双向队列) 是一个线程安全的双端队列,可以快速的在两端进行插入和移除,如果你需要一个 “只保留最后看到的几个元素” 的功能,这也是一个最佳选择,因为 deque 可以设置队列的大小,当它被填满,加入新元素时,它会丢弃另一端的元素,下面是几个双向队列的典型操作
Python Code: l = list(range(10)) l l[2:5] = 100 #当赋值对象是切片时候,即使只有一个元素,等式右面也必须是一个可迭代元素 l[2:5] = [100] l Explanation: 切片 为了计算 seq[start:stop:step],Python 会调用 seq.__getitem__(slice(start, stop, step))。 多维切片 [ ] 运算符也可以接收以逗号分隔的多个索引或切片,举例来说,Numpy 中,你可以使用 a[i, j] 取得二维的 numpy.ndarray,以及使用 a[m:n, k:l] 这类的运算符获取二维的切片。处理 [ ] 运算符的 __getitem__ 和 __setitem__ 特殊方法,都只会接收 tuple 格式的 a[i, j] 内的索引,换句话说,Python 调用 a.getitem((i, j)) 算出来的 a[i, j] Python 会将省略号(三个句点)视为一种标记,在 Numpy 对多维矩阵进行切片时,会使用快捷的方式 ...,例如一个四维矩阵 a[i, ...] 相当于 a[i, :, :, :] 的简写。 对切片赋值 End of explanation l = [1, 2, 3] l * 5 Explanation: 对列表使用 + 与 * 要连接多个同一列表副本,只需要将列表乘上一个整数 End of explanation #建立一个有三个元素的列表,每个元素是一个列表,有三个项目 board = [[' '] * 3 for i in range(3)] board board[1][2] = 'X' board Explanation: 用 * 构建内含多个列表的列表 如果我们想初始化列表中有一定数量的列表,最适合使用列表生成式,例如下面就可以表示井字的棋盘列表,里面有 3 个长度为 3 的列表 End of explanation weir_board = [['_']* 3] * 3 #这是 3 个指向同一个地址的列表 weir_board weir_board[1][2] = 'O' weir_board Explanation: 上面很吸引人,并且是一种标准的做法,不过要注意,如果你在 a*n 语句中,如果 a 元素是对其它可变对象的引用的话,就要注意了,因为这个式子可能出乎你的意料,例如,你想用 my_list = [[]] * 3 初始化一个由列表组成的列表,但是我们其实得到的列表中包含的三个元素是 3 个引用,这 3 个引用指向同一个列表。 下面也是创建三个元素的列表,每个元素是一个列表,有三个项目,这是一种看起来很简洁,但是是错误的做法 End of explanation row = [' '] * 3 board = [] for i in range(3): board.append(row) board board[1][2] = '0' board Explanation: 上面的程序本质上相当于: End of explanation l = [1, 2, 3] id(l) l *= 2 l id(l) t = (1, 2, 3) id(t) t *= 2 id(t) Explanation: 列表加法 += 和 = 的行为会根据第一个运算元不同而有很大的不同。为了简化讨论,我们主要讨论 +=,但是它的概念也可以套用到 =(乘等于,显示有问题) 上。让 += 可以工作的是 __iadd__(代表 ”in-place addition“ 就地加法)。但是,如果没有实现 __iadd__ 方法的话,就会去调用 __add__ 方法,运算式 a += b 就和 a = a + b 效果一样,也就是 先算出 a + b,产生一个新的变量,将新的变量赋值给 a,换句话说,赋值给 a 的可能是一个新的内存地址的变量,变量名会不会被关联到新的对象完全取决于是否有 __iadd__ 决定。 一般来说,可变序列都实现了 __iadd__ 方法,因此 += 是就地加法,而不可变序列根本不支持这个操作,对这个方法的实现也就无从谈起。 这种 += 的概念也可以应用到 *=,它是用 __imul__重写的,关于这两个方法,第 13 章会谈到。 下面是一个例子,展现 乘等于在可变和不可变序列上的作用: End of explanation t = (1, 2, [30, 40]) t[2] += [50, 60] t Explanation: Note: 对不可变序列拼接效率很低,因为它要整个的把原来的内容复制到新的内存,然后拼接。 一个好玩的 += 例子 End of explanation import dis t = (1, 2, [3, 4]) dis.dis('a[2] += [5, 6]') Explanation: 看到虽然报错了,但是 t 的值还是被改变了。我们看一下反汇编代码 End of explanation x = [3, 4] x.__iadd__([5, 6]) t[2] = x #这步报错,tutle 不可变 Explanation: 前面是初始化变量,看 17 开头的 INPLACE_ADD,执行 += 操作,这步成功了,然后 19 STORE_SUBSCR 执行 t[2] = [3, 4, 5, 6],但是由于 tuple 不可变,这步失败了,但是由于列表 执行的 += 操作是调用 __iadd__,不会改变内存地址,所以 list 内容已经改变了。 所以上面的相当于: End of explanation fruits = ['grape', 'raspberry', 'apple', 'banana'] sorted(fruits) fruits sorted(fruits, reverse=True) sorted(fruits, key=len) sorted(fruits, key=len, reverse=True) fruits fruits.sort() fruits Explanation: list.sort 与 sorted() 内部函数 list.sort 会将原来的 list 排序,不会产生新的 list 副本,并返回一个 None 值告诉我们,已经改变了目标列表,没有创建一个新的列表,这是一种很重要的 Python API 惯例,当函数或方法改变目标时,返回一个 None 值。在 random.shuffle() 函数中也可以看到相同的用法。 相比之下,内建函数 sorted 会建立一个新的 list,并将它返回,事实上,它会接收所有可迭代对象,包括生成器,无论要给它排序的可迭代对象是什么,都会返回一个新的列表,不管 list.sort() 还是 sorted() 都有两个参数 reverse 参数: 如果值是 True,会降序返回,默认是 False key 参数: 一个函数,返回每一个元素的排序键。例如你要排序几个字符串,key = str.lower 可以执行不分大小写的排序,key = len 可以按照长度排序,默认值是恒等函数,也就是默认用元素自己的值排序 Note: key 关键字也可以在 min() 和 max() 函数中使用,另外有些标准程序库中也支持这个参数(例如 itertools.groupby() 和 heapq.nlargest() 等)。 下面例子可以让我们了解排序函数的用法: End of explanation import bisect import sys HAYSTACK = [1, 4, 5, 6, 8, 12, 15, 20, 21, 23, 23, 26, 29, 30] NEEDLES = [0, 1, 2, 5, 8, 10, 22, 23, 29, 30, 31] ROW_FMT = '{0:2d} @ {1:2d} {2}{0:<2d}' #构建的很好,每个数字占两个字节,不会错位 def demo(bisect_fn): for needle in reversed(NEEDLES): position = bisect_fn(HAYSTACK, needle) offset = position * ' |' #建立与位移相符的分隔符 print(ROW_FMT.format(needle, position, 
offset)) def main(args=None): if args == 'left': bisect_fn = bisect.bisect_left else: bisect_fn = bisect.bisect print('DEMO:', bisect_fn.__name__) print('haystack ->', ' '.join('%2d' % n for n in HAYSTACK)) demo(bisect_fn) main() main('left') Explanation: 当我们将列表排序后,就可以非常有效率的搜索它们,幸运的是,Python 标准库中的 bisect 模块已经提供标准的二分搜索法了。我们来讨论一下它的基本功能,包括方便的 bisect.insort() 函数,我们可以用它来确保排序后的列表仍然保持已经排序的状态 使用 bisect 来管理有效列表 bisect 主要提供两个函数 bisect() 和 insoct(),它们使用二分搜索,可以在任何有序列表中快速的查找和插入元素 使用 bisect 来搜索 bisect(haystack, needle) 会对 haystack(干草垛) 中(必须是已排序的列表)的 needle(针) 做二分搜索,找出可以插入 needle 的位置的索引,并保持 haystack 的顺序,也就是该位置左面的数都小于等于 needle,你可以使用 bisect(haystack, needle) 的结果来作为 haystack.insert(index, needle) 的参数,但是 insort() 可以做这两步,速度更快 End of explanation def grade(score, breakpoints=[60, 70, 80, 90], grades='FDCBA'): i = bisect.bisect(breakpoints, score) return grades[i] [grade(score) for score in [33, 99, 77, 70, 89, 90, 100]] Explanation: bisect 函数有两种微调方式,再插入时,可以使用一对索引,lo 和 hi 来缩小索引范围,lo 的默认值是 0, hi 的默认值是列表的 len() 其次,bisect() 其实是 bisect_right() 的别名,它还有一种姐妹函数,叫做 bisect_left(),从上面看出,当列表和插入元素不相同时,看不出来差别,但是如果有相同元素时,bisect() 会在相同的最一个元素右面插入,bisect_left() 会在相同的第一个元素左面插入。 bisect 有一个有趣的用法,就是执行数值表格查询,例如,将考试的分数转成字母: End of explanation import bisect import random SIZE = 7 random.seed(1729) # 哈哈,python 版本的插入排序,很方便 my_list = [] for i in range(SIZE): new_item = random.randrange(SIZE * 2) bisect.insort(my_list, new_item) print("%2d ->" % new_item, my_list) Explanation: 这段程序出自 bisect 模块文件,在搜索冗长的数字列表时,用 bisect 来取代 index 方法,可以加快查询速度 使用 bisect.insort 来插入 insort(seq, item) 会将 item 插入 seq,让 seq 保持升序排列 End of explanation from array import array from random import random floats = array('d', (random() for i in range(10 ** 7))) #双精度浮点数 floats[-1] fp = open('ipynb_floats.bin', 'wb') floats.tofile(fp) fp.close() floats2 = array('d') fp = open('ipynb_floats.bin', 'rb') floats2.fromfile(fp, 10 ** 7) fp.close() floats2[-1] floats == floats2 Explanation: 和 bisect 一样,inosrt 可以使用 lo,hi 参数来搜索子列表。另外还有一个 insort_left 函数,使用 bisect_left 查找插入点 当列表不适用时 list 类型很好,但有时根据特定需求会有更好的选择,例如,如果你要存储一千万浮点数,那么 数组(array)会比较有效率,因为数组存的不是浮点数对象,而是像 C 语言一样保存它们的机器值。另一方面,如果你经常在列表尾端加入以及移除元素,将它当成栈或队列使用, deque(double-ended queue 双端队列) 的工作速度会较快 数组 如果列表中只存放数字,那么 array.array 会比列表更有效率,它支持所有的可变列表操作(包括 .pop, .insert 和 .extend)以及额外的方法,可以快速将内容存储到硬盘,例如 .frombytes 和 .tofile Python 数组与 C 中的数组一样精简,当建立数组时,需要提供一个 类型码(typecode)来指定在底层存储时使用的 C 语言类型,例如,b 是 signed char 的 类型码,如果建立一个 array('b'),那么每一个元素都会被存成一个 byte,并被解释成整数,范围是 -128 到 127。对于大型数字列表来说,节省了很多内存。且 Python 不允许任何不符合数组类型的数字放进去。 下面展示了创建 1000 万浮点数数组,如何存到文件中并读取到数组中。 End of explanation floats = array(floats.typecode, sorted(floats)) floats[0:5] Explanation: 看到 array.tofile() 和 array.fromfile() 都很简单,执行速度也很快,事实证明,array.fromfile 载入这些数据只花了 0.1 秒,大约比从文本文件中读取数据快了 60 倍 Note: 另一种快速且更方便的数值存储方式是 pickle,pickle.dump 存储浮点数数组和 array.tofile() 几乎一样的快,但是 pickle 几乎可以处理所有的内置类型,包括复数等,甚至可以处理自己定义的实例(如果不是太复杂) 对于一些特殊的数字数组,用来表示二进制图像,例如光栅图像,里面涉及到的 bytes 和 bytearry 类型会在第 4 章讲到。 Note: 在 python3.5 为止,array 都没有像 list.sort() 这样的排序方法,如果排序可以使用 sorted() 函数排序建立一个新数组 a = array.array(a.typecode, sorted(a)) End of explanation numbers = array('h', [-2, -1, 0, 1, 2]) # h 代表 short 类型 memv = memoryview(numbers) len(memv) memv[0] memv_oct = memv.cast('B') #将 memv 转成无符号字节 memv_oct.tolist() memv_oct[5] = 4 #小端模式,所以 memv_oct[5] 代表 0 的高位 numbers Explanation: 内存视图(Memoryview) memoryview 是一个内置类,可以让你在不复制内容的情况下下操作同一个数组的不同切片,本质上,memoryview 是一个泛化和去数学化的 Numpy 数组,在不需要复制内存情况下,可以再数据库结构之间共享内存,其中数据可以任何格式,例如 PIL 图像,SQLlite 数据库,Numpy 数组等,这个功能对处理大型数据集合时候非常重要。 memoryview.cast 
使用类似数组模块的标记法,可以让你使用不同的方式读取同一块内存数据,而且内容字节不会随意移动。memoryview.cast 会将同一块内从打包成一个全新 memoryview 对象返回,听起来像 C 语言中的类型转换。 更改数组中的一个 bytes,来改变某个元素的值: End of explanation import numpy as np a = np.arange(12) a type(a) a.shape a.shape = 3, 4 a a[:1] a.T floats = np.array([random() for i in range(10 ** 7)]) floats.shape floats[-3:] floats *= .5 floats[-3:] from time import perf_counter as pc #引入高效计时器(Python3.3 开始提供) t0 = pc(); floats /= 3; pc() - t0 #看到除以 3 这个操作只用了 20 毫秒的时间 Explanation: 看到 memoryview 可以将数据用另一种类型读写,还是很方便的,在第 4 章会看到一个 memoryview 和 struct 操作二进制序列的例子 这时候,我们自然想到了如何处理数组中的数据,答案是使用 Numpy 和 Scipy Numpy 和 Scipy Numpy 和 Scipy 实现了数组和矩阵运算,使得 Python 成为了应用科学计算的主流。 Scipy 是一种基于 Numpy 的包,提供了很多科学计算方法,包括线性代数,数值计算,统计等,Scipy 即快速又可靠,是一个很好的科学计算包 End of explanation from collections import deque dq = deque(range(10), maxlen=10) dq dq.rotate(3) #使用 n > 0 来旋转,从右面取出元素,放到最左面,n < 0 是从左面取出元素放到最右面 dq dq.rotate(-4) dq dq.appendleft(-1) #最左面插入 dq dq.extend([11, 22, 33]) #在最右面加入 3 个元素,最左面的 3 个元素将被丢弃 dq dq.extendleft([10, 20, 30, 40]) dq #注意 dq.extendleft(iter) 工作方式,它会将迭代器的个元素逐个加到队列左边,所以这些元素位置最后是反的 Explanation: 双向队列(Deque) 和其它的队列 .append() 和 .pop() 方法可以让你的列表当队列使用,每次进行 append() 和 pop(0),就可以产生 LIFO,但是如果你在最左端插入和移除元素,就很耗费资源,因为整个列表都要移位。 collections.deque(双向队列) 是一个线程安全的双端队列,可以快速的在两端进行插入和移除,如果你需要一个 “只保留最后看到的几个元素” 的功能,这也是一个最佳选择,因为 deque 可以设置队列的大小,当它被填满,加入新元素时,它会丢弃另一端的元素,下面是几个双向队列的典型操作 End of explanation
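As an illustrative addition to the array section above, which notes that pickle.dump can store a float array almost as fast as array.tofile: here is a minimal sketch of that pickle round trip (the smaller element count and the file name are arbitrary choices made for this sketch, not from the original):
import pickle
from array import array
from random import random

floats = array('d', (random() for _ in range(10 ** 5)))

# Persist the array with pickle instead of array.tofile / array.fromfile.
with open('ipynb_floats.pickle', 'wb') as fp:
    pickle.dump(floats, fp)

with open('ipynb_floats.pickle', 'rb') as fp:
    floats_restored = pickle.load(fp)

print(floats == floats_restored)  # True: the values survive the round trip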
12,977
Given the following text description, write Python code to implement the functionality described below step by step Description: Linear Models Timothy Helton Imports Step1: Load Data Step2: Batting Step3: Player Step4: Salary Step5: Team Step6: Exercise 1 Step7: Exercise 2 Step8: Exercise 3 Step9: Exercise 4 Step10: Findings Adding a cubic term yielded similar results to the quadratic regression. Both models appear to be overfitting at the end of the data. The quadratic model does not fit the data as well as the logarithmic model. The quadratic model has a far smaller confidence interval than the logarithmic model. A small handful of data points are having a large impact on both models. Exercise 5 Step11: Findings The feature bb (walks) is not statistically relevant and will be removed. Step12: Findings The F-statistic is not equal to zero, so a relationship exists between hits and at least one of the features. This is confirmed by the P-values for all features being zero. All included features exhibit a relationship to hits. The features with the largest coefficient confidence are Step13: Findings All of the features are statistically relevant. The model accurately represents the data.
Python Code: import os import os.path as osp import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import numpy as np import pandas as pd from sklearn.linear_model import LinearRegression import seaborn as sns import statsmodels.formula.api as smf from statsmodels.graphics.regressionplots import influence_plot from statsmodels.sandbox.regression.predstd import wls_prediction_std from k2datascience.utils import ax_formatter, save_fig, size from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" %matplotlib inline Explanation: Linear Models Timothy Helton Imports End of explanation data_dir = osp.realpath(osp.join(os.getcwd(), '..', 'data', 'linear_regression')) Explanation: Load Data End of explanation batting = pd.read_csv(osp.join(data_dir, 'batting.csv')) category_cols = ( 'stint', 'league_id', 'triple', 'cs', 'ibb', 'hbp', 'sf', 'g_idp', ) for col in category_cols: batting.loc[:, col] = batting.loc[:, col].astype('category') batting.info() batting.head() batting.describe() Explanation: Batting End of explanation player = pd.read_csv(osp.join(data_dir, 'player.csv')) category_cols = ( 'bats', 'birth_month', 'death_month', 'throws', ) for col in category_cols: player.loc[:, col] = player.loc[:, col].astype('category') player.info() player.head() player.describe() Explanation: Player End of explanation salary = pd.read_csv(osp.join(data_dir, 'salary.csv')) category_cols = ( 'team_id', 'league_id', ) for col in category_cols: salary.loc[:, col] = salary.loc[:, col].astype('category') salary.info() salary.head() salary.describe() Explanation: Salary End of explanation team = pd.read_csv(osp.join(data_dir, 'team.csv')) category_cols = ( 'league_id', 'div_id', 'div_win', 'lg_win', 'rank', 'team_id', 'wc_win', 'ws_win', ) for col in category_cols: team.loc[:, col] = team.loc[:, col].astype('category') team.info() team.head() team.describe() Explanation: Team End of explanation mean_salary = (salary .groupby('year') .mean() .reset_index()) mean_salary.corr() ax = mean_salary.plot(x='year', y='salary', figsize=(8, 6), label='Mean Salary') ax.set_title('Mean Salary vs Year', fontsize=size['title']) ax.legend(fontsize=size['legend']) ax.set_xlabel('Year', fontsize=size['label']) ax.set_ylabel('Mean Salary (x $1000)', fontsize= size['label']) ax.yaxis.set_major_formatter(ax_formatter['thousands']) plt.show(); Explanation: Exercise 1: Compute the correlation between mean salary and year. Generate a graph of mean salary per year. End of explanation lr = smf.ols(formula=f'salary ~ year', data=mean_salary).fit() lr.summary() # Data ax = mean_salary.plot(x='year', y='salary', figsize=(8, 6), label='Mean Salary') # Regression Line ax.plot(mean_salary.year, lr.predict(mean_salary.year), linestyle='--', label='Linear Regression') # Confidence Intervals std, upper, lower = wls_prediction_std(lr) ax.plot(mean_salary.year, lower, alpha=0.5, color='black', label='Confidence Interval', linestyle='-.') ax.plot(mean_salary.year, upper, alpha=0.5, color='black', linestyle='-.') ax.set_title('Mean Salary vs Year', fontsize=size['title']) ax.legend(fontsize=size['legend']) ax.set_xlabel('Year', fontsize=size['label']) ax.set_ylabel('Mean Salary (x $1000)', fontsize= size['label']) ax.yaxis.set_major_formatter(ax_formatter['thousands']) plt.show(); Explanation: Exercise 2: Find the best line that approximates mean salary with respect to years. Plot this line together with the data from Exercise 1. 
End of explanation fig = plt.figure('Salary Boxp Plot', figsize=(12, 6), facecolor='white', edgecolor='black') rows, cols = (1, 1) ax0 = plt.subplot2grid((rows, cols), (0, 0)) sns.boxplot(x='year', y='salary', data=salary, fliersize=2, ax=ax0) ax0.set_title('Salary vs Year', fontsize=size['title']) ax0.set_xlabel('Year', fontsize=size['label']) ax0.set_ylabel('Salary (x $1000)', fontsize=size['label']) ax0.yaxis.set_major_formatter(ax_formatter['thousands']) fig.autofmt_xdate() plt.show(); Explanation: Exercise 3: Create a box plot for salaries per year. End of explanation gini = {} for year in salary.year.unique(): salaries = (salary.query(f'year == {year}') .salary .sort_values()) n = salaries.size gini[year] = ((2 * np.sum(salaries * (np.arange(n) + 1))) / (n * salaries.sum()) - ((n + 1) / n)) gini = (pd.Series(gini) .reset_index() .rename(columns={'index': 'year', 0: 'gini'})) gini.corr() ax = gini.plot(x='year', y='gini', figsize=(8, 6), label='Gini Coefficient') ax.set_title('Gini Coefficient vs Year', fontsize=size['title']) ax.legend(fontsize=size['legend']) ax.set_xlabel('Year', fontsize=size['label']) ax.set_ylabel('Gini Coefficient', fontsize= size['label']) plt.show(); features = ' + '.join([f'np.power(year, {x + 1})' for x in range(2)]) quadratic_model = smf.ols(formula=f'gini ~ {features}', data=gini).fit() quadratic_model.summary() log_model = smf.ols(formula='gini ~ np.log(year) * year', data=gini).fit() log_model.summary() fig = plt.figure('Regression Plot', figsize=(10, 5), facecolor='white', edgecolor='black') rows, cols = (1, 2) ax0 = plt.subplot2grid((rows, cols), (0, 0)) ax1 = plt.subplot2grid((rows, cols), (0, 1), sharey=ax0) # Regression Lines ax0.plot(gini.year, quadratic_model.predict(gini.year), color='red', linestyle='-.', label='Quadratic Regression') std0, upper, lower = wls_prediction_std(quadratic_model) ax0.plot(gini.year, lower, alpha=0.5, color='black', label='Confidence Interval', linestyle='-.') ax0.plot(gini.year, upper, alpha=0.5, color='black', linestyle='-.') ax1.plot(gini.year, log_model.predict(gini.year), color='red', linestyle='--', label='Logrithmic Regression') std, upper, lower = wls_prediction_std(log_model) ax1.plot(gini.year, lower, alpha=0.5, color='black', label='Confidence Interval', linestyle='-.') ax1.plot(gini.year, upper, alpha=0.5, color='black', linestyle='-.') # Data for ax in (ax0, ax1): gini.plot(x='year', y='gini', label='Gini Coefficient', ax=ax) ax.legend(fontsize=size['legend']) ax.set_xlabel('Year', fontsize=size['label']) ax.set_ylabel('Gini Coefficient', fontsize= size['label']) fig.autofmt_xdate() plt.tight_layout() plt.suptitle('Gini Coefficient vs Year', fontsize=size['super_title'], y=1.05) plt.show(); fig = plt.figure('Residual Plot', figsize=(12, 5), facecolor='white', edgecolor='black') rows, cols = (1, 2) ax0 = plt.subplot2grid((rows, cols), (0, 0)) ax1 = plt.subplot2grid((rows, cols), (0, 1)) # Quadratic Model ax0.scatter(quadratic_model.fittedvalues, quadratic_model.resid) ax0.set_title('Quadratic Model', fontsize=size['title']) # Logrithmic Model ax1.scatter(log_model.fittedvalues, log_model.resid) ax1.set_title('Logrithmic Model', fontsize=size['title']) for ax in (ax0, ax1): ax.set_xlabel('Fitted Values', fontsize=size['label']) ax.set_ylabel('Raw Residuals', fontsize=size['label']) plt.show(); fig = plt.figure('Influence Plot', figsize=(10, 10), facecolor='white', edgecolor='black') rows, cols = (1, 2) ax0 = plt.subplot2grid((rows, cols), (0, 0)) ax1 = plt.subplot2grid((rows, cols), (0, 1)) # Quadradic 
Model influence = influence_plot(quadratic_model, ax=ax0) ax0.set_title('Quadratic Model', fontsize=size['title']) # Logrithmic Model influence = influence_plot(log_model, ax=ax1) ax1.set_title('Logrithmic Model', fontsize=size['title']) for ax in (ax0, ax1): ax.set_xlabel('H Leverage', fontsize=size['label']) ax.set_ylabel('Studentized Residuals', fontsize=size['label']) plt.show(); Explanation: Exercise 4: From the previous graph we can see an increasing disparity in salaries as time increases. How would you measure disparity in salaries? Compute the correlation of disparity and years. Find the best line that approximates disparity with respect to years. The Gini Coefficient is a means to represent the income or wealth distribution of a population. - The Gini coefficient measures the inequality among values of a frequency distribution. - G = 0 represents perfect equality - G = 1 expresses maximal inequality $$G = \frac{2 \sum_{i=1}^n i y_i}{n \sum_{i=1}^n y_i} - \frac{n + 1}{n}$$ End of explanation fig = plt.figure('Team Heatmap', figsize=(10, 8), facecolor='white', edgecolor='black') rows, cols = (1, 1) ax0 = plt.subplot2grid((rows, cols), (0, 0)) sns.heatmap(team.corr(), annot=False, cbar_kws={'orientation': 'vertical'}, fmt='.2f', linewidths=1, vmin=-1, vmax=1, ax=ax0) ax0.set_title('Team Dataset', fontsize=size['title']) ax0.set_xticklabels(ax0.xaxis.get_majorticklabels(), fontsize=size['label'], rotation=80) ax0.set_yticklabels(ax0.yaxis.get_majorticklabels(), fontsize=size['label'], rotation=0) plt.show(); features = [ 'g', 'w', 'bb', 'ab', 'fp', 'ipouts', 'ha', 'er', 'double', ] model = ' + '.join(features) team_model = smf.ols(formula=f'h ~ {model}', data=team).fit() team_model.summary() Explanation: Findings Adding a cubic term yeilded similar results as the quadratic regression. Both models appear to be overfitting at the end of the data. The quadratic model does not fit the data as well as the logrithmic model. The quadratic model has a far smaller confidence interval than the logrithmic model. A small handful of data points are having a large impact on both models. Exercise 5: Build a predictive model for the amount of hits for a team given Games played, Wins, Walks by batters, At bats, Fielding percentage, Outs Pitched (innings pitched x 3), Hits allowed, Earned runs allowed, Doubles. To solve this problem you will use team.csv. How does your model measure accuracy? What was the score for its accuracy? Choose two features and create a 3d plot of feature1, feature2, h. 
End of explanation features.remove('bb') new_model = ' + '.join(features) team_model = smf.ols(formula=f'h ~ {new_model}', data=team).fit() team_model.summary() fig = plt.figure('Residual Plot', figsize=(8, 6), facecolor='white', edgecolor='black') rows, cols = (1, 1) ax0 = plt.subplot2grid((rows, cols), (0, 0)) # Model ax0.scatter(team_model.fittedvalues, team_model.resid) ax0.set_title('Team Model', fontsize=size['title']) ax0.set_xlabel('Fitted Values', fontsize=size['label']) ax0.set_ylabel('Raw Residuals', fontsize=size['label']) plt.show(); fig = plt.figure('Influence Plot', figsize=(10, 10), facecolor='white', edgecolor='black') rows, cols = (1, 1) ax0 = plt.subplot2grid((rows, cols), (0, 0)) # Team Model influence = influence_plot(team_model, ax=ax0) ax0.set_title('Team Model', fontsize=size['title']) ax.set_xlabel('H Leverage', fontsize=size['label']) ax.set_ylabel('Studentized Residuals', fontsize=size['label']) plt.show(); with sns.axes_style("white"): fig = plt.figure('At Bats, Wins, Games Played Scatter', figsize=(10, 6), facecolor='white', edgecolor='black') ax = Axes3D(fig) ax.view_init(15, 45) sc = ax.scatter(team.ab, team.w, team.g, c=team.h, cmap='gnuplot', vmin=team.h.min(), vmax=team.h.max()) plt.colorbar(sc) ax.set_title(f'Data Colored by Hits', fontsize=size['title'], y=1.02) ax.set_xlabel('\nAt Bats', fontsize=size['label']) ax.set_ylabel('\nWins', fontsize=size['label']) ax.set_zlabel('\nGames Played', fontsize=size['label']) plt.suptitle('Team\nAt Bats vs Wins vs Games Played', fontsize=size['super_title'], x=0.4, y=1.15) plt.show(); Explanation: Findings The feature bb (walks) is not statistically relevent and will be removed. End of explanation batting_data = (batting .set_index('player_id') .join((player.loc[:, ['player_id', 'bats']] .set_index('player_id'))) .loc[:, ['year', 'g', 'ab', 'bats', 'h']]) batting_data.info() batting_data.head() year_avg = (batting_data .groupby(['player_id', 'year'])[['g', 'ab', 'bats', 'h']] .sum() .reset_index()) year_avg.head() player_avg = (year_avg .groupby(['player_id'])[['g', 'ab', 'h']] .mean() .reset_index()) player_avg.head() player = (player_avg .set_index('player_id') .join(player.loc[:, ['player_id', 'bats']] .set_index('player_id'))) player.head() fig = plt.figure('Player Heatmap', figsize=(5, 4), facecolor='white', edgecolor='black') rows, cols = (1, 1) ax0 = plt.subplot2grid((rows, cols), (0, 0)) sns.heatmap(player.corr(), annot=True, cbar_kws={'orientation': 'vertical'}, fmt='.2f', linewidths=5, vmin=-1, vmax=1, ax=ax0) ax0.set_title('Player Dataset Correlation', fontsize=size['title']) ax0.set_xticklabels(ax0.xaxis.get_majorticklabels(), fontsize=size['label'], rotation=80) ax0.set_yticklabels(ax0.yaxis.get_majorticklabels(), fontsize=size['label'], rotation=0) plt.show(); grid = sns.pairplot(player.dropna(), diag_kws={'alpha': 0.5, 'bins': 10, 'edgecolor': 'black'}, plot_kws={'alpha': 0.7}) grid.fig.suptitle('Player Data Correlation', fontsize=size['super_title'], y=1.03) cols = (player .select_dtypes(exclude=['category']) .columns) for n, col in enumerate(cols): grid.axes[cols.size - 1, n].set_xlabel(cols[n], fontsize=size['label']) grid.axes[n, 0].set_ylabel(cols[n], fontsize=size['label']) plt.show(); features = [ 'g', 'ab', 'bats', ] model = ' + '.join(features) batting_model = smf.ols(formula=f'h ~ {model}', data=player).fit() batting_model.summary() Explanation: Findings The F-statistic is not equal to zero, so a relationship exists between hits and at least one of the features. 
This is confirmed by the P-values for all features being zero. All included features exhibit a relationship to hits. The featrues with the largest coefficient confidence are: At Bats Wins Hits Allowed The model has an $R^2$ value of 0.957. Numerous outliers are present in the data. The leverage values to not indicate any specific points having a disproportionate impact on the data. This model is an appropriate representation of the data and may be used to make predictions. Exercise 6: Build a similar model to predict average hits per year based on Games played, at bats and whether a player is a left or right handed batter. Consider only those players who are either left or right handed batters and for the moment do not worry about missing data or ambidextrous batters. End of explanation fig = plt.figure('Residual Plot', figsize=(8, 6), facecolor='white', edgecolor='black') rows, cols = (1, 1) ax0 = plt.subplot2grid((rows, cols), (0, 0)) # Model ax0.scatter(batting_model.fittedvalues, batting_model.resid, alpha=0.5) ax0.set_title('Batting Model', fontsize=size['title']) ax0.set_xlabel('Fitted Values', fontsize=size['label']) ax0.set_ylabel('Raw Residuals', fontsize=size['label']) plt.show(); with sns.axes_style("white"): fig = plt.figure(', Wins, Games Played Scatter', figsize=(10, 6), facecolor='white', edgecolor='black') ax = Axes3D(fig) sc = ax.scatter(batting_data.g, batting_data.ab, batting_data.h, c=batting_data.bats.cat.codes, cmap='gnuplot') cbar = plt.colorbar(sc, ticks=[-1, 0, 1, 2]) cbar.ax.set_yticklabels(['N/A', 'Both', 'Left', 'Right'], fontsize=size['legend']) ax.set_title(f'Data Colored by Batting Hand', fontsize=size['title'], y=1.02) ax.set_xlabel('\nGames Played', fontsize=size['label']) ax.set_ylabel('\nAt Bats', fontsize=size['label']) ax.set_zlabel('\nHits', fontsize=size['label']) plt.suptitle('Batter\nGames Played vs At Bats vs Hits', fontsize=size['super_title'], x=0.4, y=1.15) plt.show(); Explanation: Findings All of the features are statistically relavent. The model accuratly represents the data. End of explanation
12,978
Given the following text description, write Python code to implement the functionality described below step by step Description: STA 208 Step3: The response variable is quality.
Python Code: import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.model_selection import LeaveOneOut from sklearn import linear_model, neighbors %matplotlib inline plt.style.use('ggplot') # dataset path data_dir = "." sample_data = pd.read_csv(data_dir+"/hw1.csv", delimiter=',') sample_data.head() Explanation: STA 208: Homework 1 This is based on the material in Chapters 2, 3 of 'Elements of Statistical Learning' (ESL), in addition to lectures 1-4. Chunzhe Zhang came up with the dataset and the analysis in the second section. Instructions We use a script that extracts your answers by looking for cells in between the cells containing the exercise statements (beginning with Exercise X.X). So you MUST add cells in between the exercise statements and add answers within them and MUST NOT modify the existing cells, particularly not the problem statement To make markdown, please switch the cell type to markdown (from code) - you can hit 'm' when you are in command mode - and use the markdown language. For a brief tutorial see: https://daringfireball.net/projects/markdown/syntax In the conceptual exercises you should provide an explanation, with math when necessary, for any answers. When answering with math you should use basic LaTeX, as in $$E(Y|X=x) = \int_{\mathcal{Y}} f_{Y|X}(y|x) dy = \int_{\mathcal{Y}} \frac{f_{Y,X}(y,x)}{f_{X}(x)} dy$$ for displayed equations, and $R_{i,j} = 2^{-|i-j|}$ for inline equations. (To see the contents of this cell in markdown, double click on it or hit Enter in escape mode.) To see a list of latex math symbols see here: http://web.ift.uib.no/Teori/KURS/WRK/TeX/symALL.html 1. Conceptual Exercises Exercise 1.1. (5 pts) Recall that the Hamming loss for Binary classification ($y \in {0,1}$) is $$l(y,\hat y) = 1{y \ne \hat y} = (y - \hat y)^2$$ as long as $\hat y \in {0,1}$. This loss can be extended to multiclass classification where there are $K$ possible values that $y$ can take (for example 'dog','cat','squirrel' or 1-5 stars). Explain how you can re-encode $y$ and $\hat y$ to be a $K-1$ dimensional vector that generalizes binary classification, and rewrite the loss using vector operations. Exercise 1.2 (5 pts) Ex. 2.7 in ESL Exercise 1.3 (5 pts, 1 for each part) Recall that the true risk for a prediction function, $f$, a loss function, $\ell$, and a joint distribution for $Y,X$ is $$R(f) = E \ell(y,f(x))$$ For a training set ${x_i,y_x}{i=1}^n$, the empirical risk is $$R_n = \frac{1}{n} \sum{i=1}^n \ell(y_i,f(x_i)).$$ Let $y = x^\top \beta + \epsilon$ be a linear model for $Y|X$, where $x,\beta$ are $p$-dimensional such that $\epsilon$ is Gaussian with mean 0 and variance $\sigma^2$ (independent of X). Let $\ell(y,\hat y) = (y - \hat y)^2$ be square error loss. Show that $f^\star(x) = x^\top \beta$ gives the smallest true risk (also known as the Bayes rule). Why can't we use this prediction in practice? Recall that OLS is the empirical risk minimizer for linear functions. Why does this tell us the following: $$ E R_n (\hat f) \le R(f^\star)$$ How do we know that $E R_n (\hat f) \le R(\hat f)$? and use this to answer Ex. 2.9 in ESL. What about this was specific to OLS and least squares loss (can this be generalized)? What is the most general statement that you can think of that you can prove in this way? Exercise 1.4 (5 pts) Ex. 3.5 in ESL Exercise 1.5 (5 pts) Ex 3.9 in ESL 2. 
Data Analysis Instructions You will be graded based on several criteria, and each is on a 5 point scale (5 is excellent - A - 1 is poor - C - 0 is not answered - D/F). You should strive to 'impress us' if you want a 5. This means excellent code, well explained conclusions, well annotated plots, correct answers, etc. We will be grading you on several criteria: Conclusions: Conclusions should be consistent with the evidence provided, the conclusion should be well justified, the principles of machine learning that you have learned should be respected (such as overfitting and underfitting etc.) Correctness of calculations: code should be correct and reflect the principles learned in this course, the logic should be sound, the methods should match the setting and context, you should try many applicable methods that you have learned as long as they apply. Code, Figures, and Text: Code should be annotated and easy to follow, with docstrings on the functions; captions, titles, for figures Exercise 2 You should run the following code cells to import the code and reduce the variable set. Address the questions after the code. End of explanation X = np.array(sample_data.iloc[:,range(1,5)]) y = np.array(sample_data.iloc[:,0]) def loo_risk(X,y,regmod): Construct the leave-one-out square error risk for a regression model Input: design matrix, X, response vector, y, a regression model, regmod Output: scalar LOO risk loo = LeaveOneOut() loo_losses = [] for train_index, test_index in loo.split(X): X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] regmod.fit(X_train,y_train) y_hat = regmod.predict(X_test) loss = np.sum((y_hat - y_test)**2) loo_losses.append(loss) return np.mean(loo_losses) def emp_risk(X,y,regmod): Return the empirical risk for square error loss Input: design matrix, X, response vector, y, a regression model, regmod Output: scalar empirical risk regmod.fit(X,y) y_hat = regmod.predict(X) return np.mean((y_hat - y)**2) Explanation: The response variable is quality. End of explanation
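Added usage sketch, not part of the original homework solution: assuming the cells above have been run, so X, y, loo_risk and emp_risk exist and linear_model and neighbors are imported, the two risk helpers can be compared directly. The k values are arbitrary illustrations.

lin = linear_model.LinearRegression()
print('OLS  empirical risk:', emp_risk(X, y, lin))
print('OLS  LOO risk      :', loo_risk(X, y, lin))

for k in (3, 10, 30):
    knn = neighbors.KNeighborsRegressor(n_neighbors=k)
    print(f'kNN(k={k}) empirical risk:', emp_risk(X, y, knn))
    print(f'kNN(k={k}) LOO risk      :', loo_risk(X, y, knn))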
12,979
Given the following text description, write Python code to implement the functionality described below step by step Description: Vertex SDK Step1: Install the latest GA version of google-cloud-storage library as well. Step2: Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. Step3: Before you begin GPU runtime This tutorial does not require a GPU runtime. Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the following APIs Step4: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas Step5: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. Step6: Authenticate your Google Cloud account If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps Step7: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. Step8: Only if your bucket doesn't already exist Step9: Finally, validate access to your Cloud Storage bucket by examining its contents Step10: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Step11: Initialize Vertex SDK for Python Initialize the Vertex SDK for Python for your project and corresponding bucket. Step12: Tutorial Now you are ready to start creating your own AutoML image object detection model. Location of Cloud Storage training data. Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage. Step13: Quick peek at your data This tutorial uses a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file. Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows. Step14: Create the Dataset Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters Step15: Create and run training pipeline To train an AutoML model, you perform two steps Step16: Run the training pipeline Next, you run the DAG to start the training job by invoking the method run, with the following parameters Step17: Review model evaluation scores After your model has finished training, you can review the evaluation scores for it. 
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project. Step18: Export as Edge model You can export an AutoML image object detection model as a Edge model which you can then custom deploy to an edge device or download locally. Use the method export_model() to export the model to Cloud Storage, which takes the following parameters Step19: Download the TFLite model artifacts Now that you have an exported TFLite version of your model, you can test the exported model locally, but first downloading it from Cloud Storage. Step20: Instantiate a TFLite interpreter The TFLite version of the model is not a TensorFlow SavedModel format. You cannot directly use methods like predict(). Instead, one uses the TFLite interpreter. You must first setup the interpreter for the TFLite model as follows Step21: Get test item You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction. Step22: Make a prediction with TFLite model Finally, you do a prediction using your TFLite model, as follows Step23: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial
Python Code: import os # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG Explanation: Vertex SDK: AutoML training image object detection model for export to edge <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online_export_edge.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online_export_edge.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online_export_edge.ipynb"> Open in Google Cloud Notebooks </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use the Vertex SDK to create image object detection models to export as an Edge model using a Google Cloud AutoML model. Dataset The dataset used for this tutorial is the Salads category of the OpenImages dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese. Objective In this tutorial, you create a AutoML image object detection model from a Python script using the Vertex SDK, and then export the model as an Edge model in TFLite format. You can alternatively create models with AutoML using the gcloud command-line tool or online using the Cloud Console. The steps performed include: Create a Vertex Dataset resource. Train the model. Export the Edge model from the Model resource to Cloud Storage. Download the model locally. Make a local prediction. Costs This tutorial uses billable components of Google Cloud: Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Cloud Storage SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment. To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell. To launch Jupyter, run jupyter notebook on the command-line in a terminal shell. 
Open this notebook in the Jupyter Notebook Dashboard. Installation Install the latest version of Vertex SDK for Python. End of explanation ! pip3 install -U google-cloud-storage $USER_FLAG if os.environ["IS_TESTING"]: ! pip3 install --upgrade tensorflow $USER_FLAG Explanation: Install the latest GA version of google-cloud-storage library as well. End of explanation import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) Explanation: Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. End of explanation PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID Explanation: Before you begin GPU runtime This tutorial does not require a GPU runtime. Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $. End of explanation REGION = "us-central1" # @param {type: "string"} Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. Learn more about Vertex AI regions End of explanation from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. End of explanation # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. import os import sys # If on Google Cloud Notebook, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. 
elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' Explanation: Authenticate your Google Cloud account If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. End of explanation ! gsutil mb -l $REGION $BUCKET_NAME Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation ! gsutil ls -al $BUCKET_NAME Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation import google.cloud.aiplatform as aip Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants End of explanation aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME) Explanation: Initialize Vertex SDK for Python Initialize the Vertex SDK for Python for your project and corresponding bucket. End of explanation IMPORT_FILE = "gs://cloud-samples-data/vision/salads.csv" Explanation: Tutorial Now you are ready to start creating your own AutoML image object detection model. Location of Cloud Storage training data. Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage. End of explanation if "IMPORT_FILES" in globals(): FILE = IMPORT_FILES[0] else: FILE = IMPORT_FILE count = ! gsutil cat $FILE | wc -l print("Number of Examples", int(count[0])) print("First 10 rows") ! gsutil cat $FILE | head Explanation: Quick peek at your data This tutorial uses a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file. Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows. 
End of explanation dataset = aip.ImageDataset.create( display_name="Salads" + "_" + TIMESTAMP, gcs_source=[IMPORT_FILE], import_schema_uri=aip.schema.dataset.ioformat.image.bounding_box, ) print(dataset.resource_name) Explanation: Create the Dataset Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters: display_name: The human readable name for the Dataset resource. gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource. import_schema_uri: The data labeling schema for the data items. This operation may take several minutes. End of explanation dag = aip.AutoMLImageTrainingJob( display_name="salads_" + TIMESTAMP, prediction_type="object_detection", multi_label=False, model_type="MOBILE_TF_LOW_LATENCY_1", base_model=None, ) print(dag) Explanation: Create and run training pipeline To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipeline An AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters: display_name: The human readable name for the TrainingJob resource. prediction_type: The type task to train the model for. classification: An image classification model. object_detection: An image object detection model. multi_label: If a classification task, whether single (False) or multi-labeled (True). model_type: The type of model for deployment. CLOUD: Deployment on Google Cloud CLOUD_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on Google Cloud. CLOUD_LOW_LATENCY_: Optimized for latency over accuracy for deployment on Google Cloud. MOBILE_TF_VERSATILE_1: Deployment on an edge device. MOBILE_TF_HIGH_ACCURACY_1:Optimized for accuracy over latency for deployment on an edge device. MOBILE_TF_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on an edge device. base_model: (optional) Transfer learning from existing Model resource -- supported for image classification only. The instantiated object is the DAG (directed acyclic graph) for the training job. End of explanation model = dag.run( dataset=dataset, model_display_name="salads_" + TIMESTAMP, training_fraction_split=0.8, validation_fraction_split=0.1, test_fraction_split=0.1, budget_milli_node_hours=20000, disable_early_stopping=False, ) Explanation: Run the training pipeline Next, you run the DAG to start the training job by invoking the method run, with the following parameters: dataset: The Dataset resource to train the model. model_display_name: The human readable name for the trained model. training_fraction_split: The percentage of the dataset to use for training. test_fraction_split: The percentage of the dataset to use for test (holdout data). validation_fraction_split: The percentage of the dataset to use for validation. budget_milli_node_hours: (optional) Maximum training time specified in unit of millihours (1000 = hour). disable_early_stopping: If True, training maybe completed before using the entire budget if the service believes it cannot further improve on the model objective measurements. The run method when completed returns the Model resource. The execution of the training pipeline will take upto 20 minutes. 
End of explanation # Get model resource ID models = aip.Model.list(filter="display_name=salads_" + TIMESTAMP) # Get a reference to the Model Service client client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"} model_service_client = aip.gapic.ModelServiceClient(client_options=client_options) model_evaluations = model_service_client.list_model_evaluations( parent=models[0].resource_name ) model_evaluation = list(model_evaluations)[0] print(model_evaluation) Explanation: Review model evaluation scores After your model has finished training, you can review the evaluation scores for it. First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project. End of explanation response = model.export_model( artifact_destination=BUCKET_NAME, export_format_id="tflite", sync=True ) model_package = response["artifactOutputUri"] Explanation: Export as Edge model You can export an AutoML image object detection model as a Edge model which you can then custom deploy to an edge device or download locally. Use the method export_model() to export the model to Cloud Storage, which takes the following parameters: artifact_destination: The Cloud Storage location to store the SavedFormat model artifacts to. export_format_id: The format to save the model format as. For AutoML image object detection there is just one option: tf-saved-model: TensorFlow SavedFormat for deployment to a container. tflite: TensorFlow Lite for deployment to an edge or mobile device. edgetpu-tflite: TensorFlow Lite for TPU tf-js: TensorFlow for web client coral-ml: for Coral devices sync: Whether to perform operational sychronously or asynchronously. End of explanation ! gsutil ls $model_package # Download the model artifacts ! gsutil cp -r $model_package tflite tflite_path = "tflite/model.tflite" Explanation: Download the TFLite model artifacts Now that you have an exported TFLite version of your model, you can test the exported model locally, but first downloading it from Cloud Storage. End of explanation import tensorflow as tf interpreter = tf.lite.Interpreter(model_path=tflite_path) interpreter.allocate_tensors() input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() input_shape = input_details[0]["shape"] print("input tensor shape", input_shape) Explanation: Instantiate a TFLite interpreter The TFLite version of the model is not a TensorFlow SavedModel format. You cannot directly use methods like predict(). Instead, one uses the TFLite interpreter. You must first setup the interpreter for the TFLite model as follows: Instantiate an TFLite interpreter for the TFLite model. Instruct the interpreter to allocate input and output tensors for the model. Get detail information about the models input and output tensors that will need to be known for prediction. End of explanation test_items = ! gsutil cat $IMPORT_FILE | head -n1 test_item = test_items[0].split(",")[0] with tf.io.gfile.GFile(test_item, "rb") as f: content = f.read() test_image = tf.io.decode_jpeg(content) print("test image shape", test_image.shape) test_image = tf.image.resize(test_image, (224, 224)) print("test image shape", test_image.shape, test_image.dtype) test_image = tf.cast(test_image, dtype=tf.uint8).numpy() Explanation: Get test item You will use an arbitrary example out of the dataset as a test item. 
Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction. End of explanation import numpy as np data = np.expand_dims(test_image, axis=0) interpreter.set_tensor(input_details[0]["index"], data) interpreter.invoke() softmax = interpreter.get_tensor(output_details[0]["index"]) label = np.argmax(softmax) print(label) Explanation: Make a prediction with TFLite model Finally, you do a prediction using your TFLite model, as follows: Convert the test image into a batch of a single image (np.expand_dims) Set the input tensor for the interpreter to your batch of a single image (data). Invoke the interpreter. Retrieve the softmax probabilities for the prediction (get_tensor). Determine which label had the highest probability (np.argmax). End of explanation delete_all = True if delete_all: # Delete the dataset using the Vertex dataset object try: if "dataset" in globals(): dataset.delete() except Exception as e: print(e) # Delete the model using the Vertex model object try: if "model" in globals(): model.delete() except Exception as e: print(e) # Delete the endpoint using the Vertex endpoint object try: if "endpoint" in globals(): endpoint.delete() except Exception as e: print(e) # Delete the AutoML or Pipeline trainig job try: if "dag" in globals(): dag.delete() except Exception as e: print(e) # Delete the custom trainig job try: if "job" in globals(): job.delete() except Exception as e: print(e) # Delete the batch prediction job using the Vertex batch prediction object try: if "batch_predict_job" in globals(): batch_predict_job.delete() except Exception as e: print(e) # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object try: if "hpt_job" in globals(): hpt_job.delete() except Exception as e: print(e) if "BUCKET_NAME" in globals(): ! gsutil rm -r $BUCKET_NAME Explanation: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: Dataset Pipeline Model Endpoint AutoML Training Job Batch Job Custom Job Hyperparameter Tuning Job Cloud Storage Bucket End of explanation
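Added sketch, not from the Google tutorial itself: the interpreter calls above can be wrapped in a small helper so the same invoke sequence is reusable for other images. It assumes the image has already been resized and cast to the model's expected uint8 input shape, as done earlier with tf.image.resize and tf.cast.

import numpy as np
import tensorflow as tf

def tflite_predict(model_path, image_uint8):
    # Build and prime the interpreter for one model file.
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    # Add the batch dimension and feed the single image.
    batch = np.expand_dims(image_uint8, axis=0)
    interpreter.set_tensor(input_details[0]['index'], batch)
    interpreter.invoke()
    # Return every output tensor; their meaning depends on the exported model.
    return [interpreter.get_tensor(od['index']) for od in output_details]

# Example call with the variables prepared earlier in the notebook:
# outputs = tflite_predict(tflite_path, test_image)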
12,980
Given the following text description, write Python code to implement the functionality described below step by step Description: Jupyter Notebook & Python Intro Zuerst navigieren wir mit der Kommandozeile in den Folder, wo wir das Jupyter Notebook abspeichern wollen. Dann gehen wir in unser virtual environment und starten mit "jupyter notebook" unser Notebook auf. Jupyter Notebook ist eine Arbeitsoberfläche, der für Coding-Anfänger sehr einfach zu bedienen ist, denn es lassen sich Code-Teile einzelnen abspielen. Es gibt zwei Formate der Zellen. Code-Format und sogenanntes Markdown. Letzteres ist ein Textformat, das dem Text möglichst wenige Formatinfos anhängt. Nicht wie Word zum Beispiel. Wenn man grosse Notebooks entwickelt, ist es sehr hilfreich damit zu arbeiten. Zum Beispiel Titel Titel Titel Titel Titel Step1: sad hello Oder Aufzählungen, Fetten. Das geht alles mit Markdown. Man kann sogar Tabellen bauen oder Hyper Links setzen. Wie zum Beispiel auf dieses Markdown Cheatsheet. Hier sind weitere sehr praktische Format. In der Regel benutzten wir Jupyter Notebooks aber nicht, um zu texten, sondern zu coden. Legen wir los. Print und Input Datentypen Aktionen Variablen und Zuordnungen If, elif, else Lists Dictionaries Tuples Simple Funktionen Step2: Datentypen Step3: Aktionen Step4: Variablen, Vergleiche und Zuordnungen von Variablen Step5: if - else - (elif) Step6: Lists Step7: Dictionaries Verwende hier die geschwungene Klammern Step8: Tuples Hier sind runde Klammern König. Step9: Simple Funktionen - len und sort Beachte wie man die aufruft. Nämlich mit runden Klammern Step10: For Loop Step11: For loop with list of dictionaries
Python Code: #dsfdskjfbskjdfbdkjbfkjdbf #asdasd Explanation: Jupyter Notebook & Python Intro Zuerst navigieren wir mit der Kommandozeile in den Folder, wo wir das Jupyter Notebook abspeichern wollen. Dann gehen wir in unser virtual environment und starten mit "jupyter notebook" unser Notebook auf. Jupyter Notebook ist eine Arbeitsoberfläche, der für Coding-Anfänger sehr einfach zu bedienen ist, denn es lassen sich Code-Teile einzelnen abspielen. Es gibt zwei Formate der Zellen. Code-Format und sogenanntes Markdown. Letzteres ist ein Textformat, das dem Text möglichst wenige Formatinfos anhängt. Nicht wie Word zum Beispiel. Wenn man grosse Notebooks entwickelt, ist es sehr hilfreich damit zu arbeiten. Zum Beispiel Titel Titel Titel Titel Titel End of explanation #Mit einem Hashtag vor einer Zeile können wir Code kommentieren, auch das ist sehr wichtig. #Immer, wirklich, immer den eigenen Code zu kommentieren. Vor allem am Anfang. print("hello world") #Der Printbefehl druckt einfach alles aus. Nicht wirklich wahnsinnig toll. #Doch er ist später sehr nützlich. Vorallem wenn es darum geht Fehler im eigenn Code zu finden. #Mit dem Inputbefehl kannst Du Den Nutzer mit dem intergieren. input('wie alt bis Du?') Explanation: sad hello Oder Aufzählungen, Fetten. Das geht alles mit Markdown. Man kann sogar Tabellen bauen oder Hyper Links setzen. Wie zum Beispiel auf dieses Markdown Cheatsheet. Hier sind weitere sehr praktische Format. In der Regel benutzten wir Jupyter Notebooks aber nicht, um zu texten, sondern zu coden. Legen wir los. Print und Input Datentypen Aktionen Variablen und Zuordnungen If, elif, else Lists Dictionaries Tuples Simple Funktionen: len, sort, sorted For Loop Python Print und Input End of explanation #Strings 'Hallo wie "geht es Dir"' "12345" 124 str(124) #Integer type(567) type(int('1234')) #Floats 4.542323 float(12) int(4.64) #Dates, eigentlich Strings '15-11-2019' Explanation: Datentypen End of explanation print('Hallo' + ' '+ 'wie' + 'geht' + 'es') print('Hallo','wie','geht','es') #Alle anderen gängigen: #minus - #Mal * #geteilt durch / #Spezial: Modulo. %, geteilt durch und der Rest, der übrigbleibt 22 % 5 2 Explanation: Aktionen End of explanation #Grösser und kleiner als: #< > #Gleich == (wichtig, doppelte Gleichzeichen) #Denn das einfach definiert eine Variable 'Schweiz' == 'Schweiz' Schweiz = 'reich' Schweiz Schweiz == 'reich' reich = 'arm' 1 = 'reich' "5schweiz" 1 = 6 a = 34 a = b a = 'b' a == 'b' a Explanation: Variablen, Vergleiche und Zuordnungen von Variablen End of explanation elem = int(input('Wie alt bist Du?')) elem if elem < 0: print('Das ist unmöglich') else: print('Du bist aber alt') elem = int(input('Wie alt bist Du?')) if elem < 0: print('Das ist unmöglich') elif elem < 25: print('Du bist aber jung') else: print('Du bist aber alt') Explanation: if - else - (elif) End of explanation #Eckige Klammern [1,"hallo",3,4,5.23,6,7] lst = [1,2,3,4,5,6,7] lst #Einzelene Elemente lst[0] #Ganze Abschnitte lst[:4] #Komplexere Schnitte lst[::3] lst #Append, Pop, etc. saved_item = lst.pop() lst lst.append(saved_item) list #Aufpassen mit Befehl: list weil das macht aus etwas eine Liste. Auch aus Strings: list('hallo wie geht') range(0,10) #Elegantester Weg, eine Liste zu schreiben. Und ganz wichtig, #der Computer beginn immer bei 0. 
list(range(10)) list(range(9,-1,-1)) Explanation: Lists End of explanation #Komische, geschwungene Klammern {'Tier': 'Hund', 'Grösse': 124, 'Alter': 10} dct = {'Tier': 'Hund', 'Grösse': 124, 'Alter': 10} dct dct['Grösse'] #List of Dictionaires dct_lst = [{'Tier': 'Hund', 'Grösse': 124, 'Alter': 10}, {'Tier': 'Katze', 'Grösse': 130, 'Alter': 8}] type(dct_lst) dct_lst[1] dct_lst[0]['Alter'] neue_list = [] for xxxxxxxxxxxx in dct_lst: neue_list.append(xxxxxxxxxxxx['Alter']) neue_list Explanation: Dictionaries Verwende hier die geschwungene Klammern End of explanation lst tuple(lst) lst lst = tuple(lst) lst #Unveränderbar. Also gutes Format, um Sachen abzuspeichern. #Aber das wirklich nur der Vollständigkeitshalber. Explanation: Tuples Hier sind runde Klammern König. End of explanation #len mit Strings len('hallo wie geht es Dir') #len mit Lists len([1,2,3,4,4,5]) #len mit dictionaries len({'Tier': 'Hund', 'Alter': 345}) #len mit Tuples len((1,1,1,2,2,1)) #sorted für momentane Sortierung sorted('hallo wie geht es Dir') a = 'hallo wie geht es Dir' sorted(a) a #Sort funktioniert allerdings "nur" mit lists lst = [1, 5, 9, 10, 34, 12, 12, 14] lst.sort() lst dic = {'Tier': 'Hund', 'Alter': 345} dic.sort() Explanation: Simple Funktionen - len und sort Beachte wie man die aufruft. Nämlich mit runden Klammern End of explanation lst for hghjgfjhf in lst: print(x) dicbkjghkg = {'Tier': 'Hund', 'Alter': 345} for key, value in dicbkjghkg.items(): print(key, value) #for loop to make new lists lst #Nehmen wir einmal an, wir wollen nur die geraden Zahlen in der Liste new_lst = [] for elem in lst: if elem % 2 == 0: new_lst.append(elem) # else: # continue new_lst Explanation: For Loop End of explanation dic_lst = [{'Animal': 'Dog', 'Size': 45}, {'Animal': 'Cat', 'Size': 23}, {'Animal': 'Bird', 'Size': 121212}] for dic in dic_lst: print(dic) for dic in dic_lst: print(dic['Animal']) for dic in dic_lst: print(dic['Animal'] + ': '+ dic['Size'])) Explanation: For loop with list of dictionaries End of explanation
12,981
Given the following text description, write Python code to implement the functionality described below step by step Description: The Forest Fire Model A rapid introduction to Mesa The Forest Fire Model is one of the simplest examples of a model that exhibits self-organized criticality. Mesa is a new, Pythonic agent-based modeling framework. A big advantage of using Python is that it a great language for interactive data analysis. Unlike some other ABM frameworks, with Mesa you can write a model, run it, and analyze it all in the same environment. (You don't have to, of course. But you can). In this notebook, we'll go over a rapid-fire (pun intended, sorry) introduction to building and analyzing a model with Mesa. First, some imports. We'll go over what all the Mesa ones mean just below. Step1: Building the model Most models consist of basically two things Step2: Now we need to define the model object itself. The main thing the model needs is the grid, which the trees are placed on. But since the model is dynamic, it also needs to include time -- it needs a schedule, to manage the trees activation as they spread the fire from one to the other. The model also needs a few parameters Step3: Running the model Let's create a model with a 100 x 100 grid, and a tree density of 0.6. Remember, ForestFire takes the arguments height, width, density. Step4: To run the model until it's done (that is, until it sets its running property to False) just use the run_model() method. This is implemented in the Model parent object, so we didn't need to implement it above. Step5: That's all there is to it! But... so what? This code doesn't include a visualization, after all. TODO Step6: And chart it, to see the dynamics. Step7: In this case, the fire burned itself out after about 90 steps, with many trees left unburned. You can try changing the density parameter and rerunning the code above, to see how different densities yield different dynamics. For example Step8: ... But to really understand how the final outcome varies with density, we can't just tweak the parameter by hand over and over again. We need to do a batch run. Batch runs Batch runs, also called parameter sweeps, allow use to systemically vary the density parameter, run the model, and check the output. Mesa provides a BatchRunner object which takes a model class, a dictionary of parameters and the range of values they can take and runs the model at each combination of these values. We can also give it reporters, which collect some data on the model at the end of each run and store it, associated with the parameters that produced it. For ease of typing and reading, we'll first create the parameters to vary and the reporter, and then assign them to a new BatchRunner. Step9: Now the BatchRunner, which we've named param_run, is ready to go. To run the model at every combination of parameters (in this case, every density value), just use the run_all() method. Step10: Like with the data collector, we can extract the data the batch runner collected into a dataframe Step11: As you can see, each row here is a run of the model, identified by its parameter values (and given a unique index by the Run column). To view how the BurnedOut fraction varies with density, we can easily just plot them Step12: And we see the very clear emergence of a critical value around 0.5, where the model quickly shifts from almost no trees being burned, to almost all of them. In this case we ran the model only once at each value. 
However, it's easy to have the BatchRunner execute multiple runs at each parameter combination, in order to generate more statistically reliable results. We do this using the iterations argument. Let's run the model 5 times at each parameter point, and export and plot the results as above.
End of explanation
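Added follow-on sketch, not in the original Mesa tutorial: with five iterations per density value, the scatter can be summarised by averaging the burned-out fraction at each density. This assumes df and plt from the cells above are still in scope.

mean_burned = df.groupby('density')['BurnedOut'].mean()
plt.plot(mean_burned.index, mean_burned.values)
plt.xlabel('Tree density')
plt.ylabel('Mean fraction of trees burned out')
plt.xlim(0, 1)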
Python Code: import random import numpy as np import matplotlib.pyplot as plt %matplotlib inline from mesa import Model, Agent from mesa.time import RandomActivation from mesa.space import Grid from mesa.datacollection import DataCollector from mesa.batchrunner import BatchRunner Explanation: The Forest Fire Model A rapid introduction to Mesa The Forest Fire Model is one of the simplest examples of a model that exhibits self-organized criticality. Mesa is a new, Pythonic agent-based modeling framework. A big advantage of using Python is that it a great language for interactive data analysis. Unlike some other ABM frameworks, with Mesa you can write a model, run it, and analyze it all in the same environment. (You don't have to, of course. But you can). In this notebook, we'll go over a rapid-fire (pun intended, sorry) introduction to building and analyzing a model with Mesa. First, some imports. We'll go over what all the Mesa ones mean just below. End of explanation class TreeCell(Agent): ''' A tree cell. Attributes: x, y: Grid coordinates condition: Can be "Fine", "On Fire", or "Burned Out" unique_id: (x,y) tuple. unique_id isn't strictly necessary here, but it's good practice to give one to each agent anyway. ''' def __init__(self, pos): ''' Create a new tree. Args: pos: The tree's coordinates on the grid. ''' self.pos = pos self.unique_id = pos self.condition = "Fine" def step(self, model): ''' If the tree is on fire, spread it to fine trees nearby. ''' if self.condition == "On Fire": neighbors = model.grid.get_neighbors(self.pos, moore=False) for neighbor in neighbors: if neighbor.condition == "Fine": neighbor.condition = "On Fire" self.condition = "Burned Out" Explanation: Building the model Most models consist of basically two things: agents, and an world for the agents to be in. The Forest Fire model has only one kind of agent: a tree. A tree can either be unburned, on fire, or already burned. The environment is a grid, where each cell can either be empty or contain a tree. First, let's define our tree agent. The agent needs to be assigned x and y coordinates on the grid, and that's about it. We could assign agents a condition to be in, but for now let's have them all start as being 'Fine'. Since the agent doesn't move, and there is only at most one tree per cell, we can use a tuple of its coordinates as a unique identifier. Next, we define the agent's step method. This gets called whenever the agent needs to act in the world and takes the model object to which it belongs as an input. The tree's behavior is simple: If it is currently on fire, it spreads the fire to any trees above, below, to the left and the right of it that are not themselves burned out or on fire; then it burns itself out. End of explanation class ForestFire(Model): ''' Simple Forest Fire model. ''' def __init__(self, height, width, density): ''' Create a new forest fire model. Args: height, width: The size of the grid to model density: What fraction of grid cells have a tree in them. 
''' # Initialize model parameters self.height = height self.width = width self.density = density # Set up model objects self.schedule = RandomActivation(self) self.grid = Grid(height, width, torus=False) self.dc = DataCollector({"Fine": lambda m: self.count_type(m, "Fine"), "On Fire": lambda m: self.count_type(m, "On Fire"), "Burned Out": lambda m: self.count_type(m, "Burned Out")}) # Place a tree in each cell with Prob = density for x in range(self.width): for y in range(self.height): if random.random() < self.density: # Create a tree new_tree = TreeCell((x, y)) # Set all trees in the first column on fire. if x == 0: new_tree.condition = "On Fire" self.grid[y][x] = new_tree self.schedule.add(new_tree) self.running = True def step(self): ''' Advance the model by one step. ''' self.schedule.step() self.dc.collect(self) # Halt if no more fire if self.count_type(self, "On Fire") == 0: self.running = False @staticmethod def count_type(model, tree_condition): ''' Helper method to count trees in a given condition in a given model. ''' count = 0 for tree in model.schedule.agents: if tree.condition == tree_condition: count += 1 return count Explanation: Now we need to define the model object itself. The main thing the model needs is the grid, which the trees are placed on. But since the model is dynamic, it also needs to include time -- it needs a schedule, to manage the trees activation as they spread the fire from one to the other. The model also needs a few parameters: how large the grid is and what the density of trees on it will be. Density will be the key parameter we'll explore below. Finally, we'll give the model a data collector. This is a Mesa object which collects and stores data on the model as it runs for later analysis. The constructor needs to do a few things. It instantiates all the model-level variables and objects; it randomly places trees on the grid, based on the density parameter; and it starts the fire by setting all the trees on one edge of the grid (x=0) as being On "Fire". Next, the model needs a step method. Like at the agent level, this method defines what happens every step of the model. We want to activate all the trees, one at a time; then we run the data collector, to count how many trees are currently on fire, burned out, or still fine. If there are no trees left on fire, we stop the model by setting its running property to False. End of explanation fire = ForestFire(100, 100, 0.6) Explanation: Running the model Let's create a model with a 100 x 100 grid, and a tree density of 0.6. Remember, ForestFire takes the arguments height, width, density. End of explanation fire.run_model() Explanation: To run the model until it's done (that is, until it sets its running property to False) just use the run_model() method. This is implemented in the Model parent object, so we didn't need to implement it above. End of explanation results = fire.dc.get_model_vars_dataframe() Explanation: That's all there is to it! But... so what? This code doesn't include a visualization, after all. TODO: Add a MatPlotLib visualization Remember the data collector? Now we can put the data it collected into a pandas DataFrame: End of explanation results.plot() Explanation: And chart it, to see the dynamics. End of explanation fire = ForestFire(100, 100, 0.8) fire.run_model() results = fire.dc.get_model_vars_dataframe() results.plot() Explanation: In this case, the fire burned itself out after about 90 steps, with many trees left unburned. 
You can try changing the density parameter and rerunning the code above, to see how different densities yield different dynamics. For example: End of explanation param_set = dict(height=50, # Height and width are constant width=50, # Vary density from 0.01 to 1, in 0.01 increments: density=np.linspace(0,1,101)[1:]) # At the end of each model run, calculate the fraction of trees which are Burned Out model_reporter = {"BurnedOut": lambda m: (ForestFire.count_type(m, "Burned Out") / m.schedule.get_agent_count()) } # Create the batch runner param_run = BatchRunner(ForestFire, param_set, model_reporters=model_reporter) Explanation: ... But to really understand how the final outcome varies with density, we can't just tweak the parameter by hand over and over again. We need to do a batch run. Batch runs Batch runs, also called parameter sweeps, allow use to systemically vary the density parameter, run the model, and check the output. Mesa provides a BatchRunner object which takes a model class, a dictionary of parameters and the range of values they can take and runs the model at each combination of these values. We can also give it reporters, which collect some data on the model at the end of each run and store it, associated with the parameters that produced it. For ease of typing and reading, we'll first create the parameters to vary and the reporter, and then assign them to a new BatchRunner. End of explanation param_run.run_all() Explanation: Now the BatchRunner, which we've named param_run, is ready to go. To run the model at every combination of parameters (in this case, every density value), just use the run_all() method. End of explanation df = param_run.get_model_vars_dataframe() df.head() Explanation: Like with the data collector, we can extract the data the batch runner collected into a dataframe: End of explanation plt.scatter(df.density, df.BurnedOut) plt.xlim(0,1) Explanation: As you can see, each row here is a run of the model, identified by its parameter values (and given a unique index by the Run column). To view how the BurnedOut fraction varies with density, we can easily just plot them: End of explanation param_run = BatchRunner(ForestFire, param_set, iterations=5, model_reporters=model_reporter) param_run.run_all() df = param_run.get_model_vars_dataframe() plt.scatter(df.density, df.BurnedOut) plt.xlim(0,1) Explanation: And we see the very clear emergence of a critical value around 0.5, where the model quickly shifts from almost no trees being burned, to almost all of them. In this case we ran the model only once at each value. However, it's easy to have the BatchRunner execute multiple runs at each parameter combination, in order to generate more statistically reliable results. We do this using the iteration argument. Let's run the model 5 times at each parameter point, and export and plot the results as above. End of explanation
12,982
Given the following text description, write Python code to implement the functionality described below step by step Description: Preprocessing ... but you can't access it! So MDR has done, it, below... Download and unzip the data - MDR (Don't re-run the below unless needed - it's >800Mb, and takes about 3-4 min to download) Now the unzipping (15s, no output). Step1: Process the data Step2: Takes just under 2 min, no output. Step3: Looking at the vectors Then the following function will return the word vectors as a matrix, the word list, and the mapping from word to index. Step4: Here's the first 25 "words" in glove. Step5: This is how you can look up a word vector. Step6: Just for fun, let's take a look at a 2d projection of the first 350 words, using T-SNE.
Python Code: import zipfile with zipfile.ZipFile(path + "glove.6B.zip","r") as zip_ref: zip_ref.extractall(path) %ls $path Explanation: Preprocessing ... but you can't access it! So MDR has done, it, below... Download and unzip the data - MDR (Don't re-run the below unless needed - it's >800Mb, and takes about 3-4 min to download) Now the unzipping (15s, no output). End of explanation import pickle def get_glove(name): with open(path+ 'glove.' + name + '.txt', 'r') as f: lines = [line.split() for line in f] words = [d[0] for d in lines] vecs = np.stack(np.array(d[1:], dtype=np.float32) for d in lines) wordidx = {o:i for i,o in enumerate(words)} save_array(res_path+name+'.dat', vecs) pickle.dump(words, open(res_path+name+'_words.pkl','wb')) pickle.dump(wordidx, open(res_path+name+'_idx.pkl','wb')) Explanation: Process the data End of explanation get_glove('6B.50d') get_glove('6B.100d') get_glove('6B.200d') get_glove('6B.300d') Explanation: Takes just under 2 min, no output. End of explanation def load_glove(loc): return (load_array(loc+'.dat'), pickle.load(open(loc+'_words.pkl','rb')), pickle.load(open(loc+'_idx.pkl','rb'))) vecs, words, wordidx = load_glove(res_path+'6B.50d') vecs.shape Explanation: Looking at the vectors Then the following function will return the word vectors as a matrix, the word list, and the mapping from word to index. End of explanation ' '.join(words[:25]) Explanation: Here's the first 25 "words" in glove. End of explanation def w2v(w): return vecs[wordidx[w]] w2v('of') Explanation: This is how you can look up a word vector. End of explanation ## MDR: none of this seems to be needed?! #reload(sys) #sys.setdefaultencoding('utf8') tsne = TSNE(n_components=2, random_state=0) Y = tsne.fit_transform(vecs[:500]) start=0; end=400 dat = Y[start:end] plt.figure(figsize=(15,15)) plt.scatter(dat[:, 0], dat[:, 1]) for label, x, y in zip(words[start:end], dat[:, 0], dat[:, 1]): plt.text(x,y,label, color=np.random.rand(3)*0.7, fontsize=10) plt.show() Explanation: Just for fun, let's take a look at a 2d projection of the first 350 words, using T-SNE. End of explanation
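Added sketch, not in the original notebook: once vecs, words and wordidx are loaded as above, cosine-similarity nearest neighbours give a quick sanity check on the embeddings. The query word 'king' is only an example.

import numpy as np

def nearest_words(word, topn=5):
    mat = np.asarray(vecs)    # plain ndarray, in case the loader returns a disk-backed array
    v = mat[wordidx[word]]
    # Cosine similarity of the query vector against every row of the matrix.
    sims = mat @ v / (np.linalg.norm(mat, axis=1) * np.linalg.norm(v) + 1e-9)
    order = np.argsort(-sims)
    # Drop rank 0, which is the query word itself.
    return [(words[i], float(sims[i])) for i in order[1:topn + 1]]

nearest_words('king')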
12,983
Given the following text description, write Python code to implement the functionality described below step by step Description: TUTORIAL 05 - Exact Parametrized Functions for non-affine elliptic problems Keywords Step1: 3. Affine decomposition The parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu})$ is trivially affine. The exact solution will be used on the forcing term $g(\boldsymbol{x}; \boldsymbol{\mu})$ to obtain an efficient (exact affine) expansion of $f(\cdot; \boldsymbol{\mu})$. Step2: 4. Main program 4.1. Read the mesh for this problem The mesh was generated by the data/generate_mesh.ipynb notebook. Step3: 4.2. Create Finite Element space (Lagrange P1) Step4: 4.3. Allocate an object of the Gaussian class Step5: 4.4. Prepare reduction with a reduced basis method Step6: 4.5. Perform the offline phase Step7: 4.6. Perform an online solve Step8: 4.7. Perform an error analysis
Python Code: from dolfin import * from rbnics import * Explanation: TUTORIAL 05 - Exact Parametrized Functions for non-affine elliptic problems Keywords: exact parametrized functions 1. Introduction In this Tutorial, we consider steady heat conduction in a two-dimensional square domain $\Omega = (-1, 1)^2$. The boundary $\partial\Omega$ is kept at a reference temperature (say, zero). The conductivity coefficient is fixed to 1, while the heat source is characterized by the following expression $$ g(\boldsymbol{x}; \boldsymbol{\mu}) = \exp{ -2 (x_0-\mu_0)^2 - 2 (x_1 - \mu_1)^2} \quad \forall \boldsymbol{x} = (x_0, x_1) \in \Omega. $$ The parameter vector $\boldsymbol{\mu}$, given by $$ \boldsymbol{\mu} = (\mu_0,\mu_1) $$ affects the center of the Gaussian source $g(\boldsymbol{x}; \boldsymbol{\mu})$, which could be located at any point $\Omega$. Thus, the parameter domain is $$ \mathbb{P}=[-1,1]^2. $$ In order to be able to compare the interpolation methods (EIM and DEIM) used to solve this problem, we propose to use an exact solution of the problem. 2. Parametrized formulation Let $u(\boldsymbol{\mu})$ be the temperature in the domain $\Omega$. We will directly provide a weak formulation for this problem <center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u(\boldsymbol{\mu})\in\mathbb{V}$ such that</center> $$a\left(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}\right)=f(v;\boldsymbol{\mu})\quad \forall v\in\mathbb{V}$$ where the function space $\mathbb{V}$ is defined as $$ \mathbb{V} = \left{ v \in H^1(\Omega(\mu_0)): v|_{\partial\Omega} = 0\right} $$ Note that, as in the previous tutorial, the function space is parameter dependent due to the shape variation. the parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by $$a(u,v;\boldsymbol{\mu}) = \int_{\Omega} \nabla u \cdot \nabla v \ d\boldsymbol{x}$$ the parametrized linear form $f(\cdot; \boldsymbol{\mu}): \mathbb{V} \to \mathbb{R}$ is defined by $$f(v;\boldsymbol{\mu}) = \int_\Omega g(\boldsymbol{\mu}) v \ d\boldsymbol{x}.$$ End of explanation @ExactParametrizedFunctions() class Gaussian(EllipticCoerciveProblem): # Default initialization of members def __init__(self, V, **kwargs): # Call the standard initialization EllipticCoerciveProblem.__init__(self, V, **kwargs) # ... and also store FEniCS data structures for assembly assert "subdomains" in kwargs assert "boundaries" in kwargs self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"] self.u = TrialFunction(V) self.v = TestFunction(V) self.dx = Measure("dx")(subdomain_data=subdomains) self.f = ParametrizedExpression( self, "exp(- 2 * pow(x[0] - mu[0], 2) - 2 * pow(x[1] - mu[1], 2))", mu=(0., 0.), element=V.ufl_element()) # note that we cannot use self.mu in the initialization of self.f, because self.mu has not been initialized yet # Return custom problem name def name(self): return "GaussianExact" # Return the alpha_lower bound. def get_stability_factor_lower_bound(self): return 1. # Return theta multiplicative terms of the affine expansion of the problem. def compute_theta(self, term): if term == "a": return (1.,) elif term == "f": return (1.,) else: raise ValueError("Invalid term for compute_theta().") # Return forms resulting from the discretization of the affine expansion of the problem operators. 
def assemble_operator(self, term): v = self.v dx = self.dx if term == "a": u = self.u a0 = inner(grad(u), grad(v)) * dx return (a0,) elif term == "f": f = self.f f0 = f * v * dx return (f0,) elif term == "dirichlet_bc": bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1), DirichletBC(self.V, Constant(0.0), self.boundaries, 2), DirichletBC(self.V, Constant(0.0), self.boundaries, 3)] return (bc0,) elif term == "inner_product": u = self.u x0 = inner(grad(u), grad(v)) * dx return (x0,) else: raise ValueError("Invalid term for assemble_operator().") Explanation: 3. Affine decomposition The parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu})$ is trivially affine. The exact solution will be used on the forcing term $g(\boldsymbol{x}; \boldsymbol{\mu})$ to obtain an efficient (exact affine) expansion of $f(\cdot; \boldsymbol{\mu})$. End of explanation mesh = Mesh("data/gaussian.xml") subdomains = MeshFunction("size_t", mesh, "data/gaussian_physical_region.xml") boundaries = MeshFunction("size_t", mesh, "data/gaussian_facet_region.xml") Explanation: 4. Main program 4.1. Read the mesh for this problem The mesh was generated by the data/generate_mesh.ipynb notebook. End of explanation V = FunctionSpace(mesh, "Lagrange", 1) Explanation: 4.2. Create Finite Element space (Lagrange P1) End of explanation problem = Gaussian(V, subdomains=subdomains, boundaries=boundaries) mu_range = [(-1.0, 1.0), (-1.0, 1.0)] problem.set_mu_range(mu_range) Explanation: 4.3. Allocate an object of the Gaussian class End of explanation reduction_method = ReducedBasis(problem) reduction_method.set_Nmax(20) reduction_method.set_tolerance(1e-4) Explanation: 4.4. Prepare reduction with a reduced basis method End of explanation reduction_method.initialize_training_set(50) reduced_problem = reduction_method.offline() Explanation: 4.5. Perform the offline phase End of explanation online_mu = (0.3, -1.0) reduced_problem.set_mu(online_mu) reduced_solution = reduced_problem.solve() plot(reduced_solution, reduced_problem=reduced_problem) Explanation: 4.6. Perform an online solve End of explanation reduction_method.initialize_testing_set(50) reduction_method.error_analysis(filename="error_analysis") Explanation: 4.7. Perform an error analysis End of explanation
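As a quick side check that needs neither FEniCS nor RBniCS, the parametrized Gaussian source g(x; mu) defined above can be evaluated with plain NumPy to see how the parameter shifts its center. The grid resolution below is an illustrative assumption; the second parameter value matches the online_mu used later in the tutorial.

import numpy as np

def gaussian_source(x0, x1, mu):
    # g(x; mu) = exp(-2*(x0 - mu0)^2 - 2*(x1 - mu1)^2), the same expression as the ParametrizedExpression above
    return np.exp(-2.0 * (x0 - mu[0])**2 - 2.0 * (x1 - mu[1])**2)

xx, yy = np.meshgrid(np.linspace(-1.0, 1.0, 5), np.linspace(-1.0, 1.0, 5))
for mu in [(0.0, 0.0), (0.3, -1.0)]:
    g = gaussian_source(xx, yy, mu)
    print(mu, "peak of g at grid point", np.unravel_index(np.argmax(g), g.shape))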
12,984
Given the following text description, write Python code to implement the functionality described below step by step Description: ABU量化系统使用文档 <center> <img src="./image/abu_logo.png" alt="" style="vertical-align Step1: 下面先获取沙盒数据中美股一年的数据,为之后的分析做数据准备: Step2: 1. 传统的双均线择时策略 双均线策略是量化策略中经典的策略之一,其属于趋势跟踪策略,基本实现思想如下 预设两条均线:比如一个ma=5,一个ma=60, 5的均线被称作快线,60的均线被称作慢线 择时买入策略中当快线上穿慢线(ma5上穿ma60)称为形成金叉买点信号,买入股票 择时卖出策略中当快线下穿慢线(ma5下穿ma60)称为形成死叉卖点信号,卖出股票 熟悉技术指标的朋友都知道比如macd,kdj等等依赖均线的技术指标,其核心思想都差不多。 下面使用abupy中的技术指标模块nd绘制一下上述文字描述,如下: Step4: 上图中可以看到 第一次ma5(蓝线)上穿ma60(绿线)形成金叉的时候,没能够继续上升趋势,马上ma5下穿了ma60形成死叉卖出信号 第二次(红竖线)发出金叉信号的时候持续了上升趋势 下面使用abupy中内置的双均线策略进行回测示例,如下: Step5: 通过度量对象可视化可以发现择时生效买入因子都是ma5上穿ma60的金叉信号,卖出择时因子并行了4个策略,可以看到ma5下穿ma60的死叉信号生效比例也很高,如下: Step6: 下面从交易单中筛选出所有tsla的交易结果,如下所示: Step7: 通过nd模块的plot_ma_from_order接口可视化tsla交易单中的第一笔交易,可以发现这笔交易就是本节开始时示例的tsla第一次ma5(蓝线)上穿ma60(绿线)形成金叉且没能够继续上升趋势,马上ma5下穿了ma60形成死叉卖出信号的那笔交易。 Step8: 继续可视化tsla交易单中的第二笔交易,可以发现这笔交易就是本节开始时示例的tsla第二次ma5上穿ma60发出金叉信号且持续了上升趋势的那笔交易。 Step9: 2. 真 • 动态自适应双均线策略 传统双均线策略当两根信号线差距比较大时,如上面ma5,ma60这种组合,属于迟钝金叉,即可能趋势以发生了很长时间甚至都快要结束的时候才能发出金叉买入信号,比如本节开始时绘制的tsla的两个金叉买入信号中的第一个信号。 那么如果调近两根均值的差距,比如使用ma5,ma20呢,下面绘制本节开始时绘制的tsla,ma参数改为5,20,如下: Step10: 可以看到之前的金叉信号提前发出了,且死叉信号发出的也比较及时,这笔交易有一定的收益,但是注意观察其它的信号点可以发现产生了很多买卖信号,因为上面ma5,ma20这种组合,属于敏感金叉,这些信号中很多产生了失败的交易。 abupy中AbuDoubleMaBuy是动态自适应双均线策略,比如下面的参数中不传递快线值,只传递慢线值60,如下进行回测: Step11: 下面可视化买入因子的生效分布,如下所示: Step12: 可以看到买入因子慢线仍然使用参数中传递的60,快线有3,9,18三个值,这三个值是以慢线数值为基数,结合大盘的走势计算出来的。 策略中动态计算快线的策略主要参考了大盘最近一个月走势震荡程度,动态决策快线的值: 大盘最近一个月走势非常稳定:fast=slow X 0.05 eg Step13: 下面可视化买入因子的生效分布,如下所示: Step14: 可以看到买入因子慢线的值从20到100不等,快线的值也从2到30不等。 策略中动态计算快线的策略和上述的方式相同,动态自适应慢线值主要依据重采样周期内的振幅值来确定,在第‘第10节 比特币, 莱特币的回测’中已经使用过resample_close_mean统计重采样周期内的振幅值,如下:
Python Code: # 基础库导入 from __future__ import print_function from __future__ import division import warnings warnings.filterwarnings('ignore') warnings.simplefilter('ignore') import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import os import sys # 使用insert 0即只使用github,避免交叉使用了pip安装的abupy,导致的版本不一致问题 sys.path.insert(0, os.path.abspath('../')) import abupy # 使用沙盒数据,目的是和书中一样的数据环境 abupy.env.enable_example_env_ipython() from abupy import AbuDoubleMaBuy, AbuDoubleMaSell, ABuKLUtil, ABuSymbolPd from abupy import AbuFactorCloseAtrNStop, AbuFactorAtrNStop, AbuFactorPreAtrNStop from abupy import abu, ABuProgress, AbuMetricsBase, EMarketTargetType, nd Explanation: ABU量化系统使用文档 <center> <img src="./image/abu_logo.png" alt="" style="vertical-align:middle;padding:10px 20px;"><font size="6" color="black"><b>第28节 真 • 动态自适应双均线策略</b></font> </center> 作者: 阿布 阿布量化版权所有 未经允许 禁止转载 abu量化系统github地址 (欢迎+star) 本节ipython notebook 上一节讲解了选股策略与择时策略相互配合的示例,本节的内容将讲解择时策略中的经典策略双均线策略,以及它的优化版本动态自适应双均线策略。 首先导入本节需要使用的abupy中的模块: End of explanation # 使用沙盒内的美股做为回测目标 us_choice_symbols = ['usTSLA', 'usNOAH', 'usSFUN', 'usBIDU', 'usAAPL', 'usGOOG', 'usWUBA', 'usVIPS'] kl_dict = {us_symbol[2:]: ABuSymbolPd.make_kl_df(us_symbol, start='2014-07-26', end='2015-07-26') for us_symbol in us_choice_symbols} Explanation: 下面先获取沙盒数据中美股一年的数据,为之后的分析做数据准备: End of explanation nd.ma.plot_ma_from_klpd(kl_dict['TSLA'], time_period=(5, 60), with_points_ext=pd.to_datetime('2015-04-09'), with_points=pd.to_datetime('2014-11-17')) Explanation: 1. 传统的双均线择时策略 双均线策略是量化策略中经典的策略之一,其属于趋势跟踪策略,基本实现思想如下 预设两条均线:比如一个ma=5,一个ma=60, 5的均线被称作快线,60的均线被称作慢线 择时买入策略中当快线上穿慢线(ma5上穿ma60)称为形成金叉买点信号,买入股票 择时卖出策略中当快线下穿慢线(ma5下穿ma60)称为形成死叉卖点信号,卖出股票 熟悉技术指标的朋友都知道比如macd,kdj等等依赖均线的技术指标,其核心思想都差不多。 下面使用abupy中的技术指标模块nd绘制一下上述文字描述,如下: End of explanation # 初始资金量 cash = 3000000 def run_loo_back(choice_symbols, ps=None, n_folds=2, start=None, end=None, only_info=False): 封装一个回测函数,返回回测结果,以及回测度量对象 if choice_symbols[0].startswith('us'): abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_US else: abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN abu_result_tuple, _ = abu.run_loop_back(cash, buy_factors, sell_factors, ps, start=start, end=end, n_folds=n_folds, choice_symbols=choice_symbols) ABuProgress.clear_output() metrics = AbuMetricsBase.show_general(*abu_result_tuple, returns_cmp=only_info, only_info=only_info, only_show_returns=True) return abu_result_tuple, metrics # 买入双均线策略AbuDoubleMaBuy寻找金叉买入信号:ma快线=5,ma慢线=60 buy_factors = [{'fast': 5, 'slow': 60, 'class': AbuDoubleMaBuy}] # 卖出双均线策略AbuDoubleMaSell寻找死叉卖出信号:ma快线=5,ma慢线=60,并行继续使用止盈止损基础策略 sell_factors = [{'fast': 5, 'slow': 60, 'class': AbuDoubleMaSell}, {'stop_loss_n': 1.0, 'stop_win_n': 3.0, 'class': AbuFactorAtrNStop}, {'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5}, {'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}] # 开始回测 abu_result_tuple, metrics = run_loo_back(us_choice_symbols) Explanation: 上图中可以看到 第一次ma5(蓝线)上穿ma60(绿线)形成金叉的时候,没能够继续上升趋势,马上ma5下穿了ma60形成死叉卖出信号 第二次(红竖线)发出金叉信号的时候持续了上升趋势 下面使用abupy中内置的双均线策略进行回测示例,如下: End of explanation metrics.plot_buy_factors() metrics.plot_sell_factors() Explanation: 通过度量对象可视化可以发现择时生效买入因子都是ma5上穿ma60的金叉信号,卖出择时因子并行了4个策略,可以看到ma5下穿ma60的死叉信号生效比例也很高,如下: End of explanation tsla_orders = abu_result_tuple.orders_pd[abu_result_tuple.orders_pd.symbol=='usTSLA'] tsla_orders Explanation: 下面从交易单中筛选出所有tsla的交易结果,如下所示: End of explanation nd.ma.plot_ma_from_order(tsla_orders.iloc[0], time_period=(5, 60)) Explanation: 
通过nd模块的plot_ma_from_order接口可视化tsla交易单中的第一笔交易,可以发现这笔交易就是本节开始时示例的tsla第一次ma5(蓝线)上穿ma60(绿线)形成金叉且没能够继续上升趋势,马上ma5下穿了ma60形成死叉卖出信号的那笔交易。 End of explanation nd.ma.plot_ma_from_order(tsla_orders.iloc[1], time_period=(5, 60)) Explanation: 继续可视化tsla交易单中的第二笔交易,可以发现这笔交易就是本节开始时示例的tsla第二次ma5上穿ma60发出金叉信号且持续了上升趋势的那笔交易。 End of explanation nd.ma.plot_ma_from_klpd(kl_dict['TSLA'], time_period=(5, 20), with_points_ext=pd.to_datetime('2014-11-25'), with_points=pd.to_datetime('2014-10-31')) Explanation: 2. 真 • 动态自适应双均线策略 传统双均线策略当两根信号线差距比较大时,如上面ma5,ma60这种组合,属于迟钝金叉,即可能趋势以发生了很长时间甚至都快要结束的时候才能发出金叉买入信号,比如本节开始时绘制的tsla的两个金叉买入信号中的第一个信号。 那么如果调近两根均值的差距,比如使用ma5,ma20呢,下面绘制本节开始时绘制的tsla,ma参数改为5,20,如下: End of explanation # 只传递慢线60,不传递快线参数为动态自适应快线值 buy_factors = [{'slow': 60, 'class': AbuDoubleMaBuy}] abu_result_tuple, metrics = run_loo_back(us_choice_symbols) Explanation: 可以看到之前的金叉信号提前发出了,且死叉信号发出的也比较及时,这笔交易有一定的收益,但是注意观察其它的信号点可以发现产生了很多买卖信号,因为上面ma5,ma20这种组合,属于敏感金叉,这些信号中很多产生了失败的交易。 abupy中AbuDoubleMaBuy是动态自适应双均线策略,比如下面的参数中不传递快线值,只传递慢线值60,如下进行回测: End of explanation metrics.plot_buy_factors() Explanation: 下面可视化买入因子的生效分布,如下所示: End of explanation # 不传递任何参数,快线, 慢线都动态自适应 buy_factors = [{'class': AbuDoubleMaBuy}] abu_result_tuple, metrics = run_loo_back(us_choice_symbols) Explanation: 可以看到买入因子慢线仍然使用参数中传递的60,快线有3,9,18三个值,这三个值是以慢线数值为基数,结合大盘的走势计算出来的。 策略中动态计算快线的策略主要参考了大盘最近一个月走势震荡程度,动态决策快线的值: 大盘最近一个月走势非常稳定:fast=slow X 0.05 eg: slow=60->fast=60 X 0.05=3 大盘最近一个月走势比较稳定:fast=slow X 0.15 eg: slow=60->fast=60 X 0.15=9 大盘最近一个月走势比较震荡:fast=slow X 0.3 eg: slow=60->fast=60 X 0.3=18 大盘最近一个月走势非常震荡:fast=slow X 0.5 eg: slow=60->fast=60 X0.5=30 择时周期内每一个月重新计算一次 具体实现以及如何判定大盘的震荡程度请自行阅读源代码 上面的双均线策略中的慢线依然是通过参数传递,下面示例慢线参数也动态自适应计算的回测,如下: End of explanation metrics.plot_buy_factors() Explanation: 下面可视化买入因子的生效分布,如下所示: End of explanation ABuKLUtil.resample_close_mean(kl_dict) Explanation: 可以看到买入因子慢线的值从20到100不等,快线的值也从2到30不等。 策略中动态计算快线的策略和上述的方式相同,动态自适应慢线值主要依据重采样周期内的振幅值来确定,在第‘第10节 比特币, 莱特币的回测’中已经使用过resample_close_mean统计重采样周期内的振幅值,如下: End of explanation
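The golden-cross / death-cross rule that AbuDoubleMaBuy and AbuDoubleMaSell encapsulate above can be sketched with a few lines of plain pandas, independent of the abupy sandbox data. The random-walk price series below is a made-up stand-in for a kline close column, so the signal counts printed are only illustrative.

import numpy as np
import pandas as pd

np.random.seed(0)
close = pd.Series(100 + np.cumsum(np.random.randn(500)))  # stand-in for a kline close column

fast, slow = 5, 60
ma_fast = close.rolling(fast).mean()
ma_slow = close.rolling(slow).mean()

above = ma_fast > ma_slow
prev_above = above.shift(1).fillna(False).astype(bool)
golden_cross = above & ~prev_above   # fast line crosses above slow line: buy signal
death_cross = ~above & prev_above    # fast line crosses below slow line: sell signal
print('golden crosses:', int(golden_cross.sum()), 'death crosses:', int(death_cross.sum()))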
12,985
Given the following text description, write Python code to implement the functionality described below step by step Description: PCA Python vs R Originally, R was used to calculate PCA using both princomp and prcomp. However, rpy2 stopped was intorducing some issues on the galaxy server. I decided to switch the calculation over to a pure python solution. scikit-learn has a PCA package which we can used, but it only does SVD and matches the output of prcomp with its default values. Here I am testing and figuing out how to output the different values. Step1: Import Example Data Step2: Use R to calculate PCA Mi has looked at this already, but wanted to put the R example here to be complete. Here are the two R methods to output PCA Step3: Use Python to calculate PCA scikit-learn has a PCA package that we will use. It uses the SVD method, so results match the prcomp from R. http Step4: I compared these results with prcomp and they are identical, note that the python version formats the data in scientific notation. Build output tables that match the original PCA script Build comment block At the top of each output file, the original R version includes the standard deviation and the proportion of variance explained. I want to first build this block.
Python Code: import pandas as pd import numpy as np from sklearn.decomposition import PCA Explanation: PCA Python vs R Originally, R was used to calculate PCA using both princomp and prcomp. However, rpy2 stopped was intorducing some issues on the galaxy server. I decided to switch the calculation over to a pure python solution. scikit-learn has a PCA package which we can used, but it only does SVD and matches the output of prcomp with its default values. Here I am testing and figuing out how to output the different values. End of explanation dat = pd.read_table('../example_data/ST000015_log.tsv') dat.set_index('Name', inplace=True) dat[:3] Explanation: Import Example Data End of explanation %%R -i dat # First method uses princomp to calulate PCA using eigenvalues and eigenvectors pr = princomp(dat) #str(pr) loadings = pr$loadings scores = pr$scores #summary(pr) %%R -i dat pr = prcomp(dat) #str(pr) loadings = pr$rotation scores = pr$x sd = pr$sdev #summary(pr) Explanation: Use R to calculate PCA Mi has looked at this already, but wanted to put the R example here to be complete. Here are the two R methods to output PCA End of explanation # Initiate PCA class pca = PCA() # Fit the model and transform data scores = pca.fit_transform(dat) # Get loadings loadings = pca.components_ # R also outputs the following in their summaries sd = loadings.std(axis=0) propVar = pca.explained_variance_ratio_ cumPropVar = propVar.cumsum() Explanation: Use Python to calculate PCA scikit-learn has a PCA package that we will use. It uses the SVD method, so results match the prcomp from R. http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html Generate PCA with default settings End of explanation # Labels used for the comment block labels = np.array(['#Std. deviation', '#Proportion of variance explained', '#Cumulative proportion of variance explained']) # Stack the data into a matrix data = np.vstack([sd, propVar, cumPropVar]) # Add the labels to the first position in the matrix block = np.column_stack([labels, data]) # Create header header = np.array(['Comp{}'.format(x+1) for x in range(loadings.shape[1])]) compoundIndex = np.hstack([dat.index.name, dat.index]) sampleIndex = np.hstack(['sampleID', dat.columns]) # Create loadings output loadHead = np.vstack([header, loadings]) loadIndex = np.column_stack([sampleIndex, loadHead]) loadOut = np.vstack([block, loadIndex]) # Create scores output scoreHead = np.vstack([header, scores]) scoreIndex = np.column_stack([compoundIndex, scoreHead]) scoreOut = np.vstack([block, scoreIndex]) np.savetxt('/home/jfear/tmp/dan.tsv', loadOut, fmt="%s", delimiter='\t') bob = pd.DataFrame(loadOut) Explanation: I compared these results with prcomp and they are identical, note that the python version formats the data in scientific notation. Build output tables that match the original PCA script Build comment block At the top of each output file, the original R version includes the standard deviation and the proportion of variance explained. I want to first build this block. End of explanation
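A compact way to double-check the claim that sklearn's SVD-based PCA lines up with R's prcomp is to compare the component standard deviations directly: prcomp's sdev equals the standard deviation of the scores, which is also the square root of sklearn's explained_variance_. The small random matrix below is only a stand-in for the example table loaded above.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(20, 4).dot(rng.randn(4, 4))   # rows = samples, columns = features

pca = PCA()
scores = pca.fit_transform(X)

sdev = scores.std(axis=0, ddof=1)   # the quantity prcomp reports as sdev
print(np.allclose(sdev, np.sqrt(pca.explained_variance_)))                     # True
print(np.allclose(sdev**2 / (sdev**2).sum(), pca.explained_variance_ratio_))   # True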
12,986
Given the following text description, write Python code to implement the functionality described below step by step Description: <hr> <h1>Detecting Abnormalities in Mammograms</h1> <p>Jay Narhan</p> May 2017 Screening for breast cancer will often make use of mammography as the primary imaging modality for early detection efforts. However identifying abnormalities is a difficult problem due to large variability in the appearance of normal and abnormal tissue in mammograms and due to the subtlety associated with abnormal manifestations. Indeed, the interpretation of mammography imaging is still a manual process that is prone to human error. The application of automated classification of mammography is <a href="http Step1: <h2>Reproducible Research</h2> Step2: Class Balancing Here - I look at a modified version of SMOTE, growing the under-represented class via synthetic augmentation, until there is a balance among the categories Step3: Create the Training and Test Datasets Step4: <h2>Support Vector Machine Model</h2> Step6: <h2>CNN Modelling Using VGG16 in Transfer Learning</h2> Step7: <h2>Core CNN Modelling</h2> Prep and package the data for Keras processing Step8: Heavy Regularization
Python Code: import os import sys import time import numpy as np from tqdm import tqdm import sklearn.metrics as skm from sklearn import metrics from sklearn.svm import SVC from sklearn.model_selection import train_test_split from skimage import color import keras.callbacks as cb import keras.utils.np_utils as np_utils from keras import applications from keras import regularizers from keras.models import Sequential from keras.constraints import maxnorm from keras.preprocessing.image import ImageDataGenerator from keras.layers.convolutional import Convolution2D, MaxPooling2D from keras.layers import Activation, Dense, Dropout, Flatten, GaussianNoise from matplotlib import pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = (10,10) np.set_printoptions(precision=2) sys.path.insert(0, '../helper_modules/') import jn_bc_helper as bc Explanation: <hr> <h1>Detecting Abnormalities in Mammograms</h1> <p>Jay Narhan</p> May 2017 Screening for breast cancer will often make use of mammography as the primary imaging modality for early detection efforts. However identifying abnormalities is a difficult problem due to large variability in the appearance of normal and abnormal tissue in mammograms and due to the subtlety associated with abnormal manifestations. Indeed, the interpretation of mammography imaging is still a manual process that is prone to human error. The application of automated classification of mammography is <a href="http://www.sciencedirect.com/science/article/pii/S0169260715300110">described</a> as an unsolved problem. As such, this workbook aims to extend research into how machine learning technologies may supplement the <b>detection efforts</b> of radiologists. Presented in this workbook are a number of models that have been developed to classify breast tissue imaging as either normal or abnormal, where abnormality contains benign and/or malignant lesions (masses or calcifications). A number of preprocessing steps have been performed before application of these models. These steps include thresholding and segmentation of hardware artifacts from breast tissue. The preprocessing also includes the generation of difference images for every bilateral pairings of CC and MLO mammogram views for each patient in the data set. The data being used consists of these differenced images. The workbook also leverages a modified version of the Synthetic Minority Over-sampling Technique (SMOTE), which looks to balance under-represented classes in the differenced dataset through creating synthetic minority cases via image augmentation (e.g. rotations, vertical and horizontal pixel shifts). In detection cases, this has a minor improvement in performance metrics. In the diagnosis workbook, it has a major impact on improvements. 
<hr> End of explanation %%python import os os.system('python -V') os.system('python ../helper_modules/Package_Versions.py') SEED = 7 np.random.seed(SEED) CURR_DIR = os.getcwd() DATA_DIR = '/Users/jnarhan/Dropbox/Breast_Cancer_Data/Data_Differenced/ALL_IMGS/' AUG_DIR = '/Users/jnarhan/Dropbox/Breast_Cancer_Data/Data_Differenced/AUG_DETECT_IMGS/' meta_file = '../../Meta_Data_Files/meta_data_detection.csv' PATHO_INX = 3 # Column number of detected label in meta_file FILE_INX = 1 # Column number of File name in meta_file meta_data, cls_cnts = tqdm( bc.load_meta(meta_file, patho_idx=PATHO_INX, file_idx=FILE_INX, balanceByRemoval=False, verbose=True) ) bc.pprint('Loading data') cats = bc.bcLabels(['normal', 'abnormal']) # For smaller images supply tuple argument for a parameter 'imgResize': # X_data, Y_data = bc.load_data(meta_data, DATA_DIR, cats, imgResize=(150,150)) X_data, Y_data = tqdm( bc.load_data(meta_data, DATA_DIR, cats) ) Explanation: <h2>Reproducible Research</h2> End of explanation datagen = ImageDataGenerator(rotation_range=5, width_shift_range=.01, height_shift_range=0.01, data_format='channels_first') X_data, Y_data = bc.balanceViaSmote(cls_cnts, meta_data, DATA_DIR, AUG_DIR, cats, datagen, X_data, Y_data, seed=SEED, verbose=True) Explanation: Class Balancing Here - I look at a modified version of SMOTE, growing the under-represented class via synthetic augmentation, until there is a balance among the categories: End of explanation X_train, X_test, Y_train, Y_test = train_test_split(X_data, Y_data, test_size=0.25, random_state=SEED, stratify=zip(*Y_data)[0]) print 'Size of X_train: {:>5}'.format(len(X_train)) print 'Size of X_test: {:>5}'.format(len(X_test)) print 'Size of Y_train: {:>5}'.format(len(Y_train)) print 'Size of Y_test: {:>5}'.format(len(Y_test)) print X_train.shape print X_test.shape print Y_train.shape print Y_test.shape data = [X_train, X_test, Y_train, Y_test] Explanation: Create the Training and Test Datasets End of explanation X_train_svm = X_train.reshape( (X_train.shape[0], -1)) X_test_svm = X_test.reshape( (X_test.shape[0], -1)) SVM_model = SVC(gamma=0.001) SVM_model.fit( X_train_svm, Y_train) predictOutput = SVM_model.predict(X_test_svm) svm_acc = metrics.accuracy_score(y_true=Y_test, y_pred=predictOutput) print 'SVM Accuracy: {: >7.2f}%'.format(svm_acc * 100) print 'SVM Error: {: >10.2f}%'.format(100 - svm_acc * 100) svm_matrix = skm.confusion_matrix(y_true=Y_test, y_pred=predictOutput) numBC = bc.reverseDict(cats) class_names = numBC.values() plt.figure(figsize=(8,6)) bc.plot_confusion_matrix(svm_matrix, classes=class_names, normalize=True, title='SVM Normalized Confusion Matrix Using Differencing \n') plt.tight_layout() plt.savefig('../../figures/jn_SVM_Detect_CM_20170530.png', dpi=100) plt.figure(figsize=(8,6)) bc.plot_confusion_matrix(svm_matrix, classes=class_names, normalize=False, title='SVM Raw Confusion Matrix Using Differencing \n') plt.tight_layout() bc.cat_stats(svm_matrix) Explanation: <h2>Support Vector Machine Model</h2> End of explanation def VGG_Prep(img_data): :param img_data: training or test images of shape [#images, height, width] :return: the array transformed to the correct shape for the VGG network shape = [#images, height, width, 3] transforms to rgb and reshapes images = np.zeros([len(img_data), img_data.shape[1], img_data.shape[2], 3]) for i in range(0, len(img_data)): im = (img_data[i] * 255) # Original imagenet images were not rescaled im = color.gray2rgb(im) images[i] = im return(images) def vgg16_bottleneck(data, modelPath, 
fn_train_feats, fn_train_lbls, fn_test_feats, fn_test_lbls): # Loading data X_train, X_test, Y_train, Y_test = data print('Preparing the Training Data for the VGG_16 Model.') X_train = VGG_Prep(X_train) print('Preparing the Test Data for the VGG_16 Model') X_test = VGG_Prep(X_test) print('Loading the VGG_16 Model') # "model" excludes top layer of VGG16: model = applications.VGG16(include_top=False, weights='imagenet') # Generating the bottleneck features for the training data print('Evaluating the VGG_16 Model on the Training Data') bottleneck_features_train = model.predict(X_train) # Saving the bottleneck features for the training data featuresTrain = os.path.join(modelPath, fn_train_feats) labelsTrain = os.path.join(modelPath, fn_train_lbls) print('Saving the Training Data Bottleneck Features.') np.save(open(featuresTrain, 'wb'), bottleneck_features_train) np.save(open(labelsTrain, 'wb'), Y_train) # Generating the bottleneck features for the test data print('Evaluating the VGG_16 Model on the Test Data') bottleneck_features_test = model.predict(X_test) # Saving the bottleneck features for the test data featuresTest = os.path.join(modelPath, fn_test_feats) labelsTest = os.path.join(modelPath, fn_test_lbls) print('Saving the Test Data Bottleneck Feaures.') np.save(open(featuresTest, 'wb'), bottleneck_features_test) np.save(open(labelsTest, 'wb'), Y_test) # Locations for the bottleneck and labels files that we need train_bottleneck = '2Class_VGG16_bottleneck_features_train.npy' train_labels = '2Class_VGG16_labels_train.npy' test_bottleneck = '2Class_VGG16_bottleneck_features_test.npy' test_labels = '2Class_VGG16_labels_test.npy' modelPath = os.getcwd() top_model_weights_path = './weights/' np.random.seed(SEED) vgg16_bottleneck(data, modelPath, train_bottleneck, train_labels, test_bottleneck, test_labels) def train_top_model(train_feats, train_lab, test_feats, test_lab, model_path, model_save, epoch = 50, batch = 64): start_time = time.time() train_bottleneck = os.path.join(model_path, train_feats) train_labels = os.path.join(model_path, train_lab) test_bottleneck = os.path.join(model_path, test_feats) test_labels = os.path.join(model_path, test_lab) history = bc.LossHistory() X_train = np.load(train_bottleneck) Y_train = np.load(train_labels) Y_train = np_utils.to_categorical(Y_train, num_classes=2) X_test = np.load(test_bottleneck) Y_test = np.load(test_labels) Y_test = np_utils.to_categorical(Y_test, num_classes=2) model = Sequential() model.add(Flatten(input_shape=X_train.shape[1:])) model.add( Dropout(0.7)) model.add( Dense(256, activation='relu', kernel_constraint= maxnorm(3.)) ) model.add( Dropout(0.5)) # Softmax for probabilities for each class at the output layer model.add( Dense(2, activation='softmax')) model.compile(optimizer='rmsprop', # adadelta loss='binary_crossentropy', metrics=['accuracy']) model.fit(X_train, Y_train, epochs=epoch, batch_size=batch, callbacks=[history], validation_data=(X_test, Y_test), verbose=2) print "Training duration : {0}".format(time.time() - start_time) score = model.evaluate(X_test, Y_test, batch_size=16, verbose=2) print "Network's test score [loss, accuracy]: {0}".format(score) print 'CNN Error: {:.2f}%'.format(100 - score[1] * 100) bc.save_model(model_save, model, "jn_VGG16_Detection_top_weights.h5") return model, history.losses, history.acc, score np.random.seed(SEED) (trans_model, loss_cnn, acc_cnn, test_score_cnn) = train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels, 
model_path=modelPath, model_save=top_model_weights_path, epoch=100) plt.figure(figsize=(10,10)) bc.plot_losses(loss_cnn, acc_cnn) plt.savefig('../../figures/epoch_figures/jn_Transfer_Detection_Learning_20170530.png', dpi=100) print 'Transfer Learning CNN Accuracy: {: >7.2f}%'.format(test_score_cnn[1] * 100) print 'Transfer Learning CNN Error: {: >10.2f}%'.format(100 - test_score_cnn[1] * 100) predictOutput = bc.predict(trans_model, np.load(test_bottleneck)) trans_matrix = skm.confusion_matrix(y_true=Y_test, y_pred=predictOutput) plt.figure(figsize=(8,6)) bc.plot_confusion_matrix(trans_matrix, classes=class_names, normalize=True, title='Transfer CNN Normalized Confusion Matrix Using Differencing \n') plt.tight_layout() plt.savefig('../../figures/jn_Transfer_Detection_CM_20170530.png', dpi=100) plt.figure(figsize=(8,6)) bc.plot_confusion_matrix(trans_matrix, classes=class_names, normalize=False, title='Transfer CNN Raw Confusion Matrix Using Differencing \n') plt.tight_layout() bc.cat_stats(trans_matrix) Explanation: <h2>CNN Modelling Using VGG16 in Transfer Learning</h2> End of explanation data = [X_train, X_test, Y_train, Y_test] X_train, X_test, Y_train, Y_test = bc.prep_data(data, cats) data = [X_train, X_test, Y_train, Y_test] print X_train.shape print X_test.shape print Y_train.shape print Y_test.shape Explanation: <h2>Core CNN Modelling</h2> Prep and package the data for Keras processing: End of explanation def diff_model_v7_reg(numClasses, input_shape=(3, 150,150), add_noise=False, noise=0.01, verbose=False): model = Sequential() if (add_noise): model.add( GaussianNoise(noise, input_shape=input_shape)) model.add( Convolution2D(filters=16, kernel_size=(5,5), data_format='channels_first', padding='same', activation='relu')) else: model.add( Convolution2D(filters=16, kernel_size=(5,5), data_format='channels_first', padding='same', activation='relu', input_shape=input_shape)) model.add( Dropout(0.7)) model.add( Convolution2D(filters=32, kernel_size=(3,3), data_format='channels_first', padding='same', activation='relu')) model.add( MaxPooling2D(pool_size= (2,2), data_format='channels_first')) model.add( Dropout(0.4)) model.add( Convolution2D(filters=32, kernel_size=(3,3), data_format='channels_first', activation='relu')) model.add( Convolution2D(filters=64, kernel_size=(3,3), data_format='channels_first', padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.01))) model.add( MaxPooling2D(pool_size= (2,2), data_format='channels_first')) model.add( Convolution2D(filters=64, kernel_size=(3,3), data_format='channels_first', activation='relu', kernel_regularizer=regularizers.l2(0.01))) model.add( Dropout(0.4)) model.add( Convolution2D(filters=128, kernel_size=(3,3), data_format='channels_first', padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.01))) model.add( MaxPooling2D(pool_size= (2,2), data_format='channels_first')) model.add( Convolution2D(filters=128, kernel_size=(3,3), data_format='channels_first', activation='relu', kernel_regularizer=regularizers.l2(0.01))) model.add(Dropout(0.4)) model.add( Flatten()) model.add( Dense(128, activation='relu', kernel_constraint= maxnorm(3.)) ) model.add( Dropout(0.4)) model.add( Dense(64, activation='relu', kernel_constraint= maxnorm(3.)) ) model.add( Dropout(0.4)) # Softmax for probabilities for each class at the output layer model.add( Dense(numClasses, activation='softmax')) if verbose: print( model.summary() ) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) return model 
diff_model7_noise_reg = diff_model_v7_reg(len(cats), input_shape=(X_train.shape[1], X_train.shape[2], X_train.shape[3]), add_noise=True, verbose=True) np.random.seed(SEED) (cnn_model, loss_cnn, acc_cnn, test_score_cnn) = bc.run_network(model=diff_model7_noise_reg, data=data, epochs=50, batch=64) plt.figure(figsize=(10,10)) bc.plot_losses(loss_cnn, acc_cnn) plt.savefig('../../figures/epoch_figures/jn_Core_CNN_Detection_Learning_20170530.png', dpi=100) bc.save_model(dir_path='./weights/', model=cnn_model, name='jn_Core_CNN_Detection_20170530') print 'Core CNN Accuracy: {: >7.2f}%'.format(test_score_cnn[1] * 100) print 'Core CNN Error: {: >10.2f}%'.format(100 - test_score_cnn[1] * 100) predictOutput = bc.predict(cnn_model, X_test) cnn_matrix = skm.confusion_matrix(y_true=[val.argmax() for val in Y_test], y_pred=predictOutput) plt.figure(figsize=(8,6)) bc.plot_confusion_matrix(cnn_matrix, classes=class_names, normalize=True, title='CNN Normalized Confusion Matrix Using Differencing \n') plt.tight_layout() plt.savefig('../../figures/jn_Core_CNN_Detection_CM_20170530.png', dpi=100) plt.figure(figsize=(8,6)) bc.plot_confusion_matrix(cnn_matrix, classes=class_names, normalize=False, title='CNN Raw Confusion Matrix Using Differencing \n') plt.tight_layout() bc.cat_stats(cnn_matrix) Explanation: Heavy Regularization End of explanation
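bc.balanceViaSmote above lives in the author's helper module, so as a rough stand-in for the same idea the sketch below tops up a minority class with augmented copies drawn from a Keras ImageDataGenerator until the counts match. This is a hedged sketch (new function name, channels-first arrays as in the datagen above), not the helper's actual implementation.

import numpy as np

def oversample_minority(X, y, minority_label, target_count, datagen, seed=7):
    # datagen is a keras.preprocessing.image.ImageDataGenerator (e.g. the one defined above);
    # generate augmented minority-class images until that class reaches target_count examples.
    X_min = X[y == minority_label]
    needed = target_count - len(X_min)
    if needed <= 0:
        return X, y
    flow = datagen.flow(X_min, batch_size=min(32, len(X_min)), seed=seed)
    batches = []
    while sum(len(b) for b in batches) < needed:
        batches.append(next(flow))
    X_aug = np.concatenate(batches)[:needed]
    y_aug = np.full(needed, minority_label, dtype=y.dtype)
    return np.concatenate([X, X_aug]), np.concatenate([y, y_aug])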
12,987
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: how to split a dataset into training and testing sets
Python Code:: from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(ds.data, ds.target, test_size = 0.20)
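When the target is categorical, as in most classification tasks, passing stratify keeps the class proportions the same in both splits; load_iris below is just an example stand-in for the ds object referenced above.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

ds = load_iris()
x_train, x_test, y_train, y_test = train_test_split(
    ds.data, ds.target, test_size=0.20, random_state=42, stratify=ds.target)
print(x_train.shape, x_test.shape)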
12,988
Given the following text description, write Python code to implement the functionality described below step by step Description: Laboratory 02 Requirements For the second part of the exercises you will need the wikipedia package. On Windows machines, use the following command in the Anaconda Prompt (Start --&gt; Anaconda --&gt; Anaconda Prompt) Step1: 1.2 Define a function that takes a sequence and an integer $k$ as its input and returns the $k$ largest element. Do not use the built-in max function. Do not change the original sequence. If $k$ is not specified return one element in a list. Step2: *1.3 Add an optional key argument that works analogously to the built-in sorted's key argument. Define a function that takes a matrix as an input represented as a list of lists (you can assume that the input is a valid matrix). Return its transpose without changing the original matrix. Step3: 2.1 Define a function that takes a string as its input and returns a dictionary with the character frequencies. Step4: 2.2 Add an optional skip_symbols to the char_freq function. skip_symbols is the set of symbols that should be excluded from the frequence dictionary. If this argument is not specified, the function should include every symbol. Step5: 2.2 Define a function that computes word frequencies in a text. Step6: 2.3 Define a function that counts the uppercase letters in a string. Step7: 2.4 Define a function that takes two strings and decides whether they are anagrams. A string is an anagram of another string if its letters can be rearranged so that it equals the other string. For example Step8: 2.5. Define a sentence splitter function that takes a string and splits it into a list of sentences. Sentences end with . and the new sentence must start with a whitespace (str.isspace) or be the end of the string. See the examples below. Step9: Wikipedia module The following exercises use the wikipedia package. The basic usage is illustrated below. The documentation is available here. Searching for pages Step10: Downloading an article Step11: The content attribute contains the full text Step12: By default the module downloads the English Wikipedia. The language can be changed the following way Step13: 3.0 Change the language back to English and test the package with a few other pages. Step14: 3.1 Download 4-5 arbitrary pages from the English Wikipedia (they should exceed 100000 characters combined) and compute the word frequencies using your previously defined function(s). Print the most common 20 words in the following format (the example is not the correct answer) Step15: 3.2 Repeat the same exercise for your native language if it denotes word boundaries with spaces. If it doesn't choose an arbitrary language other than English. Step16: 3.3 Define a function that takes a string and returns its bigram frequencies as a dictionary. Character bigrams are pairs of subsequent characters. For example word apple contains the following bigrams Step17: 3.4 Using your previous English collection compute bigram frequencies. What are the 10 most common and 10 least common bigrams? Step18: Most common bigrams Step19: Least common bigrams Step20: *3.5 Define a function that takes two parameters Step21: 3.6 Compute the same statistics for your native language.
Python Code: def is_symmetric(l): for i in range(len(l) // 2): if l[i] != l[len(l)-i-1]: return False return True # idiomatic solution def is_symmetric(l): return all(l[i] == l[len(l)-i-1] for i in range(len(l) // 2)) assert(is_symmetric([1]) == True) assert(is_symmetric([]) == True) assert(is_symmetric([1, 2, 3, 1]) == False) assert(is_symmetric([1, "foo", "bar", "foo", 1]) == True) assert(is_symmetric("abcba") == True) Explanation: Laboratory 02 Requirements For the second part of the exercises you will need the wikipedia package. On Windows machines, use the following command in the Anaconda Prompt (Start --&gt; Anaconda --&gt; Anaconda Prompt): conda install -c conda-forge wikipedia This command should work with other Anaconda environments (OSX, Linux). If you are using virtualenv directly instead of Anaconda, the following command installs it in your virtualenv: pip install wikipedia or sudo pip install wikipedia installs it system-wide. You are encouraged to reuse functions that you defined in earlier exercises. 1.1 Define a function that takes a sequence as its input and returns whether the sequence is symmetric. A sequence is symmetric if it is equal to its reverse. End of explanation def k_largest(l, k=1): return sorted(l)[-k:] l = [-1, 0, 3, 2] assert(k_largest(l) == [3]) assert(k_largest(l, 2) == [2, 3] or k_largest(l, 2)) Explanation: 1.2 Define a function that takes a sequence and an integer $k$ as its input and returns the $k$ largest element. Do not use the built-in max function. Do not change the original sequence. If $k$ is not specified return one element in a list. End of explanation def transpose(M): Mt = [] for j in range(len(M[0])): Mt.append([]) for i in range(len(M)): Mt[-1].append(M[i][j]) return Mt m1 = [[1, 2, 3], [4, 5, 6]] m2 = [[1, 4], [2, 5], [3, 6]] assert(transpose(m1) == m2) assert(transpose(transpose(m1)) == m1) Explanation: *1.3 Add an optional key argument that works analogously to the built-in sorted's key argument. Define a function that takes a matrix as an input represented as a list of lists (you can assume that the input is a valid matrix). Return its transpose without changing the original matrix. End of explanation def char_freq(s): freq = {} for c in s: if c not in freq: freq[c] = 0 freq[c] += 1 return freq assert(char_freq("aba") == {"a": 2, "b": 1}) Explanation: 2.1 Define a function that takes a string as its input and returns a dictionary with the character frequencies. End of explanation def char_freq_with_skip(s, skip_symbols=None): freq = {} for c in s: if c in skip_symbols: continue if c not in freq: freq[c] = 0 freq[c] += 1 return freq assert(char_freq_with_skip("ab.abc?", skip_symbols=".?") == {"a": 2, "b": 2, "c": 1}) Explanation: 2.2 Add an optional skip_symbols to the char_freq function. skip_symbols is the set of symbols that should be excluded from the frequence dictionary. If this argument is not specified, the function should include every symbol. End of explanation def word_freq(s): freq = {} for word in s.split(): if word not in freq: freq[word] = 0 freq[word] += 1 return freq s = "the green tea and the black tea" assert(word_freq(s) == {"the": 2, "tea": 2, "green": 1, "black": 1, "and": 1}) Explanation: 2.2 Define a function that computes word frequencies in a text. 
End of explanation def count_upper_case(s): cnt = 0 for c in s: if c.isupper(): cnt += 1 return cnt # idiomatic solution def count_upper_case(s): return sum(int(c.isupper()) for c in s) assert(count_upper_case("A") == 1) assert(count_upper_case("abA bcCa") == 2) Explanation: 2.3 Define a function that counts the uppercase letters in a string. End of explanation def anagram(s1, s2): return char_freq(s1) == char_freq(s2) assert(anagram("abc", "bac") == True) assert(anagram("aabb", "abab") == True) assert(anagram("abab", "aaab") == False) Explanation: 2.4 Define a function that takes two strings and decides whether they are anagrams. A string is an anagram of another string if its letters can be rearranged so that it equals the other string. For example: abc -- bac aabb -- abab Counter examples: abc -- aabc abab -- aaab End of explanation def sentence_splitter(s): sentences = [] sent = [] parts = s.split('.') for i, part in enumerate(parts[:-1]): sent.append(part) if len(parts[i+1]) > 0 and parts[i+1][0].isspace(): print(part, sent) sentences.append('.'.join(sent).strip()) sent = [] sentences.append('.'.join(sent).strip()) return sentences assert(sentence_splitter("A.b. acd.") == ['A.b', 'acd']) assert(sentence_splitter("A. b. acd.") == ['A', 'b', 'acd']) Explanation: 2.5. Define a sentence splitter function that takes a string and splits it into a list of sentences. Sentences end with . and the new sentence must start with a whitespace (str.isspace) or be the end of the string. See the examples below. End of explanation import wikipedia results = wikipedia.search("Budapest") results Explanation: Wikipedia module The following exercises use the wikipedia package. The basic usage is illustrated below. The documentation is available here. Searching for pages: End of explanation article = wikipedia.page("Budapest") article.summary[:100] Explanation: Downloading an article: End of explanation type(article.content), len(article.content) Explanation: The content attribute contains the full text: End of explanation wikipedia.set_lang("fr") wikipedia.search("Budapest") fr_article = wikipedia.page("Budapest") fr_article.summary[:100] Explanation: By default the module downloads the English Wikipedia. The language can be changed the following way: End of explanation wikipedia.set_lang("en") Explanation: 3.0 Change the language back to English and test the package with a few other pages. End of explanation wikipedia.set_lang("en") en_content = "" for title in wikipedia.search("Budapest")[:5]: page = wikipedia.page(title) en_content += page.content wp_word_freq = word_freq(en_content) for word, freq in sorted(wp_word_freq.items(), key=lambda x: -x[1])[:20]: print("{}\t{}".format(word, freq)) # print("\n".join("{}\t{}".format(word, freq) for word, freq in sorted(wp_word_freq.items(), key=lambda x: -x[1])[:20])) Explanation: 3.1 Download 4-5 arbitrary pages from the English Wikipedia (they should exceed 100000 characters combined) and compute the word frequencies using your previously defined function(s). Print the most common 20 words in the following format (the example is not the correct answer): unintelligent &lt;TAB&gt; 123456 moribund &lt;TAB&gt; 123451 ... The words and their frequency are separated by TABS and no additional whitespace should be added. 
End of explanation wikipedia.set_lang("hu") hu_content = "" for title in wikipedia.search("Budapest")[:5]: page = wikipedia.page(title) hu_content += page.content wp_word_freq = word_freq(hu_content) for word, freq in sorted(wp_word_freq.items(), key=lambda x: -x[1])[:20]: print("{}\t{}".format(word, freq)) Explanation: 3.2 Repeat the same exercise for your native language if it denotes word boundaries with spaces. If it doesn't choose an arbitrary language other than English. End of explanation from collections import defaultdict def get_char_bigrams(s): bigrams = defaultdict(int) for i in range(len(s) - 1): bigram = s[i:i+2] bigrams[bigram] += 1 return bigrams print(get_char_bigrams("apple")) print(get_char_bigrams("apple apple")) Explanation: 3.3 Define a function that takes a string and returns its bigram frequencies as a dictionary. Character bigrams are pairs of subsequent characters. For example word apple contains the following bigrams: ap, pp, pl, le. They are used for language modeling. End of explanation en_bigrams = get_char_bigrams(en_content) Explanation: 3.4 Using your previous English collection compute bigram frequencies. What are the 10 most common and 10 least common bigrams? End of explanation print("\n".join( "{}\t{}".format(bigram, freq) for bigram, freq in sorted(en_bigrams.items(), key=lambda x: -x[1])[:10] )) Explanation: Most common bigrams: End of explanation print("\n".join( "{}\t{}".format(bigram, freq) for bigram, freq in sorted(en_bigrams.items(), key=lambda x: x[1])[:10] )) Explanation: Least common bigrams: End of explanation # without collections.defaultdict def get_n_grams(text, N): freqs = {} for i in range(len(text) - N + 1): ngram = text[i:i+N] freqs.setdefault(ngram, 0) freqs[ngram] += 1 return freqs for n in range(1, 6): ngram_freqs = get_n_grams(en_content, n) print("The number of unique {}-grams is {}".format(n, len(ngram_freqs))) Explanation: *3.5 Define a function that takes two parameters: a string and an integer N and returns the N-gram frequencies of the string. For $N=2$ the function works the same as in the previous example. Try the function for $N=1..5$. How many unique N-grams are in your collection? End of explanation for n in range(1, 6): ngram_freqs = get_n_grams(hu_content, n) print("The number of unique {}-grams is {}".format(n, len(ngram_freqs))) Explanation: 3.6 Compute the same statistics for your native language. End of explanation
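The hand-rolled frequency counters above (char_freq, word_freq, get_n_grams) can also be written with collections.Counter from the standard library; the helpers below use new names with a _c suffix so they do not shadow the lab's own definitions, and they return plain dictionaries with the same contents.

from collections import Counter

def char_freq_c(s):
    return dict(Counter(s))

def word_freq_c(s):
    return dict(Counter(s.split()))

def n_gram_freq_c(text, n):
    return dict(Counter(text[i:i+n] for i in range(len(text) - n + 1)))

assert char_freq_c("aba") == {"a": 2, "b": 1}
assert word_freq_c("the green tea and the black tea")["tea"] == 2
print(sorted(n_gram_freq_c("apple", 2).items()))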
12,989
Given the following text description, write Python code to implement the functionality described below step by step Description: Tutorial 1 Step1: Spots magic I wrote an %imaris_pull shortcut to pull spots, cells, filaments and surfaces. Typing the following create a spots dictionary with spot objects names as keys and the ISpots objects as values. The spots names are displayed after completion of the magic command. Step2: Spots calculation using numpy functions The spots returned from Imaris are associated with a unique timepoint. This data is returned by sv.GetIndicesT(). Counting the number of instances for each timepoint gives the "number of cells vs time" data we are after. We can do this through a loop, or there is a numpy function we can use to do this. np.unique() counts the unique number of instances in an array Step3: If we want to display the actual timepoints instead of their indexes, there's a way to do this using BridgeLib (timepoints returned in second, starting from 0) Step4: NOTE
Python Code: %reload_ext XTIPython import numpy as np Explanation: Tutorial 1: Number of cells vs time NOTE: This tutorials works with the R18Demo.ims dataset. You will also need to create some spot data. Create a new spot object in Imaris, and just use the defaults until you reach the end of the spots creation wizard. End of explanation %imaris_pull spots sv = spots.values()[0] #We should only have one ISPots object in the dictionary, so let's pull it. Explanation: Spots magic I wrote an %imaris_pull shortcut to pull spots, cells, filaments and surfaces. Typing the following create a spots dictionary with spot objects names as keys and the ISpots objects as values. The spots names are displayed after completion of the magic command. End of explanation tpindexes, nspots = np.unique(sv.GetIndicesT(), return_counts=True) Explanation: Spots calculation using numpy functions The spots returned from Imaris are associated with a unique timepoint. This data is returned by sv.GetIndicesT(). Counting the number of instances for each timepoint gives the "number of cells vs time" data we are after. We can do this through a loop, or there is a numpy function we can use to do this. np.unique() counts the unique number of instances in an array: End of explanation tps = BridgeLib.GetTimepoints(vDataSet,tpindexes) print tps Explanation: If we want to display the actual timepoints instead of their indexes, there's a way to do this using BridgeLib (timepoints returned in second, starting from 0): End of explanation %matplotlib inline import matplotlib import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_axes([1,1,1,1]) ax.plot(tps,nspots) ax.set_xlabel('Time (seconds)') ax.set_ylabel('Number of cells') ax.set_title('Number of spots vs Time') Explanation: NOTE: In this case, the timepoints are separated by exactly 1s, this may not always be the case! Plotting the data End of explanation
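The counting trick at the heart of this tutorial is np.unique(..., return_counts=True); a self-contained check with made-up timepoint indices (no Imaris session needed) looks like this, where indices_t stands in for the array returned by sv.GetIndicesT().

import numpy as np

indices_t = np.array([0, 0, 1, 1, 1, 2, 4, 4])   # fake per-spot timepoint indices
tpindexes, nspots = np.unique(indices_t, return_counts=True)
for t, n in zip(tpindexes, nspots):
    print("timepoint index %d: %d spots" % (t, n))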
12,990
Given the following text description, write Python code to implement the functionality described below step by step Description: Classification on the Titanic Dataset The following example gives an idea about how you could run basic classification using a Gaussian mixture model on the Titanic dataset, using a latent node, continuous variables as well as discrete variables. The example uses cross validation to get a more robust accuracy score across the training and testing data sets. The initial step is our imports, and a bit of code for extracting floor and room number. Step1: The first step is a bit of preprocessing to get the data in the required format. Step2: It's then necessary to attach the thread to the JVM through a pipe created by Jpype (otherwise you get a recursion error message). Step3: There are a few basic utility functions for deciding on the type of the data provided - obviously if you're already aware of the type then it's more accurate to manually specify datatypes. Step4: The structure will look something like the following (as visualised in networkx). Bayes Server does have a UI, so you could save the model that you generate through the API. Step5: Finally, run the code through 3 folds to get an average score from three different models.
Python Code: %matplotlib inline import pandas as pd import numpy as np import re import sys sys.path.append("../../../bayesianpy") import bayesianpy import bayesianpy.visual import logging import os from sklearn.cross_validation import KFold from sklearn.metrics import accuracy_score pattern = re.compile("([A-Z]{1})([0-9]{1,3})") def get_cabin_floor_and_number(cabin): if not isinstance(cabin, str): return "", np.nan cabins = cabin.split(" ") for cabin in cabins: match = re.match(pattern, cabin) if match is not None: floor = match.group(1) number = match.group(2) return floor, number return "", np.nan logger = logging.getLogger() logger.addHandler(logging.StreamHandler()) logger.setLevel(logging.INFO) Explanation: Classification on the Titanic Dataset The following example gives an idea about how you could run basic classification using a Gaussian mixture model on the Titanic dataset, using a latent node, continuous variables as well as discrete variables. The example uses cross validation to get a more robust accuracy score across the training and testing data sets. The initial step is our imports, and a bit of code for extracting floor and room number. End of explanation db_folder = bayesianpy.utils.get_path_to_parent_dir("") titanic = pd.read_csv(os.path.join(db_folder, "data/titanic.csv")) titanic['Floor'], titanic['CabinNumber'] = zip(*titanic.Cabin.map(get_cabin_floor_and_number)) titanic.CabinNumber = titanic.CabinNumber.astype(float) titanic.Floor.replace("", np.nan, inplace=True) # drop variables that vary too much, e.g. with almost every row titanic.drop(['Cabin', 'Ticket', 'Name', 'PassengerId'], inplace=True, axis=1) Explanation: The first step is a bit of preprocessing to get the data in the required format. End of explanation bayesianpy.jni.attach(logger) Explanation: It's then necessary to attach the thread to the JVM through a pipe created by Jpype (otherwise you get a recursion error message). End of explanation auto = bayesianpy.data.AutoType(titanic) network_factory = bayesianpy.network.NetworkFactory(logger) discrete = titanic[list(auto.get_discrete_variables())] continuous = titanic[list(auto.get_continuous_variables())] print("Discrete variables: {}".format(discrete.columns.tolist())) print("Continuous variables: {}".format(continuous.columns.tolist())) Explanation: There are a few basic utility functions for deciding on the type of the data provided - obviously if you're already aware of the type then it's more accurate to manually specify datatypes. End of explanation # write data to the temporary sqllite db with bayesianpy.data.DataSet(titanic, db_folder, logger) as dataset: # Use a standard template, which generally gives good performance mixture_naive_bayes_tpl = bayesianpy.template.MixtureNaiveBayes(logger, discrete=discrete, continuous=continuous) model = bayesianpy.model.NetworkModel( mixture_naive_bayes_tpl.create(network_factory), logger) # result contains a bunch of metrics regarding the training step results = model.train(dataset) layout = bayesianpy.visual.NetworkLayout(results.get_network()) graph = layout.build_graph() pos = layout.fruchterman_reingold_layout(graph) layout.visualise(graph, pos) Explanation: The structure will look something like the following (as visualised in networkx). Bayes Server does have a UI, so you could save the model that you generate through the API. 
End of explanation # write data to the temporary sqllite db with bayesianpy.data.DataSet(titanic, db_folder, logger) as dataset: # Use a standard template, which generally gives good performance mixture_naive_bayes_tpl = bayesianpy.template.MixtureNaiveBayes(logger, discrete=discrete, continuous=continuous) k_folds = 3 kf = KFold(titanic.shape[0], n_folds=k_folds, shuffle=True) score = 0 # use cross validation to try and predict whether the individual survived or not for k, (train_indexes, test_indexes) in enumerate(kf): model = bayesianpy.model.NetworkModel( mixture_naive_bayes_tpl.create(network_factory), logger) # result contains a bunch of metrics regarding the training step model.train(dataset.subset(train_indexes)) # note that we've not 'dropped' the target data anywhere, this will be retracted when it's queried, # by specifying query_options.setQueryEvidenceMode(bayesServerInference().QueryEvidenceMode.RETRACT_QUERY_EVIDENCE) results = model.batch_query(dataset.subset(test_indexes), bayesianpy.model.QueryMostLikelyState("Survived", output_dtype=titanic['Survived'].dtype)) # Each query just appends a column/ columns on to the original dataframe, so results is the same as titanic.iloc[test_indexes], # with (in this case) one additional column called 'Survived_maxlikelihood', joined to the original. score += accuracy_score(y_pred=results['Survived_maxlikelihood'].tolist(), y_true=results['Survived'].tolist()) print("Average score was {}. Baseline accuracy is about 0.61.".format(score / k_folds)) Explanation: Finally, run the code through 3 folds to get an average score from three different models. End of explanation
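The k-fold loop above follows the standard pattern of training on each split and averaging the held-out accuracy; the same scaffolding with a scikit-learn classifier on synthetic data (no Bayes Server or Java bridge required) is sketched below, using the modern sklearn.model_selection API rather than the older cross_validation module imported above.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=600, n_features=8, random_state=7)  # stand-in for the Titanic table

kf = KFold(n_splits=3, shuffle=True, random_state=7)
score = 0.0
for train_idx, test_idx in kf.split(X):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    score += accuracy_score(y_true=y[test_idx], y_pred=clf.predict(X[test_idx]))
print("Average score was {:.3f}".format(score / kf.get_n_splits()))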
12,991
Given the following text description, write Python code to implement the functionality described below step by step Description: FastAI models.validate CUDA Tensor Issue WNixalo – 2018/6/11 I ran into trouble trying to reimplement a CIFAR-10 baseline notebook. The notebook used PyTorch dataloaders fed into a ModelData object constructor. The issue occurred when running the learning rate finder Step1: None of this is an issue if constructing a fast.ai Model Data object via it's constructors (eg Step2: Note Step3: Using a small (10%) random subset of the dataset Step4: 1. With current fastai version Step5: 2. With models.validate modified so that y = VV(y) and y.data is now passed to f(.) in res.append(.) Step6: Making sure test predictions work Step7: Example same as the above, but with a full-size (50,000 element) training set Step8: 3. An example debugging and showing y vs VV(y) vs VV(y).data. accuracy(preds.data, y) throws a TypeError because preds.data is a torch.cuda.FloatTensor and y is a torch.LongTensor. y needs to be a CUDA tensor. accuracy([preds.data, VV(y)) throws another TypeError because VV(y) is a Variable. accuracy([pred.sdata, VV(y).data) works and returns an accuracy value, because y is now a torch.cuda.LongTensor which can be compared to the CUDA tensor preds.data.
Python Code: import torch from fastai.conv_learner import * x = torch.FloatTensor([[[1,1,],[1,1]]]); x VV(x) VV(VV(x)) torch.equal(VV(x), VV(VV(x))) Explanation: FastAI models.validate CUDA Tensor Issue WNixalo – 2018/6/11 I ran into trouble trying to reimplement a CIFAR-10 baseline notebook. The notebook used PyTorch dataloaders fed into a ModelData object constructor. The issue occurred when running the learning rate finder: at the end of its run a TypeError would be thrown. This error came from attempting to compare preds.data and y inside of the metrics.accuracy function which was called in model.validate by the line: res.append([f(preds.data, y) for f in metrics]) where metrics is [accuracy]. On an AWS p2.xlarge (and I assume any GPU) machine, this results in comparing a torch.cuda.FloatTensor (preds.data) to a torch.LongTensor (y), throwing an error. This error did not occur when using an older version of the fast.ai library, available here. The reason is that within model.validate(.), y = VV(y), and y.data is passed into the accuracy metric function. This is the proposed fix. To make sure that recasting a Variable to Variable via VV(.) won't break anything (eg: if a fast.ai dataloader is used, returning .cuda. tensors: End of explanation %matplotlib inline %reload_ext autoreload %autoreload 2 import cifar_utils from fastai.conv_learner import * from torchvision import transforms, datasets torch.backends.cudnn.benchmark = True Explanation: None of this is an issue if constructing a fast.ai Model Data object via it's constructors (eg: md = ImageClassifierData.from_csv(...)) because fast.ai uses it's own dataloaders which automatically place data on the GPU if in use via dataloader.get_tensor(.). The issue arises when PyTorch dataloaders are used, but all low-level details (calculating loss, metrics, etc) are handled internally by the fast.ai library. This notebook shows a demo workflow: triggering the issue, demonstrating the fix, and showing a mini debug walkthrough. For more detailed troubleshooting notes, see the accompanying debugging_notes.txt. 
End of explanation # fastai/imagenet-fast/cifar10/models/ repo from imagenet_fast_cifar_models.wideresnet import wrn_22 stats = (np.array([ 0.4914 , 0.48216, 0.44653]), np.array([ 0.24703, 0.24349, 0.26159])) def get_loaders(bs, num_workers): traindir = str(PATH/'train') valdir = str(PATH/'test') tfms = [transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))] aug_tfms =transforms.Compose([ transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), ] + tfms) train_dataset = datasets.ImageFolder( traindir, aug_tfms) train_loader = torch.utils.data.DataLoader( train_dataset, batch_size=bs, shuffle=True, num_workers=num_workers, pin_memory=True) val_dataset = datasets.ImageFolder(valdir, transforms.Compose(tfms)) val_loader = torch.utils.data.DataLoader( val_dataset, batch_size=bs, shuffle=False, num_workers=num_workers, pin_memory=True) aug_dataset = datasets.ImageFolder(valdir, aug_tfms) aug_loader = torch.utils.data.DataLoader( aug_dataset, batch_size=bs, shuffle=False, num_workers=num_workers, pin_memory=True) return train_loader, val_loader, aug_loader def get_data(bs, num_workers): trn_dl, val_dl, aug_dl = get_loaders(bs, num_workers) data = ModelData(PATH, trn_dl, val_dl) data.aug_dl = aug_dl data.sz=32 return data def get_learner(arch, bs): learn = ConvLearner.from_model_data(arch.cuda(), get_data(bs, num_cpus())) learn.crit = nn.CrossEntropyLoss() learn.metrics = [accuracy] return learn def get_TTA_accuracy(learn): preds, targs = learn.TTA() # combining the predictions across augmented and non augmented inputs preds = 0.6 * preds[0] + 0.4 * preds[1:].sum(0) return accuracy_np(preds, targs) Explanation: Note: the fastai/imagenet-fast repository was cloned, with a symlink imagenet_fast_cifar_models pointing to imagenet-fast/cifar10/models/. This is because the wide-resnet-22 from fast.ai's DAWN Bench submission was used. Any other architecture can be used without going through the trouble to import this. End of explanation PATH = Path("data/cifar10_tmp") # PATH = Path("data/cifar10") # print(cifar_utils.count_files(PATH)) # PATH = cifar_utils.create_cifar_subset(PATH, copydirs=['train','test'], p=0.1) # print(cifar_utils.count_files(PATH)) Explanation: Using a small (10%) random subset of the dataset: End of explanation learn = get_learner(wrn_22(), 512) learn.lr_find(wds=1e-4) learn.sched.plot(n_skip_end=1) Explanation: 1. With current fastai version: End of explanation learn = get_learner(wrn_22(), 256) learn.lr_find(wds=1e-4) learn.sched.plot(n_skip_end=1) Explanation: 2. With models.validate modified so that y = VV(y) and y.data is now passed to f(.) in res.append(.): (note a smaller batch size is used just so a any plot is displayed -- the sample dataset size is a bit too small for useful information with these settings) End of explanation log_preds, _ = learn.TTA(is_test=False) # 'test' dataloader never initialized; using val Explanation: Making sure test predictions work: End of explanation PATH = Path('data/cifar10') learn = get_learner(wrn_22(), 512) learn.lr_find(wds=1e-4) learn.sched.plot(n_skip_end=1) Explanation: Example same as the above, but with a full-size (50,000 element) training set: End of explanation # using the 10% sample dataset learn = get_learner(wrn_22(), 512) learn.lr_find(wds=1e-4) learn.sched.plot(n_skip_end=1) Explanation: 3. An example debugging and showing y vs VV(y) vs VV(y).data. accuracy(preds.data, y) throws a TypeError because preds.data is a torch.cuda.FloatTensor and y is a torch.LongTensor. 
y needs to be a CUDA tensor. accuracy(preds.data, VV(y)) throws another TypeError because VV(y) is a Variable. accuracy(preds.data, VV(y).data) works and returns an accuracy value, because y is now a torch.cuda.LongTensor which can be compared to the CUDA tensor preds.data. End of explanation
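For reference, here is a minimal sketch of the change this notebook proposes inside fastai's model.validate, paraphrased from the description above rather than copied verbatim from the library source (preds, y, res, and metrics refer to the names in the snippet quoted earlier):
y = VV(y)  # recast the target so it lives on the same device as the predictions
res.append([f(preds.data, y.data) for f in metrics])  # CUDA tensor compared with CUDA tensor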
12,992
Given the following text description, write Python code to implement the functionality described below step by step Description: Python_중간발표 (Python midterm presentation), Department of Data Science, M2015228, 조재환 (Jo Jaehwan), 'key : value' I/O practice problem Step1: Previous submission Step2: In my previous submission, the alphabet letters and their Morse codes did not line up when printed. I wanted to display the Morse code in the same order as the input sentence, and after studying a little more I was able to write that code, so I am submitting it once again. Revision
Python Code: drinks={ 'martini' : {'vodka', 'vermouth'}, 'black russian' : {'vodka', 'kahlua'}, 'white russian' : {'cream', 'kahlua', 'vodka'}, 'manhattan' : {'rye', 'vermouth', 'bitters'}, 'screwdriver': {'orange juice', 'vodka'}, 'verorange' : {'orange juice', 'vermouth'}, 'kahlua milk' : {'kahlua', 'milk'}, 'jin tonic' : {'jin', 'tonic water'}, 'mojito' : {'rum', 'lime juice'}, 'cinderella' : {'orange juice', 'lemon juice','pineapple juice'} } inputs = input('what do you want? ') print('Here are some Recipt:') for name, contents in drinks.items(): if inputs in contents: print(name) morse = { '.-':'A','-...':'B','-.-.':'C','-..':'D','.':'E','..-.':'F', '--.':'G','....':'H','..':'I','.---':'J','-.-':'K','.-..':'L', '--':'M','-.':'N','---':'O','.--.':'P','--.-':'Q','.-.':'R', '...':'S','-':'T','..-':'U','...-':'V','.--':'W','-..-':'X', '-.--':'Y','--..':'Z', '':' ' } code = '.... . ... .-.. . . .--. ... . .- .-. .-.. -.--' Explanation: Python_중간발표 데이터사이언스학과 M2015228 조재환 'key : value' I/O 연습문제 End of explanation senten = input("What's going on? ") senten = ".".join(senten) senten = senten.split('.') print(senten) for dot, capi in morse.items(): if capi in senten: print(dot,end=" ") #dotted = sorted(morse.get(dot)) #print(sorted(morse.get(dot),reverse=True), end=" ") print(morse.get(dot),end=" ") Explanation: 이전 제출물 End of explanation senten = input("What's going on? ") # 모스부호로 나타낼 문장을 입력 senten = ".".join(senten) # 모스부호의 형태가 '알파벳' : '모스부호'로 되어있어서 입력받은 문장을 # 알파벳 단위로 끊어주기 위해 "."join으로 각 단어 사이에 .을 넣습니다. senten = senten.split('.') # .을 기준으로 단어들을 모두 끊어 줍니다. print(senten) for word in senten: # str형태를 for문으로 출력하면 값하나가 그대로 나옵니다. for dot, capi in morse.items(): # 모스부호의 dictionary를 가져옵니다. if word in capi: # senten안의 word가 모스부호의 알파벳과 같으면 print(capi,"=",dot, end=", ") # 알파벳에 해당하는 모스부호를 출력합니다. senten = input("What's going on? ") # 모스부호로 나타낼 문장을 입력 print(senten) for word in senten: # str형태를 for문으로 출력하면 값하나가 그대로 나옵니다. for dot, capi in morse.items(): # 모스부호의 dictionary를 가져옵니다. if word in capi: # senten안의 word가 모스부호의 알파벳과 같으면 print(capi,"=",dot, end=", ") # 알파벳에 해당하는 모스부호를 출력합니다. sentens = 'IM LATE' sentens[0] morse.items() Explanation: 이전에 제출한 것은 출력하면 알파벳과 모스부호가 정렬되지 않았습니다. 입력한 문장대로 모스부호를 나타내고 싶었었는데 조금 더 공부하다보니 코드를 만들 수 있어서 다시 한번 제출합니다. 수정 End of explanation
12,993
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction This is the central location where all variables should be defined, and any relationships between them should be given. Having all definitions collected in one file is useful because other files can reference this one, so there is no need for duplication, and less room for mistakes. In particular, the relationships between variables are defined only here, so we don't need to reimplement those relationships. And if you do ever find a mistake, you only have to fix it in one place, then just re-run the other notebooks. There are two main classes of variables Step1: Fundamental variables Only the most basic variables should be defined here. Note that we will be using (quaternion) logarithmic rotors to describe the orientations of the spins, and the orientation and velocity of the binary itself. This allows us to reduce the number of constraints in the system, and only evolve the minimal number of equations. For example, the spins are constant, so only two degrees of freedom are needed. These can be expressed without ambiguities or singularities in the form of logarithmic rotors Step2: Derived variables Any variable that can be derived from the variables above should be put in this section. These variables should probably be left in arbitrary form, unless a particular simplification is desired. The substitutions dictionary should map from the general names and their definitions in terms of basic variables. In numerical codes, their values can be calculated once per time step and then stored, so that the values do not have to be re-calculated every time they appear in an expression. Various common combinations of the two masses Step3: The system's vector basis is given by $(\hat{\ell}, \hat{n}, \hat{\lambda})$, and will be computed by the code in terms of the fundamental logarithmic rotors defined above. Here, we give all the substitutions that will be needed in the code. Step4: Various spin components and combinations Step5: Other functions of the angular velocity that find frequent use
Python Code: # Make sure division of integers does not round to the nearest integer from __future__ import division # Make everything in python's symbolic math package available from sympy import * # Make sure sympy functions are used in preference to numpy import sympy # Make sympy. constructions available from sympy import Rational as frac # Rename for similarity to latex from sympy import log as ln # Print symbolic expressions nicely init_printing() # We'll use the numpy `array` object for vectors from numpy import array, cross, dot # We'll use a custom object to keep track of variables from Utilities.PNObjects import PNCollection PNVariables = PNCollection() Explanation: Introduction This is the central location where all variables should be defined, and any relationships between them should be given. Having all definitions collected in one file is useful because other files can reference this one, so there is no need for duplication, and less room for mistakes. In particular, the relationships between variables are defined only here, so we don't need to reimplement those relationships. And if you do ever find a mistake, you only have to fix it in one place, then just re-run the other notebooks. There are two main classes of variables: Fundamental variables Derived variables The distinction is only required for code output, to ensure that everything gets calculated correctly. The PN equations you write down and manipulate can be in terms of any of these variables. The fundamental variables that go into PN equations are things like the mass, spins $\chi_1$, and $\chi_2$, orbital angular velocity $\hat{\ell}$, and unit separation vector $\hat{n}$. We also include the tidal-coupling parameters in this list. Also, note that only $M_1$ is included. This is because the total mass is always assumed to be 1, so $M_2 = 1-M_1$. The derived variables are further distinguished by whether they will need to be recalculated at each time step or not. For example, though we define the spins fundamentally as $\chi_1$ and $\chi_2$, we can also define derived spins $S$ and $\Sigma$, which need to be recalculated if the system is precessing. On the other hand, the masses are constant and fundamentally defined by $M_1$, so $M_2$ and $\nu$ only need to be calculated from that information once. Set up python End of explanation # Dimensionful quantities, just in case anybody uses them... PNVariables.AddBasicConstants('G, c') # Masses of objects 1 and 2. PNVariables.AddBasicConstants('M1') PNVariables.AddBasicConstants('M2') # Angular speed of separation vector PNVariables.AddBasicVariables('v', positive=True) # Tidal deformabilities, in units where the total mass is 1 PNVariables.AddBasicConstants('lambda1, lambda2') # Spin vectors (assumed to be constant) PNVariables.AddBasicVariables('chi1_x, chi1_y, chi1_z') PNVariables.AddBasicVariables('chi2_x, chi2_y, chi2_z') # Orbital angular-velocity unit vector ("Newtonian" angular momentum direction) PNVariables.AddBasicVariables('ellHat_x, ellHat_y, ellHat_z') # Orbital separation unit vector PNVariables.AddBasicVariables('nHat_x, nHat_y, nHat_z') Explanation: Fundamental variables Only the most basic variables should be defined here. Note that we will be using (quaternion) logarithmic rotors to describe the orientations of the spins, and the orientation and velocity of the binary itself. This allows us to reduce the number of constraints in the system, and only evolve the minimal number of equations. 
For example, the spins are constant, so only two degrees of freedom are needed. These can be expressed without ambiguities or singularities in the form of logarithmic rotors: $\mathfrak{r}_1 = \mathfrak{r}_{\chi_1 x} \hat{x} + \mathfrak{r}_{\chi_1 y} \hat{y}$, so that $\vec{\chi}_1 = \lvert \chi_1 \rvert\, e^{\mathfrak{r}_1}\, \hat{z}\, e^{-\mathfrak{r}_1}$. This may look complicated, but it performs very well numerically. We will still be able to write and manipulate the PN equations directly in terms of familiar quantities like $\vec{S}_1 \cdot \hat{\ell}$, etc., but the fundamental objects will be the rotors, which means that the substitutions made for code output will automatically be in terms of the rotors. End of explanation
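As a brief aside on why two components suffice: conjugation by the rotor, $e^{\mathfrak{r}_1}\, \hat{z}\, e^{-\mathfrak{r}_1}$, rotates $\hat{z}$ about the axis $\mathfrak{r}_1/\lvert \mathfrak{r}_1 \rvert$ through the angle $2\lvert \mathfrak{r}_1 \rvert$, so the pair $(\mathfrak{r}_{\chi_1 x}, \mathfrak{r}_{\chi_1 y})$ pins down the spin direction, while the constant magnitude $\lvert \vec{\chi}_1 \rvert$ is carried separately.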
End of explanation PNVariables.AddDerivedVariable('chiVec1', array([chi1_x, chi1_y, chi1_z]), datatype='std::vector<double>') PNVariables.AddDerivedVariable('chiVec2', array([chi2_x, chi2_y, chi2_z]), datatype='std::vector<double>') PNVariables.AddDerivedVariable('chi1Mag', sqrt(chi1_x**2 + chi1_y**2 + chi1_z**2)) PNVariables.AddDerivedVariable('chi2Mag', sqrt(chi2_x**2 + chi2_y**2 + chi2_z**2)) PNVariables.AddDerivedConstant('chi1chi1', dot(chiVec1.substitution, chiVec1.substitution)) PNVariables.AddDerivedVariable('chi1chi2', dot(chiVec1.substitution, chiVec2.substitution)) PNVariables.AddDerivedConstant('chi2chi2', dot(chiVec2.substitution, chiVec2.substitution)) PNVariables.AddDerivedVariable('chi1_ell', dot(chiVec1.substitution, ellHat.substitution)) PNVariables.AddDerivedVariable('chi1_n', dot(chiVec1.substitution, nHat.substitution)) PNVariables.AddDerivedVariable('chi1_lambda', dot(chiVec1.substitution, lambdaHat.substitution)) PNVariables.AddDerivedVariable('chi2_ell', dot(chiVec2.substitution, ellHat.substitution)) PNVariables.AddDerivedVariable('chi2_n', dot(chiVec2.substitution, nHat.substitution)) PNVariables.AddDerivedVariable('chi2_lambda', dot(chiVec2.substitution, lambdaHat.substitution)) PNVariables.AddDerivedConstant('sqrt1Mchi1chi1', sqrt(1-chi1chi1)) PNVariables.AddDerivedConstant('sqrt1Mchi2chi2', sqrt(1-chi2chi2)) PNVariables.AddDerivedVariable('S', chiVec1.substitution*M1**2 + chiVec2.substitution*M2**2, datatype=chiVec1.datatype) PNVariables.AddDerivedVariable('S_ell', chi1_ell*M1**2 + chi2_ell*M2**2) PNVariables.AddDerivedVariable('S_n', chi1_n*M1**2 + chi2_n*M2**2) PNVariables.AddDerivedVariable('S_lambda', chi1_lambda*M1**2 + chi2_lambda*M2**2) PNVariables.AddDerivedVariable('Sigma', M*(chiVec2.substitution*M2 - chiVec1.substitution*M1), datatype=chiVec1.datatype) PNVariables.AddDerivedVariable('Sigma_ell', M*(chi2_ell*M2 - chi1_ell*M1)) PNVariables.AddDerivedVariable('Sigma_n', M*(chi2_n*M2 - chi1_n*M1)) PNVariables.AddDerivedVariable('Sigma_lambda', M*(chi2_lambda*M2 - chi1_lambda*M1)) PNVariables.AddDerivedVariable('chi_s', (chiVec1.substitution + chiVec2.substitution)/2, datatype=chiVec1.datatype) PNVariables.AddDerivedVariable('chi_s_ell', (chi1_ell+chi2_ell)/2) PNVariables.AddDerivedVariable('chi_s_n', (chi1_n+chi2_n)/2) PNVariables.AddDerivedVariable('chi_s_lambda', (chi1_lambda+chi2_lambda)/2) PNVariables.AddDerivedVariable('chi_a', (chiVec1.substitution - chiVec2.substitution)/2, datatype=chiVec1.datatype) PNVariables.AddDerivedVariable('chi_a_ell', (chi1_ell-chi2_ell)/2) PNVariables.AddDerivedVariable('chi_a_n', (chi1_n-chi2_n)/2) PNVariables.AddDerivedVariable('chi_a_lambda', (chi1_lambda-chi2_lambda)/2) Explanation: Various spin components and combinations: End of explanation PNVariables.AddDerivedVariable('x', v**2) PNVariables.AddDerivedVariable('Omega_orb', (v**3)/M) PNVariables.AddDerivedVariable('logv', log(v)) Explanation: Other functions of the angular velocity that find frequent use: End of explanation
12,994
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Circuit optimization, gate alignment, and spin echoes <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step3: Preparing circuits to run on QCS For the sake of this tutorial, we will use a circuit structure created by the create_benchmark_circuit function defined above. Step5: This circuit divides the qubits into two registers Step6: We choose this circuit as an example for two reasons Step7: To convert gates to this gateset, use the cirq.MergeInteractionsToSqrtIswap optimizer. This optimizer merges all consecutive (one- and two-qubit) interactions on two qubits into a unitary matrix and then decomposes this unitary using $\sqrt{\text{iSWAP}}$ gates in an attempt to (a) convert to the target gateset and (b) reduce the circuit depth by reducing the number of operations. Step9: Note Step10: Eject single-qubit operations After converting to a target gateset, you can use various circuit optimizers to attempt to reduce the number of gates as shown below. The cirq.eject_phased_paulis optimizer pushes cirq.X, cirq.Y, and cirq.PhasedXPowGate gates towards the end of the circuit. Step11: Note that, for example, the back-to-back cirq.X gates on the ancilla have been removed from the start of the circuit. You can also use the cirq.eject_z optimizer to attempt to push cirq.Z gates towards the end of the circuit. Step12: Note that, for example, the cirq.Z gate immediately before the ancilla measurement has been removed. Align gates in moments After optimizing, gates should be aligned into cirq.Moments to satisfy the following criteria Step13: Note Step14: You can also align gates and push them to the right with cirq.align_right. Step15: Also, you can use cirq.stratified_circuit to align operations into similar categories. For example, you can align single-qubit and two-qubit operations in separate moments as follows. Step16: Note that each moment now only contains single-qubit gates or two-qubit gates. Drop moments To drop moments that have a tiny effect or moments that are empty, you can use the following optimizers. Step17: Synchronize terminal measurements You can use the cirq.synchronize_terminal_measurements to move all measurements to the final moment if it can accommodate them (without overlapping with other operations). Step18: Adding spin echoes Dynamical decoupling applies a series of spin echoes to otherwise idle qubits to reduce decoherent effects. As mentioned above, spin echoes were used as an effective error mitigation technique in Information Scrambling in Computationally Complex Quantum Circuits, and the performance of any circuit with idle qubits can potentially be improved by adding spin echoes. The following codeblock shows how to insert spin echoes on the ancilla qubit. Step20: The ancilla now has spin echoes between the two-qubit gates at the start/end of the circuit instead of remaining idle. Benchmark Now that we have discussed how to remove uncessary gates, align gates, and insert spin echoes, we run an experiment to benchmark the results. First we get a line of qubits, list of cycle values (one circuit per cycle value), and set other experimental parameters. Step21: The create_benchmark_circuit defined at the start of this tutorial has options to optimize the circuit and insert spin echoes on the ancilla as we have discussed above. 
Without any optimization or spin echoes, an example circuit looks like this Step22: After removing unecessary gates (optimization) and aligning gates, the same circuit looks like this Step23: And with optimization + alginment + spin echoes on the ancilla, the same circuit looks like this Step25: Now we create circuits for all cycle values without optimization, with optimization + alignment, and with optimization + alignment + spin echoes. Step27: The next cell runs them on the device. Step29: And the next cell plots the results.
Python Code: try: import cirq except ImportError: print("installing cirq...") !pip install --quiet cirq --pre print("installed cirq.") import matplotlib.pyplot as plt import numpy as np import cirq import cirq_google as cg import os # The Google Cloud Project id to use. project_id = '' #@param {type:"string"} processor_id = "" #@param {type:"string"} from cirq_google.engine.qcs_notebook import get_qcs_objects_for_notebook device_sampler = get_qcs_objects_for_notebook(project_id, processor_id) # @markdown Helper functions. from typing import Optional, Sequence from cirq.experiments import random_rotations_between_grid_interaction_layers_circuit # Gates for spin echoes. pi_pulses = [ cirq.PhasedXPowGate(phase_exponent=p, exponent=1.0) for p in (-0.5, 0.0, 0.5, 1.0) ] def create_benchmark_circuit( qubits: Sequence[cirq.GridQubit], cycles: int, twoq_gate: cirq.Gate = cirq.SQRT_ISWAP, seed: Optional[int] = None, with_optimization: bool = False, with_alignment: bool = False, with_spin_echoes: bool = False, ) -> cirq.Circuit: Returns an "OTOC-like" circuit [1] used to benchmark optimization and/or alignment and/or spin echoes. Args: qubits: Qubits to use. cycles: Depth of random rotations in the forward & reverse unitary. twoq_gate: Two-qubit gate to use. seed: Seed for circuit generation. with_optimization: Run a series of optimizations on the circuit. with_alignment: Align moments and synchronize terminal measurements. with_spin_echoes: Insert spin echoes on ancilla qubit. References: [1] Fig. S10 of https://arxiv.org/abs/2101.08870. ancilla, qubits = qubits[0], qubits[1:] # Put ancilla into the |1⟩ state and couple it to the rest of the qubits. excite_ancilla = [cirq.X(ancilla), twoq_gate(ancilla, qubits[0])] # Forward operations. forward = random_rotations_between_grid_interaction_layers_circuit( qubits, depth=cycles, two_qubit_op_factory=lambda a, b, _: twoq_gate.on(a, b), pattern=cirq.experiments.GRID_STAGGERED_PATTERN, single_qubit_gates=[cirq.PhasedXPowGate(phase_exponent=p, exponent=0.5) for p in np.arange(-1.0, 1.0, 0.25)], seed=seed ) # Full circuit. Note: We are intentionally creating a bad circuit structure # by putting each operation in a new moment (via `cirq.InsertStrategy.New`) # to show the advantages of optimization & alignment. circuit = cirq.Circuit(excite_ancilla) circuit.append(forward.all_operations(), strategy=cirq.InsertStrategy.NEW) circuit.append(cirq.inverse(forward).all_operations(), strategy=cirq.InsertStrategy.NEW) circuit.append(cirq.inverse(excite_ancilla[1:])) circuit.append(cirq.measure(ancilla, key="z"), strategy=cirq.InsertStrategy.NEW) # Run optimization. if with_optimization: cirq.MergeInteractionsToSqrtIswap().optimize_circuit(circuit) circuit = cirq.eject_phased_paulis(circuit) circuit = cirq.eject_z(circuit) circuit = cirq.drop_negligible_operations(circuit) circuit = cirq.drop_empty_moments(circuit) # Insert spin echoes. Note: It's important to do this after optimization, as # optimization will remove spin echoes. if with_spin_echoes: random_state = np.random.RandomState(seed) spin_echo = [] for _ in range(cycles * 2): op = random_state.choice(pi_pulses).on(ancilla) spin_echo += [op, cirq.inverse(op)] circuit.insert(2, spin_echo) # Alignment. 
if with_alignment: circuit = cirq.align_right(circuit) circuit = synchronize_terminal_measurements(circuit) return circuit def to_survival_prob(result: cirq.Result) -> float: return np.mean(np.sum(result.measurements["z"], axis=1) == 1) Explanation: Circuit optimization, gate alignment, and spin echoes <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/cirq/tutorials/google/spin_echoes"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/google/spin_echoes.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/spin_echoes.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/google/spin_echoes.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a> </td> </table> This tutorial shows how to prepare circuits to run on the Quantum Computing Service (QCS) and optimize them to improve performace, showing an example of the procedure outlined in the Best practices guide. This is an "advanced" tutorial where you will learn to perform the following optimization techniques: Converting to target gateset Ejecting single-qubit operations Aligning gates in moments Inserting spin echoes to reduce leakage & cross-talk. Note: The function cirq_google.optimized_for_sycamore implements some of the optimizations shown here. This tutorial provides more detail for finer control. Setup Note: this notebook relies on unreleased Cirq features. If you want to try these features, make sure you install cirq via pip install cirq --pre. End of explanation Create an example circuit. qubits = cirq.GridQubit.rect(2, 3) # [cirq.GridQubit(x, y) for (x, y) in [(3, 2), (4, 2), (4, 1), (5, 1), (6, 1), (6, 2), (5, 2)]] circuit = create_benchmark_circuit(qubits, twoq_gate=cirq.ISWAP, cycles=3, seed=1) print("Example benchmark circuit:\n") circuit Explanation: Preparing circuits to run on QCS For the sake of this tutorial, we will use a circuit structure created by the create_benchmark_circuit function defined above. End of explanation Without noise, only the 1 state is measured. result = cirq.Simulator().run(circuit, repetitions=1000) result.histogram(key="z") Explanation: This circuit divides the qubits into two registers: a single ancilla (top qubit) as the first register, and the remaining qubits as the second register. First, the ancilla is excited into the $|1\rangle$ state and coupled to the second register. Then, a Loschmidt echo is performed on the second register. Last, the ancilla is uncoupled from the second register and measured. Without any noise, the only measurement result should be $1$. End of explanation # Create an Engine object to use. spec = cg.Engine(project_id).get_processor(processor_id).get_device_specification() # Iterate through each gate set valid on the device. 
for gateset in spec.valid_gate_sets: print(gateset.name) print('-------') # Prints each gate valid in the set with its duration for gate in gateset.valid_gates: print('%s %d' % (gate.id, gate.gate_duration_picos)) print() Explanation: We choose this circuit as an example for two reasons: Each gate in the circuit is in its own cirq.Moment, so this is a poor circuit structure to run on devices without any optimization / alignment. The ancilla qubit is idle except at the start and end of the circuit, so this is a prime example where adding spin echoes can improve performance. A similar circuit was used in Information Scrambling in Computationally Complex Quantum Circuits (see Fig. S10) to benchmark the performance of spin echoes. Starting from this circuit, we show how to optimize gates, align moments, and insert spin echoes to improve the performance on a real device. Convert to target gateset To run on a device, all gates in the circuit will be converted to a gateset supported by that device. (See the Device specifications guide for information on supported gatesets.) We will use the $\sqrt{\text{iSWAP}}$ gateset in this tutorial. You can see this gateset and others as follows. End of explanation cirq.MergeInteractionsToSqrtIswap().optimize_circuit(circuit) circuit Explanation: To convert gates to this gateset, use the cirq.MergeInteractionsToSqrtIswap optimizer. This optimizer merges all consecutive (one- and two-qubit) interactions on two qubits into a unitary matrix and then decomposes this unitary using $\sqrt{\text{iSWAP}}$ gates in an attempt to (a) convert to the target gateset and (b) reduce the circuit depth by reducing the number of operations. End of explanation Compile an arbitrary two-qubit operation to the sqrt_iswap gateset. ops = cirq.two_qubit_matrix_to_sqrt_iswap_operations( q0=qubits[0], q1=qubits[1], mat=cirq.testing.random_unitary(dim=4, random_state=1) ) cirq.Circuit(ops) Explanation: Note: Cirq supports decomposing to many different target gatesets via analytical decomposition functions present in cirq.optimizers. For example, you can compile an arbitrary two-qubit unitary (provided as a matrix) to $\sqrt{\text{iSWAP}}$ operations as shown below. This is useful when using custom gates in a circuit. End of explanation circuit = cirq.eject_phased_paulis(circuit) circuit Explanation: Eject single-qubit operations After converting to a target gateset, you can use various circuit optimizers to attempt to reduce the number of gates as shown below. The cirq.eject_phased_paulis optimizer pushes cirq.X, cirq.Y, and cirq.PhasedXPowGate gates towards the end of the circuit. End of explanation circuit = cirq.eject_z(circuit) circuit Explanation: Note that, for example, the back-to-back cirq.X gates on the ancilla have been removed from the start of the circuit. You can also use the cirq.eject_z optimizer to attempt to push cirq.Z gates towards the end of the circuit. End of explanation left_aligned_circuit = cirq.align_left(circuit) left_aligned_circuit Explanation: Note that, for example, the cirq.Z gate immediately before the ancilla measurement has been removed. Align gates in moments After optimizing, gates should be aligned into cirq.Moments to satisfy the following criteria: The fewer moments the better (generally speaking). Each moment is a discrete time slice, so fewer moments means shorter circuit execution time. Moments should consist of gates with similar durations. Otherwise some qubits will be idle for part of the moment. 
It's best to align one-qubit gates in their own moment and two-qubit gates in their own moment if possible. All measurements should be terminal and in a single moment. Intermediate measurements are not currently supported, and measurement operation times are roughly two orders of magnitude longer than other gate times (see the above cell which prints out gatesets and gate times). To align gates into moments and push them as far left as possible, use cirq.align_left. End of explanation print(f"Original circuit has {len(circuit)} moments.") print(f"Aligned circuit has {len(left_aligned_circuit)} moments.") Explanation: Note: Optimizers can cause terminal measurements to become misaligned, but this can be fixed with cirq.synchronize_terminal_measurements as discussed below. Note how many fewer moments this aligned circuit has. End of explanation right_aligned_circuit = cirq.align_right(circuit) right_aligned_circuit Explanation: You can also align gates and push them to the right with cirq.align_right. End of explanation circuit = cirq.stratified_circuit( circuit, categories=[lambda op : len(op.qubits) == 1, lambda op : len(op.qubits) == 2] ) circuit Explanation: Also, you can use cirq.stratified_circuit to align operations into similar categories. For example, you can align single-qubit and two-qubit operations in separate moments as follows. End of explanation circuit = cirq.drop_negligible_operations(circuit) circuit = cirq.drop_empty_moments(circuit) circuit Explanation: Note that each moment now only contains single-qubit gates or two-qubit gates. Drop moments To drop moments that have a tiny effect or moments that are empty, you can use the following optimizers. End of explanation circuit = cirq.synchronize_terminal_measurements(circuit) circuit Explanation: Synchronize terminal measurements You can use the cirq.synchronize_terminal_measurements to move all measurements to the final moment if it can accommodate them (without overlapping with other operations). End of explanation # Gates for spin echoes. Note that these gates are self-inverse. pi_pulses = [ cirq.PhasedXPowGate(phase_exponent=p, exponent=1.0) for p in (-0.5, 0.0, 0.5, 1.0) ] # Generate spin echoes on ancilla. num_echoes = 3 random_state = np.random.RandomState(1) spin_echo = [] for _ in range(num_echoes): op = random_state.choice(pi_pulses).on(qubits[0]) spin_echo += [op, cirq.inverse(op)] # Insert spin echo operations to circuit. optimized_circuit_with_spin_echoes = circuit.copy() optimized_circuit_with_spin_echoes.insert(5, spin_echo) # Align single-qubit spin echo gates into other moments of single-qubit gates. optimized_circuit_with_spin_echoes = cirq.stratified_circuit( optimized_circuit_with_spin_echoes, categories=[lambda op : len(op.qubits) == 1, lambda op : len(op.qubits) == 2] ) optimized_circuit_with_spin_echoes Explanation: Adding spin echoes Dynamical decoupling applies a series of spin echoes to otherwise idle qubits to reduce decoherent effects. As mentioned above, spin echoes were used as an effective error mitigation technique in Information Scrambling in Computationally Complex Quantum Circuits, and the performance of any circuit with idle qubits can potentially be improved by adding spin echoes. The following codeblock shows how to insert spin echoes on the ancilla qubit. End of explanation Set experiment parameters. 
qubits = cg.line_on_device(device_sampler.device, length=7) cycle_values = range(0, 100 + 1, 4) nreps = 20_000 seed = 1 Explanation: The ancilla now has spin echoes between the two-qubit gates at the start/end of the circuit instead of remaining idle. Benchmark Now that we have discussed how to remove uncessary gates, align gates, and insert spin echoes, we run an experiment to benchmark the results. First we get a line of qubits, list of cycle values (one circuit per cycle value), and set other experimental parameters. End of explanation circuit = create_benchmark_circuit(qubits, cycles=2, seed=1) print(f"Unoptimized circuit ({len(circuit)} moments):\n") circuit Explanation: The create_benchmark_circuit defined at the start of this tutorial has options to optimize the circuit and insert spin echoes on the ancilla as we have discussed above. Without any optimization or spin echoes, an example circuit looks like this: End of explanation optimized_circuit = create_benchmark_circuit(qubits, cycles=2, seed=1, with_optimization=True, with_alignment=True) print(f"Circuit with optimization + alignment ({len(optimized_circuit)} moments):\n") optimized_circuit Explanation: After removing unecessary gates (optimization) and aligning gates, the same circuit looks like this: End of explanation optimized_circuit_with_spin_echoes = create_benchmark_circuit(qubits, cycles=2, seed=1, with_optimization=True, with_alignment=True, with_spin_echoes=True) print(f"Circuit with optimization + alignment + spin echoes ({len(optimized_circuit_with_spin_echoes)} moments):\n") optimized_circuit_with_spin_echoes Explanation: And with optimization + alginment + spin echoes on the ancilla, the same circuit looks like this: End of explanation Create all circuits. batch = [ create_benchmark_circuit(qubits, cycles=c, seed=seed) for c in cycle_values ] batch_with_optimization = [ create_benchmark_circuit(qubits, cycles=c, seed=seed, with_optimization=True, with_alignment=True) for c in cycle_values ] batch_with_optimization_and_spin_echoes = [ create_benchmark_circuit(qubits, cycles=c, seed=seed, with_optimization=True, with_alignment=True, with_spin_echoes=True) for c in cycle_values ] Explanation: Now we create circuits for all cycle values without optimization, with optimization + alignment, and with optimization + alignment + spin echoes. End of explanation Run all circuits. all_probs = [] for b in (batch, batch_with_optimization, batch_with_optimization_and_spin_echoes): results = device_sampler.sampler.run_batch(b, repetitions=nreps) all_probs.append([to_survival_prob(*res) for res in results]) Explanation: The next cell runs them on the device. End of explanation Plot results. labels = ["Unoptimized", "Optimization + Alignment", "Optimization + Alignment + Spin echoes"] for (probs, label) in zip(all_probs, labels): plt.plot(cycle_values, probs, "-o", label=label) plt.xlabel("Cycles") plt.ylabel("Survival probability") plt.legend(); Explanation: And the next cell plots the results. End of explanation
12,995
Given the following text description, write Python code to implement the functionality described below step by step Description: Getting Started With TensorFlow Reference To get the most out of this guide, you should know the following Step1: TensorFlow Core tutorial Importing TensorFlow The canonical import statement for TensorFlow programs is as follows Step2: This gives Python access to all of TensorFlow's classes, methods, and symbols. Most of the documentation assumes you have already done this. The Computational Graph You might think of TensorFlow Core programs as consisting of two discrete sections Step3: Notice that printing the nodes does not output the values 3.0 and 4.0 as you might expect. Instead, they are nodes that, when evaluated, would produce 3.0 and 4.0, respectively. To actually evaluate the nodes, we must run the computational graph within a session. A session encapsulates the control and state of the TensorFlow runtime. The following code creates a Session object and then invokes its run method to run enough of the computational graph to evaluate node1 and node2. By running the computational graph in a session as follows Step4: We can build more complicated computations by combining Tensor nodes with operations (Operations are also nodes.). For example, we can add our two constant nodes and produce a new graph as follows Step5: TensorFlow provides a utility called TensorBoard that can display a picture of the computational graph. Here is a screenshot showing how TensorBoard visualizes the graph Step6: The preceding three lines are a bit like a function or a lambda in which we define two input parameters (a and b) and then an operation on them. We can evaluate this graph with multiple inputs by using the feed_dict parameter to specify Tensors that provide concrete values to these placeholders Step7: In TensorBoard, the graph looks liket his Step8: The preceding computational graph would look as follows in TensorBoard Step9: Constants are initialized when you call tf.constant, and their value can never change. By contrast, variables are not initialized when you call tf.Variable. To initialize all the variables in a TensorFlow program, you must explicitly call a special operation as follows Step10: It is important to realize init is a handle to the TensorFlow sub-graph that initializes all the global variables. Until we call sess.run, the variables are uninitialized. Since x is a placeholder, we can evaluate linear_model for several values of x simultaneously as follows Step11: Summary of the code for the model so far Step12: We've created a model, but we don't know how good it is yet. To evaluate the model on training data, we need a y placeholder to provide the desired values, and we need to write a loss function. A loss function measures how far apart the current model is from the provided data. We'll use a standard loss model for linear regression, which sums the squares of the deltas between the current model and the provided data. linear_model - y creates a vector where each element is the corresponding example's error delta. We call tf.square to square that error. Then, we sum all the squared errors to create a single scalar that abstracts the error of all examples using tf.reduce_sum Step13: We could improve this manually by reassigning the values of W and b to the perfect values of -1 and 1. A variable is initialized to the value provided to tf.Variable but can be changed using operations like tf.assign. 
For example, W=-1 and b=1 are the optimal parameters for our model. We can change W and b accordingly Step14: We guessed the "perfect" values of W and b, but the whole point of machine learning is to find the correct model parameters automatically. We will show how to accomplish this in the next section. tf.train API A complete discussion of machine learning is out of the scope of this tutorial. However, TensorFlow provides optimizers that slowly change each variable in order to minimize the loss function. The simplest optimizer is gradient descent. It modifies each variable according to the magnitude of the derivative of loss with respect to that variable. In general, computing symbolic derivatives manually is tedious and error-prone. Consequently, TensorFlow can automatically produce derivatives given only a description of the model using the function tf.gradients. For simplicity, optimizers typically do this for you. For example, Step15: Now we have done actual machine learning! Although doing this simple linear regression doesn't require much TensorFlow core code, more complicated models and methods to feed data into your model necessitate more code. Thus TensorFlow provides higher level abstractions for common patterns, structures, and functionality. We will learn how to use some of these abstractions in the next section. Complete program The completed trainable linear regression model is shown here
Python Code: 3 # a rank 0 tensor; this is a scalar with shape [] [1. ,2., 3.] # a rank 1 tensor; this is a vector with shape [3] [[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3] [[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3] Explanation: Getting Started With TensorFlow Reference To get the most out of this guide, you should know the following: How to program in Python. At least a little bit about arrays. Ideally, something about machine learning. However, if you know little or nothing about machine learning, then this is still the first guide you should read. TensorFlow provides multiple APIs. The lowest level API--TensorFlow Core-- provides you with complete programming control. We recommend TensorFlow Core for machine learning researchers and others who require fine levels of control over their models. This guide begins with a tutorial on TensorFlow Core. Later, we demonstrate how to implement the same model in tf.contrib.learn. Knowing TensorFlow Core principles will give you a great mental model of how things are working internally when you use the more compact higher level API. Tensors The central unit of data in TensorFlow is the tensor. A tensor consists of a set of primitive values shaped into an array of any number of dimensions. A tensor's rank is its number of dimensions. Here are some examples of tensors: End of explanation import tensorflow as tf Explanation: TensorFlow Core tutorial Importing TensorFlow The canonical import statement for TensorFlow programs is as follows: End of explanation node1 = tf.constant(3.0, dtype=tf.float32) node2 = tf.constant(4.0, dtype=tf.float32) print(node1, node2) Explanation: This gives Python access to all of TensorFlow's classes, methods, and symbols. Most of the documentation assumes you have already done this. The Computational Graph You might think of TensorFlow Core programs as consisting of two discrete sections: Building the computational graph. Running the computational graph. A computational graph is a series of TensorFlow operations arranged into a graph of nodes. Let's build a simple computational graph. Each node takes zero or more tensors as inputs and produces a tensor as an output. One type of node is a constant. Like all TensorFlow constants, it takes no inputs, and it outputs a value it stores internally. We can create two floating point Tensors node1 and node2 as follows: End of explanation sess = tf.Session() print(sess.run([node1, node2])) Explanation: Notice that printing the nodes does not output the values 3.0 and 4.0 as you might expect. Instead, they are nodes that, when evaluated, would produce 3.0 and 4.0, respectively. To actually evaluate the nodes, we must run the computational graph within a session. A session encapsulates the control and state of the TensorFlow runtime. The following code creates a Session object and then invokes its run method to run enough of the computational graph to evaluate node1 and node2. By running the computational graph in a session as follows: End of explanation node3 = tf.add(node1, node2) print("node3: ", node3) print("sess.run(node3): ", sess.run(node3)) Explanation: We can build more complicated computations by combining Tensor nodes with operations (Operations are also nodes.). 
For example, we can add our two constant nodes and produce a new graph as follows: End of explanation a = tf.placeholder(tf.float32) b = tf.placeholder(tf.float32) adder_node = a + b # + provides a shortcut for tf.add(a, b) Explanation: TensorFlow provides a utility called TensorBoard that can display a picture of the computational graph. Here is a screenshot showing how TensorBoard visualizes the graph: <img src="images/image-01.png?raw=true" alt="Smiley face" height="150" width="150"> As it stands, this graph is not especially interesting because it always produces a constant result. A graph can be parameterized to accept external inputs, known as placeholders. A placeholder is a promise to provide a value later. End of explanation print(sess.run(adder_node, {a:3, b:4.5})) print(sess.run(adder_node, {a: [1,3], b: [2,4]})) Explanation: The preceding three lines are a bit like a function or a lambda in which we define two input parameters (a and b) and then an operation on them. We can evaluate this graph with multiple inputs by using the feed_dict parameter to specify Tensors that provide concrete values to these placeholders: End of explanation add_and_triple = adder_node * 3 print(sess.run(add_and_triple, {a:3, b:4.5})) Explanation: In TensorBoard, the graph looks liket his: <img src="images/image-02.png?raw=true" alt="Smiley face" height="150" width="150"> We can make the computational graph more complex by adding another operation. For example, End of explanation W = tf.Variable([.3], dtype=tf.float32) b = tf.Variable([-.3], dtype=tf.float32) x = tf.placeholder(tf.float32) linear_model = W * x + b Explanation: The preceding computational graph would look as follows in TensorBoard: <img src="images/image-03.png?raw=true" alt="Smiley face" height="150" width="150"> In machine learning we will typically want a model that can take arbitrary inputs, such as the one above. To make the model trainable, we need to be able to modify the graph to get new outputs with the same input. Variables allow us to add trainable parameters to a graph. They are constructed with a type and initial value: End of explanation init = tf.global_variables_initializer() sess.run(init) Explanation: Constants are initialized when you call tf.constant, and their value can never change. By contrast, variables are not initialized when you call tf.Variable. To initialize all the variables in a TensorFlow program, you must explicitly call a special operation as follows: End of explanation print(sess.run(linear_model, {x:[1,2,3,4]})) Explanation: It is important to realize init is a handle to the TensorFlow sub-graph that initializes all the global variables. Until we call sess.run, the variables are uninitialized. 
Since x is a placeholder, we can evaluate linear_model for several values of x simultaneously as follows: End of explanation import tensorflow as tf # Create variables and placeholder W = tf.Variable([.3], dtype=tf.float32) b = tf.Variable([-.3], dtype=tf.float32) x = tf.placeholder(tf.float32) # Create the model linear_model = W * x + b # Create a session object sess = tf.Session() # Initialize all the variables in a TensorFlow program init = tf.global_variables_initializer() sess.run(init) # Evaluate linear_model for several values of x simultaneously print(sess.run(linear_model, {x:[1,2,3,4]})) Explanation: Summary of the code for the model so far: End of explanation y = tf.placeholder(tf.float32) squared_deltas = tf.square(linear_model - y) loss = tf.reduce_sum(squared_deltas) print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]})) Explanation: We've created a model, but we don't know how good it is yet. To evaluate the model on training data, we need a y placeholder to provide the desired values, and we need to write a loss function. A loss function measures how far apart the current model is from the provided data. We'll use a standard loss model for linear regression, which sums the squares of the deltas between the current model and the provided data. linear_model - y creates a vector where each element is the corresponding example's error delta. We call tf.square to square that error. Then, we sum all the squared errors to create a single scalar that abstracts the error of all examples using tf.reduce_sum: End of explanation fixW = tf.assign(W, [-1.]) fixb = tf.assign(b, [1.]) sess.run([fixW, fixb]) print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]})) Explanation: We could improve this manually by reassigning the values of W and b to the perfect values of -1 and 1. A variable is initialized to the value provided to tf.Variable but can be changed using operations like tf.assign. For example, W=-1 and b=1 are the optimal parameters for our model. We can change W and b accordingly: End of explanation optimizer = tf.train.GradientDescentOptimizer(0.01) train = optimizer.minimize(loss) sess.run(init) # reset values to incorrect defaults. for i in range(1000): sess.run(train, {x:[1,2,3,4], y:[0,-1,-2,-3]}) print(sess.run([W, b])) Explanation: We guessed the "perfect" values of W and b, but the whole point of machine learning is to find the correct model parameters automatically. We will show how to accomplish this in the next section. tf.train API A complete discussion of machine learning is out of the scope of this tutorial. However, TensorFlow provides optimizers that slowly change each variable in order to minimize the loss function. The simplest optimizer is gradient descent. It modifies each variable according to the magnitude of the derivative of loss with respect to that variable. In general, computing symbolic derivatives manually is tedious and error-prone. Consequently, TensorFlow can automatically produce derivatives given only a description of the model using the function tf.gradients. For simplicity, optimizers typically do this for you. 
For example, End of explanation import numpy as np import tensorflow as tf # Model parameters W = tf.Variable([.3], dtype=tf.float32) b = tf.Variable([-.3], dtype=tf.float32) # Model input and output x = tf.placeholder(tf.float32) linear_model = W * x + b y = tf.placeholder(tf.float32) # loss loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares # optimizer optimizer = tf.train.GradientDescentOptimizer(0.01) train = optimizer.minimize(loss) # training data x_train = [1,2,3,4] y_train = [0,-1,-2,-3] # training loop init = tf.global_variables_initializer() sess = tf.Session() sess.run(init) # reset values to wrong for i in range(1000): sess.run(train, {x:x_train, y:y_train}) # evaluate training accuracy curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x:x_train, y:y_train}) print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss)) Explanation: Now we have done actual machine learning! Although doing this simple linear regression doesn't require much TensorFlow core code, more complicated models and methods to feed data into your model necessitate more code. Thus TensorFlow provides higher level abstractions for common patterns, structures, and functionality. We will learn how to use some of these abstractions in the next section. Complete program The completed trainable linear regression model is shown here: End of explanation
12,996
Given the following text description, write Python code to implement the functionality described below step by step Description: Pandas DataFrame Step1: V4 grade (범주형 데이터형) LC assigned loan grade A,B,C,D,E,F,G = {1, 2, 3, 4, 5, 6, 7} Step2: V5 sub_grade (범주형 데이터형) LC assigned loan subgrade 1, 2, 3, 4, 5 Step3: V6 emp_title (범주형 데이터형) The job title supplied by the Borrower when applying for the loan.* True = 1, False = 0 Step4: V7 emp_length (범주형 데이터형) Employment length in years. Possible values are between 0 and 10 where 0 means less than one year and 10 means ten or more years. < 1' = 0, 10+ = 10, 'n/a' = 11 Step5: V8 home_ownership (범주형 데이터형) The home ownership status provided by the borrower during registration or obtained from the credit report. Our values are Step6: V10 verification_status (범주형 데이터형) Indicates if income was verified by LC, not verified, or if the income source was verified Source Verified, Verified = 1, Not Verified = 0 Step7: V11 issue_d (범주형 데이터형) The month which the loan was funded mm 으로 변환 Step8: V14 purpose (범주형 데이터형) A category provided by the borrower for the loan request. 15개 범주 = {1 Step9: V23 initial_list_status (범주형 데이터형) The initial listing status of the loan. Possible values are – W, F W = 1, F = 0
Python Code: lc_data = pd.DataFrame.from_csv('./lc_dataframe(cleaning).csv') lc_data = lc_data.reset_index() lc_data.tail() Explanation: Pandas DataFrame End of explanation x = lc_data['grade'] sns.distplot(x, color = 'r') plt.show() Explanation: V4 grade (범주형 데이터형) LC assigned loan grade A,B,C,D,E,F,G = {1, 2, 3, 4, 5, 6, 7} End of explanation x = lc_data['sub_grade'] sns.distplot(x, color = 'g') plt.show() Explanation: V5 sub_grade (범주형 데이터형) LC assigned loan subgrade 1, 2, 3, 4, 5 End of explanation x = lc_data['emp_title'] plt.hist(x) plt.show() Explanation: V6 emp_title (범주형 데이터형) The job title supplied by the Borrower when applying for the loan.* True = 1, False = 0 End of explanation x = lc_data['emp_length'] sns.distplot(x, color = 'r') plt.show() Explanation: V7 emp_length (범주형 데이터형) Employment length in years. Possible values are between 0 and 10 where 0 means less than one year and 10 means ten or more years. < 1' = 0, 10+ = 10, 'n/a' = 11 End of explanation x = lc_data['home_ownership'] sns.distplot(x, color = 'g') plt.show() Explanation: V8 home_ownership (범주형 데이터형) The home ownership status provided by the borrower during registration or obtained from the credit report. Our values are: RENT, OWN, MORTGAGE, OTHER Mortgage, None, Other, Own, Rent = {1, 2, 3, 4, 5} End of explanation x = lc_data['verification_status'] sns.distplot(x) plt.show() Explanation: V10 verification_status (범주형 데이터형) Indicates if income was verified by LC, not verified, or if the income source was verified Source Verified, Verified = 1, Not Verified = 0 End of explanation x = lc_data['issue_d'] sns.distplot(x, color = 'r') plt.show() Explanation: V11 issue_d (범주형 데이터형) The month which the loan was funded mm 으로 변환 End of explanation x = lc_data['purpose'] sns.distplot(x, color = 'g') plt.show() Explanation: V14 purpose (범주형 데이터형) A category provided by the borrower for the loan request. 15개 범주 = {1:15} (1부터 15까지) End of explanation x = lc_data['initial_list_status'] plt.hist(x) plt.show() Explanation: V23 initial_list_status (범주형 데이터형) The initial listing status of the loan. Possible values are – W, F W = 1, F = 0 End of explanation
12,997
Given the following text description, write Python code to implement the functionality described below step by step Description: Downloading Overlays This notebook demonstrates how to download an FPGA overlay and examine programmable logic state. 1. Instantiating an overlay With the following overlay bundle present in the overlays folder, users can instantiate the overlay easily. A bitstream file (*.bit). An hwh file (*.hwh). A python class (*.py). For example, an overlay called base can be loaded by Step1: Note Step2: Now we can check the download timestamp for this overlay. Step3: 2. Examining the PL state While there can be multiple overlay instances in Python, there is only one bitstream that is currently loaded onto the programmable logic (PL). This bitstream state is held in the singleton class, PL, and is available for user queries. Step4: Users can verify whether an overlay instance is currently loaded using the Overlay is_loaded() method Step5: 3. Overlay downloading overhead Finally, using Python, we can see the bitstream download time over 50 downloads.
Python Code: import os, warnings from pynq import PL from pynq import Overlay if not os.path.exists(PL.bitfile_name): warnings.warn('There is no overlay loaded after boot.', UserWarning) Explanation: Downloading Overlays This notebook demonstrates how to download an FPGA overlay and examine programmable logic state. 1. Instantiating an overlay With the following overlay bundle present in the overlays folder, users can instantiate the overlay easily. A bitstream file (*.bit). An hwh file (*.hwh). A python class (*.py). For example, an overlay called base can be loaded by: python from pynq.overlays.base import BaseOverlay overlay = BaseOverlay("base.bit") Users can also use the absolute file path of the bitstream to instantiate the overlay. In this notebook, we get the current bitstream loaded on PL, and try to download it multiple times. End of explanation ol = Overlay(PL.bitfile_name) Explanation: Note: If you see a warning message in the above cell, it means that no overlay has been loaded after boot, hence the PL server is not aware of the current status of the PL. In that case you won't be able to run this notebook until you manually load an overlay at least once using: python from pynq import Overlay ol = Overlay('your_overlay.bit') If you do not see any warning message, you can safely proceed. End of explanation ol.download() ol.timestamp Explanation: Now we can check the download timestamp for this overlay. End of explanation PL.bitfile_name PL.timestamp Explanation: 2. Examining the PL state While there can be multiple overlay instances in Python, there is only one bitstream that is currently loaded onto the programmable logic (PL). This bitstream state is held in the singleton class, PL, and is available for user queries. End of explanation ol.is_loaded() Explanation: Users can verify whether an overlay instance is currently loaded using the Overlay is_loaded() method End of explanation import time import matplotlib.pyplot as plt length = 50 time_log = [] for i in range(length): start = time.time() ol.download() end = time.time() time_log.append((end-start)*1000) %matplotlib inline plt.plot(range(length), time_log, 'ro') plt.title('Bitstream loading time (ms)') plt.axis([0, length, 0, 1000]) plt.show() Explanation: 3. Overlay downloading overhead Finally, using Python, we can see the bitstream download time over 50 downloads. End of explanation
12,998
Given the following text description, write Python code to implement the functionality described below step by step Description: Lecture 10 Step1: With NumPy arrays, all the same functionality you know and love from lists is still there. Step2: These operations all work whether you're using Python lists or NumPy arrays. The first place in which Python lists and NumPy arrays differ is when we get to multidimensional arrays. We'll start with matrices. To build matrices using Python lists, you basically needed "nested" lists, or a list containing lists Step3: To build the NumPy equivalent, you can basically just feed the Python list-matrix into the NumPy array method Step4: The real difference, though, comes with actually indexing these elements. With Python lists, you can index individual elements only in this way Step5: With NumPy arrays, you can use that same notation...or you can use comma-separated indices Step6: It's not earth-shattering, but enough to warrant a heads-up. When you index NumPy arrays, the nomenclature used is that of an axis Step7: Here's a great visual summary of slicing NumPy arrays, assuming you're starting from an array with shape (3, 3) Step8: We know video is 3D because we can also access its ndim attribute. Step9: Another example--to go straight to cutting-edge academic research--is 3D video microscope data of multiple tagged fluorescent markers. This would result in a five-axis NumPy object Step10: We can also ask how many elements there are total, using the size attribute Step11: These are extreme examples, but they're to illustrate how flexible NumPy arrays are. If in doubt Step12: Notice how the number "9", initially the third axis, steadily marches to the front as the axes before it are accessed. Part 2 Step13: how does Python know that you want to add the scalar value 10 to each element of the vector x? Because (in a word) broadcasting. Broadcasting is the operation through which a low(er)-dimensional array is in some way "replicated" to be the same shape as a high(er)-dimensional array. We saw this in our previous example Step14: In this example, the scalar value 1 is broadcast to all the elements of zeros, converting the operation to element-wise addition. This all happens under the NumPy hood--we don't see it! It "just works"...most of the time. There are some rules that broadcasting abides by. Essentially, dimensions of arrays need to be "compatible" in order for broadcasting to work. "Compatible" is defined as both dimensions are of equal size (e.g., both have the same number of rows) one of them is 1 (the scalar case) If these rules aren't met, you get all kinds of strange errors Step15: But on some intuitive level, this hopefully makes sense Step16: In this example, the shape of x is (3, 4). The shape of y is just 4. Their trailing axes are both 4, therefore the "smaller" array will be broadcast to fit the size of the larger array, and the operation (addition, in this case) is performed element-wise. Part 3 Step17: This is randomly generated data, yes, but it could easily be 7 data points in 4 dimensions. That is, we have 7 observations of variables with 4 descriptors. Perhaps it's 7 people who are described by their height, weight, age, and 40-yard dash time, or Data on 7 video games, each described by their PC Gamer rating, Steam downloads count, average number of active players, and total cheating complaints ...insert your own example here! 
Whatever our data, a common first step before any analysis involves some kind of preprocessing (this is just a fancy term for "making sure the data make sense"). If the example we're looking at is the video game scenario from the previous slide, then we know that any negative numbers are junk. After all, how can you have a negative rating? Or a negative number of active players? Perhaps some goofy players decided to make bogus ratings just for the lulz. Funny to them, perhaps, but not exactly useful to you when you're trying to write an algorithm to recommend games to players based on their ratings. So, you have to "clean" the data a bit. So our first course of action might be to set all negative numbers in the data to 0. We could potentially set up a pair of loops--you should know how to do this!--but it's much easier (and faster) to use boolean indexing. First, we create a mask. This is what it sounds like Step18: Just for your reference, here's the original data Step19: Now, we can use our mask to access only the indices we want to set to 0. Step20: voilà! Every negative number has been set to 0, and all the other values were left unchanged. Now we can continue with whatever analysis we may have had in mind. One small caveat with boolean indexing. Yes, you can string multiple boolean conditions together, as you may recall doing in the lecture with conditionals. But... the Python keywords and and or DO NOT WORK. You have to use the bitwise versions of the operators Step21: Fancy Indexing "Fancy" indexing is a term coined by the NumPy community to refer to this little indexing trick. It's simple enough to explain Step22: We have 8 rows and 4 columns, where each row is a 4-element vector of the same value repeated across the columns, and that value is the index of the row. In addition to slicing and boolean indexing, we can also use other NumPy arrays to very selectively pick and choose what elements we want, and even the order in which we want them. Let's say I want rows 7, 0, 5, and 2. In that order. Step23: Ta-daaa! Pretty spiffy! Row 7 shows up first (we know that because of the straight 7s), followed by row 0, then row 5, then row 2. You could get the same thing if you did matrix[7], then matrix[0], then matrix[5], and finally matrix[2], and then stacked the results into that final matrix. But this just condenses all those steps. But wait, there's more! Rather than just specifying one dimension, you can provide tuples of NumPy arrays that very explicitly pick out certain elements (in a certain order) from another NumPy array. Step24: Ok, this will take a little explaining, bear with me
Python Code: li = ["this", "is", "a", "list"] print(li) print(li[1:3]) # Print element 1 (inclusive) to 3 (exclusive) print(li[2:]) # Print element 2 and everything after that print(li[:-1]) # Print everything BEFORE element -1 (the last one) Explanation: Lecture 10: Array Indexing, Slicing, and Broadcasting CSCI 1360E: Foundations for Informatics and Analytics Overview and Objectives Most of this lecture will be a review of basic indexing and slicing operations, albeit within the context of NumPy arrays. Therefore, there will be some additional functionalities that are critical to understand. By the end of this lecture, you should be able to: Use "fancy indexing" in NumPy arrays Create boolean masks to pull out subsets of a NumPy array Understand array broadcasting for performing operations on subsets of NumPy arrays Part 1: NumPy Array Indexing and Slicing Hopefully, you recall basic indexing and slicing from Lecture 4. If not, please go back and refresh your understanding of the concept. End of explanation import numpy as np x = np.array([1, 2, 3, 4, 5]) print(x) print(x[1:3]) print(x[2:]) print(x[:-1]) Explanation: With NumPy arrays, all the same functionality you know and love from lists is still there. End of explanation python_matrix = [ [1, 2, 3], [4, 5, 6], [7, 8, 9] ] print(python_matrix) Explanation: These operations all work whether you're using Python lists or NumPy arrays. The first place in which Python lists and NumPy arrays differ is when we get to multidimensional arrays. We'll start with matrices. To build matrices using Python lists, you basically needed "nested" lists, or a list containing lists: End of explanation numpy_matrix = np.array(python_matrix) print(numpy_matrix) Explanation: To build the NumPy equivalent, you can basically just feed the Python list-matrix into the NumPy array method: End of explanation print(python_matrix) # The full list-of-lists print(python_matrix[0]) # The inner-list at the 0th position of the outer-list print(python_matrix[0][0]) # The 0th element of the 0th inner-list Explanation: The real difference, though, comes with actually indexing these elements. With Python lists, you can index individual elements only in this way: End of explanation print(numpy_matrix) print(numpy_matrix[0]) print(numpy_matrix[0, 0]) # Note the comma-separated format! Explanation: With NumPy arrays, you can use that same notation...or you can use comma-separated indices: End of explanation x = np.array([ [1, 2, 3], [4, 5, 6], [7, 8, 9] ]) print(x) print(x[:, 1]) # Take ALL of axis 0, and one index of axis 1. Explanation: It's not earth-shattering, but enough to warrant a heads-up. When you index NumPy arrays, the nomenclature used is that of an axis: you are indexing specific axes of a NumPy array object. In particular, when access the .shape attribute on a NumPy array, that tells you two things: 1: How many axes there are. This number is len(ndarray.shape), or the number of elements in the tuple returned by .shape. In our above example, numpy_matrix.shape would return (3, 3), so it would have 2 axes (since there are two numbers--both 3s). 2: How many elements are in each axis. In our above example, where numpy_matrix.shape returns (3, 3), there are 2 axes (since the length of that tuple is 2), and both axes have 3 elements (hence the numbers--3 elements in the first axis, 3 in the second). 
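A minimal sketch of those two points, reusing the numpy_matrix defined earlier in this lecture (nothing beyond the existing numpy import is assumed):
print(numpy_matrix.shape)       # (3, 3): the tuple has two entries, so there are 2 axes
print(len(numpy_matrix.shape))  # 2: the number of axes
print(numpy_matrix[0, :])       # index 0 of axis 0, everything along axis 1 (a row)
print(numpy_matrix[:, 0])       # everything along axis 0, index 0 of axis 1 (a column)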
Here's the breakdown of axis notation and indices used in a 2D NumPy array: As with lists, if you want an entire axis, just use the colon operator all by itself: End of explanation video = np.empty(shape = (1920, 1080, 5000)) print("Axis 0 length:", video.shape[0]) # How many rows? print("Axis 1 length:", video.shape[1]) # How many columns? print("Axis 2 length:", video.shape[2]) # How many frames? Explanation: Here's a great visual summary of slicing NumPy arrays, assuming you're starting from an array with shape (3, 3): STUDY THIS CAREFULLY. This more or less sums up everything you need to know about slicing with NumPy arrays. Depending on your field, it's entirely possible that you'll go beyond 2D matrices. If so, it's important to be able to recognize what these structures "look" like. For example, a video can be thought of as a 3D cube. Put another way, it's a NumPy array with 3 axes: the first axis is height, the second axis is width, and the third axis is number of frames. End of explanation print(video.ndim) del video Explanation: We know video is 3D because we can also access its ndim attribute. End of explanation tensor = np.empty(shape = (2, 640, 480, 360, 100)) print(tensor.shape) # Axis 0: color channel--used to differentiate between fluorescent markers # Axis 1: height--same as before # Axis 2: width--same as before # Axis 3: depth--capturing 3D depth at each time interval, like a 3D movie # Axis 4: frame--same as before Explanation: Another example--to go straight to cutting-edge academic research--is 3D video microscope data of multiple tagged fluorescent markers. This would result in a five-axis NumPy object: End of explanation print(tensor.size) del tensor Explanation: We can also ask how many elements there are total, using the size attribute: End of explanation example = np.empty(shape = (3, 5, 9)) print(example.shape) sliced = example[0] # Indexed the first axis. print(sliced.shape) sliced_again = example[0, 0] # Indexed the first and second axes. print(sliced_again.shape) Explanation: These are extreme examples, but they're to illustrate how flexible NumPy arrays are. If in doubt: once you index the first axis, the NumPy array you get back has the shape of all the remaining axes. End of explanation x = np.array([1, 2, 3, 4, 5]) x += 10 print(x) Explanation: Notice how the number "9", initially the third axis, steadily marches to the front as the axes before it are accessed. Part 2: NumPy Array Broadcasting "Broadcasting" is a fancy term for how Python--specifically, NumPy--handles vectorized operations when arrays of differing shapes are involved. (this is, in some sense, "how the sausage is made") When you write code like this: End of explanation zeros = np.zeros(shape = (3, 4)) print(zeros) zeros += 1 # Just add 1. print(zeros) Explanation: how does Python know that you want to add the scalar value 10 to each element of the vector x? Because (in a word) broadcasting. Broadcasting is the operation through which a low(er)-dimensional array is in some way "replicated" to be the same shape as a high(er)-dimensional array. We saw this in our previous example: the low-dimensional scalar was replicated, or broadcast, to each element of the array x so that the addition operation could be performed element-wise. This concept can be generalized to higher-dimensional NumPy arrays. 
End of explanation x = np.zeros(shape = (3, 3)) y = np.ones(4) x + y Explanation: In this example, the scalar value 1 is broadcast to all the elements of zeros, converting the operation to element-wise addition. This all happens under the NumPy hood--we don't see it! It "just works"...most of the time. There are some rules that broadcasting abides by. Essentially, dimensions of arrays need to be "compatible" in order for broadcasting to work. "Compatible" is defined as both dimensions are of equal size (e.g., both have the same number of rows) one of them is 1 (the scalar case) If these rules aren't met, you get all kinds of strange errors: End of explanation x = np.zeros(shape = (3, 4)) y = np.array([1, 2, 3, 4]) z = x + y print(z) Explanation: But on some intuitive level, this hopefully makes sense: there's no reasonable arithmetic operation that can be performed when you have one $3 \times 3$ matrix and a vector of length 4. Draw them out if you need to convince yourself--how would add a $3 \times 3$ matrix and a 4-length vector? Or subtract them? There's no way to do it, and Python knows that. To be rigorous: it's the trailing dimensions / axes that you want to make sure line up (as in, the last number that shows up when you do the .shape property): End of explanation x = np.random.standard_normal(size = (7, 4)) print(x) Explanation: In this example, the shape of x is (3, 4). The shape of y is just 4. Their trailing axes are both 4, therefore the "smaller" array will be broadcast to fit the size of the larger array, and the operation (addition, in this case) is performed element-wise. Part 3: "Fancy" Indexing Hopefully you have at least an intuitive understanding of how indexing works so far. Unfortunately, it gets more complicated, but still retains a modicum of simplicity. First: indexing by boolean masks. Boolean indexing We've already seen that you can index by integers. Using the colon operator, you can even specify ranges, slicing out entire swaths of rows and columns. But suppose we want something very specific; data in our array which satisfies certain criteria, as opposed to data which is found at certain indices? Put another way: can we pull data out of an array that meets certain conditions? Let's say you have some data. End of explanation mask = x < 0 print(mask) Explanation: This is randomly generated data, yes, but it could easily be 7 data points in 4 dimensions. That is, we have 7 observations of variables with 4 descriptors. Perhaps it's 7 people who are described by their height, weight, age, and 40-yard dash time, or Data on 7 video games, each described by their PC Gamer rating, Steam downloads count, average number of active players, and total cheating complaints ...insert your own example here! Whatever our data, a common first step before any analysis involves some kind of preprocessing (this is just a fancy term for "making sure the data make sense"). If the example we're looking at is the video game scenario from the previous slide, then we know that any negative numbers are junk. After all, how can you have a negative rating? Or a negative number of active players? Perhaps some goofy players decided to make bogus ratings just for the lulz. Funny to them, perhaps, but not exactly useful to you when you're trying to write an algorithm to recommend games to players based on their ratings. So, you have to "clean" the data a bit. So our first course of action might be to set all negative numbers in the data to 0. 
We could potentially set up a pair of loops--you should know how to do this!--but it's much easier (and faster) to use boolean indexing. First, we create a mask. This is what it sounds like: it "masks" certain portions of the data we don't want to change (in this case, all the numbers greater than 0, since we're assuming they're already valid). End of explanation print(x) Explanation: Just for your reference, here's the original data: notice how, in looking at the data below and the boolean mask above, all the spots where there are negative numbers also correspond to "True" in the mask? End of explanation x[mask] = 0 print(x) Explanation: Now, we can use our mask to access only the indices we want to set to 0. End of explanation mask = (x < 1) & (x > 0.5) # True for any value less than 1 but greater than 0.5 x[mask] = 99 # We're setting any value in this matrix < 1 but > 0.5 to 99 print(x) Explanation: voilà! Every negative number has been set to 0, and all the other values were left unchanged. Now we can continue with whatever analysis we may have had in mind. One small caveat with boolean indexing. Yes, you can string multiple boolean conditions together, as you may recall doing in the lecture with conditionals. But... and and or DO NOT WORK. You have to use the arithmetic versions of the operators: &amp; (for and) and | (for or). End of explanation matrix = np.empty(shape = (8, 4)) for i in range(8): matrix[i] = i # Broadcasting is happening here! print(matrix) Explanation: Fancy Indexing "Fancy" indexing is a term coined by the NumPy community to refer to this little indexing trick. To explain is simple enough: fancy indexing allows you to index arrays with other [integer] arrays. Before you go down the Indexing Inception rabbit hole, just keep in mind: it's basically like slicing, but you're condensing the ability to perform multiple slicings all at one time, instead of one at a time. Now, to demonstrate: Let's build a 2D array that, for the sake of simplicity, has across each row the index of that row. End of explanation indices = np.array([7, 0, 5, 2]) # Here's my "indexing" array--note the order of the numbers. print(matrix[indices]) Explanation: We have 8 rows and 4 columns, where each row is a 4-element vector of the same value repeated across the columns, and that value is the index of the row. In addition to slicing and boolean indexing, we can also use other NumPy arrays to very selectively pick and choose what elements we want, and even the order in which we want them. Let's say I want rows 7, 0, 5, and 2. In that order. End of explanation matrix = np.arange(32).reshape((8, 4)) print(matrix) # This 8x4 matrix has integer elements that increment by 1 column-wise, then row-wise. indices = ( np.array([1, 7, 4]), np.array([3, 0, 1]) ) # This is a tuple of 2 NumPy arrays! print(matrix[indices]) Explanation: Ta-daaa! Pretty spiffy! Row 7 shows up first (we know that because of the straight 7s), followed by row 0, then row 5, then row 2. You could get the same thing if you did matrix[7], then matrix[0], then matrix[5], and finally matrix[2], and then stacked the results into that final matrix. But this just condenses all those steps. But wait, there's more! Rather than just specifying one dimension, you can provide tuples of NumPy arrays that very explicitly pick out certain elements (in a certain order) from another NumPy array. 
End of explanation ( np.array([1, 7, 4]), np.array([3, 0, 1]) ) Explanation: Ok, this will take a little explaining, bear with me: When you pass in tuples of NumPy arrays as indices, they act as $(x, y)$ coordinate pairs: the first NumPy array of the tuple is the list of $x$ coordinates, while the second NumPy array is the list of corresponding $y$ coordinates. In this way, the corresponding elements of the two NumPy arrays in the tuple give you the row and column indices to be selected from the original NumPy array. In our previous example, this was our tuple of indices: End of explanation
12,999
Given the following text description, write Python code to implement the functionality described below step by step Description: 1. Distributions Step1: Ex 2. LINEARITY OF THE NORMAL DISTRIBUTION. Repeat the exercise using random normal variables with mean 3 and standard deviation 0.4, using now 3 times each of 4, 20 and 200. The automatically added line on the qq-plot is estimated by taking the empirical mean and square-root empirical variance of the sample; these are the simplest estimators of the parameters of a normal sample. For a normal distribution, if X ~ N(0,1), then the variable Y = m + sX ~ N(m, s^2). Therefore, if you plotted your sample Y against standard normal quantiles, a line using the true mean m as intercept and the true standard deviation s as slope should represent the truth, and probably should fit the sample well. Add a line with intercept 3 and slope 0.4 (representing the n = Infty perfect sample). Are the empirical mean and variance good estimators of the population mean and variance? Step2: Ex 3. RECOGNIZE DISTRIBUTIONAL FEATURES ON THE QQ-PLOTS. The qq-plot provides a powerful check of distributional assumptions. Use the template below to see the following distributions Step3: Let's do a set of 9 similar to the above observations to demonstrate how small amounts of data can generate misleading conclusions -- Step4: 1.2. Estimating the parameters and an (incomplete) collection of frequent problems We will walk through some of the simplest complications that can affect real data, and see how to recognize them by the means of the quantile-quantile plot. Ex. 4. NUISANCES IN THE DATA (starter's guide for dataframes in pandas Step5: Replace the line by one defined using the quantile estimators (a line passing through the first and third quartiles). Step6: Some robust (regression) models, M-estimators Step7: The sample is, in fact, heteroscedastic, and the (estimated) standard errors are given in the column 'mag.het.error'. What can you do to check about the errors if you are in doubt about them? How can you check the normality of the sample? Step8: The above standardization should lead to a homoscedastic standard normal sample (by the linearity of the normal distribution). Its QQ-plot should be now close to a line with slope 1. Step9: 4.3. Any other unexpected effect Use now the 'mag5' column in place of 'mag.outlier'. How does the standardization work in this case? Step10: Use the 'time' column in the dataframe to plot the data. Step11: 2. Classical estimation and hypothesis testing The demo data set for this part is the Wesenheit index of the OGLE-III fundamental-mode and first overtone classical Cepheids. We'll try to estimate their period-luminosity relationship. The Wesenheit index is defined as W = I - 1.55(V - I), and its main advantage over using simply the I or V photometry is that it is insensitive to extinction. It is denoted by 'W' among the data columns. Other columns are 'name', the identifier of the star; 'RA0' (in decimal hours) and 'Decl0' (in decimal degrees), celestial coordinates; 'Mode', the mode of the Cepheid ('F' indicates fundamental-mode, '1' indicates first overtone star); 'Cloud', indicating which Magellanic Cloud the star belongs to; 'logP1', the base-10 logarithm of the period in days; 'VI', the colour V-I. Ex. 5. ORDINARY LEAST SQUARES REGRESSION (= GAUSSIAN MAXIMUM LIKELIHOOD WITH A MEAN DEPENDING ON A COVARIATE) 5.1 MODEL FIT There are fundamental-mode (FU) and first overtone (FO) Cepheids both from the SMC and the LMC.
Represent the fundamental and first overtone Cepheids' P-L relationship (W versus logP1) in two separate scatterplots, the LMC and SMC stars with different colours. What do you see? Fit a separate linear regression model to each of the distinct groups (to check the content of the resulting objects 'lmfit_lmc_fu' etc., see with dir(lmfit_lmc_fu) ). How would you decide whether the slopes are the same for stars of the same mode in the two Clouds? Step12: 5.2 MODEL DIAGNOSTICS Step13: There are a few possible explanations. The Magellanic Clouds are extended in the line of sight. It is possible that we see an effect of the slightly different distances of stars towards the foreground and of those towards background. The literature suggests that the P-L relationship can contain colour (V-I) terms, and can have dependence on metallicity. There are also suggestions of either a break in the P-L relationships (at log(P) = 1 for FU and at log(P) = 0.5 for FO) or the inclusion of a quadratic term. Unidentified effect or naturally non-normally distributed errors in period and the Wesenheit index. First we check the first point up there. Create a map of the stars on the sky (plot of RA0 and Decl0), coloured according to the sign of the residuals; if there is an effect of distance, then negative residuals and positive residuals will be differently grouped, hinting at the geometry of the Cloud. Do this separately for the four fits. Step14: 5.2 RESIDUALS AGAINST FITTED VALUES AND COVARIATE After concluding on this point, we can do some further checks on the distribution. Statisticians usually check whether the variance of the response (or the residuals) depends on the fitted value. For example, if our response variable should be considered to be a Poisson variable, then its variance would be equal to the mean, which is varying with the covariate(s). Thus, in such a case, plotting the residuals against the fitted values, we would see a band narrow at small fitted values, and widening with increasing fitted values. For a homoscedastic normal distribution, we would find a band of constant width. Other patterns can hint at other distributions. Plot the residuals versus the fitted value for each of the four fits. What do you think? Take into account the local number of the data Step15: Another useful plot (which is generally used) is the plot of residuals against covariates. We can see the intervals of lack of fit, the bias, the necessity of more terms or a nonparametric model. Create this plot. Do you see a strong indication of quadratic terms or breaks in the model? Step16: 5.3 MODEL COMPARISON Step17: Just to check on the literature, to see a frequent and sometimes unnoticed problem of linear models, and to use the model comparison techniques, we fit models with both logP1 and V - I. However, we should be careful. It cannot be excluded that the two explanatory variables directly depend on each other (actually, this can even be expected, since the Cepheids have a very constrained pulsation and stellar structure model). If such a relationship holds between two covariates in a linear model, then mathematically, the model can become ill-determined, and strongly unstable against small changes in the data. This is because in such a case, a change in the coefficient of one of these covariates can be compensated by a corresponding change in the coefficient of the other. So first use a scatterplot to see the logP1-VI relationship in the four Cepheid groups. What do you conclude?
Step18: This problem is called the collinearity problem. A solution is to orthogonalize the variables; we perform this by regressing VI on logP1, and extracting the residuals of this model. The residuals, by virtue of some statistical magic, are now uncorrelated with logP1, and can be used in a two-variate period-luminosity-color relationship without the risk of ending up with a singular model. After fitting, check up on the significance table of the model parameters. Step19: Next, add the new column 'resid_vi' as an additional variable to the models. Compare the different model comparison measures Step20: Finally, to see how collinearity affects the model results, we fit models also using the original V-I, correlated with log(P).
Python Code: fig = plt.figure(1,figsize=(8,8)) for ii in range(1, 10): rv = scipy.stats.norm.rvs(0, 1, size = 200) ax = fig.add_subplot(3,3,ii) sm.qqplot(rv, line = 's', ax = ax) ax.set_xlabel('') ax.set_ylabel('') fig.text(0.5, 0.02, 'Theoretical Quantiles', ha='center',size=16) fig.text(0.02, 0.5, 'Observed Quantiles', va='center', rotation='vertical',size=16) plt.show() Explanation: 1. Distributions: how to get to know better the distribution of the data, identify various issues, and check fits? 1.1. Probability distributions Ex 1. QQ-PLOT BASICS. First, simulate 9 times 200 standard random normal variables, and inspect the variations in the qq-plots. Take a look at the help of qqplots in the statmodels.api module of Python using sm.qqplot? if necessary. End of explanation x = np.arange(-3.,3.,0.01) y = 3. + 0.4*x nsample = [4,4,4,20,20,20,200,200,200] fig = plt.figure(1,figsize=(8,8)) for ii in range(0,9): rv = scipy.stats.norm.rvs(3., 0.4, size = nsample[ii]) ax = fig.add_subplot(3,3,ii+1) sm.qqplot(rv, line = 's', ax = ax) ax.plot(x, y, color='c') ax.set_xlabel('') ax.set_ylabel('') fig.text(0.5, 0.02, 'Theoretical Quantiles', ha='center',size=16) fig.text(0.02, 0.5, 'Observed Quantiles', va='center', rotation='vertical',size=16) plt.show() Explanation: Ex 2. LINEARITY OF THE NORMAL DISTRIBUTION. Repeat the exercise using random normal variables with mean 3 and standard deviation 0.4, using now 3 times each of 4,20 and 200. The automatically added line on the qq-plot is estimated by taking the empirical mean and square-root empirical variance of the sample; these are the simplest estimators of the parameters of a normal sample. For a normal distribution, if X ~ N(0,1), then the variable Y = m + sX ~ N(m, s^2). Therefore, if you plotted your sample Y against standard normal quantiles, a line using the true mean m as intersect and the true standard deviation s as slope should represent the truth, and probably should fit the sample well. Add a line with intersection 3 and slope 0.4 (representing the n = Infty perfect sample). Are the empirical mean and variance good estimators of the population mean and variance? End of explanation #uncomment for cauchy #x = np.linspace(scipy.stats.cauchy.ppf(0.005), scipy.stats.cauchy.ppf(0.995), 200) #uncomment for poisson x = np.arange(scipy.stats.poisson.ppf(0.005, mu=250), scipy.stats.poisson.ppf(0.995, mu=250), 1) #uncomment for cauchy #rv = scipy.stats.cauchy.rvs(size = 400) #uncomment for poisson rv = scipy.stats.poisson.rvs(mu=250,size=400) fig = plt.figure(3) ax1 = fig.add_subplot(121) plt.hist(rv, 20, normed=1, facecolor='y', alpha=0.75) #uncomment for cauchy #ax1.plot(x, scipy.stats.cauchy.pdf(x),'b-', lw=2) ax1.set_yscale('log') #uncomment for poission ax1.vlines(x, 0, scipy.stats.poisson.pmf(x, mu=250), colors='b', lw=5, alpha=0.2) ax2 = fig.add_subplot(122) sm.qqplot(rv, line = 's', ax = ax2) fig.tight_layout() plt.show() # To see the extremes of the random variates: #rv.min() #rv.max() Explanation: Ex 3. RECOGNIZE DISTRIBUTIONAL FEATURES ON THE QQ-PLOTS. The qq-plot provides a powerful check of distributional assumptions. Use the template below to see the following distributions: Cauchy (heavy-tailed), chi-squared (much used), beta(0.5,2) (restricted to the interval [0,1]; contains the uniform distribution as beta(1,1)) and two Poisson distributions, with mean 3 and 250. This time, we use the option fit = True in sm.qqplot, so that the sample is standardized by its mean and standard error before plotting. 
Replace the random distribution in the codes, both for the comparison with the normal distribution and for the qq-plot (check the parameters at http://docs.scipy.org/doc/scipy/reference/stats.html). Compare the tail behavior on the plot of the density and on the qq-plot. End of explanation nsample = [4,4,4,20,20,20,200,200,200] fig = plt.figure(1,figsize=(8,8)) for ii in range(0,9): rv = scipy.stats.cauchy.rvs(3., 0.4, size = nsample[ii]) ax = fig.add_subplot(3,3,ii+1) sm.qqplot(rv, line = 's', ax = ax) ax.set_xlabel('') ax.set_ylabel('') fig.text(0.5, 0.02, 'Theoretical Quantiles', ha='center',size=16) fig.text(0.02, 0.5, 'Observed Quantiles', va='center', rotation='vertical',size=16) plt.show() Explanation: Lets do a set of 9 similar to the above observations to demonstrate how small amounts of data can generate misleading conclusions -- End of explanation dfr = pd.read_csv("./data/IntroStat_demo.csv") fig = plt.figure(1) plt.subplot(121) n, bins, patches = plt.hist(dfr['mag.outlier'], 12, normed=1, facecolor='y', alpha=0.75) ax = fig.add_subplot(122) sm.qqplot(dfr['mag.outlier'], line = 's', ax = ax) plt.show() Explanation: 1.2. Estimating the parameters and an (incomplete) collection of frequent problems We will walk through some of the simplest complications that can affect real data, and see how to recognize them by the means of the quantile-quantile plot. Ex. 4. NUISANCES IN THE DATA (starter's guide for dataframes in pandas: http://pandas.pydata.org/pandas-docs/stable/dsintro.html, as well as http://www.scipy-lectures.org/packages/statistics/index.html) 4.1. Outliers End of explanation fig = plt.figure(1) plt.subplot(121) plt.hist(dfr['mag.outlier'], 12, normed=1, facecolor='y', alpha=0.75) ax = fig.add_subplot(122) sm.qqplot(dfr['mag.outlier'], line = 'q', ax = ax) plt.show() Explanation: Replace the line by one defined using the quantile estimators (a line passing through the first and third quartiles). End of explanation fig = plt.figure(1) plt.subplot(121) plt.hist(dfr['mag.het'], 12, normed=1, facecolor='y', alpha=0.75) ax = fig.add_subplot(122) sm.qqplot(dfr['mag.het'], line = 'q', ax = ax) fig.tight_layout() plt.show() Explanation: Some robust (regression) models, M-estimators: http://statsmodels.sourceforge.net/stable/rlm.html. 4.2. Heteroscedasticity Use now the 'mag.het' column in place of 'mag.outlier'. Does the quantile-based or the moment-estimated line work? End of explanation def std_fn(x, mean, std): res = (x-mean) / std return res Explanation: The sample is, in fact, heteroscedastic, and the (estimated) standard errors are given in the column 'mag.het.error'. What can you do to check about the errors if you are in doubt about them? How can you check the normality of the sample? End of explanation w = dfr['mag.het.error']**(-2) m = np.average(dfr['mag.het'], weights = w) std_het = std_fn(x = dfr['mag.het'], mean = m, std = dfr['mag.het.error']) fig = plt.figure(1) plt.subplot(121) plt.hist(std_het, 12, normed=1, facecolor='grey', alpha=0.75) ax = fig.add_subplot(122) # The option line = '45' means a line with intersection 0 and slope 1. sm.qqplot(std_het, line = '45', ax = ax) fig.tight_layout() plt.show() Explanation: The above standardization should lead to a homoscedastic standard normal sample (by the linearity of the normal distribution). Its QQ-plot should be now close to a line with slope 1. 
End of explanation w = dfr['mag5.error']**(-2) m = np.average(dfr['mag5'], weights = w) std5 = std_fn(x = dfr['mag5'], mean = m, std = dfr['mag5.error']) fig = plt.figure(1) plt.subplot(121) plt.hist(std5, 12, normed=1, facecolor='grey', alpha=0.75) ax = fig.add_subplot(122) sm.qqplot(std5, line = '45', ax = ax) fig.tight_layout() plt.show() Explanation: 4.3. Any other uexpected effect Use now the 'mag5' column in place of 'mag.outlier'. How does the standardization work in this case? End of explanation fig = plt.figure(1) plt.plot(dfr['time'], dfr['mag5'], 'ro') plt.xlabel('Time') plt.ylabel('Magnitude') # this is just to extend a bit the plotting area, so that no points fall exactly on the border: mn, mx = sorted(dfr['time'])[::len(dfr['time'])-1] plt.xlim(mn - 0.05*(mx-mn), mx + 0.05*(mx-mn)) plt.show() Explanation: Use the 'time' column in the dataframe to plot the data. End of explanation import statsmodels.formula.api as smf cep = pd.read_csv("./data/Cepheids.csv") cep[0:10] i_lmc = cep['Cloud'] == "LMC" i_fu = cep['Mode'] == "F" fig = plt.figure(4) ax1 = fig.add_subplot(211) plt.plot(cep[i_lmc & i_fu]['logP1'], cep[i_lmc & i_fu]['W'], 'b*', alpha=0.3, label = 'LMC') plt.plot(cep[-i_lmc & i_fu]['logP1'], cep[-i_lmc & i_fu]['W'], 'r*', alpha=0.3, label = 'SMC') plt.legend(loc = 'best', numpoints = 1) ax2 = fig.add_subplot(212) plt.plot(cep[i_lmc & -i_fu]['logP1'], cep[i_lmc & -i_fu]['W'], 'b*', alpha=0.3, label = 'LMC') plt.plot(cep[-i_lmc & -i_fu]['logP1'], cep[-i_lmc & -i_fu]['W'], 'r*', alpha=0.3, label = 'SMC') plt.legend(loc = 'best', numpoints = 1) plt.show() lmfit_lmc_fu = smf.ols(formula = 'W ~ logP1', data = cep, subset = i_lmc & i_fu).fit() lmfit_lmc_fo = smf.ols(formula = 'W ~ logP1', data = cep, subset = i_lmc & -i_fu).fit() lmfit_smc_fu = smf.ols(formula = 'W ~ logP1', data = cep, subset = -i_lmc & i_fu).fit() lmfit_smc_fo = smf.ols(formula = 'W ~ logP1', data = cep, subset = -i_lmc & -i_fu).fit() print lmfit_smc_fo.summary() cep['resid0'] = np.zeros(cep.shape[0]) cep['fitted0'] = np.zeros(cep.shape[0]) cep.loc[(i_lmc & i_fu),'resid0'] = lmfit_lmc_fu.resid cep.loc[(-i_lmc & i_fu),'resid0'] = lmfit_smc_fu.resid cep.loc[(i_lmc & -i_fu),'resid0'] = lmfit_lmc_fo.resid cep.loc[(-i_lmc & -i_fu),'resid0'] = lmfit_smc_fo.resid cep.loc[(i_lmc & i_fu),'fitted0'] = lmfit_lmc_fu.fittedvalues cep.loc[(-i_lmc & i_fu),'fitted0'] = lmfit_smc_fu.fittedvalues cep.loc[(i_lmc & -i_fu),'fitted0'] = lmfit_lmc_fo.fittedvalues cep.loc[(-i_lmc & -i_fu),'fitted0'] = lmfit_smc_fo.fittedvalues cep.iloc[0:10] logp_tmp = np.linspace(cep['logP1'].min(), cep['logP1'].max(), 500) fig = plt.figure(4) ax1 = fig.add_subplot(211) plt.plot(cep[i_lmc & i_fu]['logP1'], cep[i_lmc & i_fu]['W'], 'c*', alpha=0.2, label = 'LMC') plt.plot(cep[-i_lmc & i_fu]['logP1'], cep[-i_lmc & i_fu]['W'], 'r*', alpha=0.2, label = 'SMC') plt.plot(logp_tmp, lmfit_lmc_fu.params['Intercept'] + logp_tmp * lmfit_lmc_fu.params['logP1'], 'blue', lw = 1) plt.plot(logp_tmp, lmfit_smc_fu.params['Intercept'] + logp_tmp * lmfit_smc_fu.params['logP1'], 'brown', lw = 1) plt.legend(loc = 'best', numpoints = 1) ax2 = fig.add_subplot(212) plt.plot(cep[i_lmc & -i_fu]['logP1'], cep[i_lmc & -i_fu]['W'], 'c*', alpha=0.2, label = 'LMC') plt.plot(cep[-i_lmc & -i_fu]['logP1'], cep[-i_lmc & -i_fu]['W'], 'r*', alpha=0.2, label = 'SMC') plt.legend(loc = 'best', numpoints = 1) plt.plot(logp_tmp, lmfit_lmc_fo.params['Intercept'] + logp_tmp * lmfit_lmc_fo.params['logP1'], 'blue', lw = 1) plt.plot(logp_tmp, lmfit_smc_fo.params['Intercept'] + logp_tmp * 
lmfit_smc_fo.params['logP1'], 'brown', lw = 1) plt.show() Explanation: 2. Classical estimation and hypothesis testing The demo data set for this part is the Wesenheit index of the OGLE-III fundamental-mode and first overtone classical Cepheids. We'll try to estimate their period-luminosity relationship. The Wesenheit index is defined as W = I - 1.55(V - I), and its main advantage over using simply the I or V photometry is that it is insensitive to extinction. It is denoted by 'W' among the data columns. Other columns are 'name', the identifier of the star; 'RA0' (in decimal hours) and 'Decl0' (in decimal degrees), celestial coordinates; 'Mode', the mode of the Cepheid ('F' indicates fundamental-mode, '1' indicates first overtone star); 'Cloud', indicating which Magellanic Cloud the star belongs to; 'logP1', the base-10 logarithm of the period in days; 'VI', the colour V-I. Ex. 5. ORDINARY LEAST SQUARES REGRESSION (= GAUSSIAN MAXIMUM LIKELIHOOD WITH A MEAN DEPENDING ON A COVARIATE) 5.1 MODEL FIT There are fundamental-mode (FU) and first overtone (FO) Cepheids both from the SMC and the LMC. Represent the fundamental and first overtone Cepheids' P-L relationship (W versus logP1) in two separate scatterplots, the LMC and SMC stars with different colours. What do you see? Fit a separate linear regression model to each of the distinct groups (to check the content of the resulting objects 'lmfit_lmc_fu' etc., see with dir(lmfit_lmc_fu) ). How would you decide whether the slopes are the same for stars of the same mode in the two Clouds? End of explanation mn, mx = sorted(cep['resid0'])[::len(cep['resid0'])-1] fig = plt.figure(1) plt.subplots_adjust(left=0.07, bottom=0.08, right=0.95, top=0.95, wspace=None, hspace=0.35) ax1 = fig.add_subplot(221) sm.qqplot(cep[i_lmc & i_fu]['resid0'], line = 's', ax = ax1) plt.title("LMC FU") plt.ylim(mn - 0.05*(mx-mn), mx + 0.05*(mx-mn)) ax2 = fig.add_subplot(222) sm.qqplot(cep[-i_lmc & i_fu]['resid0'], line = 's', ax = ax2) plt.title("SMC FU") plt.ylim(mn - 0.05*(mx-mn), mx + 0.05*(mx-mn)) ax3 = fig.add_subplot(223) sm.qqplot(cep[i_lmc & -i_fu]['resid0'], line = 's', ax = ax3) plt.title("LMC FO") plt.ylim(mn - 0.05*(mx-mn), mx + 0.05*(mx-mn)) ax4 = fig.add_subplot(224) sm.qqplot(cep[-i_lmc & -i_fu]['resid0'], line = 's', ax = ax4) for ax in [ax1,ax2,ax3,ax4]: ax.set_xlabel('') ax.set_ylabel('') fig.text(0.5, 0.00, 'Theoretical Quantiles', ha='center',size=16) fig.text(0.00, 0.5, 'Observed Quantiles', va='center', rotation='vertical',size=16) plt.title("SMC FO") plt.ylim(mn - 0.05*(mx-mn), mx + 0.05*(mx-mn)) plt.show() Explanation: 5.2 MODEL DIAGNOSTICS: QQ-PLOT OF RESIDUALS Let's start with checking the distributional assumptions of the model: do the residuals admit a normal distribution? Take a look at the output of the four linear models ( lmobject.summary() ). Can you see indications there of non-normality? Make and inspect the QQ-plot of the residuals, separately for the four groups of Cepheids. What do you find? What can be the reason of what you observe? 
End of explanation i_posresid = (cep['resid0'] > 0) fig = plt.figure(1) fig.add_subplot(221) plt.plot(cep[-i_lmc & i_fu & i_posresid]['RA0'], cep[-i_lmc & i_fu & i_posresid]['Decl0'], 'bo', alpha=0.2) plt.plot(cep[-i_lmc & i_fu & -i_posresid]['RA0'], cep[-i_lmc & i_fu & -i_posresid]['Decl0'], 'yo', alpha=0.2) plt.title('SMC FU') fig.add_subplot(222) plt.plot(cep[i_lmc & i_fu & i_posresid]['RA0'], cep[i_lmc & i_fu & i_posresid]['Decl0'], 'bo', alpha=0.2) plt.plot(cep[i_lmc & i_fu & -i_posresid]['RA0'], cep[i_lmc & i_fu & -i_posresid]['Decl0'], 'yo', alpha=0.2) plt.title('LMC FU') fig.add_subplot(223) plt.plot(cep[-i_lmc & -i_fu & i_posresid]['RA0'], cep[-i_lmc & -i_fu & i_posresid]['Decl0'], 'bo', alpha=0.2) plt.plot(cep[-i_lmc & -i_fu & -i_posresid]['RA0'], cep[-i_lmc & -i_fu & -i_posresid]['Decl0'], 'yo', alpha=0.2) plt.title('SMC FU') fig.add_subplot(224) plt.plot(cep[i_lmc & -i_fu & i_posresid]['RA0'], cep[i_lmc & -i_fu & i_posresid]['Decl0'], 'bo', alpha=0.2) plt.plot(cep[i_lmc & -i_fu & -i_posresid]['RA0'], cep[i_lmc & -i_fu & -i_posresid]['Decl0'], 'yo', alpha=0.2) plt.title('LMC FU') fig.tight_layout() plt.show() Explanation: There are a few possible explanations. The Magellanic Clouds are extended in the line of sight. It is possible that we see an effect of the slightly different distances of stars towards the foreground and of those towards background. The literature suggests that the P-L relationship can contain colour (V-I) terms, and can have dependence on metallicity. There are also suggestions of either a break in the P-L relationships (at log(P) = 1 for FU and at at log(P) = 0.5 for FO) or the inclusion of a quadratic term. Unidentified effect or naturally non-normally distributed errors in period and the Wesenheit index. First we check the first point up there. Create a map of the stars on the sky (plot of RA0 and Decl0), coloured according to the sign of the residuals; if there is an effect of distance, then negative residuals and positive residuals will be differently grouped, and hinting at the geometry of the Cloud. Do this separately for the four fits. End of explanation fig = plt.figure(1) fig.add_subplot(221) plt.plot(cep[-i_lmc & i_fu]['fitted0'], cep[-i_lmc & i_fu]['resid0'], 'g*', alpha=0.2) plt.hlines(0, xmin = cep[-i_lmc & i_fu]['fitted0'].min(), xmax = cep[-i_lmc & i_fu]['fitted0'].max(), lw = 2) plt.title('SMC FU') fig.add_subplot(222) plt.plot(cep[i_lmc & i_fu]['fitted0'], cep[i_lmc & i_fu]['resid0'], 'g*', alpha=0.2) plt.hlines(0, xmin = cep[-i_lmc & i_fu]['fitted0'].min(), xmax = cep[-i_lmc & i_fu]['fitted0'].max(), lw = 2) plt.title('LMC FU') fig.add_subplot(223) plt.plot(cep[-i_lmc & -i_fu]['fitted0'], cep[-i_lmc & -i_fu]['resid0'], 'g*', alpha=0.2) plt.hlines(0, xmin = cep[-i_lmc & -i_fu]['fitted0'].min(), xmax = cep[-i_lmc & -i_fu]['fitted0'].max(), lw = 2) plt.title('SMC FO') fig.add_subplot(224) plt.plot(cep[i_lmc & -i_fu]['fitted0'], cep[i_lmc & -i_fu]['resid0'], 'g*', alpha=0.2) plt.hlines(0, xmin = cep[i_lmc & -i_fu]['fitted0'].min(), xmax = cep[i_lmc & -i_fu]['fitted0'].max(), lw = 2) plt.title('LMC FO') fig.tight_layout() plt.show() Explanation: 5.2 RESIDUALS AGAINST FITTED VALUES AND COVARIATE After concluding on this point, we can do some further checks on the distribution. Statisticians usually check whether the variance of the response (or the residuals) depends on the fitted value. 
For example, if our response variable should be considered to be a Poisson variable, then its variance would be equal to the mean, which is varying with the covariate(s). Thus, in such a case, plotting the residuals against the fitted values, we would see a band narrow at small fitted values, and widening with increasing fitted values. For a homoscedastic normal distribution, we would find a band of constant width. Other patterns can hint to other distributions. Plot the residuals versus the fitted value for each of the four fits. What do you think? Take into account the local number of the data: with more data within some fitted value bin, you see more of the extremes of the local distribution than with fewer data. End of explanation fig = plt.figure(1) fig.add_subplot(221) plt.plot(cep[-i_lmc & i_fu]['logP1'], cep[-i_lmc & i_fu]['resid0'], 'g*', alpha=0.2) plt.hlines(0, xmin = cep[-i_lmc & i_fu]['logP1'].min(), xmax = cep[-i_lmc & i_fu]['logP1'].max(), lw = 2) plt.title('SMC FU') fig.add_subplot(222) plt.plot(cep[i_lmc & i_fu]['logP1'], cep[i_lmc & i_fu]['resid0'], 'g*', alpha=0.2) plt.hlines(0, xmin = cep[-i_lmc & i_fu]['logP1'].min(), xmax = cep[-i_lmc & i_fu]['logP1'].max(), lw = 2) plt.title('LMC FU') fig.add_subplot(223) plt.plot(cep[-i_lmc & -i_fu]['logP1'], cep[-i_lmc & -i_fu]['resid0'], 'g*', alpha=0.2) plt.hlines(0, xmin = cep[-i_lmc & -i_fu]['logP1'].min(), xmax = cep[-i_lmc & -i_fu]['logP1'].max(), lw = 2) plt.title('SMC FO') fig.add_subplot(224) plt.plot(cep[i_lmc & -i_fu]['logP1'], cep[i_lmc & -i_fu]['resid0'], 'g*', alpha=0.2) plt.hlines(0, xmin = cep[i_lmc & -i_fu]['logP1'].min(), xmax = cep[i_lmc & -i_fu]['logP1'].max(), lw = 2) plt.title('LMC FO') fig.tight_layout() plt.show() Explanation: Another useful plot (which is generally used) is the plot of residuals against covariates. We can see the intervals of lack of fits, the bias, the necessity of more terms or a nonparametric model. Create this plot. Do you see a strong indication of quadratic terms or breaks in the model? End of explanation fig = plt.figure(1) fig.add_subplot(221) plt.plot(cep[-i_lmc & i_fu]['VI'], cep[-i_lmc & i_fu]['resid0'], 'g*', alpha=0.2) plt.hlines(0, xmin = cep[-i_lmc & i_fu]['VI'].min(), xmax = cep[-i_lmc & i_fu]['VI'].max(), lw = 2) plt.title('SMC FU') fig.add_subplot(222) plt.plot(cep[i_lmc & i_fu]['VI'], cep[i_lmc & i_fu]['resid0'], 'g*', alpha=0.2) plt.hlines(0, xmin = cep[-i_lmc & i_fu]['VI'].min(), xmax = cep[-i_lmc & i_fu]['VI'].max(), lw = 2) plt.title('LMC FU') fig.add_subplot(223) plt.plot(cep[-i_lmc & -i_fu]['VI'], cep[-i_lmc & -i_fu]['resid0'], 'g*', alpha=0.2) plt.hlines(0, xmin = cep[-i_lmc & -i_fu]['VI'].min(), xmax = cep[-i_lmc & -i_fu]['VI'].max(), lw = 2) plt.title('SMC FO') fig.add_subplot(224) plt.plot(cep[i_lmc & -i_fu]['VI'], cep[i_lmc & -i_fu]['resid0'], 'g*', alpha=0.2) plt.hlines(0, xmin = cep[i_lmc & -i_fu]['VI'].min(), xmax = cep[i_lmc & -i_fu]['VI'].max(), lw = 2) plt.title('LMC FO') fig.tight_layout() plt.show() Explanation: 5.3 MODEL COMPARISON: IS V-I NECESSARY TO INCLUDE? Several authors propose the inclusion of a linear V-I term to the P-L relationship in its form using magnitudes of the stars. As we use the Wesenheit index, this is equivalent to allow for a correction term to the used coefficient 1.55 (recall that W = I - 1.55(V - I)). Do we need this term? First visualize. 
End of explanation fig = plt.figure(1) fig.add_subplot(221) plt.plot(cep[-i_lmc & i_fu]['logP1'], cep[-i_lmc & i_fu]['VI'], 'g*', alpha=0.2) plt.title('SMC FU') fig.add_subplot(222) plt.plot(cep[i_lmc & i_fu]['logP1'], cep[i_lmc & i_fu]['VI'], 'g*', alpha=0.2) plt.title('LMC FU') fig.add_subplot(223) plt.plot(cep[-i_lmc & -i_fu]['logP1'], cep[-i_lmc & -i_fu]['VI'], 'g*', alpha=0.2) plt.title('SMC FO') fig.add_subplot(224) plt.plot(cep[i_lmc & -i_fu]['logP1'], cep[i_lmc & -i_fu]['VI'], 'g*', alpha=0.2) plt.title('LMC FO') fig.tight_layout() plt.show() Explanation: Just to check on the literature, to see a frequent and sometimes unnoticed problem of linear models, and to use the model comparison techniques, we fit models with both logP1 and V - I. However, we should be careful. It cannot be excluded that the two explanatory variables directly depend on each other (actually, this can even be expected, since the Cepheids have a very constrained pulsation and stellar structure model). If such a relationship holds between two covariates in a linear model, then mathematically, the model can become ill-determined, and strongly unstable against small changes in the data. This is because in such a case, a change in the coefficient of one of these covariates can be compensated by a corresponding change in the coefficient of the other. So first use a scatterplot to see the logP1-VI relationship in the four Cepheid groups. What do you conclude? End of explanation vi_lmfit_lmc_fu = smf.ols(formula = 'VI ~ logP1', data = cep, subset = i_lmc & i_fu).fit() vi_lmfit_lmc_fo = smf.ols(formula = 'VI ~ logP1', data = cep, subset = i_lmc & -i_fu).fit() vi_lmfit_smc_fu = smf.ols(formula = 'VI ~ logP1', data = cep, subset = -i_lmc & i_fu).fit() vi_lmfit_smc_fo = smf.ols(formula = 'VI ~ logP1', data = cep, subset = -i_lmc & -i_fu).fit() print vi_lmfit_smc_fo.summary() Explanation: This problem is called the collinearity problem. A solution is to orthogonalize the variables; we perform this by regressing VI on logP1, and extracting the residuals of this model. The residuals now, by virtue of some statistical magic, are now uncorrelated with logP1, and can be used in a two-variate period-luminosity-color relationship without the risk of ending up with a singular model. After fitting, check up on the significance table of the model parameters. End of explanation cep['resid_vi'] = np.zeros(cep.shape[0]) cep.loc[(i_lmc & i_fu),'resid_vi'] = vi_lmfit_lmc_fu.resid cep.loc[(-i_lmc & i_fu),'resid_vi'] = vi_lmfit_smc_fu.resid cep.loc[(i_lmc & -i_fu),'resid_vi'] = vi_lmfit_lmc_fo.resid cep.loc[(-i_lmc & -i_fu),'resid_vi'] = vi_lmfit_smc_fo.resid cep[0:10] lmfit_lmc_fu2 = smf.ols(formula = 'W ~ logP1 + resid_vi', data = cep, subset = i_lmc & i_fu).fit() lmfit_lmc_fo2 = smf.ols(formula = 'W ~ logP1 + resid_vi', data = cep, subset = i_lmc & -i_fu).fit() lmfit_smc_fu2 = smf.ols(formula = 'W ~ logP1 + resid_vi', data = cep, subset = -i_lmc & i_fu).fit() lmfit_smc_fo2 = smf.ols(formula = 'W ~ logP1 + resid_vi', data = cep, subset = -i_lmc & -i_fu).fit() print lmfit_lmc_fu2.summary() print lmfit_lmc_fo.bic print lmfit_lmc_fo2.bic Explanation: Next, add the new column 'resid_vi' as an additional variable to the models. Compare the different model comparison measures: the likelihood, the AIC and the BIC to those of models without resid_vi. As well, repeat the former plot, now superposing the new residuals in a new (transparent) colour. What do you see? Would you accept the necessity of including such a term into your models? 
Consider different aspects of the problem: the improvement in the model as summarized by the goodness-of-fit measures, behaviour of the residuals, behaviour of the outliers, size of the effect, errors on the coefficients in the two models. End of explanation lmfit_lmc_fu1 = smf.ols(formula = 'W ~ logP1 + VI', data = cep, subset = i_lmc & i_fu).fit() lmfit_lmc_fo1 = smf.ols(formula = 'W ~ logP1 + VI', data = cep, subset = i_lmc & -i_fu).fit() lmfit_smc_fu1 = smf.ols(formula = 'W ~ logP1 + VI', data = cep, subset = -i_lmc & i_fu).fit() lmfit_smc_fo1 = smf.ols(formula = 'W ~ logP1 + VI', data = cep, subset = -i_lmc & -i_fu).fit() print lmfit_lmc_fu1.summary() print lmfit_lmc_fu2.summary() Explanation: Finally, to see how collinearity affects the model results, we fit models also using the original V-I, correlated with log(P). End of explanation
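One natural way to address the earlier question of whether the slopes are the same for stars of the same mode in the two Clouds is a nested-model comparison. The sketch below only illustrates the idea, reusing the cep dataframe and the i_fu mask defined above; the pooled_fu and restricted_fu names are made up for this example.
# Pooled fit with a Cloud-dependent slope: the logP1:Cloud interaction term carries the slope difference.
pooled_fu = smf.ols(formula = 'W ~ logP1 * Cloud', data = cep, subset = i_fu).fit()
print(pooled_fu.summary())   # check the interaction row (labelled along the lines of logP1:Cloud[T.SMC])
# The equivalent nested-model F-test: common slope (restricted) versus Cloud-dependent slope (pooled).
from statsmodels.stats.anova import anova_lm
restricted_fu = smf.ols(formula = 'W ~ logP1 + Cloud', data = cep, subset = i_fu).fit()
print(anova_lm(restricted_fu, pooled_fu))
The same comparison can be repeated for the first overtone stars with -i_fu, and the AIC/BIC attributes used above give a complementary, likelihood-based view of the same question.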